
Alessandro Barbieri and Pete Lumbis


 

>>Okay, we're back. I'm John Furrier with theCUBE. We're going to go deeper into a deep dive on the unified cloud networking solution from Pluribus and NVIDIA, and we'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lumbis, director of technical marketing at NVIDIA. Joining us remotely, guys, thanks for coming on. Appreciate it. >>Thank you. >>So, deep dive. Let's get into the what and the how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working on together. What is it? >>Yeah. First, let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus has been shipping, in volume, in multiple mission-critical networks, the Netvisor ONE network operating system. It runs today on merchant silicon switches, and effectively it is a standards-based, open network operating system for the data centre. The novelty of this operating system is that it integrates a distributed control plane that automates, in effect, an SDN overlay. This automation is completely open, interoperable and extensible to other types of clouds; nothing is closed. And this is actually what we are now porting to the NVIDIA DPU. >>Awesome. So how does it integrate into NVIDIA hardware? Specifically, how is Pluribus integrating its software with NVIDIA hardware? >>Yeah, we leverage some of the interesting properties of the BlueField DPU hardware, which allow us to integrate our network operating system in a manner that is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, we can also independently manage this network node.
This switch-on-a-NIC is effectively managed completely independently from the host: you don't have to go through the network operating system running on x86 to control this network node. So you truly have the experience, effectively, of a top-of-rack switch for virtual machines, or a top-of-rack switch for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, you are now connecting a VM's virtual interface to a virtual interface on the switch-on-a-NIC. Also, as part of this integration, we put a lot of effort, a lot of emphasis, into accelerating the entire data plane for networking and security. We are taking advantage of the NVIDIA DOCA API to programme the accelerators, and you accomplish two things with that. Number one, you have much greater performance than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20-25% of the server capacity, which can be devoted to additional workloads to run your cloud applications; or, if you want to run the same number of compute workloads, you can actually shrink the power footprint and compute footprint of your data centre by 20%. So, great efficiencies in the overall approach. >>And this is completely independent of the server CPU, right? >>Absolutely. There is zero code from Pluribus running on the x86, and this is why we think this enables a very clean demarcation between compute and network. >>So, Pete, I've got to get you in here. We heard that the DPUs enable a cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everybody's talking DevSecOps right now; you've got NetOps, NetSecOps. This separation: why is it important?
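The capacity arithmetic Alessandro quotes above can be sketched as follows. The 20-25% overhead figure is the one mentioned in the conversation, not a measurement, and the fleet sizes are hypothetical.

```python
# Rough model of the server-capacity reclaim described above: if software
# networking and security consume roughly 20-25% of each host's CPU,
# offloading them to the DPU either frees that capacity for workloads or
# lets the same workloads run on a smaller fleet.

def reclaimed_cores(cores_per_server: int, overhead_fraction: float) -> float:
    """Cores returned to workloads when networking moves off the host CPU."""
    return cores_per_server * overhead_fraction

def equivalent_fleet_reduction(num_servers: int, overhead_fraction: float) -> float:
    """Servers that could be removed while keeping the same workload capacity."""
    return num_servers * overhead_fraction

# Example: 100 servers with 64 cores each and 20% networking overhead.
freed_per_server = reclaimed_cores(64, 0.20)               # ~12.8 cores per server
removable_servers = equivalent_fleet_reduction(100, 0.20)  # ~20 servers
```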
>>Yeah, I think it's a pragmatic solution, in my opinion. You know, we wish the world was all rainbows and unicorns, but it's a little messier than that. And I think with a lot of the DevOps mentality and philosophy there's a natural fit, right? You have applications running on servers, so you're talking about the developers of those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And, you know, I think we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and I think that distance isn't going to be closed. So again, it comes down to pragmatism, and one of my favourite phrases is: good fences make good neighbours. And that's what this is. >>Yeah, it's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So with infrastructure as code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >>Yeah, exactly. And I think one part of that is the policy, the security, the zero-trust aspect of this, right? If you get it wrong on that network side, all of a sudden you can totally open up those capabilities, so security is part of that. But the other part is thinking about this at scale, right? We're taking one top-of-rack switch and adding up to 48 servers per rack, and so the ability to automate, orchestrate and manage at scale becomes absolutely critical.
>>Alessandro, this is really the "why" we're talking about here, and it's scale and, again, getting it right. If you don't get it right, you're really going to be in trouble. So this is a huge deal. Networking matters, security matters, automation matters, DevOps and NetOps all coming together with a clean separation. Help us understand how this joint solution with NVIDIA fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >>Yeah, absolutely. So with this solution we are tackling two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one. What are we really unifying? If you are unifying something, something must be at least fragmented or disjointed, and what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, your leaf-spine topologies. This is actually a well-understood problem, I would say. There are multiple vendors with similar technologies, very well standardised, very well understood, and building an IP fabric these days is almost a commodity. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services have now moved into the compute layer, where cloud builders have to instrument a separate network virtualisation layer and deploy segmentation and security closer to the workloads. And this is where the complication arises. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other and are very dependent on the kind of hypervisor or compute solution you choose.
For example, the networking APIs of an ESXi environment, a Hyper-V environment or a Xen environment are completely disjointed. You have multiple orchestration layers, and then, when you also throw Kubernetes into this type of architecture, you are introducing yet another level of networking. And when Kubernetes runs on top of the VMs, which is the prevalent approach, you are effectively just stacking multiple networks on the compute layer, which eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed. We are attacking this problem first with the notion of a unified fabric, which is independent of any workload: this fabric spans from the switch, which can be connected to a bare-metal workload, all the way inside the DPU, where you have your multi-hypervisor compute environment. It is one API, one common network control plane, and one common set of segmentation services for the network. That is problem number one. >>You know, it's interesting hearing you talk: I hear one network, different operating models. It reminds me of the old serverless days. You know, there are still servers, but they called it serverless. Is there going to be a term "networkless"? Because at the end of the day it should be one network, not multiple operating models. This is the problem you guys are working on, is that right? I mean, I'm just joking, serverless and networkless, but the idea is it should be one thing. >>Yeah. Effectively, what we are trying to do is recompose this fragmentation of network operations across physical networking and server networking. Server networking is where the majority of the problems are, because you have standardised the ways of building physical networks and cloud fabrics with IP protocols and Ethernet.
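The "one API, one control plane" idea can be sketched as follows. The class and method names here are illustrative assumptions, not Pluribus's actual API; the point is that one segmentation rule set is rendered identically whether the enforcement point is a top-of-rack switch port (bare metal) or a DPU virtual interface (VM or pod).

```python
# Hypothetical sketch of a unified segmentation control plane: a single
# policy object, pushed to whichever enforcement point hosts the workload.

from dataclasses import dataclass

@dataclass(frozen=True)
class SegmentRule:
    src_segment: str
    dst_segment: str
    port: int
    action: str  # "allow" or "deny"

class UnifiedFabric:
    def __init__(self):
        self.rules = []      # one shared rule set for the whole fabric
        self.endpoints = {}  # workload -> enforcement point

    def attach(self, workload: str, enforcement_point: str) -> None:
        # "switch:rack1-tor" for bare metal, "dpu:host17" for a VM or pod
        self.endpoints[workload] = enforcement_point

    def add_rule(self, rule: SegmentRule) -> None:
        # Declared once; the fabric distributes it to every enforcement point.
        self.rules.append(rule)

    def render(self, workload: str) -> list:
        # Same rules regardless of hypervisor, Kubernetes, or bare metal.
        return list(self.rules)

fabric = UnifiedFabric()
fabric.attach("db-vm", "dpu:host17")
fabric.attach("legacy-app", "switch:rack1-tor")
fabric.add_rule(SegmentRule("web", "db", 5432, "allow"))
```

The design choice being illustrated: the policy lives in one control plane, and heterogeneity is confined to the attachment step.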
But you don't have that kind of operational efficiency at the server layer, and this is what we are trying to attack first with this technology. The second aspect we are attacking is how to distribute security services throughout the infrastructure more efficiently, whether that is micro-segmentation, stateful firewall services or even encryption. Those are all capabilities enabled by the BlueField DPU technology, and we can integrate those capabilities directly into the network fabric, dramatically limiting, at least for east-west traffic, the sprawl of security appliances, virtual or physical, which is typically how people segment and secure traffic in the cloud today. >>All kidding aside about networkless (serverless was kind of a fun play on words), the network is one thing; it's basically distributed computing, right? So I'd love to get your thoughts on this distributed security, with zero trust as the driver for the architecture you guys are doing. Can you share in more detail why the DPU-based approach is better than the alternatives? >>Yeah. I think what's beautiful, and what the DPU brings that's new to this model, is a completely isolated compute environment inside. So, you know, it's the "yo dawg" joke: I heard you like servers, so I put a server inside your server. We provide Arm CPUs, memory and network accelerators inside, and that is completely isolated from the host. The server, the actual x86 host, just thinks it has a regular NIC in there, but you actually have this full control plane. It's just like taking your top-of-rack switch and shoving it inside of your compute node. So you have not only the separation within the data plane, but complete control plane separation, and so you have this element that the network team can now control and manage.
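A minimal sketch of the stateful behaviour a distributed firewall like the one described above needs (illustrative only, not any vendor's implementation): each enforcement point admits the first packet of a policy-permitted flow, records it in a connection table, and then allows the return traffic without needing a separate reverse rule.

```python
# Toy connection-tracking firewall: policy permits new flows, conntrack
# permits established and reply traffic.

class StatefulFirewall:
    def __init__(self, allowed):
        self.allowed = allowed      # set of (src_segment, dst_segment, dst_port)
        self.conntrack = set()      # established (src, dst, dport, sport) flows

    def check(self, src, dst, dport, sport) -> bool:
        flow = (src, dst, dport, sport)
        reverse = (dst, src, sport, dport)
        if flow in self.conntrack or reverse in self.conntrack:
            return True             # established flow or its reply
        if (src, dst, dport) in self.allowed:
            self.conntrack.add(flow)  # new, policy-permitted flow
            return True
        return False                # everything else is dropped
```

Usage: with a single rule permitting web-to-db on port 5432, the reply traffic from db back to web is allowed because the flow is tracked, while an unsolicited db-to-web connection is dropped.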
But we're taking all of the functions we used to do at the top-of-rack switch and distributing them now. And, you know, as time has gone on, we've struggled to put more and more into that network edge. The reality is that the network edge is the compute layer, not the top-of-rack switch layer, and so that provides this phenomenal enforcement point for security and policy. And I think, outside of today's solutions around virtual firewalls, the other option is centralised appliances, and even if you can get one that can scale large enough, the question is: can you afford it? So what we end up doing is we kind of hope that VLANs are good enough, or we hope that a VXLAN tunnel is good enough, because we can't physically, financially afford an appliance that sees all of the traffic. Now that we have a distributed model with this accelerator, we can do it, and we can actually apply more advanced techniques there. >>So what's in it for the customer, real quick? I think this is an interesting point. You mentioned policy; everyone in networking knows policy is a great thing, and you hear it being talked about up the stack as well, when you start orchestrating microservices, containers and modern applications. What's the benefit to the customers of this approach? Because what I heard was more scale, more edge deployment, and flexibility relative to security policies and application enablement. What does the customer get out of this architecture? What's the enablement? >>It comes down to taking, again, the capabilities that were in that top-of-rack switch and distributing them down. That brings simplicity: smaller blast radii for failures, smaller failure domains; maintenance on the networks and the systems becomes easier; your ability to integrate across workloads becomes infinitely easier.
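Pete's affordability argument above reduces to simple arithmetic: aggregate east-west traffic grows with server count, while a centralised appliance is a fixed choke point. The figures below (NIC speed, utilisation, appliance capacity) are illustrative assumptions, not vendor numbers.

```python
# Why a centralised appliance can't see all east-west traffic, but a
# per-server DPU can: distributed capacity scales with the fleet.

def east_west_gbps(servers: int, nic_gbps: int, utilisation: float) -> float:
    """Aggregate east-west demand if each server drives its NIC at `utilisation`."""
    return servers * nic_gbps * utilisation

# Assumed pod: 20 racks x 48 servers, 25G NICs, 30% average utilisation.
demand = east_west_gbps(20 * 48, 25, 0.30)    # ~7200 Gbps of traffic
appliance_capacity = 800                      # one large centralised firewall (assumed)
distributed_capacity = 20 * 48 * 25           # every DPU inspects its own NIC at line rate

assert demand > appliance_capacity            # one box cannot see it all
assert distributed_capacity >= demand         # the distributed model keeps up
```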
And again, you know, we always want to separate each one of those layers. So, just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together; I can now do this at a different layer, and so you can run a DPU with any networking in the core. You get this extreme flexibility: you can start small, you can scale large. To me, the possibilities are endless. >>It's great: security, control plane, and really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever is happening in the network. Alessandro, this is a huge upside, right? You've already identified some successes with some customers in your early field trials. What are they doing, and why are they attracted to the solution? >>Yeah, I think the response from customers has been the most encouraging and exciting thing for us as we continue to work on and develop this product, and we have actually learned a lot in the process. We talked to two or three cloud providers; we talked to SPs, sort of telco-type networks, as well as large enterprise customers. Let me call out a couple of examples here, just to give you a flavour. There is a service provider, a cloud provider, in Asia who is managing a cloud where they offer services based on multiple hypervisors: their native services are based on Xen, but they also run cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu.
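The leaf/spine decoupling Pete mentions above rests on encapsulation: the overlay carries its own segment ID (the VXLAN VNI), so the physical core only forwards ordinary IP/UDP packets and never needs to know about tenants. A minimal VXLAN header per RFC 7348:

```python
# VXLAN header (RFC 7348): 8 bytes carried inside a UDP payload.
# Byte 0: flags (0x08 = VNI-valid), bytes 1-3 reserved,
# bytes 4-6: 24-bit VNI, byte 7 reserved.

import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned VXLAN destination port
VXLAN_FLAG_VNI_VALID = 0x08    # the "I" flag: the VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given virtual network identifier."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Pack VNI into the top 24 bits of the final 32-bit word.
    return struct.pack("!BBHI", VXLAN_FLAG_VNI_VALID, 0, 0, vni << 8)

def vni_of(header: bytes) -> int:
    """Recover the VNI from a VXLAN header."""
    _flags, _r1, _r2, tail = struct.unpack("!BBHI", header)
    return tail >> 8

hdr = vxlan_header(vni=10042)
```

Because segmentation lives in this header rather than in the switches, the leaf/spine core can be swapped or scaled independently of the overlay, which is the decoupling being described.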
They now have the problem of orchestrating, or integrating with, XenCenter, vSphere and OpenStack to coordinate these multiple environments, and, in the process of providing security, they deploy virtual appliances everywhere, which brings a lot of cost and complication for the service provider. The promise they saw in this technology, which they actually called game-changing, is to remove all this complexity, have a single network, and distribute the micro-segmentation service directly into the fabric. Overall, they are hoping to get a tremendous OpEx benefit out of it and overall operational simplification of the cloud infrastructure. That's one important use case. Another large enterprise customer, a global enterprise customer, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge driver; security looks like a recurring theme in talking to most of these customers. And in the telco space, we're working with a few telco customers in the EFT programme, where the main goal is to harmonise network operations. They typically handle all the VNFs with their own home-grown DPDK stack, which is overly complex and, frankly, also slow and inefficient, and then they have a physical network to manage on top of it. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability offered by the BlueField DPUs. Those are just some examples. >>Those are great use cases, and there's a lot more potential. I see that with unified cloud networking. Great stuff. Shout-out to you guys at NVIDIA; I've been following your success for a long time, and you keep innovating as cloud scales. And Pluribus here with unified networking.
They're kind of bringing it to the next level. Great stuff. Great to have you guys on, and again, software keeps driving the innovation. Networking is just part of it, and it's the key solution. So I've got to ask both of you, to wrap this up: how can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem; they're trying to think about multiple clouds, they're trying to think about unification around the network and about giving more security and more flexibility to their teams. How can people learn more? >>So, Alessandro and I have a talk at the upcoming NVIDIA GTC conference, the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc, and you can also watch the recorded sessions if you end up seeing this on YouTube a little after the fact. We're going to dive a little bit more into the specifics and the details of what we're providing in the solution. >>And Alessandro, how can people learn more? >>Yeah, people can go to the Pluribus website, www.pluribusnetworks.com/eft, and fill out the form, and Pluribus will contact them to learn more and actually to sign up for the early field trial programme. >>Okay, well, we'll leave it there. Thank you both for joining; appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching.

Published Date : Mar 4 2022


Kishore Durg, Accenture | Accenture Executive Summit at AWS re:Invent 2019


 

>>Live from Las Vegas, it's theCUBE, covering the AWS Executive Summit, brought to you by Accenture. >>Welcome, everyone, to theCUBE's live coverage of the Accenture Executive Summit here at AWS re:Invent. I'm your host, Rebecca Knight, and we're kicking off two days of wall-to-wall coverage here at the Accenture Executive Summit. Joining me today is Kishore Durg. He is the global lead for growth and strategy for Cloud at Accenture. Thank you so much for coming on theCUBE. >>Thank you. Very nice to be here. I'm absolutely excited to be here, and I'd love to talk to you about our new platform. >>Exactly. So the thing about cloud, and this is really the topic of the day, is that it presents this opportunity to drive innovation, power business agility, reduce costs and streamline operations. But with that tremendous opportunity comes this real overabundance of choice. Before we start talking about your new platform, talk about how you think companies ought to start thinking through the multiple decisions they have to make when trying to decide on the right cloud solution. >>You know, we actually talked to a lot of the clients we work with, and when we looked at cloud adoption among enterprises, only 20% have actually adopted cloud; 80% of enterprises are still looking at how to leverage it. And when we talked to our own clients, we tried to figure out what it is that is challenging them about getting to the cloud. We also had data points showing that two-thirds of them are not seeing the full value they expected from the cloud. So these challenges were in front of us, and we really wanted to help our clients. If you look at the complexity that is there today in terms of choices, there are multiple options: do I go public, private, hybrid? Our clients are challenged, paralysed, by all these choices. And how do I build my enterprise?
You know, earlier it was all about just infrastructure; then the enterprise applications went to the cloud; now they want to run their business in the cloud. If you're betting your business on the cloud, you really need to be sure; it's not just a business line deciding, "I want to be in the cloud for this application." When you have that strategic choice, you really need good advice, and they're looking at us, saying, "Hey, Kishore, help me decide. Help me figure out the business case, help me plan. I need to see what the options are and what the right choice is for me." That's the ask from our clients, and that's where we're willing to help. That's the context, and the genesis, of why we thought about a platform like myNav: it is about navigating this complexity. Life was simple earlier; now it's a little bit complex, and we're helping you navigate that complexity. >>So you've painted this picture of companies: as you said, only 20% have adopted the cloud, many have yet to see value from it, and they are paralysed by choice. So you've created myNav. Tell us more about myNav. >>So our clients are all about "I want to get this right the first time." There's a reason why only 20% are there: they've tried it multiple times and had some challenges. A lot of our clients want to get their data aspect, application aspect and strategy right for the cloud. They want the right solution, and they've been challenged to find it: what is it that is going to be in the cloud, and what does the architecture look like? And they've not been able to visualise it unless they put hundreds of people on the ground to actually make it work. And there are performance challenges. Let me just step back a bit: you had your application running for 10 years, and suddenly you take it to the cloud, and it doesn't perform the same way in the cloud as it was performing in the data centre.
So these challenges need to be simulated for our clients. Instead of putting hundreds of people on the ground for six to twelve weeks, why can't I do it in a day? Figuring out how to simulate this is the power of myNav: we're able to figure out the right architecture and the right solution, and simulate it for our clients to visualise. You know, think of it like you have a new home, and you want to figure out what that new home looks like. Does the kitchen look different? You want to visualise it. Would you go to a new home without a plan? Would you go to a new home without an architecture? And what if I can give you a 3D simulation of what that whole plan looks like? myNav does that for you. myNav helps you navigate through that architecture, recommends the right solution, and then you can visualise: oh, this is the right thing for me. Obviously, you have a lot of constraints: you've got to get your kitchen right, your bedroom right. How do you bundle things? Very similarly, how do I bundle applications, and how do they look there? That's exactly what myNav does. >>I'm thinking about it in terms of the way they train pilots in a simulator. So tell us, how does it work? >>Let me give you the gamut of things that we do. A lot of clients ask me, "Hey, for the 80 percent who are not in the cloud, what is the business case?" So I have to give them a view of it. It all comes down to: what are the financials of it? What is the five-year run rate? How much am I going to save in year one, year two, year three? How is it going to back my bottom line? That's the first part. Then it comes down to: where do I go with it? What are the choices I have? Then it comes down to: I'm taking my, say, enterprise application to the cloud; what does the architecture look like today, and what does the architecture look like in the cloud?
And what does the architecture look like two years down the line? That includes, say, an increased customer base: I will have a lot more users added to my enterprise application, and I need to see what that architecture looks like. At one click of a button, myNav gives that to you. And a lot of my clients ask me how long it is going to take. It's a very simple question, but then you've got to figure out how you bundle applications and how you sequence the migration plan. You'll have some holidays where you don't want to do anything; you don't want to stop the business while you're doing your cloud migration. So we actually give you a migration plan, and coming out of it is what we call the bill of materials: essentially, this is exactly what you need for you to be in the cloud. That plan is what myNav gives you. After that, you execute, and we have the ability to manage it through our management platforms. So myNav helps you in four phases, which are discover, assess, architect and simulate, and then you actually do the migration, and then you do the manage part. The discover, assess, architect and simulate phases, which are what I've been talking to you about today, are what myNav does: it helps you discover the infrastructure aspect, the application aspect and the data aspect; it assesses, based on your needs, what you need; then it architects it for you, and it also simulates it for you. We have not had a platform that helps you simulate things in the cloud in client conversations before, and this is something that clients value. I have a lot of clients across Germany, Japan, Spain, all over the world, reaching out saying, "Hey, I really need help. This is exactly what I was looking for," and that's how these client conversations are going for us. And they're like, "I need this to be part of your core aspect of how you deliver these things." So that's how we do workshops with our clients.
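The year-by-year financial view described above (a five-year run rate, savings in year one, two, three, set against an up-front migration cost) can be sketched as follows; all of the dollar figures are placeholder assumptions, not Accenture benchmarks.

```python
# Cumulative net position of a cloud migration: savings accrue each year,
# the migration cost is paid up front, and break-even is where the
# cumulative position turns positive.

def cumulative_position(on_prem_annual: float, cloud_annual: float,
                        migration_cost: float, years: int) -> list:
    """Net cash position at the end of each year, in the same currency units."""
    positions = []
    total = -migration_cost                  # migration paid in year zero
    for _ in range(years):
        total += on_prem_annual - cloud_annual
        positions.append(round(total, 2))
    return positions

# Assumed example: $10M/yr on-prem vs $7M/yr in cloud, $5M migration, 5 years.
plan = cumulative_position(10.0, 7.0, 5.0, 5)   # [-2.0, 1.0, 4.0, 7.0, 10.0]
# Break-even lands in year two under these assumptions.
```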
We can work with them and say, "This is how we do this," and once they get comfortable... The 80% of the people are waiting for some comfort level, and this gives them that comfort: yes, I know what I'm doing, these guys know what they're doing, and I feel I can go and run my business there. >>So, I mean, as you said, so many of them are paralysed because they want to get it right the first time. So is myNav really giving them the comfort level to make these decisions? Or is it really about understanding what they need and then how to think forward in terms of creating that plan? >>At Accenture, we have done 30,000 projects in the cloud. We know what is right. So, based on the repository of projects that we have, we know which architectures work, and we have an artificial intelligence engine which actually assesses these architectures and then recommends what is right for a client. So essentially the clients have a stronger affinity to what works: when we recommend to them, we're saying, "Hey, this is something that worked at client A, and this is what works at client B." We are reusing a repository of reliable, credible architectures that supports the current client's needs, drawn from the repository of existing, what we call working, architectures that are out there, and essentially this is the ability to learn. Obviously we will work with the client; things can still change, and then we make sure that the right thing goes into the repository, so that the next time we come back and recommend, to the 30,001st client, we know what works. That's exactly the power of it: the ability to learn, the ability to understand and the ability to recommend. I'm keeping it simple for our clients to understand, so that they don't get swamped by the complexity of the cloud; you just have to navigate it.
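A toy sketch of the recommend-from-repository idea just described (not Accenture's actual engine): score each previously working blueprint against a new client's stated needs and return the closest match, so the 30,001st client benefits from the prior projects. The blueprint fields and names are hypothetical.

```python
# Nearest-match recommendation over a repository of "working architectures":
# the blueprint sharing the most attributes with the client's needs wins.

def score(blueprint: dict, needs: dict) -> int:
    """Number of client requirements this past blueprint satisfies."""
    return sum(1 for key, value in needs.items() if blueprint.get(key) == value)

def recommend(repository: list, needs: dict) -> dict:
    """Return the past blueprint that best matches the client's needs."""
    return max(repository, key=lambda blueprint: score(blueprint, needs))

# Hypothetical repository entries distilled from prior projects.
repository = [
    {"name": "airline-erp", "industry": "airline", "workload": "erp", "ha": True},
    {"name": "utility-batch", "industry": "utilities", "workload": "batch", "ha": False},
]
best = recommend(repository, {"industry": "utilities", "workload": "batch"})
```

A real engine would of course weight attributes, learn from outcomes, and update the repository after each engagement, which is the feedback loop described in the conversation.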
>>I mean, it almost is a best-practices machine, in the sense that it really understands, industry to industry, company to company, the right kinds of architecture. >>So, for example, in the business case, we have a repository of costs for all the different industries. The benchmarking cost for the airline industry is very different from the benchmarking cost for utilities. So when I prepare a business case, I'm looking through the repository of industry data that we have from working with our clients, and based on that industry data we actually build the business case. So it's not a business case just built off a data center, because the cost of employees, the capital costs, the operating costs are very different for different industries. So you need to consider an industry angle in terms of how you estimate the business case. Coming out of that, we have the ability to estimate quickly; a lot of clients don't have eight or ten weeks to decide, the board is asking them, hey, what are you gonna do? So we have the ability to build a business case for the strategy deals, and we're able to very quickly revert back, because we have a lot of repositories of data with us that help in that conversation. >>So when time is of the essence, this is what matters. I read an article that you wrote recently for Accenture, I believe it was an Accenture blog, where you talked about the hype around cloud, and how companies were so eager to get on board with cloud because they wanted greater efficiency, they wanted to be able to innovate more quickly, and yet it wasn't happening right away. I'm wondering, where is the mindset right now? Are companies understanding now that it is going to take time to capture the benefits? Or how would you describe the client mindset? >>So I would say there are two different generations of clients: clients who are already there, and clients who are getting there. 
For the clients who are already there, we're looking at transformation elements: I want to do my AI, I want my data analytics in the cloud, so we're helping them. These are second-generation elements of cloud; it's not just about moving your application. We're talking to them about, you know, how do you run your business in terms of the recommendation engines that you have in the cloud? So what do you need for the acquisition and cleansing of data elements, essentially taking your data to the cloud? Then there are first-generation clients who are mostly around data center aspects. You know, they want to get rid of the data centers, they want to go into the cloud. So myNav helps both of them. myNav helps clients who are essentially navigating the cloud for the first time; it gives them more confidence, and they get the help of our collective knowledge of what works. And for the clients who are already there in the cloud, we're helping them with transformative aspects in terms of future systems: what do your future systems look like, and cloud is an enabler for it, whether it's the boundaryless, adaptable or radically human element of what a new application or a new business would look like. You need to have cloud as a foundational element for those. Those clients are in the 20%, and we're telling them, hey, you're already through the foundational aspect of cloud. Now you need to build boundaryless applications, now you need to build adaptable applications, now you need to build radically human applications. So how do you build radically human applications? You gotta have AI, and when you have AI, you need to have data. How do you get data? You need to curate and basically capture the data that you need, so that you can build AI engines on top. So those are different levels of conversation with different maturities of our clients. 
But we're happy to help them at either end of the spectrum, because a lot of our clients are obviously looking at betting their business on the cloud now. So they are looking for strategic partners, for reliable partners who understand their industry, and with 30,000 projects, we are helping our clients make those decisions. >>So beyond making sure that we're talking about the 80% that are not yet there but are cloud curious, beyond getting the myNav platform stood up, what is your best advice for those companies right now? >>So what we tell our clients is that you need to look at the end-to-end aspect of cloud. Do not look at it as a single application going to the cloud. So when we talk to our clients, we look at which generation they're in; a lot of the transformative element is about future systems. We start our conversation around the future systems aspect of it, and then obviously cloud is an enabling element, a foundational element, to get you there. But then essentially, if you want to run your business in the cloud, these are the things you need to do. So the transformative aspect is what our clients are willing to work with us on. So we tell them, don't just take it to the cloud from a cost perspective. Obviously, you will gain a lot from that, but you also need to look at what you want to do in the cloud. It's not just going to the cloud; what do you want to do in the cloud? >>Well, Kishore, those are great words of advice. Thank you so much for coming on theCUBE. It was a pleasure having you. >>Thank you very much. >>I'm Rebecca Knight. Stay tuned for more of theCUBE's live coverage of the Accenture Executive Summit

Published Date : Dec 3 2019

SUMMARY :

Kishore Durg of Accenture discusses myNav, a platform aimed at the roughly 80% of enterprises that have not yet adopted cloud. myNav works in phases: it discovers the client's infrastructure, applications and data; assesses what they need; architects a solution; and simulates it, producing a migration plan and a bill of materials before the migration and ongoing management. An AI engine draws on a repository of 30,000 cloud projects to recommend architectures that have worked for similar clients, and industry-specific cost benchmarks support rapid business cases. His closing advice: look at cloud end to end, and focus on what you want to do in the cloud, not just on getting there.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Rebecca KnightPERSON

0.99+

80%QUANTITY

0.99+

Las VegasLOCATION

0.99+

SpainLOCATION

0.99+

10 yearsQUANTITY

0.99+

JapanLOCATION

0.99+

20%QUANTITY

0.99+

Kishore DurgPERSON

0.99+

30,000 projectsQUANTITY

0.99+

LeePERSON

0.99+

five yearQUANTITY

0.99+

two daysQUANTITY

0.99+

AWSORGANIZATION

0.99+

bothQUANTITY

0.99+

two yearQUANTITY

0.99+

eightQUANTITY

0.99+

todayDATE

0.99+

first timeQUANTITY

0.99+

first partQUANTITY

0.98+

Accenture Executive SummitEVENT

0.98+

first conditionQUANTITY

0.98+

AccentureORGANIZATION

0.98+

RebeccaPERSON

0.98+

hundreds of peopleQUANTITY

0.98+

Walter WallPERSON

0.97+

twoQUANTITY

0.97+

oneQUANTITY

0.96+

single applicationQUANTITY

0.95+

one clickQUANTITY

0.94+

6 12 weeksQUANTITY

0.93+

first generationQUANTITY

0.93+

10 industry angleQUANTITY

0.92+

two yearsQUANTITY

0.91+

CloudTITLE

0.9+

second generationalQUANTITY

0.89+

DirkPERSON

0.89+

ArmanORGANIZATION

0.86+

Cloud Attic CenterORGANIZATION

0.86+

Adi BundleTITLE

0.86+

ex Censure Executive SummitEVENT

0.86+

ReinventPERSON

0.8+

threeQUANTITY

0.8+

CubaLOCATION

0.8+

JimLOCATION

0.8+

3001st clientQUANTITY

0.8+

Key ShorePERSON

0.79+

weeksQUANTITY

0.75+

80 personQUANTITY

0.73+

re:Invent 2019EVENT

0.61+

generationsQUANTITY

0.57+

AWSEVENT

0.54+

CloudORGANIZATION

0.48+

Patrick Osborne, HPE | HPE Secondary Storage for Hybrid cloud


 

>> From the SiliconANGLE Media Office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Dave Vellante. >> Hi everybody, welcome to the special CUBE conversation on secondary storage and data protection, which is one of the hottest topics in the business right now. Cloud, multi-cloud, bringing the Cloud experience to wherever your data lives and protecting that data driven by digital transformation. We're gonna talk about that with Patrick Osborne, the Vice President and General Manager for big data and secondary storage at HPE, good friend and CUBE alum. Great to see you again. Thanks for coming on. >> Great, thanks for having us. >> So let's start with some of those trends that I mentioned. I think, let's start with digital transformation. It's a big buzzword in the industry but it's real. I travel around, I talk to customers all the time, everybody's trying to get digital transformation right. And digital means data, data needs to be protected in new ways now, and so when we trickle down into your world, data protection, what are you seeing in terms of the impact of digital and digital transformation on data protection? >> Absolutely, great question. So the winds of change in secondary storage are blowing pretty hard right now. I think there's a couple different things that are driving that conversation. A, the specialization of people with specific backup teams, right, that's moving away, right. You're moving away from general storage administration and specialized teams to people focusing a lot of those resources now on Cloud Ops team, DevOps team, application development. So they want that activity of data protection to be automated and invisible. Like you said before, in terms of being able to re-use that data, the old days of essentially having a primary dataset and then pushing it off to some type of secondary storage which just sits there over time, is not something that customers want anymore. >> Right. 
>> They wanna be able to use that data, they wanna be able to generate copies of that, do test and dev, gain insight from that, being able to move that to the Cloud, for example, to be able to burst out there or do it for DR activities. So I think there's a lot of things that are happening when it comes to data that are certainly changing the requirements and expectations around secondary storage. >> So the piece I want to bring to the conversation is Cloud and I saw a stat recently that the average company, the average enterprise has, like, eight clouds, and I was thinking, sheesh, small company like ours has eight clouds, so I mean, the average enterprise must have 80 clouds when you start throwing in all the SaaS. >> Yeah. >> So Cloud and specifically, multi-cloud, you guys, HPE's always been known for open platform, whatever the customer wants to do, we'll do it. So multi-cloud becomes really important. And let's expand the definition of Cloud to include private cloud on prem, what we call True Private Cloud in the Wikibon world, but whether it's Azure, AWS, Google, dot, dot, dot, what are you guys seeing in terms of the pressure from customers to support multi... They don't want a silo, a data protection silo for each cloud, right? >> Absolutely. So they don't want silos in general, right? So I think a couple of key things that you brought up, private cloud is very interesting for customers. Whether they're gonna go on prem or off prem, they absolutely want to have the experience on prem. 
So what we're providing customers is the ability, through APIs and seamless integration into their existing application frameworks, the ability to move data from point A to point B to point C, which could be primary all-flash, secondary systems, cloud targets, but have that be automated with a full API set and provide a lot of those capabilities, those user stories around data protection and re-use, directly to the developers, right, and the database admins and whoever's doing this new cloud ops or DevOps area. The second piece is that, like you said, everyone's gonna have multiple clouds, and what we want to do is we want to be able to give customers an intelligent experience around that. We don't necessarily need to own all the infrastructure, right, but we need to be able to facilitate and provide the visibility of where that data's gonna land, and over time, with our capabilities that we have around InfoSight, we wanna be able to do that predictably, make recommendations, have that whole population of customers learn from each other and provide some expert analysis for our customers as to where to place workloads. >> These trends, Patrick, they're all interrelated, so they're not distinct and before we get into the hard news, I wanna kinda double down on another piece of this. So you got data, you got digital, which is data, you've got new pressures on data protection, you've got the cloud-scale, a lot of diversity. We haven't even talked about the edge. That's another, sort of, piece of it. But people wanna get more out of their data protection investment. They're kinda sick of just spending on insurance. They'd like to get more value out of it. You've mentioned DevOps before. >> Yep. >> Better access to that data, certainly compliance. Things like GDPR have heightened awareness of things that you can do with the data, not just for backup, and not even just for compliance, but actually getting value out of the data. Your thoughts on that trend? 
Yeah, so from what we see for our customers, they absolutely wanna reuse data, right? So we have a ton of solutions for our customers around very low latency, high performance optimized flash storage in 3PAR and Nimble, different capabilities there, and then being able to take that data and move it off to a hybrid flash array, for example, and then do workloads on that, is something that we're doing today with our customers, natively as well as partnering with some of our ISV ecosystem. And then sort of a couple new use cases that are coming is that I want to be able to have data provenance. So I wanna share some of my data, keep that in a colo but be able to apply compute resources, whether those are VMs, whether they are functions, lambda functions, on that data. So we wanna bring the compute to the data, and that's another use case that we're enabling for our customers, and then ultimately using the Cloud as a very, very low-cost, scalable and elastic tier of storage for archive and retention. >> One of the things we've been talking about in theCUBE community is you hear that bromide that data is the new oil, and somebody in the community was saying, you know what? It's actually more valuable than oil. When I have oil, I can put it in my house or I can put it in my car. But data, the unique attribute of data is I can use it over and over and over again. And again, that puts more pressure on data protection. All right, let's get into some of the hard news here. You've got kind of a four-pack of news that we wanna talk about. Let's start with StoreOnce. It's a platform that you guys announced several years ago. You've been evolving it regularly. What's the StoreOnce news? >> Yes, so in the secondary storage world, we've seen the movement from PBBA, so Purpose-Built Backup Appliances, either morphing into very intelligent software that runs on commodity hardware, or an integrated appliance approach, right? 
So you've got an integrated DR appliance that seamlessly integrates into your environment. So what we've been doing with StoreOnce, this is our 4th generation system and it's got a lot of great attributes. It's a system, right. It's available in a rack form factor at different capacities. It's also available as a software-defined version so you can run that on prem, you can run it off prem. It scales up to multiple petabytes in a software-only version. So we've got a couple different use cases for it, but what I think is one of the key things is that we're providing a very integrated experience for customers who are 3PAR and Nimble customers. So it allows you to essentially federate your primary all-flash storage with secondary. And then we actually provide a number of use cases to go out to the Cloud as well. Very easy to use, geared towards the application admin, very integrated. >> So it's bigger, better, faster, and you've got this integration, a confederation as you called it, across different platforms. What's the key technical enabler there? >> Yeah, so we have a really extensible platform for software that we call Recovery Manager Central. Essentially, it provides a number of different use cases and user stories around copy data management. So it's gonna allow you to take application-integrated snapshots. It's gonna allow you to do that either in the application framework, so if you're a DBA and you do RMAN, you could do it in there, or if you have your own custom applications, you can write to the API. So it allows you to do snapshots, full clones, it'll allow you to do DR, so one box to another similar system, it'll allow you to go from primary to secondary, it'll allow you to archive out to the Cloud, and then all of that in reverse, right? So you can pull all of that data back and it'll give you visibility across all those assets. So, in the past, you, as a customer, did all this on your own, right, bought along horizontal lines? 
We're giving a customer, based on a set of outcomes and applications, a complete vertically-oriented solution. >> Okay, so that's the, really, second piece of hard news. >> Yeah. >> Recovery Manager Central, RMC, 6.0, right-- >> Yeah. >> Is the release that we're on? And that's copy data management essentially-- >> Absolutely. >> Is what you're talking about. It's your catalog, right, so your tech underneath that, and you're applying that now across the portfolio, right? >> Absolutely. So, we're extending that from... We've had, for the past year, that ability to do the copy data management directly from 3PAR. We're extending that to provide that for Nimble. Right, so for Nimble customers that want to use all-flash, they want to use hybrid flash arrays from Nimble, you can go to secondary storage in StoreOnce and then out to the Cloud. >> Okay, and that's what 6.0 enables-- >> Yeah, exactly. >> That Nimble piece and then out to the Cloud. Okay, third piece of news is an ecosystem announcement with Commvault. Take us through that. >> Yeah, so we understand at HPE, given the fact that we're very, very focused on hybrid Cloud and we have a lot of customers that have been our customers for a long time, none of these opportunities are greenfield, right, at the end of the day. So your customers are, they have to integrate with existing solutions, and in a lot of cases, they have some partners for data protection. So one of the things that we've done with this ecosystem is made very public our APIs and how to integrate our systems. So we're storage people, we are data management folks, we do big data, we also do infrastructure. So we know how to manage the infrastructure, move data very seamlessly between primary, secondary, and the Cloud. And what we do is, we open up those APIs in those use cases to all of our partners and our customers. 
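As a concrete (and entirely hypothetical) illustration of what "writing to the API" could look like for one of these copy-data use cases, the sketch below assembles the body of an application-consistent create-snapshot request. The operation, field names and values here are invented for the example; they are not HPE's published RMC or Commvault API.

```python
def build_snapshot_request(app_name, volumes, retention_days, consistent=True):
    """Assemble a (hypothetical) create-snapshot request body."""
    if retention_days < 1:
        raise ValueError("retention must be at least one day")
    return {
        "operation": "create_snapshot",
        "application": app_name,        # e.g. the database being protected
        "volumes": sorted(volumes),     # primary volumes snapped together
        "app_consistent": consistent,   # quiesce the application first
        "retention_days": retention_days,
    }

# A DBA protecting a production database might build a request like this:
req = build_snapshot_request("oracle-prod", {"vol2", "vol1"}, retention_days=30)
print(req["volumes"])  # ['vol1', 'vol2']
```

The point of exposing such calls is the one made above: backup, clone and archive steps become something a developer or DBA scripts into their own workflow rather than a ticket to a storage team.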
So, in that, we're announcing a number of integrations with Commvault, so they're gonna be integrating with our de-duplication and compression framework, as well as being able to program to what we call Cloud Bank, right? So, we'll be able to, in effect, integrate with Commvault with our primary storage, be able to do rapid recovery from StoreOnce in a number of backup use cases, and then being able to go out to the cloud, all managed through customers' Commvault interface. >> All right, so if I hear you correctly, you've just gotta double click on the Commvault integration. It's not just a go-to-market setup. It's deeper engineering and integration that you guys are doing. >> Absolutely. >> Okay, great. And then, of course the fourth piece is around, so your bases are loaded here, the fourth piece is around the Cloud economics, Cloud pricing model. Your GreenLake model, the utility pricing has gotten a lot of traction. When we're at HPE Discover, customers talking about it, you guys have been leaders there. Talk about GreenLake and how that model fits into this. >> Yeah, so, in the technology talk track we talk about, essentially, how to make this simple and how to make it scalable. At the end of the day, on the buying pattern side, customers expect elasticity, right? So, what we're providing for our customers is when they want to do either a specific integration or implementation of one of those components from a technology perspective, we can provide that. If they're doing a complete re-architecture and want to understand how I can essentially use secondary storage better and I wanna take advantage of all that data that I have sitting in there, I can provide that whole experience to customers as a service, right? 
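The de-duplication framework mentioned above rests on a simple principle: chunk the backup stream, hash each chunk, and store each unique chunk only once, so repeated backups of mostly-unchanged data consume little extra space. The sketch below uses unrealistically small fixed-size chunks purely to show the mechanics; StoreOnce itself uses variable-length chunking and specialised indexes.

```python
import hashlib

CHUNK = 4  # absurdly small fixed-size chunks, just to show the idea

def dedupe_store(data: bytes, store: dict) -> list:
    """Split data into chunks; keep one copy per unique chunk; return a recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # stored once, however often it recurs
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original stream from its chunk recipe."""
    return b"".join(store[d] for d in recipe)

store = {}
backup1 = dedupe_store(b"AAAABBBBAAAACCCC", store)
backup2 = dedupe_store(b"AAAABBBBDDDD", store)  # shares two chunks with backup1
print(len(store))  # 4 unique chunks held for 7 chunk references
```

A partner product "integrating with the de-duplication framework" essentially means writing chunk recipes and digests the target understands, instead of shipping the full stream every time.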
So, the primary storage, your secondary storage, the Cloud capacity, even some of the ISV partner software that we provide, I can take that as an entire, vetted solution, with reference architectures and the expertise to implement, and I can give that to a customer in an OpEx as a service elastic purchasing model. And that is very unique for HPE and that's what we've gone to market with GreenLake, and we're gonna be providing more solutions like that, but in this case, we're announcing the fact that you can buy that whole experience, backup as a service, data protection as a service, through GreenLake from HPE. >> So how does that work, Patrick, practically speaking? A customer will, what, commit to some level of capacity, let's say, as an example, and then HPE will put in some extra headroom if, in fact, that's needed, you maybe sit down with the customer and do some kind of capacity planning, or how does that actually work, practically speaking? >> Yeah, absolutely. So we work with customers on the architecture, right, up front. So we have a set of vetted architectures. We try to avoid snowflakes, right, at the end of the day. We want to talk to customers around outcomes. So if a customer is trying to reach outcome XYZ, we come with a recommendation on how to do that. And what we can do is, we don't have very high up-front commitments and it's very elastic in the way that we approach the purchasing experience. So we're able to fit those modules in. And then we've made some number of acquisitions over the last couple years, right? So, on the advisory side, we have Cloud Technology Partners. We come in and talk about how do you do a hybrid cloud backup as a service, right? So we can advise customers on how to do that and build that into the experience. We acquired CloudCruiser, right? 
So we have the billing and the monitoring and everything that gets very, very granular on how you use that service, and that goes into how we bill customers on a per-metric usage format. And so we're able to package all of that up and we have, this is kind of a little-known fact, a very, very high NPS score for HPE Financial Services. Right, so the combination of our Pointnext services, advisory, financial services, really puts a lot of meat behind GreenLake as a really good customer experience around elasticity. >> Okay, now all this stuff is gonna be available calendar Q4 of 2018, correct? >> Correct. >> Okay, so if you've seen videos like this before, we like to talk about what it is, how it works, and then we like to bring it home with the business impact. So thinking about these four announcements, and you can drill deeper on any one that you like, but I'd like to start, at least, holistically, what's the business impact of all of this? Obviously, you've got Cloud, we talked about some of the trends up front, but what are you guys telling customers is the real ROI? >> So, I think the big ROI is it moves secondary storage from a TCO conversation to an ROI conversation. Right, so instead of selling customers a solution where you're gonna have data that sits there waiting for something to happen, I'm giving customers a solution that's consumed as a service to be able to mine and utilize that secondary data, right? Whether it's for simple tasks like patch verification, application rollouts, things like that, and actually lowering the cost of your primary storage in doing that, which is usually pretty expensive from a storage perspective. I'm also helping customers save time, right? By providing these integrated experiences from primary to secondary to Cloud and making that automatic, I do help customers save quite a bit in OpEx from an operator perspective. 
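The consumption model described here — a committed baseline billed every period, plus metered billing for usage above it within pre-installed buffer capacity — can be put in back-of-the-envelope terms. The rates and capacities below are made-up numbers for illustration, not GreenLake pricing.

```python
def monthly_bill(used_tb, reserved_tb, buffer_tb,
                 reserved_rate=20.0, burst_rate=28.0):
    """Return (bill, overage_tb): baseline billed flat, overage at burst rate."""
    if used_tb > reserved_tb + buffer_tb:
        raise ValueError("usage exceeds installed capacity; grow the buffer")
    overage = max(0, used_tb - reserved_tb)
    return reserved_tb * reserved_rate + overage * burst_rate, overage

# 100 TB reserved, 50 TB of buffer on the floor, 120 TB actually consumed:
bill, overage = monthly_bill(used_tb=120, reserved_tb=100, buffer_tb=50)
print(bill, overage)  # 100*20 + 20*28 = 2560.0, with 20 TB of overage
```

The customer pays on measured usage while the vendor keeps headroom installed on site, which is what makes the model feel elastic rather than a capital purchase.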
And they can take those resources and move them on to higher impact projects like DevOps, CloudOps, things of that nature. That's a big impact from a customer perspective. >> So there's a CapEx to OpEx move for those customers that want to take advantage of GreenLake. >> Yep. >> So certain CFOs will like that story. But I think the other piece that, to me anyway, is most important is, especially in this world of digital transformation, I know it's a buzzword, but it's real. When you go to talk to people, they don't wanna do the heavy lifting of infrastructure management, the day-to-day infrastructure management. A lot of mid-size customers, they just don't have the resources to do it anymore. >> Correct. >> And they're under such pressure to digitize, every company wants to become a software company. Benioff talks about that, Satya Nadella talks about that, Antonio talks about digital transformation. And so it's on CEOs' minds. They don't want to be paying people for these mundane tasks. They really wanna shift them to these digital transformation initiatives and drive more business value. >> Absolutely. So you said it best, right, we wanna drive the customer experience to focusing on high-value things that'll enable their digital transformation. So, as a vision, what we're gonna keep on providing, and you've seen that with InfoSight on Nimble, InfoSight for 3PAR, and our vision around AI for the data center, these tasks around data protection, they're repeatable tasks, how to protect data, how to move data, how to mine that data. So if we can provide recommendations and some predictive analytics and experiences to the customers around this, and essentially abstract that and just have the customers focus on defining their SLA, and we're worried about delivering that SLA, then that's a huge win for us and our customers. And that's our vision, that's what we're gonna be providing them. >> Yeah, automation is the key. 
You've got some tools in the toolkit to help do that and it's just gonna escalate from here. It feels like we're on the early part of the S-curve and it's just gonna really spike. >> Absolutely. >> All right, Patrick. Hey, thanks for coming in and taking us through this news, and congratulations on getting this stuff done and we'll be watching the marketplace. Thank you. >> Great. Kudos to the team, great announcement, and we look forward to working with you guys again. >> All right, thanks for watching, everybody. We'll see you next time. This is Dave Vellante on theCUBE. (gentle music)

Published Date : Oct 4 2018

SUMMARY :

Dave Vellante talks with Patrick Osborne, VP and GM for big data and secondary storage at HPE, about a four-part announcement in secondary storage and data protection: a fourth-generation StoreOnce system, also available as a software-defined version scaling to multiple petabytes; Recovery Manager Central 6.0, extending copy data management and cloud archiving from 3PAR to Nimble; deeper API-level integration with Commvault, including the de-duplication framework and Cloud Bank; and backup as a service consumed through GreenLake's elastic, pay-per-use model. The theme is moving secondary storage from a TCO conversation to an ROI conversation by making protected data reusable for DevOps, test/dev and analytics. Everything is available in calendar Q4 of 2018.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Patrick OsbornePERSON

0.99+

Dave VellantePERSON

0.99+

Satya NadellaPERSON

0.99+

AntonioPERSON

0.99+

PatrickPERSON

0.99+

80 cloudsQUANTITY

0.99+

NimbleORGANIZATION

0.99+

BenioffPERSON

0.99+

second pieceQUANTITY

0.99+

fourth pieceQUANTITY

0.99+

AWSORGANIZATION

0.99+

each cloudQUANTITY

0.99+

Boston, MassachusettsLOCATION

0.98+

oneQUANTITY

0.98+

GreenLakeORGANIZATION

0.98+

GDPRTITLE

0.98+

SiliconANGLE Media OfficeORGANIZATION

0.98+

HPEORGANIZATION

0.98+

CUBEORGANIZATION

0.98+

Cloud Technology PartnersORGANIZATION

0.98+

GoogleORGANIZATION

0.98+

HPE DiscoverORGANIZATION

0.97+

InfoSightORGANIZATION

0.97+

todayDATE

0.96+

several years agoDATE

0.96+

third pieceQUANTITY

0.96+

RMCORGANIZATION

0.96+

four-packQUANTITY

0.95+

HPEsORGANIZATION

0.95+

CloudTITLE

0.94+

past yearDATE

0.94+

OneQUANTITY

0.94+

CommvaultORGANIZATION

0.93+

eight cloudsQUANTITY

0.93+

CloudOpsTITLE

0.93+

four announcementsQUANTITY

0.93+

4th generation systemQUANTITY

0.91+

Cloud BankTITLE

0.9+

OpExTITLE

0.9+

Cloud OpsORGANIZATION

0.86+

StoreOnceTITLE

0.86+

DevOpsORGANIZATION

0.86+

BromiteORGANIZATION

0.85+

last couple yearsDATE

0.82+

3PARORGANIZATION

0.81+

CommvaultTITLE

0.8+

3PARTITLE

0.79+

coupleQUANTITY

0.79+

ArmanTITLE

0.78+

DevOpsTITLE

0.76+

2018DATE

0.76+

CapExTITLE

0.71+

Q4 ofDATE

0.71+

StoreOnceORGANIZATION

0.71+

theCUBEORGANIZATION

0.63+

3PAR NimbleORGANIZATION

0.63+

PBBATITLE

0.59+

AzureTITLE

0.57+

GreenLakeTITLE

0.57+

HPE Secondary Storage for Hybrid cloud


 

>> From the SiliconANGLE Media Office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Dave Vellante. >> Hi everybody, welcome to the special CUBE conversation on secondary storage and data protection, which is one of the hottest topics in the business right now. Cloud, multi-cloud, bringing the Cloud experience to wherever your data lives and protecting that data driven by digital transformation. We're gonna talk about that with Patrick Osborne, the Vice President and General Manager for big data and secondary storage at HPE, good friend and CUBE alum. Great to see you again. Thanks for coming on. >> Great, thanks for having us. >> So let's start with some of those trends that I mentioned. I think, let's start with digital transformation. It's a big buzzword in the industry but it's real. I travel around, I talk to customers all the time, everybody's trying to get digital transformation right. And digital means data, data needs to be protected in new ways now, and so when we trickle down into your world, data protection, what are you seeing in terms of the impact of digital and digital transformation on data protection? >> Absolutely, great question. So the winds of change in secondary storage are blowing pretty hard right now. I think there's a couple different things that are driving that conversation. A, the specialization of people with specific backup teams, right, that's moving away, right. You're moving away from general storage administration and specialized teams to people focusing a lot of those resources now on Cloud Ops team, DevOps team, application development. So they want that activity of data protection to be automated and invisible. Like you said before, in terms of being able to re-use that data, the old days of essentially having a primary dataset and then pushing it off to some type of secondary storage which just sits there over time, is not something that customers want anymore. >> Right. 
>> They wanna be able to use that data, they wanna be able to generate copies of that, do test and dev, gain insight from that, being able to move that to the Cloud, for example, to be able to burst out there or do it for DR activities. So I think there's a lot of things that are happening when it comes to data that are certainly changing the requirements and expectations around secondary storage. >> So the piece I want to bring to the conversation is Cloud and I saw a stat recently that the average company, the average enterprise has, like, eight clouds, and I was thinking, sheesh, small company like ours has eight clouds, so I mean, the average enterprise must have 80 clouds when you start throwing in all the SaaS. >> Yeah. >> So Cloud and specifically, multi-cloud, you guys, HPE's always been known for open platform, whatever the customer wants to do, we'll do it. So multi-cloud becomes really important. And let's expand the definition of Cloud to include private cloud on-prem, what we call True Private Cloud in the Wikibon world, but whether it's Azure, AWS, Google, dot, dot, dot, what are you guys seeing in terms of the pressure from customers to support multi... They don't want a silo, a data protection silo for each cloud, right? >> Absolutely. So they don't want silos in general, right? So I think a couple of key things that you brought up, private cloud is very interesting for customers. Whether they're gonna go on-prem or off-prem, they absolutely want to have the Cloud experience on-prem.
So what we're providing customers, through APIs and seamless integration into their existing application frameworks, is the ability to move data from point A to point B to point C, which could be primary all-flash, secondary systems, cloud targets, but have that be automated, with a full API set, and provide a lot of those capabilities, those user stories around data protection and re-use, directly to the developers, right, and the database admins and whoever's doing this new sort of DevOps work. The second piece is that, like you said, everyone's gonna have multiple clouds, and what we want to do is we want to be able to give customers an intelligent experience around that. We don't necessarily need to own all the infrastructure, right, but we need to be able to facilitate and provide the visibility of where that data's gonna land, and over time, with our capabilities that we have around InfoSight, we wanna be able to do that predictably, make recommendations, have that whole population of customers learn from each other and provide some expert analysis for our customers as to where to place workloads. >> These trends, Patrick, they're all interrelated, so they're not distinct, and before we get into the hard news, I wanna kinda double down on another piece of this. So you got data, you got digital, which is data, you've got new pressures on data protection, you've got the cloud-scale, a lot of diversity. We haven't even talked about the edge. That's another, sort of, piece of it. But people wanna get more out of their data protection investment. They're kinda sick of just spending on insurance. They'd like to get more value out of it. You've mentioned DevOps before. >> Yep. >> Better access to that data, certainly compliance. Things like GDPR have heightened awareness of things that you can do with the data, not just for backup, and not even just for compliance, but actually getting value out of the data. Your thoughts on that trend?
>> Yeah, so from what we see for our customers, they absolutely wanna reuse data, right? So we have a ton of solutions for our customers around very low latency, high performance optimized flash storage in 3PAR and Nimble, different capabilities there, and then being able to take that data and move it off to a hybrid flash array, for example, and then do workloads on that, is something that we're doing today with our customers, natively as well as partnering with some of our ISV ecosystem. And then sort of a couple new use cases that are coming is that I want to be able to have data provenance. So I wanna share some of my data, keep that in a colo but be able to apply compute resources, whether those are VMs, whether they are functions, lambda functions, on that data. So we wanna bring the compute to the data, and that's another use case that we're enabling for our customers, and then ultimately using the Cloud as a very, very low-cost, scalable and elastic tier of storage for archive and retention. >> One of the things we've been talking about in theCUBE community is you hear the bromide that data is the new oil, and somebody in the community was saying, you know what? It's actually more valuable than oil. When I have oil, I can put it in my house or I can put it in my car. But data, the unique attribute of data is I can use it over and over and over again. And again, that puts more pressure on data protection. All right, let's get into some of the hard news here. You've got kind of a four-pack of news that we wanna talk about. Let's start with StoreOnce. It's a platform that you guys announced several years ago. You've been evolving it regularly. What's the StoreOnce news? >> Yes, so in the secondary storage world, we've seen the movement from PBBAs, so Purpose-Built Backup Appliances, either morphing into very intelligent software that runs on commodity hardware, or an integrated appliance approach, right?
So you've got an integrated DR appliance that seamlessly integrates into your environment. So what we've been doing with StoreOnce, this is our 4th generation system and it's got a lot of great attributes. It comes as a system, right; it's available in a rack form factor at different capacities. It's also available as a software-defined version, so you can run that on-prem or you can run it off-prem. It scales up to multiple petabytes in a software-only version. So we've got a couple different use cases for it, but what I think is one of the key things is that we're providing a very integrated experience for customers who are 3PAR and Nimble customers. So it allows you to essentially federate your primary all-flash storage with secondary. And then we actually provide a number of use cases to go out to the Cloud as well. Very easy to use, geared towards the application admin, very integrated. >> So it's bigger, better, faster, and you've got this integration, a federation as you called it, across different platforms. What's the key technical enabler there? >> Yeah, so we have a really extensible platform for software that we call Recovery Manager Central. Essentially, it provides a number of different use cases and user stories around copy data management. So it's gonna allow you to take application-integrated snapshots. It's gonna allow you to do that either in the application framework, so if you're a DBA and you use RMAN, you could do it in there, or if you have your own custom applications, you can write to the API. So it allows you to do snapshots, full clones, it'll allow you to do DR, so one box to another similar system, it'll allow you to go from primary to secondary, it'll allow you to archive out to the Cloud, and then all of that in reverse, right? So you can pull all of that data back and it'll give you visibility across all those assets. So, the past where you, as a customer, did all this on your own, right, bought along horizontal lines?
We're giving a customer, based on a set of outcomes and applications, a complete vertically-oriented solution. >> Okay, so that's the, really, second piece of hard news. >> Yeah. >> Recovery Manager Central, RMC, 6.0, right-- >> Yeah. >> Is the release that we're on? And that's copy data management essentially-- >> Absolutely. >> Is what you're talking about. It's your catalog, right, so your tech underneath that, and you're applying that now across the portfolio, right? >> Absolutely. So, we're extending that from... We've had, for the past year, that ability to do the copy data management directly from 3PAR. We're extending that to provide that for Nimble. Right, so for Nimble customers that want to use all-flash, they want to use hybrid flash arrays from Nimble, you can go to secondary storage in StoreOnce and then out to the Cloud. >> Okay, and that's what 6.0 enables-- >> Yeah, exactly. >> That Nimble piece and then out to the Cloud. Okay, third piece of news is an ecosystem announcement with Commvault. Take us through that. >> Yeah, so we understand at HPE, given the fact that we're very, very focused on hybrid Cloud and we have a lot of customers that have been our customers for a long time, none of these opportunities are greenfield, right, at the end of the day. So your customers are, they have to integrate with existing solutions, and in a lot of cases, they have some partners for data protection. So one of the things that we've done with this ecosystem is made very public our APIs and how to integrate our systems. So we're storage people, we are data management folks, we do big data, we also do infrastructure. So we know how to manage the infrastructure, move data very seamlessly between primary, secondary, and the Cloud. And what we do is, we open up those APIs in those use cases to all of our partners and our customers. 
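The copy-data-management user stories Patrick walks through, snapshot, clone, go from primary to secondary to cloud, and pull it all back in reverse, can be sketched as a toy lifecycle model. This is purely illustrative: the class and method names below are hypothetical and are not HPE's actual RMC or StoreOnce API.

```python
# Toy model of copy data management across tiers. Everything here
# (TIERS, Dataset, copy_to, restore_from) is invented for illustration,
# not a real HPE interface.

TIERS = ["primary-flash", "secondary", "cloud"]

class Dataset:
    def __init__(self, name):
        self.name = name
        self.copies = {"primary-flash"}   # tiers that currently hold a copy

    def copy_to(self, tier):
        """Protect the dataset outward, e.g. snapshot to a secondary system."""
        if tier not in TIERS:
            raise ValueError(f"unknown tier: {tier}")
        self.copies.add(tier)

    def restore_from(self, tier):
        """Pull a copy back to primary flash ('all of that in reverse')."""
        if tier not in self.copies:
            raise ValueError(f"no copy of {self.name} on {tier}")
        self.copies.add("primary-flash")

ds = Dataset("sales-db")
ds.copy_to("secondary")   # e.g. replicate to a StoreOnce-like target
ds.copy_to("cloud")       # e.g. archive to a cloud bucket
print(sorted(ds.copies))  # ['cloud', 'primary-flash', 'secondary']
```

The point of the "visibility across all those assets" claim is that one catalog tracks every copy, which here is just the `copies` set; a real system would also track snapshot lineage and retention.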
So, in that, we're announcing a number of integrations with Commvault, so they're gonna be integrating with our de-duplication and compression framework, as well as being able to program to what we call Cloud Bank, right? So, we'll be able to, in effect, integrate with Commvault with our primary storage, be able to do rapid recovery from StoreOnce in a number of backup use cases, and then being able to go out to the cloud, all managed through customers' Commvault interface. >> All right, so if I hear you correctly, you've just gotta double click on the Commvault integration. It's not just a go-to-market setup. It's deeper engineering and integration that you guys are doing. >> Absolutely. >> Okay, great. And then, of course the fourth piece is around, so your bases are loaded here, the fourth piece is around the Cloud economics, Cloud pricing model. Your GreenLake model, the utility pricing has gotten a lot of traction. When we're at HPE Discover, customers talking about it, you guys have been leaders there. Talk about GreenLake and how that model fits into this. >> Yeah, so, in the technology talk track we talk about, essentially, how to make this simple and how to make it scalable. At the end of the day, on the buying pattern side, customers expect elasticity, right? So, what we're providing for our customers is when they want to do either a specific integration or implementation of one of those components from a technology perspective, we can provide that. If they're doing a complete re-architecture and want to understand how I can essentially use secondary storage better and I wanna take advantage of all that data that I have sitting in there, I can provide that whole experience to customers as a service, right? 
So, the primary storage, your secondary storage, the Cloud capacity, even some of the ISV partner software that we provide, I can take that as an entire, vetted solution, with reference architectures and the expertise to implement, and I can give that to a customer in an OpEx as a service elastic purchasing model. And that is very unique for HPE and that's what we've gone to market with GreenLake, and we're gonna be providing more solutions like that, but in this case, we're announcing the fact that you can buy that whole experience, backup as a service, data protection as a service, through GreenLake from HPE. >> So how does that work, Patrick, practically speaking? A customer will, what, commit to some level of capacity, let's say, as an example, and then HPE will put in some extra headroom if, in fact, that's needed, you maybe sit down with the customer and do some kind of capacity planning, or how does that actually work, practically speaking? >> Yeah, absolutely. So we work with customers on the architecture, right, up front. So we have a set of vetted architectures. We try to avoid snowflakes, right, at the end of the day. We want to talk to customers around outcomes. So if a customer is trying to reach outcome XYZ, we come with a recommendation on how to do that. And what we can do is, we don't have very high up-front commitments and it's very elastic in the way that we approach the purchasing experience. So we're able to fit those modules in. And then we've made some number of acquisitions over the last couple years, right? So, on the advisory side, we have Cloud Technology Partners. We come in and talk about how do you do a hybrid cloud backup as a service, right? So we can advise customers on how to do that and build that into the experience. We acquired CloudCruiser, right? 
So we have the billing and the monitoring and everything that gets very, very granular on how you use that service, and that goes into how we bill customers on a per-metric usage format. And so we're able to package all of that up and we have, this is a kind of a little-known fact, a very, very high NPS score for HPE Financial Services. Right, so the combination of our Pointnext services, advisory, financial services, really puts a lot of meat behind GreenLake as a really good customer experience around elasticity. >> Okay, now all this stuff is gonna be available calendar Q4 of 2018, correct? >> Correct. >> Okay, so if you've seen videos like this before, we like to talk about what it is, how it works, and then we like to bring it home with the business impact. So thinking about these four announcements, and you can drill deeper on any one that you like, but I'd like to start, at least, holistically, what's the business impact of all of this? Obviously, you've got Cloud, we talked about some of the trends up front, but what are you guys telling customers is the real ROI? >> So, I think the big ROI is it moves secondary storage from a TCO conversation to an ROI conversation. Right, so instead of selling customers a solution where you're gonna have data that sits there waiting for something to happen, I'm giving customers a solution that's consumed as a service to be able to mine and utilize that secondary data, right? Whether it's for simple tasks like patch verification, application rollouts, things like that, and actually lowering the cost of your primary storage in doing that, which is usually pretty expensive from a storage perspective. I'm also helping customers save time, right? By providing these integrated experiences from primary to secondary to Cloud and making that automatic, I do help customers save quite a bit in OpEx from an operator perspective.
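The GreenLake buying pattern described here, a low up-front commitment plus granular per-metric usage billing, boils down to metered consumption pricing. A minimal sketch follows; the rates, metric, and function name are entirely made up for illustration and are not actual GreenLake pricing, which is negotiated per customer.

```python
# Minimal sketch of consumption-based billing: a flat fee covers the
# committed baseline, and usage above it is metered per unit.
# All numbers are invented for illustration.

def monthly_bill(used_tb, committed_tb, base_fee, overage_rate_per_tb):
    """Bill = flat fee for the commitment, plus metered overage above it."""
    overage = max(0.0, used_tb - committed_tb)
    return base_fee + overage * overage_rate_per_tb

# Customer commits to 100 TB and consumes 120 TB this month.
print(monthly_bill(120, 100, base_fee=5000.0, overage_rate_per_tb=40.0))  # 5800.0
```

Usage at or below the commitment bills only the flat fee, which is what gives the customer elasticity without large up-front CapEx.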
And they can take those resources and move them on to higher impact projects like DevOps, CloudOps, things of that nature. That's a big impact from a customer perspective. >> So there's a CapEx to OpEx move for those customers that want to take advantage of GreenLake. >> Yep. >> So certain CFOs will like that story. But I think the other piece that, to me anyway, is most important is, especially in this world of digital transformation, I know it's a buzzword, but it's real. When you go to talk to people, they don't wanna do the heavy lifting of infrastructure management, the day-to-day infrastructure management. A lot of mid-size customers, they just don't have the resources to do it anymore. >> Correct. >> And they're under such pressure to digitize, every company wants to become a software company. Benioff talks about that, Satya Nadella talks about that, Antonio talks about digital transformation. And so it's on CEOs' minds. They don't want to be paying people for these mundane tasks. They really wanna shift them to these digital transformation initiatives and drive more business value. >> Absolutely. So you said it best, right, we wanna drive the customer experience to focusing on high-value things that'll enable their digital transformation. So, as a vision, what we're gonna keep on providing, and you've seen that with InfoSight on Nimble, InfoSight for 3PAR, and our vision around AI for the data center, these tasks around data protection, they're repeatable tasks: how to protect data, how to move data, how to mine that data. So if we can provide recommendations and some predictive analytics and experiences to the customers around this, and essentially abstract that and just have the customers focus on defining their SLA, and we're worried about delivering that SLA, then that's a huge win for us and our customers. And that's our vision, that's what we're gonna be providing them. >> Yeah, automation is the key.
You've got some tools in the toolkit to help do that and it's just gonna escalate from here. It feels like we're on the early part of the S-curve and it's just gonna really spike. >> Absolutely. >> All right, Patrick. Hey, thanks for coming in and taking us through this news, and congratulations on getting this stuff done and we'll be watching the marketplace. Thank you. >> Great. Kudos to the team, great announcement, and we look forward to working with you guys again. >> All right, thanks for watching, everybody. We'll see you next time. This is Dave Vellante on theCUBE. (gentle music)

Published Date : Oct 2 2018


Day One Morning Keynote | Red Hat Summit 2018


 

>> Now, welcome to Red Hat Summit 2018! [Music] >> Wow, that is truly the coolest introduction I've ever had. Thank you. Wow, I don't think I feel cool enough to follow an introduction like that. Wow. Well, welcome to the Red Hat Summit. This is our 14th annual event, and I have to say, looking out over this audience, wow, it's great to see so many people here joining us. This is by far our largest Summit to date. Not only did we blow through the numbers we've had in the past, we blew through our own expectations this year. So I know we have a pretty packed house, and I know people are still coming in, so it's great to see so many people here. It's great to see so many familiar faces, as I had a chance to walk around earlier, and it's great to see so many new people here joining us for the first time. I think the record attendance is an indication that more and more enterprises around the world are seeing the power of open source to help them with the challenges they're facing due to the digital transformation that enterprises around the world are going through. The theme for the Summit this year is Ideas Worth Exploring, and we intentionally chose that because, as much as we are all going through this digital disruption and the challenges associated with it, one
thing I think is becoming clear: no one person, and certainly no one company, has the answers to these challenges. This isn't a problem where you can go buy a solution. This is a set of capabilities that we all need to build; it's a set of cultural changes that we all need to go through, and that's going to require the best ideas coming from so many different places. So we're not here saying we have the answers. We're trying to convene the conversation. We want to serve as a catalyst, bringing great minds together to share ideas, so we all walk out of here at the end of the week a little wiser than when we first came. We do have an amazing agenda for you. We have over 7,000 attendees, and we may be pushing 8,000 by the time we get through this morning. We have 36 keynote speakers and three hundred and twenty-five breakout sessions, and I have to throw in one plug: scheduling 325 breakout sessions is actually pretty difficult. And so we used the Red Hat Business Optimizer, which is an AI constraint solver that's new in Red Hat Decision Manager, to help us plan the Summit, because we have individuals who have a clustered set of interests, and we want to make sure that when we schedule two breakout sessions, we do it in a way that we don't have overlapping sessions that are really important to the same individual. So we tried to use this tool, and what we understand about people's interests and history of what they wanted to do, to try to make sure that we spaced out different times for things of similar interest for similar people, as well as for people who stood in the back of breakouts before, and I know I've done that too. We've also used it to try to optimize room size, so hopefully we will do our best to make sure that we've appropriately sized the spaces for those as well. So it's really a phenomenal tool, and I know it's helped us a lot this year. In addition to the 325 breakouts, we have a lot of our customers on stage during the main sessions, and so you'll see demos, you'll
hear from partners, and you'll hear stories from so many of our customers: not our point of view of how to use these technologies, but their points of view of how they actually are using these technologies to solve their problems. And you'll hear over and over again from those keynotes that it's not just about the technology; it's about how people are changing how they work to innovate and solve those problems. While we're on the subject of people, I'd like to take a moment to recognize the Red Hat Certified Professional of the Year. This is an award we do every year. I love this award because it truly recognizes an individual for outstanding innovation, for outstanding ideas, for truly standing out in how they're able to help their organization with Red Hat technologies. Red Hat certifications help system administrators, application developers, and IT architects to further their careers and help their organizations by being able to advance their skills and knowledge of Red Hat products, and this year's winner truly is a great example of how curiosity has helped push the limits of what's possible with technology. Let's hear a little more about this year's winner. >> When I was studying at the university, I had computer science as one of my subjects, and that's what created the passion from the very beginning. There were quite a few institutions around my university who were offering Red Hat Enterprise Linux as a course and a certification path to become an administrator. The Red Hat Learning Subscription has offered me a lot more than any other trainings that I have done so far. It gave me exposure to so many products under Red Hat technologies that I wasn't even aware of. I started to think about better ways these learnings could be put into real-life use cases, and we started off with a discussion with my manager, saying I have to try this product and I really want to see how it fits in our environment, and that product was Red Hat
Virtualization. We went from deploying RHV, and then OpenStack, and then the OpenShift environment. We wanted to overcome some of the things that we saw as challenges to the speed and rapidity of release and code, etc., so it made perfect sense, and we were able to do it in a really short space of time. So, you know, we truly did use it as an innovation lab. >> I think an idea is everything. Ideas can change the way you see things. An innovation lab was such an idea that popped into my mind one fine day, and it has transformed the way we think as a team, and it's given that playpen to pretty much everyone to go and test their things, investigate, evaluate, do whatever they like in a non-critical, non-production environment. >> I recruited Neha almost 10 years ago now. I could see there was a spark, a potential, and you know, she had a real drive, a real passion, and here we are nearly ten years later. >> I'm Neha Sandow. I am a Red Hat Certified Engineer. >> All right, well, everyone, please welcome Neha to the stage. [Applause] Congratulations. >> Thank you. [Applause] >> Well, welcome to Red Hat Summit. This is your first Summit? >> Yes, it is. >> Thanks so much. Well, fantastic. It's great to have you here. I hope you have a chance to engage and share some of your ideas and enjoy the week. >> Thank you. >> Congratulations. [Applause] Neha mentioned that she first got interested in open source at university, and it made me think: Red Hat recently started our Red Hat Academy program, which looks to programmatically infuse Red Hat technologies into universities around the world. It's exploded in a way we had no idea it would; it's grown just incredibly rapidly, which I think shows the interest there really is in open source and working in an open way at university. So it's really a phenomenal program. I'm also excited to announce that we're launching our newest open source story this year at Summit. It's called The Science of Collective Discovery, and it looks at what happens when
communities use open hardware to monitor the environment around them, and really how they can make impactful change based on those technologies. The world premiere will be at 5:15 on Wednesday at Moscone West, so please join us for a drink, and we'll also have a number of the experts featured in it there, so you can have a conversation with them as well. So with that, let's officially start the show. Please welcome Red Hat President of Products and Technology, Paul Cormier. [Music] >> Wow. Morning. You know, I say it every year, and I'm gonna say it again; I know I repeat myself. It's just amazing. We are so proud to be here today, to share with you all week how far we've come with open source and with the products that we provide at Red Hat. So welcome, and I hope the pride shows through. You know, I told you seven Summits ago on this stage that the future would be open, and here we are, just seven years later (this is the 14th Summit, but just seven years after that), and much has happened. I think you'll see today and this week that that prediction that the world would be open was a pretty safe prediction. But I want to take you back a little bit to see how we started here, and it's not just how Red Hat started here; it's how open source and Linux-based computing became an industry norm, and I think that's what you'll see here this week. You know, we talked back then, seven years ago, when we put out our prediction, about the UNIX era, and how hardware innovation with x86 was really the first step in a new era of open innovation. You know, companies like Sun, DEC, IBM, and HP really changed the world, the computing industry, with their UNIX models. That was really the rise of computing. But I think what we really saw then was that single-company innovation could only scale so far. These companies were very, very innovative, but they coupled hardware innovation with software innovation, and
as one company, they could only solve so many problems. And, which complicated things even more, they could only hire so many people in each of their companies. Intel came on the scene back then as the new independent hardware player, and that was really the beginning of the drive for horizontal computing power. This opened up a brand-new vehicle for hardware innovation; a new hardware ecosystem was built around this common hardware base. Shortly after that, Stallman and Linus had a vision of an open model, and they created Linux, but it was built around Intel. This was really the beginning of having a software-based platform that could also drive innovation. This was the beginning of the changing of the world here: system-level innovation, now having a hardware platform that was ubiquitous and a software platform that was open and ubiquitous. It really changed system-level innovation, and that continues to thrive today. It was only possible because it was open. This could not have happened in a closed environment. It allowed the best ideas, from anywhere, from all over, to come in and win, only because they were the best ideas. That's what drove the rate of innovation to the pace you're seeing today, which has never been seen before. We at Red Hat saw the need to bring this innovation to solve real-world problems in the enterprise, and I think that's going to be the theme of the show today. You're going to see us with our customers and partners talking about, and showing you, some of those real-world problems that we are solving with this open innovation. We created RHEL back then for the enterprise. It was successful because it scaled, it was secure, and it was enterprise-ready. It once again changed the industry, but this time through open innovation. This open software platform gave the hardware ecosystem
a software platform to build around it Unleashed them the hardware side to compete and thrive it enabled innovation from the OEMs new players building cheaper faster servers even new architectures from armed to power sprung up with this change we have seen an incredible amount of hardware innovation over the last 15 years that same innovation happened on the software side we saw powerful implementations of bare metal Linux distributions out in the market in fact at one point there were 300 there are over 300 distributions out in the market on the foundation of Linux powerful open-source equivalents were even developed in every area of Technology databases middleware messaging containers anything you could imagine innovation just exploded around the Linux platform in innovation it's at the core also drove virtualization both Linux and virtualization led to another area of innovation which you're hearing a lot about now public cloud innovation this innovation started to proceed at a rate that we had never seen before we had never experienced this in the past in this unprecedented speed of innovation and software was now possible because you didn't need a chip foundry in order to innovate you just needed great ideas in the open platform that was out there customers seeing this innovation in the public cloud sparked it sparked their desire to build their own linux based cloud platforms and customers are now are now bringing that cloud efficiency on-premise in their own data centers public clouds demonstrated so much efficiency the data centers and architects wanted to take advantage of it off premise on premise I'm sorry within their own we don't within their own controlled environments this really allowed companies to make the most of existing investments from data centers to hardware they also gained many new advantages from data sovereignty to new flexible agile approaches I want to bring Burr and his team up here to take a look at what building out an on-premise 
cloud can look like today. Burr, take it away.

I am super excited to be with all of you here at Red Hat Summit. I know we have some amazing things to show you throughout the week, but before we dive into this demonstration I want you to take just a few seconds, just a quick moment, to think about a really important event in your life: the moment you turned on your first computer. Maybe it was a TRS-80, a Sinclair, an Atari; I even had an 83 B2 at one point. In my specific case I was sitting in a classroom in Hawaii, where I could see all the way from Diamond Head to Pearl Harbor — just keep that in mind — and I turned on an IBM PC with dual floppies. I remember issuing my first commands, writing my first lines of code, and I was totally hooked. It was like a magical moment, and I've been hooked on computers for the last 30 years. So I want you to hold that image in your mind for just a moment while we show you the computers we have here on stage. Let me turn this over to Jay, our worldwide DevOps manager, and he's going to show us his hardware. What do you got, Jay?

Thank you, Burr, and good morning, everyone. Welcome to Red Hat Summit. We have so many cool things to show you this week; I am so happy to be here. You know, my favorite thing about Red Hat Summit is that we're allowed to share all of our stories, much like Burr just did. We also love to talk about the hardware and the technology that we brought with us; in fact, it's become a bit of a competition. So this year we said, let's win this thing — and I actually think we might have won: we brought a cloud with us. Right now this is a private cloud, and throughout the course of the week we're going to turn it into a very, very interesting open hybrid cloud, right before your eyes. Everything you see here will be real and happening right on this thing behind me. Thanks to our four incredible partners — IBM, Dell, HP, and Supermicro — we've built a very vendor-heterogeneous cloud here. Extra special thanks to IBM, because they loaned us a POWER9 machine, so we actually have multiple architectures in this cloud. As you know, one of the greatest benefits of running Red Hat technology is that we run on just about everything, and I can't stress enough how powerful and how cost-effective that is — it just makes my life easier, to be honest. If you're interested, the people who built this actual rack are going to be hanging out in the customer success zone this whole week — it's on the second floor of the lobby — and they'd be glad to show you exactly how they built this thing.

So let me show you what we actually have in this rack. Contained in this rack we have 1,056 physical cores, we have five and a half terabytes of RAM, and, just in case, we threw 50 terabytes of storage in this thing. So, Burr, that's about two million times more powerful than that first machine you booted up. Thanks to APC, we're actually capable of putting all the power and cooling right in this rack, so there's your data center right there. You know, it occurred to me last night that I could actually pull the power cord on this thing and kick it up a notch: we could have the world's first mobile, portable hybrid cloud. So I'm gonna go ahead and unplug — no, no, no, seriously, I'm not going to unplug the thing, we've got it working now. Burr gets a little nervous. But next year we're rolling this thing around.

Okay, so to recap: multiple vendors, check; multiple architectures, check; multiple public clouds plugging right into this thing, check; and everything everywhere is running the same software from Red Hat, so that is a giant check. So, Burr, Angus, why don't we get the demos rolling?

Awesome. So we have some amazing hardware, amazing computers, on this stage, but now we need to light it up, and we have Angus Thomas, who represents our OpenStack engineering team, and he's going to show us what we can do with this awesome hardware.
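Jay's back-of-the-envelope claim — that a 1,056-core rack is about two million times more powerful than a dual-floppy IBM PC — can be sanity-checked with rough arithmetic. The throughput figures below (per-core operations per second for the rack, MIPS-class throughput for an early-80s PC) are illustrative assumptions for the sake of the estimate, not measured numbers:

```python
def estimated_speedup(cores: int, ops_per_core: float, pc_ops: float) -> float:
    """Aggregate rack throughput divided by the throughput of one early-80s PC."""
    return (cores * ops_per_core) / pc_ops

# Assumptions: ~2e9 effective ops/sec per rack core, ~1e6 ops/sec for the IBM PC.
speedup = estimated_speedup(cores=1056, ops_per_core=2e9, pc_ops=1e6)
print(f"~{speedup:,.0f}x")  # roughly 2,112,000 — on the order of Jay's "two million times"
```

Under these assumed figures the ratio lands right around two million, which is presumably where the on-stage number came from.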
Angus? Thank you, Burr. That is an impressive rack of hardware that Jay has brought up on stage, and what I want to talk about today is putting it to work with OpenStack Platform director: we're going to turn it from a lot of potential into a flexible, scalable private cloud. We've been using director for a while now to take care of managing hardware and orchestrating the deployment of OpenStack; what's new is that we're bringing the same capabilities to managing the on-premise deployment of OpenShift. Deploying OpenShift in this way is the best of both worlds: it's bare-metal performance, but with an underlying infrastructure-as-a-service that can take care of deploying new instances, scaling out, and a lot of the things that we expect from a cloud provider.

Director is running on a virtual machine on Red Hat Virtualization at the top of the rack, and it's going to bring everything else under control. What you can see on the screen right now is the director UI, and as you see, some of the hardware in the rack is already being managed. At the top level we have information about the number of cores, the amount of RAM, and the disks that each machine has. If we dig in a bit, there's information about MAC addresses and IPs, the management interface, the BIOS, the kernel version; dig a little deeper and there is information about the hard disks. All of this is important because we want to be able to make sure that we put workloads exactly where we want them. Jay, could you please power on the two new machines at the top of the rack? Sure. All right, thank you. When those two machines come up on the network, director is going to see them, see that they're new and not already under management, and it's immediately going to run the hardware inspection that populates this database and gets them ready for use.

We also have profiles, as you can see here. Profiles are the way that we match the hardware in a machine to the kind of workload that it's suited to. This is how we make sure that machines that have all the disks run Ceph and machines that have all the RAM run our application workloads, for example. There are two ways these can be set: when you're dealing with a rack like this, you could go in and individually tag each machine, but director scales up to data centers, so we have a rules-matching engine which will automatically take the hardware profile of a new machine and make sure it gets tagged in exactly the right way. So we can automatically discover new machines on the network, and we can automatically match them to a profile; that's how we streamline and scale up operations.

Now I want to talk about deploying the software. We have a set of validations. We've learned over time about the misconfigurations in the underlying infrastructure which can cause the deployment of a multi-node distributed application like OpenStack or OpenShift to fail: if you have the wrong VLAN tags on a switch port, or DHCP isn't running where it should be, for example, you can get into a situation which is really hard to debug. A lot of our validations actually run before the deployment: they look at what you're intending to deploy, they check that the environment is the way it should be, and they preempt problems — and obviously preemption is a lot better than debugging.

Something new that you probably have not seen before is director managing multiple deployments of different things side by side. Before we came out on stage we also deployed OpenStack on this rack, so, just to keep me honest, let me jump over to OpenStack very quickly. A lot of our OpenStack customers will be familiar with this UI, and the bare-metal deployment of OpenStack on our rack is actually running a set of virtual machines running Gluster; you're going to see that put to work later on during the summit. Jay's gone to an awful lot of effort to get this hardware up on the stage, so we're going to use it in as many different ways as we can.

Okay, let's deploy OpenShift. If I switch over to the
deployment plan view, there are a few steps. The first thing you need to do is make sure we have the hardware. I already talked about how director manages hardware: it's smart enough to make sure that it's not going to attempt to deploy onto machines that are already in use, and it's only going to deploy onto machines that have the right profile — but I think with the rack that we have here, we've got enough.

The next thing is the deployment configuration. This is where you get to customize exactly what's going to be deployed, to make sure that it really matches your environment: if there are external IPs for additional services, you can set them here — whatever it takes to make sure that the deployment is going to work for you. As you can see on the screen, we have a set of options around enabling TLS for encrypting network traffic; if I dig a little deeper, there are options around enabling IPv6 and network isolation, so that different classes of traffic are carried over different physical NICs.

Then we have roles. Roles are essentially about the software that's going to be put on each machine. Director comes with a set of roles for a lot of the software that Red Hat supports, and you can just use those, or you can modify them a little bit if you need to add a monitoring agent or whatever it might be, or you can create your own custom roles. Director has quite a rich syntax for custom role definitions and custom network topologies, whatever it is you need in order to make it work in your environment. The roles that we have right now are going to give us a working instance of OpenShift.

If I go ahead and click through, the validations are all looking green, so right now I can click the button to start the deploy, and you will see things lighting up on the rack. Director is going to use IPMI to reboot the machines, provision them with an RHEL image, put the containers on them, and start up the application stack.

Okay, one last thing: once the deployment is done, you're going to want to keep director around. Director has a lot of capabilities around what we call day-two operational management: bringing in new hardware, scaling out deployments, dealing with updates, and, critically, doing upgrades as well. Having said all of that, it is time for me to switch over to an instance of OpenShift deployed by director, running on bare metal on our rack, and to hand this over to our developer team so they can show what they can do with it. Thank you.

That is so awesome, Angus. So what you've seen now is going from bare metal to the ultimate private cloud, with OpenStack director making OpenShift ready for our developers to build their next-generation applications. Thank you so much, guys; that was totally awesome. I love what you guys showed there. Now I have the honor of introducing a very special guest, one of our earliest OpenShift customers, who understands the necessity of the private cloud inside their organization and, more importantly, is fundamentally redefining their industry. Please extend a warm welcome to Dietmar Fauser from Amadeus.

Well, good morning, everyone. A big thank you for having Amadeus here, and myself. So, as was just said, I'm from Amadeus. First of all, we are a large IT provider in the travel industry, serving essentially airlines, hotel chains, and distributors like Expedia and others. We indeed started very early with OpenShift, a bit more than three years ago, and we jumped on it when Red Hat teamed with Google to bring Kubernetes into it. So let me quickly share a few figures about Amadeus, to give you a sense of what we are doing and the scale of our operations. Some of our key KPIs: one of our key metrics is what we call passengers boarded, that's the number of customers that physically board a plane over the year through our systems — roughly 1.6 billion people checking in and taking aircraft on the Amadeus systems — plus close to 600 million travel agency bookings. Virtually all
airlines are on the system, and one figure I want to stress a little bit is this one: one trillion availability requests per day. When I read this figure my mind boggles a little bit — it means, in continuous throughput, more than 10 million hits per second. Of course these are not traditional database transactions; it's highly cached, in memory, and these applications are running over more than 100,000 cores. So it's really big stuff.

Today I want to give some concrete feedback on what we are doing, so I have chosen two applications, products of Amadeus, that are currently running in production in different hosting environments — as the theme of this talk is hybrid cloud — and I want to give some concrete feedback on how we architect the applications. Of course it stays relatively high-level. Here I have taken one of our applications that is used in the hospitality environment. We built this for a very large US hotel chain, and it's currently in full swing being brought into production: something like 30 percent of the globe, or 5,000-plus hotels, are on this platform now. Here you can see that we use OpenShift as the PaaS — that's the most central piece of our hybrid cloud strategy. On the database side we use Oracle and Couchbase; Couchbase is used for the heavy-duty, fast-access key-value store, but also to replicate data across two data centers. In this case it's running over two US-based data centers, in an east coast and west coast topology, run by Amadeus, fitted with VMware for the virtualization, OpenStack on top of it, and then OpenShift to host and welcome the applications.

On the right-hand side you see the kind of tools, if you want to call them tools, that we use. These are the principal ones — of course the real picture is much more complex — but in essence we use Terraform to map to the APIs of the underlying infrastructure, because obviously there are differences when you run on OpenStack, Google Compute Engine, AWS, or Azure, so some tweaking is needed. We use Red Hat Ansible a lot, and we also use Puppet, so you can see these are really the big pieces of this installation.

If we look at the topology, again at a very high level: these two locations basically map to the data centers of our customers, so they are in close proximity, because the response time and the SLAs of this application are very tight. So that's an example of an application that is architected mostly with high availability in mind — not necessarily full global worldwide scaling, though of course it could be scaled. Here the idea is that we can swing from one data center to the other in a matter of minutes; both take traffic, data is fully synchronized across those data centers, and the switch back and forth is very fast.

The second example I have taken is what we call the shopping box. This is when people go to Kayak or Expedia and are getting inspired about where they want to travel to; this is really the piece that shoots most of the transactions into Amadeus. So here we architect more for high scalability — of course availability is also key, but here scaling and geographical spread are very important. In short, it runs partially on-premise, in our Amadeus data center, again on OpenStack, and we deploy it mostly, in a first step, on Google Compute Engine and, currently, as we speak, on Amazon AWS, and we're also working together with Red Hat to qualify the whole show on Microsoft Azure. In this application it's the same building blocks. There is a large streaming aspect to it, so we bring Kafka into this, working with Red Hat and another partner to bring Kafka onto OpenShift, because in the end we want to use OpenShift to administer the whole show, and over time the databases as well. The physical deployment topology here, while it's very classical: we
use the regions and availability-zone concept. This application is spread over three principal continental regions — again, this is a high-level view — with different availability zones, and in each of those availability zones we take a hit of several tens of thousands of transactions.

So that was it, really, in very short, just to give you a glimpse of how we implement hybrid clouds. I think that's the way forward. It gives us a lot of freedom, and it allows us to discuss in a much more educated way with our customers, who sometimes already have deals in place with one cloud provider or another, so for us there's a lot of value in leaving them the choice. That was a very quick overview of what we are doing; we built it together with Red Hat, based essentially on OpenShift, with more and more OpenStack coming into the picture. I hope you found this interesting. Thanks a lot, and have a nice summit. [Applause]

Thank you so much, Dietmar. A great, great solution — we've worked with Dietmar and his team for a long time. So I want to take us back a little bit, I want to circle back: I ended earlier talking a little bit about the public cloud, so let's circle back there. Even though some applications need to run in various footprints on-premise, there are still great gains to be had from running certain applications in the public cloud. The public cloud will be as impactful to the industry as the UNIX era of computing was, but by itself it'll have some of the same limitations and challenges that that model had. Today there's tremendous cloud innovation happening in the public cloud, driven by a handful of massive companies, and, much like the innovation that Sun, DEC, HP, and others drove in the UNIX era of computing, many customers want to take advantage of the best innovation no matter where it comes from. But as they eventually saw in the UNIX era, they can't afford the best innovation at the cost of a siloed operating environment. With the open community, we are building a hybrid application platform that can give you access to the best innovation no matter which vendor or which cloud it comes from — letting public cloud providers innovate in services beyond what customers, or any one provider, can do on their own, such as large-scale machine learning or artificial intelligence built on data that's probably unique to that one cloud, but consumed in a common way by the end customer across all applications, in any environment, on any footprint in their overall IT infrastructure. This is exactly what RHEL brought to our customers in the UNIX era of computing: that consistency across any of those footprints. Obviously enterprises will have applications for all different uses — some will live on-premise, some in the cloud — and hybrid cloud is the only practical way forward. I think you've been hearing that from us for a long time: it is the only practical way forward, and it'll be as impactful as anything we've ever seen before. I want to bring Burr and his team back to see a hybrid cloud deployment in action. Burr? [Music]

All right. Earlier you saw what we did with taking bare metal, lighting it up with OpenStack director, and making it OpenShift-ready for developers to build their next-generation applications. Now we want to show you where we run those next-generation applications. What we've done is taken OpenShift and spread it out, installing it across Azure and Amazon — a true hybrid cloud. With me on stage today is Ted, who's going to walk us through an application, and Brent Midwood, our DevOps engineer, who's going to be monitoring on the back side to make sure we do a good job. So at this point, Ted, what have you got for us?

Thank you, Burr, and good morning, everybody. This morning we are running, on the stage in our private cloud, an application that's providing fraud detection
services for financial transactions. Our customer base is rather large, and we occasionally take extended bursts of heavy traffic, so in order to keep our latency down and keep our customers happy, we've deployed extra service capacity in the public cloud: we have capacity with Microsoft Azure in Texas and with Amazon Web Services in Ohio. We use OpenShift Container Platform in all three locations, because OpenShift makes it easy for us to deploy our containerized services wherever we want to put them. But the question still remains: how do we establish seamless communication across our entire enterprise, and, more importantly, how do we balance the workload across these three locations in such a way that we efficiently use our resources and give our customers the best possible experience?

This is where Red Hat AMQ Interconnect comes in. As you can see, we've deployed AMQ Interconnect alongside our fraud detection applications in all three locations, and if I switch to the AMQ console, we'll see the topology of the network that we've created here. The router on stage has made connections outbound to the public routers in AWS and Azure; these connections are secured using mutual TLS authentication and encryption, and once these connections are established, AMQ automatically figures out the best way to route traffic to where it needs to go. So what we have right now is a distributed, reliable, brokerless message bus that spans our entire enterprise. Now, if you want to learn more about this, make sure you catch the AMQ breakout tomorrow at 11:45 with Jack Britton and David Ingham.

Let's have a look at the message flow. We'll dive in and isolate the fraud detection API that we're interested in, and what we see is that all the traffic is being handled in the private cloud. That's what we expect, because our latencies are low and acceptable. But now, if we take a little bit of a burst of increased traffic, we're going to see that AMQ pushes a little bit of traffic out to the public cloud, with Azure picking up some of the load to keep the latencies down. When that subsides, Azure finishes up what it's doing and goes back offline. Now, if we take a much bigger load increase, you'll see two things: first of all, Azure is going to take a bigger proportion than it did before, and Amazon Web Services is going to get thrown into the fray as well.

Now, AWS is actually doing less work than I expected it to do — I expected a bit bigger a slice there — but this is an interesting illustration of what's going on with load balancing. AMQ load balancing sends requests to the services that have the lowest backlog, in order to keep the latencies as steady as possible, so AWS is probably running slowly for some reason, and that's causing AMQ to push less traffic its way. The other thing you're going to notice, if you look carefully, is that this graph fluctuates slightly, and those fluctuations are caused by all the variances in the network: we have the cloud on stage and we have clouds in various places across the country, there's a lot of equipment and lots of layers of virtualization and networking in between, and we're reacting in real time to the reality on the digital street. So, Burr, what's the story with AWS? I noticed there's a problem right here, right now; we seem to have a little bit of a performance issue.

Guys, I noticed that as well, and a little bit ago I actually got an alert from Red Hat Insights letting us know that there might be some potential optimizations we could make to our environment. So let's take a look at Insights. Here's the Red Hat Insights interface. You can see our three OpenShift deployments: we have the setup here on stage in San Francisco, we have our Azure deployment in Texas, and we also have our AWS deployment in Ohio — and Insights is highlighting that that deployment in Ohio may have some issues that need some attention.
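The load-balancing behavior Ted describes — each request goes to the destination with the smallest backlog of outstanding deliveries, so a slow site naturally receives less traffic — can be sketched in a few lines. This is an illustrative model only, not AMQ Interconnect's actual implementation; the `Router` class and the site names are invented for the example:

```python
class Router:
    """Toy model of lowest-backlog request routing across sites."""

    def __init__(self, destinations):
        # destination name -> count of in-flight (unsettled) requests
        self.backlog = {name: 0 for name in destinations}

    def route(self, request):
        # pick the destination with the lowest current backlog
        target = min(self.backlog, key=self.backlog.get)
        self.backlog[target] += 1
        return target

    def settle(self, destination):
        # a response came back, so that destination's backlog shrinks
        self.backlog[destination] -= 1

router = Router(["on-stage", "azure-texas", "aws-ohio"])
router.route("txn-1")  # light load stays wherever the backlog is lowest
for _ in range(5):
    router.route("txn")  # a burst spreads across the remaining sites
```

A site that is slow to settle its deliveries keeps a high backlog and so is sent less traffic — which matches what the demo showed when the AWS deployment lagged.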
So, Red Hat Insights collects anonymized data from managed systems across our customer environment, and that gives us visibility into things like vulnerabilities, compliance, configuration assessment, and of course Red Hat subscription consumption. All of this is presented in a SaaS offering, so it's really easy to use, it requires minimal infrastructure up front, and it provides an immediate return on investment. What Insights is showing us here is that we have some potential issues on the configuration side that may need some attention. From this view I actually get a look at all the systems in our inventory, including instances and containers, and you can see here on the left that Insights is highlighting one of those instances as needing some potential attention — it might be a candidate for optimization, and it might be related to the issues that you were seeing just a minute ago.

Insights uses machine learning and AI techniques to analyze all collected data: we combine the collected data from not only this system's configuration but also systems from across the Red Hat customer base. This allows us to compare ourselves to how we're doing across an entire set of industries, including our own vertical — in this case, the financial services industry — and we can compare ourselves to other customers. We also get access to tailored recommendations that let us know what we can do to optimize our systems. In this particular case, we're actually detecting an issue where we are an outlier: our configuration has been compared to other configurations across the customer base, and in this particular instance this security group is misconfigured, so Insights actually gives us the steps that we need to remediate the situation. The really neat thing here is that we actually get access to a custom Ansible playbook, so if we want to automate that type of remediation, we can use this inside of Red Hat Ansible Tower, Red Hat Satellite, or Red Hat CloudForms. It's really, really powerful.

The other thing here is that we can actually apply these recommendations right from within the Red Hat Insights interface. With just a few clicks I can select all the recommendations that Insights is making and, using that built-in Ansible automation, apply those recommendations really quickly across a variety of systems. This type of intelligent automation is really cool, really fast, and powerful. So, really quickly here, we're going to see the impact of those changes: we can tell that we're doing a little better than we were a few minutes ago, when compared across the customer base as well as within the financial industry, and if we go back and look at the map, we should see that our AWS deployment in Ohio is in a much better state than it was just a few minutes ago. So I'm wondering, Ted, if this has had any effect and might be helping with some of the issues that you were seeing. Let's take a look — looks like it went green now. Let's see what it looks like over here... it doesn't look like the configuration is taking effect quite yet; maybe there's some delay. ...Awesome, fantastic. Yeah, so now we're load balancing across the three clouds. Fantastic.

Well, Ted, I truly love how we can route requests and dynamically load-balance transactions across these three clouds — a truly hybrid, cloud-native application, which you guys saw here on stage for the first time. And it's a fully portable application: if you build your applications with OpenShift, you can move them from cloud to cloud to cloud, from on-stage private all the way out to public. It's totally awesome. We also have the application being fully managed by Red Hat Insights; I love having that intelligence watching over us and ensuring that we're doing everything correctly. That is fundamentally awesome. Thank you so much for that. Well, we actually have more to show you, but you're going to have to wait a few minutes longer. Right now we'd like to welcome Paul back to the stage, and we have a very
special early Red Hat customer, an Innovation Award winner from 2010, who's been going boldly forward with their open hybrid cloud strategy. Please give a warm welcome to Monty Finkelstein from Citigroup. [Music]

Hi, Monty. Hey, Paul, nice to see you. Thank you very much for coming. Thank you for having me. Oh, our pleasure. We wanted to pick your brain a little bit about your experiences leading the charge in computing here. We're all talking about hybrid cloud — how has the hybrid cloud strategy influenced where you are today in your computing environment? So, when we see the various types of workload that we have on our on-prem cloud — we see the peaks, we see the valleys, we see the demand on the environment that we have — we really determined that we have to have a much more elastic, more scalable capability, so we can burst and stretch our environments to multiple cloud providers. These capabilities have now been proven at Citi, and of course we consider what the data risk is, as well as any regulatory requirements.

So how do you tackle the complexity of multiple cloud environments? Every cloud provider has its own unique set of capabilities: they have their own APIs, distributions, value-added services. We wanted to make sure that we could arbitrate between the different cloud providers and maintain all source code and orchestration capabilities on-prem, to drive those capabilities from within our platforms. This requires controlling the entitlements in a cohesive fashion across both on-prem and off-prem — for security, services, automation, and telemetry — as one seamless unit.

Can you talk a bit about how you decide when to use your own on-premise infrastructure versus cloud resources? Sure. There are multiple dimensions that we take into account. The first dimension is risk, from low risk to high risk, and that's really about the data classification of the environment we're talking about: whether it's public or internal, which would be considered low, through confidential, PII, restricted, sensitive, and so on and above, which would be considered high risk. The second dimension focuses on demand volatility and response sensitivity: this ranges from low response sensitivity and low variability of the workload to high response sensitivity and high variability. The first combination that we focused on is low risk with high variability and high response sensitivity, and of course for any of the workloads we ensure that we're regulatory-compliant, as well as achieving customer benefits within this environment.

So how can we give developers greater control of their infrastructure environments and still help operations maintain consistency and compliance? The main drivers for using the public cloud are scale, speed, and increased developer efficiency, as well as reduced cost and risk. This means providing developer workspaces and multiple environments for our developers to quickly create products for our customers — all of this done in a DevOps model, while maintaining the source and artifact registries on-prem. This allows our developers to test and select various middleware and other products, but also ensures all the compliance activities happen in a centrally controlled repository.

We really appreciate you coming by and sharing that with us today, Monty. Thank you so much for coming to Red Hat Summit. Thanks a lot. Thanks again, Monty. You know, these real-world insights into how our products and technologies are really running businesses today — that's just the most exciting part, so thanks again.

Now, even with as much progress as you've seen demonstrated here — and you're going to continue to see all week long — we're far from done, so I want to take us a little bit into
the path forward and where we go today. We've talked about this a lot: innovation today is driven by open source development. I don't think there's any question about that, certainly not in this room, and even across the industry as a whole. That's a long way that we've come from when we started our first Summit 14 years ago. With over a million open source projects out there, this innovation aggregates into various community platforms, and it finally culminates in commercial, open-source-developed products. These products run many of the mission-critical applications in business today. You've heard just a couple of those today here on stage, but it's everywhere; it's running the world today.

But to make customers successful with that innovation, to run their real-world business applications, these open source products have to be able to leverage increasingly complex infrastructure footprints. We must also ensure a common base for the developer and ultimately the application, no matter which footprint they choose. As you heard earlier, the developers want choice here: no matter which footprint they are ultimately going to run those applications on, they want that flexibility, from the data center to possibly any public cloud out there, regardless of whether that application was built yesterday or has been running the business for the last 10 years and was built on 10-year-old technology. This is the flexibility that developers require today.

But with different infrastructure we may require different pieces of the technical stack in that deployment. One example of this that affects many things is KVM, which provides the foundation for many of those use cases that require virtualization. KVM offers a level of consistency from a technical perspective, but RHEL extends that consistency to add a level of commercial and ecosystem consistency for the application across all those footprints. This is very important in the enterprise. But while RHEL and KVM form the foundation, other technologies are needed to really satisfy the functions on these different footprints. Traditional virtualization has requirements that are satisfied by projects like oVirt and products like RHV. Traditional private cloud implementations have requirements that are satisfied by projects like OpenStack and products like Red Hat OpenStack Platform. And as applications begin to become more container-based, we are seeing many requirements driven natively into containers. The same Linux, in different forms, provides this common base across these four footprints. This level of compatibility is critical to operators, who must better utilize, secure and deploy the infrastructure that they have and that they're responsible for.

Developers, on the other hand, care most about having a platform that creates that consistency for their applications. They care about their services and the services that they need to consume within those applications, and they don't want limitations on where they run. They want services, but they want them anywhere, not necessarily just from Amazon. They want integration between applications no matter where they run. They still want to run their Java EE, now named Jakarta EE, apps and bring those applications forward into containers and microservices. They need to be able to orchestrate these frameworks and many more across all these different footprints in a consistent, secure fashion.

This creates natural tension between development and operations. Frankly, customers amplify this tension with organizational boundaries that are a holdover from the UNIX era of computing. It's really the job of our platforms to seamlessly remove these boundaries, and it's the goal of Red Hat to seamlessly get you from the old world to the new world. We're gonna show you a really cool demonstration now. We're gonna show you how you can automate this transition. First we're
gonna take a Windows virtual machine from a traditional VMware deployment and convert it into a KVM-based virtual machine running in a container, all under the Kubernetes umbrella. This makes virtual machines more accessible to the developer, and this will accelerate the transformation of those virtual machines into cloud-native, container-based form. We will work this capability into the product line over the coming releases, so we can strike the balance of enabling our developers to move in this direction while enabling mission-critical operations to still do their job. So let's bring Burr and his team back up to show you this in action one more time. Thanks.

All right. At Red Hat we recognized that large organizations, large enterprises, have a substantial investment in legacy virtualization technology, and this is holding you back: you have thousands of virtual machines that need to be modernized. So what you're about to see next is something very special. With me here on stage we have James Lebowski; he represents our operations folks, and he's gonna be walking us through a mass migration. Also here is Itamar Hine, who's our lead developer of a very special application, and he's gonna be modernizing, containerizing and optimizing our application. All right, so let's get started. James? Thanks, Burr.

Yeah, so as you can see, I have a typical VMware environment here. I'm in the vSphere client, and I've got a number of virtual machines, a handful of them that make up one of my applications, for my development environment in this case. What I want to do is migrate those over to a KVM-based Red Hat Virtualization environment. So what I'm gonna do is go to CloudForms, our cloud management platform; that's our first step. And you know, CloudForms has actually already discovered both my RHV environment and my vSphere environment, and understands the compute, network and storage there. So you'll notice one of the capabilities we built is this new capability called Migrations. Underneath here there are two steps, and the first thing I need to do is create my infrastructure mappings. What this will allow me to do is map my compute, networking and storage between vSphere and RHV, so CloudForms understands how those relate. Let's go ahead and create an infrastructure mapping. I'll call that "summit infrastructure mapping," and then I'm gonna begin to map my two environments: first the compute, so the clusters here; next the datastores, so those virtual machines happen to live on datastore 2 in vSphere, and I'll target them at a datastore inside of my RHV environment; and finally my networks, which live on network 100, so I'll map those from vSphere to RHV.

Once my infrastructure is mapped, the next step is to actually create a plan to migrate those virtual machines. I'll continue to the plan wizard here, I'll select the infrastructure mapping I just created, and I'll select migrating my development environment from those virtual machines to RHV. Then I need to import a CSV file. The CSV file is going to contain a list of all the virtual machines that I want to migrate, and that's it. Once I hit create, CloudForms is going to begin, in an automated fashion, shutting down those virtual machines and converting them, taking care of all the minutiae that you'd otherwise have to do manually. It's gonna do that all automatically for me, so I don't have to worry about all those manual interactions, and no longer do I have to go manually shut them down; it's going to take care of all that for me. You can see the migration's kicked off here; my VMs are migrating, and if I go back to the screen here you can see that we're gonna start seeing those shut down. Okay, awesome. But as people want to know more information about this, how would they dive deeper into this technology later
this week? Yeah, it's a great question. So we have a workload portability session in the hybrid cloud track on Wednesday, if you want to see a presentation that deep-dives into this topic and some of the methodologies to migrate. And then on Thursday we actually have a hands-on lab, the IT optimization VM migration lab, that you can check out. And as you can see, those are shutting down here. Yeah, we see them powering off right now. That's fantastic. Absolutely.

So if I go back now (that's gonna take a while; you've got to convert all the disks and move them over), what we'll notice is that previously I had already run one migration of a single application, a Windows virtual machine. If I browse over to Red Hat Virtualization, I can see on the dashboard here, browsing to virtual machines, that I have migrated that Windows virtual machine. If I open up a tab, I can now browse to my Windows virtual machine, which is running our Wingtip Toys store application, our sample application here. And now my VM has been moved over from VMware to RHV and is available for Itamar. All right, great, available to our developers. All right, Itamar, what are you gonna do for us here?

Well James, it's great that you can save cost by moving from VMware to Red Hat Virtualization, but I want to containerize our application, and with container-native virtualization I can run my virtual machine on OpenShift like any other container, using KubeVirt, a Kubernetes operator, to run and manage virtual machines. Let's look at the OpenShift Service Catalog. You can see we have a new virtualization section here. We can import KVM or VMware virtual machines, or if they're already loaded we can create new instances of them for the developer to work with. We just need to give a name, CPU and memory (we can set other virtualization parameters) and create our virtual machine. Now let's see what this looks like in the OpenShift console.

The cool thing about KVM is that virtual machines are just Linux processes, so they can act and behave like other OpenShift applications. We've built in more than a decade of virtualization experience with KVM, Red Hat Virtualization and OpenStack, and can now benefit from Kubernetes and OpenShift to manage and orchestrate our virtual machines. Since we know this container is actually a virtual machine, we can do virtual machine stuff with it, like shutdown, reboot, or opening a remote desktop session to it. But we can also see this is just a container like any other container in OpenShift, and even though the web application is running inside a Windows virtual machine, the developer can still use OpenShift mechanisms like services and routes. Let's browse our web application using the OpenShift service. It's the same Wingtip Toys application, but this time the virtual machine is running on OpenShift.

But we're not done; we want to containerize our application. Since it's a Windows virtual machine, we can open a remote desktop session to it. We see we have here Visual Studio and an ASP.NET application. Let's start containerizing by moving the Microsoft SQL Server database from running inside the Windows virtual machine to running on Red Hat Enterprise Linux as an OpenShift container. We'll go back to the OpenShift Service Catalog; this time we'll go to the database section, and just as easily we'll create a SQL Server container. We just need to accept the EULA, provide a password and choose the edition we want, and create a database. And again, we can see the SQL Server is just another container running on OpenShift.

Now let's find the connection details for our database. To keep this simple, we'll take the IP address of our database service, go back to the web application in Visual Studio, update the IP address in the connection string, publish our application, and go back to browse it through OpenShift. Fortunately for us, the user experience team heard we're modernizing our application, so they pitched in and pushed new icons to use with our containerized database, to also modernize the look and feel. It's still the same Wingtip Toys application, it's running in a virtual machine on OpenShift, but it's now using a containerized database.

To recap: we saw that we can run virtual machines natively on OpenShift like any other container-based application, and modernize and mesh them together. We containerized the database, but we can use the same approach to containerize any part of our application.

So some items here deserve repeating. One thing you saw is Red Hat Enterprise Linux running SQL Server in a container on OpenShift, and you also saw a Windows VM where the native .NET application is also running inside of OpenShift. So tell us what's special about that; that seems pretty crazy, what you did there. Exactly, Burr. If we take a look under the hood, we can use the Kubernetes commands to see the list of our containers, in this case the SQL Server and the virtual machine containers. But since KubeVirt is a Kubernetes operator, we can actually use Kubernetes commands like kubectl to list our virtual machines and manage them like any other entity in Kubernetes. I love that. So there's your kubectl get VM output; we can see the kind says VirtualMachine. That is totally awesome.

Now, people here are gonna be very excited about what they just saw and will want more information. When will this be coming, and what can they do to dive in? This will be available as part of Red Hat Cloud Suite in tech preview later this year, but we are looking for early adopters now, so give us a call. Also come check out our deep-dive session introducing container-native virtualization, Thursday 2:00 p.m. Awesome. That is so incredible. So we went from the old to the new, from the closed to the open, the Red Hat way. You're gonna be seeing more from our demonstration team; that's coming Thursday at 8 a.m.
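The "VMs as Kubernetes objects" idea Itamar demonstrated can be pictured with a minimal manifest. This is an illustrative sketch only, not the exact resource from the demo: the API version, VM name, resource sizes, and disk image are all assumptions, and the fields follow the general shape of a KubeVirt VirtualMachine object.

```yaml
# Hypothetical minimal KubeVirt VirtualMachine (illustrative; names and
# API version are assumptions, not taken from the demo).
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: wingtip-toys-vm
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: registry.example.com/wingtip/windows-vm:latest
```

With a resource like this applied, both views Itamar showed make sense: `kubectl get vms` lists the VM as a first-class Kubernetes entity, while the operator runs the actual virtual machine inside an ordinary pod, so it also shows up alongside the SQL Server container in the pod list.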
Do not be late. If you liked what you saw today, you're gonna see a lot more of that going forward, so we've got some really special things in store for you. So at this point, thank you so much, James and Itamar. Thank you so much; you guys are awesome.

Now we have one more special guest, a very early adopter of Red Hat Enterprise Linux. We've had over a 12-year partnership and relationship with this organization; they've been a steadfast Linux and middleware customer for many, many years now. Please extend a warm welcome to Raj China from the Royal Bank of Canada. Thank you. Thank you. It's great to be here.

RBC is a large, global, full-service bank. We're the largest bank in Canada, top 10 globally, operate in 30 countries, and run five key business segments: personal and commercial banking, investor and treasury services, capital markets, wealth management, and insurance. But honestly, unless you're in the banking segment, those five business segments that I just mentioned may not mean a lot to you. What you might appreciate is the fact that we've been around in business for over 150 years.

We started our digital transformation journey about four years ago, and we are focused on new and innovative technologies that will help deliver the capabilities and lifestyle our clients are looking for. We have a very simple vision, and we often refer to it as the digitally enabled bank of the future. But as you can appreciate, transforming a 150-year-old bank is not easy; it certainly does not happen overnight. To that end we had a clear, unwavering vision, a very strong innovation agenda, and most importantly a focus on flawless execution.

Today in banking, business strategy and IT strategy are one and the same; they are not two separate things. We believe that in order to be the number one bank, we have to have the number one tech. There is no question that most of today's innovation happens in the open source community. RBC relies on Red Hat as a key partner to help us consume these open source innovations in a manner that meets our enterprise needs. RBC was an early adopter of Linux; we operate one of the largest footprints of RHEL in Canada, same with JBoss. We had tremendous success in driving cost out of infrastructure by partnering with Red Hat, while at the same time delivering a world-class hosting service to our business. Over our 12-year partnership, Red Hat has proven that they have mastered the art of working closely with the upstream open source community, understanding the needs of an enterprise like us, and delivering these open source innovations in a manner that we can consume and build upon.

We are working with Red Hat to help increase our agility and better leverage public and private cloud offerings. We adopted virtualization, Ansible and containers, and are excited about continuing our partnership with Red Hat on this journey. Throughout this journey, we simply cannot replace everything we've had from the past; we have to bring forward these investments of the past and improve upon them with new and emerging technologies. It is about utilizing emerging technologies, but at the same time focusing on the business outcome. The business outcome for us is serving our clients and delivering the information that they are looking for, whenever they need it and in whatever form factor they're looking for.

But technology improvements alone are simply not sufficient for a digital transformation. Creating the right culture of change and adopting new methodologies is key. We introduced agile and DevOps, which has boosted the number of agile projects at RBC and increased the frequency at which we do new releases to our mobile app. As a matter of fact, these methodologies have enabled us to deliver apps over 20x faster than before. The other point around culture that I wanted to mention is that we wanted to build an engineering culture. An engineering culture is one which rewards curiosity, trying new things, investing in new technologies and being a leader, not necessarily a
follower. Red Hat has been a critical partner in our journey to date as we adopt elements of open source culture into our engineering culture. What you've seen today about Red Hat's focus on new technology innovations, while never losing sight of helping you bring forward the investments you've already made in the past, is something that makes Red Hat unique. We are excited to see Red Hat's investment and leadership in open source technologies to help bring the potential of these amazing things together. Thank you.

That's great. You know, going from the old world to the new with automation: the things you've seen demonstrated today are more sophisticated than any one company could ever have done on their own, certainly not by using a proprietary development model. Because of this, it's really easy to see why open source has become the center of gravity for enterprise computing today. With all the progress open source has made, we're constantly looking for new ways of accelerating that into our products so we can take it into the enterprise, with customers like the ones you've met today.

Now, we recently made an addition to the Red Hat family: we brought CoreOS into the Red Hat family, and adding CoreOS has really been our latest move to accelerate that innovation into our products. This will help take the adoption of OpenShift Container Platform even deeper into the enterprise, just exactly as we did with Linux, the core platform, in 2002. Today we're announcing some exciting new technology directions. First, we'll integrate the benefits of automated operations; for example, you'll see dramatic improvements in the automated intelligence about the state of your clusters in OpenShift with the CoreOS additions. Also, as part of OpenShift, we'll include a new variant of RHEL called Red Hat CoreOS, maintaining the consistency of RHEL for the operations side of the house while allowing for consumption of over-the-air updates, from the kernel to Kubernetes. Later today you'll hear how we are extending automated operations beyond customers and even out to partners, all of this starting with the next release of OpenShift in July. Now, all of this of course will continue in an upstream, open source innovation model that includes continuing Container Linux for the community users today, while also evolving the commercial products to bring that innovation out to the enterprise. This combination is really defining the platform of the future.

Everything we've done for the last 16 years, since we first brought RHEL to the commercial market, has been to get us just to this point. Hybrid cloud computing is now being deployed multiple times in enterprises every single day, all powered by the open source model, and powered by the open source model we will continue to redefine the software industry forever. Now, in 2002, with all of you, we made Linux the choice for enterprise computing. This changed the innovation model forever. And I started the session today talking about our prediction of seven years ago on the future being open. We've all seen so much happen in those seven years. We at Red Hat have celebrated our 25th anniversary, including 16 years of RHEL in the enterprise. It's now 2018: open hybrid cloud is not only a reality, it is the driving model in enterprise computing today, and this hybrid cloud world would not even be possible without Linux as a platform and the open source development model built around it. And while we think we may have accomplished a lot in that time, and we may think we have changed the world a lot (and we have), I'm telling you the best is yet to come. Now that Linux and open source software is firmly driving that innovation in the enterprise, what we've accomplished today and up till now has just set the stage for us, together, to change the world once again. And just as we did with RHEL more than 15 years ago, with our partners we will make hybrid cloud the default in the enterprise, and I will take that bet every single day. Have a great show, and have fun watching the future of computing unfold right in front of your eyes. See you later.

[Applause] [Music]
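As a footnote to the CloudForms demo earlier: James's migration plan was driven by a CSV listing the virtual machines to move. The transcript does not show the actual CloudForms import format, so the column names below are invented purely for illustration; this sketch only conveys the general idea of turning such a list into per-VM migration tasks, the minutiae the demo said CloudForms automates.

```python
import csv
import io

# Hypothetical CSV of VMs to migrate. The column names are assumptions
# for illustration, NOT the real CloudForms import format.
PLAN_CSV = """name,source_cluster,target_cluster
dev-web-01,vsphere-cluster,rhv-cluster
dev-db-01,vsphere-cluster,rhv-cluster
"""

def build_migration_plan(csv_text):
    """Parse the CSV and return one ordered task list per virtual machine."""
    reader = csv.DictReader(io.StringIO(csv_text))
    tasks = []
    for row in reader:
        tasks.append({
            "vm": row["name"],
            "steps": [
                f"power off {row['name']} on {row['source_cluster']}",
                f"convert disks of {row['name']}",
                f"import {row['name']} into {row['target_cluster']}",
                f"power on {row['name']}",
            ],
        })
    return tasks

if __name__ == "__main__":
    for task in build_migration_plan(PLAN_CSV):
        print(task["vm"], "->", len(task["steps"]), "steps")
```

The point of the demo is that an operator hands over exactly this kind of declarative list and the platform executes the shutdown/convert/import/power-on sequence for every VM without manual intervention.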

Published Date : May 8 2018
