Ronen Schwartz, NetApp & Kevin McGrath | AWS re:Invent 2022


 

>>Hello, wonderful humans, and welcome back to The Cube's thrilling live coverage of AWS re:Invent here in Las Vegas, Nevada. I'm joined by my fantastic co-host, John Furrier. John, things are really ramping up in here. Day one.

>>Yep, it's packed already. I heard maybe 70,000 attendees this year; I just saw that on Twitter. Again, it continues to show that over the past 10 years we've been here, you're seeing some of the players that were here from the beginning growing up, getting bigger and stronger, becoming platforms, not just point solutions. You're seeing new entrants coming in, new startups, and the innovation you start to see happening is really compelling and fun to watch. And in our next segment we have a ten-time Cube alumnus coming on, plus a first-timer, so it should be great. We'll get into some of the innovation.

>>Not only has this guest been on The Cube 10 times, he also spoke at the first AWS re:Invent, just as you were covering it with The Cube. But without further ado, please welcome Ronen and Kevin from NetApp. Thank you, gentlemen, both for being here and for matching in your dark blue. How's the show going for you? Ronen, I'm going to ask you first: you've been here since the beginning. How does it feel in 2022?

>>First, it's amazing to see so many people, right? So many humans in one place, flesh and blood. And it's also amazing to see such a celebration for people in the cloud. This is our event, the people in the cloud. I'm really, really happy to be here, and to be on The Cube as well.

>>Fantastic. It is a party, a cloud party. How are you feeling being here, Kevin?

>>I'm feeling great. I mean, going all the way back to the early days of Spotinst, the startup that eventually got acquired by NetApp and became Spot: this was our big event. This is what we lived for.
We've grown from one of the smaller booths out here on the floor all the way up to the huge booth that we have today. So we've grown along with the AWS ecosystem, and it's just a lot of fun to get here, see all the customers, and talk to everybody.

>>That's a lot of fun. And that's the theme we've been talking about; we wrote a story about it on SiliconANGLE: that growth from getting in to getting bigger, not just being an ISV or part of the startup showcase or ecosystem. The progression of the investment, and how cloud has changed deliverables. You've been part of that wave. What's the biggest takeaway, and what's the most important thing going on now? Because it's not stopping. You've got new entrants coming in, and the folks rising with the tide are getting platforms built around their products.

>>Yeah, I would say years ago it was "Is cloud in my decision path?" and now it's "Cloud is in my decision path; how much of it, and how am I going to use it?" And I think, especially coming up over the next year, with macroeconomic events and everything going on, it's: how do I make my next dollar in the cloud go further than my last dollar? Because I know I'm going to be there, and I know I'm going to be growing in the cloud, so how do I use it effectively to run my business going forward?

>>All right, take a minute to explain Spot, now part of NetApp. What's the story? Take us through it for the folks who aren't familiar with the journey: where it's come from, and where it is today.

>>Sure. Spot is all about cloud optimization. We help all of our customers deploy, scale, and optimize their applications in the cloud. Everything from VMs to containers to any type of custom application you want to deploy: we analyze those applications, we find the best price point to run them, we right-size them, and we do the automation so your DevOps team doesn't have to.
And we basically make the whole cloud serverless for you at the end of the day. So whatever you're doing in the cloud, we'll manage it for you, from the lowest level of the stack all the way up to the highest-level financials.

>>Is this what you call the evolved cloud state?

>>It is. And Ronen can touch on this a little more too. The evolved cloud is not only the public cloud but also the cloud you're building on-prem. For a lot of big companies, it's not necessarily a hundred percent one way or the other. The evolved cloud is: which cloud am I on? Am I on an on-prem cloud and a public cloud, or am I on multiple public clouds and an on-prem cloud? And I think Ronen probably has an opinion on that too.

>>Yeah, and what we are hearing from our customers is that many of them are in a situation where a lot of their data has been built up over years on premises. They're accelerating their move to the cloud, some of them are accelerating a move into multiple clouds, and you have this situation of an on-prem environment that is becoming cloudier all the time, together with accelerated cloud adoption. This is what customers are calling the evolved cloud, and that's what we're trying to support them through on that journey.

>>How many customers are you supporting in this evolved cloud? You made it seem like you can just turnkey this for everyone, which I am here for.

>>Yeah, just to be clear, we have thousands of customers: everything from small startups, people just getting going with a few VMs, all the way to people scaling to tens of thousands of VMs in the cloud, or even beyond VM services, with tens of millions of dollars of spend a month. People are putting a lot of investment into the cloud, and we have all walks of life in our customer portfolio.

>>You know, multi-cloud has been a big topic in the industry. We call it supercloud.
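Kevin's description of what Spot does (analyze workloads, find the best price point, right-size them) can be caricatured with a toy capacity planner. This is purely an illustrative sketch, not Spot's actual algorithm, and the pool names, prices, and availability figures are made up:

```python
def plan_capacity(pools, target_vcpus):
    """Greedy toy planner: fill capacity from the cheapest price-per-vCPU pool first."""
    plan = []
    remaining = target_vcpus
    for pool in sorted(pools, key=lambda p: p["hourly_price"] / p["vcpus"]):
        if remaining <= 0:
            break
        # how many instances of this pool we would need, capped by availability
        needed = -(-remaining // pool["vcpus"])  # ceiling division
        count = min(needed, pool["available"])
        if count > 0:
            plan.append((pool["name"], count))
            remaining -= count * pool["vcpus"]
    return plan, remaining <= 0

# Hypothetical pools: two discounted "spot-style" pools and an on-demand fallback.
pools = [
    {"name": "m5.large-spot",  "vcpus": 2, "hourly_price": 0.035, "available": 10},
    {"name": "c5.xlarge-spot", "vcpus": 4, "hourly_price": 0.065, "available": 5},
    {"name": "m5.large-od",    "vcpus": 2, "hourly_price": 0.096, "available": 100},
]

plan, satisfied = plan_capacity(pools, target_vcpus=16)
print(plan, satisfied)  # [('c5.xlarge-spot', 4)] True
```

A real optimizer also weighs interruption risk and rebalances continuously; this only shows the "cheapest viable price point" part of the idea.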
Because we think supercloud better represents the destination of multi-cloud. I mean, everyone has multiple clouds, but they're best-of-breed defaults; they're not multi-cloud by design in most cases. We're starting to see traction toward that potential common level of services, though. And I still think we're in the performance game now, so I have to ask you guys: performance is coming back into vogue. Speeds and feeds were the talk back in the data-center days; then nobody wanted to talk speeds and feeds, just solutions, and then cloud came in. Now we're at the era of cloud where people are moving their workloads there. There's a lot more automation going on and, as you said, cloud is part of the decision. It is the path. So now they say: I want to run my workloads on the better, faster infrastructure. No developer wants to run their apps on the slower hardware.

>>I think that's a tee-up for you, Ronen. Go.

>>I mean, to put it my way: no developer ever said, "Give me the slower software performance, and I'll pay more for it."

>>Faster and cheaper wins every time.

>>Speeds and feeds are back, right?

>>Right. And performance comes in different parameters: it comes as throughput, and it comes as latency. And I think an even stronger term today is price performance: how much am I paying for the performance that I need? NetApp is actually offering a very, very big advantage for customers, both in high-end performance and in dollars per unit of performance. This is actually one of the key differentiators of FSx for NetApp ONTAP, the AWS storage service based on the NetApp ONTAP storage operating system. It is SAP-certified, for example, where latency is the key item.
It offers the newest and fastest throughput available, and by leveraging advanced features like tiering, it offers a unique competitive advantage in dollars per unit of performance specifically.

>>And why is performance important now, in your opinion? Obviously, beyond the obvious point that no one wants to run their stuff on slower infrastructure, why are some people so into it now?

>>I think performance as a single parameter is definitely a key influencer of the user experience; none of us will compromise our experience. The second part is that performance is critical when scale happens, and especially with the scale of data, performance to handle massive amounts of data is becoming more and more critical. The last thing I'll emphasize, again, is dollars per unit of performance: the more data you have and the more you need to handle, the more critical it is for you to handle it cost-effectively. That's the secret sauce of the success of every workload.

>>There isn't a company or person here who's not thinking about doing more, faster, for cheaper, so you've certainly got your finger on the pulse. With that, I want to talk about a customer case study. A little birdie told me that a major US airline recently had a massive win where, according to my notes, response time and customer experience improved by 17x. That's the type of thing that cuts costs big time. Can one of you tell me a little bit more about that?

>>Yeah, so I think we all flew here somehow, right?

>>Exactly. Airlines matter. Most folks listening... they're...

>>Doing very well right now.

>>Yes, the airlines.

>>And I think we have all also needed to deal with changes to our flights, with a really enormous amount of complexity in managing a business like that. We actually rank and choose which airline to use, among other things, based on the level of service that they give us.
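The "dollars per unit of performance" framing Ronen uses can be made concrete with a small comparison. The figures below are invented for illustration; they are not actual FSx for NetApp ONTAP pricing:

```python
# Hypothetical storage options: monthly cost and sustained throughput.
options = {
    "general-purpose":  {"monthly_cost": 1200.0, "throughput_mbps": 512},
    "high-performance": {"monthly_cost": 2100.0, "throughput_mbps": 2048},
}

def dollars_per_mbps(opt):
    """Price performance: dollars per month per MB/s of throughput."""
    return opt["monthly_cost"] / opt["throughput_mbps"]

for name, opt in options.items():
    print(f"{name}: ${dollars_per_mbps(opt):.2f} per MB/s per month")

best = min(options, key=lambda name: dollars_per_mbps(options[name]))
print("best price performance:", best)  # high-performance, despite the higher sticker price
```

The point of the metric: the more expensive option can still win once you normalize cost by the performance you actually get.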
And especially in a crunch period, a lot of users are looking through a lot of data, trying to optimize.

>>Plus all of the people who just worked this holiday weekend. Sidebar.

>>Exactly right. I can't even. And Thanksgiving is one of those crunch times, right in the middle of this. So a 17x improvement in performance means a lot.

>>Seven zero, or...?

>>One-seven. 17x, right.

>>Well, and especially when we're talking about what looks like 50,000 messages per minute that this customer was processing. That's a lot; that's on the order of a thousand messages a second. Wow. I think my math holds up there.

>>It allows them to operate at the next level of scale and really increase their support for the customer. It also allows them to be more efficient when it comes to cost: now they need less infrastructure to give better service across the board. The nice thing is that it didn't require a lot of work from them. Sometimes when customers are on their journey to the cloud, one of the things that holds them back is either fear or the concern of how much effort it will take to achieve the same performance, or even better performance, in the cloud. They are a live example that not only can you match the performance you have on premises, you can actually exceed it, and really give the customer better service.

>>And reliability is extremely important there. 99.9%? 99%?

>>99.9, yes.

>>Yes, that second nine obviously being very important, especially when we're talking about this order of magnitude of data and actions taking place. How much of a priority are reliability and security for you all as a team?

>>So reliability is a key item for everybody, especially in crunch times. But reliability goes beyond the nines. Reliability also comes down to how simple it is for you to enable backup and DR, and how protected you are against ransomware.
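A quick back-of-the-envelope check on the numbers in this exchange: the 50,000 messages per minute, the 17x speedup, and what "that second nine" is worth. The response times here are normalized units, not the airline's real figures:

```python
# Sanity-check the throughput figure.
msgs_per_minute = 50_000
msgs_per_second = msgs_per_minute / 60          # ~833 messages per second

# A 17x improvement: response time drops to ~5.9% of what it was.
speedup = 17
new_response_fraction = 1 / speedup

# "That second nine": yearly downtime budget at each availability target.
minutes_per_year = 365 * 24 * 60                # 525,600 minutes
downtime_99  = minutes_per_year * (1 - 0.99)    # ~5,256 minutes, about 3.7 days
downtime_999 = minutes_per_year * (1 - 0.999)   # ~526 minutes, under 9 hours

print(round(msgs_per_second), round(new_response_fraction, 3))
print(round(downtime_99), round(downtime_999))
```

So 50,000 messages per minute is roughly 833 per second, and moving from two nines to three cuts the allowable downtime by a factor of ten.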
This is where NetApp, including the FSx for NetApp ONTAP richness of data management, makes a huge difference. If you are able to make your copy undeletable, that is actually a game changer when it comes to data protection. In the past, this required a lot of work: opening vaults and other things. Now it becomes a very simple configuration that is attached to every NetApp ONTAP storage system, no matter where it is.

>>We heard some news at VMware Explore this past early fall; you guys were there. We saw the Broadcom acquisition, and it looks like it's going to get finalized, maybe sooner rather than later, so there's a lot of speculation around VMware. Someone called it "VMware: where is VMware?", as in, where are they now? Nice pun; it was actually Nutanix people, and they go at each other all the time. But Broadcom is going to keep vSphere, and that's the bread and butter, the goose that lays the golden eggs; the customers are there. How do you guys see your piece of that, with the VMware Cloud on AWS integrated solution? You have a big part in that ecosystem; we've covered it for years. I mean, we've been to every VMworld, now called Explore. You have a huge base of VMware customers. What's the outlook?

>>Yeah, and I think the important part is that a big share of enterprise workloads are running on VMware, and they will continue to run on VMware in the future. Most of them will try to run in a hybrid mode, if not move completely to the cloud. The cloud gives them unparalleled scale, it gives them DR and backup opportunities; it brings a lot of goodness. The partnership NetApp has with VMware, as well as with the other cloud vendors, is actually a game changer, because the minute you go to the cloud, things like DR and backup have different economics attached to them.
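The "undeletable copy" Ronen describes can be illustrated with a toy retention-locked snapshot store. This is purely a sketch of the concept; it is not how NetApp's snapshot locking is implemented:

```python
import time

class LockedSnapshotStore:
    """Toy write-once store: deletes are refused until a retention lock expires."""

    def __init__(self):
        self._snapshots = {}  # name -> (data, unlock_timestamp)

    def put(self, name, data, retain_seconds):
        self._snapshots[name] = (data, time.time() + retain_seconds)

    def get(self, name):
        return self._snapshots[name][0]

    def delete(self, name):
        _, unlock_at = self._snapshots[name]
        if time.time() < unlock_at:
            raise PermissionError(f"{name} is retention-locked")
        del self._snapshots[name]

store = LockedSnapshotStore()
store.put("daily-backup", b"precious data", retain_seconds=3600)

try:
    store.delete("daily-backup")  # ransomware (or an admin mistake) tries to delete it
except PermissionError as err:
    print("blocked:", err)
```

The value against ransomware is exactly this: even with credentials that can issue a delete, the copy survives until the lock expires.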
Suddenly you can do compute-less DR, and certainly on backup you can achieve massive savings. NetApp is the only datastore that is certified to run with VMware Cloud, and that opens up a huge opportunity for customers: unparalleled data protection as well as real, hard savings. Customers who look at this today and say, "I'm going to shrink my data center, I'm going to focus on moving certain things to the cloud": DR and backup, and especially DR and backup for VMware, might be one of the easiest and fastest things to take into the cloud. And the partnership between VMware and NetApp might actually give you that.

>>And ONTAP is a great solution, with FSx there. Yes, I think you guys have a real advantage here, and I want to get into something that's kind of gloom and doom. I don't have to go negative on this one, Savannah, but they make me nervous, John. You know, if you look at the economic realities, you've got a lot of companies in the backup space: Druva, Cohesity, Rubrik, others. There's a question of which generation breaks through in the cloud. What's the unique thing? Because there are going to be challenges in the economy, and customers are going to vote with their wallets as they make these architectural decisions, and you guys are in the middle of it. There may not be enough to go around, and the musical chairs might stop, or not; I'm not sure. But if there is going to be a consolidation, what does that look like? What are customers thinking? Backup and recovery in the cloud is a unique thing. You mentioned economics: you can't take the old strategy from five or ten years ago and just drop it in. What's different now?

>>Yeah, I think when it comes to data protection, there has been a real change in the technology landscape that opened the door for a lot of new vendors to come in and make offers. Should we expect consolidation?
I think macroeconomics and other outside factors will probably drive some of that to happen. But there is one more parameter, John, that I want to mention in this context, which is simplicity. Many of the storage vendors, including us, and including AWS, want to make backup and DR basically a simple checkbox that you choose together with your main workload. That is another key capability that is changing the market.

>>But it also needs to move up the stack. So it's not only simplicity; it's also about moving up to the applications that people use and having it baked in. It's not about you going out and finding a replication product. It's like Ronen said: we've got to make it simple, and then we've got to bake it into what people use. One of our most recent acquisitions, Instaclustr, allows us to provide our customers with open-source databases and data-streaming services. When those sit on top of ONTAP, and on top of Spot's infrastructure optimization, you get all of that for free through the database that you use. So you don't worry about it: your database is replicated, it's highly available, and it's running at the best cost. That's where it's going.

>>Awesome. You also recently purchased CloudCheckr as well. Do you just purchase wonderful things all the time?

>>We do. We do.

>>I'm not...

>>No, but more seriously, it's not that we walk around, find the best thing, and break out the checkbook. It rounds out what customers need for the cloud. A lot of our customers come from storage, but they need to operate the entire cloud around the storage that they have. CloudCheckr gives us financial visibility across every single dollar you spend in the cloud, and it also gives us a better go-to-market motion with our MSPs and our distributors than we had in the past. So we're really excited about what CloudCheckr can unlock for us in
the future.

>>Makes a lot of sense, and congratulations on all the extremely exciting things going on. Our final and closing question for our guests at this year's show: we would love your Instagram hot take, your 30-second hot take, on the most important stories, messages, and themes of AWS re:Invent 2022. Ronen, I'm going to start with you, because you have a smirk.

>>And you're asking one day ahead of the keynotes. One day ahead.

>>You can give us a little tease.

>>I think that, pandemic or no pandemic, face to face or not, the innovation in the cloud is actually breaking all records. And I think this year specifically you will see a lot of focus on data and scale. These are two amazing things that I think you'll see doubled down on. But I'm also anxious to see tomorrow, so I'll learn more about it then.

>>All right, we might have to chat with you a little bit after tomorrow, with the keynotes and whatnot coming up. What about you?

>>I think you're going to hear a lot about cost. How much are you spending? How far are your dollars going? How are you using the cloud to the best of your abilities? How efficient are you being with your dollars in the cloud? I think that's going to be a huge topic. It's on everybody's mind; it's the macroeconomic situation right now. I think it's going to be in every session of the keynote tomorrow.

>>All right, so every session?

>>Every session.

>>A bold claim. John, we're going to have to check that.

>>I'm with him, on AWS in general. And go look up what I said.

>>Yeah, we'll go back and look at it.

>>I'm going to check on you on that. The record now states it. There you go, Kevin. Thank you both so much. We hope it's a stellar show for Spot by NetApp, and that we have you on 10 more times, not just this once. And I can't wait to hear whether your predictions prove accurate tomorrow, when we get to learn a lot more.

>>No, you've got to go to all the sessions just to check his
>>No, you gotta go to all the sessions down just to check his >>Math on that. Yeah, no, exactly. Now we have to do our homework just to call him out. Not that we're competitive or those types of people at all. John. No. On that note, thank you both for being here with us. John, thank you so much. Thank you all for tuning in from home. We are live from Las Vegas, Nevada here at AWS Reinvent with John Furrier. My name is Savannah Peterson. You're watching the Cube, the leader in high tech coverage.

Published Date: Nov 29, 2022


Matt LeBlanc & Tom Leyden, Kasten by Veeam | VMware Explore 2022


 

(upbeat music)

>>Hey everyone, and welcome back to The Cube. We are covering VMware Explore live in San Francisco. This is our third day of wall-to-wall coverage, and John Furrier is here with me, Lisa Martin. We are excited to welcome two guests from Kasten by Veeam: please welcome Tom Leyden, VP of Marketing, and Matt LeBlanc, not Joey from Friends, Matt LeBlanc, systems engineer for North America at Kasten by Veeam. Welcome, guys, great to have you.

>>Thank you.

>>Thank you for having us.

>>Tom--

>>Great, go ahead.

>>Oh, I was going to say: Tom, talk to us about some of the key challenges customers are coming to you with.

>>The key challenge they have at this point is getting up to speed with Kubernetes. Everybody has it on their list: we want to do Kubernetes. But where are they going to start? Back when VMware came on the market, I was switching from Windows to Mac, and I needed to run a Windows application on my Mac, and someone told me, "Run a VM." I went to the internet, I downloaded it, and in a half hour I was done. That's not how it works with Kubernetes. So that's a bit of a challenge.

>>I mean, Kubernetes, Lisa, remember the early days of The Cube: OpenStack was transitioning, cloud was booming, and then Kubernetes became the thing that pulled everybody together. It's now de facto, in my mind. So that's clear, but there are a lot of different versions of it, and you hear VMware call it the dial tone. Remember Pat Gelsinger: "It's a dial tone." Turns out that came from Kit Colbert, or maybe AJ coined the term here, but it's since been adopted by everyone. There are different versions, it's open source, AWS is involved. How do you guys look at the relationship with Kubernetes here at VMware Explore, and with the customers? Because they have choices. They can do it on their own; they can add a little bit with Lambda, with serverless; they can do more here. It's not easy.
It's not as easy as people think it is. And there's a skills-gap problem too; we're seeing a lot of these problems out there. What's your take?

>>I'll let Matt talk to that. But what I want to say first is that this is also the power of the cloud-native ecosystem. The days are gone when companies selected one enterprise application and built their stack with it. Today they're building applications using dozens, if not hundreds, of different components from different vendors or open-source platforms. And that is really what creates opportunities for those cloud-native developers. So maybe you want to...

>>Yeah, we're seeing a lot of hybrid solutions out there. So it's not just choosing one vendor, AKS, EKS, or Tanzu; we're seeing all of the above. I had a call this morning with a large healthcare provider, and they have a hundred clusters, spread across AKS, EKS, and GKE. So it covers everything, plus the need for an on-prem solution to manage it all.

>>I've got a stat I have to share, and I want your reactions; you can laugh or comment, whatever you want. I talked to a big CSO, a CXO at a big company, I won't say the name: "We've got a thousand developers. A hundred of them have heard of Kubernetes, ten have touched it and used it, and one is good at it." His point is that there's a lot of Kubernetes need as people become aware. It shows there's more and more adoption around it, and you see a lot of managed services out there. So it's clearly happening, and I'm probably exaggerating the ratio, but the point is, the numbers kind of make sense for a thousand developers. You start to see people adopting it; they're aware of the value. But being good at it, from what we're hearing, is one of those things. Can you share your reaction to that? It's hyperbole at some level, but it does point to the adoption trend: you've got to get good at it, you've got to know how to use it.
>>It's very accurate, actually. It's what we're seeing in the market. We've been doing some research of our own, and we have some interesting numbers that we're going to be sharing soon. Analysts don't have a whole lot of numbers these days, so we're trying to run our own surveys to get a grasp of the market. One simple piece of research I've done myself used Google Trends. In Google Trends, if you go back to 2004 and compare VMware against Kubernetes, you get a very interesting graph. What you'll see is that for VMware, the adoption curve is practically complete, while Kubernetes is clearly taking off, and the volume of searches for Kubernetes today is almost as big as for VMware. So that's a big sign that this is starting to happen. But in this process, we have to get those companies to bring all of their engineers up to speed on Kubernetes, and that's one of the community efforts we're helping with. We built a website called learning.kasten.io. We're going to rebrand it soon at KubeCon, so stay tuned, but we're offering hands-on labs there for people to come learn Kubernetes with us. Because for us, the faster the adoption goes, the better for our business.

>>I was just going to ask you about the learning. There's a big focus here on educating customers to help dial down the complexity and really get these numbers up, as John was mentioning.

>>And we're really breaking it down to the very beginning. At this point we have almost ten labs, as we call them, and they start from installing a Kubernetes cluster: people really will, hands-on, install a Kubernetes cluster. They learn to build an application; they learn, obviously, to back up the application in the safest way; and then there's how to tune storage and how to implement security. We're really building it up so that people can, step by step, in a hands-on way, learn Kubernetes.
>>It's interesting. This is VMware Explore, their first event under the new name; it was VMworld prior. Big community, a lot of loyal customers, but they're classic and foundational in enterprises, and let's face it, some of them aren't going to rip out VMware anytime soon, because the workloads are running on it. And with Broadcom there may be some action, maybe increased prices or whatnot; we'll see how that goes. But the personas here are definitely going cloud native. They did that with Tanzu, which was a great thing, and the fruit's coming off the tree now; you're starting to see it. The CNCF has been on this for a long, long time. KubeCon is coming up in Detroit, and that's always been great, because you have the day-zero events and all kinds of community activity, tons of developer action. So here they're saying "let's connect to the developer," and the developers are at KubeCon. The personas are connecting, or overlapping. I'd love to get your thoughts, Matt.

>>So among the people we're talking to, there really is a split between the traditional IT ops folks, a lot of the people who are here today at VMware Explore, and the SREs and DevOps folks we also talk with. What really needs to happen is we need a bit more experience and more training, and we need these two groups to really start to coordinate and work together, because you're basically moving a lot of these traditional workloads out of that traditional on-prem environment, and the only way to get that experience is to get your hands dirty.

>>Right. So how would you describe the personas specifically, here versus, say, KubeCon? IT ops?

>>Very, very different. Well--

>>Go ahead, explain.

>>Well, from this perspective, this is all about VMware and everything they have to offer, so we're dealing with a lot of administrators in that regard.
On the Kubernetes side, we have site reliability engineers, and their goal is exactly as the title describes: they want to architect applications that are very resilient and reliable, and it is a different way of working.

>>I was on a Twitter Spaces about SREs and DevOps, and there were people saying their title is "DevOps." Like, no, no, you do DevOps; you're not the DevOps person--

>>Right, right.

>>But they become the DevOps person, because you're the developer running operations. So it's been odd how DevOps has been co-opted as a position.

>>And that is really interesting. One person told me, earlier when I started at Kasten: we have this new persona, the DevOps person; that is the person we're going after. But then, talking to a few other people, it was like, "They're not falling from space." It's people who used to do other jobs who now take a more DevOps approach to what they're doing. It's not a new--

>>And then the SRE conversation: site reliability engineer comes from Google, from one person managing multiple clusters, and that has evolved into the DevOps role. So it's been interesting, and this is really the growth of scale: the 10x developer moving to cloud native, which means, okay, you've got to run ops and make the developer go faster. If you look at the stuff we've been covering on The Cube, the trend has been cloud-native developers, which I call DevOps-like developers. They want to go faster, they want self-service, and they don't want to slow down. They don't want to deal with BS, which to them means going through security code checks and waiting for the ops team to do something. So data and security seem to be the new ops, not so much IT ops, because that's now cloud.
So how do you guys see that in, because Kubernetes is rationalizing this, certainly on the compute side, not so much on storage yet but it seems to be making things better in that grinding area between dev and these complicated ops areas like security and data, where it's constantly changing. What do you think about that? >> Well there are still a lot of specialty folks in that area in regards to security operations. The whole idea is to be able to script and automate as much as possible and not have to create a ticket to request a VM to be built or an operating system or an application deployed. They're really empowered to automatically deploy those applications and keep them up. >> And that was the old dev ops role or person. That was what dev ops was called. So again, that is standard. I think at CubeCon, that is something that's expected. >> Yes. >> You would agree with that. >> Yeah. >> Okay. So now translating VMworld, VMware Explore to CubeCon, what do you guys see as happening between now and then? Obviously you've got re:Invent right at the end in that first week of December coming. So that's going to be two major shows coming in now back to back that are going to be super interesting for this ecosystem. 
This is really exciting 'cause when you hear CNCF say that 97% of enterprises are using Kubernetes, that's obviously the small part of their world. Those are their members. We now want to see that grow to the entire ecosystem, the larger ecosystem. >> Well, it's actually a great thing, actually. It's not a bad thing, but I will counter that by saying I am hearing the conversation here, you guys'll like this on the Veeam side, the other side of the Veeam, there's deep dives on ransomware and air gap and configuration errors on backup and recovery and it's all about Veeam on the other side. Those are the guys here talking deep dive on making sure that they don't get screwed up on ransomware, not Kubernetes yet, but they're now leaning into Kubernetes. They're crossing into the new era because the apps'll end up writing the code for that. >> So the funny part is all of those concepts, ransomware and recovery, they're all, there are similar concepts in the world of Kubernetes and both on the Veeam side as well as the Kasten side, we are supporting a lot of those air gap solutions and providing a ransomware recovery solution and from an air gap perspective, there are many use cases where you do need to live that way. It's not just the government entity, but we have customers that are cruise lines in Europe, for example, and they're disconnected. So they need to live in that disconnected world, or military as well. >> Well, let's talk about the adoption of customers. I mean this is the customer side. What's accelerating their, what's the conversation with the customer base, not just here but in the industry with Kubernetes, how would you guys categorize that? And how does that get accelerated? What's the customer situation? >> A big drive to Kubernetes is really about the automation, self-service and reliability. We're seeing the drive to a reduction of resources, being able to do more with less, right? This is ongoing the way it's always been. 
But I was talking to a large university in Western Canada and they're a huge Veeam customer with 7,000 VMs and three months ago, they said, "Over the next few years, we plan on moving all those workloads to Kubernetes." And the reason for it is really to reduce their workload, both from an administration side, a cost perspective, as well as on-prem resources as well. So there's a lot of good business reasons to do that in addition to the technical reliability concerns. >> So what are those specific reasons? This is where now you start to see the rubber hit the road on acceleration. >> So I would say scale and flexibility, that ecosystem, that opportunity to choose any application from that or any tool from that cloud native ecosystem is a big driver. I wanted to add to the adoption. Another area where I see a lot of interest is everything AI, machine learning. One example is also a customer coming from Veeam. We're seeing a lot of that and that's a great thing. It's an AI company that is doing software for automated driving. They decided that VMs alone were not going to be good enough for all of their workloads. And then for select workloads, the more scalable ones where scalability was more of a topic, would move to Kubernetes. I think at this point they have like 20% of their workloads on Kubernetes and they're not planning to do away with VMs. VMs are always going to be there, just like mainframes still exist. >> Yeah, oh yeah. They're accelerating actually. >> We're projecting over the next few years that we're going to go to a 50/50 and eventually lean towards more Kubernetes than VMs, but it's going to be a mix. >> Do you have a favorite customer example, Tom, that you think really articulates the value of what Kubernetes can deliver to customers where you guys are really coming in and helping to demystify it? >> I would think SuperStereo is a really great example and you know the details about it. >> I love the SuperStereo story. 
They were an AWS customer and they're running OpenShift version three and they need to move to OpenShift version four. There is no in-place upgrade. You have to migrate all your apps. Now SuperStereo is a large French IT firm. They have over 700 developers in their environment and it was by their estimation that this was going to take a few months to get that migration done. We were able to go in there and help them with the automation of that migration and Kasten was able to help them architect that migration and we did it in the course of a weekend with two people. >> A weekend? >> A weekend. >> That's a hackathon. I mean, that's not real, come on. >> Compared to thousands of man hours and a few months. Not to mention, since they were able to retire that old OpenShift cluster, the OpenShift three, they were able to stop paying Jeff Bezos for a couple of those months, which is tens of thousands of dollars per month. >> Don't tell anyone, keep that down low. You're going to get shot when you leave this place. No, seriously. This is why I think the multi-cloud hybrid is interesting because these kinds of examples are going to be more than less coming down the road. You're going to see, you're going to hear more of these stories than not hear them because what containerization, now Kubernetes, is doing, what Docker's doing now, and the role of containers not being such a land grab is allowing Kubernetes to be more versatile in its approach. So I got to ask you, you can almost apply that concept to agility, to other scenarios like spanning data across clouds. >> Yes, and that is what we're seeing. So the call I had this morning with a large insurance provider, you may have that insurance provider, healthcare provider, they're across three of the major hyperscaler clouds and they do that for reliability. Last year, AWS went down, I think three times in Q4 and to have a plan of being able to recover somewhere else, you can actually plan your, it's DR, it's a planned migration. 
You can do that in a few hours. >> It's interesting, just a sidebar here for a second. We had a couple chats earlier today. We had the influencers on and all the super cloud conversations and trying to get more data to share with the audience across multiple areas. One of them was Amazon and the super clouds, the hyperscalers like Amazon, Azure, Google and the rest are out there, Oracle, IBM and everyone else. There's almost a consensus that maybe there's time for some peace amongst the cloud vendors. Like, "Hey, you've already won." (Tom laughs) Everyone's won, now let's just like, we know where everyone is. Let's go peacetime, everyone, 'cause the relationship's not going to change between public cloud and the new world. So there's a consensus, like what does peace look like? I mean, first of all, the pie's getting bigger. You're seeing ecosystems forming around all the big new areas and that's a good thing. The tide rises and the pie's getting bigger, there's a bigger market out there now so people can share and share alike. >> I've never worked for any of these big players. So I would have to agree with you, but peace would not drive innovation. And my heart is with tech innovation. I love it when vendors come up with new solutions that will make things better for customers and if that means that we're moving from on-prem to cloud and back to on-prem, I'm fine with that. >> What excites me is really having the flexibility of being able to choose any provider you want because you do have open standards, being cloud native in the world of Kubernetes. I've recently discovered that the Canadian federal government had mandated to their financial institutions that, "Yes, you may have started all of your cloud presence in Azure, you need to have an option to be elsewhere." 
So it's not like-- >> Well, the sovereign cloud is one of those big initiatives, but also going back to Java, we heard another guest earlier, we were thinking about Java, write once, run anywhere, right? So you can't do that today in a cloud, but now with containers-- >> You can. >> Again, this is, again, this is the point that's happening. Explain. >> So when you have, Kubernetes is a strict standard and all of the applications are written to that. So whether you are deploying MongoDB or Postgres or Cassandra or any of the other cloud native apps, you can deploy them pretty much the same, whether they're in AKS, EKS or on Tanzu and it makes it much easier. The world became just a lot less proprietary. >> So that's the story that everybody wants to hear. How does that happen in a way that doesn't stall the innovation and the developer growth 'cause the developers are driving a lot of change. I mean, for all the talk in the industry, the developers are doing pretty good right now. They've got a lot of open source, plentiful, open source growing like crazy. You got shifting left in the CI/CD pipeline. You got tools coming out with Kubernetes. Infrastructure as code is almost a 100% reality right now. So there's a lot of good things going on for developers. That's not an issue. The issue is just underneath. >> It's a skillset, and that is really one of the biggest challenges I see in our deployments: a lack of experience. And it's not everyone. There are some folks that have been playing around for the last couple of years with it and they do have that experience, but there are many people that are still young at this. >> Okay, let's do, as we wrap up, let's do a lead into CubeCon, it's coming up and obviously re:Invent's right behind it. Lisa, we're going to have a lot of pre-CubeCon interviews. We'll interview all the committee chairs, program chairs. We'll get the scoop on that, we do that every year. 
But while we've got you guys here, let's do a little pre-pre-preview of CubeCon. What can we expect? What do you guys think is going to happen this year? What does CubeCon look like? You guys are a big sponsor of CubeCon. You guys do a great job there. Thanks for doing that. The community really recognizes that. But as Kubernetes comes in now for this year, you're looking at probably the, what, third year now that I would say Kubernetes has been on the front burner, where do you see it on the hockey stick growth? Have we kicked the curve yet? What's going to be the level of intensity for Kubernetes this year? How's that going to impact CubeCon in a way that people may or may not think it will? >> So I think first of all, CubeCon is going to be back at the level where it was before the pandemic, because the show, like many other shows, has been suffering from, I mean, virtual events are not like the in-person events. CubeCon LA was super exciting for all the vendors last year, but the attendees were not really there yet. Valencia was a huge bump already and I think Detroit, it's a very exciting city I heard. So it's going to be a blast and it's going to be a huge attendance, that's what I'm expecting. Second, this is going to be, personally, my third in-person CubeCon, and comparing how vendors evolved between the previous two, there's going to be a lot of interesting stories from vendors, a lot of new innovation coming onto the market. And I think the conversations that we're going to be having will, yet again, be much more about live applications and people using Kubernetes in production, rather than at the first in-person CubeCon for me in LA, where it was a lot about learning. Still, we're going to continue to help people learn 'cause it's really important for us, but the exciting part about CubeCon is you're talking to people who are using Kubernetes in production and that's really cool. >> And users contributing projects too. >> Also. 
>> I mean Lyft is a poster child there and you've got a lot more. Of course you got the stealth recruiting going on there, Apple, all the big guys are there. They have a booth and no one's attending, you're like, "Oh come on." Matt, what's your take on CubeCon? Going in, what do you see? And obviously a lot of dynamic new projects. >> I'm going to see much, much deeper tech conversations. As experience increases, the more you learn, the more you realize you have to learn more. >> And the sharing's going to increase too. >> And the sharing, yeah. So I see a lot of deep conversations. It's no longer the, "Why do I need Kubernetes?" It's more, "How do I architect this for my solution or for my environment?" And yeah, I think there's a lot more depth involved and the size of CubeCon is going to be much larger than we've seen in the past. >> And to finish off, what I think from the vendor's point of view, what we're going to see is a lot of applications that will be a lot more enterprise-ready, because that is the part that was missing so far. It was a lot about the what's new and enabling Kubernetes. But now that adoption is going up, a lot of features for different components still need to be added to have them enterprise-ready. >> And what can the audience expect from you guys at CubeCon? Any teasers you can give us from a marketing perspective? >> Yes. We have a rebranding sitting ready for the learning website. It's going to be bigger and better. So we're no longer going to call it learning.kasten.io, but I'll be happy to come back with you guys and present a new name at CubeCon. >> All right. >> All right. That sounds like a deal. Guys, thank you so much for joining John and me breaking down all things Kubernetes, talking about customer adoption, the challenges, but also what you're doing to demystify it. We appreciate your insights and your time. >> Thank you so much. >> Thank you very much. >> Our pleasure. >> Thanks Matt. 
>> For our guests and John Furrier, I'm Lisa Martin. You've been watching The Cube's live coverage of VMware Explore 2022. Thanks for joining us. Stay safe. (gentle music)

Published Date : Sep 1 2022



Jason Collier, AMD | VMware Explore 2022


 

(upbeat music) >> Welcome back to San Francisco, "theCUBE" is live, our day two coverage of VMware Explore 2022 continues. Lisa Martin with Dave Nicholson. Dave and I are pleased to welcome Jason Collier, principal member of technical staff at AMD to the program. Jason, it's great to have you. >> Thank you, it's great to be here. >> So what's going on at AMD? I hear you have some juicy stuff to talk about. >> Oh, we've got a ton of juicy stuff to talk about. Clearly the Project Monterey announcement was big for us, so we've got that to talk about. Another thing that I really wanted to talk about was a tool that we created and we call it, it's the VMware Architecture Migration Tool, call it VAMT for short. It's a tool that we created and we worked together with VMware and some of their professional services crew to actually develop this tool. And it is also an open source based tool. And really the primary purpose is to easily enable you to move from one CPU architecture to another CPU architecture, and do that in a cold migration fashion. >> So we're probably not talking about CPUs from Tandy, Radio Shack systems, likely this would be what we might refer to as other X86 systems. >> Other X86 systems is a good way to refer to it. >> So it's interesting timing for the development and the release of a tool like this, because in this sort of X86 universe, there are players who have been delayed in terms of delivering their next gen stuff. My understanding is AMD has been public with the idea that they're on track for by the end of the year, Genoa, next gen architecture. So can you imagine a situation where someone has an existing set of infrastructure and they're like, hey, you know what I want to get on board, the AMD train, is this something they can use from the VMware environment? >> Absolutely, and when you think about- >> Tell us exactly what that would look like, walk us through 100 servers, VMware, 1000 VMs, just to make the math easy. What do you do? 
How does it work? >> So one, there's several things that the tool can do, we actually went through, the design process was quite extensive on this. And we went through all of the planning phases that you need to go through to do these VM migrations. Now this has to be a cold migration, it's not a live migration. You can't do that between the CPU architectures. But what we do is you create a list of all of the virtual machines that you want to migrate. So we take this CSV file, we import this CSV file, and we ask for things like, okay, what's the name? Where do you want to migrate it to? So from one cluster to another, what do you want to migrate it to? What are the networks that you want to move it to? And then the storage platform. So we can move storage, it could either be shared storage, or we could move say from VSAN to VSAN, however you want to set it up. So it will do those storage migrations as well. And then what happens is it's actually going to go through, it's going to shut down the VM, it's going to take a snapshot, it is going to then basically move the compute and/or storage resources over. And once it does that, it's going to power 'em back up. And it's going to check, we've got some validation tools, where it's going to make sure VM Tools comes back up where everything is copacetic, it didn't blue screen or anything like that. And once it comes back up, then everything's good, it moves onto the next one. Now a couple of things that we've got feature wise, we built into it. You can parallelize these tasks. So you can say, how many of these machines do you want to do at any given time? So it could be, say 10 machines, 50 machines, 100 machines at a time, that you want to go through and do this move. Now, if it did blue screen, it will actually roll it back to that snapshot on the origin cluster. So that there is some protection on that. A couple other things that are actually in there are things like audit tracking. 
So we do full audit logging on this stuff, we take a snapshot, there's basically kind of an audit trail of what happens. There's also full logging, SYS logging, and then also we'll do email reporting. So you can say, run this and then shoot me a report when this is over. Now, one other cool thing is you can also actually define a change window. So I don't want to do this in the middle of the afternoon on a Tuesday. So I want to do this later at night, over the weekend, you can actually just queue this up, set it, schedule it, it'll run. You can also define how long you want that change window to be. And what it'll do, it'll do as many as it can, then it'll effectively stop, finish up, clean up the tasks and then send you a report on what all was successfully moved. >> Okay, I'm going to go down the rabbit hole a little bit on this, 'cause I think it's important. And if I say something incorrect, you correct me. >> No problem. >> In terms of my technical understanding. >> I got you. >> So you've got a VM, essentially a virtual machine typically will consist of an entire operating system within that virtual machine. So there's a construct that containerizes, if you will, the operating system, what is the difference, where is the difference in the instruction set? Where does it lie? Is it in the OS' interaction with the CPU or is it between the construct that is the sort of wrapper around the VM that is the difference? >> It's really primarily the OS, right? And we've not really had too many issues doing this and most of the time, what is going to happen, that OS is going to boot up, it's going to recognize the architecture that it's on, it's going to see the underlying architecture, and boot up. All the major operating systems that we test worked fine. I mean, typically they're going to work on all the X86 platforms. But there might be instruction sets that are kind of enabled in one architecture that may not be in another architecture. 
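The flow Jason walks through above — import a CSV plan, then for each VM shut it down, snapshot it, move compute and storage, power it back up, validate VM Tools, and roll back to the snapshot on failure, with a parallelism knob for batching — can be sketched roughly as follows. To be clear, the real VAMT is PowerShell/PowerCLI-based and published in VMware's public GitHub samples; this Python sketch is purely illustrative, and every name in it (the CSV column names, `load_plan`, `migrate_vm`, the callback hooks) is a hypothetical stand-in, not the tool's actual interface.

```python
# Illustrative sketch of a cold-migration orchestrator like the one described.
# All field names and helpers are hypothetical; the real tool is PowerCLI-based.
import csv
import io

def load_plan(csv_text):
    """Parse a migration plan CSV: one row per VM, with its destination
    cluster, network, and datastore (hypothetical column names)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def batches(vms, parallelism):
    """Split the plan into groups of at most `parallelism` concurrent moves,
    mirroring the 10/50/100-at-a-time knob described above."""
    return [vms[i:i + parallelism] for i in range(0, len(vms), parallelism)]

def migrate_vm(vm, power_off, snapshot, move, power_on, tools_ok,
               ignore_tools=False):
    """One cold migration: shut down, snapshot, move, power on, validate.

    `power_off`, `snapshot`, `move`, `power_on`, and `tools_ok` are injected
    hooks standing in for the real hypervisor calls. If validation fails
    (e.g. the guest blue-screens), revert to the snapshot on the origin."""
    power_off(vm)
    snap = snapshot(vm)  # rollback point on the origin cluster
    move(vm, vm["dest_cluster"], vm["dest_network"], vm["dest_datastore"])
    power_on(vm)
    if tools_ok(vm) or ignore_tools:  # the "ignore VM Tools" flag
        return "migrated"
    snap.revert()  # validation failed -> roll back to the origin snapshot
    return "rolled-back"
```

A run would then just iterate `for batch in batches(plan, 10):`, dispatch each batch concurrently, stop when the change window closes, and email a report of what moved, which is the behavior described above.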
>> And you're looking for that during this process. >> Well usually the OS itself is going to kind of detect that. So if it pops up, the one thing that is kind of a caution that you need to look for. If you've got an application that's explicitly using an instruction set that's on one CPU vendor and not the other CPU vendor. That's the one thing where you're probably going to see some application differences. That said, it'll probably be compatible, but you may not get that instruction set advantage in it. >> But this tool remediates against that. >> Yeah, and what we do, we're actually using VM Tools itself to go through and validate a lot of those components. So we'll look and make sure VM Tools is enabled in the first place, on the source system. And then when it gets to the destination system, we also look at VM Tools to see what is and what is not enabled. >> Okay, I'm going to put you on the spot here. What's the zinger, where doesn't it work? You already said cold, we understand, you can schedule for cold migrations, that's not a zinger. What's the zinger, where doesn't it work? >> It doesn't work like, live migrations just don't work. >> No live, okay, okay, no live. What about something else? What's the oh, you've got that version, you've got that version of X86 architecture, it-won't work, anything? >> A majority of those cases work, where it would fail, where it's going to kick back and say, hey, VM Tools is not installed. So where you would see this is if you're running a virtual appliance from some vendor, like insert vendor here that say, got a firewall, or got something like that, and they don't have VM Tools enabled. It's going to fail it out of the gate, and say, hey, VM Tools is not on this, you might want to manually do it. >> But you can figure out how to fix that? >> You can figure out how to do that. You can also, and there's a flag in there, so in kind of the options that you give it, you say, ignore VM Tools, don't care, move it anyway. 
So if you've got less, some VMs that are in there, but they're not a priority VM, then it's going to migrate just fine. >> Got It. >> Can you elaborate a little bit on the joint development work that AMD and VMware are doing together and the value in it for customers? >> Yeah, so it's one of those things we worked with VMware to basically produce this open source tool. So we did a lot of the core component and design and we actually engaged VMware Professional Services. And a big shout out to Austin Browder. He helped us a ton in this project specifically. And we basically worked, we created this, kind of co-designed, what it was going to look like. And then jointly worked together on the coding, of pulling this thing together. And then after that, and this is actually posted up on VMware's public repos now in GitHub. So you can go to GitHub, you can go to the VMware samples code, and you can download this thing that we've created. And it's really built to help ease migrations from one architecture to another. So if you're looking for a big data center move and you got a bunch of VMs to move. I mean, even if it's same architecture to same architecture, it's definitely going to ease the pain of going through and doing a migration of, it's one thing when you're doing 10 machines, but when you're doing 10,000 virtual machines, that's a different story. It gets to be quite operationally inefficient. >> I lose track after three. >> Yeah. >> So I'm good for three, not four. >> I was going to ask you what your target market segment is here. Expand on that a little bit and talk to me about who you're working with and those organizations. >> So really this is targeted toward organizations that have large deployments in enterprise, but also I think this is a big play with channel partners as well. 
So folks out there in the channel that are doing these migrations and they do a lot of these, when you're thinking about the small and mid-size organizations, it's a great fit for that. Especially if they're kind of doing that upgrade, the lift and shift upgrade, from here's where you've been five to seven years on an architecture and you want to move to a new architecture. This is really going to help. And this is not a point and click GUI kind of thing. It's command line driven, it's using PowerShell, we're using PowerCLI to do the majority of this work. And for channel partners, this is an excellent opportunity to put in the value add as a VAR. And there's a lot of opportunity for, I think, channel partners to really go and take this. And once again, being open source, we expect this to be extensible, we want the community to contribute and put back into this to basically help grow it and make it a more useful tool for doing these cold migrations between CPU architectures. >> Have you seen any in the last couple of years of dynamics, obviously across the world, any industries in particular that are really leading edge for what you guys are doing? >> Yeah, that's really, really interesting. I mean, we've seen it, it's honestly been a very horizontal problem, pretty much across all vertical markets. I mean, we've seen it in financial services, we've seen it in, honestly, pretty much across the board. Manufacturing, financial services, healthcare, we have seen kind of a strong interest in that. And then also we've actually taken this and presented this to some of our channel partners as well. And there's been a lot of interest in it. I think we presented it to about 30 different channel partners, a couple of weeks back, about this. And I got contacted by 30 different channel partners that said they're interested in basically helping us work on it. 
Tagging on to Lisa's question, do you have visibility into the AMD thought process around the timing of your next gen release versus others that are competitors in the marketplace? How you might leverage that in terms of programs where partners are going out and saying, hey, perfect time, you need a refresh, perfect time to look at AMD, if you haven't looked at them recently. Do you have any insight into what's going on? I know you're focused on this area. But what are your thoughts on, well, what's the buzz? What's the buzz inside AMD on that? >> Well, when you look overall, if you look at the Gartner Hype Cycle, when VMware was being broadly adopted, I'm going to be blunt, and I'm going to be honest right here, AMD didn't have a horse in the race. And the majority of those VMware deployments we see are not running on AMD. Now that said, there's an extreme interest in the fact that we've got these very cored in systems that are now coming up on, now you're at that five to seven year refresh window of pulling in new hardware. And we have extremely attractive hardware when it comes to running virtualized workloads. The test cluster that I'm running at home, I've got that five to seven year old gear, and I've got some of the, even just the Milan systems that we've got. And I've got three nodes of another architecture going onto AMD. And when I got these three nodes completely maxed out on the number of VMs that I can run on 'em, I'm at a quarter of the capacity of what I'm putting on the new stuff. So what you get is, I mean, we worked the numbers, and it's definitely, it's like a 30% decrease in the amount of resources that you need. 
>> So first thing I thought of when you talk about running clusters in your home is the cost of electricity, but you're okay. >> I'm okay. >> You don't live here, you don't live here, you don't need to worry about that. >> I'm okay. >> Do you have a favorite customer example that you think really articulates the value of AMD when you're in customer conversations and they go, why AMD and you hit back with this? >> Yeah. Actually it's funny because I had a conversation like that last night, kind of random person I met later on in the evening. We were going through this discussion and they were facing exactly this problem. They had that five to seven year infrastructure. It's funny, because the guy was a gamer too, and he's like, man, I've always been a big AMD fan, I love the CPUs all the way since back in basically the Opterons and Athlons right. He's like, I've always loved the AMD systems, loved the graphics cards. And now with what we're doing with Ryzen and all that stuff. He's always been a big AMD fan. He's like, and I'm going through doing my infrastructure refresh. And I told him, I'm just like, well, hey, talk to your VAR and have 'em plug some AMD SKUs in there from the Dells, HPs and Lenovos. And then we've got this tool to basically help make that migration easier on you. And so once we had that discussion and it was great, then he swung by the booth today and I was able to just go over, hey, this is the tool, this is how you use it, here's all the info. Call me if you need any help. >> Yeah, when we were talking earlier, we learned that you were at Scale. So what are you liking about AMD? How does that relate? >> The funny thing is this is actually the first time in my career that I've actually had a job where I didn't work for myself. I've been doing venture backed startups the last 25 years and we've raised couple hundred million dollars worth of investment over the years. And so one, I figured, here I am going to AMD, a larger corporation. 
I'm just like, am I going to be able to make it a year? And I have been here longer than a year and I absolutely love it. The culture at AMD is amazing. We still have that really, I mean, almost it's like that underdog mentality within the organization. And the team that I'm working with is a phenomenal team. And it's actually, our EVP and our Corp VP, were actually my executive sponsors, we were at a prior company. They were one of my executive sponsors when I was at Scale. And so my now VP boss calls me up and says, hey, I'm putting a band together, are you interested? And I was kind of enjoying a semi-retirement lifestyle. And then I'm just like, man, because it's you, yes, I am interested. And the group that we're in, the work that we're doing, the way that we're really focusing on forward looking things that are affecting the data center, what's going to be the data center like three to five years from now. It's exciting, and I am having a blast, I'm having the time of my life. I absolutely love it. >> Well, that relationship and the trust that you will have with each other, that bleeds into the customer conversations, the partner conversations, the employee conversations, it's all inextricably linked. >> Yes it is. >> And we want to know, you said three to five years out, like what? Like what? Just general futurist stuff, where do you think this is going. >> Well, it's interesting. >> So moon collides with the earth in 2025, we already know that. >> So we dialed this back to the Pensando acquisition. When you look at the Pensando acquisition and you look at basically where data centers are today, but then you look at where basically the big hyperscalers are. You look at an AWS, you look at their architecture, you specifically wrap Nitro around that, that's a very different architecture than what's being run in the data center. 
And when you look at what Pensando does, that's really starting to bring what these real clouds out there, what these big hyperscalers are running, into the grasp of the data center. And so I think you're going to see a fundamental shift. The next 10 years are going to be exciting because the way you look at a data center now, when you think of what CPUs do, what shared storage, how the networking is all set up, it ain't going to look the same. >> Okay, so the competing vision with that, to play devil's advocate, would be DPUs are kind of expensive. Why don't we just use NICs, give 'em some more bandwidth, and use the cheapest stuff. That's the competing vision. >> That could be. >> Or the alternative vision, and I imagine, like everything else we've experienced in our careers, they will run in parallel paths, fit for function. >> Well, parallel paths always exist, right? Otherwise, 'cause you know how many times you've heard mainframe's dead, tape's dead, spinning disk is dead. None of 'em dead, right? The reality is you get to a point within an industry where it basically goes from a steep growth curve to a growth curve that's pretty flat. So from a revenue growth perspective, I don't think you're going to see the revenue growth there. I think you're going to see the revenue growth in DPUs. And when you actually take, they may be expensive now, but you look at what Monterey's doing and you look at the way that those DPUs are getting integrated in at the OEM level. It's going to be a part of it. You're going to order your VxRail and vSAN style boxes, they're going to come with them. It's going to be an integrated component. Because when you start to offload things off the CPU, you've driven your overall utilization up. When you don't have to process NSX on basically the x86, you've just freed up cores, and a considerable amount of them.
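That freed-cores claim can be made concrete with equally rough math. The 15% network-processing overhead below is an assumed figure for the sketch, not a VMware or AMD measurement:

```python
# Illustrative only: core-equivalents reclaimed cluster-wide when network
# services (e.g., NSX processing) move off the x86 CPUs onto a DPU.
# The overhead fraction is an assumption, not a vendor benchmark.

def cores_freed(hosts: int, cores_per_host: int, net_overhead: float) -> float:
    """Cores across the cluster currently consumed by the network stack."""
    return hosts * cores_per_host * net_overhead

# Example: 16 hosts x 64 cores, assuming 15% of CPU goes to networking
print(f"{cores_freed(16, 64, 0.15):.1f}")  # → 153.6
```

Even a modest per-host overhead adds up to whole hosts' worth of capacity at cluster scale, which is the economic argument for the offload.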
And you've also moved that to where there's a more intelligent place for that packet to be processed, right out here on this edge. 'Cause you know what, that might not need to go into the host bus at all. So you have just alleviated any transfers over a PCI bus, over the PCI lanes, into DRAM, all of these components, only to come up with, oh, that bit needs to be on this other machine. So now it's coming in and it's making that decision there. And then you take and integrate that into things like the Aruba Smart Switch, that's running the Pensando technology. So now you've got top of rack that is already making those intelligent routing decisions on where packets really need to go. >> Jason, thank you so much for joining us. I know you guys could keep talking. >> No, I was going to say, you're going to have to come back. You're going to have to come back. >> We've just started to peel the layers of the onion, but we really appreciate you coming by the show, talking about what AMD and VMware are doing, what you're enabling customers to achieve. Sounds like there's a lot of tailwind behind you. That's awesome. >> Yeah. >> Great stuff, thank you. >> It's a great time to be at AMD, I can tell you that. >> Oh, that's good to hear, we like it. Well, thank you again for joining us, we appreciate it. For our guest and Dave Nicholson, I'm Lisa Martin. You're watching "theCUBE Live" from San Francisco, VMware Explore 2022. We'll be back with our next guest in just a minute. (upbeat music)

Published Date : Aug 31 2022



Manoj Sharma, Google Cloud | VMware Explore 2022


 

>>Welcome back everyone to theCUBE's live coverage here in San Francisco of VMware Explore 2022. I'm John Furrier with Dave Vellante, co-host of theCUBE. We're two sets, three days of wall to wall coverage, our 12th year covering VMware's annual conference, formerly VMworld, now VMware Explore. We're kicking off day two with Manoj Sharma, director of product management at Google Cloud. Manoj, thanks for coming on the cube. Good to see you. >>Yeah. Very nice to see you as well. >>It's been a while. Google Cloud Next is your event. We haven't been there cuz of the pandemic. Now you got an event coming up in October, October 11th, you wanna give that plug out there, it's gonna be kind of a hybrid show. You guys with GCP, doing great, coming up in the rear with third place, Amazon, Azure, GCP. You guys have really nailed the developer and the AI and the data piece in the cloud. And now with VMware, with multicloud, you guys are in the mix in the universal program that they got here, it's been a partnership. Talk about the Google VMware relationship real quick. >>Yeah, no, I wanna first address, you know, us being in third place. I think when, when customers think about cloud transformation, you know, they, they, for them, it's all about how you can extract value from the data, you know, how you can transform your business with AI. And as far as that's concerned, we are in first place. Now coming to the VMware partnership, what we observed was, you know, you know, first of all, like there's a lot of data gravity built over the past, you know, 20 years in IT, you know, and you know, VMware has, you know, really standardized IT platforms.
And when it comes to the data gravity, what we found was that, you know, customers want to extract the value that, you know, lives in that data, as I was just talking about, but they find it hard to change architectures and, you know, bring those architectures into, you know, the cloud native world, you know, with microservices and so forth. >>Especially when, you know, these applications have been built over the last 20 years with off the shelf, you know, commercial off the shelf systems, you know, you don't even know who wrote the code. You don't know what the IP address configuration is. And it's, you know, if you change anything, it can break your production. But at the same time, they want to take advantage of what the cloud has to offer. You know, the self-service, the elasticity, you know, the economies of scale, efficiencies of operation. So we wanted to, you know, bring the cloud to where the customer is with this service. And, you know, like I said, you know, VMware was the de facto IT platform. So it was a no brainer for us to say, you know what, we'll give them VMware in a native manner, yeah, for our customers, and bring all the benefits of the cloud into it to help them transform and take advantage of the cloud. >>It's interesting. And you called out the, the advantages of Google cloud. One of the things that we've observed is, you know, VMware trying to be much more cloud native in their messaging and their positioning. They're trying to connect into that developer world for cloud native. I mean, Google, I mean, you guys have been cloud native literally from day one, just as a company. Yeah. Infrastructure wise, I mean, DevOps with infrastructure as code was Google's DNA. You had Borg, which became Kubernetes. Everyone kind of knows that history if you're, if you're in, inside the ropes. Yeah.
So as you guys have that core competency of essentially infrastructure as code, which is basically cloud, how are you guys bringing that into the enterprise with VMware, because that's where the puck is going. Right. That's where the use cases are. Okay. You got data, clearly an advantage there. Developers, you guys do really well with developers. We see that at KubeCon and CNCF. Where are the use cases as the enterprises start to really figure out that this is now happening with hybrid and they gotta be more cloud native? Are they ramping up certain use cases? Can you share and connect the dots between what you guys had as your core competency and where the enterprise use cases are? >>Yeah. Yeah. You know, I think transformation means a lot of things. Especially when you get into the cloud, you want to be not only efficient, but you also wanna make sure you're secure, right. And that you can manage and maintain your infrastructure in a way that you can reason about it when, you know, when things go wrong. We took a very unique approach with Google Cloud VMware Engine. When we brought it to the cloud, to Google cloud, what we did was we, we took like a cloud native approach. You know, it would seem odd, you know, for us to say that, okay, VMware is cloud native, but in fact that's what we've done with this service from the ground up. One of the things we wanted to do was make sure we meet all the enterprise needs: availability. We are the only service that gives four nines of SLA in a single site. >>We are the only service that has fully redundant networking so that, you know, some of the pets that you run on the VMware platform, with your operational databases and the keys to the kingdom, you know, they can be run in an efficient manner and in a, in a, in a stable manner and, and, you know, in a highly available fashion. But we also paid attention to performance. One of our customers, Mitel, runs a unified communication service.
And what they found was, you know, the high performance infrastructure, low latency infrastructure actually helps them deliver, you know, a highly reliable, you know, communication experience to their customers. Right. And so, you know, we developed the service from the ground up, making sure we meet the needs of these enterprise applications, but also wanted to make sure it's positioned for the future. >>Well integrated into Google cloud VPC networking, billing, identities, access control, you know, support, all of that with a one stop shop. Right? And so this completely changes the game for, for enterprises on the outset. But what's more, we also have built in integration to cloud operations, you know, a single pane of glass for managing all your cloud infrastructure. You know, you have the ability to easily ELT into BigQuery and, you know, get a data transformation going that way from your operational databases. So, so I think we took a very, like, clean room, ground-up approach to make sure we get the best of both worlds to our customers. So >>Essentially made the VMware stack a first class citizen connecting to all the Google tools. Did you build a bare metal instance to be able to support >>That? We, we actually have a very customized infrastructure to make sure that, you know, the experience that customers are looking for in the VMware context is what we can deliver to them. And, and like I said, you know, being able to manage the pets in, in addition to the cattle that, that we are, we are getting with the modern containerized workloads. >>And, and it's not likely you did that as a one off. I, I would presume that other partners can potentially take advantage of that, that approach as well. Is that >>True? Absolutely. So one of our other examples is, is SAP. You know, our SAP infrastructure runs on a very similar kind of, you know, highly redundant infrastructure, some, some parts of it.
And, and then, you know, we also have in the same context partners such as NetApp. So, so customers want to, you know, truly, so, so there's two parts to it, right? One is to meet customers where they already are, but also take them to the future. And partner NetApp has delivered a cloud service that is well integrated into the platform, serves use cases like VDI, serves use cases for, you know, tier two data protection scenarios, DR, and also the high performance context that customers are looking for. >>Explain that to people, because I think a lot of times people say, oh, NetApp, but doesn't Google have storage? Yeah. So explain that relationship and why that, that is complementary. Yeah. And not just some kind of divergence from your strategy. >>Yeah. Yeah. No. So I think the, the idea here is NetApp, the NetApp platform, living on-prem, you know, for, for so many years, it's, it's built a lot of capabilities that customers take advantage of. Right. So for example, it has the SnapMirror capabilities that enable, you know, instant DR between locations. And customers, when they think of the cloud, they are also thinking of heterogeneous contexts where some of the infrastructure still needs to live on prem. So, you know, they have the DR going on from the on-prem side, using SnapMirror, into Google cloud. And so, you know, it enables that entry point into the cloud. And so we believe, you know, partnering with NetApp kind of enables this high performance, you know, high, you know, reliability and also enables the customers to meet regulatory needs for, you know, the DR and data protection that they're looking for. >>And NetApp, obviously a big VMware partner as well. So I can take that partnership with VMware and NetApp into the Google cloud. >>Correct. Yeah. Yeah. It's all about leverage.
Like I said, you know, meeting customers where they already are and ensuring that we smoothen their journey into the future, rather than making it like a single step, you know, quantum leap, so to speak, between two worlds. You know, I like to say, for the, for the longest time the cloud was being presented as a false choice between, you know, the infrastructure of the past and the infrastructure of the future, like the red pill and the blue pill. Right. And, you know, I like to say we've brought into this context the purple pill. Right. Which gives you really the best of both worlds. >>Yeah. And this is a tailwind for you guys now, and I wanna get your thoughts on this and your differentiation around multi-cloud, that's around the corner. Yeah. I mean, everyone now recognizes at least multi-cloud's a reality. People have workloads on AWS, Azure and GCP. That is technically multi-cloud. Yeah. Now the notion of spanning applications across clouds is coming, certainly hybrid cloud is a steady state, which is essentially DevOps on prem or edge, and in the cloud. So, so you have, now, the recognition that's here. You guys are positioned well for this. How is that evolving and how are you positioning yourself, and how are you differentiating, as clients start thinking, hey, you know what, I can start running things on AWS and GCP. Yeah. And on-prem in a really kind of a distributed way. Yeah. With abstractions, and these things that people are talking about, super cloud, what we call it. And, and this is really the conversation. Okay. What does that next future around the corner architecture look like? And how do you guys fit in? Because this is an opportunity for you guys. It's almost, it's almost, it's like Wayne Gretzky, the puck is coming to you. Yeah. Yeah. It seems that way to me. What, how do you respond to >>That? Yeah, no, I think, you know, Raghu said it yesterday. Right.
It's all about being cloud smart in this new heterogeneous world. I think Google cloud has always been the most open and the most customer oriented cloud. And the reason I say that is because, you know, looking at like our Kubernetes platform, right. What we've enabled with Kubernetes and Anthos is the ability for a customer to run containerized infrastructure in the same consistent manner, no matter what the platform. So while, you know, Kubernetes runs on GKE, you can run using Anthos on the VMware platform, and you can run using Anthos on any other cloud on the planet, including AWS and Azure. And, and so, you know, we, we take a very open, we've taken an open approach with Kubernetes to begin with. But, you know, the, the fact that, you know, with Anthos and this multicloud management experience that we can provide customers, we are, we are letting customers get the full freedom and advantage of what multicloud has to offer. And I like to say, you know, VMware is the ES of ISAs, right. Cause cuz if you think about it, it's the only hypervisor that you can run in the same consistent manner, take the same image and run it on any of the providers. Right. And you can, you know, link it, you know, with the L2 extensions and create a fabric that spans the world and, and, and multiple >>Products, with, with almost every company using VMware. >>That's pretty much, that's right. It's the largest, like the VMware network of, of infrastructure is the largest network on the planet. Right. And so, so it's, it's truly about enabling customer choice. We believe that every cloud, you know, brings its advantages, and, you know, at the end of the day, the technology capabilities of the provider, the differentiation of the provider, need to stand on their merit. And so, you know, we truly embrace this notion of multicloud. >>Those ops guys have to connect to opportunities to connect to you, you guys, in, in the cloud. >>Yeah.
Absolutely. >>Like to ask you a question sort of about database philosophy and maybe, maybe futures a little bit. There seems to be two camps. I mean, you've got multiple databases, you got Spanner for, you know, kind of a globally distributed database. You've got BigQuery for analytics. There seems to be a trend in the industry for some providers to say, okay, let's, let's converge the transactions and analytics and kind of maybe eliminate the need to do a lot of ELT-ing, and others are saying, no, no, we want to be, be, you know, really precise and distinct with our capabilities and, and, and have a bespoke set of capabilities, right. Right tool for the right job, let's call it. What's Google's philosophy in that regard? And, and how do you think about database in the future? >>So, so I think, you know, when it comes to, you know, something as general and as complex as data, right, you know, data lives in all shapes and forms, it, it moves at various velocities, it moves at various scales. And so, you know, we truly believe that, you know, customers should have the flexibility and freedom to put things together using, you know, these various contexts and, and, you know, build the right set of outcomes for themselves. So, you know, we, we provide Cloud SQL, right, where customers can run their own, you know, dedicated infrastructure, fully managed and operated by Google at a high level of SLA compared to any other way of doing it. We have a data warehouse born in the cloud, BigQuery, which enables zero ops, you know, zero touch, you know, instant, you know, high performance analytics at scale. You know, Spanner gives customers high levels of reliability and redundancy in, in, in a worldwide context, with, with, with extreme levels of innovation, you know, that happens across different instances. Right?
So I, you know, I, we, we do think that, you know, data moves at different scales and, and different velocities, and, you know, customers have a complex set of needs. And, and so our portfolio of database services put together can truly address all ends of the spectrum. >>Yeah. And we've certainly been following you guys at CNCF and the work that Google cloud's doing. Extremely strong technical people. Yeah. Really open source focused, great products, technology. You guys do a great job. And I, I would imagine, and it's clear, that VMware is an opportunity for you guys, given the DNA of their customer base. The installed base is huge. You guys have that nice potential connection where these customers are kind of going where the puck is going. You guys are there. Now for the next couple minutes, give a, give a plug for Google cloud to the VMware customer base out there. Yeah. Why Google cloud, why now, what's in it for them? What's the, what's the value proposition? Give the, give the plug for Google cloud to the VMware community. >>Absolutely. So, so I think, you know, especially with VMware Engine, what we've built, you know, is truly like a cloud native, next generation enterprise platform. Right. And it does three specific things, right? It gives you a cloud optimized experience, right? Like the, the idea being, you know, self-service, efficiencies, economies, you know, operational benefits, you get that from the platform. And a customer like Mitel was able to take advantage of that, being able to use the same platform that they were running in their co-located context and migrate more than a thousand VMs in less than 90 days, something that they weren't able to do for, for over two years. The second aspect of our, you know, our transformation journey that we enable with this service is cloud integration. What that means is the same VPC experience that you get in the, the, the networking, global networking, that Google cloud has to offer.
The VMware platform is fully integrated into that. And so the benefits of, you know, having a subnet that can live anywhere in the world, you know, having multi VPC, but more importantly, the benefits of having these Google cloud services like BigQuery and Spanner and cloud operations management at your fingertips, in the same layer three domain. You know, just make an IP call and your data is transformed into BigQuery from your operational databases. And Carrefour, the retailer in Europe, actually was able to do that with our service. And not only that, you know, do the operational transform into BigQuery, you know, from the data gravity living on, on VMware Engine, but they were able to do it in, you know, a cost effective manner. They, they saved, you know, over 40% compared to the, the current context and also lowered the cost and increased the agility of operations at the same time. >>Right. And so for them, this was extremely transformative. And lastly, we believe, in the context of being open, we are also a very partner friendly cloud. And so, you know, customers come bring the VMware platform because of all the, the IT, you know, ecosystem that comes along with it, right. You've got your Veeam or your Zerto or your Rubrik or your Cohesity for data protection and, and backup. You've got security from Fortinet, you know. You've got, you know, like we'd already talked about, NetApp storage. So we, you know, we are open in that technology context. ISVs, you know, fully supported. >>Integrations key. Yeah, >>Yeah, exactly. And, and, you know, that's how you build a platform, right? Yeah. And so, so we enable that. But, but, you know, we also enable customers going into the future, through the AI capabilities and services that are once again available at, at their fingertips.
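As a concrete, simplified illustration of that BigQuery "at your fingertips" point: BigQuery can reach into an operational Cloud SQL database with a federated `EXTERNAL_QUERY`. The connection ID and table names below are made-up placeholders, and the retailer's actual pipeline is not public, so treat this as a sketch of the pattern, not their implementation:

```python
# Hedged sketch: composing a BigQuery federated ("ELT in place") statement
# that reads an operational Cloud SQL database. EXTERNAL_QUERY is BigQuery's
# Cloud SQL federation syntax; all resource names here are placeholders.

def federated_elt_sql(connection_id: str, source_sql: str, dest_table: str) -> str:
    """Build an INSERT ... SELECT that lands operational rows in a warehouse table."""
    escaped = source_sql.replace('"', '\\"')
    return (
        f"INSERT INTO `{dest_table}`\n"
        f'SELECT * FROM EXTERNAL_QUERY("{connection_id}", "{escaped}");'
    )

sql = federated_elt_sql(
    "my-project.us.ops-mysql",             # hypothetical connection resource
    "SELECT order_id, total FROM orders",  # runs on the operational database
    "analytics.orders_snapshot",
)
print(sql)
```

The point of the pattern is that the operational data never leaves the provider's network: the warehouse pulls from the source in place, which is what makes the "same layer three domain" integration meaningful.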
And, you know, as super clouds, we call it, our multi-cloud comes around the corner, you got the edge exploding, you guys do a great job in networking and security, which is well known. What's your view of this super cloud multi-cloud world. What's different about it? Why isn't it just sass on cloud what's, what's this next gen cloud really about it. You had to kind of kind explain that to, to business folks and technical folks out there. Is it, is it something unique? Do you see a, a refactoring? Is it something that does something different? Yeah. What, what doesn't make it just SAS. >>Yeah. Yeah. No, I think that, you know, there's, there's different use cases that customers have have in mind when they, when they think about multi-cloud. I think the first thing is they don't want to have, you know, all eggs in a single basket. Right. And, and so, you know, it, it helps diversify their risk. I mean, and it's a real problem. Like you, you see outages in, you know, in, in availability zones that take out entire businesses. So customers do wanna make sure that they're not, they're, they're able to increase their availability, increase their resiliency through the use of multiple providers, but I think so, so that's like getting the same thing in different contexts, but at the same time, the context is shifting right. There is some, there's some data sources that originate, you know, elsewhere and there, the scale and the velocity of those sources is so vast, you know, you might be producing video from retail stores and, you know, you wanna make sure, you know, this, this security and there's, you know, information awareness built about those sources. >>And so you want to process that data, add the source and take instant decisions with that proximity. 
And that's why we believe with the GC and, you know, with, with both, both the edge versions and the hosted versions, GDC stands for Google, Google distributed cloud, where we bring the benefit and value of Google cloud to different locations on the edge, as well as on-prem. And so I think, you know, those kinds of contexts become important. And so I think, you know, we, you know, we are not only do we need to be open and pervasive, you know, but we also need to be compatible and, and, and also have the proximity to where information lives and value lives. >>Minish. Thanks for coming on the cube here at VMware Explorer, formerly world. Thanks for your time. Thank >>You so much. Okay. >>This is the cube. I'm John for Dave ante live day two coverage here on Moscone west lobby for VMware Explorer. We'll be right back with more after the short break.

Published Date : Aug 31 2022


Keith Norbie, NetApp & Brandon Jackson, CDW | VMware Explore 2022


 

>>Hey everyone. Welcome back to San Francisco. Lisa Martin and Dave Nicholson here. The cube is covering VMware Explore 2022, first year with the new name. There's about seven to 10,000 people here, so folks are excited to be back. I was in the keynote this morning. You probably were too, David. It was standing room only, lots of excitement, lots of news. We're gonna be unpacking some news next. We have Brandon Jackson joining us, SDDC architect at CDW, and Keith Norbie is back, one of our alumni, head of worldwide partner solution sales at NetApp. Guys, welcome back to the program. Hey, thank >>You, reunion week. >>So let's talk about what's going on. Obviously, lots of news this morning, lots of momentum at VMware, lots of momentum at NetApp, CDW. Keith, we'll start with you. Talk about what was announced yesterday, NetApp, VMware, AWS, and what's in it for customers and partners. >>Yeah, it's a new day. I talked about this in a blog that I wrote that, you know, for me, I started out with VMware and NetApp about 15 years ago when the ecosystem was still kind of emerging, back in the ESX 3 days, for those that remember those days. And, and NetApp had a real dominant position because of some of the things that they had delivered with VMware, and we're kind of at that same juncture now, where everyone needs to have, as they talk about today, multi-cloud. And, and there's been some things that people try to get through as they talk about cloud chaos today. It also is in the, some of the realms, the barriers that you don't often see. So releasing this new FSx capability as a supplemental datastore within VMware Cloud on AWS is a real big opportunity. And it's not just a big opportunity for NetApp. It's a big opportunity for the people that actually deliver this for the customers, which is our partners. So for me, it's full circle. I started with a partner, I come back around, and I'm now in a great position to kind of work with our partners.
And they're the real story here with us. Yeah. >>Brandon, talk about the value in this from CDW's perspective. What is the momentum that you and the company are excited to carry forward? >>Yeah, this is super exciting. I've been close to the VMware Cloud on AWS story since its inception. So, you know, almost four years building that practice out at CDW, and it's a great solution, but we spent all this time prior driving people to that HCI type of mentality where, hey, you can just scale the portions that you need, and that wasn't available in the cloud. And although it's a great solution, there's pain points there where it just can become cost prohibitive, because customers see what they need, but that storage piece is a heavy component, and when that adds to what that cluster size needs to be, that's a real problem. With this announcement, right, we can now use those supplemental data stores and be able to shrink that size. So it saves the customer massive amounts of money. I mean, we have like 25 to 50% in savings without sacrificing anything. They're getting the operational efficiency that they know and love from NetApp. They get that control and that experience that they've been using or want to use in VMware Cloud. And they're just combining the two in a very cost friendly package. >>So I have one comment, and that is: finally. >>Right. Absolutely. I, >>We used to refer to it as the devil's triangle of CPU, memory and storage. And if those are, if those are inextricably linked to one another, you want a little bit more storage? Okay, here's your CPU and memory that you can pay for and power and cool that you don't need. No, no, no, no, no, no. I just need, I just need some storage over here. And in the VMware context, think of the affinity that VMware has had with NetApp forever. The irony being that EMC, of course, owned VMware for a period of time, kind of owned their stock. Yeah.
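The storage-bound sizing problem Brandon describes, where the storage demand alone dictates how many hosts a cluster needs, can be sketched with a toy model. This is purely illustrative: the node specs and workload numbers below are made-up assumptions, not actual VMware Cloud on AWS host specs or pricing.

```python
import math

# Hypothetical host specs for the sketch (NOT real VMC-on-AWS numbers).
NODE_VCPU, NODE_RAM_GB, NODE_STORAGE_TB = 72, 512, 15

def nodes_needed(vcpu, ram_gb, storage_tb, supplemental_storage=False):
    """Smallest cluster that satisfies every resource demand.

    Without a supplemental datastore, storage is one of the sizing
    drivers; with one (e.g. external NFS storage), only CPU and RAM
    drive the node count.
    """
    drivers = [vcpu / NODE_VCPU, ram_gb / NODE_RAM_GB]
    if not supplemental_storage:
        drivers.append(storage_tb / NODE_STORAGE_TB)
    return max(2, math.ceil(max(drivers)))  # assume a two-node minimum

# A storage-heavy workload: modest compute, lots of data.
print(nodes_needed(200, 1500, 120))                             # storage-bound
print(nodes_needed(200, 1500, 120, supplemental_storage=True))  # compute-bound
```

In this made-up example the cluster drops from eight hosts to three once storage stops driving the node count, which is the kind of swing behind the 25 to 50% savings mentioned here; real figures depend entirely on actual host specs and workload mix.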
So you have this thing that is fundamentally built around VMFS that just fits perfectly into the filer methodology. Yeah. And now they're back together in the cloud. And, and the thing is, if, if we were, if we were sitting here talking about this 5, 6, 7 years ago, an AWS person would've said we were all crazy. Yeah, yeah. AWS at the time would've said, nah, no, no, no, no. We're gonna figure that out. You, you, you, you guys are just gonna have to go away. It's >>Not lost on me that, you know, it was great seeing and hearing of NetApp in a day one VMware keynote. >>It's amazing. >>That was great. And so we built off that, because the, the, the great thing about kind of where this comes from is, you know, you built that whole HCI or converged infrastructure for simplicity, and everyone wants simplicity. And so this is just another evolution of the story. And as you do so, you know, you've, you've freed up for all the workloads, all the scenarios, all the, all the operational situations that you've wanted to kind of get into. Now, if you can save anywhere from 25 to 50% of the costs of previous, you can unleash a whole nother set of workloads, and do so, by the way, with the same operational consistency from NetApp, in terms of the data that you have on-prem to cloud. Or even if you don't have NetApp on-prem, you know, we have the ways to get it to the cloud and VMware Cloud on AWS, and, and, and basically give you that data simplicity for management. >>And, but again, it isn't just the NetApp part of this. There is, as everyone knows with cloud, a whole layer of infrastructure around the security and networking. There's a ton of work that gets done from the partner side to look at applications and workloads and understand sort of what's the composition of those, which ones are ready for the cloud first. You know, seeing, you know, the AWS person with the SAP title, that's a big workload. Obviously that's making this journey to the cloud, along with all the rest of them.
That's what the partners deliver. NetApp has done everything they can do to make that as frictionless as possible in the marketplace as a first party service, and now through VMware Cloud. So we've done all we can do on, on that factor. Now it's the partners that could take it. And by the way, the reaction that we've seen, kind of, in some of the private previews we're working has been incredible. These guys bring really the true superhero muscle to what organizations are gonna need to have to take those workloads to VMware Cloud and, and evolve it into this new cloud era that they're talking about at the keynote today. >>Yeah, don't get us wrong. We love vSphere 8, and vSAN, vSAN 8 in particular, but there's a huge market need for this, for what you guys are delivering. >>Talk to us, Brandon, from your perspective about being able to, to partner, to, to have the powerhouses of NetApp, VMware and AWS, and in terms of being able to meet your customers where they are and what they want. >>And I, that's huge, right? That the solution allows these things to come together in a seamless way, right? So we get the, the flexibility of cloud, we get the scalability of easy storage now, in a way we didn't have before, and we get the power that's VMware, right, in that, in the virtualization platform. And that makes it easy for a customer to say, I need to be somewhere else. And maybe that's not, that's not a colo anymore. That's not a secondary data center. I want to be in the cloud, but I wanna do it on my terms. I wanna do it so it works for me as a customer. This solution has that, right? And, and we come in as a partner and we look at, we kind of call it the full stack approach, where we really look at the entire, you know, ecosystem that we're talking.
>>So from the application all the way down to the infrastructure and even below, and figure out how that's gonna work best for our customers, and putting things together with the native cloud services, then with their VMware environment living on VMware Cloud on AWS, leveraging storage with, you know, with the, the FSxN, so they can easily grow their storage and use all those operational efficiencies and the things that they love about NetApp already. And from a DR use case, we can replicate from a NetApp to NetApp. And it's just, it makes it so easy to have that conversation with the customers, and it just clicks. And like, this is what I need. This is what I've been looking for. And all wrapped up in a really easy package. >>No wonder Dave's comment was finally right. >>Oh, absolutely. I mean, we've been, again, you know, we talked about the HCI, like that made sense. And three or four years ago, maybe even a little bit longer, right, that click, same thing was like, oh my gosh, this is the way infrastructure should work. And we're just having that same Nirvana moment, that this is how easy cloud infrastructure can work, and that I can have that storage without sacrificing the cost of throwing more nodes into my cluster to be able to do so. >>Yeah. I, I've just worked with so many customers who struggle to get to where they want to be, and this is something that just feels like a nice worn-in pair of shoes or jeans. To folks who, right now, you know, look, the majority of IT spend is still on premises, right? So the typical deployment of VMware today is often VMware with NetApp appliances providing file storage. So this is something that I imagine will help accelerate some of your customers' moves. >>It absolutely will. And in fact, I have three customers offhand that I know that I've been like, not wanting to say anything, like, let's talk next week. Right?
There's this, there may be something we can talk about after Explore, waiting for the announcement, because we've been working with NetApp and, and doing some of the private preview stuff. Yeah. And our engineering teams, working with your engineering teams to build this out, so that when the announcement came out yesterday, we can go back and say, okay, now let's have that conversation. Now let's talk about what this looks like. >>Where are you having customer conversations? Is this strictly an IT conversation? Has this elevated up the stack, especially as we've seen the massive, I call it, cloud migration adoption of the last couple of years? >>I, I, I'll speak frankly from the partner level. It is an elevated conversation. So we're not only talking, at least I'm not only talking to IT administrators, directors, C-levels. Like, this is a story that resonates because it's about business value, right? I have an initiative, I have a goal, and that goal is wrapped into that IT solution, and typically has some sort of resource or financial cost to it. We want to hear that story. And so it resonates when we can talk about how you can achieve your goals, do it in a way with a specific solution that encompasses everything at a price point that you'll like, and then that can flow down to the directors and the IT administrators, and we can start talking about, you know, turning the screws and the knobs. >>Yeah. And for us, it does start with a partner, because the reality is that's who the customers all engage. And the reality is there's not just one partner type, there's many. You know, in fact, the biggest thing that we've been really modernizing is how to address the different partner types. Cuz you obviously have the Accentures of the world that are the big GSIs, the big SIs, you have folks that are hosting providers, you have Equinix in the middle of that.
You've got partners that just do services, there might be influence-only partners that are influencing the, the design. And so if you look up and down between, you know, VMware's partner ecosystem and NetApp's partner ecosystem, they overlap pretty well. But there's this factor with AWS about, you know, both born-in-the-cloud partners and partners, you know, like CDW, that have really, you know, taken the step forward to be relevant in that phase going forward. >>And that's what's exciting to us, is to see that kind of come forward. So when something like FSxN comes forward in this VMware Cloud on AWS scenario, they can take it and, and just have instant ignition with it. And for us, that's what it's about. Our job is really just to remove friction, back what they do, and get outta the way, help them win. And last week we were in Chicago at the AWS re:Invent thing, and seeing AWS with another partner in their whole briefing, and how they came to life with the, with this whole anticipation for this week, you know, it's, it's, all the partners are very excited for it. So we're just gonna fuel that. And you know, I often wonder, we got the, the t-shirt that says, you know, two's company, three is a cloud. Maybe it should have been four, because it takes the, the partner for the, the completion. >>We appreciate that for sure. >>It does. It sounds like there's tremendous momentum in the market, an appetite across all three companies, four, if you include CDW. So in terms of, of the selling motion, it sounds like you've got folks that are gonna be eating out of your pocket, who've been waiting for this for quite a while. Yeah. >>I think, to the analogy used earlier, it's nice when the tires are already on the Ferrari, right? This thing could just go, yes. And we've got people that we're already talking to that this fits. We've got some great go to market strategies.
As we start doing partner and sales enablement to make sure that our people behind the scenes are telling the story in the way that we want it told jointly, so that all of us can, you know, come together and have that aligned, common message to really, you know, make this win and make this pop. >>One correction though: technically we sponsor Aston Martin. So it's not a Ferrari. It's an Aston Martin. There >>You go. >>That's right. Point taken, not a car guy. Can >>You, can you talk a little bit, Brandon, about the, the routes to market and the, the GTM that you guys are working on together, even at a high level? Yeah. >>At a high level, we've already had some meetings talking about how we can get this message out. The nice thing about this is it's not relegated to a single industry vertical. It's not a single type of customer. We see this across the board, and, and certainly with any of our cloud infrastructure solutions, it seems very even, from a regional standpoint and an industry vertical standpoint. So really it's just about how to get our sellers, you know, that get that message to them. So we had meetings here this week. We've been talking to your teams, oh, for probably six weeks now, on what's that gonna look like. You know, what type of events are we gonna hold? Do we wanna do some type of road show? Yeah. We've done that with FlexPod very successfully a few years ago, where our teams, working with your teams and VMware, we all came out and, and showed this to the world, and doing something similar with this to show how easy it is to add supplemental storage to VMC, and just get that out to the masses through events, maybe through sales webinars. I mean, we're still in this world where maybe it's more virtual than in person, but we're starting to shift back. But it's just about telling the message and, and showing, hey, here's how you do it. Come talk to us. We can help you. And we want to help >>Talk about the messaging from a, a multi-cloud perspective.
Here we are at VMware Explore, the theme, the center of the multi-cloud universe. How is this solution, from NetApp's perspective and then CDW's, how is it an enabler for customers, so many of whom are living in the multi-cloud world by default? >>Yeah. And I think the big subtlety there that, that maybe was, was missed, was the private cloud being just, sort of, their cloud. The reality of that is probably a little bit short of, you know, of what people kind of deal with in their on-prem data centers, just because of some of the applications, data sets they're trying to work through for AI/ML and analytics. But that's what the partner's great at, is, is helping them kind of leap forward and actually realize the on-prem to become the private cloud, and really operate in this multi-cloud scenario and, and get beyond this cloud chaos factor. So again, you know, the beautiful part about all this is that, you know, the, the, the never ending sort of options, the optionality that you have on security, on networking, on applications, data sets, locations, governance, these are all factors that the partner deals with way better than we could even think of. So for us, it's really about just trying to connect with them, get their feedback, and actually design in from the partner to take something like this and make it something that works for them.
I can't help thinking back as I think back on the history of, of NetApp and VMware and CDW, there was a time when, when things were bad, you get rid of marketing. And then, and then after that, it was definitely alliances and partnerships cuz who the heck are those people right now? Everything is an ecosystem. Yeah. Everything is an ecosystem. So talk about how CW CDW has changed through its history in terms of where CDW has come from. >>Sure. And you >>Know, not everybody knows that CDW is involved in as sophisticated in area as you are. >>And, and that's true. I mean, sometimes it's tongue in cheek, but you know, we've fulfilled a lot of needs throughout the years and, and maybe at times just a fulfillment or a box pusher, but we're really so much more that, and we've been so much more than that for years. And through some of our acquisitions, you know, Sirius last year I G N w our international arm with Kway when it became CDW, K we have a, you know, a premier experience around consultative services. And that we talk about that full stack, right? Yeah. From the application to the cloud, to the infrastructure, to the security around it, to the networking, we can help out with all of that. And we've got experts and, and, you know, on the presales and postsales that, that's what they live for. It's their passion. And working with partners close in hand, that that's, we've had great relationships with, with NetApp. And again, I've been with CDW for over 12 years. And in all 12 of those years, I've been very close to NetApp in one way, shape or form, and to see how we work together to solve our customers' challenges. It's less about what we want to do. It's more about what we're doing to help the customer. And, and I've seen that day in and day out from our relationship and, you know, kind of our partnership. >>So say we're back here in six months, or maybe we're back here at reinvent, talking with you guys and a customer. 
What are some of the outcomes that, at this stage, you were expecting customers to be able to achieve? >>Be able to do more, put more out there, right? To not be limited by the construct of, I only have X amount of space, and so maybe the use case or the initiative is, is wrapped around that. Let's turn that around and say, that's, you're limitless. Let's move what you need. And you're not gonna have to worry so much about the cost, the way you did six months ago or seven months ago, or six months and a day ago, so that you can do more with it. And if we have an X amount in our bucket in, in July, we could do 200 VMs. You know, and now six months later, we've done 500 VMs, because of those efficiency savings, because of that cost savings and using supplemental storage. So I, I see that being a growth factor, and being, say, hey, this was easy. We always knew this was a solution we liked, but now it's easy and bigger. >>Yeah. I think on our end of the spectrum, I'll just say what Phil Brons would say. He was in the previous segment, and as he said previously, this could go pretty quick. Folks that have wanted to do this now know this is something to do and that they can go at it. The part we already know, the partners are very much in, like, ready to go mode. They've been waiting for this day to just get the announcement out so they can kind of get going. And it's funny, because, you know, when we've presented, we've kind of presented some of the tech behind what we're doing and then the ROI TCO calculator last, and everyone's feedback is the same. They said you should just lead with the calculator. So then, yeah, you can see exactly how much money you save. In fact, one of the jokes is there's not many times you've saved this much money in IT before. And so it's, it's a big wow factor. >>Big wow factor, big differentiator, guys.
Thank you so much for joining David and me, talking about what NetApp, VMware and AWS are doing, how it's being delivered through CDW, and the evolution of all these companies. We're excited to watch the solution. We better let you go, because you probably have a ton of meetings. People are just chomping at the bit to get this. Yeah. >>It's, it's exciting times. I'm loving it, being here and being able to talk about this, finally, in a public setting. So this has been great. >>Awesome, guys. Thank you again for your time. We appreciate it. Yep. For our guests and Dave Nicholson, I'm Lisa Martin. You're watching the cube, live from VMware Explore 2022. We'll be back after a short break, stick around.

Published Date : Aug 31 2022


Patrick Osborne, HPE | VeeamON 2022


 

(digital pulsing music) >> We're back at VeeamON 2022. My name is Dave Vellante. I'm here with my co-host David Nicholson. I've got another Mass boy coming on. Patrick Osborne is the vice president of the storage business unit at HPE. Good to see you again, my friend. It's been a long time. >> It's been way too long, thank you very much for having me. >> I can't even remember the last time we saw each other. It might have been in our studios on the East Coast. Well, it's good to be here with you. Lots have been going on, of course, we've been following from afar, but give us the update, what's new with HPE? We've done some stuff on GreenLake, we've covered that pretty extensively, and it looks like you got some momentum there. >> Quite a bit of momentum, both on the technology front and certainly the customer acquisition front. The message is certainly resonating with our customers. GreenLake is, that's the transformation that's fueling the future of Hewlett Packard Enterprise. So the momentum is great on the technology side. We're at well over 50 services that we're providing on the GreenLake platform, everything from solutions and workloads to compute, networking and storage. So it's been really fantastic to see the platform being able to really delight the customers. And then the momentum on the sales and the customer acquisition side, the customers are voting with their dollars, so they're very happy with the platform, certainly from an operational perspective and a financial consumption perspective. And so our target goal, which we've said a bunch of times, is we want to be the hyperscaler of on-prem. We want to provide that customer experience to the folks that are investing in the platform. It's going really well.
I wrote a piece in 2010 called At Your Storage Service, saying the future of storage and infrastructure as a service, blah, blah, blah. Now, of course, you don't want to over-rotate when there's no market, there was no market for GreenLake in 2010. Do you feel like your timing was right on, a little bit late, little bit early? Looking back now, how do you feel about that? >> Well, it's funny you say that. On the timing side, we've seen iterations of this stops and start forever. >> That's true. Financial gimmicks. >> I started my career at Sun Microsystems. We talked about the big freaking Web-tone switch and a lot of the network is the computer. You saw storage networks, you've seen a lot, a ton of iterations in this category, and so, I think the timing's right right now. Obviously, the folks in the hyperscaler class have proved out that this is something that's working. I think for us, the big thing that's really resonating with the customers is they want the operational model and they want the consumption model that they're getting from that as a service experience, but they still are going to run a number of their workloads on-prem and that's the best place to do it for them economically and we've proved that out. So I think the time is here to have that bifurcated experience from operational and financial perspective and in the past, the technology wasn't there and the ability to deliver that for the customers in a manner that was useful wasn't there. So I think the timing's perfect right now to provide them. >> As you know, theCUBE has had a presence at HPE Discover. Previous, even HP Discover and same with Veeam. But we got a long history with HP/HPE. 
When Hewlett Packard split into two companies, we made the observation, wow, this opens up a whole new ecosystem opportunity for HPE generally, and in the storage business specifically, especially in data protection and backup, and the Veeam relationship, the ink wasn't dry and all of a sudden you guys were partnering, doing joint activities, and so talk about how that relationship has evolved. >> From my perspective, we've always been a big partnering company, both on the route to market side, so our distributors and partners, and we work with them in a big channel business. And then on the software partnership side, that's always evolving and growing. We're a very open ecosystem and we like to provide choice for our customers and I think, at the end of the day, we've got a lot of things that we work on jointly, so we have a great value prop. First phase of that relationship was partnering, we've got a full boat of product integrations that we do for customers. The second was a lot of special sauce that we do for our customers for co-integration and co-development. We had a huge session today with Rick Vanover and Frederico on our team here to talk about ransomware. We have big customers suffering from this plague right now and we've done a lot together on the engineering side to provide a very, very well-engineered, well thought out process to help avoid some of these things. And so that wave, too, of how do we do a ton of co-innovation together to really delight our customers and help them run their businesses, and I think the evolution of where we're going now, we have a lot of things that are very similar, strategically, in terms of, we all talk about data services and outcomes for our customers. So at the end of the day, when we think about GreenLake, like our virtual machine backup as a service or disaster recovery, it's all about what workloads are you running, what are the most important ones, where do you need help protecting that data? 
And essentially, how can we provide that outcome to you and you pay for it as an outcome. And so we have a lot of things that we're working on together in that space. >> Let's take a little bit of a closer look at that. First of all, I'm from California, so I'm having a really hard time understanding what either of you were saying. Your accents are so thick. >> We could talk in Boston. >> Your accents are so thick. (Dave laughing) I could barely, but I know I heard you say something about Veeam at one point. Take a closer look at that. What does that look like from a ransomware perspective in terms of this concept of air gapping or immutable volumes and just as an aside, it seems like Veeam is a perfect partnership for you since customers obviously are going to be in hybrid mode for a long time and Veeam overlays that nicely. But what does it look like specifically? Immutable, air gap, some of the things we've been hearing a lot about. >> I'm exec sponsor for a number of big HPE customers and I'll give you an example. One of our customers, they have their own cloud service for time management and essentially they were exploited and they're not able to provide their service. It has a huge ripple effect, if you think about it, on their inability to do their service and then how that affects their customers and their customers' employees and all that. It's a disaster, no pun intended. And the thing is, we learn from that and we can put together really good architectures and best practices. So we're talking today about 3-2-1-1, so having three copies of your data, two different types of media, one offsite copy and one offline copy. And now we're thinking about all the things you need to do to mitigate against all the different ways that people are going to exploit you. We've seen it all. 
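The 3-2-1-1 rule Patrick lays out (three copies of your data, two media types, one offsite copy, one offline copy) is concrete enough to express as a quick policy check. This is a minimal illustrative sketch, not an HPE or Veeam tool, and the copy descriptors are made up for the example:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str      # e.g. "flash", "disk", "object", "tape"
    offsite: bool   # stored at a different physical location
    offline: bool   # disconnected from the network (air-gapped)

def satisfies_3_2_1_1(copies):
    """Check the 3-2-1-1 rule: >=3 copies, >=2 media types,
    >=1 offsite copy, >=1 offline (air-gapped) copy."""
    return (len(copies) >= 3
            and len({c.media for c in copies}) >= 2
            and any(c.offsite for c in copies)
            and any(c.offline for c in copies))

# Production flash + on-prem disk backup + tape in a vault
copies = [
    BackupCopy("flash", offsite=False, offline=False),
    BackupCopy("disk", offsite=False, offline=False),
    BackupCopy("tape", offsite=True, offline=True),
]
print(satisfies_3_2_1_1(copies))  # True: 3 copies, 3 media, tape is offsite and offline
```

The offline tape in a vault is what defeats the attacks Patrick goes on to describe, since a copy that is off the network cannot be encrypted or deleted remotely.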
You have keys that are erased, primary storage that is compromised and encrypted, people that come in and delete your backup catalog, they delete your backups, they delete your snapshots. So they get it down to essentially, "I'm either going to have one set of data, it's encrypted, I'm going to make you pay for it," and 40 percent of the time they pay and they get the data back, 60 percent of the time they pay and they get maybe some of the data back. But for the most part, you're not getting your data back. The best thing that we can do for our customers is to come with a very prescriptive set of T-shirt configuration sizes, standardization, best practices on how they can take this entire ecosystem together and make it really easy for the customers to implement. I wouldn't say it's ever bulletproof, but essentially, do as much as you can to avoid having to pay that ransomware. >> So 3-2-1-1, three copies, meaning local. >> Patrick: Yeah. >> So you can do fast recovery if you need to. Two different types of media, so tape fits in here? Not necessarily flash and spinning disks. Could it be tape? >> A lot of times we have customers that have almost four different types. So they are running their production on flash. We have Alletras with HPE networking and servers running specific workloads, high performance. We have secondary storage on-prem for fast recovery and then we have some form of offsite and offline. Offsite could be object storage in the cloud and then offline would be an actual tape backup. The tape is out of the tape library in a vault so no one can actually access it through the network and so it's a physical copy that's offline. So you always have something to restore. >> Patrick, where's the momentum today, specifically, we're at VeeamON, but with regard to the Veeam partnership, is it security and ransomware, which is a new thing for this world. The last two years, it's really come to the top. Is it cloud migration? 
Is it data services and data management? Where's the momentum, all of the above, but maybe you could help us parse that. >> What we're seeing here at Hewlett Packard Enterprise, especially through GreenLake, is just an overall focus on data services. So what we're doing is we've got great platforms, we always had. HPE is known as an engineering company. We have fantastic products and solutions that customers love. What we're doing right now is taking, essentially, a lot of the beauty of those products and elevating them into an operational experience in the cloud, so you have a set of platforms that you want to run, you have mission critical platforms, business critical, secondary storage, archival, data analytics and I want to be able to manage those from the cloud. So fleet management, HCI management, protocol management, block service, what have you, and then I want a set of abstracted data services that are on top of it and that's essentially things like disaster recovery, backup, data immutability, data vision, understanding what kind of data you have, and so we'll be able to provide those services that are essentially abstracted from the platforms themselves that run across multiple types of platforms. We can charge for them on an outcome basis. They're based on consumption, so you think about something like DR, you have a small set of VMs that you want to protect with a very tight RPO, you can pay for those 100 VMs that are the most important that you have. So for us driving that operational experience and then the cloud data service experience into GreenLake gives customers a really, gives them a cloud experience. >> So have you heard the term super cloud? >> Patrick: Yeah. (chuckles) >> Have you? >> Patrick: Absolutely. >> It's a term that we kind of coined, but I want to ask you about it specifically, in terms of how it fits into your strategy. 
So the idea is, and you kind of just described it, I think, whether your data is on-prem, it's in the cloud, multiple clouds, we'll talk about the edge later, but you're hiding the underlying complexities of the cloud's APIs and primitives, you're taking care of that for your customers, irrespective of physical location. It's the common experience across all those platforms. Is that a reasonable vision, maybe, even from a technical standpoint, is it part of HPE strategy and what does it take to actually do that, 'cause it sounds nice, but it's probably pretty intense? >> So the proof's in the pudding for us. We have a number of platforms that are providing, whether it's compute or networking or storage, running those workloads that they plumb up into the cloud, they have an operational experience in the cloud and now they have data services that are running in the cloud for us in GreenLake. So it's a reality. We have a number of platforms that support that. We're going to have a set of big announcements coming up at HPE Discover. So we led with Alletra and we have a block service, we have VM backup as a service and DR on top of that. That's something that we're providing today. GreenLake has, I think, actually over 60 services right now that we're providing in the GreenLake platform itself. Everything from security, single sign on, customer IDs, everything, so it's real. We have the proof points for it. >> So, GreenLake is essentially, I've said it, it's the HPE cloud. Is that a fair statement? >> A hundred percent. >> You're redefining cloud. And one of the hallmarks of cloud is ecosystem, and I want to talk more about that. You've got to grow that ecosystem to be successful in cloud, no question about it. And HPE's got the chops to do that. What percent of those services are HPE versus ecosystem partners and how do you see that evolving over time? >> We have a good number of services that are based on HPE, our tried and true intellectual property. 
>> You got good tech. >> Absolutely, so a number of that. And then we have partners in GreenLake today. We have a pretty big ecosystem and it's evolving, too. So we have customers and partners that are focused, our customers want our focus on data services. We have a number of opportunities and partnerships around data analytics. As you know, that's a really dynamic space. A lot of folks providing support on open source, analytics and that's a fast moving ecosystem, so we want to support that. We've seen a lot of interest in security. Being able to bring in security companies that are focused on data security. Data analytics to understand what's in your data from a customer perspective, how to secure that. So we have a pretty big ecosystem there. Just like our path at HPE, we've always had a really strong partnership with tons of software companies and we're going to continue to do that with GreenLake. >> You guys have been partner-friendly, I'll give you that. I'm going to ask Antonio this at Discover in a couple of weeks, but I want to ask you, when you think about, again, to go back to AWS as the prototypical cloud, you look at a Snowflake and a Redshift. The Redshift guys probably hate Snowflake, but the EC2 guys love them, sell a lot of compute. Now you as a business unit manager, do you ever see the day where you're side by side with one of your competitors? I'm guessing Antonio would say absolutely. Culturally, how does that play inside of HPE? I'm testing your partner-friendliness. How would you- >> Who will you- >> How do you think about that? >> At the end of the day, for us, the opportunity for us is to delight our customers. So we've always talked about customer choice and how to provide that best outcome. I think the big thing for us is that, from a cost perspective, we've seen a lot of customers coming back to HPE, from a repatriation perspective, for a certain class of workloads. 
From my perspective, we're providing the best infrastructure and the best operational services at the best price at scale for these customers. >> Really? Definitely, culturally, HPE has to, I think you would agree, it has to open up. You might not, you're going to go compete, based on the merit- >> Absolutely. >> of your product and technology. The repatriation thing is interesting. 'Cause I've always been a repatriation skeptic. Are you actually starting to see that in a meaningful way? Do you think you'll see it in the macro numbers? I mean, cloud doesn't seem to be slowing down, the public cloud growth, I mean, it's 35, 40 percent a year. >> We're seeing it in our numbers. We're seeing it in new logo and existing customer acquisition within GreenLake. So it's real for us. >> And they're telling you? Pure cost? >> Cost. >> So it's that simple. >> Cost. >> So, they get the cloud bill, you do, too. I get the email from my CFO, "Why is the cloud bill so high this month?" Part of that is it's consumption-based and it's not predictable. >> And also, too, one of the things that you said around unlocking a lot of the customer's ability from a resourcing perspective, so if we can take care of all the stuff underneath, the under cloud for the customer, the platform, so the storage, the serving, the networking, the automation, the provisioning, the health. As you guys know, we have hundreds of thousands of customers on the Aruba platform. We've got hundreds of thousands of customers calling home through InfoSight. So we can provide a very rich set of analytics, an automated environment, automated health checking, and a very good experience that's going to help them move away from managing boxes to doing operational services with GreenLake. >> We talk about repatriation often. There was a time when I think a lot of us would've agreed that no one who was born in the cloud will ever do anything other than grow in the cloud. 
Are you seeing organizations that were born in the cloud realizing, "Hey, we know what our 80 percent steady state is and we've modeled this. Why rent it when we can own it? Or why rent it here when we can have it as operational cost there?" Are you seeing those? >> We're seeing some of that. We're certainly seeing folks that have a big part of their native or their digital business. It's a cost factor and so I think, one of the other areas, too, that we're seeing is there's a big transformation going on for our partners as well, too, on the sell-through side. So you're starting to see more niche SaaS offerings. You're starting to see more vertically focused offerings from our service provider partners or MSPs. So it's not just an either-or type of situation. You're starting to see now some really, really specific things going on in either verticals, customer segmentation, specific SaaS or data services and for us, it's a really good ecosystem, because we work with our SP partners, our MSP partners, they use our tech, they use our services, they provide services to our joint customers. For example, I know you guys have talked to iland here in the past. It's a great example for us for customers that are looking for DR as a service, backup as a service, hosting, so it's a nice triangle for us to be able to please those customers. >> They're coming on tomorrow. They're on 11/11. I think you're right on. The one, I think, obvious place where this repatriation could happen, it's the Sarah Wang and Martin Casado scenario where a SaaS company's cost of goods sold becomes dominated by cloud costs. And they say, "Okay, well, maybe, I'm not going to build my own data centers. That's probably not going to happen, but I can go to Equinix and do a colo and I'm going to save a ton of dough, managing my own infrastructure with automation or outsourcing it." So Patrick, got to go. I could talk with you forever. Thank you so much for coming back in theCUBE. >> Always a pleasure. 
>> Go, Celts. How you feeling about the, we always talk sports here at VeeamON. How are you feeling about the Celts today? >> My original call today was Celtics in six, but we'll see what happens. >> Stephen, you like the Celtics? Celtics in six? >> Stephen: Celtics in six. >> Even though tonight, they got a little- >> Stephen: Still believe, you got to believe. >> All right, I believe. >> It'd be better than Miami's Mickey Mouse run there in the bubble, a lot of asterisks attached to that. (Dave laughing) >> I love it. You got to believe here on theCUBE. All right, keep it right- >> I don't care. >> Keep it right there. You don't care, 'cause you're not from a sports town. Where are you in California? >> We have no sports. >> All right, keep it right there. This is theCUBE's coverage of VeeamON 2022. Dave Vellante for Dave Nicholson. We'll be right back. (digital music)

Published Date : May 18 2022

Greg Rokita, Edmunds.com & Joel Minnick, Databricks | AWS re:Invent 2021


 

>>Welcome back to theCUBE's coverage of AWS re:Invent 2021, the industry's most important hybrid event. Very few hybrid events, of course, in the last two years. And theCUBE is excited to be here. Uh, this is our ninth year covering AWS re:Invent, and this is the 10th re:Invent. We're here with Joel Minnick, who is the vice president of product and partner marketing at smoking hot company Databricks, and Greg Rokita, who is executive director of technology at Edmunds. If you're buying a car or leasing a car, you gotta go to Edmunds. We're gonna talk about busting data silos, guys. Great to see you again. >>Welcome. Welcome. Glad to be here. >>All right. So Joel, what the heck is a lakehouse? This is all over the place. Everybody's talking about lakehouse. What is it? >>Indeed. Well, in a nutshell, a lakehouse is the ability to have one unified platform to handle all of your traditional analytics workloads. So your BI and reporting, traditionally the workloads that you would have for your data warehouse, on the same platform as the workloads that you would have for data science and machine learning. And so if you think about kind of the way that, uh, most organizations have built their infrastructure in the cloud today, what we have is generally customers will land all their data in a data lake and a data lake is fantastic because it's low cost, it's open. It's able to handle lots of different kinds of data. Um, but the challenges that data lakes have is that they don't necessarily scale very well. It's very hard to govern data in a data lake. It's very hard to manage that data in a data lake. 
And they do that because data warehouses are really great at being able to have really great scale, have really great performance. The challenge though, is that data warehouses really only work for structured data. And regardless of what kind of data warehouse you adopt, all data warehouse and platforms today are built on some kind of proprietary format. So once you've put that data into the data warehouse, that's, that is kind of what you're locked into. The promise of the data lake house was to say, look, what if we could strip away all of that complexity and having to move data back and forth between all these different systems and keep the data exactly where it is today and where it is today is in the data lake. >>And then being able to apply a transaction layer on top of that. And the Databricks case, we do that through a technology and open source technology called data lake, or sorry, Delta lake. And what Delta lake allows us to do is when you need it, apply that performance, that reliability, that quality, that scale that you would expect out of a data warehouse directly on your data lake. And if I can do that, then what I'm able to do now is operate from one single source of truth that handles all of my analytics workloads, both my traditional analytics workloads and my data science and machine learning workloads, and being able to have all of those workloads on one common platform. It means that now not only do I get much, much more simple in the way that my infrastructure works and therefore able to operate at much lower costs, able to get things to production much, much faster. >>Um, but I'm also able to now to leverage open source in a much bigger way being that lake house is inherently built on an open platform. Okay. So I'm no longer locked into any kind of data format. 
And finally, probably one of the most, uh, lasting benefits of a lakehouse is that all the roles that have to touch my data, from my data engineers to my data analysts to my data scientists, are all working on the same data, which means that collaboration that has to happen to go answer really hard problems with data I'm now able to do much, much more easily, because those silos that traditionally exist inside of my environment no longer have to be there. And so lakehouse is the promise to have one single source of truth, one unified platform for all of my data. Okay, >>Great. Thank you for that very cogent description of what a lakehouse is. Now I want to hear from the customer to see, okay, is what he just said true. So actually, let me ask you this, Greg, because the other problem that you didn't mention about the data lake is that with no schema enforcement on write, it gets messy, and Databricks, I think, correct me if I'm wrong, has begun to solve that problem, right? Through a series of tooling and AI. That's what Delta Lake does. It's a managed service. Everybody thought you were going to be like the Cloudera of Spark, but it was a brilliant move to create a managed service. And it's worked great. Now everybody has a managed service, but so can you paint a picture at Edmunds as to what you're doing with it? Maybe take us through your journey, the early days of Hadoop, a data lake. Oh, that sounds good, throw it in there. Paint a picture as to how you guys are using data and then tie it into what y'all just said. >>As Joel said, it simplifies the architecture quite a bit. Um, in a modern enterprise, you have to deal with a variety of different data sources, structured, semi-structured and unstructured in the form of images and videos. And with Delta Lake and the lakehouse, you can have one system that handles all those data sources. 
So what that does is that basically removes the issue of multiple systems that you have to administer. It lowers the cost, and it provides consistency. If you have multiple systems that deal with data, the issue always arises as to which data has to be loaded into which system. And then you have issues with consistency. Once you have issues with consistency, business users and analysts will stop trusting your data. So that was very critical for us, to unify the system of data handling in the one place. >>Additionally, you have massive scalability. So, um, I went to the talk from Apple saying that, you know, they can process two years' worth of data instead of just two days. And at Edmunds, we have this use case of backfilling the data. So often we change the logic and we need to reprocess massive amounts of data. With the lakehouse, we can reprocess months' worth of data in a matter of minutes or hours. And additionally, the data lakehouse is based on open standards, like Parquet, that allowed us to basically put open source and third-party tools on top of the Delta lakehouse. Um, for example, Amundsen, we use Amundsen for data discovery, and finally, uh, the lakehouse approach allows for different skillsets of people to work on the same source data. We have analysts, we have, uh, data engineers, we have statisticians and data scientists using their own programming languages, but working on the same core data sets without worrying about duplicating data and consistency issues between the teams. >>So what, what are the primary use cases where you're using the lakehouse, Lakehouse Delta? >>So, um, we have several use cases. One of the more interesting and important ones is vehicle pricing. You have used Edmunds. 
So, you know, you go to our website and you use it to research vehicles, but it turns out that pricing and knowing whether you're getting a good or bad deal is critical for our, uh, for our business. So with the lakehouse, we were able to develop a data pipeline that ingests the transactions, curates the transactions, cleans them, and then feeds that curated feed into the machine learning model that is also deployed on the lakehouse. So you have one system that handles this huge complexity. And, um, as you know, it's very hard to find unicorns that know all those technologies, but because we have the flexibility of using Scala, Java, uh, Python and SQL, we have different people working on different parts of that pipeline on the same system and on the same data. So, um, having the lakehouse really enabled us to be very agile and allowed us to deploy new sources easily when they arrived and fine-tune the model to decrease the error rates for the price prediction. So that process is ongoing and it's, it's a very agile process that kind of takes advantage of the different skill sets of different people on one system. >>Because you know, you guys democratized car buying, well, at least the data around car buying, because as a consumer now, you know, I know what they're paying and I can go in, of course, but they changed their algorithms as well. I mean, the, the dealers got really smart and then they got kickbacks from the manufacturer. So you had to get smarter. So it's, it's a moving target, I guess. >>Great. The pricing is actually very complex. Like I, I don't have time to explain it to you, but knowing, especially in this crazy inflationary market where used car prices are like 38% higher year over year, and new car prices are like 10% higher and they're changing rapidly, having a very responsive pricing model is extremely critical. Uh, I don't know if you're familiar with Zillow. 
I mean, they almost went out of business because they mispriced their, uh, their houses. So, so if you own their stock, you're probably underwater on it, but, you know, >>No, but it's true, because my lease came up in the middle of the pandemic and I went to Edmunds to see, what's this car worth? It was worth like $7,000 more than the buyout cost, the residual value. I said, I'm taking it, can't pass up that deal. And so you have to be flexible. You're saying the premise, though, is that open source technology and Delta Lake and the lakehouse enabled that flexibility. >>Yes, we are able to ingest new transactions daily, recalculate our model within less than an hour and deploy the new model with new pricing, you know, almost in real time. So, uh, in this environment, it's very critical that you kind of keep up to date and ingest the latest transactions as the prices change and recalculate your model that predicts the future prices. >>Because the business lines inside of Edmunds interact with the data teams, you mentioned data engineers, data scientists, analysts, how do the business people get access to their data? >>Originally, we only had a core team that was using the lakehouse, but because the usage was so powerful and easy, we were able to democratize it across our units. So other teams within software engineering picked it up and then analysts picked it up. And then even business users started using the dashboarding and seeing, you know, how the price has changed over time and seeing other, other metrics within the, >>What did that do for data quality? Because I feel like if I'm a business person, I might have context of the data that an analyst might not have. If they're part of a team that's servicing all these lines of business, did you find that the collaboration affected data quality? >>The biggest thing for us was the fact that we don't have multiple systems now. So you don't have to load the data. 
Whenever you have to load the data from one system to another, there is always a lag. There's always a delay. There is always a problematic job that didn't do the copy correctly. And the quality is uncertain. You don't know which system tells you the truth. Now we just have one layer of data. Whether you do reports, whether you do data processing or whether you do modeling, they all read the same data. And the second thing is that with the dashboarding capabilities, people that were not very technical, that before could only use Tableau, and Tableau is not the easiest thing to use if you're not technical, now they can use it. So anyone can see how our pricing data looks, whether you're an executive, whether you're an analyst or a casual business user. 
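The daily cadence Greg described, ingest the day's transactions, retrain within the hour, deploy new pricing, can be sketched end to end. The mean price-per-mile "model" below is purely illustrative and is not Edmunds' actual pricing model; on the lakehouse this would be a Spark job reading curated Delta tables:

```python
def curate(raw):
    """Drop malformed transactions (missing price or mileage)."""
    return [t for t in raw if t.get("price") and t.get("mileage")]

def train(transactions):
    """Toy model: average price per mile over the latest transactions."""
    ratio = sum(t["price"] / t["mileage"] for t in transactions) / len(transactions)
    return lambda mileage: ratio * mileage

# One daily cycle: ingest -> curate -> retrain -> serve new offers
raw = [
    {"price": 20000, "mileage": 10000},
    {"price": 9000, "mileage": 60000},
    {"price": None, "mileage": 5000},  # malformed, dropped by curation
]
model = train(curate(raw))
offer = model(30000)  # offer price from today's refreshed model
print(round(offer))   # 32250
```

Because ingestion, curation, training, and serving all run against one copy of the data, rerunning this loop each day (or backfilling months of it) is just recomputation, with no cross-system copies to reconcile, which is the point Greg makes about one layer of data.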
If you use something like Snowflake, and I'm not going to slam Snowflake here, but... >>You have two different use cases. >>You have to load it into a different system later. So, good luck doing machine learning on Snowflake, right? >>Whereas with Databricks, that's kind of your raison d'etre. >>You're a data engineer; I feel like I'm talking to a salesman or something. >>Yeah, I'm not saying it because I was told to; I'm saying it because that's our use case. >>Your use case. So, a question for each of you: what business results did you see pre-lakehouse versus post-lakehouse? Are there any metrics you can share? And then, Joel, I wonder if you could share a broader view of what you're seeing across your customer base. But Greg, what can you tell us? >>Before the lakehouse, we had two different systems: one for processing, which was still Databricks, and a second one for serving, where we iterated over Netezza and Redshift. But we figured out that maintaining two different systems and loading data from one to the other was a huge overhead: administration, security, costs, and the consistency issues it caused. The fact that you can have one system with centralized data solves all those issues. You have one security mechanism, one administrative mechanism, and you don't have to load the data from one system to the other. You don't have to make compromises. >>And scale is not a problem, because of the cloud. >>Because you can spin up clusters at will for different use cases, your clusters are independent: you have processing clusters that are not affecting your serving clusters. In the past, if you were serving, say, on Netezza or Redshift and you were doing heavy processing, your reports would be affected; now all those clusters are separated.
So... >>The consumer, the data consumer, can take that data from the producer independently. >>Using its own cluster. Okay. Yeah. I'll give you the final word, Joel. I know it's been quick; as I said, you guys have got to come back. What have you seen broadly? >>Well, I think Greg's point about scale is an interesting one. If you look across the entire Databricks platform, the platform is launching 9 million VMs every day, and in total we're processing over nine exabytes a month. So in terms of just how much data the platform is able to flow through it, while still maintaining extremely high performance, it is bar none out there. And in terms of the macro environment, I think what's been most exciting to watch is what customers are experiencing with traditional data warehouse kinds of workloads, because that's where the promise of the lakehouse really comes into its own: saying yes, I can run these traditional data warehousing workloads, which require high concurrency, high scale, and high performance, directly on my data lake. And probably the two most salient data points to raise there: just last month, Databricks announced that it set the world record for the TPC-DS 100-terabyte benchmark. That benchmark is built to measure data warehouse performance, and the lakehouse beat data warehouses at their own game in terms of overall performance. And in terms of what that means from a price-performance standpoint, customers on Databricks right now are able to enjoy that level of performance at 12x better price-performance than cloud data warehouses provide. So not only are we delivering extremely high scale and performance, we're able to do it much, much more efficiently.
>>We're gonna need a whole other segment to talk about benchmarking, guys. Thanks so much, really interesting session. Thank you both for joining the show, and best of luck. >>Thank you for having us. >>Very welcome. Okay, keep it right there, everybody. You're watching theCUBE, the leader in high-tech coverage, at AWS re:Invent 2021.
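The pricing workflow Greg describes, ingest new transactions daily, recalculate the model, and serve offers from the same single copy of the data, can be sketched in miniature. This is purely illustrative pseudologic, not Edmunds' actual pipeline or Databricks code; the table, the toy "price per mile" model, and all names here are invented for the example.

```python
# Illustrative sketch of the one-copy lakehouse pattern from the interview:
# a single shared table feeds both reporting and model retraining, so daily
# ingest immediately changes the pricing signal. Not Edmunds' real pipeline.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Transaction:
    model_year: int
    mileage: int
    sale_price: float

# The single shared table: reports, processing, and modeling all read this.
transactions = [
    Transaction(2018, 40_000, 21_000.0),
    Transaction(2018, 60_000, 19_000.0),
    Transaction(2020, 20_000, 27_000.0),
]

def ingest(new_rows):
    """Daily ingest appends to the one table; nothing is copied to a
    separate serving system, so there is no lag or consistency drift."""
    transactions.extend(new_rows)

def retrain_price_per_mile():
    """Recalculate a (very) simple pricing signal from the latest data."""
    return mean(t.sale_price / max(t.mileage, 1) for t in transactions)

def offer_price(mileage, price_per_mile):
    """Turn the current model into a concrete consumer offer."""
    return round(mileage * price_per_mile, 2)

baseline = retrain_price_per_mile()
ingest([Transaction(2021, 10_000, 31_000.0)])  # market prices shift
updated = retrain_price_per_mile()
print(updated > baseline)  # the new high-value sale raises the signal
```

The point of the sketch is the mechanism, not the model: because retraining reads the same table the ingest wrote, "recalculate within an hour and deploy new pricing" needs no copy job between systems.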

Published Date : Nov 30 2021



Sirish Raghuram | KubeCon + CloudNativeCon NA 2021


 

>>Welcome back to LA. We are live in Los Angeles at KubeCon + CloudNativeCon '21, Lisa Martin and Dave Nicholson. We've been talking to folks all day; great to be here in person. About 2,700 folks are here, and the Kubernetes community, the CNCF community, is huge, 138,000 folks, so it's great to see some of them in person, back collaborating once again. Dave and I are pleased to welcome our next guest: Sirish Raghuram, co-founder and CEO of Platform9. Sirish, welcome to the program. >>Thank you for having me, it's a pleasure to be here. >>Give our audience an overview of Platform9. Who are you guys, what do you do, when were you founded, all that good stuff. >>So we are about seven years old. We were founded with a mission to make it easy to run private, hybrid, and edge clouds. My co-founders and I were early engineers at VMware, and what we realized is that it's really easy to go use the public cloud, because the public clouds have this innovation: they have a control plane, which serves as a foundation for them to launch a lot of services and make them really simple and easy to use. But if you need that experience in a private cloud or a hybrid cloud or at the edge, nobody gives you that cloud control plane. You get it from Amazon in Amazon, you get it from Azure in Azure, Google in Google. Who gives you a SaaS cloud control plane to run private clouds or edge clouds or hybrid clouds? Nobody. And this is what we do: we make it easy to run these clouds, using technologies like Kubernetes, with our SaaS control plane. >>Is it limited to Kubernetes? Because you mentioned your background at VMware. Is this a control plane for what people would think of as private clouds using VMware-style abstraction, or is this primarily cloud native?

>>When we first started, Docker did not exist. At the time, our first product to market was actually an infrastructure-as-a-service product, and we looked at what was out there. We knew VMware vSphere was out there, and it's a VMware technology. There was Apache CloudStack, and there was OpenStack, and we felt the open ecosystem around VMs and infrastructure as a service was OpenStack. So we chose open source as the lingua franca for the service endpoint, and with our control plane we delivered OpenStack as a service; that was our first product. When the announcement of Kubernetes came out from Google, we knew we were going to go launch, because we'd already been studying LXC and Docker. We knew at the time we were going to standardize on Kubernetes, because we believed an open ecosystem was forming around it. That was a big bet for us, and this foundation and this community are proof that it was a good bet. Today that's our flagship product, the biggest share of revenue and the biggest share of install base, but we do have more than one product: we have OpenStack as a service, we have bare metal as a service, and we have containers as a service with Kubernetes. >>I'm looking at your website here, platform9.com, and I want you to break down three marketing messages for me: simplify day-two ops; multi-cloud ready on day one, and we know so many businesses are multi-cloud and the percentage is only going up; and faster time to market. Let's start with simplified day-two ops. How do you enable that?

>>If you talk to anyone who runs a large VMware environment, or for that matter a large-scale Kubernetes or OpenStack environment, probably in a private cloud deployment, and you ask them: when was the last time you did an upgrade, how did that go, when was the last time you had an outage, who did you call, how did that go? You'll hear an outpouring of emotion. Then go ask people who use Kubernetes in the public cloud how these things work, and they'll say it's pretty easy, it's not that hard. So the idea of Platform9 is: why is there such a divide? We talk about the digital divide; there is a cloud divide. The public clouds have figured out something that the rest of the industry has not, and people suffer with private clouds. There's a lot of demand for private clouds, but very few people can make them work, because they try to do it with hand-held tools, limited automation skills, and scripting. What you need is automation that makes sure ongoing troubleshooting, 24x7 alerting, and upgrades to new versions are all fully managed. When Amazon upgrades to a new version, people don't have to worry about it; they don't have to stay up at night, and they don't deal with outages. You shouldn't have to deal with that in your private cloud. So those are the kinds of problems, the troubleshooting, the upgrades, the remediation when things go wrong, that are taken for granted in the public cloud and that we bring to customers who want to run private, hybrid, or edge cloud environments. >>How do you help customers future-proof their cloud native journey? What does that mean to Platform9 and to your customers?

>>One of my favorite stories is one of our early customers, Snapfish, the photo-sharing company, a consumer company. When they got started with us, they were coming off of VMware and wanted to run an OpenStack environment. They started nearly four years ago, using us with OpenStack and VMs and infrastructure as a service. Fast-forward to today: 85 percent of their usage on us is containers, and they didn't have to hire OpenStack experts, nor do they have to hire Kubernetes experts. Their application development teams went from a somewhat legacy VMware-style IT environment to a modern self-service developer experience with OpenStack, and then to containers and Kubernetes. And we're going to work on the next generation of innovation with serverless technologies, simplifying the building of modern, more elastic applications. That's the beauty of our model: our control plane adds value. It added value with OpenStack, it added value with Kubernetes, and it'll add value with what's next in the evolution of serverless technologies. It's evergreen, and our customers get the benefit of all of that. >>When you talk about managing environments that are on premises and in clouds, I assume you're talking hyperscale clouds like AWS, Azure, GCP. What kind of infrastructure needs to be deployed, and when I say infrastructure, that can be software: what needs to be deployed in, say, AWS for this to work? What does it look like?

>>Some 30 percent of our users use us in the public cloud, and the majority of that happens in AWS, because they're the number one cloud. We really give people three choices, so they can consume AWS the way they want to. We have a small minority of customers that provisions bare-metal servers in AWS, because of the specific use cases they're trying to address: they deploy Kubernetes on bare metal, but the bare metal happens to be running in AWS. A larger majority of our users in AWS or another hyperscale cloud bring their VPC under management. They come in, sign up with Platform9, and in their Platform9 control plane they say: I want to plug in this VPC, and I want to give you this much authorization to it. In that VPC we can essentially impersonate them and, on their behalf, provision nodes and clusters using upstream, open-source CNCF Kubernetes. But we also have customers that say: hey, I already have some clusters on EKS, I really like what the rest of your platform lets me do, and I think it's a better platform for a variety of reasons; can you bring my EKS clusters under management and then help me provision new clusters on top? And the answer is: you can. So you can choose to bring your bare metal, you can choose to bring your VPC and just provision virtual machines and treat them as nodes for Kubernetes clusters, or you can bring pre-built Kubernetes clusters and manage them using our management product. >>What are your routes to market?

>>We have three routes to market. We have a completely self-serve, free-forever experience, where people can sign up, log in, get access to the control plane, and be up and running within minutes: they can plug in their server hardware on premises, at the edge, or in the cloud, their VPCs, and be up and running. From there they can upgrade into a growth tier, or request more support and a higher-touch experience, work with our sales team, and get into an enterprise tier. That is our second go-to-market, a direct go-to-market: companies in retail, tech companies, companies in fintech, companies investing in digital transformation in a big way that have lots of software developers and are adopting these technologies in a big way, but want private, hybrid, or edge clouds. The third, new to us in the last two years and really exciting, is a partner-led go-to-market, where partners like Rackspace have OEM'd Platform9. We have an OEM partnership with Rackspace: all of Rackspace's customers and install base, including customers consuming public cloud services via Rackspace, get access to Platform9, with Rackspace's ability to service the whole mile. And we have a very important partnership with Mavenir in the 5G space. We think 5G is a large opportunity, and there's a joint product there, the Mavenir Webscale platform, to run 5G networks on our Kubernetes stack. >>So, Platform9: why? What does that mean? Harry Potter? >>Harry Potter, yes: it's Platform Nine and Three-Quarters. We had this realization. My co-founders and I were at VMware for 10, 15 years, and we were struggling with this problem: why is the public cloud so easy to use, and why is it so hard to run a private cloud? Even today, I think not many people realize it, and that's the analogy to Platform Nine and Three-Quarters. It's right in the middle of King's Cross station; you go through it and you enter a whole new world of magic. That secret door, that Platform Nine and Three-Quarters, is a SaaS control plane. That is the secret sauce that Amazon has, that Azure has, that Google has, and we're bringing it to anybody who wants to use it on any infrastructure of their choice. >>Where can customers go to learn more about Platform9? >>Platform9.com, or follow us on Twitter or LinkedIn. >>And if any of our viewers are here at KubeCon, they can stop by your booth. What are some of the things you're featuring there? >>We are at the booth with our product managers, our support engineers, the people actually doing the real work behind the product. We're talking about our roadmap, we're doing product demos and specific show talks with deep dives into our product, and we're also talking about some really cool things coming up in the garage in the next six months. >>Can you leave us with any teasers about what some of those cool things are?

>>Yeah, one thing that is a really big deal is the ability to manage Kubernetes clusters as cattle. Kubernetes lets you treat node management and app management as cattle instead of pets, but Kubernetes clusters themselves, our customers tell us, even on Amazon EKS and others, become pets, and they become hard to manage. So we have a really interesting capability to manage these as cattle, with infrastructure as code and GitOps. We also have an announcement that I'm not able to share at this point, which is coming out in two weeks, in the edge space, so you'll have to stay tuned for that. >>So folks can go to platform9.com and check out that announcement in two weeks. Two weeks from now, by the end of October. >>That's right. >>Awesome, Sirish, thank you so much for joining us. I love the fact that you asked that question, because I kept thinking, Platform9, where do I know that from? And I just googled Harry Potter: that's right, Platform Nine and Three-Quarters. I'm dying, because I didn't automatically make the correlation, and my son and I are the most unbelievable Potterheads ever. >>Well, we have that in common. >>That's fantastic. Awesome. Thank you for joining us and sharing what Platform9 is and some of the exciting stuff coming out in two weeks; we look forward to hearing some great news about the edge. >>Absolutely. Awesome. Thank you for joining us. >>My pleasure, thank you for having me. >>Our pleasure as well. For Dave Nicholson, I'm Lisa Martin, live in Los Angeles. theCUBE is covering KubeCon + CloudNativeCon '21. Stick around, we'll be right back with our next guest.
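The "clusters as cattle" idea described above, desired cluster state declared in code (as it would live in Git) and continuously reconciled, can be sketched with a toy reconcile loop. This is a generic illustration of the GitOps pattern, with invented cluster names and fields; it is not Platform9's API or implementation.

```python
# Hedged sketch of declarative cluster lifecycle ("cattle, not pets"):
# desired state is data, and a reconcile loop computes the actions needed
# to converge actual state onto it. All names here are hypothetical.

desired = {"prod-us": {"nodes": 5}, "edge-store-42": {"nodes": 1}}
actual = {"prod-us": {"nodes": 5}, "legacy-test": {"nodes": 2}}

def reconcile(desired, actual):
    """Return (verb, name, spec) actions that converge actual onto desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))   # declared but missing
        elif actual[name] != spec:
            actions.append(("update", name, spec))   # drifted from declaration
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, actual[name]))  # undeclared
    # Sort on (verb, name) only; specs are dicts and not orderable.
    return sorted(actions, key=lambda a: a[:2])

for action in reconcile(desired, actual):
    print(action)
```

A GitOps agent would run this loop continuously against the declaration in version control, so no individual cluster is ever hand-tended back to health; it is simply recreated or corrected to match the declared state.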

Published Date : Oct 15 2021



Protect Your Data & Recover from Cyberthreats & Ransomware in Minutes


 

>>Welcome back to theCUBE's coverage of HPE's GreenLake announcement. We've been following GreenLake and the cadence of announcements HPE is making. Now we're gonna talk about ransomware. Ransomware has become a household term, but what people really don't understand is that virtually any bad actor can become a ransomware criminal by going on the dark web, hiring ransomware as a service, sticking a stick into a server, and taking a piece of the action. And that is a really insidious threat. The adversaries are extremely capable, so we're going to dig into that with Omar Assad, who's the storage platform lead for cloud data services at HPE, and Deepak Verma, vice president of product at Zerto, which is now an HPE company. Gentlemen, welcome to theCUBE. Good to see you. >>Thank you. Welcome. Pleasure to be here. >>So, Omar, you heard my little narrative upfront. How does the Zerto acquisition fit into that discourse? >>Thank you, Dave. First of all, we're extremely excited to welcome Zerto into the HPE family. The acquisition of Zerto expands the GreenLake offerings from HPE into data protection as a service and ransomware protection as a service, and at the same time it accelerates the transformation the HPE storage business is going through as it transforms itself into more of a cloud-native business, which sort of follows on from the May 4th announcements that you helped us cover. This enables the HPE sales teams to expand the data protection perimeter and to start offering data protection as a service and ransomware recovery as a service with best-in-class technologies, from a protection side as well as from the ransomware recovery side of the house. We're already well down the path of integrating the Zerto offerings as part of the GreenLake offerings and extending support through our services organization, and more of these announcements are gonna roll out later in the month.
>>And I think that's what you want to see from an as-a-service offering: a fast cadence of new services, not a box-by-box sale, but services that you want to access. So before we get into the tech, can we talk about how you're helping customers deal with ransomware, maybe some of the use cases that you're seeing? >>First of all, I'm extremely excited to be part of the HPE family now. A quick history on us: we've been around for about 11 years, we've had about 9,000-plus customers, and they all benefit from essentially the same technology that we invented 11 years ago. First and foremost, one of the use cases has been continuous data protection. We're built on a CDP platform, which means extremely low RTOs and RPOs for recovery. I'll give you an example: United Airlines has an application that costs them $1 million for every hour that it's down. With traditional approaches, that would be a lot of loss; with Zerto, we have that down to two seconds of loss if the application goes down. So that's core and fundamental to our platform. The second critical use case for us has been simplicity; a lot of customers have said we make the difficult simple. DR is a complex process. To give you an example there, HCA Healthcare consolidated four different disaster recovery platforms onto a single platform with Zerto and saved about $10 million a year. So it's making the operation of the disaster recovery process much simpler. The third critical use case for us, as the environment has evolved and the landscape has evolved, has been around hybrid cloud.
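The downtime arithmetic behind the United Airlines example above can be made concrete. The $1 million-per-hour figure comes from the interview; the formula itself is just rate times duration, and the linear-cost assumption is a simplification for illustration.

```python
# Back-of-the-envelope cost of an outage, using the interview's figure of
# $1M per hour of downtime and assuming cost accrues linearly with time.

HOURLY_COST = 1_000_000  # dollars per hour of downtime (interview figure)

def downtime_cost(seconds_down: float) -> float:
    """Dollar cost of an outage lasting seconds_down, at HOURLY_COST/hour."""
    return HOURLY_COST * seconds_down / 3600

# An hour of lost work (a typical backup-restore gap) versus the ~2-second
# RPO quoted for CDP-based recovery:
print(round(downtime_cost(3600)))  # 1000000
print(round(downtime_cost(2)))     # 556
```

The comparison shows why RPO, not just RTO, drives the economics here: shrinking the data-loss window from an hour to seconds reduces the exposure per incident by roughly three orders of magnitude.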
Being able to take customers to the platforms they want to go to is critical for us and for our customers. An example there is Kingston Technology: Kingston tried some competitive products to move to Azure, and it would take them about 24 hours to recover 30 VMs or so. With Zerto technology, they get all their 1,000 VMs up in Azure almost instantaneously. So these are the three use cases that were foundational; they built the company and the tech. >>Nice, thank you for that. So simple works well these days, especially with all this complexity we have to deal with. Can we get into the secret sauce a little bit? I mean, CDP has been around forever. What do you guys do that's different? Maybe you can talk about that. >>Sure. It's CDP-based, but I think we've perfected the technology. It's less about being able to just copy the data; it's more about what you do when things go bump. We've made it simpler, driven economies of scale lower, and made it platform-agnostic. We've really brought that across whatever platforms: once upon a time it was moving from physical to virtual, or across different virtualization platforms, and then being able to move across to whatever cloud platform the customer may want, or back. >>CDP, continuous data protection, by the way, for the audience that may not know that. Go ahead. >>One of the additional points that I want to add to Deepak's comment here is that the basics of platform independence are what really drew HPE technologists into the technology. We have many platforms: the high-end platform with the HPE Alletra 9K, the Alletra 6K as the midrange platform, and then a bunch of file and object offerings on the side. What Zerto does applies universally to all those technologies, and as you pair them up with our compute offerings to offer a full stack, now the stack is disaster-recovery capable.
Natively, with the integration of Zerto. One of the things Deepak talked about, the Azure migrations: cloud is also coming up as a DR use case for a lot of our customers. As we went through thousands of customer interviews, one of the key things that came back was that investing in a DR data center, which is just waiting there for a disaster to happen, is a very expensive insurance policy. What Zerto, through its native capabilities, allows customers to do is just use the public cloud as a DR target, and, as a service, it takes care of all the format conversions and recoveries; all of that is completely automated inside the platform. And we feel that when you combine either the high-end data center storage offering or the midrange offering with this replication, DR, and ransomware protection built into the same package, working under the same hood, it just simplifies and streamlines the customer's deployment. >>A couple of things come to mind here. First of all, historically, if you wanted to recover to a point within, let's say, 10 seconds or five seconds, you had to pay up big time. That's number one. Number two is you couldn't test your DR; it was too risky. So people just had a checkbox for compliance, but they couldn't actually test it, because they were afraid they were going to lose data. So it sounds like you're solving both of those problems. >>Right, and we remember the DR test, where it was a weekend. It was an event, right? It was the event, and at the end of July the entire IT organization... >>Honey, I'm not gonna be home this weekend. >>Exactly. We've changed that to a click of a button. You can DR test today if you want to, while you have disaster recovery still running. You can DR
test in Azure: bring up your environment in an isolated network bubble, make sure everything's running, and bring it back down. The interesting thing is that the technology was invented back when our fear in the industry was losing a data center: losing power, catastrophic natural disasters. But the technology has lent itself very well to the new threats, which are very much around ransomware, as you mentioned, because it's a type of disaster. Somebody's going after your data. Physical servers are still around, but you still need to go back to a point in time, and you need to do that very quickly. So the technology has really just found itself appealing to new challenges. >>If a customer asks you, can I really eliminate cyber attacks, where should I put my money? If I had 100 bucks to spend, should I spend it on layers of defense, should I spend it on recovery, or both? What would you tell them? >>I think it's a balanced answer. I think prevention is 100% impossible, so I'd say spend it in thirds: a third of it on prevention, a third of it maybe on detection, and then a third of it on recovery. It's really that balancing act: you can't just leave the front door open and rely on recovery techniques. It has to be a balance, and it's also not a matter of if but a matter of when, so we invest in all three areas; hopefully two of them will work to your advantage. >>Dave, you should always protect your perimeter. That goes without saying. But then, as you invest in other aspects of the business, as Deepak mentioned, recovery needs to be fast, quick recovery, whether you're recovering from a backup disaster, a data center disaster, a corrupted file, or a ransomware attack.
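Going "back to a point in time, very quickly," as described above, is what a write journal makes possible. The following is a toy model of the idea, invented for illustration only: real continuous-data-protection journals operate at the block or VM level with near-continuous checkpoints, not on a tiny key-value store like this.

```python
# Minimal sketch of journal-based point-in-time recovery: every write is
# appended to a timestamped journal, so recovery means replaying the journal
# up to any chosen moment, e.g. the second before ransomware struck.

journal = []  # (timestamp, key, value), appended strictly in time order

def record_write(ts, key, value):
    journal.append((ts, key, value))

def recover_to(point_in_time):
    """Rebuild state by replaying journaled writes up to point_in_time."""
    state = {}
    for ts, key, value in journal:
        if ts > point_in_time:
            break  # journal is time-ordered; everything later is discarded
        state[key] = value
    return state

record_write(100, "invoice-1", "draft")
record_write(101, "invoice-1", "approved")
record_write(102, "invoice-1", "x9!!enc")  # ransomware corrupts the file
record_write(103, "invoice-2", "z0!!enc")

clean = recover_to(101)  # roll back to just before the attack
print(clean)             # {'invoice-1': 'approved'}
```

The sketch also shows why DR testing becomes cheap in this model: `recover_to` builds a fresh copy of state without touching the journal, the software analogue of bringing the environment up in an isolated bubble and tearing it down again.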
A couple of things that Zerto really stitches together. Journal-based recovery has been around for a while, but Zerto makes journal-based recovery platform independent, in a seamless fashion: with the click of a button, within five seconds, go back to where your situation was. That gives you the peace of mind that even if the perimeter was breached, you're still protected, you know, five minutes into the problem. And that's the peace of mind which, along with data protection as a service, disaster recovery as a service, and now integrating this, you know, recovery from ransomware along with it in a very simple, easy-to-consume package, is what drew us in.

There's more you can do with this. You said you can use the cloud as a target. I could use the cloud as an air gap if I wanted to. It sounds like it's cloud native, correct? Just wrap your stack in Kubernetes and shove it in the cloud and have a host and say we're cloud, too. No, really, I'm serious.

Absolutely. We looked at that approach, and that's where the challenge comes in, right? The existing technology just doesn't scale; it's not fast enough. What we did was develop a platform for cloud native. We consume cloud services where necessary in order to provide that scalability. So one example in Azure is being able to use scale sets. Think about a scenario where you just declare a disaster and you've got 1,000 VMs to move over. We can spin up the workers that need to do the work to move the 1,000 VMs, then spin them down. So you're up and running instantaneously, and that involves using cloud native tools and technologies.

Can we stay on that for a minute? So take us through an example of what life was like, or would be like, without Zerto, trying to recover, and what it's like with Zerto: resources, complexity, time. Maybe you could sort of paint a picture. Sure.

Let me, I'll actually use an example from a customer, TenCate. They develop protective fabrics, specialty fabrics.
So think about firefighters, think about our men and women abroad that need protective clothing made from the fibers they weave. They were hit by ransomware, by CryptoLocker. This was before Zerto. Unfortunately, they took about a two-week data loss. It took them weeks to recover that environment and bring it back up, and the confidence was pretty low. They looked at our technology and invested in it, and then they were hit with a different variant of CryptoLocker. Immediately, the IT administrators and the IT folks there were relieved, right? They had a sense of confidence to say, yes, we can recover. And the second time around, they had data loss of about 10 seconds, and they could recover within a few minutes. So that's the before and after picture: giving customers that confidence to say, yep, a breach happened, we tried our best, but now it's up to recovery, and I can recover without having to dig tapes out from some vault, hope there's a good copy of data sitting there, and then try that over and over again. And there's a tolerance, right, a time beyond which the business will not be able to sustain itself. So what we want to do is minimize that for businesses, so that they can recover as quickly as possible with as little data loss as possible.

Thank you for that. So, Omar, there's a bigger sort of cyber recovery agenda that you have as part of GreenLake, I'm sure. What should we expect? What's next? Where do you want to take this?

Excellent question, Dave. So one of the things that you helped us unveil in May was Data Services Cloud Console. Data Services Cloud Console was the first sort of delivery as we took the storage business as it is and started to transform it into more of a cloud native business. We introduced Alletra, which is the cloud native hardware that customers buy for persistent storage within their data center.
But then Data Services Cloud Console truly cemented that cloud operational model. We separated the management from the devices themselves and sort of lifted it up as a SaaS service into the public cloud. So now what you're gonna see is, you know, more and more data and data management services come up on the Data Services Cloud Console, and Zerto is going to be one of the first ones. CloudPhysics was another one that we talked about, but Zerto is the true data management service that is going to come up on Data Services Cloud Console as part of the GreenLake services agenda that HPE has in the customer's environment. And then you're gonna see compliance as a service. You're going to see data protection as a service. You're gonna see disaster recovery as a service. But the beautiful thing about it is choice with simplicity. As these services get loaded up on Data Services Cloud Console, all our customers instantly get it. There's nothing to install, there's nothing to troubleshoot, there's nothing to size. All those capabilities are available on the console. Customers go in and just start consuming Zerto capabilities from a management control plane. Disaster recovery control planes are going to be available on the Data Services Cloud Console, automatically detecting Alletra systems, Primera systems, container-based systems, whichever our customers have deployed, and from there it's just a flip of a button. Another way to look at it is that it sort of gives you a slider: you have data protection or backup on one side, you've got disaster recovery next to it, you've got ransomware protection on the extreme right side, and you can just move the slider across and choose the service level that you want without worrying about best practices, installation, or application integration. All of that is taken care of from the Data Services Cloud Console.

Great, great summary, because historically you would have to build that yourself.
Now you can buy it as a service. You can programmatically, you know, deploy it, and that's a game changer. You used to have to throw it over the fence to some folks: okay, now you make it work. And then they'd change the code and you'd come back, and there was a lot of finger pointing. Now it's your responsibility.

Absolutely. Absolutely. We're excited for Zerto to continue providing disaster recovery to its customers, but also to integrate with the GreenLake platform and let the rest of the GreenLake customers experience some of this technology and really make it available as a service.

That's great. This is a huge challenge for customers. I mean, do I pay the ransom or not pay the ransom? If I pay the ransom, the FBI is going to come after me. But if I don't pay the ransom, I'm not gonna get the crypto key. So solutions like this are critical. You certainly see the president pushing for that. The United States government said, hey, we've got to do a better job. Good job, guys. Thanks for sharing your story in theCUBE, and congratulations.

Thank you. Thank you, David.

All right. And thank you for watching, everybody. I want to tell you that everything that you're seeing today as part of the GreenLake announcement is going to be available on demand as part of HPE Discover More. So you've got to check that out. Thank you. You're watching theCUBE.

Published Date : Sep 28 2021



Breaking Analysis: Can anyone tame the identity access beast? Okta aims to try...


 

>> From "theCUBE" studios in Palo Alto and Boston, bringing you data-driven insights from "theCUBE" and ETR. This is "Breaking Analysis" with Dave Vellante. >> Chief Information Security Officers cite trust as the number one value attribute they can deliver to their organizations. And when it comes to security, identity is the new attack surface. As such, identity and access management continue to be the top priority among technology decision makers. It also happens to be one of the most challenging and complicated areas of the cybersecurity landscape. Okta, a leader in the identity space, has announced its intent to converge privileged access and Identity Governance in an effort to simplify the landscape and re-imagine identity. Our research shows that interest in this type of consolidation is very high, but organizations believe technical debt, compatibility issues, expense and lack of talent are barriers to reaching cyber nirvana with their evolving Zero-Trust networks. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we'll explore the complex and evolving world of identity access and privileged account management, with an assessment of Okta's market expansion aspirations, fresh data from ETR, and input from my colleague Eric Bradley. Let's start by exploring identity and why it's fundamental to digital transformations. Look, the pandemic accelerated digital, and digital raises the stakes in cybersecurity. We've covered this extensively, but today we're going to drill into identity, which is one of the hardest nuts to crack in security. If hackers can steal someone's identity, they can penetrate networks. If that someone has privileged access to databases, financial information, HR systems, transaction systems, the backup corpus, well, you get the point. There are many bespoke tools to support a comprehensive identity access management and privileged access system.
Single sign-on, identity aggregation, de-duplication of identities, identity creation, the governance of those identities, group management. Many of these tools are open source. So you have lots of vendors, lots of different systems, and often many dashboards. Practitioners tell us that it's the paper cuts that kill them: patches that aren't applied, open ports, orphan profiles that aren't disabled. They'd love to have a single dashboard, but it's often not practical for large organizations because of the bespoke nature of the tooling and the skills required to manage them. Now, adding to this complexity, many organizations have different identity systems for privileged accounts, the general employee population and customer identity. For example, around 50 percent of ETR respondents in a recent survey use different systems for workforce identity and consumer identity. Now this is often done because the consumer identity is a totally different journey. The consumer is out in the wild and takes an unknown, nonlinear path, and then enters the known space inside a brand's domain. The employee identity journey is known throughout: you go from onboarding, to increasing responsibilities and more access, to off-boarding. Privileged access may even have different attributes, and usually does, like no email and/or no shared credentials. And we haven't even touched on the other identity consumers in the ecosystem, like selling partners, suppliers, machines, etcetera. Like I said, it's complicated, and meeting the needs of auditors is stressful and expensive for CSOs. There are open chest wounds, such as sloppy histories of privileged access approvals, obvious role conflicts, missing data, inconsistent application of policy, and the list goes on. The expense of securing digital operations goes well beyond the software and hardware acquisition costs. So there's a real need, and often desire, to converge these systems. But technical debt makes it difficult.
Companies have spent a lot of time, effort and money on their identity systems, and they can't just rip and replace. So they often build by integrating piece parts, or they add on to their quasi-integrated monolithic systems. And then there's the whole Zero-Trust concept. It means a lot of different things to a lot of different people, but folks are asking: if I have Zero-Trust, does it eliminate the need for identity? And what does that mean for my architecture going forward? So, let's take a snapshot of some of the key players in identity and PAM, Privileged Access Management. This is an X-Y graph that we always like to show. It shows the Net Score, or spending velocity, spending momentum, on the vertical axis, and market share or presence in the ETR dataset on the horizontal axis. It's not like revenue market share. It's mentioned market share, if you will. So it's really presence in the dataset. Now, note the chart insert, the table, which shows the actual data for Net Score and Shared N, which informs the position of the dot. The red dotted line there indicates an elevated level: anything over that 40 percent mark we consider the strongest spending velocity. Now, within this subset of vendors that we've chosen, where we've tried to identify some, most of them are pure plays in this identity space, you can see there are six above that 40 percent mark, including Zscaler, which tops the charts, and Okta, which has been at or near the top for several quarters. There's an argument, by the way, to be made that Okta and Zscaler are on a collision course as Okta expands its TAM, but let's just park that thought for a moment. You can see Microsoft with a highly elevated spending score and a massive presence on the horizontal axis, CyberArk and SailPoint, which Okta is now aiming to disrupt, and Auth zero, which Okta officially acquired in May of this year. More on that later.
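The Net Score metric on the vertical axis is, roughly, the share of survey respondents spending more on a vendor minus the share spending less. A hedged sketch of that arithmetic (the category names here approximate ETR's survey buckets and are assumptions, not ETR's published methodology):

```python
def net_score(adoption: int, increase: int, flat: int,
              decrease: int, replacing: int) -> float:
    """Approximate ETR-style Net Score: percentage of respondents adding
    or increasing spend minus percentage decreasing or replacing."""
    total = adoption + increase + flat + decrease + replacing
    return 100.0 * ((adoption + increase) - (decrease + replacing)) / total

# A vendor with 30 new adopters, 30 increasing, 30 flat, 5 decreasing
# and 5 replacing out of 100 respondents scores 50.0, well above the
# "elevated" 40 percent line mentioned in the segment.
print(net_score(30, 30, 30, 5, 5))  # -> 50.0
```

The 40 percent red line in the chart is then just a threshold on this value, not a revenue share.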
Now, below that 40 percent mark you can see Cisco, which has largely acquired companies in order to build its security portfolio, for example Duo, which focuses on access and multi-factor authentication. Now, a word of caution: Cisco and Microsoft in particular are overstated, because this includes their entire portfolio of security products, whereas the others are more closely aligned as pure plays in identity and privileged access. ThycoticCentrify is pretty close to that 40 percent mark and came about as a result of the two companies merging in April of this year, more evidence of consolidation in this space. BeyondTrust is close to the red line as well, which is really interesting, because this is a company whose roots go back to the VAX VMS days. Many of you may not even know what a VAX VMS is; in the mid 1980s it was the minicomputer standard, and the company has evolved to provide more modern PAM solutions. Ping Identity is also notable in that it essentially emerged after the dot-com bust in the early 2000s as an identity solution provider for single sign-on, SSO, and multi-factor authentication, MFA, solutions. It IPO'd in the second half of 2019, just prior to the pandemic. It's got a $2 billion market cap, down from its highs of around $3 billion earlier this year and last summer. And like many of the remote work stocks, they've bounced around, as the reopening trade and lofty valuations have weighed on many of these names, including Okta and SailPoint. Although CyberArk actually acted well after its August 12th earnings call, as its revenue growth about doubled year on year. So it's a hot space, and a big theme this year is around Okta's acquisition of Auth zero and its announcement at Oktane 2021, where it entered the PAM market and announced its thrust to converge its platform around PAM and Identity Governance and Administration.
Now, I spoke earlier this week with Diya Jolly, who's the Chief Product Officer at Okta, and I'll share some of her thoughts later in this segment. But first let's look at some of the ETR data from a recent drill down study that our friends over there conducted. This data is from a drill down that was conducted early this summer, asking organizations how important it is to have a single dashboard for access management, Identity Governance and privileged access. This goes directly to Okta's strategy that it announced this year at its Oktane user conference. Basically 80 percent of the respondents want this, so this is no surprise. Now let's stay on this theme of convergence. ETR asked security pros if they thought convergence between access management and Identity Governance would occur within the next three years. And as you can see, 89% believe this is going to happen: they either strongly agree, agree, or somewhat agree. I mean, it's almost as though the CSOs are willing this to occur. And this seemingly bodes well for Okta, which in April announced its intent to converge PAM and IGA. Okta's Diya Jolly stressed to me that this move was in response to customer demand, and this chart confirms that, but there's a deeper analysis worth exploring. Traditional tools of identity, single sign-on, SSO, and multi-factor authentication, MFA, are being commoditized. And the most obvious example of this is OAuth, or Open Authorization. You know, log in with Twitter, Google, LinkedIn, Amazon, Facebook. Now, Okta currently has around a $35 billion market cap as of today, off from its highs, which were well over 40 billion earlier this year. Okta's previously stated total addressable market was around 55 billion. So CEO Todd McKinnon had to initiate a TAM expansion play, which is the job of any CEO, right? Now, this move does that. It increases the company's TAM by probably around $20 to $30 billion, in our view.
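The "log in with ..." buttons mentioned above ride on the OAuth 2.0 authorization-code flow. A small sketch of its first leg, building the URL the button sends the browser to (the endpoint, client id and callback below are placeholders, not any provider's real values):

```python
from urllib.parse import urlencode

def build_authorize_url(auth_endpoint: str, client_id: str,
                        redirect_uri: str, scope: str, state: str) -> str:
    """First leg of the OAuth 2.0 authorization-code flow: the URL a
    'Log in with ...' button sends the browser to. The provider
    authenticates the user and redirects back to redirect_uri with a
    one-time code the app exchanges for tokens server-side."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,  # anti-CSRF value the app verifies on return
    }
    return f"{auth_endpoint}?{urlencode(params)}"

# Placeholder endpoint and client id; real values come from the
# identity provider's developer console.
url = build_authorize_url("https://idp.example.com/authorize",
                          "my-client-id", "https://app.example.com/cb",
                          "openid profile", "xyz123")
```

Because this handshake is standardized, any large platform can offer it cheaply, which is exactly why SSO and MFA alone are being commoditized.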
Moreover, the number one criticism of Okta is, "Your price is too high." That's a good problem to have, I say. Regardless, Okta has to think about adding more value to its customers and prospects, and this move both expands its TAM and supports its longer-term vision to enable a secure, user-controlled, ubiquitous digital identity, supporting federated users and data within a centralized system. Now, the other thing Jolly stressed to me is that Okta is heavily focused on the user experience, making it simple and consumer-grade easy. At Oktane 21, she gave a keynote laying out the company's vision. It was a compelling presentation, designed to show how complex the problem is and how Okta plans to simplify the experience for end users, service providers, brands, and the overall technical community across the ecosystem. But look, there are a lot of challenges the company faces to pull this off. So let's dig into that a little bit. Zero-Trust has been the buzzword, and it's a direction the industry is moving towards, although there are skeptics. Zero-Trust today is aspirational. It essentially says you don't trust any user or device, and the system can ensure the right people or machines have the proper level of access to the resources they need, all the time, with a fantastic user experience. So you can see why I called this nirvana earlier. In previous Breaking Analysis segments, we've laid out a map for protecting your digital identity, your passwords, your crypto wallets, how to create air gaps. It's a bloody mess. So ETR asked security pros if they thought a hybrid of access management and Zero-Trust networks could replace their PAM systems, because if you can achieve Zero-Trust in a world with no shared credentials and real-time access, a direction which Diya Jolly clearly told me Okta is headed, then in theory you can eliminate the need for Privileged Access Management. Another way of looking at this is, you do for every user what you do for PAM users.
And that's how you achieve Zero-Trust. But you can see from this picture that there's more uncertainty here, with nearly 50 percent of the sample not in agreement that this is achievable. Practitioners in Eric Bradley's round tables tell us that you'll still need the PAM system to do things like session auditing and credential checkouts, among other things. But much of the PAM functionality could be handled by this Zero-Trust environment, we believe. ETR then asked the security pros how difficult it would be to replace their PAM systems. And this is where it gets interesting. You can see by this picture, the enthusiasm wanes quite a bit when the practitioners have to think about the challenges associated with replacing Privileged Access Management systems with a new hybrid. Only 20 percent of the respondents see this as something that is easy to do, likely because they are smaller and don't have a ton of technical debt. So the obvious question is why? What are the difficulties and challenges of replacing these systems? Here's a diagram that shows the blockers. 53 percent say gaps in capabilities. 26 percent say there's no clear ROI, i.e., too expensive, and 11 percent, interestingly, said they want to stay with best-of-breed solutions, presumably handling much of the integration of the bespoke capabilities on their own. Now, speaking with Eric Bradley, he shared that there's concern about "rip and replace" and the ability to justify that internally. There's also a significant buildup in technical debt, as we talked about earlier. One CSO on an Eric Bradley ETR Insights panel explained that the big challenge Okta will face here is the inertia of entrenched systems from the likes of SailPoint, Thycotic and others. Specifically, these companies have more mature stacks and have built in connectors to legacy systems over many years, and processes are wired to these systems and would be very difficult to change, with skill sets aligned as well.
One practitioner told us that he went with SailPoint almost exclusively because of their ability to interface with SAP. Further, he said that he believed Okta would be great at connecting to other cloud API-enabled systems, but there's a large market of legacy systems for which Okta would have to build custom integrations, and that would be expensive and would require a lot of engineering. Another practitioner said, "We're not implementing Okta, but we strongly considered it." The reason they didn't go with it was the company had a lot of on-prem legacy apps, and so they went with Microsoft Identity Manager, but that didn't make the grade because the user experience was subpar. So they're still searching for a solution that can be good at both cloud and on-prem. Now, a third CSO said, quote, "I've spent a lot of money writing custom connectors to SailPoint," and he stressed "a lot of money," he said that several times. "So who is going to write those custom connectors for me? Will Okta do it for free? I just don't see that happening," end quote. Further, this individual said, quote, "It's just not going to be an easy switch. And to be clear, SailPoint is not our PAM solution. That's why we're looking at CyberArk," end quote. So the complexity and fragmentation continues. And personally, I see this as a positive trend for Okta, if it can converge these capabilities. Now, I pressed Okta's Diya Jolly on these challenges and the difficulties of displacing the entrenched stacks of the competitors. She fully admitted this was a real issue, but her answer was that Okta is betting on the future of microservices and cloud disruption. Her premise is that Okta's platform is better suited for this new application environment, and they're essentially betting on organizations modernizing their application portfolios, and Okta believes that will ultimately be a tailwind for the company.
Now let's look at the age-old question of best of breed versus incumbent slash integrated suite. ETR, in its drill down study, asked customers: when thinking about identity and access management solutions, do you prefer best of breed, an incumbent that you're already using, or the most cost-efficient solution? The respondents were asked to force rank one, two and three, and you can see incumbent just edged out best of breed, with a 2.2 score versus a 2.1, with the most cost-effective choice at 1.7. Now, overall, I would say this is good news for Okta. Yes, they face the issues that we brought up earlier, but as digital transformations lead to modernizing much of the application portfolio with containers and microservices, Okta will be in a position, assuming it continues to innovate, to pick up much of this business. And to the point earlier, where the CSO told us they're going to use both SailPoint and CyberArk: when ETR asked practitioners which vendors are in the best position to benefit from the Zero-Trust trend, the answers were, not surprisingly, all over the place. Lots of Okta came up. Zscaler came up a lot too, hmm, there's that collision course. But plenty of SailPoint, Palo Alto, Microsoft, Netskope, Thycotic, Centrify, Cisco, all over the map. So now let's look specifically at how practitioners are thinking about Okta's latest announcements. This chart shows the results of the question: are you planning to evaluate Okta's recently announced Identity Governance and PAM offerings? 45 to nearly 50 percent of the respondents either were already using them or plan to evaluate, with just around 40 percent saying they had no plans to evaluate. So again, this is positive news for Okta, in our view. A huge portion of the market is going to take a look at what Okta's doing. Combined with the underlying trends that we shared earlier related to the need for convergence, this is good news for the company.
Now, even if the blockers are too severe to overcome, Okta will be on the radar, and is on the radar, as you can see from this data. And as with the Microsoft MIM example, the company, Okta that is, will be seen as increasingly strategic and could get another bite at the apple. Moreover, Okta's acquisition of Auth zero is strategically important. One of the other things Jolly told me is they see initiatives starting both from devs, who then hand it over to IT to implement, and then the reverse, where IT may be the starting point and then go to devs to productize the effort. The Auth zero acquisition gives Okta a play in both games, because as we've reported earlier, Okta wasn't strong with the devs; Auth zero, that was their wheelhouse. Now Okta has both. Now, on the one hand, when you talk to practitioners, they're excited about the joint capabilities and the gaps that Auth zero fills. On the other hand, it takes out one of Okta's main competitors, and customers like competition. So I guess I look at it this way. Many enterprises will spend more money to save time, and that's where Okta has traditionally been strong: premium pricing, but there's clear value in that it's easier, fewer resources are required, and skillsets are scarce. So boom, good fit. Other enterprises look at the price tag of an Okta, and they actually have internal development capabilities, so they prefer to spend engineering time to save money. That's where Auth zero has seen its momentum. Now Todd McKinnon and company can have it both ways because of that acquisition. If the price of Okta classic is too high, here's a lower-cost solution with Auth zero that can save you money, if you have the developer talent and the time. It's a compelling advantage that's unique. Okay, let's wrap. The road to Zero-Trust networks is long and arduous.
The goal is to understand, support and enable access for different roles, safely and securely, across an ecosystem of consumers, employees, partners, suppliers, all the consumers (laughs softly) of your touch points to your security system. You've got to simplify the user experience. Today's kluge of passwords, password management and security exposures is just not going to cut it in the digital future. Supporting users in a decentralized, no-moat world (the queen has left her castle, as I often say) is compulsory. But you must have federated governance. And there's always going to be room for specialists in this space, especially for industry-specific solutions, for instance within healthcare, education, government, etcetera. Hybrids are the reality for companies that have any on-prem legacy apps. Now, Okta has put itself in a leadership position, but it's not alone. Complexity and fragmentation will likely remain. This is a highly competitive market with lots of barriers to entry, which is both good and bad for Okta. On the one hand, unseating incumbents will not be easy. On the other hand, Okta is both scaling and growing rapidly, revenues are growing almost 50% per annum, and with its convergence agenda and Auth zero, it can build a nice moat to its business and keep others out. Okay, that's it for now. Remember, these episodes are all available as podcasts, wherever you listen; just search "Breaking Analysis podcast," and please subscribe. Thanks to my colleague, Eric Bradley, and our friends over at ETR. Check out ETR's website at "etr.plus" for all the data and all the survey action. We also publish a full report every week on "wikibon.com" and "siliconangle.com". So make sure you check that out and browse the Breaking Analysis collection. There are nearly a hundred of these episodes on a variety of topics, all available free of charge. Get in touch with me. You can email me at "david.vellante@siliconangle.com" or "@dvellante" on Twitter. Comment on our LinkedIn posts.
This is Dave Vellante for "theCUBE" insights powered by ETR. Have a great week everybody. Stay safe, be well, and we'll see you next time. (upbeat music)

Published Date : Aug 20 2021

SUMMARY :


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Eric Bradley | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Okta | ORGANIZATION | 0.99+
Cisco | ORGANIZATION | 0.99+
Eric Bradley | PERSON | 0.99+
$2 billion | QUANTITY | 0.99+
45 | QUANTITY | 0.99+
Netskope | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
SailPoint | ORGANIZATION | 0.99+
six | QUANTITY | 0.99+
Centrify | ORGANIZATION | 0.99+
Todd McKinnon | PERSON | 0.99+
April | DATE | 0.99+
Diya Jolly | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
apple | ORGANIZATION | 0.99+
40 percent | QUANTITY | 0.99+
August 12th | DATE | 0.99+
CyberArk | ORGANIZATION | 0.99+
Dichotic | ORGANIZATION | 0.99+
two companies | QUANTITY | 0.99+
Jolly | PERSON | 0.99+
TAM | ORGANIZATION | 0.99+
david.vellante@siliconangle.com | OTHER | 0.99+
11 percent | QUANTITY | 0.99+
89% | QUANTITY | 0.99+
Palo Alto | ORGANIZATION | 0.99+
53 percent | QUANTITY | 0.99+
26 percent | QUANTITY | 0.99+
ETR | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
LinkedIn | ORGANIZATION | 0.99+
both games | QUANTITY | 0.99+
last summer | DATE | 0.99+
Auth zero | ORGANIZATION | 0.99+
80 percent | QUANTITY | 0.99+
three | QUANTITY | 0.99+
around $20 | QUANTITY | 0.99+
Thycotic | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Twitter | ORGANIZATION | 0.99+
mid 1980s | DATE | 0.99+
IGA | ORGANIZATION | 0.99+
20 percent | QUANTITY | 0.99+
early 2000s | DATE | 0.99+
two | QUANTITY | 0.99+
Auth zero | ORGANIZATION | 0.99+

F1 Racing at the Edge of Real-Time Data: Omer Asad, HPE & Matt Cadieux, Red Bull Racing


 

>>Edge computing is projected to be a multi-trillion dollar business. You know, it's hard to really pinpoint the size of this market, let alone fathom the potential of bringing software, compute, storage, AI, and automation to the edge and connecting all that to clouds and on-prem systems. But what is the edge? Is it factories? Is it oil rigs, airplanes, windmills, shipping containers, buildings, homes, race cars? Well, yes, and so much more. And what about the data? For decades we've talked about the data explosion. I mean, it's mind boggling, but guess what, we're gonna look back in 10 years and laugh at what we thought was a lot of data in 2020. Perhaps the best way to think about edge is not as a place, but as the most logical opportunity to process the data, and maybe it's the first opportunity to do so, where it can be decrypted and analyzed at very low latencies; that's what defines the edge. And so by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users quickly, maybe that's where the value lies. Hello everyone, and welcome to this Cube conversation. My name is Dave Vellante, and with me to noodle on these topics is Omer Asad, VP and GM of primary storage and data management services at HPE. Hello, Omer. Welcome to the program. >>Hey Dave. Thank you so much. Pleasure to be here. >>Yeah. Great to see you again. So how do you see the edge in the broader market shaping up? >>David, I think that's a super important question. I think your ideas are quite aligned with how we think about it. I personally think, you know, as enterprises are accelerating their digitization and asset collection and data collection, they're typically, especially in a distributed enterprise, trying to get to their customers. They're trying to minimize the latency to their customers. 
So especially if you look across industries, manufacturing, which has distributed factories all over the place, they are going through a lot of factory transformations where they're digitizing their factories. That means a lot more data is now being generated within their factories. A lot of robot automation is going on that requires a lot of compute power to go out to those particular factories, which are going to generate their data out there. We've got insurance companies and banks that are creating and gathering more customers out at the edge. >>They need a lot more distributed processing out at the edge. What this is requiring, and what we've seen across analysts, is a common consensus that more than 50% of an enterprise's data, especially if they operate globally around the world, is going to be generated out at the edge. What does that mean? New data is generated at the edge, but it needs to be stored, and it needs to be processed. Data that is not required needs to be thrown away or classified as not important, and then it needs to be moved for DR purposes, either to a central data center or just to another site. So overall, in order to give the best possible experience for manufacturing, retail, you know, especially in distributed enterprises, people are generating more and more data-centric assets out at the edge. And that's what we see in the industry. >>Yeah, we're definitely aligned on that. Those are some great points. And so now, okay, you think about all this diversity: what's the right architecture for deploying these multi-site deployments, ROBO, edge? How do you look at that? >>Oh, excellent question. So, you know, obviously every customer that we talk to wants simplicity, and, no pun intended, because SimpliVity resonates with a simple, edge-centric architecture, right? So let's take a few examples. 
You've got large global retailers; they have hundreds of retail stores around the world that are generating data, that are producing data. Then you've got insurance companies, then you've got banks. So when you look at a distributed enterprise, how do you deploy, in a very simple and easy-to-deploy manner, easy-to-lifecycle, easy-to-mobilize equipment out at the edge? What are some of the challenges that these customers deal with? You don't want to send a lot of IT staff out there, because that adds cost. You don't want to have islands of data and islands of storage in remote sites, because that adds a lot of state outside of the data center that needs to be protected. >>And then, last but not least, how do you push lifecycle-based applications, new applications, out at the edge in a very simple-to-deploy manner? And how do you protect all this data at the edge? So the right architecture, in my opinion, needs to be extremely simple to deploy: storage, compute, and networking out towards the edge in a hyperconverged environment. So that's where we agree; it's a very simple-to-deploy model. But then comes: how do you deploy applications on top of that? How do you manage these applications on top of that? How do you back up these applications back towards the data center? All of this keeping in mind that it has to be as zero-touch as possible. We at HPE believe that it needs to be extremely simple. Just give me two cables, a network cable and a power cable; tie it up, connect it to the network, push its state from the data center, and back up its state from the edge back into the data center. Extremely simple. >>It's gotta be simple because you've got so many challenges. You've got physics, you've got latency to deal with. You've got RPO and RTO. What happens if something goes wrong? You've gotta be able to recover quickly. So that's great. Thank you for that. Now you guys have hard news. 
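The edge data-handling flow Omer described earlier, store and process locally, discard what is not required, and replicate the rest to a central site for DR, can be sketched in a few lines. This is an illustrative sketch only; the record fields and categories below are hypothetical, not any vendor's API:

```python
# Hypothetical triage of edge-generated records: process locally,
# discard what is not required, and queue the rest for DR replication.

def triage(records):
    """Split records into local-processing, discard, and DR-replication sets."""
    keep_local, discard, replicate = [], [], []
    for rec in records:
        if not rec.get("required", True):
            discard.append(rec)          # classified as not important
        elif rec.get("dr_critical", False):
            replicate.append(rec)        # ship to central data center / second site
        else:
            keep_local.append(rec)       # processed and stored at the edge
    return keep_local, discard, replicate

records = [
    {"id": 1, "required": True,  "dr_critical": True},
    {"id": 2, "required": False},
    {"id": 3, "required": True,  "dr_critical": False},
]
local, dropped, dr = triage(records)
print(len(local), len(dropped), len(dr))  # 1 1 1
```

In a real deployment the replication step would be handled by the platform's backup policies rather than application code; the sketch just shows the decision flow.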
What is new from HPE in this space? >>From a deployment perspective, you know, HPE SimpliVity is just exploding, like crazy, especially as distributed enterprises adopt it as their standardized edge architecture, right? It's an HCI box: it's got storage, compute, networking, all in one. But now, what we have done is, not only can you deploy applications all from your standard vCenter interface from the data center, what we have now added is the ability to back up to the cloud, right from the edge. You can also back up all the way back to your core data center. All of the backup policies are fully automated and implemented in the distributed file system that is the heart and soul of the SimpliVity installation. In addition to that, customers now do not have to buy any third-party software: backup is fully integrated in the architecture, and it's WAN efficient. >>In addition to that, now you can back up straight to the cloud. You can back up to a central, high-end backup repository, which is in your data center. And last but not least, we have a lot of customers that are pushing the limit in their application transformation. So not only, as previously, were we leaving VMware deployments out at the edge sites; we have now also added both stateful and stateless container orchestration, as well as data protection capabilities for containerized applications out at the edge. So we have a lot of customers that are now deploying containers, rapid manufacturing containers, to process data out at remote sites. And that allows us to not only protect those stateful applications, but back them up into the central data center. >>I saw in that chart a highlight on no egress fees. That's a pain point for a lot of CEOs that I talk to; they grit their teeth at those fees. So, can you comment on that? >>Excellent, excellent question. 
I'm so glad you brought that up; let me pick up on that point. So, along with SimpliVity, you know, we have the whole GreenLake as-a-service offering as well, right? What that means, Dave, is that we can literally provide our customers edge as a service. And when you complement that with Aruba wired and wireless infrastructure that goes at the edge, and the hyperconverged infrastructure, as part of SimpliVity, that goes at the edge, you know, one of the things that was missing with cloud backups is that every time you back up to the cloud, which is a great thing by the way, anytime you restore from the cloud, there is that egress fee, right? So as a result of that, as part of the GreenLake offering, we have a cloud backup service natively offered as part of HPE, which is included in your HPE SimpliVity edge-as-a-service offering. So now not only can you back up into the cloud from your edge sites, but you can also restore back without any egress fees from HPE's data protection service. You can restore it back onto your data center, or you can restore it back towards the edge site, and because the infrastructure is so easy to deploy and centrally lifecycle-manage, it's very mobile. So if you want to deploy and recover to a different site, you could also do that. >>Nice. Hey, Omer, can you double-click a little bit on some of the use cases that customers are choosing SimpliVity for, particularly at the edge, and maybe talk about why they're choosing HPE? >>What are the major use cases that we see, Dave? It's obviously easy to deploy and easy to manage in a standardized form factor, right? A lot of these customers, like, for example, a large retailer we have across the US with hundreds of stores: right now you cannot send service staff to each of these stores, and their data center is essentially just a closet for these guys, right? So now, how do you have a standardized deployment?
So, standardized deployment from the data center, which you can literally push out; you connect a network cable and a power cable and you're up and running; and then automated backup, elimination of backup infrastructure and state at the edge, and DR from the edge sites into the data center. So that's one of the big use cases: to rapidly deploy new stores, bring them up in a standardized configuration, both from a hardware and a software perspective, with the ability to back up and recover that instantly. >>That's one large use case. The second use case that we see actually refers to a comment that you made in your opener, Dave, which is that a lot of these customers are generating a lot of data at the edge. This is robotic automation that is going up in manufacturing sites. This is racing teams that are out at the edge doing post-processing of their cars' data. At the same time, there are disaster recovery use cases, where you have, you know, campsites and local agencies that go out there for humanity's benefit, and they move from one site to the other. It's a very, very mobile architecture that they need. So those are just a few cases where we are deployed. There's a lot of data collection, and there's a lot of mobility involved in these environments, so you need to be quick to set up, quick to recover, and essentially you're off to your next move. >>You seem pretty pumped up about this new innovation, and why not? >>It is, you know, especially because it has been thought through with edge in mind, and edge has to be mobile; it has to be simple. And especially as we have lived through this pandemic, which I hope we see the tail end of in 2021, or at latest 2022. You know, one of the most common use cases that we saw, and this was an accidental discovery. 
A lot of the retail sites could not go out to service their stores because, you know, mobility is limited in these strange times that we live in. So from a central site, you're able to deploy applications and you're able to recover applications. And a lot of our customers said, hey, I don't have enough space in my data center to back up; do you have another option? So then we rolled out this update release to SimpliVity where, from the edge site, you can now directly back up to our backup service, which is offered on a consumption basis to customers, and they can recover that anywhere they want. >>Fantastic. Omer, thanks so much for coming on the program today. >>It's a pleasure, Dave. Thank you. >>All right. Awesome to see you. Now, let's hear from Red Bull Racing, an HPE customer that's actually using SimpliVity at the edge. >>The countdown really begins when the checkered flag drops on a Sunday. It's always about this race to manufacture the next designs, to make it more adapted to the next circuit we run on. Of course, if we can't manufacture the next component in time, all that will be wasted. >>Okay. We're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again. >>Great to see you. >>Hey, we're going to dig into a real-world example of using data at the edge, in near real time, to gain insights that really lead to competitive advantage. But first, Matt, tell us a little bit about Red Bull Racing and your role there. >>Sure. So I'm the CIO at Red Bull Racing, and we're based in Milton Keynes in the UK. The main job for us is to design a race car, to manufacture the race car, and then to race it around the world. So as CIO, the IT group needs to develop the applications used in design, manufacturing, and racing. We also need to supply all the underlying infrastructure and also manage security. So it's a really interesting environment that's all about speed. 
So this season we have 23 races, and we need to tear the car apart and rebuild it to a unique configuration for every individual race. We're also designing and making components targeted for races, so 20-odd immovable deadlines, and this big evolving prototype to manage with our car. But we're also improving all of our tools and methods and software that we use to design and make and race the car. >>So we have a big can-do attitude in the company around continuous improvement. The expectations are that we continuously make the car faster, that we're winning races, and that we improve our methods in the factory and our tools. And so for IT, it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, the right technical platforms, so we can live up to expectations. >>That tear-down and rebuild for 23 races: is that because each track has its own unique signature that you have to tune to, or are there other factors involved? >>Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves, and lots are in between. The track surface is very different, and the impact that has on tires, the temperature, and the climate are very different. Some are hilly, some have big curves that affect the dynamics of the car. So, in order to win, you need to micromanage everything and optimize it for any given race track. >>Talk about some of the key drivers in your business and some of the key apps that give you a competitive advantage to help you win races. >>Yeah. So in our business, everything is all about speed. The car obviously needs to be fast, but also all of our business operations need to be fast. 
We need to be able to design a car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world. So all of that requires a lot of expertise to develop the simulations and the algorithms, and to have all the underlying infrastructure that runs them quickly and reliably. In manufacturing, we have cost caps and financial controls by regulation; we need to be super efficient and control material and resources, so ERP and MES systems are running and helping us do that. And at the race track itself, we have hundreds of decisions to make on a Friday and Saturday as we're fine-tuning the final configuration of the car, and here again we rely on simulations and analytics to help do that. And then during the race, we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and the safety car comes out, or the weather changes, we revise our tactics; we're running Monte Carlo simulations, for example, and experienced engineers use those simulations to make a data-driven decision, hopefully a better one, and faster than our competitors. All of that needs IT to work at a very high level. >>It's interesting. I mean, as a lay person, historically, when I think about technology and car racing, of course I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? Maybe in the form of tribal knowledge, somebody who knows the track and where the hills are, and experience and gut feel. But today you're digitizing it and you're processing it close to real time. >>I think that's exactly right. Yeah. The car's instrumented with sensors; we post-process it, along with video and image analysis, and we're looking at our car and our competitors' cars. 
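Matt's point about running Monte Carlo simulations to make split-second strategy calls can be made concrete with a toy model. Everything below is invented for illustration (race time, pit-loss seconds, safety-car probability); it is a sketch of the technique, not Red Bull's actual tooling:

```python
import random

# Toy Monte Carlo: compare expected race time for two pit strategies
# under a random chance of a safety car. All parameters are invented.

def simulate(strategy, n=10_000, seed=42):
    """Average simulated race time (seconds) for a pit strategy over n trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        base = 5400.0                       # nominal race time, seconds
        pit_loss = 25.0 * strategy["stops"] # time lost per pit stop
        if rng.random() < 0.3:              # 30% chance of a safety car
            # pitting under a safety car is cheaper
            pit_loss *= strategy["sc_discount"]
        total += base + pit_loss
    return total / n

one_stop = {"stops": 1, "sc_discount": 0.9}
two_stop = {"stops": 2, "sc_discount": 0.5}

t1, t2 = simulate(one_stop), simulate(two_stop)
best = "one-stop" if t1 < t2 else "two-stop"
print(best)  # one-stop
```

Fixing the seed makes the comparison reproducible here; in a live race the model would be re-run continuously with updated parameters as events unfold.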
So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. The data, and the applications that can leverage it, are really key, and that's a critical success factor for us. >>So let's talk about your data center at the track, if I can call it that. Paint a picture for us: what does that look like? >>So we have to send a lot of equipment to the track, at the edge. Even though we have a really great wide area network linked back to the factory, and there are cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have ducts that protect cabling, for example, and you could lose connectivity to remote locations. So the applications we need to operate the car and to make really critical decisions all need to be at the edge, where the car operates. Historically we had three racks of equipment, a legacy infrastructure, and it was really hard to manage and to make changes. It was inflexible, there were multiple panes of glass, and it was too slow; it didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints. >>So we'd introduced hyperconvergence into the factory and seen a lot of great benefits, and when it came time to refresh our infrastructure at the track, we stepped back and said, there's a much smarter way of operating. We can get rid of all the slow, inflexible, expensive legacy and introduce hyperconvergence. And we saw really excellent benefits from doing that. We saw a 3x speed-up for a lot of our applications. So here, where we're post-processing data and we have to make decisions about race strategy, time is of the essence, and a 3x reduction in processing time really matters. 
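To make the 3x figure concrete, here is a back-of-envelope calculation of what such a speedup frees up across a practice day. The run counts and durations are hypothetical, chosen only to show the shape of the arithmetic:

```python
# Illustrative effect of a 3x processing speedup on a practice-day workload.
# Run counts and per-run durations are invented for the example.

runs_per_day = 40            # post-processing jobs across a practice day
old_minutes = 6.0            # per-run processing time before the refresh
new_minutes = old_minutes / 3.0

saved = runs_per_day * (old_minutes - new_minutes)
print(saved)  # 160.0 minutes freed for engineering decisions
```

The point is that with fixed Friday/Saturday practice windows, per-run savings compound into hours of extra analysis time rather than just faster individual jobs.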
We also were able to go from three racks of equipment down to two racks, and the storage efficiency of the HPE SimpliVity platform, with 20-to-one ratios, allowed us to eliminate a rack. That actually saved a hundred thousand dollars a year in freight costs by shipping less equipment. Then there are things like backup: mistakes happen. >>Sometimes the user makes a mistake. So for example, a race engineer could load the wrong data map into one of our simulations, and we could restore that VDI through SimpliVity backup in 90 seconds. This enables engineers to focus on the car and make better decisions without having downtime. And we send two IT guys to every race; they're managing 60 users in a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. >>Yeah. So you had the nice Petri dish in the factory. It sounds like your number one KPI is speed, to help shave seconds of time, but also cost, and just the simplicity of setting up the infrastructure. >>Yeah. It's speed, speed, speed. We want applications to absolutely fly, to get to actionable results quicker, to get answers from our simulations quicker. The other area where speed's really critical is that our applications are also evolving prototypes: the models are getting bigger, the simulations are getting bigger, and they need more and more resource. Being able to spin up resource and provision things without being a bottleneck is a big challenge, and SimpliVity gives us the means of doing that. >>So did you consider any other options, or was it because you had the factory knowledge that HCI was very clearly the option? What did you look at? 
>>Yeah, so we have over five years of experience in the factory, and we eliminated all of our legacy infrastructure five years ago; the benefits I've described at the track, we saw in the factory. At the track we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy, and as we were building for 2018, it was obvious that hyperconverged was the right technology to introduce, and we'd had years of experience in the factory already. The benefits that we see with hyperconverged actually mattered even more at the edge, because our operations are so much more pressurized and time is even more of the essence. So speeding everything up at the really pointy end of our business was really critical. It was an obvious choice. >>Why SimpliVity? Why did you choose HPE SimpliVity? >>Yeah. So when we first heard about hyperconverged, way back, in the factory we had a legacy infrastructure: overly complicated, too slow, too inflexible, too expensive. And we stepped back and said, there has to be a smarter way of operating. We went out and challenged our technology partners, and we learned about hyperconvergence to find out whether the hype was real or not. So we underwent some PoCs and benchmarking, and the PoCs were really impressive. With all these speed and agility benefits, HPE for our use cases was the clear winner in the benchmarks. So based on that, we made an initial investment in the factory: we moved about 150 VMs and 150 VDI into it. Then, as we've seen all the benefits, we've successively invested, and we now have an estate in the factory of about 800 VMs and about 400 VDI. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects. 
>>So was the time in which you were able to go from data to insight to recommendation, or edict, compressed? You kind of indicated that, but... >>So we pull telemetry from the car and we post-process it, and that post-processing time is very time consuming. You know, we went from eight or nine minutes for some of the simulations down to just two minutes, so we saw big, big reductions in time. Ultimately that meant an engineer could understand what the car was doing during a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker. And it ultimately helps get a better car quicker. >>Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? >>Yeah, I think we're optimistic. We have a new driver lineup: Max Verstappen carries on with the team, and Sergio joins the team. So we're really excited about this year, and we want to go and win races. >>Great, Matt, good luck this season and going forward, and thanks so much for coming back on theCUBE. Really appreciate it. >>It's my pleasure. Great talking to you again. >>Okay. Now we're going to bring back Omer for a quick summary. So keep it real. >>Without solutions from HPE, we can't drive those simulations, CFD, aerodynamics; that would undermine the simulations. Being software-defined means we can bring new apps into play; we can bring in new storage and networking, and all of that can be highly optimized. It is a hugely beneficial partnership for us. We're able to be at the cutting edge of technology in a highly stressed environment. There is no bigger challenge than Formula One. >>Okay. We're back with Omer. Hey, what did you think about that interview with Matt? >>Great. I have to tell you, I'm a big Formula One fan, and they are one of my favorite customers. 
So, you know, obviously one of the biggest use cases, as you saw, for Red Bull Racing is trackside deployments. There are now 22 races in a season. These guys are jumping from one city to the next; they've got to pack up, move to the next city, and set up the infrastructure very, very quickly. An average Formula One car is running a thousand-plus sensors that are generating a ton of data trackside that needs to be collected very quickly and processed very quickly, and then sometimes, believe it or not, snapshots of this data need to be sent back to the Red Bull factory, back at the data center. What does this all need? It needs reliability. >>It needs compute power in a very short form factor, and it needs agility: quick to set up, quick to go, quick to recover. And then in post-processing, they need to have CPU density so they can pack more VMs out at the edge to be able to do that processing. And we accomplish that for the Red Bull Racing guys with basically two SimpliVity nodes that are running trackside and moving with them from one race to the next race, to the next race. And every time those SimpliVity nodes connect up to the data center, or to a satellite, they're backing up to their data center. They're sending snapshots of data back to the data center, essentially making their job a whole lot easier, where they can focus on racing and not on troubleshooting virtual machines. >>Red Bull Racing and HPE SimpliVity: a great example. It's agile, it's cost efficient, and it shows a real impact. Thank you very much. I really appreciate those summary comments. >>Thank you, Dave. Really appreciate it. >>All right. And thank you for watching. This is Dave Volante. >>You.
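Earlier in the segment, Omer called out restores without egress fees. The reason that matters is easy to quantify: public clouds typically bill per GB transferred out of the cloud, so restoring a large backup carries a real cost. A rough sketch, using an illustrative (not quoted) $0.09/GB rate:

```python
# Back-of-envelope cloud egress cost for restoring a backup.
# The $/GB rate is illustrative only; actual pricing varies by
# provider, region, and volume tier.

def egress_cost(restore_tb, rate_per_gb=0.09):
    """Dollar cost to pull restore_tb terabytes back out of a cloud."""
    return restore_tb * 1024 * rate_per_gb

# Restoring a 10 TB backup at $0.09/GB:
print(round(egress_cost(10), 2))  # 921.6
```

At that rate, even a single full restore of a modest edge estate runs to hundreds of dollars, which is why "no egress fee on restore" is a meaningful line item rather than marketing fluff.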

Published Date : Mar 30 2021

SUMMARY :


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Sergio | PERSON | 0.99+
Matt | PERSON | 0.99+
David | PERSON | 0.99+
Dave | PERSON | 0.99+
two racks | QUANTITY | 0.99+
Steve | PERSON | 0.99+
Dave Volante | PERSON | 0.99+
2020 | DATE | 0.99+
Omar | PERSON | 0.99+
Omar Assad | PERSON | 0.99+
2018 | DATE | 0.99+
Matt Cadieux | PERSON | 0.99+
20 | QUANTITY | 0.99+
Red Bull Racing | ORGANIZATION | 0.99+
HBS | ORGANIZATION | 0.99+
Milton Keynes | LOCATION | 0.99+
2017 | DATE | 0.99+
23 races | QUANTITY | 0.99+
60 users | QUANTITY | 0.99+
22 races | QUANTITY | 0.99+
three-year | QUANTITY | 0.99+
90 seconds | QUANTITY | 0.99+
eight minutes | QUANTITY | 0.99+
Omer Asad | PERSON | 0.99+
UK | LOCATION | 0.99+
two cables | QUANTITY | 0.99+
One car | QUANTITY | 0.99+
more than 50% | QUANTITY | 0.99+
two | QUANTITY | 0.99+
nine | QUANTITY | 0.99+
each track | QUANTITY | 0.99+
ITT | ORGANIZATION | 0.99+
SimpliVity | TITLE | 0.99+
last year | DATE | 0.99+
two minutes | QUANTITY | 0.99+
Virgin | ORGANIZATION | 0.99+
HPE SimpliVity | TITLE | 0.99+
three racks | QUANTITY | 0.99+
Matt kudu | PERSON | 0.99+
one | QUANTITY | 0.99+
hundreds of stores | QUANTITY | 0.99+
five senses | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+
about 800 VMs | QUANTITY | 0.99+
both | QUANTITY | 0.98+
green Lake | ORGANIZATION | 0.98+
about 400 VDI | QUANTITY | 0.98+
10 years | QUANTITY | 0.98+
second use case | QUANTITY | 0.98+
one city | QUANTITY | 0.98+
Aruba | ORGANIZATION | 0.98+
one site | QUANTITY | 0.98+
five years ago | DATE | 0.98+
F1 Racing | ORGANIZATION | 0.98+
today | DATE | 0.98+
SimpliVity | ORGANIZATION | 0.98+
this year | DATE | 0.98+
150 VDI | QUANTITY | 0.98+
about 150 VMs | QUANTITY | 0.98+
Sunday | DATE | 0.98+
red bull | ORGANIZATION | 0.97+
first | QUANTITY | 0.97+
Omer | PERSON | 0.97+
multi-trillion dollar | QUANTITY | 0.97+
over five years | QUANTITY | 0.97+
one large use case | QUANTITY | 0.97+
first opportunity | QUANTITY | 0.97+
HPE | ORGANIZATION | 0.97+
each | QUANTITY | 0.96+
decades | QUANTITY | 0.96+
one ratios | QUANTITY | 0.96+
HP | ORGANIZATION | 0.96+
one race | QUANTITY | 0.95+
GreenLake | ORGANIZATION | 0.94+

Accelerate Your Application Delivery with HPE GreenLake for Private Cloud | HPE GreenLake Day 2021


 

>>Good morning, good afternoon, and good evening. I am Kevin Duke with HPE GreenLake cloud services, and welcome to the HPE GreenLake for Private Cloud session. I am joined today by Raj Mistry and Steve Showalter, who will walk us through today's presentation and demonstration. We'd like to keep this session interactive, so please submit your questions in the chat window; we have subject matter experts on the line to answer them. So with that, I'll hand over to Raj.
>>Thanks, Kevin. So cloud is now fast becoming a reality. What we as HPE, and our customers too, say is that it's not an expectation anymore; it's an absolute necessity. The research and the stats that you see on the screen prove that over the last five to six years, organizations and enterprises have been adopting cloud, be it in the data center, with the hyperscalers, or a mixture of both.
>>But the interesting thing that we see now is a move in investment to basically increase private cloud capability. And in that vein, what we've done with GreenLake cloud services is create a rich portfolio that delivers that cloud-like experience at the edge, in your data center, or in co-location, and that actually matches and embraces the work that you do with the hyperscalers. What we're doing here is providing self-service capability and elasticity in the way that you would use this and flex things up and down. More importantly, all of this is owned and operated for you, which holds true to what we say in terms of delivering that cloud experience within those locations, be it the edge, the data center, or the colo. GreenLake for private cloud was initially launched in summer 2020; it was the first iteration of what we call the GreenLake platforms.
>>What we're trying to do with this element of the GreenLake cloud services portfolio is four things: eliminate the complexity of building things that live, breathe, and behave in a cloud-like manner in the data center, because this is hard; provide visibility around the way that you would manage, understand, run, and operate certain elements of that cloud; cover the governance and compliance piece, which is important especially when it comes to things like applying policies; and close the skills gap, which we do from our managed services perspective. So from an infrastructure standpoint, beginning on the left: world-class HPE compute, storage, and networking, embracing the virtualization and software-defined networking layers, together with a pretty rich cloud automation and orchestration portal, all wrapped up for you. Pre-built, pre-architected, removing complexity, and improving time to value and the actual delivery timescales. If we move to the actual experience, this is about the way that you would access these solutions.
>>So it is through GreenLake Central that your as-a-service experience begins with HPE; it is your entry point into the world of as a service, and from here on is where you would actually access that service. From a private cloud standpoint, this is where you would initiate the cloud management portal, and then you would begin working in the roles of either administrator, consumer, et cetera. Lastly, you know, pushing buttons and provisioning stuff is really easy, but a lot of our focus is on the post-provisioning processes: understanding, is it turned on? Is it off? How much is it costing me? Am I getting the most efficiency out of it? Am I running out of capacity to deliver services to my users?
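Those post-provisioning questions (is it on, what is it costing, is capacity running out) amount to simple roll-ups over an inventory. As a rough illustration only, in Python, with field names invented for the sketch rather than taken from GreenLake's actual data model:

```python
def post_provisioning_summary(vms, capacity_vms):
    """Roll an inventory of VMs up into the questions that matter after
    provisioning: what's running, what it costs per month, and how much
    headroom is left against the purchased capacity."""
    running = [vm for vm in vms if vm["state"] == "running"]
    monthly_cost = sum(vm["monthly_cost"] for vm in vms)
    return {
        "running": len(running),
        "stopped": len(vms) - len(running),
        "monthly_cost": monthly_cost,
        "capacity_used_pct": round(100 * len(vms) / capacity_vms, 1),
    }

# Three provisioned VMs against a capacity of twenty:
inventory = [
    {"name": "web-01", "state": "running", "monthly_cost": 40.0},
    {"name": "db-01",  "state": "running", "monthly_cost": 75.0},
    {"name": "dev-01", "state": "stopped", "monthly_cost": 10.0},
]
summary = post_provisioning_summary(inventory, capacity_vms=20)
```

The point is only that "visibility" here is bookkeeping the platform does for you continuously, rather than something you script yourself.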
All of this is finally wrapped up with the managed platform capability, which means you can now treat Hewlett Packard Enterprise as an MSP and a cloud provider within your data center. We take care of the infrastructure, the software, and the experience; your entry point is the cloud management layer. That's how we get you going.
>>Hey, Raj, I know we made some announcements earlier today about a new scalable form factor version of private cloud. I was wondering if maybe you could talk about how that extends the value proposition for customers?
>>Great question, Steve. So what the scalable form factor really is: it's looking at market feedback and understanding what our customers are asking us for. It is an entry point into the smaller and more medium enterprise who are looking to deliver private cloud capabilities, making it easier for them to embrace it and then scale. The other difference is the flexibility in the way that that cloud solution now grows, so different options are available to customers in terms of what they want to do. We typically talk, as a team and to our customers, about different roles. So we have a notion of the cloud administrator or the cloud operator. This is more of your classic administrative role; this is where our customers would come in, right at the cloud management layer, and begin configuring their VMware environment, networking services, et cetera. From here onwards is the cloud consumer.
>>So application developers, lines of business, et cetera, are presented with a self-service catalog for them to come and provision stuff. It could be normal VMs, it could be applications, depending on what the administrators or the operators have chosen to present to them. Lastly, it's around: how do I understand what's going on in the environment? So the focus, as I mentioned beforehand, is visibility, to then understand what's happening and optimize later. So it is addressing the needs of lines of business, IT leaders, and business leaders within our customer base. All of this begins from our central point of access, which is GreenLake Central. From here, the services and solutions that customers subscribe to are presented, and depending on who you are, your privileges, and your role within the environment, you get different options. So a cloud administrator may see different things in Central, because they require administrative functions within the cloud environment. A consumer such as me may have limited views in Central: I can access a service, but basically only view or provision the things that have been enabled for me. From a lines-of-business or IT leadership perspective, it's about providing predictive billing, visibility into cost, understanding from a planning standpoint, and allowing people to optimize with ease and speed. So Central is where your journey begins, and from within there you launch the service you are subscribed to. Today's focus is the private cloud.
>>So who is this solution built for, Raj? Initially we started off at the large enterprise level, Kevin, but what we've done, as HPE, is listen to our customers. So we're launching today the scalable form factor to address the needs of a multitude of clients, both large and small, and to enable different deployment types: remote office and branch office for the larger customers, and, for those smaller enterprises wanting to begin their private cloud journey, a great way to do that with HPE.
>>All right. Thank you.
>>How does the customer access the private cloud environment? Via GreenLake Central, Kevin; great question. That's our entry point for any of the services. You'll see it later on in the demo that Steve's going to walk through. It's where customers come in, and depending on role, access, privileges, rights, et cetera, you are presented with your services, and from within that you access the service depending on the role that's been assigned to you. So Steve, why don't you show us a little bit about what a cloud administrator or a cloud operator can do within the environment?
>>Sure, happy to, Raj. As we talk through these personas and use cases, you know, our experience, as Raj mentioned, will always start in GreenLake Central. So the role or persona I'm taking on here is that of an administrator of this private cloud environment. So again, I start off by logging into GreenLake Central. Once the service is stood up and available within my data center, I see the GreenLake for Private Cloud tile, which gives me an overview of the services I'm consuming and some of the things that might be running in that environment. Clicking on the tile takes me to the cloud management platform dashboard. This is where I, as an administrator, can configure and control lots of things in the environment on behalf of my end users. So, a couple of examples of things I might want to do. First off, there's an important notion of grouping that we use for access control within the environment. So I may want to organize my users into groups to control what they can see, what they can do, and what sort of policies I apply to them. Next, I probably want to configure the underlying software-defined network that Raj talked about. So again, we deliver a software-defined networking capability, and from within it, this is where I can create things like underlying networks, underlying distributed vSwitches, and IP address pools. I can also configure and manage software-defined routers, firewall rules, and some of those sorts of things within the environment. The IP address pools that I want to make available to some of those underlying networks, I can manage from within here as well. We also feature software-defined load balancing capabilities. So again, if I expect my developers or my end users to be able to provision resources that require some load balancing, I can create those load balancers and define the types of load balancing I want to make available to those end users from within here as well.
>>Finally, I can manage keys and certificates. So if I have things like key pairs or SSL certificates that I want to make available to my end users, I can manage all that from within here. And then one of the final things I might want to do is start to manage an automation library, so a library of virtual images that I want to make available to my end users. Because the private cloud solution is based on VMware, I might want to just pull in some existing VMware images I have, or I might want to create some new custom images. But really, I have a central place to manage that library of images and then decide who has access to which images, and how I want to make those available for end users to provision and lifecycle manage.
>>So that's a quick overview of some of the administrative capabilities. Kevin, any questions at this point about that capability? I've got one for you: can customers bring their own tooling to the private cloud? Oh yeah, so that's a great question. You know, almost every customer I talk to nowadays has made a large investment, typically in some sort of automation tooling, and one of the things that we want to provide is the ability to surface that tooling and allow customers to reuse it within our private cloud environment. So within the private cloud platform, as an administrator, I can create all sorts of scripts, maybe some basic capabilities I want to define as scripts, but I also have the ability to integrate automation platforms. So we can see that in this particular environment I've onboarded a set of Ansible playbooks that exist in a git repo. I really just point the cloud management platform at that repo; it scrapes all the playbooks that it finds there, and those become available as tasks and workflows that I can use after I provision a VM. So again, I can reuse the investment that I've made in automating things like application provisioning and application configuration for my end users within my environment.
>>I've got another one for you. How do customers improve control and governance of their private cloud? Yeah, so there are a couple of different ways to do that, and we'll talk specifically about one. One of the capabilities I have within the cloud management platform is the ability to create policies. Policies are really a way I can provide my users with self-service access to go do the things that they need to do, but with some control around what they can do. So there are all sorts of policies I can create. Policies around approval: if I've got a certain group of users that I want to require provisioning approval for, then anytime they provision something, I want an administrator to approve it. I can also limit the things that maybe a group of consumers can consume within my environment. Maybe I want to define a certain host name rule, so rather than users creating their own host names, I have a rule I want applied. If tagging and showback is important, I might want to force some tags within my environment: say, hey, anybody who provisions something needs to provide me a value for this tag, and then I can define how that applies within the environment. So hopefully that answers some questions and gives you a feel for how these cloud administrators would work to manage the overall environment itself.
>>Perfect, thanks Steve. So what we've just described and seen is the ability for a cloud administrator to do day one tasks: set things up, switch services on, and, more importantly, apply some rules, controls, and governance. So it keeps users safe and it keeps IT happy, really. So let's say I'm Raj, I'm the head of applications, and I've got a team of developers. I'm now going to come in as a consumer. Can you show me what I can do as a consumer, please?
>>Sure, Raj. So again, just like with the administrative use case we talked about, as a cloud consumer my experience starts in GreenLake Central. Once I'm logged into GreenLake Central, if I've been provided access to the environment by my administrators, I see the GreenLake for Private Cloud tile, and I click on it to get to the cloud management platform. Just like in the administrator use case, except now I probably see a lot less, because I probably have far fewer capabilities here. One of the first places I'd probably want to go is to take a look at what instances have been provisioned, and maybe provision an instance of my own. Instance provisioning is very simple; really, it's just a few clicks and answering a few questions. So in this case, if I have access to multiple groups, that logical separation I talked about, I'd first pick which group this is a part of. Again, in my particular case, I can provide a freeform name, because that's the policy that's been set up for me. I've got a forced tag, right? So I have to provide a tag or a label that tells me what area this is a part of. And as I continue to drill down, I get to a point where I can select my image, based on the images that have been made available to me. I can choose a size of VM: we have some pre-provisioned sizes that my administrators have made available to me, and in some cases I can customize some things within those sizes, or maybe I can't, again depending on how this was created. I select the network that I want to connect to and provide a few other options. One of the things I do want to talk about is this notion of tagging. Tagging is very important from a showback perspective, and we'll talk, when we get to cost analysis, about how we can use any tags that get applied here to do some showback reporting later. So if I want to provide a tag for an owner, to make sure I can always write a report that says "show me everything that Steve has consumed," I've got the ability to provide those tags here, and again, through a policy, I can make those tags required. A couple of other choices: I can run any of the available automation that my administrators have made available, I can select some scaling for my application, maybe go ahead and select the backup schedule, and manage some lifecycle actions. If this VM only needs to run during weekdays and I don't need it on the weekends, I can have it automatically shut down and start up.
>>And at the end, I just click on complete, and my VM is off and being built. And then, once my VMs are up and running, I've got access to manage those VMs on a running basis. So if I have a VM that's running and I want to manage it, it is very simple, again from within the cloud management platform, to take a look at how this VM is performing. Maybe I want to log into the console; maybe I want to take a look at the logs, the error messages that this VM has created; or maybe I just want to stop it, start it, create an image from it, or, after I've provisioned it, run some of those workflows on it. As an end user, I've got the capability to fully manage and fully control those VMs once I have them up and running.
>>So that's a quick overview of that cloud consumer use case. Kevin, do you have any questions right now about that use case? Yeah, I do. Cloud consumers today want more than a VM, so how can a private cloud deliver more value for cloud consumers? Yeah, so that's a great question. We talked a little bit about the cloud management platform's ability to integrate with existing automation for things like application installation and configuration, but one thing I didn't talk about is an alternate way we can use that, and that's through this notion of blueprints. So within the cloud management platform, I, as a developer or as an administrator, can set up blueprints, which are really very complex applications. These could be multi-node, multi-tiered applications where each tier may have a different application installed, they may be load balanced, all those sorts of things, and I can stitch all of that together and make it available as a catalog item. It's just one simple catalog item for an end user to consume, so they don't have to understand all the complexity, all the multiple nodes, or all the workflows required on the back end to provide that service. I've already done all that hard work; I advertise it to them, and they don't have to know. In this particular case, I've got a web tier made up of a couple of VMs and a database tier made up of a couple of VMs. There's some automation running, maybe through those Ansible playbooks, in the back end to make all those things happen. Really, as an end user, I just say, hey, I want one of these applications; I may need to answer a few questions, depending on how the application or the blueprint is built, and then I can push that out as an application. And again, I don't have to understand all the complexities that make up that multi-node, multi-tiered application in the background.
>>Steve, that's really cool, and about as easy as it can be. So, right: we've pushed some buttons, we've set some stuff up, we've provisioned some stuff. Right at the beginning we spoke about the post-provisioning stuff. So how do we actually manage the costs and also look at the usage within the environment, which is also important to our customers?
>>Yeah, so it's a great question, Raj. Obviously customers want to understand what their overall GreenLake consumption is, what their bill is, and how all those things relate together, and then they probably want to do much more detailed cost analysis as well. So the good news here: we provide all this tooling, and all of it is available right through GreenLake Central. A couple of the tiles that you'll see in GreenLake Central tie into the private cloud solution, just like they would any other GreenLake solution. So if I want to see overall what I've consumed within my private cloud as a GreenLake resource, I can drill down to understand: hey, what was actually metered as what I consumed? How did that relate to my GreenLake rate card? How did that create the number that appeared on my GreenLake bill for this particular service at the end of the month? I've also got the tools to do capacity planning, again just like every other GreenLake environment.
>>We want to be able to show that capacity planning view so customers can understand what they're consuming, what direction that's trending, and when we may need to add some additional capacity. So again, when a customer needs more, it's already there and ready to go; they just start to consume it and pay for it as part of their GreenLake bill. GreenLake customers have a dedicated account team that works with them to keep an eye on that capacity and, again, make sure we're working with customers to make the right decisions about when is the right time to add additional capacity to the environment. And then finally, our customers also get access to consumption analytics for much more detailed cost reporting. Within consumption analytics, I can take advantage of those tags that I talked about previously.
>>So here's a report I created where I want to see my private cloud consumption and use, broken down by cost center and by the VMs that my users within each of those cost centers are consuming. So I wrote a report to do some showback costing based on those tags. In this particular case I can tell, for example, the colo engineering cost center: hey, over the last month you've consumed 32 elements within the private cloud environment, and your total cost for that was $860. And I can give them the ability to drill down on this if they want, so now they'll see every individual VM that was provisioned, where it ran and when it ran. In this particular case I've broken down the cost between compute and storage, because I really wanted to see those as separate line items. But really, it gives customers the ability to do whatever showback or chargeback reporting makes sense within their organization, based on the tags they want to apply and how they want to show and consume those costs. So, Kevin, any questions about this cost analysis use case? Yep. Is there a way to proactively monitor consumption of the private cloud environment? Yeah, we actually provide a couple of different ways to do that. One is right within the consumption analytics that we talked about: one of the capabilities I have is the ability to set a budget. In this particular case I've set a budget, again by cost center, so that I can take a look at what all these cost centers are consuming within this private cloud environment, and how that relates to, say, an amount that I've given them to use. So I can take a look and see: hey, in the current period I've got one cost center that's over budget and two that are under budget, and I can look at their historical use as well.
>>Going back to the cloud management platform, I also have more of a hard way to set those consumption boundaries, by using a policy. So if I want to create a policy that says, hey, Steve can only have 20 VMs, then once he's provisioned those 20 VMs he can't have any more; he's got to come back and ask for more. And again, when I create this policy, I can apply it to a group or an individual user, based on how I want to put those guard rails around the environment. So there's a way to do this in a soft way, based on cost, to understand budgets and get notifications when I get close to my budget limits, or a hard way to actually limit the resources that customers can consume within the environment itself. So with that, Raj, I'll throw it back to you.
>>Thanks, Steve.
>>Just to wrap up: Steve and Kevin, thank you for the great demonstration and the chat. A few things for the audience and our customers. What we're now doing with GreenLake for private cloud and the other platform solutions is helping you to get started really, really quickly, allowing you to begin your journey with us at the right level. Then you can scale depending on how you are actually managing your transformation, be it from an infrastructure standpoint or an application standpoint, or whether you are looking to modernize the way that you deliver services back out to your internal users. The other side of it is the important fact that we now act and behave very much like a cloud. Because we run those environments for you, we eliminate the complexity of caring for them: all the infrastructure, the configuration, and the updates of the software layer. It leaves you free to deliver the services, like Steve has just shown. Final point: this is all usage based, so again, it lowers the initial investment risk for you and allows you to benefit from the way that we've integrated the solutions and technologies, so you can just embrace them and take advantage of them.
>>Excellent. Thank you, Raj. So I would like to thank you all for today. Thank you, Raj and Steve, for a brilliant demonstration. If you would like more information, or would like to speak to someone directly, then please fill out the poll by clicking on the poll option at the top of the chat box. So in closing, if you are interested in HPE GreenLake for private cloud, then please start a trial. It's easy. Thank you. Thank you all, and goodbye for now.
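The showback report Steve walked through above is, at bottom, a group-by over metered usage records keyed on the tags applied at provisioning time. A rough sketch of that roll-up in Python; the record fields here are invented for illustration and are not the actual consumption-analytics schema:

```python
from collections import defaultdict

def showback(records):
    """Roll metered usage records up by cost-center tag, keeping compute
    and storage as separate line items, the way the demo report did."""
    report = defaultdict(lambda: {"compute": 0.0, "storage": 0.0, "items": 0})
    for rec in records:
        # Untagged resources fall into their own bucket so nothing is lost.
        cc = rec["tags"].get("cost-center", "untagged")
        report[cc]["compute"] += rec["compute_cost"]
        report[cc]["storage"] += rec["storage_cost"]
        report[cc]["items"] += 1
    return dict(report)

records = [
    {"tags": {"cost-center": "colo-eng"}, "compute_cost": 18.0, "storage_cost": 7.0},
    {"tags": {"cost-center": "colo-eng"}, "compute_cost": 12.0, "storage_cost": 3.0},
    {"tags": {"cost-center": "finance"},  "compute_cost": 25.0, "storage_cost": 5.0},
]
report = showback(records)
```

This is also why the demo makes tags mandatory through policy: the showback or chargeback report is only as complete as the tags applied when the VM was provisioned.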

Published Date : Mar 17 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Steve | PERSON | 0.99+
Roger | PERSON | 0.99+
Kevin | PERSON | 0.99+
Raj | PERSON | 0.99+
Kevin Duke | PERSON | 0.99+
Dave | PERSON | 0.99+
$860 | QUANTITY | 0.99+
Raj Kilz | PERSON | 0.99+
Greenlight | ORGANIZATION | 0.99+
today | DATE | 0.99+
20 VMs | QUANTITY | 0.99+
Hewlett-Packard | ORGANIZATION | 0.99+
32 | QUANTITY | 0.99+
summer 2020 | DATE | 0.99+
one | QUANTITY | 0.99+
Colwell | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
HPE | ORGANIZATION | 0.99+
each tier | QUANTITY | 0.98+
two | QUANTITY | 0.98+
first pick | QUANTITY | 0.97+
green Lake | ORGANIZATION | 0.97+
each | QUANTITY | 0.97+
GreenLake | ORGANIZATION | 0.97+
first | QUANTITY | 0.96+
HPE GreenLake | TITLE | 0.96+
Walter | PERSON | 0.96+
One | QUANTITY | 0.96+
GreenLake central | ORGANIZATION | 0.95+
Raj mystery | PERSON | 0.94+
first places | QUANTITY | 0.93+
HPE GreenLake | ORGANIZATION | 0.92+
one thing | QUANTITY | 0.91+
last month | DATE | 0.91+
HPE GreenLake | ORGANIZATION | 0.9+
six years | QUANTITY | 0.9+
Charles | PERSON | 0.89+
Ansible | ORGANIZATION | 0.89+
earlier today | DATE | 0.88+
VMware | TITLE | 0.86+
first iteration | QUANTITY | 0.81+
GreenLake central | TITLE | 0.8+
one simple catalog | QUANTITY | 0.76+
playbooks | COMMERCIAL_ITEM | 0.75+

Omer Asad, HPE ft Matt Cadieux, Red Bull Racing full v1 (UNLISTED)


 

(upbeat music) >> Edge computing is projected to be a multi-trillion dollar business. It's hard to really pinpoint the size of this market let alone fathom the potential of bringing software, compute, storage, AI and automation to the edge and connecting all that to clouds and on-prem systems. But what is the edge? Is it factories? Is it oil rigs, airplanes, windmills, shipping containers, buildings, homes, race cars. Well, yes and so much more. And what about the data? For decades we've talked about the data explosion. I mean, it's a mind-boggling but guess what we're going to look back in 10 years and laugh what we thought was a lot of data in 2020. Perhaps the best way to think about Edge is not as a place but when is the most logical opportunity to process the data and maybe it's the first opportunity to do so where it can be decrypted and analyzed at very low latencies. That defines the edge. And so by locating compute as close as possible to the sources of data to reduce latency and maximize your ability to get insights and return them to users quickly, maybe that's where the value lies. Hello everyone and welcome to this CUBE conversation. My name is Dave Vellante and with me to noodle on these topics is Omer Asad, VP and GM of Primary Storage and Data Management Services at HPE. Hello Omer, welcome to the program. >> Thanks Dave. Thank you so much. Pleasure to be here. >> Yeah. Great to see you again. So how do you see the edge in the broader market shaping up? >> Dave, I think that's a super important question. I think your ideas are quite aligned with how we think about it. I personally think enterprises are accelerating their sort of digitization and asset collection and data collection, they're typically especially in a distributed enterprise, they're trying to get to their customers. They're trying to minimize the latency to their customers. 
So especially if you look across industries, manufacturing, which has distributed factories all over the place, they are going through a lot of factory transformations where they're digitizing their factories. That means a lot more data is now being generated within their factories. A lot of robot automation is going on, and that requires a lot of compute power to go out to those particular factories, which are going to generate their data out there. We've got insurance companies and banks that are engaging and gathering more customers out at the edge. They need a lot more distributed processing out at the edge. What this is requiring, and what we've seen as a common consensus across analysts, is that more than 50% of an enterprise's data, especially if they operate globally around the world, is going to be generated out at the edge. What does that mean? New data is generated at the edge that needs to be stored. It needs to be processed. Data which is not required needs to be thrown away or classified as not important. And then it needs to be moved for DR purposes, either to a central data center or just to another site. So overall, in order to give the best possible experience for manufacturing, retail, especially in distributed enterprises, people are generating more and more data-centric assets out at the edge. And that's what we see in the industry. >> Yeah. We're definitely aligned on that. There's some great points. And so now, okay, you think about all this diversity, what's the right architecture for these multi-site deployments, ROBO, edge? How do you look at that? >> Oh, excellent question, Dave. Every customer that we talk to wants simplicity, no pun intended, because SimpliVity resonates with a simplistic, edge-centric architecture, right? Let's take a few examples. You've got large global retailers. They have hundreds of global retail stores around the world that are generating data, that are producing data.
Then you've got insurance companies, then you've got banks. So when you look at a distributed enterprise, how do you deploy in a very simple and easy to deploy manner, easy to lifecycle, easy to mobilize equipment out at the edge? What are some of the challenges that these customers deal with? You don't want to send a lot of IT staff out there, because that adds cost. You don't want to have islands of data and islands of storage at remote sites, because that adds a lot of state outside of the data center that needs to be protected. And then last but not least, how do you push lifecycle-based applications, new applications, out at the edge in a very simple to deploy manner? And how do you protect all this data at the edge? So the right architecture, in my opinion, needs to be extremely simple to deploy: storage, compute and networking out towards the edge in a hyperconverged environment. It's a very simple to deploy model, but then comes: how do you deploy applications on top of that? How do you manage these applications on top of that? How do you back up these applications back towards the data center, all of this keeping in mind that it has to be as zero touch as possible? We at HPE believe that it needs to be extremely simple. Just give me two cables, a network cable and a power cable. Fire it up, connect it to the network, push its state from the data center, and back up its state from the edge back into the data center. Extremely simple. >> It's got to be simple, 'cause you've got so many challenges. You've got physics that you have to deal with, you have latency to deal with. You've got RPO and RTO. What happens if something goes wrong? You've got to be able to recover quickly. So that's great. Thank you for that. Now, you guys have news. What is new from HPE in this space? >> Excellent question.
So from a deployment perspective, HPE SimpliVity is just exploding like crazy, especially as distributed enterprises adopt it as their standardized edge architecture, right? It's an HCI box that's got storage, compute and networking all in one. But now, not only can you deploy applications all from your standard vCenter interface from a data center, what we have now added is the ability to back up to the cloud right from the edge. You can also back up all the way back to your core data center. All of the backup policies are fully automated and implemented in the distributed file system that is the heart and soul of the SimpliVity installation. In addition to that, the customers now do not have to buy any third-party software. Backup is fully integrated in the architecture, and it's very efficient. In addition to that, now you can back up straight to the cloud. You can back up to a central high-end backup repository which is in your data center. And last but not least, we have a lot of customers that are pushing the limit in their application transformation. So where we previously were only running VMware deployments out at the edge sites, we've now also added both stateful and stateless container orchestration, as well as data protection capabilities for containerized applications out at the edge. So we have a lot of customers that are now deploying containers at manufacturing sites to rapidly process data out at remote sites. And that allows us to not only protect those stateful applications but back them up into the central data center. >> I saw in that chart there was a line, no egress fees. That's a pain point for a lot of CIOs that I talk to. They grit their teeth at those fees. So can you comment on that? >> Excellent question. I'm so glad you brought that up and picked up on that point. So along with SimpliVity, we have the whole GreenLake as-a-service offering as well, right?
So what that means, Dave, is that we can literally provide our customers edge as a service. And when you complement that with Aruba wired and wireless infrastructure that goes at the edge, and the hyperconverged infrastructure as part of SimpliVity that goes at the edge, one of the things that was missing with cloud backups is that every time you back up to the cloud, which is a great thing by the way, anytime you restore from the cloud there is that egress fee, right? So as a result of that, as part of the GreenLake offering, we have a cloud backup service natively offered as part of HPE, which is included in your HPE SimpliVity edge as a service offering. So now not only can you back up into the cloud from your edge sites, but you can also restore back without any egress fees from HPE's data protection service. You can either restore it back into your data center or restore it back towards the edge site, and because the infrastructure is so easy to deploy and centrally lifecycle manage, it's very mobile. So if you want to deploy and recover to a different site, you could also do that. >> Nice. Hey Omer, can you double-click a little bit on some of the use cases that customers are choosing SimpliVity for, particularly at the edge, and maybe talk about why they're choosing HPE? >> Excellent question. So one of the major use cases that we see, Dave, is obviously easy to deploy and easy to manage in a standardized form factor, right? A lot of these customers, for example, we have a large retailer with hundreds of stores across the US, right? Now you cannot send service staff to each of these stores. Their data center is essentially just a closet for these guys, right? So how do you have a standardized deployment?
So it's a standardized deployment from the data center which you can literally push out: you connect a network cable and a power cable and you're up and running, and then automated backup of state and DR from the edge sites into the data center. So that's one of the big use cases, to rapidly deploy new stores, bring them up in a standardized configuration, both from a hardware and a software perspective, and the ability to back up and recover that instantly. That's one large use case. The second use case that we see actually refers to a comment that you made in your opener, Dave, where a lot of these customers are generating a lot of the data at the edge. This is robotics automation that is going on in manufacturing sites. There are racing teams that are out at the edge doing post-processing of their car's data. At the same time there are disaster recovery use cases, where you have camp sites and local agencies that go out there for humanitarian benefit. And they move from one site to the other. It's a very, very mobile architecture that they need. So those are just a few cases where we were deployed. There was a lot of data collection and there was a lot of mobility involved in these environments, so you need to be quick to set up, quick to back up, quick to recover, and essentially you're on to your next move. >> You seem pretty pumped up about this new innovation, and why not? >> It is, especially because it has been thought through with edge in mind, and edge has to be mobile. It has to be simple. And especially as we have lived through this pandemic, which I hope we see the tail end of in 2021, or at least 2022. One of the most common use cases that we saw, and this was an accidental discovery: a lot of the retailers could not go out to service their stores, because mobility is limited in these strange times that we live in. So from a central data center you're able to deploy applications. You're able to recover applications.
And a lot of our customers said, hey, I don't have enough space in my data center to back up. Do you have another option? So then we rolled out this updated release of SimpliVity where, from the edge site, you can now directly back up to our backup service, which is offered on a consumption basis to the customers, and they can recover that anywhere they want. >> Fantastic, Omer, thanks so much for coming on the program today. >> It's a pleasure, Dave. Thank you. >> All right, awesome. Now let's hear from Red Bull Racing, an HPE customer that's actually using SimpliVity at the edge. (engine revving) >> Narrator: Formula One is a constant race against time, chasing tenths of seconds. (upbeat music) >> Okay. We're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again. >> Great to see you, Dave. >> Hey, we're going to dig into a real world example of using data at the edge in near real time to gain insights that really lead to competitive advantage. But first, Matt, tell us a little bit about Red Bull Racing and your role there. >> Sure. So I'm the CIO at Red Bull Racing, and at Red Bull Racing we're based in Milton Keynes in the UK. And the main job for us is to design a race car, to manufacture the race car and then to race it around the world. So as CIO, the IT group needs to develop the applications used for design, manufacturing and racing. We also need to supply all the underlying infrastructure and also manage security. So it's a really interesting environment that's all about speed. So this season we have 23 races and we need to tear the car apart and rebuild it to a unique configuration for every individual race. And we're also designing and making components targeted for races. So 23 immovable deadlines, this big evolving prototype to manage with our car, but we're also improving all of our tools and methods and software that we use to design, make and race the car.
So we have a big can-do attitude in the company around continuous improvement. And the expectations are that we continue to make the car faster, that we're winning races, that we improve our methods in the factory and our tools. And so for IT it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, the right technical platforms, so we can live up to expectations. >> And Matt, that tear down and rebuild for 23 races, is that because each track has its own unique signature that you have to tune to, or are there other factors involved? >> Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves and lots are in between. The track surface is very different and the impact that has on tires, the temperature and the climate is very different. Some are hilly, some have big curbs that affect the dynamics of the car. So with all that, in order to win you need to micromanage everything and optimize it for any given race track. >> COVID has of course been brutal for sports. What's the status of your season? >> So this season we knew that COVID was here, and we're doing 23 races knowing we have COVID to manage. And as a premium sporting team we've formed bubbles, we've put health and safety and social distancing into our environment. And we're able to operate by doing things in a safe manner. We have some special exemptions in the UK. So for example, when people return from overseas they do not have to quarantine for two weeks, but they get tested multiple times a week, and we know they're safe. So we're racing, we're dealing with all the hassle that COVID gives us.
And we are really hoping for a return to normality sooner rather than later, where we can get fans back at the track and really go racing and have the spectacle that everyone enjoys. >> Yeah, that's awesome. So important for the fans, but also all the employees around that ecosystem. Talk about some of the key drivers in your business and some of the key apps that give you competitive advantage to help you win races. >> Yeah. So in our business, everything is all about speed. The car obviously needs to be fast, but also all of our business operations need to be fast. We need to be able to design a car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world. So all of that requires a lot of expertise to develop the simulations and the algorithms, and have all the underlying infrastructure that runs it quickly and reliably. In manufacturing we have cost caps and financial controls by regulation. We need to be super efficient and control material and resources. So ERP and MES systems are running and helping us do that. And at the race track itself, again in terms of speed, we have hundreds of decisions to make on a Friday and Saturday as we're fine tuning the final configuration of the car. And here again, we rely on simulations and analytics to help do that. And then during the race we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and the safety car comes out, or the weather changes, we revise our tactics, and we're running Monte Carlo simulations, for example, and using experienced engineers with simulations to make a data-driven decision, hopefully a better one and faster than our competitors. All of that needs IT to work at a very high level. >> Yeah, it's interesting.
I mean, as a lay person, historically when I think about technology in car racing, of course I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? I mean, maybe in the form of tribal knowledge, if you are somebody who knows the track and where the hills are, and experience and gut feel. But today you're digitizing it and you're processing it close to real time. It's amazing. >> Yeah, exactly right. The car's instrumented with sensors, we post-process, and we are doing video image analysis, and we're looking at our car and our competitors' cars. So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. Yeah, the data and the applications that leverage it are really key, and that's a critical success factor for us. >> So let's talk about your data center at the track, if you will, if I can call it that. Paint a picture for us: what does that look like? >> So we have to send a lot of equipment to the track, at the edge. And even though we have a really great wide area network link back to the factory, and there's cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have ducts that protect cabling, for example, and you can lose connectivity to remote locations. So the applications we need to operate the car and to make really critical decisions all need to be at the edge where the car operates. So historically we had three racks of equipment, legacy infrastructure, and it was really hard to manage, to make changes; it was too inflexible. There were multiple panes of glass, and it was too slow. It didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints.
So we'd introduced hyperconvergence into the factory and seen a lot of great benefits. And when the time came to refresh our infrastructure at the track, we stepped back and said, there's a lot smarter way of operating. We can get rid of all the slow and inflexible, expensive legacy and introduce hyperconvergence. And we saw really excellent benefits for doing that. We saw a three X speed-up for a lot of our applications. So here, where we're post-processing data and we have to make decisions about race strategy, time is of the essence, and the three X reduction in processing time really matters. We also were able to go from three racks of equipment down to two racks of equipment, and the storage efficiency of the HPE SimpliVity platform, with 20 to one ratios, allowed us to eliminate a rack. And that actually saved $100,000 a year in freight costs by shipping less equipment. Things like backup: mistakes happen. Sometimes the user makes a mistake. So for example, a race engineer could load the wrong data map into one of our simulations, and we could restore that VDI through SimpliVity backup in 90 seconds. And this enables engineers to focus on the car and make better decisions without having downtime. And we send two IT guys to every race; they're managing 60 users and a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. >> Yeah. So you had the nice Petri dish in the factory. So it sounds like your goals, obviously the number one KPI is speed, to help shave seconds, but also cost, just the simplicity of setting up the infrastructure is-- >> That's exactly right. It's speed, speed, speed. So we want applications to absolutely fly, get to actionable results quicker, get answers from our simulations quicker.
The other area where speed's really critical is that our applications are also evolving prototypes, and the models are always getting bigger. The simulations are getting bigger and they need more and more resource, and being able to spin up resource and provision things without IT being a bottleneck is a big challenge, and SimpliVity gives us the means of doing that. >> So did you consider any other options, or, because you had the factory knowledge, was HCI very clearly the option? What did you look at? >> Yeah, so we have over five years of experience in the factory, and we eliminated all of our legacy infrastructure five years ago. And the benefits I've described at the track, we saw those in the factory. At the track we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy; as we were building for 2018, it was obvious that hyperconverged was the right technology to introduce. And we'd had years of experience in the factory already. And the benefits that we see with hyperconverged actually mattered even more at the edge, because our operations are so much more pressurized. Time is even more of the essence. And so speeding everything up at the really pointy end of our business was really critical. It was an obvious choice. >> Why SimpliVity? Why'd you choose HPE SimpliVity? >> Yeah. So when we first heard about hyperconvergence, way back in the factory, we had a legacy infrastructure: overly complicated, too slow, too inflexible, too expensive. And we stepped back and said there has to be a smarter way of operating. We went out and challenged our technology partners, we learned about hyperconvergence, and wondered whether the hype was real or not. So we undertook some proofs of concept and benchmarking, and the proofs of concept were really impressive. We saw all these speed and agility benefits, and HPE for our use cases was the clear winner in the benchmarks. So based on that we made an initial investment in the factory.
We moved about 150 VMs and 150 VDIs onto it. And then, as we've seen all the benefits, we've successively invested, and we now have an estate in the factory of about 800 VMs and about 400 VDIs. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects. >> Awesome, fun stories. Just coming back to the metrics for a minute: so you're running Monte Carlo simulations in near real time, and essentially, if I understand it, those are what-ifs and the probability of the outcomes, and then the human's got to say, okay, do this, right? Was the time in which you were able to go from data to insight to recommendation compressed? You kind of indicated that. >> Yeah, that was accelerated. So in that use case, what we're trying to do is predict the future. Before any event happens, you're doing what-ifs: if it were to happen, what would you probabilistically do? We've been running that simulation for a while, but it gets better and better as we get more knowledge. And we were able to accelerate that with SimpliVity, but there are other use cases too. So we also have telemetry from the car, and we post-process it. And that post-processing time is very time consuming. We went from eight or nine minutes for some of the simulations down to just two minutes. So we saw big, big reductions in time. And ultimately that meant an engineer could understand what the car was doing in a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker. And it ultimately helps get a better car quicker. >> Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? >> I think we're optimistic. We think from our simulations that we have a great car, and we have a new driver lineup.
We have Max Verstappen, who carries on with the team, and Sergio Pérez joins the team. So we're really excited about this year and we want to go and win races. And I think with COVID, people are just itching to get back to a degree of normality, and going racing again, even though there are no fans, gets us into a degree of normality. >> That's great, Matt. Good luck this season and going forward, and thanks so much for coming back on theCUBE. Really appreciate it. >> It's my pleasure. Great talking to you again. >> Okay. Now we're going to bring back Omer for a quick summary. So keep it right there. >> Narrator: That's where the data comes face to face with the real world. >> Narrator: Working with Hewlett Packard Enterprise is a hugely beneficial partnership for us. We're able to be at the cutting edge of technology in a highly technical, highly stressed environment. There is no bigger challenge than Formula One. (upbeat music) >> Being in the car and driving it on the limit, that is the best thing out there. >> Narrator: It's that innovation and creativity that ultimately achieves winning. >> Okay. We're back with Omer. Hey, what did you think about that interview with Matt? >> Great. I have to tell you, I'm a big Formula One fan, and they are one of my favorite customers. So obviously one of the biggest use cases, as you saw for Red Bull Racing, is track side deployments. There are now 22 races in a season. These guys are jumping from one city to the next; they've got to pack up, move to the next city and set up the infrastructure very, very quickly. An average Formula One car is running a thousand-plus sensors that are generating a ton of data track side that needs to be collected very quickly. It needs to be processed very quickly, and then sometimes, believe it or not, snapshots of this data need to be sent back to the Red Bull factory, back at the data center. What does this all need? It needs reliability.
It needs compute power in a very small form factor. And it needs agility: quick to set up, quick to go, quick to recover. And then in post-processing they need to have CPU density so they can pack more VMs out at the edge to be able to do that processing. And we accomplish that for the Red Bull Racing guys with basically two SimpliVity nodes that are running track side and moving with them from one race to the next race to the next race. And every time those SimpliVity nodes connect up to the data center, connect up to a satellite, they're backing up to their data center. They're sending snapshots of data back to the data center, essentially making their job a whole lot easier, where they can focus on racing and not on troubleshooting virtual machines. >> Red Bull Racing and HPE SimpliVity. Great example. It's agile, it's cost efficient, and it shows real impact. Thank you very much, Omer. I really appreciate those summary comments. >> Thank you, Dave. Really appreciate it. >> All right. And thank you for watching. This is Dave Vellante for theCUBE. (upbeat music)
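Matt and Omer both describe using Monte Carlo simulation to make split-second race-strategy calls: simulate many what-if scenarios per strategy, average the outcomes, and pick the best. As a rough illustration of that technique only, here is a minimal, hypothetical sketch in Python; the lap times, tire degradation, pit-stop loss and safety-car numbers are invented for illustration and are not Red Bull Racing's actual model:

```python
import random

def simulate_race(pit_lap, laps=50, safety_car_prob=0.3, rng=random):
    """Simulate one race; return total race time in seconds (lower is better).
    All constants are illustrative, not real F1 data."""
    base_lap = 90.0      # lap time on fresh tires
    degradation = 0.08   # seconds lost per lap of tire age
    pit_loss = 22.0      # time lost to a normal pit stop
    total, tire_age = 0.0, 0
    # Randomly decide whether a safety car appears, and on which lap
    sc_lap = rng.randint(10, laps) if rng.random() < safety_car_prob else None
    for lap in range(1, laps + 1):
        if lap == pit_lap:
            cost = pit_loss
            if sc_lap is not None and abs(lap - sc_lap) <= 2:
                cost -= 10.0  # pitting near a safety car loses less time
            total += cost
            tire_age = 0
        total += base_lap + degradation * tire_age
        tire_age += 1
    return total

def best_pit_lap(candidate_laps, n_sims=2000, seed=7):
    """Monte Carlo: average each strategy over many simulated races,
    reusing the same random scenarios for every strategy (common random
    numbers), then return the strategy with the lowest mean time."""
    avg_times = {}
    for pit_lap in candidate_laps:
        rng = random.Random(seed)   # identical scenarios per strategy
        sims = [simulate_race(pit_lap, rng=rng) for _ in range(n_sims)]
        avg_times[pit_lap] = sum(sims) / n_sims
    return min(avg_times, key=avg_times.get), avg_times
```

`best_pit_lap([10, 20, 30, 40])` returns the candidate pit lap with the lowest average simulated race time. A real team would plug in measured lap-time and degradation models and far richer event distributions; the point of the loop is turning a strategy question into thousands of cheap simulated races that can be re-run in seconds as conditions change.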

Published Date : Mar 5 2021



Matt Cadieux, CIO Red Bull Racing v2


 

(mellow music) >> Okay, we're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again. >> Yeah, great to see you, Dave. >> Hey, we're going to dig into a real world example of using data at the edge in near real-time to gain insights that really lead to competitive advantage. But first Matt, tell us a little bit about Red Bull Racing and your role there. >> Sure, so I'm the CIO at Red Bull Racing. And at Red Bull Racing we're based in Milton Keynes in the UK. And the main job for us is to design a race car, to manufacture the race car, and then to race it around the world. So as CIO, the IT team needs to develop the applications used for design, manufacturing, and racing. We also need to supply all the underlying infrastructure, and also manage security. So it's a really interesting environment that's all about speed. So this season we have 23 races, and we need to tear the car apart, and rebuild it to a unique configuration for every individual race. And we're also designing and making components targeted for races. So 23 immovable deadlines, this big evolving prototype to manage with our car. But we're also improving all of our tools and methods and software that we use to design and make and race the car. So we have a big can-do attitude in the company, around continuous improvement. And the expectations are that we continue to make the car faster, that we're winning races, that we improve our methods in the factory and our tools. And so for IT it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, the right technical platforms, so we can live up to expectations. >> And Matt, that tear down and rebuild for 23 races.
Is that because each track has its own unique signature that you have to tune to, or are there other factors involved there? >> Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves and lots are in between. The track's surface is very different, and the impact that has on tires, the temperature and the climate is very different. Some are hilly, some have big curbs that affect the dynamics of the car. So with all that, in order to win you need to micromanage everything and optimize it for any given race track. >> And, you know, COVID has, of course, been brutal for sports. What's the status of your season? >> So this season we knew that COVID was here, and we're doing 23 races knowing we have COVID to manage. And as a premium sporting team we've formed bubbles, we've put health and safety and social distancing into our environment. And we're able to operate by doing things in a safe manner. We have some special exemptions in the UK. So for example, when people return from overseas they do not have to quarantine for two weeks, but they get tested multiple times a week, and we know they're safe. So we're racing, we're dealing with all the hassle that COVID gives us. And we are really hoping for a return to normality sooner instead of later, where we can get fans back at the track and really go racing and have the spectacle where everyone enjoys it. >> Yeah, that's awesome. So important for the fans but also all the employees around that ecosystem. Talk about some of the key drivers in your business and some of the key apps that give you competitive advantage to help you win races. >> Yeah, so in our business everything is all about speed. So the car obviously needs to be fast, but also all of our business operations need to be fast. We need to be able to design our car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world.
So all of that requires a lot of expertise to develop the simulations, the algorithms, and have all the underlying infrastructure that runs it quickly and reliably. In manufacturing, we have cost caps and financial controls by regulation. We need to be super efficient and control material and resources. So ERP and MES systems are running, helping us do that. And at the race track itself in speed, we have hundreds of decisions to make on a Friday and Saturday as we're fine tuning the final configuration of the car. And here again, we rely on simulations and analytics to help do that. And then during the race, we have split seconds, literally seconds to alter our race strategy if an event happens. So if there's an accident and the safety car comes out or the weather changes, we revise our tactics. And we're running Monte Carlo for example. And using experienced engineers with simulations to make a data-driven decision and hopefully a better one and faster than our competitors. All of that needs IT to work at a very high level. >> You know it's interesting, I mean, as a lay person, historically when I think about technology and car racing, of course, I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? I mean, maybe in the form of like tribal knowledge, if it's somebody who knows the track and where the hills are and experience and gut feel. But today you're digitizing it and you're processing it in close to real-time. It's amazing. >> Yeah, exactly right. Yeah, the car is instrumented with sensors, we post-process, we're doing video, image analysis and we're looking at our car, our competitor's car. So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. Yeah, the data and the applications that leverage it are really key. And that's a critical success factor for us. 
>> So let's talk about your data center at the track, if you will, I mean, if I can call it that. Paint a picture for us. >> Sure. What does that look like? >> So we have to send a lot of equipment to the track, at the edge. And even though we have really a great lateral network link back to the factory and there's cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have docks that protect cabling, for example, and you can lose connectivity to remote locations. So the applications we need to operate the car and to make really critical decisions, all that needs to be at the edge where the car operates. So historically we had three racks of equipment, legacy infrastructure and it was really hard to manage, to make changes, it was too inflexible. There were multiple panes of glass, and it was too slow. It didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints. So we'd introduced hyper-convergence into the factory and seen a lot of great benefits. And when we came time to refresh our infrastructure at the track, we stepped back and said there's a lot smarter way of operating. We can get rid of all this slow and inflexible expensive legacy and introduce hyper-convergence. And we saw really excellent benefits for doing that. We saw a three X speed up for a lot of our applications. So here where we're post-processing data, and we have to make decisions about race strategy, time is of the essence and a three X reduction in processing time really matters. We also were able to go from three racks of equipment down to two racks of equipment and the storage efficiency of the HPE SimpliVity platform with 20 to one ratios allowed us to eliminate a rack. And that actually saved a $100,000 a year in freight costs by shipping less equipment. Things like backup, mistakes happen. Sometimes a user makes a mistake. 
So for example, a race engineer could load the wrong data map into one of our simulations. And we could restore that VDI through SimpliVity backup in 90 seconds. This enables engineers to focus on the car and to make better decisions without having downtime. And we send two IT guys to every race. They're managing 60 users, a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. >> Yes, so you had the nice Petri dish in the factory, so it sounds like your goals obviously, number one KPI, is speed, to help shave seconds off the time, but also cost. >> That's right. Just the simplicity of setting up the infrastructure is key. >> Yeah, that's exactly right. It's speed, speed, speed. So we want applications that absolutely fly, that get actionable results quicker, get answers from our simulations quicker. The other area where speed's really critical is that our applications are also evolving prototypes. The models are getting bigger, the simulations are getting bigger, and they need more and more resource. And being able to spin up resource and provision things without being a bottleneck is a big challenge. And SimpliVity gives us the means of doing that. >> So did you consider any other options, or was it because you had the factory knowledge that HCI was, you know, very clearly the option? What did you look at? >> Yeah, so we have over five years of experience in the factory, and we eliminated all of our legacy infrastructure five years ago. And the benefits I've described at the track, we saw those in the factory. At the track, we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy.
As we were building for 2018, it was obvious that hyper-converged was the right technology to introduce, and we'd had years of experience in the factory already. And the benefits that we see with hyper-converged actually mattered even more at the edge, because our operations are so much more pressurized. Time is even more of the essence. And so speeding everything up at the really pointy end of our business was really critical. It was an obvious choice. >> So why SimpliVity? Why did you choose HPE SimpliVity? >> Yeah, so when we first heard about hyper-converged, way back in the factory, we had a legacy infrastructure: overly complicated, too slow, too inflexible, too expensive. And we stepped back and said there has to be a smarter way of operating. We went out and challenged our technology partners, and we learned about hyper-convergence. We didn't know if the hype was real or not. So we underwent some PoCs and benchmarking, and the PoCs were really impressive. We saw all these speed and agility benefits, and HPE, for our use cases, was the clear winner in the benchmarks. So based on that we made an initial investment in the factory. We moved about 150 VMs and 150 VDIs onto it. And then, as we've seen all the benefits, we've successively invested, and we now have an estate in the factory of about 800 VMs and about 400 VDIs. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects.
You kind of indicated that, but... >> Yeah, that was accelerated. And so in that use case, what we're trying to do is predict the future. Before any event happens, you're doing what-ifs: if it were to happen, what would you probabilistically do? So that simulation we've been running for a while, but it gets better and better as we get more knowledge. And we were able to accelerate that with SimpliVity. But there are other use cases too. So we offload telemetry from the car and we post-process it, and that reprocessing time is very time-consuming. We went from nine, eight minutes for some of the simulations down to just two minutes. So we saw big, big reductions in time. And ultimately that meant an engineer could understand what the car was doing in a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker. And it ultimately helps get a better car quicker. >> Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? >> Yeah, I think we're optimistic. We think, from our simulations, that we have a great car. We have a new driver lineup: Max Verstappen carries on with the team, and Sergio Perez joins the team. So we're really excited about this year and we want to go and win races. And I think with COVID, people are just itching to get back to a little degree of normality, and going racing again, even though there's no fans, gets us into a degree of normality. >> That's great. Matt, good luck this season and going forward, and thanks so much for coming back in theCUBE. Really appreciate it. >> It's my pleasure. Great talking to you again. >> Okay, now we're going to bring back Omar for a quick summary. So keep it right there. (mellow music)
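Matt's description of race-strategy what-ifs is classic Monte Carlo simulation: simulate many randomized races under each candidate strategy and compare outcomes. A minimal sketch of the idea follows; the lap-time, tire-degradation, pit-loss, and safety-car numbers are all invented for illustration and do not reflect Red Bull's actual models.

```python
import random

def race_time(pit_laps, safety_car_prob=0.3, laps=57, base_lap=92.0):
    """One simulated race: total time (seconds) for a given pit-stop plan.
    All numbers are illustrative, not real F1 telemetry."""
    total, tyre_age = 0.0, 0
    safety_car = random.random() < safety_car_prob
    sc_lap = random.randint(10, laps - 10) if safety_car else None
    for lap in range(1, laps + 1):
        lap_time = base_lap + 0.08 * tyre_age       # crude tire degradation
        if lap in pit_laps:
            # pitting under a safety car costs roughly half as much
            lap_time += 10.0 if lap == sc_lap else 21.0
            tyre_age = 0
        else:
            tyre_age += 1
        total += lap_time
    return total

def compare(strategy_a, strategy_b, runs=10_000):
    """Estimate P(strategy A beats strategy B) over many simulated races."""
    wins = sum(race_time(strategy_a) < race_time(strategy_b) for _ in range(runs))
    return wins / runs

random.seed(42)
p = compare({18, 38}, {28})   # hypothetical two-stop vs one-stop plan
print(f"two-stop beats one-stop in {p:.1%} of simulations")
```

The real systems Matt describes layer far richer models (tire compounds, traffic, weather) on the same skeleton: sample random events, replay the race, and hand the engineer a probability rather than a gut feel.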

Published Date : Mar 4 2021



Liam Furlong, Revelation Software | CUBE Conversation, November 2020


 

>> From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, Lisa Martin with theCUBE here, covering some news from Dell Technologies. I'm pleased to welcome one of its customers: Liam Furlong, the IT manager from Revelation Software. Liam, great to see you today. >> Thanks, Lisa. It's fantastic to be with you. >> And we're socially distant. I'm in California, you're down in Australia. I know it's early morning for you, but we're pleased to be chatting with you. So give me and our audience an overview of Revelation Software. Who are you and what do you do? >> Yeah, sure. Revelation Software is a software development company, no surprises there, and our primary product is a tool called RevTrac. For all those SAP users out there, we help you get your changes navigated safely through the wide landscapes and the open seas of your SAP environment. So we're all about change management and delivering certainty in what are really rapidly changing landscapes in the IT world. >> So customers can go to you for all of their challenges with all their SAP data and sort of offload that, basically. I mean, that sounds lovely; I'm sure many of them would take that. So talk to me about your IT environment. I know you're highly virtualized. Just give us an overview of what your data environment looks like. >> Like a lot of software companies, we give our development teams a lot of freedom, and so over the years a lot has definitely built up in our environment. We have hundreds of VMs and even more SAP landscapes. We are committed to our customers to provide a lot of previous-version compatibility, both in our product but also in SAP; we support more of SAP's old versions than they do. We just want to make sure that everyone is able to do their job and focus on what they're trying to do, rather than worrying about, "Do I have to upgrade? Am I going to be forced ahead?" Especially in a change management landscape.
So we have a lot of history, a lot of old environments, and we manage that by using a lot of on-prem. We have local data centers like everyone, I guess, but we've also got a great multi-cloud environment now, and it helps us to provide an excellent environment for our teams to develop in the way that they want, to support our customers in an efficient way, but also without us having to overcommit to hardware and so on. >> So you have a highly virtualized environment: about 150 VMs, nearly 500 SAP landscapes, so a big administration overhead. Talk to me about how you were protecting your data. I'm assuming VMs, maybe some SAP databases and servers. How were you protecting that before using Dell's new integrated approach? >> Yeah, we used a targeted-appliance style, I guess. We built up what we thought was the right solution. We had a lot of legacy thinking, really. For tools, we used a lot of scripts; previously we used the Veeam platform, and that presented an ever-increasing set of challenges. As you can imagine, with S/4HANA rolling along, the environment just had to change. Our backup load was increasing, but our backup windows weren't getting any larger, and our backup targets weren't getting any larger. So we really needed to ask some hard questions about what we were doing and whether it was working for us. We had absolutely no cloud integration, and our off-site copies were completely inadequate. And so, as an IT manager who is the guy at the end of the road when it comes to RPO and RTO and certainty of restorability, I was not sleeping well, it's fair to say. >> Well, that's something that obviously you look to a company like Dell Technologies to help with, as a sleep aid. But I saw that after 20 years you were testing a hosted version of your RevTrac Insights product and needed cloud DR, and you talked about meeting customer SLAs. I was reading your case study, and there were some big challenges there on the SLA front. >> Yeah.
Definitely. Actually, we were really fortunate to have started a conversation with Dell even before we were bringing our cloud platform online. We knew we were going to need to be able to address cloud DR; it was on the horizon for us. Being able to talk with a vendor that had everything wrapped in, the idea of an integrated appliance, was really quite foreign to me. The thought that I could trust Dell Technologies to actually do this better than me, and I know that sounds a bit arrogant, but the truth is, I knew my environment and they didn't. What really stood out for us in the process is that Dell knew that too, and they climbed into our environment and worked really hard. They actually wanted to understand: what were our challenges, what were our loads, what was our environment really like? And then they worked with us on a strong solution. I was amazed; it felt like the cavalry had arrived. They knew exactly what they were doing, and they worked overtime to help us find a great solution. And it has been a fantastic solution, not only solving the challenges we faced at the time of deployment, but knowing what was on the horizon, going into the cloud and having a SaaS platform, we were future-proofed in a way that I was hopeful about. Now that we're using it in that way, I'm confident, and every day I know that it's working properly for us. >> That confidence is absolutely critical. But you used a term that we hear so often in technology: future-proof. When you hear that as an IT manager, what does that mean to you, and how is Dell Tech, with the integrated approach, delivering that? >> Yeah, I mean, if I'm just being honest, I generally dismiss that when I hear anyone say they're future-proofed, because no one knows what's coming. I mean, here we are living this year out, and we knew 2020 was going to be a big year, but not in the ways that it has been. I think that even though
we wanted to believe that this backup tool would cover us, we weren't sure. What it has meant is that there are two real standout things. One, there's a suite of functionality in the integrated appliance which we didn't need then, but it was standing by and it was easy to turn on. It wasn't like, "Oh, now you'll have to pay this extra fee," or "Now you'll have to deploy these extra tools." It was all ready to go. They've brought their years of experience and forecasting and built in a bunch of functions. No one is going to need all of the tools out of the box, but over time you can deploy them. And the other really big one for us is all of the extra storage that we might need as our backup requirements grow, shipped in the box, which is a huge cost to the vendor, but it's just sitting there ready for us to consume as we need, which is absolutely fantastic. For me, I don't need to take our backup system offline to upgrade, I don't need to consume more rack space, I don't need to use more power. It's already doing everything it needs to, and it's just about rolling forward easily as we move forward as a company. >> So walk us through what the environment looks like now. We mentioned 150 VMs, a big SAP landscape. Give us a picture of the technologies and what Dell is helping to protect in your environment. >> Yeah, so Dell covers everything. The integrated appliance we're using actually meets all of our needs. I'm paranoid in my job, so we have extra bits and pieces kicking around, but the PowerProtect device is our go-to. We know that it's going to be there, it's going to be online, it's going to have covered everything from our on-prem. So we use a VMware environment locally, and we're backing up all of those VMs every night, about 54 terabytes of data, and we knock that out in about a 90-minute window, which is absolutely fantastic. That backs up locally and then it ships up to our cloud environment, so we've got our off-site covered in
that same night. Then we've also got the cloud environment. We have multi-cloud, so we've got things in a couple of different cloud providers, but to use Amazon as an example, we have production systems running up there, we have our SaaS environment running up there, and we capture that also with our PowerProtect device and bring everything back down. So now we've got that covered as well. No matter what our problem is, I've just got one place to go to, to say, "I need to restore this and I need to do it fast," and we can get that done straight away. >> It's fantastic, and that's what I've been hearing. I've spoken with a number of folks already, including the VP of Product Marketing, Caitlin Gordon, and we're hearing a lot of that one-stop-shop sort of description for the integrated appliance. I'm wondering if you could give us a compare and contrast: PowerProtect, the integrated appliance as you described, and the benefits that you've already achieved, versus the targeted approach with Veeam that you had before. >> Yeah, sure. What we came from was only being able to back up mission-critical systems nightly, and everything else had to be backed up weekly to achieve our backup windows. Even still, Monday morning was a nerve-wracking few hours while the weekend backup kind of crawled through and finished, and people are like, "Oh, systems are a bit slow this morning." "Oh yeah, we're looking at that," you know. We came from that to getting, as I said earlier, everything done every night, which is a complete transformation for us. We used to have to supplement our Veeam backup with scripts, because we could get the scripted backup done much faster, and so we would go, "Oh, we'll restore with Veeam and then we'll lay a script over the top to recover everything up to last night." But now it's just all covered through that one appliance. Again, in our cloud environments we use the local tools to provide a
local backup, and that's great to have. Previously that was mission-critical; we had to have that working, and we had to have our technicians up to speed with four or five different tool sets. Now it's great that they are aware of those tools, but really it's just about understanding one application. With a targeted solution, you end up having all these building blocks that only one person really knows how to string together. But now, not only does our whole team understand how it works together, it's also one phone number to find a whole group of people who know how it works together, and they can help us with upgrades, deployments, restores, anything we need. If I'm on leave, then I know that someone else from Dell Tech can step in and cover me for any of the questions that might normally bubble up to my level. One of my favorite numbers, and I'm sorry, I feel like I'm ranting, but one of my favorite numbers is this: we came from using a different hardware vendor's SAN, and we were getting compression of maybe three to six times on data. Now we get compression, on a month view, of 150 to 200 times, and if we expand that out to an annual view, we get compression rates of 300 times on our data. Which means instead of having literally 15 RU of storage, we have 2U of storage. The cost per terabyte is down by hundreds of dollars. It makes me look really good, and I haven't had to do anything; all I did was just go, "Yep, you guys do it, you guys deploy your solution." >> Those are huge deduplication numbers. I know Caitlin Gordon shared with me on average 65 to 1, but you basically at least double that. And in terms of making you look good, that's something that's actually quite important in terms of IT and the business: making sure that what you can deliver to the business is the confidence, in you and your team, that their data is protected. Can you share a little bit about maybe the IT-business relations and how this technology has
helped them just have that confidence? >> Yeah, definitely. I mean, as you say, every part of the business sees a different thing; our development team are paying attention to very different things from our accounting team. These numbers definitely help me to make friends in both teams. As an IT manager, if the backups do their job properly, if this all works, no one notices. If this goes wrong, I break the business. So the stakes are pretty high with backup. But even though that's true, and we know that's true, committing a big financial investment is still hard. It's still a moment where you hold your breath and ask, "Was it worth it?" But now we've been able to show the numbers to our executive teams, and they can see how much money they're saving, how much money we would normally be reinvesting at this point that we can now make available for other projects. We can put that into further development, we can put that into improving our SaaS platform. That really works for us as a business. We want to serve our customers better; we don't want to waste our time and money on stuff that affects just our day-to-day. We want to be really focused where our people are and on what they care about. So putting money back in the pockets, that's a big win. And by making our infrastructure teams more free, their time is freer because those restores, and we do restores every week, now run more smoothly, they are faster, and there's less hunting around to try and find the backup that actually worked. That means our infrastructure teams are free to do other upgrades, to work alongside, say, our developers. They want to be running the current versions of the Atlassian suite, not a version from a year ago, and we've got more time to do that work now. It makes a big difference. >> Well, that workforce productivity you're alluding to can be hugely impactful across the business. It's not just that now you've got one
solution, one phone number to call if there are issues; you've got more time back to be more innovative, more strategic, and so do the rest of the folks on your team. So for the business overall, that workforce productivity can really be very widespread, in a good way. >> Absolutely, and it's well felt. I think one of the things that's really hard to put a dollar value on, but is really key, is that people don't like doing rework, and backup recovery feels like rework, like "I've been here before." So by mitigating this particular aspect of our roles, our teams are happier. They generally are enjoying their work more because, as I say, they've got more time to work on things that are energizing and rewarding, and across the business people feel better as well. There are a lot of complications in anyone's job, but certainly where hardware and storage and backup are concerned, we've taken away a big stress. For me, for example, it's important that we test our DR scenario. Obviously everyone says that, but now we can actually do it. We can do a full DR production outage and go, "Okay, great, let's shut it down and see what happens," and we're able to do that a couple of times a year. We don't have to pay for a cold DC or a warm DC in the wings; we can recover to the cloud. Our DR site is VMware Cloud on AWS, so we can spin it up and do the whole DR scenario. Our DR is engaged within about three hours of a full building loss. Not only is that great peace of mind, but it also puts great data into the hands of my CIO. He's able to present on business continuity issues to the executive team and show that we're actually caring about the business and about the things that people do worry about. And again, it makes people look good, which is always helpful. >> It is, absolutely. As you said, if you can't restore the data, you're kind of stuck. So now I know why you look so rested, because you have the solution
you're sleeping better at night. Liam, it has been such a pleasure talking to you. Great work, and we look forward to hearing more great stories to come from Revelation Software. >> Thanks so much, Lisa. It's been a wonderful time. >> For Liam Furlong, I'm Lisa Martin. You're watching theCUBE.
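As a back-of-the-envelope check on the deduplication numbers Liam quotes, a logical-to-physical ratio translates directly into physical capacity. The 54 TB nightly figure is from the interview, but the 30-copy monthly retention below is my assumption for illustration:

```python
def physical_tb(logical_tb, dedup_ratio):
    """Physical capacity (TB) needed to store a given logical backup volume."""
    return logical_tb / dedup_ratio

# ~54 TB protected per night; assume roughly 30 nightly copies kept per month
monthly_logical_tb = 54 * 30                  # 1620 TB of logical backup data
print(physical_tb(monthly_logical_tb, 150))   # 10.8 TB at 150:1
print(physical_tb(monthly_logical_tb, 200))   # 8.1 TB at 200:1
print(physical_tb(monthly_logical_tb, 3))     # 540.0 TB on a 3:1 legacy SAN
```

The gap between ~10 TB and ~540 TB of physical storage for the same logical data is the arithmetic behind shrinking 15 RU of storage down to 2U.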

Published Date : Nov 13 2020



BCBSNC Petar Bojovic


 

>> Hello, my name is Petar Bojovic, director of technology infrastructure at Blue Cross Blue Shield of North Carolina. I have been with this organization for over three years, and I own system engineering across private and public cloud, virtualization, OS, backup, storage, the OpenShift platform, and automation. I have been implementing significant change, improving our operating model, since I arrived. Blue Cross Blue Shield North Carolina is transforming healthcare by changing the current model. We are focusing on something called value-based healthcare. The traditional healthcare model is typically a fee-for-service or capitated approach, in which providers are paid based on the amount of healthcare services they deliver: fee for service versus outcome. So when you go to a doctor for an office visit, they charge you for every item that they see you for. Based on that, they send the claim to my organization to adjudicate. Value-based healthcare, on the other hand, is a healthcare delivery model in which providers, including hospitals and physicians, are paid based on patient health outcomes. The value in value-based healthcare is derived from measuring health outcomes against the cost of delivering those outcomes. Well, we want to do the same and derive value out of IT as well. Significant value can be attained through automation on all levels. Just like most journeys, mine began with motivation. Let's talk about how I got here, how I got started, and how you can do it too. But first, let's talk about Ansible. The automation engine is designed to provide an easy, reusable, and platform-independent vehicle to automate complex and repetitive tasks. First off, Ansible is an open-source tool. It is very simple to use and set up. Even better, it's extremely powerful and flexible. You can orchestrate the entire application environment, no matter where it's deployed. You can also customize it based on your needs.
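As a flavor of what "easy, reusable, and platform-independent" looks like in practice, here is a minimal hypothetical playbook of the kind a team might use for repetitive server-baselining tasks. The inventory group and hardening choices are illustrative assumptions, not Blue Cross NC's actual playbooks:

```yaml
---
- name: Baseline a newly provisioned jump server
  hosts: jump_servers          # hypothetical inventory group
  become: true
  tasks:
    - name: Ensure time synchronization is installed
      ansible.builtin.package:
        name: chrony
        state: present

    - name: Harden SSH by disabling root login
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Run with `ansible-playbook -i inventory baseline.yml`; because the tasks are declarative and idempotent, the same playbook applies unchanged to one VM or 150, on-prem or in a cloud region.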
Back in 2018, we had to build 150 jump server VMs in under 24 hours. Well, we leveraged some cobbled-together automation tools and scripts to get this done within that timeline. Without those tools and automation, the build would have taken at least a week. So, a couple of key takeaways from that exercise. Number one, this was awesome. Number two, we saw amazing potential in automation. Number three, we ran into network and other related build issues. Number four, we were unsure what to do next and where to focus within the realm of automation. So fast forward to May 2019. My infrastructure engineering team introduced infrastructure automation software, you guessed it, Ansible, to Blue Cross North Carolina to reduce overall IT costs and increase agility, productivity, and delivery while reducing delays and reliance on outside managed service providers for those repetitive manual tasks. So in the past, provisioning a single server or virtual machine would take a minimum of 10 business days. That's not the overall process; that is just the deployment. It would take 10 business days and over 20 hours of work, resulting in a cost of approximately $3,300 in charges per build. That's over $3,000 per build, per server. However, with the automation platform provided by Ansible, this effort was significantly reduced: to under one business day, a half hour's worth of work, and zero managed service provider charges. Let's fast forward a bit, to middle-to-late 2019. Blue Cross North Carolina decided to re-host the Facets application platform in-house within our co-locations. Facets is a claims adjudication platform. After a medical claim is submitted, the insurance company, Blue Cross, determines its financial responsibility for the payment to the provider. This process is referred to as claims adjudication. Blue Cross was faced with the requirement to create roughly 1,000 virtual machines as quickly as possible across all regions. 
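Bulk VM builds like the 150 jump servers, or the roughly 1,000 Facets VMs, are typically done by looping a provisioning module over a count or a list. A hypothetical sketch against VMware vCenter follows; the talk does not show the real playbooks, so every hostname, credential variable, template, and naming scheme below is an invented placeholder:

```yaml
---
# Hypothetical sketch of bulk VM provisioning; vCenter address,
# credentials, datacenter, template, and VM names are placeholders.
- name: Provision a batch of VMs from a golden template
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone VMs from the template, ten at a time
      community.vmware.vmware_guest:
        hostname: "vcenter.example.com"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        datacenter: "DC1"
        template: "rhel8-golden"
        name: "app-vm-{{ item }}"
        state: poweredon
      loop: "{{ range(1, 11) | list }}"
```

Because each task run is idempotent, re-running the play after a partial failure only creates the VMs that are still missing, which is how a team of a few engineers can safely push out hundreds of builds in days rather than weeks.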
Development, test, training, QA, UAT, staging, production, and DR. Well, our incumbent managed service provider projected requiring 12 dedicated staff members and 16 weeks to process this request. Nope. There had to be a better way. By leveraging automation, the existing infrastructure engineering team was able to successfully provision all the required servers across three business days, in a total of 16 hours, and nine weeks ahead of the project plan schedule, resulting in a cost avoidance of over $850,000. That's amazing, isn't it? Well, since then, the infrastructure engineering team has continued to use automation to assist teams both inside and outside of IT by reducing hours spent on repetitive tasks. Automation has increased the speed and accuracy of account creation, security hardening and remediations, environment-wide configuration changes, and software agent installations, just to name a few. This implementation has had a significant positive impact on cost savings and cost avoidance, and on how quickly we can deliver and deploy infrastructure for projects. How were we able to implement this meaningful change? Well, I'll tell you: we started to evangelize and convert the naysayers to the wonderful world of automation. "Automate everything" was our mantra; even automate the automation. We'll eventually get there. It took a top-down approach to really accelerate use and adoption. I spoke to anyone and everyone I could, my VP did the same, and even my CIO, Jo Abernathy, started touting how important automation will be to the organization, its value, and how we can stay competitive, deploy faster, and deliver at the speed of innovation. Wow, just wow. With a top-down approach to automation, we are empowering teams throughout the business to focus on those areas that are ripe for automation. 
Repetitive, mundane tasks and tasks that introduce time delays are ideal candidates, and automating them allows these teams to focus on their core workload instead of spending time on those repetitive tasks. We are flaunting our automation successes within IT infrastructure to other departments in IT and to the business at large. These conversations are opening up new possibilities to empower team members to leverage automation to deploy and deliver more quickly to meet the demand of enterprise projects and initiatives. Since its inception, the team has delivered on every project request ahead of schedule for infrastructure builds. Think about that: every project request has been delivered ahead of schedule from an infrastructure perspective. That's fantastic. Well, automation is not new to my company, nor to yours. There are many tools, and we use scripting; there's a lot available. But the way we've been able to implement Ansible and make a meaningful impact quickly is new to the industry. We have been able to automate and reap significant cost avoidance in a short amount of time. Other similarly sized companies have been leveraging the same tool for longer and have not been able to accomplish as much as we did in as short a time. We had motivation. Since starting this initiative, we are averaging cost avoidance in excess of $250,000 monthly. As difficult as it is to implement something new in most companies, the team implemented this capability and way of thinking within months. Sheer dedication to the business objective and a can-do attitude allowed us to be extremely successful. Thank you very much for your time and for allowing me to tell my story.

Published Date : Oct 5 2020




 

>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I'd like to point you to my GitHub repo: you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation. I've got the Keynote file there, YAMLs, Dockerfiles, Compose files, all that good stuff. If you want to follow along, great; if not, go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to alleviate some of those questions was establish a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. 
I ask everyone in attendance: when was the last time you pulled an image and had 100% confidence that you knew what was inside it, where it was built, how it was built, and when it was built? You probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one interesting way we can prevent that is through the use of labels. We can use labels to address security, and to address some of the simplicity of how to run these images. So think of it kind of like self-documenting. Think of it also as an audit trail: image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically, what is the schema? It's just a key-value pair. All right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store them in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SVN, who cares? Where are the source files, and where's the Dockerfile that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a commit, and hopefully then to a person. How is it built? What if you wanted to play with it, do a git clone of the repo, and then build the Dockerfile on your own? Having a label specifically dedicated to how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? 
These all not only talk about continuous integration, CI, but also start to talk about security. Specifically: what server built it, the version control number, the version number, the commit number, again, how it was built, what the specific build number was, what that job number was in, say, Jenkins or GitLab. What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these labels? I've got a good example of policy enforcement in my demo. So let's look at some sample labels. Now originally, this idea came out of label-schema.org, and then it was modified into the Open Containers spec: org.opencontainers.image. There is a link on my GitHub page to the full reference. But these are some of the labels that I like to use, just as a kind of standardization. So obviously, authors is an email address, so now the image is attributable to a person; that's always good for security and reliability. Where's the source? Where's the version control that has the source, the Dockerfile, and all the assets? How it was built, the build number, the build server, the commit we talked about, when it was created, a simple description. A fun one I like adding in is the healthz endpoint. Now obviously, the health check directive should be in the Dockerfile, but if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's a simple declarative, and then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the first image itself? And conversely, the Kubernetes YAML? Well, actually, you can, and I have a demo to show you how to take advantage of that. So how do we create labels? Really, creating labels is a function of build time, okay? 
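As a rough illustration, the label set he walks through could be declared in a Dockerfile like this. Every value below is a placeholder invented for the sketch, not the labels from his actual repo; the healthz key in particular is a custom convention from the talk, not part of the OCI spec:

```dockerfile
# Hypothetical label block; all values are illustrative placeholders.
FROM alpine:3.12

LABEL org.opencontainers.image.authors="andy@example.com" \
      org.opencontainers.image.source="https://github.com/example/repo" \
      org.opencontainers.image.created="2020-10-05T00:00:00Z" \
      org.opencontainers.image.version="1.0.0" \
      org.opencontainers.image.title="demo-flask" \
      org.opencontainers.image.description="Demo app with full label metadata" \
      healthzendpoint="/healthz"
```

Using the org.opencontainers.image prefix for the standard keys, and a clearly separate name for custom ones, keeps the labels queryable with a single predictable prefix.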
You can't really add labels to an image after the fact. The way you do add labels is either through the Dockerfile, which I'm a big fan of, because it's declarative, it's in version control, and it's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from a static declaration to a more dynamic one with build arguments, and I'll show you in a little while how you can use a build argument at build time to pass in that variable. And then obviously, if you did it by hand, you could do a docker build --label key=value. I'm not a big fan of the third one; I love the first one and obviously the second one. Being dynamic, we can take advantage of some of the variables coming out of version control, or I should say, some of the variables coming out of our CI system. And that way, it self-documents effectively at build time, which is kind of cool. How do we view labels? Well, there are two major ways to view labels. The first one is obviously docker pull and docker inspect. You can pull the image locally and inspect it; the output is JSON, so you're going to use something like jq to crack it open and look at the individual labels. Another one, which I found recently, is Skopeo from Red Hat. This allows you to actually query the registry server, so you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation and you're trying to talk to a Kubernetes cluster and wanting to deploy apps in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label, and then base64 decode it, and then use it. 
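The base64 round trip he mentions can be sketched with nothing but coreutils. The manifest below is an invented stand-in for the Kubernetes YAML stored in the image label; the real demo pulls the encoded value out of the registry with skopeo first.

```shell
#!/bin/sh
# Minimal sketch of the encode/decode round trip from the talk,
# using only coreutils. The manifest is a hypothetical example.
KUBE_YAML='apiVersion: v1
kind: ConfigMap
metadata:
  name: demo'

# Encode the manifest into a single line of text, safe to store as a
# label value (-w0 is GNU base64's no-wrap flag), e.g. passed in via
# docker build --build-arg KUBE_B64="$ENCODED" ...
ENCODED=$(printf '%s' "$KUBE_YAML" | base64 -w0)

# Later, decode the label value back into usable YAML. In the demo
# this decoded stream is piped straight into: kubectl apply -f -
DECODED=$(printf '%s' "$ENCODED" | base64 -d)

printf '%s\n' "$DECODED"
```

The round trip is lossless, which is the whole point: the label is just an opaque string to Docker, but it decodes back into an exact copy of the manifest.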
So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode of the label itself, from skopeo talking to the registry. And what's interesting about this technique is that you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all these extra levels of abstraction; if you use it as a label with a kubectl apply, it's just built in. It's kind of like the KISS approach, to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. Okay, let's take a look at a demo. Before we actually get started, here's my repo. Let me actually go to the full repo. So here's the repo, right? And I've got my Jenkins pipeline, 'cause I'm using Jenkins for this demo. And in my demo flask directory, I've got the Dockerfile, my Compose file, and my Kubernetes YAML. So let's take a look at the Dockerfile, right? It's a simple Alpine image. The ARG statements are the build-time arguments that are passed in. Then the labels; again, I'm using org.opencontainers.image for most of them. There's a typo there. See if you can find it; I'll show you later. My source, build date, build number, commit. The build number and git commit are derived from Jenkins itself, which is nice. I can just take advantage of existing variables; I don't have to create anything crazy. And again, I've got my actual docker build command. Now this is just a label on how to build it. And then here's my simple Python setup: APK upgrade, remove the package manager, kind of some security stuff, a health check getting Python through, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline, and I have four major stages. In Build, what I do is the git clone, and then I do my docker build. 
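The ARG-to-LABEL pattern he describes, with build-time arguments from Jenkins flowing into dynamic labels, might look roughly like this. Image, argument, and label names are illustrative, not copied from his repo:

```dockerfile
# Hypothetical sketch of CI-driven dynamic labels. BUILD_NUMBER and
# GIT_COMMIT would be supplied by the CI system (e.g. Jenkins env vars).
FROM alpine:3.12

ARG BUILD_DATE
ARG BUILD_NUMBER
ARG GIT_COMMIT

LABEL org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      build_number="${BUILD_NUMBER}"
```

At build time the CI job passes the values in, something like: docker build --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" --build-arg BUILD_NUMBER="$BUILD_NUMBER" --build-arg GIT_COMMIT="$GIT_COMMIT" . so the resulting image self-documents which build produced it.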
From there, I actually tell the Jenkins StackRox plugin, which is what I'm using for my security scanning, to go ahead and scan; basically, I'm staging it to scan the image. I'm pushing the image up to Hub so that my StackRox security scanner can go ahead and scan it. I'm kicking off the scan itself. And then if everything's successful, I'm pushing it to prod. Now what I'm doing is just using the same image with two tags, pre-prod and prod. This is not exactly ideal; in your environment, you probably want to use separate registries, a non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins, and I've got a deliberate failure, and I'll show you why there's a reason for that. Let's go down and look at my StackRox report. And it says: required image label alert, right? Requesting that the maintainer add the required label to the image. So we're missing a label, okay? One of the things we can do is flip over and look at Skopeo, right? I'm going to do this just the easy way. So let's look at org.opencontainers.image.authors. Okay, see here it says "build signature"? That was the typo: we didn't actually pass in the build-time argument, we just passed in the word. So let's fix that real quick. That's the Dockerfile. Let's go ahead and put our dollar sign in there. First day with the fingers, you're going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can do another build. And there's number 12. As you can see, I've been playing with this for a little bit today. And while that's running, we can go ahead and look at the console output. Okay, so there's our image. 
And again, look at all the build arguments that we're passing into the build statement. We're passing in the date, and the date gets derived on the command line. With the build arguments, there's the base64 encoding of the Compose file, and here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom: layer exists, and successful. So here's where we can see no policy violations were found, marking the StackRox security plugin build step as successful, okay? So we're actually able to do policy enforcement, verifying that that label exists in the image. And again, we can look at the security report, and there are no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate the use of certain labels within our images. And let's flip back over to Skopeo and go ahead and look at it. So we're looking at the prod version again, and there it is: my email address. And that validated as valid for that policy. So that's kind of cool. Now, let's take it a step further. Let's take a look at all the image labels for a second; let me remove the dash org and make it pretty. Okay? So we have all of our image labels. Again, authors, build, commit number. Look at the commit number: it was built today, build number 12. We saw that, right? Build 12. So that's kind of cool, dynamic labels. Name, healthz, right? But what we're looking for is the Kubernetes label. So let's look at that label real quick. Okay, well, that doesn't really help us because it's encoded, but let's base64 -d it, let's decode it. And I need to put the -r in there, 'cause it doesn't like it otherwise. There we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply -f? Let's just apply it from standard in. So now we've actually used that label. 
From the image that we've queried with skopeo from a remote registry, we deploy locally to our Kubernetes cluster. So let's go ahead and look: everything's up and running, perfect. So what does that look like, right? So luckily, I'm using Traefik for Ingress, 'cause I love it, and I've got an object in my Kubernetes YAML called flask.docker.life. That's my Ingress object for Traefik. I can go to flask.docker.life, and I can hit refresh. Obviously, I'm not a very good web designer, 'cause of the background image and the text. We can go ahead and refresh it a couple of times; we've got Redis storing a hit counter, and we can see that our server name is round-robining. Okay? That's kind of cool. So let's recap a little bit about my demo environment. I'm using DigitalOcean, Ubuntu 19.10 VMs. I'm using K3s instead of full Kubernetes, full Rancher, full OpenShift, or Docker Enterprise. I think K3s has some really interesting advantages on the development side; it's kind of intended for IoT, but it works really well and it deploys super easily. I'm using Traefik for Ingress. I love Traefik. I may or may not be a Traefik ambassador. I'm using Jenkins for CI, and I'm using StackRox for image scanning and policy enforcement. One of the things to think about, though, especially in terms of labels, is that none of this demo stack is required. You can be in any cloud, you can be on CentOS, you can be in any Kubernetes. You can even be in Swarm if you wanted to, or Docker Compose. Any Ingress, any CI system (Jenkins, CircleCI, GitLab), it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, with the policy enforcement and things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable with a comparable product in that category. 
So I'd like to, again, point you guys to andyc.info/dc20; that'll take you right to the GitHub repo. You can reach out to me on any of the socials @clemenko or at andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels, and hopefully you guys can standardize labels in your organization and really take your images and their provenance to a new level. Thanks for watching. (upbeat music) >> Narrator: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Okay, welcome back everyone, to theCUBE's live coverage of AWS re:Invent 2019. This is theCUBE's 7th year covering Amazon re:Invent; it's their 8th year of the conference. I want to just shout out to Intel for their sponsorship of these two amazing sets. Without their support we wouldn't be able to bring our mission of great content to you. I'm John Furrier, with Stu Miniman. We're here with the chief of AWS, the chief executive officer Andy Jassy. Tech athlete in and of himself, three-hour keynotes. Welcome to theCUBE again, great to see you. >> Great to be here, thanks for having me guys. >> Congratulations on a great show, a lot of great buzz. >> Andy: Thank you. >> A lot of good stuff. Your keynote was phenomenal. You get right into it, you giddy-up right into it as you say; three hours, thirty announcements. You guys do a lot, but what I liked, the new addition the last year and this year, is the band, the house band. They're pretty good. >> Andy: They're good, right? >> They hit the Queen notes, so that keeps it balanced. So we're going to work on getting a band for theCUBE. >> Awesome. >> So if I have to ask you, what's your walk-up song, what would it be? >> There's so many choices, it depends on what kind of mood I'm in. But, uh, maybe Times Like These by the Foo Fighters. >> John: Alright. >> These are unusual times right now. >> Foo Fighters playing at the Amazon Intersect Show. 
>> Yes they are. >> Good plug, Andy. >> Headlining. >> Very clever. >> Always getting a good plug in there. >> My very favorite band. Well, congratulations on Intersect, you've got a lot going on. Intersect is a music festival; I'll get to that in a second. But I think the big news for me is two things: obviously, we had a one-on-one exclusive interview, and you laid out essentially what looked like it was going to be your keynote, and it was. Transformation- >> Andy: Thank you for the practice. (Laughter) >> John: I'm glad to practice, use me anytime. >> Yeah. >> And I appreciate the comments on JEDI on the record, that was great. But I think the transformation story's a very real one, but the NFL news you guys just announced, to me, was so much fun and relevant. You had the Commissioner of the NFL on stage with you talking about a strategic partnership. That is as top-down an aggressive goal as you could get: to have Roger Goodell fly to a tech conference to sit with you and then bring his team to talk about the deal. >> Well, ya know, we've been partners with the NFL for a while with the Next Gen Stats that they use on all their telecasts, and one of the things I really like about Roger is that he's very curious and very interested in technology. The first couple times I spoke with him, he asked me so many questions about ways the NFL might be able to use the Cloud and digital transformation to transform their various experiences, and he's always said: if you have a creative idea or something you think could change the world for us, just call me, he said, or text me or email me, and I'll call you back within 24 hours. 
And so, we've spent the better part of the last year talking about a lot of really interesting, strategic ways that they can evolve their experience, both for fans as well as for players. And the Player Health and Safety Initiative is so important in sports, and particularly important in the NFL given the nature of the sport, and they've always had a focus on it. But what you can do with computer vision and machine learning algorithms, and then building a digital athlete, which is really like a digital twin of each athlete, so you understand what it looks like when they're healthy, compare that to when it looks like they may not be healthy, and be able to simulate all kinds of different combinations of player hits and angles and different plays so that you could try to predict injuries and predict the right equipment you need before there's a problem, can be really transformational. So we're super excited about it. >> Did you guys come up with the idea, or was it a collaboration between you? >> It was really a collaboration. I mean, look, they are very focused on player safety and health, and it's a big deal for them. They have two main constituents, the players and the fans, and they care deeply about the players, and it's a hard problem in a sport like football, I mean, you watch it. >> Yeah, and I've got to say it does point out the use cases of what you guys are promoting heavily at the show here, SageMaker Studio, which was a big part of your keynote, where they have all this data. 
>> I think in almost every company, they know they have a lot of data, and there are always pockets of people who want to do something with it. But when you're going to make these really big leaps forward, these transformations, the things like Volkswagen is doing, where they're reinventing their factories and their manufacturing process, or the NFL, where they're going to radically transform how they do player health and safety, it starts top-down. And if the senior leader isn't convicted about wanting to take that leap forward and trying something different and organizing the data differently and organizing the team differently and using machine learning and getting help from us and building algorithms and building some muscle inside the company, it just doesn't happen, because it's not in the normal machinery of what most companies do. And so it almost always starts top-down. Sometimes it can be the Commissioner or CEO, sometimes it can be the CIO, but it has to be senior-level conviction or it doesn't get off the ground. >> And the business model impact has to be real. For the NFL, they know concussions are hurting their youth pipeline; this is a huge issue for them. This is their business model. >> They lose even more players to lower-extremity injuries. And so just the notion of trying to be able to predict injuries and, you know, the impact it can have on rules and the impact it can have on the equipment they use, it's a huge game changer when they look at the next 10 to 20 years. >> Alright, love geeking out on the NFL, but Andy, you know- >> No more NFL talk? >> Off camera, how about we talk? >> Nobody talks about the Giants being 2 and 10. >> Stu: We're both Patriots fans here. >> People bring up the undefeated season. >> So Andy- >> Everybody's a Patriots fan now. 
(Laughter) >> It's fascinating to watch, uh, you and your three-hour keynote, and Werner in his, you know, architectural discussion, really showing how AWS is extending its reach. You know, it's not just a place. For a few years people have been talking about, you know, Cloud as an operational model, not a destination or a location. But I felt it really was laid out as you talked about breadth and depth, and Werner really talked about, you know, architectural differentiation. People talk about Cloud, but there are a lot of differences between the visions for where things are going. Help us understand why. I mean, Amazon's vision is still a bit different from what other people talk about with this whole Cloud expansion, journey, put whatever tag or label you want on it, but you know, the control plane and the technology that you're building and where you see that going. >> Well, I think that, we've talked about this a couple times, we have two macro types of customers. We have those that really want to get at the low-level building blocks and stitch them together creatively however they see fit to create whatever's in their heads. And then we have the second segment of customers that say, look, I'm willing to give up some of that flexibility in exchange for getting 80% of the way there much faster, in an abstraction that's different from those low-level building blocks. And both segments of builders we want to serve and serve well, and so we've built very significant offerings in both areas. 
I think when you look at microservices, um, you know, some of it has to do with the fact that we have this very strongly held belief, born out of several years of Amazon, where, you know, for the first 7 or 8 years of Amazon's consumer business we basically jumbled together all of the parts of our technology in moving really quickly, and when we wanted to move quickly where you had to impact multiple internal development teams, it took so long, because it was this big ball, this big monolithic piece. And we got religion about that in trying to move faster in the consumer business and having to tease those pieces apart. And it really was a lot of the impetus behind conceiving AWS, where it was these low-level, very flexible building blocks that don't try and make all the decisions for customers; they get to make them themselves. And some of the microservices that you saw Werner talking about, just, you know, for instance, what we did with Nitro or even what we did with Firecracker, those are very much about us relentlessly working to continue to tease apart the different components. And even things that look like low-level building blocks, over time you build more and more features, and all of a sudden you realize they have a lot of things that are combined together that you wished weren't, that slow you down. And so Nitro was a complete reimagining of our hypervisor and virtualization layer to allow us both to let customers have better performance, but also to let us move faster and have a better security story for our customers. >> I've got to ask you the question around transformation, because all the data points are there, you've got all the references, Goldman Sachs on stage at the keynote, Cerner. I mean, healthcare is an amazing example, because that's demonstrating real value there; there's no excuse. 
I talked to someone who won't be named, last night in and around the area, who said the CIA has a cost bar like this, a budget like this, but the demand for mission based apps is going up exponentially, so there's a need for the Cloud. And so, you see more and more of that. What are your top down, aggressive goals to fill that solution base, because you're also a very transformational thinker; what are your aggressive top down goals for your organization, because you're serving a market with trillions of dollars of spend that's shifting, that's on the table. >> Yeah. >> A lot of competition now sees it too, they're going to go after it. But at the end of the day you have customers that have a demand for things, apps. >> Andy: Yeah. >> And not a lot of budget increase at the same time. This is a huge dynamic. >> Yeah. >> John: What are your goals? >> You know I think that at a high level our top down aggressive goals are that we want every single customer who uses our platform to have an outstanding customer experience. And part of that outstanding customer experience is that their operational performance and their security are outstanding, but also that it allows them to build, uh, build projects and initiatives that change their customer experience and allow them to be a sustainable, successful business over a long period of time. And then, we also really want to be the technology infrastructure platform under all the applications that people build. And we're realistic, we know that you know, the market segments we address with infrastructure, software, hardware, and data center services globally are trillions of dollars in the long term and it won't only be us, but we have that goal of wanting to serve every application and that requires not just the security and operational premise but also a lot of functionality and a lot of capability.
We have by far the most amount of capability out there and yet I would tell you, we have 3 to 5 years of items on our roadmap that customers want us to add. And that's just what we know today. >> And Andy, underneath the covers you've been going through some transformation. When we talked a couple of years ago, about how serverless is impacting things, I've heard that that's actually, in many ways, glue behind the two pizza teams to work between organizations. Talk about how the internal transformations are happening. How that impacts your discussions with customers that are going through that transformation. >> Well, I mean, there's a lot of- a lot of the technology we build comes from things that we're doing ourselves you know? And that we're learning ourselves. It's kind of how we started thinking about microservices, serverless too, we saw the need, you know, we would build all these functions that when some kind of object came into an object store we would spin up compute, all those tasks would take like, 3 or 4 hundred milliseconds, then we'd spin it back down and yet, we'd have to keep a cluster up in multiple availability zones because we needed that fault tolerance and it was- we just said this is wasteful and, that's part of how we came up with Lambda and you know, when we were thinking about Lambda people understandably said, well if we build Lambda and we build this serverless event-driven computing, a lot of people who were keeping clusters of instances aren't going to use them anymore, it's going to lead to less absolute revenue for us. But we, we have learned this lesson over the last 20 years at Amazon which is, if it's something that's good for customers you're much better off cannibalizing yourself and doing the right thing for customers and being part of shaping something.
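The pattern Jassy describes here, code that exists only while an object-store event is being handled and no idle cluster otherwise, is what Lambda productized. A minimal sketch of that shape in Python; the event fields follow the documented S3 notification structure, while the "processing" itself is purely illustrative:

```python
import json

def handler(event, context):
    """Invoked once per S3 event notification; runs for milliseconds,
    then the platform reclaims the compute -- no idle cluster to keep warm."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Illustrative work; a real function would fetch and transform the object.
        results.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(results)}

# Local invocation with a synthetic event (no AWS account required):
sample_event = {"Records": [
    {"s3": {"bucket": {"name": "demo-bucket"},
            "object": {"key": "uploads/photo.jpg"}}}
]}
print(handler(sample_event, None))
```

Run locally this just prints the processed-object list; in a real deployment the platform invokes `handler` per event and bills only for those few hundred milliseconds, which is exactly the waste Jassy says the clusters-on-standby model incurred.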
And I think if you look at the history of technology you always build things and people say well, that's going to cannibalize this and people are going to spend less money, what really ends up happening is they spend less money per unit of compute but it allows them to do so much more that they ultimately, long term, end up being more significant customers. >> I mean, you are like beating the drum all the time. Customers, what they say, we encompass the roadmap, I got that you guys have that playbook down, that's been really successful for you. >> Andy: Yeah. >> Two years ago you told me machine learning was really important to you because your customers told you. What's the next tranche of importance for customers? What's top of mind now, as you, look at- >> Andy: Yeah. >> This re:Invent kind of coming to a close, Replay's tonight, you had conversations, you're a tech athlete, you're running around, doing speeches, talking to customers. What's that next hill from if it's machine learning today- >> There's so much I mean, (weird background noise) >> It's not a soup question (Laughter) And I think we're still in the very early days of machine learning, it's not like most companies have mastered it yet even though they're using it much more than they did in the past. But, you know, I think machine learning for sure, I think the Edge for sure, I think that um, we're optimistic about Quantum Computing even though I think it'll be a few years before it's really broadly useful. We're very um, enthusiastic about robotics. I think the amount of functions that are going to be done by these- >> Yeah. >> robotic applications are much more expansive than people realize. It doesn't mean humans won't have jobs, they're just going to work on things that are more value added. We're believers in augmented and virtual reality, we're big believers in what's going to happen with Voice.
And I'm also uh, I think sometimes people get bored you know, I think you're even bored with machine learning already >> Not yet. >> People get bored with the things you've heard about but, I think just what we've done with the chips you know, in terms of giving people 40% better price performance over the latest generation of X86 processors. It's pretty unbelievable in the difference in what people are going to be able to do. Or just look at big data I mean, big data, we haven't gotten through big data where people have totally solved it. The amount of data that companies want to store, process, analyze, is exponentially larger than it was a few years ago and it will, I think, exponentially increase again in the next few years. You need different tools and services. >> Well I think we're not bored with machine learning, we're excited to get started because we have all this data from the video and you guys got SageMaker. >> Andy: Yeah. >> We call it the stairway to machine learning heaven. >> Andy: Yeah. >> You start with the data, move up, knock- >> You guys are very sophisticated with what you do with technology and machine learning and there's so much I mean, we're just kind of, again, in such early innings. And I think that, it was so- before SageMaker, it was so hard for everyday developers and data scientists to build models but the combination of SageMaker and what's happened with thousands of companies standardizing on it the last two years, plus now SageMaker Studio, giant leap forward. >> Well, we hope to use the data to transform our experience with our audience. And we're on Amazon Cloud so we really appreciate that. >> Andy: Yeah. >> And appreciate your support- >> Andy: Yeah, of course. >> John: With Amazon and get that machine learning going a little faster for us, that would be better. >> If you have requests I'm interested, yeah. >> So Andy, you talked about that you've got the customers that are builders and the customers that need simplification.
Traditionally when you get into the, you know, the heart of the majority of adoption of something you really need to simplify that environment. But when I think about the successful enterprise of the future, they need to be builders. Normally I would've said enterprises want to pay for solutions because they don't have the skill set, but if they're going to succeed in this new economy they need to go through that transformation >> Andy: Yeah. >> That you talked about, so, I mean, are we in just a totally new era, when we look back will this be different than some of these previous waves? >> It's a really good question Stu, and I don't think there's a simple answer to it. I think that a lot of enterprises in some ways, I think wish that they could just skip the low level building blocks and only operate at that higher level abstraction. That's why people were so excited by things like, SageMaker, or CodeGuru, or Kendra, or Contact Lens, these are all services that allow them to just send us data and then run it on our models and get back the answers. But I think one of the big trends that we see with enterprises is that they are taking more and more of their development in house and they are wanting to operate more and more like startups. I think that they admire what companies like Airbnb and Pinterest and Slack and Robinhood and a whole bunch of those companies, Stripe, have done and so when, you know, I think you go through these phases and eras where there are waves of success at different companies and then others want to follow that success and replicate it. And so, we see more and more enterprises saying we need to take back a lot of that development in house. And as they do that, and as they add more developers, those developers in most cases like to deal with the building blocks. And they have a lot of ideas on how they can creatively stitch them together.
>> Yeah, on that point, I want to just quickly ask you on Amazon versus other Clouds because you made a comment to me in our interview about how hard it is to provide a service to other people. And it's hard to have a service that you're using yourself and turn that around and the most quoted line of my story was, the compression algorithm- there's no compression algorithm for experience. Which to me, is the diseconomies of scale for taking shortcuts. >> Andy: Yeah. And so I think this is a really interesting point, just to add some color commentary, because I think this is a fundamental difference between AWS and others because you guys have a trajectory over the years of serving, at scale, customers wherever they are, whatever they want to do, now you got microservices. >> Yeah. >> John: It's even more complex. That's hard. >> Yeah. >> John: Talk about that. >> I think there are a few elements to that notion of there's no compression algorithm for experience and I think the first thing to know about AWS which is different is, we just come from a different heritage and a different background. We ran a business for a long time that was our sole business, that was a consumer retail business that was very low margin. And so, we had to operate at very large scale given how many people were using us but also, we had to run infrastructure services deep in the stack, compute, storage, and database, and reliable scalable data centers at very low cost and margins. And so, when you look at our business it actually, today, I mean it's a higher margin business than our retail business, a lower margin business than software companies, but at real scale, it's a high volume, relatively low margin business. And the way that you have to operate to be successful with those businesses and the things you have to think about and that DNA comes from the type of operators we have to be in our consumer retail business. And there's nobody else in our space that does that.
So, you know, the way that we think about costs, the way we think about innovation in the data center, um, and I also think the way that we operate services and how long we've been operating services as a company, it's a very different mindset than operating packaged software. Then you look at when uh, you think about some of the uh, issues in very large scale Cloud, you can't learn some of those lessons until you get to different elbows of the curve and scale. And so what I was telling you is, it's really different to run your own platform for your own users where you get to tell them exactly how it's going to be done. But that's not the way the real world works. I mean, we have millions of external customers who use us from every imaginable country and location whenever they want, without any warning, for lots of different use cases, and they have lots of design patterns and we don't get to tell them what to do. And so operating a Cloud like that, at a scale that's several times larger than the next few providers combined is a very different endeavor and a very different operating rigor. >> Well you got to keep raising the bar, you guys do a great job, really impressed again. Another tsunami of announcements. In fact, you had to spill the beans earlier with Quantum the day before the event. Tight schedule. I got to ask you about the music festival because, I think this is a very cool innovation. It's the inaugural Intersect conference. >> Yes. >> John: Which is not part of Replay, >> Yes. >> John: Which is the concert tonight. It's a whole new thing, big music act, you're a big music buff, your daughter's an artist. Why did you do this? What's the purpose? What's your goal? >> Yeah, it's an experiment. I think that what's happened is that re:Invent has gotten so big, we have 65 thousand people here, that to do the party, which we do every year, it's like a 35-40 thousand person concert now.
Which means you have to have a location that has multiple stages and, you know, we thought about it last year and when we were watching it and we said, we're kind of throwing, like, a 4 hour music festival right now. There's multiple stages, and it's quite expensive to set up that set for a party and we said well, maybe we don't have to spend all that money for 4 hours and then rip it apart because actually the rent to keep those locations for another two days is much smaller than the cost of actually building multiple stages and so we thought we would try it this year. We're very passionate about music as a business and I think our customers feel like we've thrown a pretty good music party the last few years and we thought we would try it at a larger scale as an experiment. And if you look at the economics- >> Give us the headliners real quick. >> The Foo Fighters are headlining on Saturday night, Anderson Paak and the Free Nationals, Brandi Carlile, Shawn Mullins, um, Willy Porter, it's a good set. Friday night it's Beck and Kacey Musgraves, so it's a really great set of um, about thirty artists and we're hopeful that if we can build a great experience that people will want to attend, that we can do it at scale and it might be something that both pays for itself and maybe, helps pay for re:Invent too over time and you know, I think that we're also thinking about it as not just a music concert and festival, the reason we named it Intersect is that we want an intersection of music genres and people and ethnicities and age groups and art and technology all there together and this will be the first year we try it, it's an experiment and we're really excited about it. >> Well I'm gone, congratulations on all your success and I want to thank you, we've been 7 years here at re:Invent, we've been documenting the history. You got two sets now, one set upstairs. So appreciate you.
>> theCUBE is part of re:Invent, you know, you guys really are a part of the event and we really appreciate you coming here and I know people appreciate the content you create as well. >> And we just launched CUBE365 on Amazon Marketplace built on AWS so thanks for letting us- >> Very cool >> John: Build on the platform. Appreciate it. >> Thanks for having me guys, I appreciate it. >> Andy Jassy, the CEO of AWS, here inside theCUBE, it's our 7th year covering and documenting the thunderous innovation that Amazon's doing. They're really doing amazing work building out the new technologies here in the Cloud computing world. I'm John Furrier, Stu Miniman, be right back with more after this short break. (Outro music)

Published Date : Sep 29 2020


Scott Delandy, Dell Technologies | CUBE Conversation, September 2020


 

>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCUBE conversation. >> Hi, I'm Stu Miniman, and welcome to a special CUBE conversation. We're going to be digging a little bit into the history as well as talking about the modern storage environment. Happy to welcome back to the program one of our CUBE alumni, someone I've actually known for many years, we worked together for a number of years. Scott Delandy is the Technical Director of Dell's Storage and Data Protection Division, of course with Dell Technologies. Scott, great to see you. >> Hey, Stu, it is so awesome to see you guys. Thank you for the opportunity to come and chat with you. Today we've got some really exciting stuff that we want to go through and I know you and I are probably going to have a little bit of an issue because I know when we get together, we always want to reminisce about, you know, the things that we've done and you know, the stuff that we've gotten to work on and as well as the cool stuff that's happening within technology today. So everybody buckle in, 'cause this is going to be cool. >> Unfortunately, you know, we're only a few miles away from each other in person, but of course, in these times we have to do it remote, but we'll walk side by side down memory lane for a little bit. >> Yes absolutely. >> You know, as I hinted, you and I both worked at a company that many people will remember, I always worry, Scott, you know, the younger people, you know, will be like, EMC, you know, who are they? Back when I started at EMC in 2000, it was, you know, you talked about Prime and DEC and some of the other companies here in Massachusetts that had been great and then been acquired or things that happened. So you even had a little bit, you've had a longer tenure at what is now Dell EMC, of course, you know, after the mega merger a couple of years ago.
So, talk a little bit about, you know, your journey, and we're going to be talking about PowerMax, which of course is the continuation of the long legacy of the Symmetrix platform. >> Yeah, it's crazy. So, I hit 30 years with EMC, and now with Dell, back in July. So it's been, you know, an amazing three, now going on three-plus, decades of being able to work with amazing technology, incredibly talented people within the organization, as well as some of the best and brightest when it comes to users and customers that actually deploy the technology. So it's been a tremendous ride and you know, I'm not planning on slowing down any time soon. Let's just keep going, man. >> Yeah. You talk about decades, Scott, it felt like 2020 has been a decade unto itself. (Scott laughs) So, but we, we talk about that history, Symmetrix really created, you know, that, that standalone storage business, you know, created a lot of technologies that helped drive a lot of businesses out there. Bring us up to speed on PowerMax, you know, what, where does that business fit in the portfolio? Got any good stuff for us on adoption here in 2020? >> Yeah. I mean, you, you kind of said it. So when Symmetrix was originally introduced, and that was kind of one of the older generation architectures of what we now know today as PowerMax, a lot has changed with respect to the platform in terms of the technology, the types of environments that we support, the data services that we provide. So it's been, you know, again, three-plus decades of evolution in terms of the technology, but kind of the concept of external storage, buying and deploying compute separate from the storage infrastructure, that was an unheard of concept back in 1990 when we first introduced Symmetrix. So this month, September, is actually the 30-year anniversary of when we first created that platform, and, you know, lots of things have changed, right?
It started as, you know, a mainframe platform and then we evolved into mainframe and open systems. And then we started looking at the adoption of things like client server, and then environments became virtualized, and, you know, throughout that entire history Symmetrix, and now PowerMax, has really been one of the core tenets in terms of leveraging the storage infrastructure to make a lot of those evolutions happen in terms of the types of applications, types of operating environments and just the entire ecosystem that goes around supporting an organization's applications and helping them run their business. Now where, you know, PowerMax comes into play today is that it's still considered the gold standard when it comes to high end technology, providing the reliability, the automation, the data services, the rich functionality that has made that platform the success that it continues to be. You know, one of the things that blows my mind is if you look at just the last earnings call from, you know, last month or a couple of months ago now, the PowerMax business is still growing, it grew at a triple-digit rate for that quarter. And, you know, you think of, you know, you look at kind of what's happening from a technology standpoint and, you know, external storage has been a pretty stable segment in terms of the infrastructure business, but still being able to see that type of growth, and just talking to users and, you know, hearing how much they continue to love the platform, how they continue to, you know, rely on the types of things that we're able to provide for their applications, for their businesses, just the tremendous amount of trust that's been built up with respect to that platform. It's cool to be a part of that, and to be able to hear those types of things from the people that actually use the products. >> Yeah.
One of the big changes during my time, you know, in the portfolio there, Scott, was of course the real emergence of server virtualization with VMware. I'd actually started working with VMware, you know, when I was at EMC, ahead of the acquisition. And then once the acquisition happened, there was a long maturation of storage in VMware environments. We kind of look back and say, you know, we spent a decade trying to fix and make sure that, you know, storage and networking could work well in those virtual environments. So we've got VMworld going on, understand you've got some news on the update, you know, that constant cadence of always making sure that the storage and the virtual environment work very well together. So, why don't you bring us up to date on the news. >> Yeah, so it's pretty exciting. So we are announcing some new software capabilities for the platform, as well as some new hardware enhancements, but basically the three focuses are a tighter integration with VMware specifically, by introducing new support for vVols and changing the way that we've been able to deploy and support vVols within the platform. We're also introducing new cloud capabilities. So being able to take your primary storage, your PowerMax system, and being able to extend that to leverage cloud deployments. So being able to consume the capacity a little bit differently, being able to support some really interesting use cases in terms of why somebody might want to take their primary tier one storage and connect that, and to be able to move some of those datasets into a cloud provider. And then the third part is some really innovative things happening around security, really around being able to provide additional support for data protection, especially for things like encrypted environments, while still being able to preserve the efficiencies that we've built into these storage platforms.
So those are kind of the three big things, and there's a lot of other what we would call giblets also associated with the launch. But those are really the big ticket items that I think people are talking about in terms of this release. >> Well, let's drill in a little bit there, Scott. So if we take the cloud piece, you know, the message, of course, we understand, you know, Dell and VMware have partnered very closely together. VMware very much is driving that, you know, hybrid and multicloud deployment out there. So when I talk to some of the product teams, it's, you know, that consistency of deployment, you know, say you take a VxRail with VMware VCF, that that similar environment, what I could do in a Google Cloud or an Azure. How do those cloud solutions that you talk about fit into that overall discussion? >> Well, when you look at something like vVols, right? So, vVols is a little bit of a change, or a newer way of being able to connect into an external storage platform. And one of the things that we're trying to solve with vVols is being able to provide better granularity in terms of the storage and the capacity being consumed at the individual VM level, but also being able to plug into the VMware ecosystem so that even though you have an external storage device connected into that environment, the way it gets managed, the way it gets provisioned, the way you set up replication, the way you recover things is completely transparent, because all of that is handled through the VMware software that sits above that. So it seems like a trivial exercise to just, you know, plug in a storage system and kind of away you go, but there's heavy lifting required in order to support that, because you've got to, in some cases, make changes to the things that you're doing on the back end storage side, as well as work with the ecosystem provider, in this case VMware.
They have to make changes so that they can support some of the functionality and some of the rich data services that you're able to provide under the covers. Right. I'll give you a great example. So one of the things that we have the ability to do today is when we plug into a VMware environment with a PowerMax, we can support up to 64,000 devices, right? And you just try and get your head around that, 64,000 devices. What does that even mean? It sounds like a lot. Is that just a marketing number, and nobody would ever, you know, get to that level in terms of the number of devices that you would have to support? But one of the, kind of the technical challenges that we wanted to be able to solve is that when you deploy a virtual machine, each individual virtual machine consumes minimally three vVols in order to support that. And sometimes dozens and dozens of vVols, especially if you're looking at doing things like copies or making snapshots of that. So the ability to scale to that large number of vVols and being able to support that in a single storage system is very powerful for our users, especially folks out there that are looking to do massive levels of consolidation, where they really want to collapse the infrastructure down. They want to get to as few physical things to manage as possible, which means you're spreading, you know, hundreds, thousands of these virtual machines into a single piece of infrastructure. So scale really does matter, especially for the types of users that would deploy a PowerMax in their environment, because of, again, the things that they're trying to do from an IT perspective, as well as the things that they need to do in order to be able to support their businesses. >> Yeah. Well, Scott, absolutely scale is such an important piece of the overall discussion today. It means different things to different people.
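The consolidation figures Delandy just quoted are easy to sanity-check. Taking the numbers as stated, 64,000 devices per array and a minimum of three vVols per VM, the ceiling works out as follows (a back-of-the-envelope sketch, not an official sizing tool):

```python
MAX_DEVICES = 64_000       # vVol devices supported per array (figure quoted above)
MIN_VVOLS_PER_VM = 3       # minimum vVols a single VM consumes (figure quoted above)

# Theoretical ceiling if every VM used only the minimum:
max_vms = MAX_DEVICES // MIN_VVOLS_PER_VM
print(f"Ceiling at the minimum: {max_vms:,} VMs")

# Copies, snapshots, and extra virtual disks multiply the per-VM count,
# pulling the realistic ceiling down:
for vvols_per_vm in (3, 6, 12, 24):
    print(f"{vvols_per_vm:>3} vVols/VM -> {MAX_DEVICES // vvols_per_vm:,} VMs")
```

Even at a dozen vVols per VM, the array still has headroom for thousands of machines, which is why the 64,000 figure matters for the "hundreds, thousands of VMs on a single piece of infrastructure" consolidation case rather than being a marketing number.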
It could mean you're massively scaling out like the hyperscalers, there's the edge discussion of, you know, small scale, but lots of copies. Talk to me about scale when it comes to those mission critical applications. So, you know, I think about the solutions and data services that you're talking about, of course, you know, EMC, the Symmetrix, really helped create that category with things like SRDF, TimeFinder back in the day. So, what are you hearing today, what's most important for critical applications? >> So it really, excellent point. It really comes down to automation, right? Where, you know, you think of some of these large environments, and we have users out there today that will have tens of thousands of virtual machines running in a single system. And you know, the ability to manage those, you can't find enough human beings, as well as, you know, the ability to keep up with all the changes that happen in that environment. It's just something that cannot physically be done in a manual way. So having that environment as automated as possible is really important, but it's not just automation, it's being able to automate at scale, right? So if I have 10,000 VMs and I want to go ahead and make a change in the environment, going through and making those changes VM by VM by VM is incredibly impractical. So being able to plug into the environment and being able to have hooks or APIs into the interfaces that sit on top of that, that's where a lot of the value comes in, right. It's really that automation, because again, tens of thousands of VMs, 64,000 devices, cool stuff, but you're not going to manage those individually. So how do you take that infrastructure and how do you literally make it invisible to everybody around it, so that when you have something that you want to do, you just worry about the outcome, you don't worry about the individual steps required in order to get to that outcome. >> Yeah.
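The automate-at-scale point above, hooks and APIs instead of 10,000 one-at-a-time edits, can be sketched in a few lines. Everything here is illustrative: `apply_batch` is a placeholder for whatever real management-API endpoint (vCenter, the array's REST interface, and so on) would receive the call, not an actual SDK function.

```python
def update_vm_storage_policy(vm_names, policy, batch_size=500):
    """Bulk-change sketch: chunk the whole fleet into batched API calls,
    so the operator states the outcome once instead of repeating steps
    per VM. apply_batch stands in for a hypothetical management endpoint."""
    def apply_batch(batch):
        # A real implementation would POST {policy, batch} to the
        # management API here and check the response.
        return len(batch)

    updated = 0
    for i in range(0, len(vm_names), batch_size):
        updated += apply_batch(vm_names[i:i + batch_size])
    return updated

fleet = [f"app-vm-{i:05d}" for i in range(10_000)]
print(update_vm_storage_policy(fleet, "gold"))  # 10000
```

The design point is the one Delandy makes: the caller expresses an outcome ("these VMs get this policy") and the layers underneath handle the individual steps.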
What you said is so important, Scott. I love, when PowerMax first came out, I got to talk with some of the engineers, and, you know, the comment I made is, we've been talking about automation for decades. You know, Scott, you probably know better than most, with some of the previous generations, you know, automation would be discussed, but it's different now. And what they really said is, it's so much about, you know, machine scale and being able to, we've gone beyond human scale. Humans could not keep up with the amount of changes and how we do things, and it's not just some scripts that you build. So there really is that kind of machine learning built into what we're talking about. The other thing we've talked about for a long time and has always been critical in your space, and you hit it up before, security. So, you know, give us the discussion of, you know, security in PowerMax, how that fits into a company's overall security stance. >> Well, I mean, at a very high level, I can confidently say that there is a heightened level of awareness around security, especially for the types of applications and the types of data that we would typically support within these platforms. So it is very much a top of mind discussion. And, you know, one of the things that people are looking at in terms of how do I protect that data is it needs to be encrypted, right? And you know, we've been doing encryption for many, many years. Right? We first introduced that through a feature called D@RE, which is Data At Rest Encryption, which would allow us at the individual drive level to encrypt it. So if that drive was ever physically removed, either to be serviced or, you know, someone just lost the drive, you wouldn't have to worry about that data being kind of out in the wild and being able to be accessed by somebody, because there was an encryption key. And for many, many years, that became a check in the box requirement.
You cannot put your gear in my data center unless I can assure that the data being stored on that system is encrypted, right? What's changing now is that just being able to encrypt the data on the array is no longer good enough for some environments. The data needs to be encrypted from the host, from the moment it's written by the application, all the way through the server, the memory, the networks, everything, the controllers, right to the back-end storage. So it's not just encrypting the data that's at rest, but encrypting the data end to end, right? And one of the challenges that you have is that when you are writing encrypted data to a storage platform, especially an all-flash storage platform, one of the data services that provides a lot of value is the ability to do data reduction, through a combination of things like data deduplication, and compression, and pattern recognition. There's all this kind of cool stuff that happens under the covers. So we will typically see a three-to-one, four-to-one data reduction for a particular application. But when that data is encrypted, you no longer get that efficiency: it won't dedupe, it won't compress. That kind of changes the sort of economic paradigm, if you would, as you look at these external storage devices. So we've been talking to customers, and we had one customer in particular come to us. They were a large insurance company, and one of their biggest customers came to them and said, our new policy is that all of our employee data has to be encrypted, encrypted end to end. And so, as they looked at how they were going to address that requirement, they quickly realized that in order to do that, they were going to need to increase the amount of storage that they have three to four X, because this data that they were getting really high deduplication and compression up against, they were no longer going to get that.
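The "encrypted data won't compress" point is easy to demonstrate. The sketch below is illustrative only: it uses zlib as a stand-in for an array's data-reduction engine, and a toy SHA-256 keystream as a stand-in for real host-side encryption (it is not cryptographically sound, it just makes the bytes look random).

```python
# Repetitive plaintext shrinks dramatically under compression; the same data
# after encryption looks like random noise and barely compresses at all,
# which is why host-side encryption defeats storage-side data reduction.
import hashlib
import zlib

plaintext = b"employee-record;" * 4096  # highly repetitive, like real app data

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # NOT real cryptography: an XOR keystream built from SHA-256, just
    # enough to make the output statistically random for this demo.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

ciphertext = toy_encrypt(plaintext, b"secret-key")

plain_ratio = len(zlib.compress(plaintext)) / len(plaintext)
cipher_ratio = len(zlib.compress(ciphertext)) / len(ciphertext)
print(f"plaintext compresses to {plain_ratio:.1%} of original size")
print(f"ciphertext compresses to {cipher_ratio:.1%} of original size")
```

The plaintext collapses to well under a tenth of its size, while the ciphertext stays at essentially 100%, which is the three-to-four-X storage penalty the customer in the story was facing.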
So what we did is we looked at, well, what are ways that we can preserve the data efficiencies, the data reduction on the storage side, while still being able to meet the requirement to encrypt that data? So one of the new features that we're introducing within PowerMax is the ability to do end-to-end encryption while still preserving the efficiencies. So I can turn encryption on all the way at the host level. I can write that data into the PowerMax, and the PowerMax has access to the encryption keys that are on the host. It has the ability to decrypt that data in line, so there's no bump in the wire, there's no performance impact, apply the data reduction to it, and then re-encrypt the data as we're writing it out to the back end. So it's a hugely important feature for IT organizations that are just now getting their heads around this emerging requirement that it's not just the stuff that's at rest that needs to be encrypted, it's the data end to end in that process. So big challenge there, and it really is one of the innovations that we're pushing in order to basically meet that requirement for this, you know, set of users out there that see this as either something that they need today, or an evolving requirement where they want to put infrastructure in place. So if they're not doing it today, but they see maybe a couple of years down the line that it's something they're going to need to do, they have the ability to enable that feature on the storage itself.
>> Well, so Scott, 30 years of innovation driving through this. You know, first of all, I hope, if you haven't planned it already, you need to get one of those Symmetrix refrigerators that I saw from back in the day, you know, wheel that out to the parking lot of where our old spot used to be, a sign of the times, you know, it used to be a bar a few times, now, you know, an organic sushi place, but a socially distanced gathering to celebrate. But give us a little look forward, you know, 30 years. I know you're not resting on your laurels, always moving forward. So what would we expect to see from PowerMax, you know, going forward? >> So, two things. Number one, the person that came up with that idea of the, what we internally refer to as the V fridge, was an absolute genius. Just, you know, I would say that person was a genius. Second thing is, in terms of, you know, what we see going forward, I mean, one of the top-of-mind discussions for a lot of users is cloud, right? How do I have a cloud strategy? I know that I have applications that I am going to continue to need to run in my, what we'll call a quote-unquote traditional data center, just because of the sensitivity of the application, just the predictability that I need around that. I need to basically control that, and I have the economics in place where that becomes a really cost-effective way of supporting those types of workloads. But that said, there are other ways that I can consume storage infrastructure that don't require me to go ahead and buy a storage system and deploy it in a data center that I own. So users want to basically be able to explore that as an option, but they want to really understand what's the right use case for that.
So one of the things that we're also introducing within PowerMax, and we expect there to be a lot of interest and definitely a solid uptake in terms of adoption, is the ability to connect a PowerMax into a cloud, right? So this could be a Dell ECS platform, it could be Amazon S3, it could be Microsoft Azure. So there's a lot of flexibility in terms of the type of cloud connectivity that I can support. But as we looked at, you know, what do we want to do? We don't want to just, you know, connect into a cloud, because that by itself doesn't mean anything, right? So we need to understand, you know, what's the right use case, right? So when we talked to a lot of our users, they had their storage systems, and what they were doing is using a lot of capacity for things like snapshots, right? Creating point-in-time copies of their applications for a variety of reasons: doing those for database checkpoints, doing those to support testing and development environments, doing those because they wanted to make a copy and do some sort of offline processing up against that. A very mature, very well-established concept of making copies called snapshots. And we have some users out there that are very heavy consumers of snapshots. In some cases, 25-30% of the storage that they're using is being consumed for snapshots. And what the requirement was is, hey, if I could free up that space: I take these snapshots, and maybe I'll use them within the first couple of days, couple of weeks, but then I want to keep those snaps, and I don't really need to keep them on my primary tier one storage.
Maybe if I could offload those to another type of storage that's either more cost effective, allows me to consume it on demand, gives me the ability to free up those resources so that I could use this capacity that I already own for other things that are growing within the environment, that would be something that I would be interested in. So we heard that requirement, and, you know, from a product management standpoint, when you look at developing new products, new capabilities, there are kind of three things that you always want to do. Number one, you want to identify what is the requirement, what is the use case, what is the problem that you're trying to solve? You want to make sure you understand that really well, and you build a technology that's designed to do that in a very good and efficient way. So that's number one. Number two is you want to make it easy to deploy, right? We don't want to create an environment that's very fragile, where you need, you know, specialized skills to go in there and deploy it. It's literally firing up the application, putting in the IP addresses for the S3 storage that you want to connect to, and then away you go, your setup is done. Really, really simple setup. But the third thing, and really, you know, one of the more important things is, what's the user experience, right? Is this something separate? Is this managed as a vApp? Is this something where I have to, you know, click on another application, fire up another screen? So you want to take the management of that data service and build it right into the platform itself. So with the cloud snapshot capability that we're introducing, that's exactly what we're doing. We've identified a solid use case that we knew a lot of customers out there were going to be very interested in understanding: what they can do with this, and what type of new flexibility it can provide. Number two, making it super simple to deploy.
Matter of fact, it's included with the PowerMax. You buy the PowerMax, and that software functionality, that capability, is included with the platform. So there's not even an additional licensing charge required to do that. It's included with the storage. And number three, from an ease-of-use perspective: I create a snapshot, and I have the option. Do I want that snapshot to live on the array where I created it? Or do I want to take that snapshot and push it off onto that provider, whether it's an ECS in my data center or whether it's something sitting over in Amazon AWS? But really easy to basically deploy. And what we plan to do is to take this capability, which we've narrowed down to a very specific use case in order to make sure that we have a clear idea of what the benefits are in terms of why users would want to deploy it, and look at other things, because there are other opportunities that we have to expand that to as the capability matures and as we start to see adoption really take off. >> Scott, great to catch up with you. Thanks so much for helping us, you know, look down memory lane, as well as look at the new pieces today and where we're going in the future. >> Stu, always a pleasure. Thanks a lot. Great to talk to you again, as always. And hopefully we can get to do this again sometime soon, maybe in a real kind of physical setting, where, you know, we're not separated by, you know, a couple of counties and having to go to the West Coast and come back here, but, you know, actually in a similar physical location. >> Definitely. We all hope for that in the future, that we can get everybody back together. In the meantime, we have all the virtual coverage; be sure to check out thecube.net, of course, with all theCUBE conversations linked on the front page, as well as shows like VMworld that we alluded to. I'm Stu Miniman, and thank you for watching. (upbeat music)

Published Date : Sep 29 2020



Andy


 

>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I like to point you to my GitHub repo, you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation, I've got the Keynote file there. YAMLs, I've got Dockerfiles, Compose files, all that good stuff. If you want to follow along, great, if not go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've spoken, I had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. I ask everyone in attendance, when was the last time you pulled an image and had 100% confidence, you knew what was inside it, where it was built, how it was built, when it was built, you probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can kind of prevent that is through the use of labels. We can use labels to address security, address some of the simplicity on how to run these images. 
So think of it kind of like self-documenting. Think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically, what is the schema? It's just a key-value, all right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store them in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SVN, who cares? Where are the source files, where's the Dockerfile that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a commit, and hopefully then to a person. How is it built? What if you wanted to play with it and do a git clone of the repo and then build the Dockerfile on your own? Having a label specifically dedicated to how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? These all not only talk about continuous integration, CI, but also start to talk about security. Specifically, what server built it. The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these labels? I've got a good example of policy enforcement in my demo.
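The "encode anything into a label" idea above is just base64 over a string value. A minimal sketch, assuming a small Compose file as the payload: the `org.zdocker.compose` key follows the custom namespace used in the demo, but the exact key name and the Compose contents here are illustrative, not taken from the real repo.

```python
# Base64-encode a Compose file so it can ride along inside an image label,
# then round-trip it to show any consumer of the metadata can recover it.
import base64

compose_yaml = """\
version: "3.8"
services:
  flask:
    image: example/flask-demo:latest
    ports:
      - "5000:5000"
"""

encoded = base64.b64encode(compose_yaml.encode()).decode()
labels = {
    "org.opencontainers.image.authors": "andy@stackrox.com",
    "org.zdocker.compose": encoded,  # hypothetical custom label key
}

# Decode side: anyone who can read the image labels gets the file back.
decoded = base64.b64decode(labels["org.zdocker.compose"]).decode()
assert decoded == compose_yaml
```

Because a label value is just a string, this works for any text payload: a Compose file, a Kubernetes manifest, or anything else you want to travel with the image.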
So let's look at some sample labels. Now originally, this idea came out of label-schema.org, and then it was modified to opencontainers, org.opencontainers.image. There is a link in my GitHub page that links to the full reference. But these are some of the labels that I like to use, just as kind of a standardization. So obviously, author is an email address, so now the image is attributable to a person; that's always kind of good for security and reliability. Where's the source? Where's the version control that has the source, the Dockerfile and all the assets? How it was built, build number, build server, the commit, we talked about, when it was created, a simple description. A fun one I like adding in is the healthz endpoint. Now obviously, the health check directive should be in the Dockerfile, but if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's a simple declarative, and then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the image itself? And conversely the Kubernetes YAML? Well, actually, you can, and I have a demo to show you how to take advantage of that. So how do we create labels? Really, creating labels is a function of build time, okay? You can't really add labels to an image after the fact. The way you do add labels is either through the Dockerfile, which I'm a big fan of, because it's declarative, it's in version control, it's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static declaration to something more dynamic with build arguments. And I'll show you in a little while how you can use a build argument at build time to pass in that variable.
And then obviously, if you did it by hand, you could do a docker build --label key=value. I'm not a big fan of the third one; I love the first one and obviously the second one. Being dynamic, we can take advantage of some of the variables coming out of version control, or I should say, some of the variables coming out of our CI system. And that way, it self-documents effectively at build time, which is kind of cool. How do we view labels? Well, there are two major ways to view labels. The first one is obviously a docker pull and docker inspect. You can pull the image locally, you can inspect it, and obviously it's going to output as JSON, so you're going to use something like jq to crack it open and look at the individual labels. Another one which I found recently was Skopeo from Red Hat. This allows you to actually query the registry server, so you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation, and you're trying to talk to a Kubernetes cluster and wanting to deploy apps in a very simple manner, okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label, and then base64 decode it, and then use it. So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode of the label itself, from skopeo talking to the registry. And what's interesting about this technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all these extra levels of abstraction; if you use it as a label with a kubectl apply, it's just built in. It's kind of the KISS approach, to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard.
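The inspect-then-decode flow described above can be sketched without a live registry. The sketch below fabricates a docker-inspect-style JSON document in memory (the `org.zdocker.kubernetes` key and the tiny manifest are illustrative, mirroring the demo's convention), then pulls out the Labels map and decodes the embedded manifest, much like piping `skopeo inspect` through `base64 -d` into `kubectl apply -f -`.

```python
# Parse an inspect-style JSON payload, read the Labels map, and
# base64-decode an embedded Kubernetes manifest from a label value.
import base64
import json

k8s_yaml = "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: flask-demo\n"

# Stand-in for `docker inspect` / `skopeo inspect` output.
inspect_output = json.dumps([{
    "Config": {
        "Labels": {
            "org.opencontainers.image.version": "1.0",
            "org.zdocker.kubernetes":
                base64.b64encode(k8s_yaml.encode()).decode(),
        }
    }
}])

labels = json.loads(inspect_output)[0]["Config"]["Labels"]
manifest = base64.b64decode(labels["org.zdocker.kubernetes"]).decode()
print(manifest.splitlines()[1])  # kind: Namespace
```

In the real demo the JSON comes from the registry over the network; the decoding step is identical, which is why no Helm chart or extra tooling is needed on the consuming side.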
Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started: here's my repo. Let me actually go to the full repo. So here's the repo, right? And I've got my Jenkins pipeline, 'cause I'm using Jenkins for this demo. And in my demo flask directory, I've got the Dockerfile, I've got my Compose and my Kubernetes YAML. So let's take a look at the Dockerfile, right? So it's a simple Alpine image. The ARG statements are the build-time arguments that are passed in. Label, so again, I'm using org.opencontainers.image.blank for most of them. There's a typo there. Let's see if you can find it; I'll show you it later. My source, build date, build number, commit. Build number and git commit are derived from Jenkins itself, which is nice. I can just take advantage of existing URLs; I don't have to create anything crazy. And again, I've got my actual docker build command. Now this is just a label on how to build it. And then here's my simple Python, APK upgrade, remove the package manager, kind of some security stuff, health check, getting Python through, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline, and I have four major stages. I have build, and here in build, what I do is I actually do the git clone, and then I do my docker build. From there, I actually tell the Jenkins StackRox plugin, so that's what I'm using for my security scanning, to go ahead and scan; basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Basically, I'm pushing the image up to Hub such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself, and then if everything's successful, I'm pushing it to prod. Now what I'm doing is I'm just using the same image with two tags, pre-prod and prod.
This is not exactly ideal; in your environment, you probably want to use separate registries, a non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins, and I've got a deliberate failure, and I'll show you why there's a reason for that. And let's go down. So I have a StackRox report; let's look at my report. And it says required image label alert, right? Request that the maintainer add the required label to the image, so we're missing a label, okay? One of the things we can do is flip over and look at Skopeo, right? I'm going to do this just the easy way. So let's look at org.opencontainers.image.authors. Okay, see here, it says build signature? That was the typo; we didn't actually pass in the build-time argument, we just passed in the word. So let's fix that real quick. That's the Dockerfile. Let's go ahead and put our dollar sign in there. First day with the fingers, you're going to love it. And let's go ahead and commit that, okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. As you can see, I've been playing with this for a little bit today. And while that's running, we can go ahead and look at the console output. Okay, so there's our image. And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date, and the date gets derived on the command line. With the build arguments, there's the base64 encoding of the Compose file, and here's the base64 encoding of the Kubernetes YAML. We do the build, and then let's go down to the bottom: layer exists, and successful. So here's where we can see no system policy violations were found, marking the StackRox security plugin build step as successful, okay?
So we're actually able to do policy enforcement, checking that that label exists in the image. And again, we can look at the security report, and there are no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate the use of certain labels within our images. And let's flip back over to Skopeo and go ahead and look at it. So we're looking at the prod version again, and there it is, my email address. And that validated as valid for that policy. So that's kind of cool. Now, let's take it a step further. Let's go ahead and take a look at all the labels for a second; let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, authors, build, commit number, look at the commit number. It was built today, build number 12. We saw that, right? So that's kind of cool, dynamic labels. Name, healthz, right? But what we're looking for is the org.zdocker.kubernetes label. So let's look at that label real quick. Okay, well, that doesn't really help us because it's encoded, but let's base64 dash D, let's decode it. And I need to put the dash r in there, 'cause it doesn't like it otherwise. There we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply dash f? Let's just apply it from standard in. So now we've actually used that label from the image that we've queried with skopeo, from a remote registry, to deploy locally to our Kubernetes cluster. So let's go ahead and look: everything's up and running, perfect. So what does that look like, right? Luckily, I'm using traefik for Ingress, 'cause I love it. And I've got an object in my Kubernetes YAML called flask.docker.life. That's my Ingress object for traefik. I can go to flask.docker.life, and I can hit refresh. Obviously, I'm not a very good web designer, 'cause of the background image and the text.
We can go ahead and refresh it a couple times; we've got Redis storing a hit counter, and we can see that our server name is round-robining, okay? That's kind of cool. So let's recap a little bit about my demo environment. For my demo environment, I'm using DigitalOcean, Ubuntu 19.10 VMs. I'm using K3s instead of full Kubernetes, full Rancher, full OpenShift or Docker Enterprise. I think K3s has some really interesting advantages on the development side; it's kind of intended for IoT, but it works really well and it deploys super easy. I'm using traefik for Ingress. I love traefik. I may or may not be a traefik ambassador. I'm using Jenkins for CI, and I'm using StackRox for image scanning and policy enforcement. One of the things to think about, though, especially in terms of labels, is that none of this demo stack is required. You can be in any cloud, you can be on CentOS, you can be in any Kubernetes. You can even be in Swarm, if you wanted to, or Docker Compose. Any Ingress, any CI system: Jenkins, Circle, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement, things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable with any comparable product in that category. So I'd like to, again, point you to andyc.info/dc20; that'll take you right to the GitHub repo. You can reach out to me at any of the socials, @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels, and hopefully you can standardize labels in your organization and really take your images and the image provenance to a new level. Thanks for watching. (upbeat music)

Published Date : Sep 28 2020



Charles Giancarlo, Pure Storage and Murli Thirumale, Portworx | CUBE Conversation, September 2020


 

From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi everybody, this is Dave Vellante of theCUBE, and we have some news for you: Pure Storage has acquired Portworx, the Kubernetes specialist, for 370 million dollars in an all-cash transaction. Charlie Giancarlo is here, he's the CEO of Pure Storage, and he's joined by Murli Thirumale, who is the CEO of Portworx. Gentlemen, good to see you, thanks for coming on. >> Thank you, Dave, thanks for having us. >> So Charlie, an all-cash transaction north of 300 million, your biggest acquisition ever. Give us the hard news. >> Yeah, well, the hard news is easy news for our customers. We're bringing together two great companies: Pure, as you know, the leader in technology in data storage and management, and we're bringing into our team the Portworx team, which has been the leader in container-orchestrated storage systems. And it really is going to match, you know, the existing and legacy hardware and application environment to the new environment of containers, and we couldn't be more excited. >> So tell us, you know, what was the rationale, the sort of thesis behind the acquisition? What are you hoping to accomplish, Charlie? >> Yeah, you know, containers is the way that applications are going to be developed in the future, with no doubt. And containers utilize storage differently than traditional application environments, whether those are VMs or even bare-metal application environments, and because of that, it's a very new way of handling data management. The other thing we saw was a philosophy within Portworx very similar to Pure's, of building cloud everywhere and making it look the same, whether it's in a private data center or in the public cloud environment. And so by bringing these two things together, we create a very consistent environment for
customers whether they're utilizing and going with their existing application environment or with the new container environment for their new applications so merlin let me go to you first of all congratulations you know this isn't your your your first uh nice exit we we've known each other for a long time so so that's fantastic for you and the team uh so so bring us up to date on kind of where the company you know started and and where it's gone and and why you feel like this is such a good fit and a good exit for portworx well let's start with the company you know we've been uh at this for uh five and a half years almost six now and we started with these the very premise that that as containers were beginning to be deployed and apps started to kind of be seen everywhere containerized that data agility needed to match the app agility that people were getting from containers and that was something that was missing and so one of the things we did was really kind of take an entirely different approach to storage we turned kind of storage on its head and and designed it from the app down and effectively what we did was leverage kubernetes which was being used really until then to orchestrate really just the container part of the of this system to start orchestrating data and storage as well so northbound you know we containers are being orchestrated or orchestrated by kubernetes to manage the apps and southbound portworx now added the ability to manage data with kubernetes and what that's resulted in dave is that you know uh in in the last several years we've gained 160 customers uh household names right comcast t-mobile lufthansa ge roblox uh rbc who have all sort of deployed us in production and and really kind of built a leadership position in the ability to aid digital transformation uh of you know which customers are going through with containers hey guys i wonder if you could bring up that the chart uh i want to just introduce some etr data here so so this is one 
of our favorite views x y view the vertical axis is spending momentum when what we call net score higher the better and the vert and the horizontal axis is is market share and you can see i've outlined with that little pink area container orchestration and container platforms and you can see it's very elevated right there with machine learning and ai a little bit above cloud computing right there with robotic process automation this is the april survey of 1200 uh respondents uh the july survey you know robotic process automation bumps up a little bit which changes the shape but i wanted to show this picture to really explain to our audience the you know the popularity and this is where people are investing and charlie you can see storage kind of you know right there in the in the middle and you it seems to me you're now connecting the dots to containers which are gonna disperse everywhere we often think of containers sometimes as a separate thing but it's not i mean it's embedded into the entire stack i wonder if you can talk containers are just the next generation way of of building applications right and one of the great things about uh containers when you build an app on containers it becomes what's known as portable you know it can operate in the cloud it can operate on your own hardware inside your own data center and of course pure is known for making data portable as well between both private data centers and hyperscalers such as aws and uh and azure so by bringing this together making it possible not just for as we talked about container based applications but also for existing uh application environments whether those are vm or bare metal you know we create a very flexible portable environment i wonder if we could talk merely about you know just sort of the evolution of i mean vms and then and obviously containers the you know the virtual machines when we were spinning them up in the early days storage was like the second class citizen and then through a 
series of integrations and, you know, hard work, you had storage much more native. But every VM is kind of fat, right? It's sharing the same, or has its own, operating system. My understanding is containers can share a single operating system. So first of all, is that right? And where does storage fit in with containers? We think of them, at least in the early days, as ephemeral, but you're solving a different problem, of persistence. Maybe talk about that problem that you're solving. >> Sure, Dave. I think you characterized this the right way. VMs have dominated the world of infrastructure for the last 10 to 15 years, but what is really happening here is a little bit more profound. If you think about it, this is the transformation of the data center from being very machine-centric, which is the backward-looking view of the world, to being much more application-centered going forward. And this is being accomplished not just by what Charlie talked about, applications being deployed in containers, but by the evolution of using Kubernetes as the new control plane for the data center. So in the last couple of years something amazing has happened. People have adopted containers, and in doing so they've realized they need to orchestrate these containers, and lo and behold, they've deployed Kubernetes. As they've done that, they've begun to recognize that Kubernetes gives them an amazing capability: they can now let everything be application-driven. Kubernetes is now the new app-defined control plane for the data center, just like VMs and VMware were the compute-centered, machine-defined data center of the past. So we're one of those modern-day companies in the modern digital transformation stack, and it's not just products like Portworx, but other products in there too, whether it's a Rancher, an OpenShift, or security solutions that are extensions of Kubernetes. So to your point, what we've done is taken Kubernetes and extended it to managing storage and data, and we're doing that in a way that allows it to be fully distributed and completely automated. In fact, what happens now is that the management of the app and the data go hand in hand, at the same time; you don't have that separation of responsibilities. So the person who is really our buyer, and our buying set, is a very different buyer than traditional storage. And you know, traditional storage, I've talked to you about that part of the business many times in the past. Our buyers are actually DevOps buyers. We land in DevOps and we expand into IT ops. Our budgets are coming from a digital transformation budget, like a move to cloud or even just business transformation. And our users are really not the classic storage users, but the people who are driving Kubernetes, the people who are making automation decisions: cloud architects, automation architects. They can now operate storage without having to know storage, through products like Portworx that extend Kubernetes and allow it all to be application-driven. >> Okay, so it's much more than just bringing state to what was originally a stateless environment; it's bringing more data management. >> Correct. >> So Charlie, connect the dots for us in terms of where Pure fits in that value chain. >> Well, as you know, we've developed a large number of products and capabilities that go well beyond storage into data management, whether it's snapshotting or replication or data motion, you know, from on-prem into the cloud. And as we've been doing that, we've been building up a control plane to do this with traditional block and file storage. Now this is extending that same set of capabilities to the container side, whether it'll be block, because there are a lot of container systems that are looking at block, but even into the object space overall. So think of this as the integration of a data management control plane for both existing and new apps, and that data control plane existing not just in one location, such as the private data center or the private cloud, but also in the public cloud as well, so that a company can orchestrate both their container-based apps, and the data that goes along with them, and the data that goes with their traditional apps, with one orchestration tool. >> So you know, when you said motion I think of vMotion, and if I want to move a workload from one VM to another I can preserve its state. Is that kind of where you're headed with this control? >> You're thinking of it very much in a push, IT-push sense, rather than the application calling for data access and being given it through a set of APIs. So again, it's a much more dynamic environment, rather than a human-instigated one. Think of it as a policy- and programmer-initiated set of activities. >> I'm glad you brought that up, because we often think in monolithic terms, and containers are not, right? It's really, like you said, we can have applications even though they run inside of VMs. >> They can, but they don't have to, right? They can run on bare metal. But of course, with VMware, they've designed it to be able to run inside of VMs as well, if that's what customers are most comfortable with. >> Sure. Murli, did you want to add some color to that? >> Yeah, I think what Charlie is describing is really kind of a new paradigm, a self-service paradigm where application
owners and application drivers, people who are creating apps and deploying apps, can now self-service themselves through a Kubernetes-based interface, and it's all automated, right? In a funny way, one way to think about this is: somebody who's deploying apps is doing that with the help of Kubernetes; their hands never leave the Kubernetes wheel. And now, all of a sudden, they're deploying data and storage and doing all of that without an intimate knowledge of the storage infrastructure. So that idea of automation-driven, app-driven self-service really enables agility for data in addition to agility at the app layer. And I think, Dave, the key thing here is why that container bubble floated to the top of the graph that you just showed. It's because modern-day enterprises are doing two things that are imperative for their success. One of them: the fast enterprises are going to eat the slow, so they need to move fast, and the way for that speed to be translated from app agility into agility throughout the whole stack is enabled by this. The other thing they are doing: data is the new oil, and folks really need to be able to leverage their data, whether it's their own data or external data, bring it all together in real time and mine it, and they can't do that without automating the heck out of it, right? And that's what Kubernetes enables also. So the combination of data agility, and the ability to mine the data in real time through an app-oriented interface, is completely revolutionary if you think about it. And in my view, going forward, what you're going to start seeing is that Kubernetes is going to start revolutionizing not just the app world but the world of infrastructure. The world of infrastructure is going to change significantly with the advent of Kubernetes being used to manage infrastructure. >> Yeah, we often say in theCUBE that data is the new development kit, and you're talking about infrastructure as code, which is the perfect instantiation here. So Charlie, I wonder, are developers sort of a new distribution channel for you? Do you see that evolving? >> Yeah, you know, we did a lot of studying before bringing the two companies together, and about 40 percent of the buyers of this Portworx environment are customers that we do talk to regularly in the IT group, and about 60 percent are in the DevOps environment. So one of the beautiful things about this is we have a good head start with the people we're selling to today, but it also opens up a whole new buying area for us with DevOps, and one that we plan on investing in as we go forward. >> So Charlie, I would imagine this is a pretty fast close, right? >> Yeah, these are two California companies, and luckily we scoot under the legal radar of HSR, so we think we'll be able to close this within 30 days. >> Great. And how will you organize it? Where's it going to sit? >> It's going to be a new business unit reporting directly to me, especially as we go through the early days of integrating. But really, we want to learn from the way that Portworx has built a successful business, make sure that we combine the best of both organizations together, and really understand how to best tie together our go-to-markets with the combination of legacy and container. >> And so, Murli, are you going to hang out for a while? >> Absolutely. You know, I was talking to my team earlier and I said, look, the journey of business success is like a thousand steps, and the startup part is only the first 250 steps. I'll tell you, I think we've run up those first 250 steps pretty fast, but we're going to sprint through the next 750 steps in the company of Pure. Because look, Pure has always been well known as a disrupter in the business, for a long time, and we are a relatively new disruptor in the Kubernetes space. I think this level of our joint ability to disrupt that market end to end is going to be just astounding, astonishing. I'm really looking forward to taking this to a greater level and accelerating our business. >> Well, Charlie, you see it in the data: pick an analyst firm, and the vast majority of new applications are being developed using Kubernetes and containers. But give us the last word, give us the summary, your final thoughts. >> You know, I think for both Pure and Portworx customers, what they're going to see is just a great marriage of two great companies. I think it's a marriage of two great technologies, and they're going to see the ability to orchestrate all of their data across their existing as well as their new application environments, and across both their development in their private cloud and the public cloud environment. So this is a great addition to the advancement that customers are seeing through orchestration: orchestration of their application environment, but just as importantly, the orchestration of their data storage and management. >> Excellent. Well, gentlemen, thanks so much for your time. Really appreciate you coming on theCUBE. >> Thank you, David. >> Thank you. >> All right, and best of luck to you both. And thank you for watching, everybody. This is Dave Vellante for theCUBE, and we'll see you next time.
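As a concrete footnote to the interview above: the "Kubernetes as the data control plane" model Murli describes means an application team requests storage declaratively, through the same API that deploys the app, rather than filing a ticket with a storage admin. A minimal sketch of such a request, modeled here as a plain Python dict in the shape of a Kubernetes PersistentVolumeClaim (the StorageClass name is invented for illustration; Portworx's real classes and parameters may differ):

```python
def persistent_volume_claim(name, size_gib, storage_class):
    """Build a Kubernetes PersistentVolumeClaim: the app-side, declarative storage request."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,  # hypothetical Portworx-style class
            "resources": {"requests": {"storage": f"{size_gib}Gi"}},
        },
    }

# A developer asks for 20 GiB the same way they would ask for a pod:
pvc = persistent_volume_claim("postgres-data", 20, "px-replicated")
```

The point of the pattern is that the storage provisioner behind the StorageClass, not the requesting developer, decides how the volume is placed and replicated.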

Published Date : Sep 16 2020

On Demand: Mirantis OpenStack on K8s


 

>> Hi, I'm Adrienne Davis, Customer Success Manager on the CFO side of the house at Mirantis. With me today is Artem Andreev, Product Manager and expert, who's going to enlighten us today. >> Hello everyone. It's great to have all of you listening to our discussion today. My name is Artem Andreev. I'm a Product Manager for the Mirantis OpenStack line of products. That includes the current product line that we have and the next-generation product line that we're about to launch quite soon. And actually, this is going to be the topic of our presentation today. The new product that we are very, very excited about, and that is going to be launched in a matter of several weeks, is called Mirantis OpenStack on Kubernetes. For those of you who have been with Mirantis quite a while already, Mirantis OpenStack on Kubernetes is essentially a reincarnation of our Mirantis Cloud Platform version 1, as we call it these days. The theme has reincarnated into something more advanced, more robust, and altogether modern, that provides the same, if not more, value to our customers, but packaged in a different shape. We're very excited about this new launch, and we would like to share this excitement with you, of course. As you might know, a few months ago Mirantis acquired Docker Enterprise, together with the advanced Kubernetes technology that Docker Enterprise provides. We made this technology part and parcel of our product suite, and this naturally includes Mirantis OpenStack on Kubernetes as well, since it is a part of our product suite. The Kubernetes technology in question we call Docker Enterprise Container Cloud these days; I'm going to refer to this name a lot over the course of the presentation. I would like to split today's discussion into several major parts. For those of you who do not know what OpenStack is in general, a quick recap might be helpful to understand the value that it provides.
I will discuss why someone still needs OpenStack in 2020. We will talk about what a modern OpenStack distribution is supposed to do to meet the expectations that are out there. And of course, we will go into a bit of detail on how exactly Mirantis OpenStack on Kubernetes works, and how it helps to deploy and manage OpenStack clouds. >> So set the stage for me here. What's the base environment we're trying to get to? >> So, what is OpenStack? One can think of OpenStack as a free and open source alternative to VMware, and it's a fair comparison. OpenStack, just as VMware, operates primarily on virtual machines. It gives you, as a user, a clean and crispy interface to launch a VM, to configure the virtual networking to plug this VM into, to configure and provision virtual storage to attach to your VM, and to do a lot of other things that a modern application requires to run. So the idea behind OpenStack is that you have a clean and crispy API exposed to you as a user, and all the little details and nuances of the physical infrastructure configuration and provisioning that need to happen for the virtual application to work are hidden, spread across the multiple components that comprise OpenStack per se. As compared, again, to VMware, the functionality is pretty much similar, but OpenStack can actually do much more than just VMs, and it does that at, frankly speaking, a much lower price, if we do the comparison. So what does OpenStack have to offer? Naturally, the virtualization, networking, and storage systems are there; that's just the basic, entry-level functionality. But what comes with it is identity and access management features, a graphical user interface together with CLI and command-line tools to manage the cloud, orchestration functionality to deploy your application in the form of templates, the ability to manage bare-metal machines, and of course some nice and fancy extras like DNS-as-a-Service, Metering, Secret Management, and Load Balancing.
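To make the "clean and crispy API" concrete: launching a VM through OpenStack's Compute service (Nova) is a single declarative REST call. A minimal sketch of the request body, built as a plain Python dict; the helper function and the example IDs are invented here, while the field names follow the commonly documented Nova create-server schema:

```python
def make_server_request(name, flavor_id, image_id, network_ids):
    """Build the JSON body for a create-server call to the Compute API (Nova)."""
    return {
        "server": {
            "name": name,
            "flavorRef": flavor_id,   # hardware profile: vCPUs, RAM, disk
            "imageRef": image_id,     # boot image managed by the image service
            "networks": [{"uuid": nid} for nid in network_ids],
        }
    }

# One declarative request; OpenStack hides the physical plumbing behind it.
body = make_server_request("web-01", "small-flavor-id", "ubuntu-image-id", ["net-a"])
```

Everything below that call, scheduling the VM onto a hypervisor, wiring the virtual network, carving out storage, is handled by the OpenStack components the user never has to touch.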
And frankly speaking, OpenStack can actually do even more, depending on the needs that you have. >> We hear so much about containers today. Do applications even need VMs anymore? Can't Kubernetes provide all these services? And even if IaaS is still needed, why would one bother with building their own private platform, if there's a wide choice of public solutions for virtualization, like Amazon Web Services, Microsoft Azure, and Google Cloud Platform? >> Well, that's a very fair question, and you're absolutely correct. So the whole trend (audio blurs) these days: everybody's talking about containers, everybody's doing containers. But to be realistic, yes, the market still needs VMs. There are certain use cases in the modern world, and actually these use cases are quite new, like 5G, where you require high performance in the networking, for example. You might need high-performance computing as well. This takes quite special hardware and configuration to be provided within your infrastructure, which is much more easily solved with VMs, not containers. Not to mention that there are still legacy applications that you need to deal with, which have just switched from server-based provisioning to VM-based provisioning, and they need to run somewhere; they're simply not ready for containers. And if we accept that VMs are still needed, why don't I just go to a public infrastructure-as-a-service provider and run my workloads there? Well, you can do that, but you have to be prepared to pay a lot of money once you start running your workloads at scale. Public IaaSes actually tend to hit your pockets heavily. And of course, if you're working in a highly regulated area, like enterprises covering (audio blurs) et cetera, you have to comply with a lot of security regulations and data placement regulations. And public IaaSes, let's be frank, are not good at providing you with this transparency.
So you need to have full control over your whole stack, starting from the hardware to the very, very top. And this is why private infrastructure as a service is still a theme these days, and I believe it's going to be a theme for at least five years more, if not longer. >> So if private IaaSes are useful and in demand, why doesn't Mirantis just stick to the OpenStack that we already have? Why did we decide to build a new product, rather than keep selling the current one? >> Well, to answer this question, first we need to see what our customers believe a modern infrastructure-as-a-service platform should be able to provide, and we've compiled this into a list of five criteria. Naturally, a private IaaS needs to be reliable and robust, meaning that whatever happens underneath the API should not impact the business-generating workloads (this is a must), or should impact them as little as possible. The platform needs to be secure and transparent, going back to the idea of working in highly regulated areas; this, again, is a table stake to enter the enterprise market. The platform needs to be simple to deploy (audio blurs), because you as an operator should not be thinking about the internals, but should focus on enabling your users with the best possible experience. Updates, updates are very important. The platform needs to keep up with the latest software patches, bug fixes, and of course features, and upgrading to a new version must not take weeks or months, and must have as little impact on the running workloads as possible. And of course, to be able to run modern applications, the platform needs to provide a set of services comparable to a public cloud, so that you can move your application across environments, private or public cloud, without having to change it significantly. The so-called feature parity needs to be there.
And if we look at the architecture of OpenStack: we know OpenStack is powerful, it can do a lot, we've just discussed that, right? But the architecture of OpenStack is known to be complex. And tell me, how would you enable robustness and reliability in such a complex system? It's not easy, right? And actually, this diagram shows just probably a third of a modern, up-to-date OpenStack cloud; it's just a little illustration, not the whole picture. So imagine how hard it is to make a very solid platform out of this architecture. Naturally, this also imposes challenges in providing transparency and security, because the more complex the system is, the harder it is to manage, and the harder it is to see what's on the inside. And upgrades, yes, one of the biggest challenges that we learned from our previous history: many of our customers preferred to stay on an older version of OpenStack just because they were afraid of upgrades, because they saw upgrades as time-consuming and risky. Instead of switching to the latest and greatest software, they preferred reliability by sticking to the old stuff. Why? Because an upgrade implied a certain impact on their workloads, and an upgrade required thorough planning and execution, just to be as riskless as possible. And we are solving all of these challenges of managing a system as complex as OpenStack with Kubernetes. >> So how does Kubernetes solve these problems? >> Well, we look at OpenStack as a typical microservice-architecture application, organized into multiple little moving parts, daemons that are connected to each other and talk to each other through standard APIs. Altogether, that feels like a very good fit to run on top of a Kubernetes cluster, because many modern applications follow exactly the same pattern.
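The microservice fit described above can be sketched concretely: each OpenStack daemon becomes an ordinary Kubernetes workload. A minimal illustration, built as a plain Python dict in the shape of an apps/v1 Deployment; the image tag, replica count, and label scheme are made up for the example and are not Mirantis's actual manifests:

```python
def openstack_service_deployment(service, image, replicas=3):
    """Express one OpenStack control-plane daemon as a Kubernetes Deployment."""
    labels = {"application": service}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": service, "labels": labels},
        "spec": {
            "replicas": replicas,  # Kubernetes keeps this many pods alive
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": service, "image": image}]},
            },
        },
    }

# Hypothetical example: the identity service API as a 3-replica Deployment.
keystone = openstack_service_deployment("keystone-api", "openstack/keystone:ussuri")
```

Once a daemon is expressed this way, restart-on-failure, scaling, and rolling updates come from Kubernetes itself rather than from bespoke tooling.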
>> How exactly did you put OpenStack on Kubernetes? >> Well, that's not easy, I'm going to be frank with you. If you look at the architectural diagram, this is the stack of Mirantis products, represented with a focus, of course, on Mirantis OpenStack as the central part. What you see in the middle, shown in pink, is Mirantis OpenStack on Kubernetes itself, and around it are the supporting components that need to be there to run OpenStack on Kubernetes successfully. On the very bottom there is hardware: networking, storage, and computing hardware that somebody needs to configure, provision, and manage, to be able to deploy the operating system on top of it. And this is just another layer of complexity that abstracts Mirantis OpenStack on Kubernetes from the underlay. Once we have the operating system there, there needs to be a Kubernetes cluster deployed and managed. As I mentioned previously, we are using the capabilities that this Kubernetes cluster provides to run OpenStack itself, the control plane, that way, because everything in Mirantis OpenStack on Kubernetes is a container, whatever you can think of. Naturally, it doesn't sound like an easy task to manage this multi-layered pie, and this is where Docker Enterprise Container Cloud comes into play, because this is our single pane of glass into day-one and day-two operations for the hardware itself, for the operating system, and for Docker Enterprise Kubernetes. It solves the need to have this underlay ready and prepared. And once the underlay is there, you go ahead and deploy Mirantis OpenStack on Kubernetes just as another Kubernetes application, following the same practices and tools as you use with any other application. Naturally, of course, once you have OpenStack up and running, you can use it to give your users the ability to create their own private little Kubernetes clusters inside OpenStack projects.
And this is one of the major use cases for OpenStack these days: again, being an underlay for containers. So if you look at the operator experience, what does it look like for a human operator who is responsible for the deployment and management of the cloud to deal with Mirantis OpenStack on Kubernetes? First, you deploy Docker Enterprise Container Cloud, and you use the built-in capabilities that it provides to provision your physical infrastructure: you discover the hardware nodes, you deploy the operating system there, you do the configuration of the network interfaces and storage devices, and then you deploy a Kubernetes cluster on top of that. This Kubernetes cluster is going to be dedicated to Mirantis OpenStack on Kubernetes itself. So it's a special, (indistinct) general-purpose thing, dedicated to OpenStack. That means that inside this cluster there are a bunch of lifecycle management modules running as Kubernetes operators. OpenStack itself has its own LCM module, or operator. There is a dedicated operator for Ceph, because Ceph is our major storage solution these days that we integrate with. Naturally, there is a dedicated lifecycle management module for StackLight. StackLight is our logging, monitoring, and alerting solution for OpenStack on Kubernetes, which we bundle together with the whole product suite. You talk to these Kubernetes operators directly, through the kubectl command or through the graphical interface that is provided by Docker Enterprise Container Cloud, to deploy the OpenStack, Ceph, and StackLight clusters one by one, and connect them together. So instead of dealing with hundreds of YAML files, it's five definitions, five specifications, that you're supposed to provide these days, and that's it. And all the day-two management is performed through these same APIs, just as easily as the deployment. >> All of this assumes that OpenStack is in containers.
Now, Mirantis was containerizing back long before Kubernetes even came along. Why did we think this would be important? >> That is true. We've been containerizing OpenStack for quite a while already; it's not a new thing at all. However, it's the way that we deploy OpenStack as a Kubernetes application that matters, because Kubernetes solves a whole bunch of challenges that we used to deal with in MCP1, when deploying OpenStack on top of bare operating systems as packages. Naturally, Kubernetes allows us to achieve reliability through the self-(audio blurs) and auto-scaling mechanisms. You define a bunch of policies that describe the behavior of the OpenStack control plane, and Kubernetes follows these policies when things happen, without any need for human interaction. Isolation of the dependencies of OpenStack services within Docker images is a good thing, because previously we had to deal with packages and conflicts between the versions of different libraries; now we just ship everything together as a Docker image. And rolling updates are an advanced feature that Kubernetes provides natively, so updating OpenStack has never been as easy as with Kubernetes. Kubernetes also provides some fancy building blocks for networking, like load balancing, and of course tunnels and service meshes. They're also quite helpful when dealing with such a complex application as OpenStack, where things need to talk to each other without any problem in the configuration. Helm also plays a great role here; it is effectively our tool for Kubernetes. We're using the Helm bundles that are provided for OpenStack upstream as our low-level layer of logic to deploy OpenStack services and connect them to each other. And naturally, there's automatic scale-up of the control plane.
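The policy-driven self-healing and scaling described above all reduces to one Kubernetes idea: a controller repeatedly compares declared state with observed state and emits corrective actions. A toy reconciliation loop; the pod model and action tuples are invented purely for illustration:

```python
def reconcile(desired_replicas, running_pods, service):
    """Return the actions a controller would take to converge observed state to desired state."""
    actions = []
    alive = [p for p in running_pods if p["healthy"]]
    # Replace crashed pods first (self-healing)...
    for pod in running_pods:
        if not pod["healthy"]:
            actions.append(("delete", pod["name"]))
    # ...then scale up or down to the declared replica count.
    diff = desired_replicas - len(alive)
    if diff > 0:
        actions += [("create", f"{service}-new-{i}") for i in range(diff)]
    elif diff < 0:
        actions += [("delete", p["name"]) for p in alive[:-diff]]
    return actions

# One unhealthy pod, three replicas desired: delete the bad one, create two more.
acts = reconcile(3, [{"name": "api-0", "healthy": True},
                     {"name": "api-1", "healthy": False}], "api")
```

A real controller runs this comparison in a loop against the cluster API, which is why no human has to react when an OpenStack control-plane daemon crashes at 3 a.m.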
So adding a new node is easy: you just add a new Kubernetes worker with a bunch of labels there and, well, it handles the distribution of the necessary services automatically. Naturally, there are certain drawbacks. These fancy features come at a cost. Human operators need to understand Kubernetes and how it works. But this is also a good thing, because everything is moving towards Kubernetes these days, so you would have to learn it at some point anyway. So you can use this as a chance to bring yourself to the next level of knowledge. OpenStack is not a 100% cloud native application by itself. Unfortunately, there are certain components that are stateful, like databases, or Nova compute services, or Open vSwitch daemons, and those have to be dealt with very carefully when doing upgrades, updates, and the whole deployment. So there's extra life cycle management logic built in that handles these components carefully for you. So, a bit of complexity we had to have. And naturally, Kubernetes requires resources itself to run. So you need to have these resources available and dedicated to the Kubernetes control plane, to be able to control your application, that is, OpenStack and the rest. So a bit of investment is required. >> Can anybody just containerize OpenStack services and get these benefits? >> Well, yes, the idea is not new, there's a bunch of upstream, open, sorry, community projects doing pretty much the same thing. So we are not inventing a rocket here, let's be fair. However, it's the way that Kubernetes cooks OpenStack that gives you the robustness and reliability that enterprise and, like, big customers actually need. And we're doing a great deal of a job automating all the possible day-two workflows and all these caveats and complexities of OpenStack management inside our product. Okay, at this point, I believe we shall wrap this discussion up a bit. So let me conclude for you.
So OpenStack is an open source infrastructure-as-a-service platform that still has its niche in the 2020s, and it's going to have its niche for at least five years. OpenStack is a powerful but very complex tool. And the complexities of OpenStack and OpenStack life cycle management are successfully solved by Mirantis, through the capabilities of a Kubernetes distribution that provides us with all the necessary primitives to run OpenStack as just another containerized application these days.
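The "five specifications instead of hundreds of YAML files" point above can be sketched as a single high-level custom resource that a lifecycle-management operator consumes. To be clear, the API group, kind, and field names below are hypothetical stand-ins invented for the sketch, not the actual Mirantis CRD schema.

```python
# Sketch of a declarative, operator-consumed specification: one small
# document describing the whole cloud, instead of hundreds of YAML files.

import json

openstack_deployment = {
    "apiVersion": "example.mirantis.com/v1alpha1",  # hypothetical group/version
    "kind": "OpenStackDeployment",
    "metadata": {"name": "cloud-1"},
    "spec": {
        "preset": "compute",              # which OpenStack services to enable
        "size": "small",                  # control-plane sizing profile
        "storage": {"backend": "ceph"},   # delegated to the Ceph operator
        "stacklight": {"enabled": True},  # bundled logging/monitoring/alerting
    },
}

# kubectl accepts JSON manifests as well as YAML, so a spec built this way
# could be piped straight to `kubectl apply -f -`.
manifest = json.dumps(openstack_deployment, indent=2)
print(manifest)
```

The operator then expands this handful of fields into the many low-level objects (deployments, config maps, Helm releases) that actually run the cloud, which is exactly the day-two advantage described in the talk.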

Published Date : Sep 14 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Adrienne Davis | PERSON | 0.99+
Artem Andreev | PERSON | 0.99+
2020 | DATE | 0.99+
five specifications | QUANTITY | 0.99+
five definitions | QUANTITY | 0.99+
Mirantis | ORGANIZATION | 0.99+
100% | QUANTITY | 0.99+
OpenStack | TITLE | 0.99+
hundreds | QUANTITY | 0.99+
Ceph | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.98+
today | DATE | 0.98+
One | QUANTITY | 0.98+
five criteria | QUANTITY | 0.98+
first | QUANTITY | 0.98+
Kubernetes | TITLE | 0.97+
2020th | DATE | 0.96+
one | QUANTITY | 0.95+
Google | ORGANIZATION | 0.93+
MCP1 | TITLE | 0.92+
two | QUANTITY | 0.92+
Mirantis OpenStack | TITLE | 0.91+
Mirantis OpenStack | TITLE | 0.91+
YouNote | TITLE | 0.9+
Docker Enterprise | ORGANIZATION | 0.9+
Helm Bubbles | TITLE | 0.9+
Kubernetes | ORGANIZATION | 0.9+
least five years | QUANTITY | 0.89+
single | QUANTITY | 0.89+
Mirantis OpenStack on Kubernetes | TITLE | 0.88+
few months ago | DATE | 0.86+
OpenStack on Kubernetes | TITLE | 0.86+
Docker Enterprise | TITLE | 0.85+
K8S | TITLE | 0.84+

Kit Colbert, VMware | VMware Cloud on AWS Update


 

(soft music) >> Narrator: From theCUBE studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a theCUBE conversation. >> Hi, I'm Stu Miniman. And we're digging in with VMware on the latest update of VMware Cloud on AWS, definitely a technology solution set that the ecosystem has been very interested in. And to help us do that deep dive, happy to welcome back to the program Kit Colbert. He is the Vice President and CTO of the cloud platform business unit at VMware. Kit, thanks so much for joining us. >> Thanks for having me Stu. >> All right, so you brought along some slides, and if people want to watch, we've done an executive interview to give kind of the general business update. But when it comes to the technology, you know, I guess we start with the VMware, Amazon partnership. It is a deep integration; we've heard both from Andy Jassy and from Pat Gelsinger how much engineering work went in and how critically important it is. Anybody from the technical side understands that one of the interesting things in cloud is that Amazon created bare metal instances to support this solution. So one of the items here is that there is a new bare metal instance. So why don't you bring us inside, you know, what the updates are and what this means to the user base? >> Yeah, absolutely. Yeah, so the bare metal support is something that we worked very closely with AWS on when we were first launching VMware Cloud on AWS. And the idea there is that the bare metal support very similarly models EC2 virtual machines, in the sense that each of these VM types, or instance types as they say, comes in various kinds of t-shirt sizes, right? And so they have a lot of these different instance types. And so similarly speaking, on the bare metal side, we're also seeing a lot of different instance types there. So we started out with an i3.metal instance, then we added an r5.metal instance, and now we're really excited to add what we're calling i3en.metal.
And so let's bring up the slide to talk more about all the new capabilities there with i3en. You know, what we have found when we talk to customers is that they love the simplicity of the hyper-converged model that i3 brings. What they said was, hey, we've got a lot of workloads that are storage capacity bound. And so that meant, you know, the issue there is these workloads use some amount, usually a good amount, of CPU and memory, but they have a lot of storage capacity requirements. What that meant with i3 is they had to get a lot of these i3 hosts to get enough storage capacity to support those workloads. And obviously, they'd have some extra compute capacity lying around. And so, you know, what we've done here with i3en is dramatically increase the amount of storage capacity. So we can see here, what is it, about 45 terabytes or so, so much, much larger, about four x larger, than what you can get on i3.metal today. So this is, again, very targeted to those very large workloads that need a beefy underlying server, and it's just trying to better align the customer needs and workload needs with the underlying physical capabilities. And so this is just going to be one of many that we'll bring out. We've got a whole pipeline of these, actually. And, you know, again, you can imagine all the different types of VM instance types, right? There's GPU ones, there's FPGA based ones, you know, so there's all sorts of different shapes and sizes. And, you know, as we get more and more feedback from customers, as they're running more and more applications, we'll get more and more of these instance types out there as well. >> Yeah, it's really interesting, Kit, it gives me flashbacks. I'm thinking back to, you know, 10 or even 15 years ago, when you talked in the early days of, do I just deploy VMware on the servers I had? Or do I buy servers that had the configuration, so I could optimize and take advantage of the feature functionality that's needed?
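The host-count arithmetic behind that storage-bound point can be sketched quickly. The per-host capacities below are rounded from the conversation (about 45 TB per i3en host, roughly four times an i3 host) and the workload size is made up; none of these are official AWS or VMware figures.

```python
import math

def hosts_needed(storage_tb: float, per_host_tb: float, min_hosts: int = 3) -> int:
    """Hosts required to cover a workload's raw storage demand.

    min_hosts reflects the default three-node cluster mentioned later.
    """
    return max(min_hosts, math.ceil(storage_tb / per_host_tb))

workload_tb = 180  # hypothetical storage-capacity-bound workload

# With the smaller hosts you buy storage by buying many whole servers,
# leaving idle CPU and memory; a denser host collapses the count.
print("i3 hosts:  ", hosts_needed(workload_tb, per_host_tb=11))  # 17
print("i3en hosts:", hosts_needed(workload_tb, per_host_tb=45))  # 4
```

This is exactly the "extra compute capacity lying around" effect Kit describes: the 17-host cluster is sized by disks, not by the CPU the workload actually needs.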
All right, when I heard some of the things you talked about there, about, you know, being able to use certain workloads and the like, one piece of feedback I've gotten from users is, you know, the overall price of this; let's just say it's not the least expensive solution to start with. So what are some of the new entry-level options that you have with VMC on AWS? How does this update help? >> Yeah, yeah, first of all, on the price side, what we have found is that this is actually extremely price efficient, price competitive, if you're able to utilize all the underlying physical and variable capacity. But you know, as you just mentioned, Stu, you know, the default configuration is three nodes of those i3 hosts, and those three hosts aren't small either, right? They're pretty beefy, and maybe you just want to get started, just try something small. Well, today we do actually have a one-node instance. But that one-node instance is just a temporary, kind of a testbed, if you will, a proof-of-concept type of environment. It's not a long-term, long-running production environment. And so customers kind of have this one node on the one hand or three nodes on the other and, you know, obviously they're saying, "Hey, why can't we just start with two nodes, make it super simple, reduce that price point again for a very small footprint deployment, and then allow us to scale up." So if we bring up the next slide, what you can see is that that's exactly what we've done here as well, supporting two nodes now. And the idea here is this is a full production environment. You get all the great VMware technology, you can do vMotion, HA, you get availability and so forth, storage policies, as you see here. So again, this is meant to be a long-lived, fully supported production environment that can also scale up if need be, right? You might start out with two nodes, but then find, "Hey, I want to add three or four or more."
And you can certainly fully do that, and we fully support that. So again, this is just giving customers more optionality, more flexibility for where they want to come in. What we've been doing thus far is talking with a lot of customers that had, you know, pretty large footprints, saying, "Hey, I want to move a good chunk of my data center, or I've got a lot of workloads I want to burst." And in those cases, three or more nodes made a lot of sense. What we're finding now is that a lot of customers do want that flexibility to start smaller, just with two nodes, really simple, kind of put their toe in the water, if you will, and get a feel for the service, and then expand from there. >> Yeah, okay, Kit, one quick follow-up on this. You mentioned that if customers are maximizing, you know, leveraging the full environment they have there, it's very cost competitive. You know, what are we hearing from customers? What is their growth pattern? Are they getting good utilization? Do they have a good feel for how to manage the economics? In the AWS space there's a lot of talk about things like FinOps these days, and about how to make sure that the technical group and the financial group are working close together. >> Yeah, such a great question, actually. And the whole notion of the economics around this is a huge focus area for us. We have a whole Cloud Economics group, as a matter of fact, that we frequently bring in to talk with customers, to help them think through all these different things. There's a number of different considerations there. You know, a lot of them look at going from on-prem into the cloud, to VMC on AWS. And, you know, with VMC on AWS, or just public cloud in general, it's very easy to understand the price, 'cause it's right up front; you see what you're getting charged, right?
On-premises it's a bit more difficult to understand: you've got a lot of capital expenses, you've got a lot of other sort of operational expenses, you know, power, electricity, people, and how do you make all the right computations there? So we have whole teams to help people think through that. But usually, what we have found is that price is not the main thing, right? Price is kind of a secondary or tertiary type of consideration. The main thing is always one of our primary use cases. It's like, man, I need to get out of my data centers, or my data center is at capacity, I want to keep it but I really need to be able to burst to the cloud, or maybe some sort of test/dev, like test in the cloud and production on-prem, or vice versa. Those are the key use cases that bring customers in, and then it's really a question of, okay, now that you know you want to do this, how do we do this as effectively and efficiently from a cost perspective as possible, right? And that's where that sort of economic discussion starts to happen. And then you get into more of the details like, okay, which kind of instance type do I want? What are the cost metrics of that? Can I actually fill it to capacity? That's where we start getting into those more specific situations for each customer. >> Excellent. That really tees it up for me, Kit. When I think about the, you know, early customers that I've talked to that are using VMC on AWS, they tend to be your enterprise customers; they're big VMware customers, they've got enterprise license agreements and the like. VMware has got a strong history working across the board. And when you talk about cloud in previous solutions, you've had close partnerships with the managed service providers. >> Yeah.
So my understanding is you're actually looking to help connect between what you've done with managed service providers in the past and this VMware on AWS solution. So bring us inside, you know, this option. >> Sure. Yeah, let me break it down for you, 'cause we do work with a lot of partners. You know, obviously from VMware's inception, partners have been, you know, core to our strategy and core to our success, right? What we've actually been doing, somewhat quietly, over the past 15 years anyway, is building out what we call our VMware Cloud Provider Partner program, the VCPP program. And, you know, the idea there is that we do have a lot of these managed service providers that can take our software and run it on behalf of their customers, essentially delivering our software as a service to their customers. And that's been great. We've seen a lot of success stories there. And we have about 4200 of these folks now, like a tremendous amount, spread all around the world, all sorts of different geographies, and also all sorts of different industry verticals. And so you see a lot of these folks getting really specific, you know, let's say to the finance vertical, you know, in and around Wall Street, running all sorts of great services for the financial services firms. Well, these folks are looking to evolve as well, and what they're saying and seeing is like, hey, you know, just this basic idea of running infrastructure, well, I can do that, but it doesn't necessarily differentiate me, right? I need to move up the stack and start offering more services, and really try to be a very, you know, sort of boutique and targeted solution for their customers. And so a lot of these customers, you know, obviously want to run on VMC on AWS. And so what we've been doing is enabling these partners to, you know, essentially sell through VMC on AWS, to sell these services to their customers.
But one of the challenges there is that they were only able to sell the full sort of bare-metal server; they weren't able to break that up or split that across customers as they can do today within their own environments. In fact, today, within their environments, they use something called VMware Cloud Director. And this is software that we give them, and, you know, it's really nice in that you can take a vSphere environment, a software-defined data center, and break it apart, or kind of carve it up, if you will, into multiple smaller tenants that, you know, each of these customers can take part of. But we didn't have that functionality for VMware Cloud on AWS. And so that's what the announcement is all about, so let's pull up the slide to talk about that. The basic idea here is we can now enable the same software-defined data centers that are running inside of AWS as part of VMware Cloud on AWS to be accessed by VMware Cloud Director. And so what we've done is actually made, we call it VCD for short, made VCD a service that we now operate, and it runs there alongside VMC on AWS. And so now these managed service providers can leverage VCD as a service to roll out access and carve up these SDDCs that they get. And, you know, the takeaway here is that we're just giving these partners much greater flexibility and optionality in terms of how they consume the underlying bare-metal infrastructure on VMC on AWS, and then give that out to their own customers, again giving greater customer choice and options to those customers. >> All right Kit, so the other big thing that we've covered this year with VMware, of course, is the launch of vSphere 7, what that means for the cloud-native space, the whole Tanzu portfolio line. So help us understand how all the application modernization, Kubernetes and the like, ties into the solution that we're talking about. >> Yeah, absolutely. It's a huge focus for us, as you know. Yeah, we launched Tanzu last year at VMworld.
And we then launched the product set earlier this year, when it finally went GA. There's great customer interest and customer traction there. And obviously, one of the big questions people had was like, "Hey, how can I get this for VMC on AWS?" And so the specific product they were looking at there was called Tanzu Kubernetes Grid. And the idea with Tanzu Kubernetes Grid is that it enables a customer to provision and manage Kubernetes clusters across any cloud, right? You can do this on AWS, you can do this on-prem on vSphere, or other clouds and so forth. And so obviously, this technology needed to come to VMC. You know, the thing we talk about with customers, when it comes to VMC on AWS, is this notion of migrate then modernize: that we can migrate you off of your on-prem infrastructure to this modernized cloud infrastructure that is VMC on AWS. And once you have that modernized infrastructure, it makes it much easier to modernize your applications. You've got all sorts of great AWS services sitting there, so now the application itself can start taking advantage of all these things, as well as these new types of capabilities. So let's pull up the slide for this one. What we're announcing here is Tanzu Kubernetes Grid Plus on VMC on AWS. And what this gives you is all that great functionality, the ability to get Kubernetes seamlessly running on top of your VMC environment, right next to all of your existing apps. So it's not one of those situations where you need, you know, separate clusters or different environments. You can have a single environment that has both your traditional applications and your more modern ones. And Tanzu Kubernetes Grid takes care of all the management of that Kubernetes environment. It ensures that it's up to date and properly lifecycle managed, it handles security, you get a container registry there, it can elastically scale based on demand. And of course, you get all that great consistency as well.
And we do have a lot of customers that are multicloud, that are doing things across different environments. And so TKG can replicate itself and give you that consistent management across any of those environments, on-prem and in the cloud, and between clouds. So that's really what the power of this is. And again, it's really taking VMC from just being a platform for migrating your existing workloads to really being a platform for modernizing those workloads as well. >> Yeah, it's interesting, Kit, you know, when I think about traditional VMware, it was, you know, let me take my app and I'm going to shove it into a VM and I'll never think about it again. So what's the change in mindset? How do you make sure that it's not just, you know, stick it in there and forget about it, but, you know, it can move and change? Which is, you know, really the call for today: I need to be more agile, I need to be able to respond to change. >> That's a great question. And we actually spend a lot of time talking about this with customers. So if we take a step back, you know, it's important to understand the traditional journey most customers are looking at when they're moving to the cloud. I talked about this notion of migrate then modernize. Oftentimes, you know, before the advent of VMC on AWS, you didn't have the ability to take those two apart; you had to migrate and modernize simultaneously. In order to move to the cloud, you actually had to do a bunch of refactoring and retooling and so forth to your application. And obviously, that created a lot of challenges, because it slowed how quickly customers could move up to the cloud. And so what we've done, which I think is really, really powerful, is break those two apart. To say, you know what, you may have a business imperative to get out of the data center; we can help you do that. We can move, you know, some customers move hundreds of workloads a week, up to VMC on AWS.
And then once you've done that, you now have a little bit more breathing room, right? You've gotten out of your immediate business problem, let's say in this case closing that data center. And now you can sort of focus on, okay, how do I think about modernizing these applications? How do I think about the entry points to opening them up and actually getting inside of them? And so I think, you know, the most valuable aspect of the approach that we've taken here is that ability to separate out those two, to get the quick business wins that you need, and then to take the time to think about, okay, how do I actually modernize this? How do I want to? What sorts of technologies do I want to use? How should I do this right, rather than just needing to do this quickly? And so I think that's a really, really powerful aspect of our approach, in that we can give customers more optionality in terms of how they approach their modernization efforts. >> Yeah, so Kit, the final question I have for you: the VMware AWS partnership has been around for a couple of years now. >> Yeah. >> What would you say is the biggest change technically, from when the solution was first announced to where we are today, with all the new updates that you've talked about? >> Yeah, that's a great question. Look, it's hard to pick one, right? I think the biggest thing in general is just the increasing maturity of this offering. And that goes really across the board: technical maturity, operational maturity, compliance certification maturity, right? Getting more and more of those under our belt. Global reach maturity, right? We started off in one region, but now we're all over the world, pretty much every region that AWS has. You see more and more features, you know, we're constantly releasing new features, new hardware types. And so I think that's really the biggest thing.
It's not been like one singular thing; what it has been is just a lot of work by the team across 1000 different areas, moving all those in parallel. And that's really been the heavy lift that we've had over the past few years. You know, as we talked about, it was a lot of work just to get this thing out in the first place, right? We had to do a lot of technical work with AWS to enable this bare-metal capability. And so we got that one out, we got it out and had that initial service. There had been a lot of limitations, right? We just had one instance type, only one region, you know, didn't have as many compliance certifications. So obviously that limited the number of customers initially, right? Just because there were some restrictions around that. So our goal has really been to open this up to as many customers, in fact, every customer, all of our 500,000-odd vSphere customers, to be able to move to VMC on AWS. And so we're, you know, slowly but surely, every month, knocking down more and more barricades to that, right? And so what you're seeing is just a tremendous explosion of innovation and effort across the entire team. And so it's really, it's kudos to the team for their continued effort, day in, day out, these past three years or so, to get VMC on AWS to where it is today. >> Excellent, well, thank you so much, Kit. Great to talk to you. Congratulations to the VMware and AWS team. And of course, looking forward to talking to more of the customers down the road, as they take advantage of this, hopefully at VMworld, and some of the Amazon shows too. Thanks so much for joining us, Kit. >> Thank you Stu. >> All right, stay with us for lots more coverage. Of course, VMware Cloud on AWS is a really exciting and interesting topic we've been covering since day one. I'm Stu Miniman, and thank you for watching theCUBE. (upbeat music)
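The Cloud Director carve-up described in the interview, one pool of bare-metal capacity split into per-tenant slices instead of selling whole servers, can be modeled as a toy allocator. The class, tenant names, and core counts below are all invented for illustration; the real service obviously enforces far more (isolation, networking, catalogs).

```python
class HostPool:
    """Bare-metal SDDC capacity (in cores) carved into per-tenant slices."""

    def __init__(self, total_cores: int):
        self.total_cores = total_cores
        self.tenants: dict[str, int] = {}

    def free(self) -> int:
        return self.total_cores - sum(self.tenants.values())

    def carve(self, tenant: str, cores: int) -> bool:
        # Refuse any slice the remaining pool cannot cover.
        if cores > self.free():
            return False
        self.tenants[tenant] = self.tenants.get(tenant, 0) + cores
        return True

pool = HostPool(total_cores=96)           # one hypothetical bare-metal host
pool.carve("finance-co", 48)
pool.carve("retail-co", 32)
print("free cores:", pool.free())                      # 16
print("oversubscribed?", pool.carve("media-co", 64))   # False
```

The point of the announcement is exactly this shape: one physical footprint, many tenants, with allocation enforced in software rather than by buying a whole host per customer.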

Published Date : Jul 15 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Andy Jassy | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Kit Colbert | PERSON | 0.99+
Pat Gelsinger | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
last year | DATE | 0.99+
Stu Miniman | PERSON | 0.99+
VMC | ORGANIZATION | 0.99+
Vmworld | ORGANIZATION | 0.99+
three | QUANTITY | 0.99+
500,000 | QUANTITY | 0.99+
two | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
three notes | QUANTITY | 0.99+
1000 different areas | QUANTITY | 0.99+
four | QUANTITY | 0.99+
one region | QUANTITY | 0.99+
today | DATE | 0.99+
Kubernetes | TITLE | 0.99+
vSphere 7 | TITLE | 0.99+
GA | LOCATION | 0.98+
Tanzu Kubernetes Grid | TITLE | 0.98+
first | QUANTITY | 0.98+
both | QUANTITY | 0.98+
each | QUANTITY | 0.98+
one instance | QUANTITY | 0.98+
vSphere | TITLE | 0.98+
one | QUANTITY | 0.98+
Boston | LOCATION | 0.98+
VMware Cloud Director | TITLE | 0.97+
this year | DATE | 0.97+
one singular thing | QUANTITY | 0.97+
VMware Cloud | TITLE | 0.97+
two nodes | QUANTITY | 0.96+
Kit | PERSON | 0.96+
hundreds | QUANTITY | 0.96+
three hosts | QUANTITY | 0.96+
about 4200 | QUANTITY | 0.95+
VMC | TITLE | 0.95+
each customer | QUANTITY | 0.95+
three nodes | QUANTITY | 0.95+

Daniel Fried & David Harvey, Veeam | VeeamON 2020


 

>>From around the globe, with digital coverage of VeeamON 2020, brought to you by Veeam. Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of VeeamON 2020 online. I'm really happy to welcome to the program, we had done this event live for many years, first time doing it online, and we have two first-time guests. In the center square, we have Daniel Fried. He is the GM and Senior Vice President of EMEA and the head of worldwide channels. Sitting on the other side of the screen is David Harvey. He's the Vice President of Strategic Alliances. Both of them, of course, with Veeam. Gentlemen, thanks so much for joining us. >>Thank you. >>All right, Daniel, maybe start with you. Uh, you know, the online event, obviously, uh, you know, there are some challenges, but there are also some opportunities. Rather than, you know, thousands of us gathering in Las Vegas, where, right, there's a diversity of locations if you look up and down the Strip, um, instead we really have a global event and an opportunity. I'm speaking to you where you are, in Asia right now. What does, you know, the online event mean? And, you know, how many countries do you have attending the event? >>Okay. So the good thing about being online is, as you mentioned, as you said, is we can have all people from all countries, all around the world, present. Of course we are, surely, uh, now with my responsibility, my worldwide responsibility for the channels, uh, in all countries in the world; we have partners in all countries in the world, which means that all our teams, as well as all our partners, can join virtually, without the limits, uh, of joining that event in person. That's why I'm very, very happy to have these virtual events, which is much easier, rather than having all the people trying to fly in from all the different parts of the world, right?
And David, you know, also from an alliances standpoint, I assume since, you know, they don't actually have to fly to Vegas, we've got the special guest appearances by Satya Nadella, uh, you know, Arvind Krishna, you know, Andy Jassy, you know, everyone's coming in. But no, in all seriousness, from an alliance standpoint, uh, you know, we'd love to hear how you're working with them, uh, for the global event. >>Yeah, no, absolutely. And security is having a tough time keeping them at bay right now. I mean, the online thing is handy because we can just cut them off, but, uh, yeah. But you're exactly right. The support of the alliances has been fantastic. Uh, everyone is trying to adjust to this new world we're in, but what you're seeing this week, um, is fantastic support from the alliances, all wanting to make this really work, and we're doing the same for their events. And it's just a really nice camaraderie that is coming together. And so, um, they've been great in supporting us, as you've seen through the week. Um, and we're excited about the whole vibe and the commitment that we're getting from the customers and from the alliances, which is really, really good. >>Excellent. Well, we know that, you know, Veeam is a hundred percent partner focused. Daniel, maybe let's start with you, uh, you know, what's new in the last year?
>>So since we were together last year, on the new things that we have been doing for the last year: it's actually, first, continuing the move with our hundred percent channel model, uh, since the beginning of Veeam, and all the way through the distributors. But more important even than that is definitely the move that we see, uh, in working with resellers, uh, and their partners, as well as working much more with the service providers, meaning the cloud service providers, where there is a big, big trend now in the market, with customers requesting more and more services rather than, I would say, technologies and products on premise. Uh, so we see that everywhere around the world. It is actually rising now, again, with the new situation that we see. Well, why? Because of this, you know, pandemic situation, uh, where virtual is a big move that we can see from customers, and the partners that we have, the ecosystem that we've built, um, all around the world, is helping very much in this move. >>Excellent. And David, would love to hear the progress that, uh, your group has made with some of the partners. >>Yeah, absolutely. I mean, it's been a really exciting ride; uh, year-over-year growth rates with the alliances continue to shoot up, which we're really excited about. Um, the v10 launch was fantastic for us, for most of our major strategic alliances. So we're really pleased about that. And a lot of our technical alliances as well, they really benefited from some of the new capability coming out there. So what we're seeing is not only are we seeing our go-to-market be enriched more and have a lot of success with the strategic alliances, the technology alliances are really starting to benefit from some of that new innovation that just came out as well. And with the global systems integrators, we've seen a massive uptick in interest in the last couple of quarters. And that's really helping to (indistinct) as well.
So yeah, it's been a really exciting year. And certainly when you do these types of events virtually, the LinkedIn messages, IMs, and text messages go through the roof, which is a nice way to keep communication going with the alliances. >>Yeah. David, I'd like to just drill in a little bit on some of the pieces that you're talking about there. I really feel in the last year we saw a real maturation in what we talk about with hybrid cloud and multi-cloud. I know one of the key strategic alliances, actually from day one for Veeam, is VMware, and every time I saw an announcement of some of the VMware pieces, I usually felt like there was soon after a Veeam piece of it. Could you bring us inside a little bit, especially some of the cloud pieces, and maybe how Veeam differentiates from some of the competition out there, both VMware >>and, you know, Amazon, Microsoft, and that whole ecosystem. >>Yeah, absolutely. I mean, as you touched on, VMware and us have been very close throughout this process, and we're really excited about some of the recent work that has been going on with them as well. We have also made tremendous steps forward with Amazon; that continues to be a strong area. And with Microsoft as a cloud, the way that we continue to enhance how we work with their solutions is really providing great strides forward, especially for the enterprise customers. We also were excited about the recent announcement related to Google Cloud as well, so that's another big area for us, and another thing that continues to differentiate us. And what I would say overall, though, is that it's about having that philosophy, as customers continue to have their philosophical view related to on-premises cloud and off-premises cloud.
>>What we're showing is that whether it's through the hardware partners, whether it's through the application partners, or through the cloud, we're enabling you to decide your workloads. And I think that's a little bit different from some of the others that are out there. Taking that heritage we have in the virtual world, and that mentality certain IT departments have, enables us to really synergize with those different partners as they go through their evolution. Certain customers move more toward the public cloud, and then you might see them move some workloads back to the private cloud; that synergy between all of those areas is hugely important. And even the hardware partners that we have now have cloud plays complementing some of their solutions as well. So it's a really heterogeneous world, and we're really pleased with the way that the market is accepting it. >>Yeah. And Daniel, this move and maturation of what's happened in the cloud has a significant impact on the channel. I'd love to hear anything specific from your viewpoint on the channel as to how your partners are now adjusting to that, with VMware, Microsoft, and some of the other pieces, and how they are now ready to help customers through these transitions. >>Yeah. And let me make one remark first, which is very important. First of all, Veeam is not a cloud provider and will not become one. In other words, the idea is that we will never compete with our partners, never. We provide technology which is used by our partners: a number of them use that technology to provide services, and a number of them are using this technology to resell, or to implement some additional services for their customers. And this is a key, key element. We're not there to do anything in competition.
We are here to complement, and to leverage as much as possible all our partners, as much as we can. They know the market very well, they know very well how things are moving, they know very well what they can do and what they cannot do, and what their customers want. >>So the big, big move that we see in the market is how everyone is moving more and more, as I said initially, to the cloud, to providing cloud services, whether it's multi-cloud or hybrid cloud; as you listed them, we have all different types of scenarios. And a very interesting thing is us helping them, educating them on how to use our technology to be able to provide services and capabilities to their end customers. So we have big, big investments in this enablement, in what we call sales acceleration, because it's all about business, helping our partners to get there and to move as fast as possible. Again, there is a big need, a big request from the end customers, and the partners understand that and have to move very quickly to this new world of services. >>And we are there to help and support, because strategically we know that this is the way forward, not only for Veeam but for the entire market. >>Yeah. And Daniel, an important point: I don't think anybody who understands the channel, or the value proposition of Veeam, thinks this doesn't matter. What I'm curious about from your standpoint is, what was the impact of the acquisition? Obviously there were some management changes there. I'm curious what feedback you've gotten and how that impacted the channel. >>Yeah. I mean, let's be open; as you know, transparency is, I hope, one of our qualities at Veeam, in the way we communicate with the world, and especially with our partners.
So initially the feedback that I had with a number of partners was a little bit of worry: >>what is going to happen? What is next? Are we going to lose the Veeam culture? Are we going to go through a number of changes eventually in the strategy of Veeam? And actually I have to say that I'm extremely comfortable in my regular communications and connections with Insight Partners, who have acquired Veeam Software, because they think that the strategy that we had, and the strategy that we have now, is the strategy they want us to keep on executing, because it is a successful strategy. And by the way, when we look at the data that we got from the market, from IDC, that was out lately, we see that Veeam is the number one all around the world compared to all the other vendors doing the same kind of technology. >>That means that it is a successful strategy: going with the partners and through the partners is a very successful strategy, and Insight Partners understands that extremely well. And I feel very comfortable with our future. >>Well, I do think we need to make sure that Veeam gets a little bit more green into your home environments there, because normally when I'm doing an interview with Veeam, I'm expecting a little bit more of the green in there. David, obviously, on the strategic alliances, some of those executive relationships: bring us in a little bit. As Daniel was saying, there was a little bit >>of trepidation at the start. From the alliance standpoint, what's transpired?
It's one of those things; it's a really unexciting answer, because it's a simple answer: calmness. Within 24 hours, once we announced it, my call sheet was pretty, pretty empty, for the simple reason that we had spoken to everybody very quickly, and the resonant feedback was: that's great news. We know Insight. We trust Insight. We're glad it is a growth play. Also, it clears up the future, because obviously, when you have strategic alliances, it's always in the back of their mind wondering when one of your competitors is going to come in and acquire you guys. The uniform feedback was: this is fantastic, this is exactly what we wanted to see. >>You provide clarity to our partnership, you can continue to invest and grow, which you've demonstrated for years, and you can move that forward for the next few years. But also, more importantly, this enables us to feel even better about doubling down on Veeam. And frankly, while we haven't had any issues, I'm sure a lot of the viewers out there have been through events like this and seen that sometimes they can be crazy. As Daniel was pointing out, the strategy hasn't changed, we're executing, and we've got the support of the strategic alliances, both at the executive level and the day-to-day level. They're leaning in more and more, pleased that we're executing on our strategy, focusing in the US with a big push, bringing the investment, moving forward, stabilizing the leadership team. Overall, it's just been fantastic. >>Yeah, it's a really unexciting soundbite answer, but that calmness and clarity have been a real takeaway. Excellent.
Well, one of the key messages in the keynote, of course, was digital transformation. We'd love to hear from both of you what you're seeing and hearing, and how Veeam's message is engaging with both partners and ultimately the end user itself. Daniel, maybe we'll start with you on that. >>Yeah, thanks for asking. It usually always comes from the end customers and their needs, and we all know that the need for data is growing exponentially. That is why we can't do things manually anymore; it has to be digitalized everywhere. The very interesting thing is that this is not only something that the end customers express; we see it more and more because it's an absolute need. When partners are providing services, whether online services or even products, they have digitalized themselves as well, and they are doing it at very, very high speed. I mention that because I'm extremely pleased with the ecosystem of partners that we have, >>because they understand very well how the market is evolving. It's not only about the customers; it's also about themselves. They are evolving into the 21st century, with the digitalization of all their processes and the way they work with their customers. It's definitely one of the key elements which is going to be extremely good for the future. Because of all these moves, in a very positive, dynamic way, there is no reason why we should change our strategy: remaining, as I said, alliances and partners first, continuing to drive and build the ecosystem. It's absolutely key for the success of everyone, including Veeam. And David, please, from the alliance side.
Yeah, and as I'm sure you'll notice, we're in a fortunate situation: we probably both get to sit through all of the strategies that a lot of the titans of the industry are focused on right now, and the ecosystem we have on the alliance side, that rich tapestry from the very large to the very small, is focused on that digital transformation. >>And I think the good news from my point of view, and I'm going to touch on one of the points Daniel mentioned before, is that we don't compete with them. Everywhere, we've got a piece of the strategy that they're looking for; the criticality of data to this transformation is huge, as everybody knows. And what we're finding right now is that the approach that we take, the approach of focusing on doing what we do extremely well, is synergizing with the evolution the customer is seeing as they go through that transformation. Transformation is sometimes scary; it sometimes brings nervousness, and they want to do it with their thought leaders: the VMwares, the Microsofts, the HPEs, the NetApps, et cetera. And so, from that point of view, the fact that we can provide them with that peace of mind for the complete solution has been fantastic. >>So, you know, when you look at 75-plus partners, there's always going to be one where you need to thread the needle, shall we say, on exactly where intellectual property provides that value to them. But the good news is we don't have to spend a lot of time on that, because we're clear, we're concise, and a lot of the time they've been involved in our strategy sessions, so they're on board with us. And in Daniel's area as well, the channel sees that. And that's why, whether it's through the alliances channel or with us directly to the resellers, we're finding that harmony is bringing a lot of peace of mind.
So you can focus on the pains of the customer and not worry about your technology partners fighting among themselves. And that's really where we are; it's the overall ethic of the company. >>All right. Well, the final item I have for both of you: normally we have a certain understanding of where we are and what the roadmap is, but of course, we're dealing with a global pandemic. >>As we look forward to the outlook, I'd love to hear a little bit about what you're hearing from your partners, how that is coloring decisions being made for the next 12 months or so, and any other data points you have from your broad perspectives as to how people think the recovery is going to go; obviously, we understand there's a lot of uncertainty. Daniel, you've got a great global viewpoint. We understand that what is happening impacts different locales quite differently, but what are you seeing going forward? >>Yeah. I couldn't say the contrary. We see it in our numbers that the countries which are the most impacted by the COVID crisis >>have had more difficulties than the others in moving forward from a business standpoint, which everybody understands, and we see it in the numbers. But the thing, and this is what I like very much about our ecosystem, is that we had a plan, a plan that we set in 2019, before we knew anything about COVID, for 2020. And you know what? We are now in the second quarter of the year, and we are going to make our numbers. We are going to make our plan. And why are we going to make it?
It's not only because of Veeam; it's very, very much because of all our partners who, despite all the issues they have in their countries because of COVID, are just fighting, helping themselves, helping us, and all together, as a big business machine, a big business ecosystem, we are all making a success of it. >>And this will show extremely well at the end of the year, when we look at the market share Veeam is going to gain again with all our partners; it will be the result of that success. Very good results. And we'll just continue to move with this network of partners. >>And David, obviously we've seen many of the big partners and their responses; nobody wants to be seen as doing something that is untoward toward customers while taking care of business. So how is this impacting what you're doing with your partners? And give us a little bit of the outlook going forward. >>Yeah. I mean, what we're seeing is energy. Some of these headlines that you see, of course, don't capture the impact related to it on a day-to-day basis. Through the discussions with the executives and at the field level, we're seeing the energy; people want to make sure they respond to what is a tricky, very impactful situation. >>But we're not seeing people freeze. We're seeing people who really want to make sure that they are relating to the needs of their customers today, whether it's endpoint, whether it's moving toward the user experience, but also taking this time to keep building the foundation for a lot of that infrastructure related to data protection and data availability that we've enjoyed for a long period of time.
So yeah, you have a degree of disruption, but the objective that I'm seeing from all the major players out there is: let's make sure we drive hard, let's not take the pedal off the metal, let's not use this as an excuse, let's keep moving. I would say our engagement with them has actually increased. And I don't think we ever expected to be running at this tempo. >>We're running; Veeam does it as standard, but we don't normally have that same tempo from some of the alliances, and we're really pushing hard with them. So yeah, we're excited, and we continue to evolve. Everyone's going to come out of this situation with a lot of aggression, a lot of desire to keep capitalizing on the work we've done together, solving the customer demands that are going to come over the next 18 to 24 months. This is impactful, to be clear, but not something that we're going to let define our future; we're looking into that together. So I think, from us, we're excited about not only, as Daniel said, Veeam's success, but also the really good attitudes we're starting to see from all of our alliance partners, which we love. >>All right. Well, Daniel and David, thank you so much for the update. >>Thank you. Thanks. >>All right, lots more coverage from VeeamON 2020 Online. I'm Stu Miniman. Thank you for watching theCUBE.

Published Date : Jun 17 2020



Jared Rosoff & Kit Colbert, VMware | CUBEConversation, April 2020


 

(upbeat music) >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We are having a very special Cube conversation and kind of the ongoing unveil, if you will, of the new VMware vSphere 7.0. We're going to get a little bit more of a technical deep-dive here today and we're excited to have a longtime CUBE alumni. Kit Colbert here is the VP and CTO of Cloud Platform at VMware. Kit, great to see you. >> Yeah, happy to be here. >> And new to theCUBE, Jared Rosoff. He's a Senior Director of Product Management at VMware and I'm guessing had a whole lot to do with this build. So Jared, first off, congratulations for birthing this new release and great to have you on board. >> Thanks, feels pretty great, great to be here.
How do we proliferate vSphere to as many different locations as possible? Now part of the broader VMware Cloud Foundation portfolio. And you know, as we've gotten more and more of these instances in the cloud, on premises, at the edge, with service providers, there's a secondary question of how do we actually evolve that platform so it can support not just the existing workloads, but also modern workloads as well. >> Right. All right, so I think you brought some pictures for us, a little demo. So why don't we jump over >> Yeah, let's dive into it. >> to there and let's see what it looks like? You guys can cue up the demo. >> Jared: Yeah, so we're going to start off looking at a developer actually working with the new VMware Cloud Foundation 4 and vSphere 7. So what you're seeing here is the developer's actually using Kubernetes to deploy Kubernetes. The self-eating watermelon, right? So the developer uses this Kubernetes declarative syntax where they can describe a whole Kubernetes cluster. And the whole developer experience now is driven by Kubernetes. They can use the kubectl tool and all of the ecosystem of Kubernetes APIs and tool chains to provision workloads right into vSphere. And so, that's not just provisioning workloads though, this is also key to the developer being able to explore the things they've already deployed. So go look at, hey, what's the IP address that got allocated to that? Or what's the CPU load on this workload I just deployed? On top of Kubernetes, we've integrated a Container Registry into vSphere. So here we see a developer pushing and pulling container images. And you know, one of the amazing things about this is from an infrastructure as code standpoint, now, the developer's infrastructure as well as their software is all unified in source control.
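The declarative, infrastructure-as-code flow Jared describes can be sketched with a minimal Kubernetes Deployment manifest built in Python. This is an illustrative sketch, not output from the demo: the application name, image, and registry below are hypothetical, and in practice the manifest would typically be written as YAML and pushed with kubectl.

```python
import json

def deployment_manifest(name, image, replicas=2, namespace="demo-app"):
    """Build a minimal Kubernetes Deployment manifest as a plain dict.

    Desired state (image, replica count, namespace) is just data here,
    so it can be checked into source control next to the app code.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "namespace": namespace, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Hypothetical app and registry, for illustration only.
manifest = deployment_manifest("orders-db", "registry.example.com/orders-db:1.4")
print(json.dumps(manifest, indent=2))
```

Serialized to YAML, the same structure is what a `kubectl apply -f` would hand to the cluster's API server, which then reconciles the running state to match it.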
I can check in not just my code, but also the description of the Kubernetes environment and storage and networking and all the things that are required to run that app. So now we're looking at a sort of side-by-side view, where on the right hand side is the developer continuing to deploy some pieces of their application. And on the left hand side, we see vCenter. And what's key here is that as the developer deploys new things through Kubernetes, those are showing up right inside of the vCenter console. And so the developer and IT are seeing exactly the same things with the same names. And so this means when a developer calls their IT department and says, hey, I got a problem with my database, we don't spend the next hour trying to figure out which VM they're talking about. They got the same name, they see the same information. So what we're going to do is, you know, we're going to push the developer screen aside and start digging into the vSphere experience. And you know, what you'll see here is that vCenter is the vCenter you've already known and loved, but what's different is that now it's much more application focused. So here we see a new screen inside of vCenter, vSphere namespaces. And so, these vSphere namespaces represent whole logical applications, like the whole distributed system now is a single object inside of vCenter. And when I click into one of these apps, this is a managed object inside of vSphere. I can click on permissions, and I can decide which developers have the permission to deploy or read the configuration of one of these namespaces. I can hook this into my Active Directory infrastructure. So I can use the same corporate credentials to access the system. I tap into all my existing storage. So this platform works with all of the existing vSphere storage providers. I can use storage policy based management to provide storage for Kubernetes. And it's hooked in with things like DRS, right?
So I can define quotas and limits for CPU and memory, and all of that's going to be enforced by DRS inside the cluster. And again, as an admin, I'm just using vSphere. But to the developer, they're getting a whole Kubernetes experience out of this platform. Now, vSphere also now sucks in all this information from the Kubernetes environment. So besides seeing the VMs and things the developers have deployed, I can see all of the desired state specifications, all the different Kubernetes objects that the developers have created. The compute, network and storage objects, they're all integrated right inside the vCenter console. And so once again from a diagnostics and troubleshooting perspective, this data's invaluable. It often saves hours just in trying to figure out what we're even talking about when we're trying to resolve an issue. So as you can see, this is all baked right into vCenter. The vCenter experience isn't transformed a lot. We get a lot of VI admins who look at this and say, where's the Kubernetes? And they're surprised; they've been managing Kubernetes all this time, it just looks like the vSphere experience they've already got. But all those Kubernetes objects, the pods and containers, Kubernetes clusters, load balancers, storage, they're all represented right there natively in the vCenter UI. And so we're able to take all of that and make it work for your existing VI admins. >> Well, that's pretty wild, you know. It really builds off the vision that, I think, Kit, you teased out at VMworld, which was the IT still sees vSphere, which is what they want to see, what they're used to seeing, but devs see Kubernetes. And really bringing those together in a unified environment so that, depending on what your job is, and what you're working on, that's what you're going to see, and that's kind of a unified environment. >> Yep.
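The quota-and-limits behavior Jared describes, where workloads in a namespace are admitted only while they fit within the CPU and memory bounds an admin set once, can be modeled with a small toy class. This is a simplified sketch of the concept, not vSphere's or DRS's actual admission logic, and the units and field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class NamespaceQuota:
    """Toy model of a namespace resource quota (not VMware's implementation)."""
    cpu_limit_millicores: int
    mem_limit_mib: int
    cpu_used: int = 0
    mem_used: int = 0

    def admit(self, cpu_request: int, mem_request: int) -> bool:
        """Admit a workload only if it fits within the remaining quota."""
        if (self.cpu_used + cpu_request > self.cpu_limit_millicores
                or self.mem_used + mem_request > self.mem_limit_mib):
            return False
        self.cpu_used += cpu_request
        self.mem_used += mem_request
        return True

# A namespace capped at 4 CPU cores (4000 millicores) and 8 GiB of memory.
quota = NamespaceQuota(cpu_limit_millicores=4000, mem_limit_mib=8192)
assert quota.admit(1500, 2048)       # fits within the quota
assert not quota.admit(3000, 1024)   # would exceed the CPU cap, rejected
```

In the real platform the enforcement happens inside the cluster, but the admin-facing contract is the same idea: set the bounds once on the namespace and every new deployment is checked against them.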
Yeah, as the demo showed, it is still vSphere at the center, but now there are two different experiences that you can have interacting with vSphere. The Kubernetes based one, which is of course great for developers and DevOps type folks, as well as a traditional vSphere interface, APIs, which is great for VI admins and IT operations. >> Right. And then, and really, it was interesting too. You teased out a lot. That was a good little preview if people knew what they were watching, but you talked about really cloud journey, and kind of this bifurcation of kind of classic, old-school apps that are running in their classic VMs and then kind of the modern, you know, cloud native applications built on Kubernetes. And you outlined a really interesting thing, that people often talk about the two ends of the spectrum and getting from one to the other, but not really about kind of the messy middle, if you will. And this is really enabling people to pick where along that spectrum they can move their workloads or move their apps. >> Yeah, no. I think we think a lot about it like that. That we look at, we talk to customers and all of them have very clear visions on where they want to go. Their future state architecture. And that involves embracing cloud, it involves modernizing applications. And you know, as you mentioned, it's challenging for them because I think what a lot of customers see is this kind of, these two extremes. Either you're here where you are, with kind of the old current world, and you got the bright nirvana future on the far end there. And they believe that the only way to get there is to kind of make a leap from one side to the other. That you have to kind of change everything out from underneath you. And that's obviously very expensive, very time consuming and very error-prone as well. There's a lot of things that can go wrong there.
And so I think what we're doing differently at VMware is really, to your point, you call it the messy middle; I would say it's more like, how do we offer stepping stones along that journey? Rather than making this one giant leap where we have to invest all this time and resources, how can we enable people to make smaller incremental steps, each of which has a lot of business value but doesn't have a huge amount of cost? >> Right. And it's really enabling kind of this next gen application where there's a lot of things that are different about it, but one of the fundamental things is where now the application defines the resources that it needs to operate, versus the resources defining kind of the capabilities of what the application can do. And that's where everybody is moving as quickly as makes sense; as you said, not all applications need to make that move, but most of them should, and most of them are at least making that journey. So you see that? >> Yeah, definitely. I mean, I think that certainly this is one of the big evolutions we're making in vSphere from looking historically at how we managed infrastructure, one of the things we enable in vSphere 7 is how we manage applications, right? So a lot of the things you would do in infrastructure management of setting up security rules or encryption settings or, you know, your resource allocation, you would do this in terms of your physical and virtual infrastructure. You talk about it in terms of this VM is going to be encrypted or this VM is going to have this firewall rule. And what we do in vSphere 7 is elevate all of that to application centric management. So you actually look at an application and say I want this application to be constrained to this much CPU. Or I want this application to have these security rules on it. And so that shifts the focus of management really up to the application level. >> Jeff: Right.
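The shift Jared describes, from per-VM settings to policies declared once on the application object, can be illustrated with a toy model. The class, policy keys, and member names below are hypothetical, purely to show the inheritance idea; they are not VMware's API:

```python
class Application:
    """Toy model of application-centric management: one policy, many members."""

    def __init__(self, name: str, policy: dict):
        self.name = name
        self.policy = policy   # declared once, at the application level
        self.members = []      # VMs, containers, etc. that make up the app

    def add_member(self, member_name: str) -> None:
        self.members.append(member_name)

    def effective_policy(self, member_name: str) -> dict:
        # Every member inherits the application-level policy; nothing is
        # configured per VM.
        assert member_name in self.members
        return dict(self.policy)

app = Application("billing", policy={"encryption": True, "firewall": "strict"})
for vm in ("billing-web", "billing-db", "billing-cache"):
    app.add_member(vm)

# Checking compliance means inspecting one object, not every VM in the app.
assert all(app.effective_policy(m)["encryption"] for m in app.members)
```

The design point is the last line: an audit walks a single application object rather than each of its hundred VMs, which is the simplification the compliance example in the conversation is getting at.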
>> Yeah, and like, I would kind of even zoom back a little bit there and say, you know, if you look back, one thing we did with something like VSAN, before that, people had to put policies on a LUN, you know, an actual storage LUN and a storage array. And then by virtue of a workload being placed on that array, it inherited certain policies, right? And so VSAN really turned that around and allows you to put the policy on the VM. But what Jared's talking about now is that for a modern workload, a modern workload's not a single VM, it's a collection of different things. We got some containers in there, some VMs, probably distributed, maybe even some on-prem, some in the cloud, and so how do you start managing that more holistically? And this notion of really having an application as a first-class entity that you can now manage inside of vSphere, it's a really powerful and very simplifying one. >> Right. And why this is important is because it's this application centric point of view which enables the digital transformation that people are talking about all the time. That's a nice big word, but the rubber hits the road is how do you execute and deliver applications, and more importantly, how do you continue to evolve them and change them based on either customer demands or competitive demands or just changes in the marketplace? >> Yeah, well you look at something like a modern app that maybe has a hundred VMs that are part of it and you take something like compliance, right? So today, if I want to check if this app is compliant, I got to go look at every individual VM and make sure it's locked down, and hardened, and secured the right way. But now instead, what I can do is I can just look at that one application object inside of vCenter, set the right security settings on that, and I can be assured that all the different objects inside of it are going to inherit that stuff. So it really simplifies that. 
It also makes it so that that admin can handle much larger applications. You know, if you think about vCenter today you might log in and see a thousand VMs in your inventory. When you log in with vSphere 7, what you see is a few dozen applications. So a single admin can manage a much larger pool of infrastructure, many more applications than they could before because we automate so much of that operation. >> And it's not just the scale part, which is obviously really important, but it's also the rate of change. And this notion of how do we enable developers to get what they want to get done, done, i.e., building applications, while at the same time enabling the IT operations teams to put the right sort of guardrails in place around compliance and security, performance concerns, these sorts of elements. And so by being able to have the IT operations team really manage that logical application at that more abstract level and then have the developer be able to push in new containers or new VMs or whatever they need inside of that abstraction, it actually allows those two teams to work actually together and work together better. They're not stepping over each other but in fact now, they can both get what they need to get done, done, and do so as quickly as possible but while also being safe and in compliance and so forth. >> Right. So there's a lot more to this. This is a very significant release, right? Again, lot of foreshadowing if you go out and read the tea leaves, it's a pretty significant, you know, kind of re-architecture of many parts of vSphere. So beyond the Kubernetes, you know, kind of what are some of the other things that are coming out in this very significant release? >> Yeah, that's a great question because we tend to talk a lot about Kubernetes, what was Project Pacific but is now just part of vSphere, and certainly that is a very large aspect of it but to your point, vSphere 7 is a massive release with all sorts of other features. 
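The "thousand VMs versus a few dozen applications" point above is essentially a grouping of the flat inventory. A hypothetical sketch (field names invented):

```python
# Toy sketch of collapsing a flat VM inventory into per-application groups,
# the way the admin's view shrinks from a thousand VMs to a few applications.
from collections import defaultdict

def group_by_app(vms):
    """Collapse a flat VM inventory into per-application groups."""
    apps = defaultdict(list)
    for vm in vms:
        apps[vm["app"]].append(vm["name"])
    return dict(apps)

# A thousand VMs spread across four applications.
inventory = [{"name": f"vm-{i:03d}", "app": f"app-{i % 4}"} for i in range(1000)]
grouped = group_by_app(inventory)
print(len(inventory), "VMs collapse into", len(grouped), "applications")
```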
And so instead of a demo here, let's pull up some slides and we'll take a look at what's there. So outside of Kubernetes, there's kind of three main categories that we think about when we look at vSphere 7. So the first one is simplified lifecycle management. And then a real focus on security is the second one, and then applications as well, including both the cloud native apps that didn't fit in the Kubernetes bucket as well as others. And so if we go to the first one, the first column there, there's a ton of stuff that we're doing around simplifying lifecycle. So let's go to the next slide here where we can dive in a little bit more to the specifics. So we have this new technology, vSphere Lifecycle Management, vLCM, and the idea here is how do we dramatically simplify upgrades, lifecycle management of the ESX clusters and ESX hosts? How do we make them more declarative, with a single image that you can now specify for an entire cluster? We find that a lot of our vSphere admins, especially at larger scales, have a really tough time doing this. There's a lot of ins and outs today, it's somewhat tricky to do. And so we want to make it really, really simple and really easy to automate as well. >> Right. So if you're doing Kubernetes on Kubernetes, I suppose you're going to have automation on automation, right? Because upgrading to seven is probably not an inconsequential task. >> And yeah, going forward, you know, as we start moving to deliver a lot of this great vSphere functionality at a more rapid clip, how do we enable our customers to take advantage of all those great things we're putting out there as well? >> Right. Next big thing you talk about is security. >> Yep. >> And we just got back from RSA, thank goodness we got that show in before all the madness started. >> Yep. >> But everyone always talked about how security's got to be baked in from the bottom to the top. So talk about kind of the changes in the security.
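Stepping back to the lifecycle discussion above: the declarative, single-image idea amounts to declaring one image for the whole cluster and flagging hosts that drift from it. A toy sketch (the image fields are invented for illustration, not vLCM's actual schema):

```python
# Toy sketch of declarative cluster lifecycle: one desired image is declared
# for the whole cluster, and hosts whose running image differs are flagged
# for remediation. Field names are invented, not vLCM's real schema.

DESIRED_IMAGE = {"esx": "7.0", "driver_pack": "2024-01", "firmware": "1.9"}

def drifted_hosts(cluster):
    """Hosts whose running image differs from the cluster's declared image."""
    return [host for host, image in cluster.items() if image != DESIRED_IMAGE]

cluster = {
    "esx-01": {"esx": "7.0", "driver_pack": "2024-01", "firmware": "1.9"},
    "esx-02": {"esx": "6.7", "driver_pack": "2023-06", "firmware": "1.7"},
}
print(drifted_hosts(cluster))  # → ['esx-02']
```

This is the same desired-state pattern as Kubernetes itself, which is why automating upgrades this way composes naturally with the rest of the release.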
>> So, we've done a lot of things around security. Things around identity federation, things around simplifying certificate management, you know, dramatic simplifications there across the board. One I want to focus on here on the next slide is actually what we call vSphere Trust Authority. And so with that one, what we're looking at here is how do we reduce the potential attack surfaces and really ensure there's a trusted computing base? When we talk to customers, what we find is that they're nervous about a lot of different threats, including even internal ones, right? How do they know all the folks that work for them can be fully trusted? And obviously if you're hiring someone, you somewhat trust them, but you know, how do you implement the concept of least privilege? Right? >> Right. >> Jeff: Or zero trust, right, is a very hot topic in security. >> Yeah, exactly. >> So the idea with Trust Authority is that we can specify a small number of physical ESX hosts that you can really lock down and ensure are fully secure. Those can be managed by a special vCenter Server which is in turn very locked down, only a few people have access to it. And then those hosts and that vCenter can then manage other hosts that are untrusted, and can use attestation to actually prove that, okay, these untrusted hosts haven't been modified, we know they're okay, so they're okay to actually run workloads on, okay to put data on, and that sort of thing. So it's this kind of building block approach to ensure that businesses can have a very small trust base off of which they can build to include their entire vSphere environment. >> Right. And then the third kind of leg of the stool is, you know, just better leveraging, you know, kind of a more complex asset ecosystem, if you will, with things like FPGAs and GPUs and you know, >> Yeah.
kind of all of the various components that power these different applications, which now the application can draw the appropriate resources from as needed, so you've done a lot of work there as well. >> Yeah, there's a ton of innovation happening in the hardware space. As you mentioned, all sorts of accelerators coming out. We all know about GPUs, and obviously what they can do for machine learning and AI type use cases, not to mention 3-D rendering. But you know, FPGAs and all sorts of other things coming down the pike as well there. And so what we found is that as customers try to roll these out, they have a lot of the same problems that we saw in the very early days of virtualization. I.e., silos of specialized hardware that different teams were using. And you know, what you find is all the things we found before. You find very low utilization rates, inability to automate that, inability to manage that well, to put in security and compliance and so forth. And so this is really the reality that we see at most customers. And it's funny because you think, well wow, shouldn't we be past this? As an industry, shouldn't we have solved this already? You know, we did this with virtualization. But as it turns out, the virtualization we did was for compute, and then storage and network, but now we really need to virtualize all these accelerators. And so that's where this Bitfusion technology that we're including now with vSphere really comes to the forefront. So if you see in the current slide we're showing here, the challenge is that there are just these separate pools of infrastructure; how do you manage all that? And so if we go to the next slide, what we see is that with Bitfusion, you can do the same thing that we saw with compute virtualization. You can now pool all these different silos of infrastructure together so they become one big pool of GPU infrastructure that anyone in an organization can use. We can, you know, have multiple people sharing a GPU.
We can do it very dynamically. And the great part of it is that it's really easy for these folks to use. They don't even need to think about it. In fact, it integrates seamlessly with their existing workflows. >> So it's pretty interesting, 'cause the classifications of the assets now are much larger, more varied, and much more workload specific, right? That's really the opportunity slash challenge that you guys are addressing. >> They are a lot more diverse, yep. And so, like, you know, a couple other things, just, now, I don't have a slide on it, but just things we're doing to our base capabilities. Things around DRS and vMotion. Really massive evolutions there as well, to support a lot of these bigger workloads, right? So you look at some of the massive SAP HANA or Oracle databases. And how do we ensure that vMotion can scale to handle those without impacting their performance or anything else there? Making DRS smarter about how it does load balancing and so forth. >> Jeff: Right. >> So a lot of this stuff is not just kind of brand new, cool new accelerator stuff, but it's also how do we ensure the core apps people have already been running for many years, that we continue to keep up with the innovation and scale there as well. >> Right. All right, so Jared, I give you the last word. You've been working on this for a while. There's a whole bunch of admins that have to sit and punch keys. What do you tell them? What should they be excited about? What are you excited for them in this new release? >> I think what I'm excited about is how, you know, IT can really be an enabler of the transformation of modern apps, right? I think today you look at a lot of these organizations, and what ends up happening is the app team ends up sort of building their own infrastructure on top of IT's infrastructure, right? And so now I think we can shift that story around.
I think that there's, you know, there's an interesting conversation that a lot of IT departments and app dev teams are going to be having over the next couple years about how do we really offload some of these infrastructure tasks from the dev team, make you more productive, give you better performance, availability, disaster recovery, and these kinds of capabilities. >> Awesome. Well, Jared, congratulations, and again to both of you, for getting the release out. I'm sure it was a heavy lift, and it's always good to get it out in the world and let people play with it. And thanks for sharing a little bit more of a technical deep dive. I'm sure there are a ton more resources for people that want to go even further down into the weeds. So thanks for stopping by. >> Thank you. >> Thank you. >> All right, he's Jared, he's Kit, I'm Jeff. You're watching theCUBE. We're in the Palo Alto studios. Thanks for watching, and we'll see you next time. (upbeat music)

Published Date : Apr 2 2020



vSphere Online Launch Event


 

[Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] hello and welcome to the Palo Alto students leaky bomb John free we're here for a special cube conversation and special report big news from VMware to discuss the launch of the availability of vSphere seven I'm here with Chris Prasad SVP and general manager of the vSphere business and cloud platform business unit and Paul Turner VP a VP of Product Management guys thanks for coming in and talking about the big news thank you for having us you guys announced some interesting things back in march around containers kubernetes and the vSphere there's just about the hard news what's being announced today we are announcing the general availability of vSphere 7 John it's by far the biggest release that we have done in the last 10 years we previewed it this project Pacific a few months ago with this release we are putting kubernetes native support into the vSphere platform what that allows us to do is give customers the ability to run both modern applications based on kubernetes and containers as well as traditional VM based applications on the same platform and it also allows the IT departments to provide their developers cloud operating 
model using the VMware cloud foundation that is powered by this release this is a key part of our tansu portfolio of solutions and products that we announced this year and it is targeted fully at the developers of modern applications and the specific news is vSphere 7 is general available you know really vSphere 7 yes ok that so let's on the trend line here the relevance is what what's the big trend line that this is riding obviously we saw the announcements at VMworld last year and throughout the year there's a lot of buzz Pat Keller says there's a big wave here with kubernetes what does this announcement mean for you guys with the marketplace trend yeah so what kubernetes is really about is people trying to have an agile operation they're trying to modernize their IT applications and they the best way to do that is build off your current platform expanded and and make it a an innovative a agile platform for you to run kubernetes applications and VM applications together I'm not just that customers are also looking at being able to manage a hybrid cloud environment both on Prem and public cloud together so they want to be able to evolve and modernize their application stack but modernize their infrastructure stack which means hybrid cloud operations with innovative applications kubernetes or container based applications and VMs what's excited about this trend Chris we were talking with us at VMworld last year and we've had many conversations around cloud native but you're seeing cloud native becoming the operating model for modern business I mean this is really the move to the cloud if you look at the successful enterprises even the suppliers the on-premises piece if not move to the cloud native marketplace technologies the on premise isn't effective so it's not so much on premises going away we know it's not but it's turning into cloud native this is the move to the cloud generally this is a big wave yeah absolutely I mean if Jon if you think about it on-premise 
we have significant market share by far the leader in the market and so what we are trying to do with this is to allow customers to use the current platform they are using but bring their application modern application development on top of the same platform today customers tend to set up stacks which are different right so you have a kubernetes stack you have a stack for the traditional applications you have operators and administrators who are specialized in kubernetes on one side and you have the traditional VM operators on the other side with this move what we are saying is that you can be on the same common platform you can have the same administrators who are used to administering the environment that you already had and at the same time offer the developers what they like which is kubernetes dial-tone that they can come and deploy their applications on the same platform that you use for traditional applications yep all Pat said Cuba is gonna be the dial tone on the internet most Millennials might even know what dial tone is but a buddy mince is is that's the key fabric there's gonna work a straight and you know we've heard over the years skill gap skill gap not a lot of skills out there but when you look at the reality of skills gap it's really about skills gaps and shortages not enough people most CIOs and chief and major security are so that we talk to you say I don't want to fork my development teams I don't want to have three separate teams so I don't have to I want to have automation I want an operating model that's not gonna be fragmented this kind of speaks to this whole idea of you know interoperability and multi-cloud this seems to be the next big way behind ibrid I think it I think it is the next big wake the the thing that customers are looking for is a cloud operating model they like the ability for developers to be able to invoke new services on demand in a very agile way and we want to bring that cloud operating model to on-prem to Google cloud 
to Amazon Cloud to Microsoft cloud to any of our VC peepee partners you get the same cloud operating experience and it's all driven by a kubernetes based dial-tone it's effective and available within this platform so by bringing a single infrastructure platform that can one run in this hybrid manner and give you the cloud operating agility that developers are looking for that's what's key in version seven says Pat Kelsey near me when he says dial tone of the internet kubernetes does he mean always on or what does he mean specifically just that it's always available what's what says what's the meaning behind that that phrase the the first thing he means is that developers can come to the infrastructure which is the VMware cloud foundation and be able to work with a set of api's that are kubernetes api s-- so developers understand that they're looking for that they understand that dial tone right and you come to our VMware cloud foundation that runs across all these clouds you get the same API said that you can use to deploy their application okay so let's get into the value here of vSphere seven how does VMware vSphere 7 specifically help customers isn't just bolting on kubernetes to vSphere some will say is it that's simple or are you running product management no it's not that easy it's yeah some people say hey just Bolton kubernetes on vSphere it's it's not that easy so so one of the things if if anybody's actually tried deploying kubernetes first it's it's highly complicated um so so definitely one of the things that we're bringing is you call it a bolt on but it's certainly not like that we are making it incredibly simple you talked about IT operational shortages customers want to be able to deploy kubernetes environments in a very simple way the easiest way that we can you can do that is take your existing environment that are out ninety percent of IT and just turn on turn on the kubernetes dial tone and it is as simple as that now it's much more than that in 
version 7 as well we're bringing in a couple things that are very important you also have to be able to manage at scale just like you would in the cloud you want to be able to have infrastructure almost self-managed and upgrade and lifecycle manage itself and so we're bringing in a new way of managing infrastructure so that you can manage just large scale environments both on-premise and public cloud environments and scale and then associated with that as well is you must make it secure so there's a lot of enhancements we're building into the platform around what we call intrinsic security which is how can we actually build in truly a trusted platform for your developers and IIT yeah I mean I I was just going to touch on your point about the shortage of IT staff and how we are addressing that here the the way we are addressing that is that the IT administrators that are used to administering vSphere can continue to administer this enhanced platform with kubernetes the same way administered the older laces so they don't have to learn anything new they're just working the same way we are not changing any tools process technologies so same as it was before same as it was before more capable dealer and developers can come in and they see new capabilities around kubernetes so it's best of both worlds and what was the pain point that you guys are so obviously the ease-of-use is critical Asti operationally I get that as you look at the cloud native developer Saiga's infrastructure as code means as app developers on the other side taking advantage of it what's the real pain point that you guys are solving with vSphere 7 so I think it's it's it's multiple factors so so first is we've we've talked about agility a few times right there is DevOps as a real trend inside an IT organizations they need to be able to build and deliver applications much quicker they need to be able to respond to the business and to do that what they are doing is is they need infrastructure that is 
on demand so what what we're really doing in the core kubernetes kind of enablement is allowing that on-demand fulfillment of infrastructure so you get that agility that you need but it's it's not just tied to modern applications it's also your all of your existing business applications and your monitoring applications on one platform which means that you know you've got a very simple and and low-cost way of managing large-scale IT infrastructure so that's a that's a huge piece as well and and then I I do want to emphasize a couple of other things it's we're also bringing in new capabilities for AI and m/l applications for sa P Hana databases where we can actually scale to some of the largest business applications out there and you have all of the capabilities like like the GPU awareness and FPGA were FPGA awareness that we built into the platform so that you can truly run this as the fastest accelerated platform for your most extreme applications so you've got the ability to run those applications as well as your kubernetes and container based applications that's the accelerated application innovation piece of the announcement right that's right yeah it's it's it's quite powerful that we've actually brought in you know basically new hardware awareness into the product and expose that to your developers whether that's through containers or through VMs Chris I want to get your thoughts on the ecosystem and then the community but I want to just dig into one feature you mentioned I get the lifestyle improvement a life cycle improvement I get the application acceleration innovation but the intrinsic security is interesting could you take a minute explain what that is yeah so there's there's a few different aspects one is looking at how can we actually provide a trusted environment and that means that you need to have a way that the the key management that even your administrator is not able to get keys to the kingdom as we would call it you you want to have a 
controlled environment that you know some of the worst security challenges inside and some of the companies has been your Intel or internal IT staff so you've got to have a way that you can run a trusted environment in independent we've got these fair trust Authority that we released in version 7 that actually gives you a a secure environment for actually managing your keys to the kingdom effectively your certificates so you've got this you know continuous runtime now not only that we've actually gone and taken our carbon black features and we're actually building in full support for carbon black into the platform so that you've got negative security of even your application ecosystem yeah that's been coming up a lot conversations the carbon black in the security piece Chris obviously have vsphere everywhere having that operating model makes a lot of sense but you have a lot of touch points you got cloud hyper scale is got the edge you got partners so the other dominant market share and private cloud we are on Amazon as you well know as your Google IBM cloud Oracle cloud so all the major clouds there is a vSphere stack running so it allows customers if you think about it right it allows customers to have the same operating model irrespective where their workload is residing they can set policies compliance security they said it once it applies to all their environments across this hybrid cloud and it's all for a supported by our VMware cloud foundation which is powered by vSphere 7 yeah I think having that the cloud is API based having connection points and having that reliable easy to use is critical operating model all right guys so let's summarize the announcement what do you guys take Derek take away from this vSphere 7 what is the bottom line what's what's it really mean I think what we're if we look at it for developers we are democratizing kubernetes we already are in 90% of IT environments out there are running vSphere we are bringing to every one of those 
vSphere environments, and all of the virtual infrastructure administrators, they can now manage Kubernetes environments. You can manage it by simply upgrading your environment. That's a really nice position, rather than having independent environments you need to manage. So I think that is one of the key things that's in here. The other thing, though, is I don't think there's any other platform out there, other than vSphere, that can run in your data center, in Google's, in Amazon's, in Microsoft's, in thousands of VCPP partners. You have one hybrid platform that you can run with, and that's got operational benefits, efficiency benefits, agility benefits.
>> Yeah, I'd just add to that and say that, look, we want to meet customers where they are in their journey, and we want to enable them to make business decisions without technology getting in the way. I think the announcement that we made today with vSphere 7 is going to help them accelerate their digital transformation journey without making trade-offs on people, process, and technology. And there's more to come. We're laser focused on making our platform the best in the industry for running all kinds of applications, and the best platform for hybrid and multi-cloud, so you'll see more capabilities coming in the future. Stay tuned.
>> One final question on this news announcement, which is this awesome vSphere core product for you guys: if I'm the customer, tell me why it's gonna be important five years from now.
>> Because of what I just said. It is the only platform that is going to be running across all the public clouds, which will allow you to have an operational model that is consistent across the clouds. Think about it: if you go to Amazon native, and then you have Oracle and Azure, you're going to have different tools, different processes, different people trained to work with those clouds. But when you come to VMware and you use our Cloud Foundation, you have one operating model across all these environments, and that's going to be game-changing.
>> Great stuff. Thanks for unpacking that for us, and congratulations on the announcement.
>> Thank you.
>> vSphere 7 news special report here inside the Cube conversation. I'm John Furrier, thanks for watching.
[Music]
>> Welcome back, everybody. Jeff Frick here with the Cube. We are having a very special Cube conversation, and kind of the ongoing unveil, if you will, of the new VMware vSphere 7. We're gonna get a little bit more of a technical deep dive here today. We're excited to have a longtime Cube alumni: Kit Colbert, he is the VP and CTO, Cloud Platform, at VMware. Kit, great to see you.
>> Yeah, great to be here.
>> And, new to the Cube, Jared Rosoff. He's a senior director of product management at VMware, and I'm guessing had a whole lot to do with this build. So Jared, first off, congratulations for birthing this new release, and great to have you on board. All right, so let's just jump into it. From kind of a technical aspect, what is so different about vSphere 7?
>> Yeah, great. So vSphere 7 bakes Kubernetes right into the virtualization platform. And so this means that as a developer, I can now use Kubernetes to actually provision and control workloads inside of my vSphere environment. And it means as an IT admin, I'm actually able to deliver Kubernetes and containers to my developers really easily, right on top of the platform I already run.
>> So I think we had kind of a sneaking suspicion that that might be coming with the acquisition of the Heptio team. So really exciting news. And I think you teased it out quite a bit at VMworld last year, about really enabling customers to deploy workloads across environments, regardless of whether that's on-prem, public cloud, this public cloud, that public cloud. So this really is the realization of that vision.
>> Yes, yeah. So we talked at VMworld about Project Pacific, right, this technology preview. And as Jared mentioned, what that was, was how do we take Kubernetes and really build it into vSphere. As you know, we've had a hybrid cloud vision for quite a while now: how do we proliferate vSphere to as many different locations as possible, now part of the broader VMware Cloud Foundation portfolio. And as we've gotten more and more of these instances in the cloud, on-premises, at the edge, with service providers, there's a secondary question: how do we actually evolve that platform so it can support not just the existing workloads but also modern workloads as well?
>> All right. So I think you brought some pictures for us, a little demo. So why don't we dive into there and let's see what it looks like. You guys can cue the demo.
>> Yes. We're gonna start off looking at a developer actually working with the new VMware Cloud Foundation 4 and vSphere 7. So what you're seeing here is the developer actually using Kubernetes to deploy Kubernetes, the self-eating watermelon, right? The developer uses this Kubernetes declarative syntax, where they can describe a whole Kubernetes cluster, and the whole developer experience now is driven by Kubernetes. They can use the kubectl tool and all of the ecosystem of Kubernetes APIs and tool chains to provision workloads right into vSphere. And that's not just provisioning workloads, though. This is also key to the developer being able to explore the things they've already deployed: go look at, hey, what's the IP address that got allocated to that, or what's the CPU load on this workload I just deployed? On top of Kubernetes, we've integrated a container registry into vSphere, so here we see a developer pushing and pulling container images. And one of the amazing things about this is that, from an infrastructure-as-code standpoint, now the developer's infrastructure as well as their software is all unified in source control. I can check in not just my code but also the description of the Kubernetes environment, and storage and networking, and all the things that are required to run that app.
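Jared's point about declarative syntax, describing the cluster you want and letting the platform converge to it, is the core idea behind `kubectl apply`. As a rough illustration (all names and specs below are invented for the example, not VMware's or Kubernetes' actual code), a toy desired-state reconciler looks like this:

```python
# Toy illustration of declarative desired-state reconciliation, the model
# behind "kubectl apply". Object names and specs are invented for the example.

def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired vs. actual objects and return the actions needed."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired_state = {
    "web": {"kind": "Deployment", "replicas": 3},
    "db":  {"kind": "StatefulSet", "replicas": 1},
}
actual_state = {
    "web":    {"kind": "Deployment", "replicas": 2},  # drifted from spec
    "legacy": {"kind": "Deployment", "replicas": 1},  # no longer declared
}

for action in reconcile(desired_state, actual_state):
    print(action)
```

A real Kubernetes controller runs this compare-and-act loop continuously; conceptually, the cluster provisioning in the demo works the same way, just with a cluster object as the desired state.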
>> Now we're looking at a side-by-side view, where on the right-hand side the developer is continuing to deploy some pieces of their application, and on the left-hand side we see vCenter. What's key here is that as the developer deploys new things through Kubernetes, those are showing up right inside of the vCenter console, and so the developer and IT are seeing exactly the same things with the same names. So when a developer calls their IT department and says, hey, I've got a problem with my database, we don't spend the next hour trying to figure out which VM they're talking about. They've got the same names, they see the same information. So what we're gonna do now is push the developer screen aside and start digging into the vSphere experience. What you'll see here is that vCenter is the vCenter you've already known and loved, but what's different is that now it's much more application focused. Here we see a new screen inside of vCenter: vSphere namespaces. These vSphere namespaces represent whole logical applications, like a whole distributed system, now as a single object inside of vCenter. When I click into one of these apps, this is a managed object inside of vSphere. I can click on permissions and decide which developers have the permission to deploy or read the configuration of one of these namespaces. I can hook this into my Active Directory infrastructure, so I can use the same corporate credentials to access the system. I tap into all my existing storage, so this platform works with all of the existing vSphere storage providers. I can use storage policy based management to provide storage for Kubernetes. And it's hooked in with things like DRS, right, so I can define quotas and limits for CPU and memory, and all that's going to be enforced by DRS inside the cluster. And again, as an admin I'm just using vSphere, but to the developer they're getting a whole Kubernetes experience out of this platform.
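The namespace model described here, permissions plus CPU and memory quotas enforced at deploy time, can be sketched in a few lines. This is a hypothetical toy model, not vSphere's actual API; all class and field names are invented:

```python
# Toy model of a "vSphere namespace": an admin sets permissions and resource
# quotas on the namespace, and every workload deployed into it is checked
# against them. All names here are invented for illustration.

class Namespace:
    def __init__(self, name, cpu_limit, mem_gb_limit, allowed_users):
        self.name = name
        self.cpu_limit = cpu_limit
        self.mem_gb_limit = mem_gb_limit
        self.allowed_users = set(allowed_users)
        self.workloads = []  # list of (name, cpu, mem_gb)

    def used(self):
        cpu = sum(w[1] for w in self.workloads)
        mem = sum(w[2] for w in self.workloads)
        return cpu, mem

    def deploy(self, user, name, cpu, mem_gb):
        if user not in self.allowed_users:
            return "denied: no permission"
        used_cpu, used_mem = self.used()
        if used_cpu + cpu > self.cpu_limit or used_mem + mem_gb > self.mem_gb_limit:
            return "denied: quota exceeded"
        self.workloads.append((name, cpu, mem_gb))
        return "deployed"

ns = Namespace("payments", cpu_limit=8, mem_gb_limit=32, allowed_users={"dev1"})
print(ns.deploy("dev1", "api", 4, 16))    # deployed
print(ns.deploy("dev2", "batch", 1, 1))   # denied: no permission
print(ns.deploy("dev1", "cache", 6, 8))   # denied: quota exceeded
```

In the real product the admin sets these limits in vCenter and DRS does the enforcement; the sketch just shows why one namespace object can govern many workloads.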
vSphere also now pulls in all this information from the Kubernetes environment. So besides seeing the VMs and things that developers have deployed, I can see all of the desired-state specifications, all the different Kubernetes objects that the developers have created: the compute, network and storage objects. They're all integrated right inside the vCenter console, and so, once again, from a diagnostics and troubleshooting perspective this data is invaluable. It often saves hours just in trying to figure out what we're even talking about when we're trying to resolve an issue. So, as you can see, this is all baked right into vCenter. The vCenter experience isn't transformed a lot. We get a lot of VI admins who look at this and say, where's the Kubernetes? And they're surprised that they've been managing Kubernetes all this time; it just looks like the vSphere experience they've already got. But all those Kubernetes objects, the pods and containers, Kubernetes clusters, load balancers, storage, they're all represented right there, natively, in the vCenter UI. And so we're able to take all of that and make it work for your existing VI admins.
>> That's pretty wild. It really builds off the vision that I think you outlined, Kit, teased out at VMworld, which was: IT still sees vSphere, which is what they want to see, what they're used to seeing, but devs see Kubernetes, and really bringing those together in a unified environment, so that depending on what your job is and what you're working on, that's what you're gonna see.
>> Yeah, as the demo showed, it is still vSphere at the center, but now there are two different experiences that you can have interacting with vSphere: the Kubernetes-based one, which is of course great for developers and DevOps type folks, as well as the traditional vSphere interfaces and APIs, which are great for VI admins and IT operations.
>> Right. And it was interesting, you teased that out, that was a good little preview for people who knew what they were watching. You talked about really the cloud journey, and kind of this bifurcation of classical old-school apps that are running in their classic VMs, and then the modern, cloud-native applications built on Kubernetes. And you outlined a really interesting thing: people often talk about the two ends of the spectrum, and getting from one to the other, but not really about kind of the messy middle, if you will. And this is really enabling people to pick where along that spectrum they can move their workloads, or move their apps.
>> Yeah, I think we think a lot about it like that. We talk to customers, and all of them have very clear visions on where they want to go, their future-state architecture, and that involves embracing cloud, it involves modernizing applications. And as you mentioned, it's challenging for them, because I think what a lot of customers see is these two extremes: either you're here, in kind of the old current world, or you've got the bright nirvana future on the far end there, and they believe the only way to get there is to make a leap from one side to the other, that you have to change everything out from underneath you. And that's obviously very expensive, very time-consuming, and very error-prone as well; there are a lot of things that can go wrong. So I think what we're doing differently at VMware is really, to your point, as you call it, the messy middle. I would say it's more like: how do we offer stepping stones along that journey, rather than making this one giant leap where you have to invest all this time and resources? How can you enable people to make smaller, incremental steps, each of which has a lot of business value but doesn't have a huge cost?
>> Right. And it's really enabling this next-gen application, where one of the fundamental things is that now the application defines the resources that it needs to operate, versus the resources defining the capabilities of what the application can do. And that's where everybody is moving, as quickly as makes sense. As you said, not all applications need to make that move, but most of them should, and most of them are at least making that journey. Do you see that?
>> Yeah, definitely. I mean, I think this is certainly one of the big evolutions we're making in vSphere: from looking historically at how we managed infrastructure to what we enable in vSphere 7, which is how we manage applications. A lot of the things you would do in infrastructure management, setting up security rules or encryption settings or your resource allocation, you would do in terms of your physical and virtual infrastructure. You'd talk about it in terms of: this VM is going to be encrypted, or this VM is gonna have this firewall rule. What we do in vSphere 7 is elevate all of that to application-centric management, so you actually look at an application and say, I want this application to be constrained to this much CPU, or I want this application to have these security rules on it. And so that shifts the focus of management really up to the application level.
>> Yeah, and to zoom back a little bit there: one thing we did before was something like vSAN. Before that, people had to put policies on a LUN, an actual storage LUN in a storage array, and then, by virtue of a workload being placed on that array, it inherited certain policies. vSAN turned that around: it allows you to put the policy on the VM. But what Jared's talking about now is that a modern workload is not a single VM, it's a collection of different things: you've got some containers in there, some VMs, probably distributed, maybe even some on-prem, some in the cloud. So how do you start managing that more holistically? This notion of really having an application as a first-class entity that you can manage inside of vSphere is really powerful, and very simplifying.
>> Right. And why this is important is because it's this application-centric point of view which enables the digital transformation that people are talking about all the time. That's a nice big word, but where the rubber hits the road is: how do you execute and deliver applications, and, more importantly, how do you continue to evolve them and change them, based on either customer demands or competitive demands, or just changes in the marketplace?
>> Yeah. Well, you look at something like a modern app that maybe has a hundred VMs that are part of it, and you take something like compliance. Today, if I want to check if this app is compliant, I've got to go look at every individual VM and make sure it's locked down and hardened and secured the right way. But now, instead, what I can do is look at that one application object inside of vCenter, set the right security settings on that, and I can be assured that all the different objects inside of it are gonna inherit that. So it really simplifies that. It also makes it so that that admin can handle much larger applications. If you think about vCenter today, you might log in and see a thousand VMs in your inventory. When you log in with vSphere 7, what you see is a few dozen applications. So a single admin can manage a much larger pool of infrastructure, many more applications than they could before, because we automate so much of that operation.
>> And it's not just the scale part, which is obviously really important, but it's also the rate of change, and this notion of how we enable developers to get what they want to get done, done, i.e. building applications.
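The "hundred VMs, one compliance check" point can be made concrete with a small sketch. This is an invented illustration of policy inheritance, not vCenter's actual object model; every name below is hypothetical:

```python
# Toy sketch of application-centric management: a policy is set once on the
# application object, every member VM or container inherits it, and a
# compliance check inspects one object instead of a hundred. Invented names.

class Application:
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy          # e.g. {"encrypted": True}
        self.members = []             # VMs, containers, ...

    def add_member(self, member_name):
        # Members carry no policy of their own; they inherit the app's.
        self.members.append(member_name)

    def effective_policy(self, member_name):
        assert member_name in self.members
        return self.policy

    def compliant(self, required):
        # One check covers every member, however many there are.
        return all(self.policy.get(k) == v for k, v in required.items())

app = Application("billing", policy={"encrypted": True, "firewall": "strict"})
for i in range(100):
    app.add_member(f"vm-{i}")

print(app.effective_policy("vm-42"))       # inherited from the app object
print(app.compliant({"encrypted": True}))  # one check for all 100 members
```

The design point is the same one Jared makes: moving the policy up a level turns an O(number-of-VMs) audit into an O(number-of-applications) one.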
At the same time, we enable the IT operations teams to put the right sort of guardrails in place around compliance, security, and performance concerns, these sorts of elements. By being able to have the IT operations team really manage that logical application at that more abstract level, and then have the developer be able to push in new containers or new VMs or whatever they need inside of that abstraction, it actually allows those two teams to work together better. They're not stepping over each other; in fact, now they can both get what they need to get done, done, and do so as quickly as possible, while also being safe and in compliance, and so forth.
>> Right. So there's a lot more to this; it's a very significant release. Again, a lot of foreshadowing if you go out and read the tea leaves: a pretty significant re-architecture of many, many parts of vSphere. So beyond the Kubernetes, what are some of the other things that are coming out in this very significant release?
>> Yeah, it's a great question, because we tend to talk a lot about Kubernetes, what was Project Pacific but is now just part of vSphere, and certainly that is a very large aspect of it. But to your point, vSphere 7 is a massive release with all sorts of other features. So instead of a demo here, let's pull up some slides and look at what's there. Outside of Kubernetes, there are three main categories that we think about when we look at vSphere 7. The first one is simplified lifecycle management; a real focus on security is the second one; and then applications as well, including both the cloud-native apps that don't fit in the Kubernetes bucket, as well as others. So, on the first one, the first column there, there's a ton of stuff that we're doing around simplifying lifecycle. Let's go to the next slide, where we can dive in a little bit more to the specifics. We have this new technology, vSphere Lifecycle Management, vLCM, and the idea here is: how do we dramatically simplify upgrades and lifecycle management of the ESXi clusters and hosts? How do we make them more declarative, with a single image you can now specify for an entire cluster? We find that a lot of our vSphere admins, especially at larger scales, have a really tough time doing this. There's a lot of in and out today, it's somewhat tricky to do, and so we want to make it really, really simple, and really easy to automate as well.
>> So if you're doing Kubernetes on Kubernetes, I suppose you're gonna have automation on automation, right? Because upgrading to the 7s is probably not an inconsequential task.
>> Exactly. And going forward, as we start moving to deliver a lot of this great vSphere functionality at a more rapid clip, how do we enable our customers to take advantage of all those great things we're putting out there as well?
>> Right. The next big thing you talk about is security. We just got back from RSA; thank goodness we got that show in before all the badness started. Everyone always talks about how security's got to be baked in from the bottom to the top. Talk about the changes in security.
>> Yeah, we've done a lot of things around security: things around identity federation, things around simplifying certificate management, dramatic simplifications there across the board. One I want to focus on here, on the next slide, is actually what we call vSphere Trust Authority. With that one, what we're looking at is: how do we reduce the potential attack surfaces and really ensure there's a trusted computing base? When we talk to customers, what we find is that they're nervous about a lot of different threats, including even internal ones. How do they know all the folks that work for them can be fully trusted? Obviously, if you're hiring someone, you somewhat trust them, but how do you actually implement that?
>> The concept of least privilege, right, or zero trust?
>> Right, yeah, exactly. So the idea with Trust Authority is that we can specify a small number of physical ESXi hosts that you can really lock down and ensure are fully secure. Those can be managed by a special vCenter Server which is in turn very locked down; only a few people have access to it. Those hosts and that vCenter can then manage other hosts that are untrusted, and can use attestation to actually prove that, okay, these untrusted hosts haven't been modified. We know they're okay, so they're okay to actually run workloads on, they're okay to put data on, and that sort of thing. So it's a building-block approach, to ensure that businesses can have a very small trusted base off of which they can build out to include their entire vSphere environment.
>> Right. And then the third leg of the stool is just better leveraging a more complex asset ecosystem: things like FPGAs and GPUs, all of the various components that power these different applications, where now the application can draw the appropriate resources as needed. You've done a lot of work here as well.
>> Yeah, there's a ton of innovation happening in the hardware space. As you mentioned, all sorts of accelerators are coming out. We all know about GPUs, and obviously what they can do for machine learning and AI type use cases, not to mention 3D rendering, but FPGAs and all sorts of other things are coming down the pike as well. What we've found is that as customers try to roll these out, they have a lot of the same problems that we saw in the very early days of virtualization, i.e. silos of specialized hardware that different teams were using. And what you find is all the things we found before: very low utilization rates, inability to automate, inability to manage well, to put security and compliance on it, and so forth. So this is really the reality that we see at most customers.
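Stepping back to the Trust Authority flow Kit walked through, trusted hosts attesting that untrusted hosts haven't been modified, here is a minimal sketch of the idea. The build string and the bare string hash are invented stand-ins; real attestation relies on TPM-style hardware measurements, not application-level hashing:

```python
# Toy sketch of the attestation idea: a small, locked-down trusted base holds
# known-good measurements and vouches for other hosts before workloads are
# placed on them. All values here are hypothetical.

import hashlib

def measure(host_software: str) -> str:
    """Stand-in for a hardware measurement of what the host is running."""
    return hashlib.sha256(host_software.encode()).hexdigest()

# The trusted base's list of approved measurements (invented build string).
KNOWN_GOOD = {measure("esxi-7.0-build-0000000")}

def attest(host_software: str) -> bool:
    """The trusted base only approves hosts whose measurement it recognizes."""
    return measure(host_software) in KNOWN_GOOD

def place_workload(host_software: str) -> str:
    return "scheduled" if attest(host_software) else "refused: host not attested"

print(place_workload("esxi-7.0-build-0000000"))           # scheduled
print(place_workload("esxi-7.0-build-0000000+tampered"))  # refused
```

The point of the building-block structure is that only the measurement list and the verifier need to be fully trusted; every other host earns trust per check.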
>> And it's funny, because some might think: well, shouldn't we be past this as an industry? Shouldn't we have solved this already? We did this with virtualization. But as it turns out, the virtualization we did was for compute, and then storage and network; now we really need to virtualize all these accelerators. And that's where this Bitfusion technology that we're including now with vSphere really comes to the forefront. As you see in the current slide, the challenge is just these separate pools of infrastructure: how do you manage all that? If we go to the next slide, what we see is that with Bitfusion you can do the same thing that we saw with compute virtualization. You can now pool all these different silos of infrastructure together, so they become one big pool of GPUs, of infrastructure, that anyone in the organization can use. We can have multiple people sharing a GPU, we can do it very dynamically, and the great part is that it's really easy for these folks to use. They don't even need to think about it; in fact, it integrates seamlessly with their existing workflows.
>> It's pretty slick, because the classifications of the assets now are much, much larger, much more varied, and much more workload specific.
>> Right, that's really the opportunity; the accelerators are diverse. And a couple of other things, I don't have a slide on it, but just things we're doing to our base capabilities: things around DRS and vMotion, really massive evolutions there as well, to support a lot of these bigger workloads. You look at some of the massive SAP HANA or Oracle databases: how do we ensure that vMotion can scale to handle those without impacting their performance or anything else? We're making DRS smarter about how it does load balancing, and so forth.
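The pooling argument, turning per-team GPU silos into one shared pool to raise utilization, can be sketched as a toy allocator. All names are invented, and the real Bitfusion product shares GPUs remotely and fractionally rather than handing out whole devices, so treat this only as the shape of the idea:

```python
# Toy sketch of accelerator pooling: instead of per-team GPU silos,
# accelerators go into one shared pool and are attached to workloads
# dynamically, which raises utilization. Invented names throughout.

class GpuPool:
    def __init__(self, gpu_ids):
        self.free = set(gpu_ids)
        self.assigned = {}            # workload -> gpu

    def attach(self, workload):
        if not self.free:
            return None               # a real system could queue or share
        gpu = self.free.pop()
        self.assigned[workload] = gpu
        return gpu

    def release(self, workload):
        self.free.add(self.assigned.pop(workload))

    def utilization(self):
        total = len(self.free) + len(self.assigned)
        return len(self.assigned) / total

pool = GpuPool(["gpu-0", "gpu-1", "gpu-2", "gpu-3"])
pool.attach("ml-training")
pool.attach("3d-render")
print(pool.utilization())             # 0.5 across the whole pool
pool.release("3d-render")             # the GPU goes back for anyone to use
print(pool.utilization())
```

With silos, each team's idle GPUs are stranded; with one pool, any idle device is available to the next workload, which is the utilization win described above.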
So a lot of this stuff is not just the brand-new, cool accelerator stuff; it's also how we ensure that the core apps people have already been running for many years continue to keep up with the innovation and scale there as well.
>> All right. So, Jared, I'll give you the last word. You've been working on this for a while. There's a whole bunch of admins that have to sit and punch keys. What do you tell them? What should they be excited about? What are you excited about for them in this new release?
>> I think what I'm excited about is how IT can really be an enabler of the transformation of modern apps. Today you look at a lot of these organizations, and what ends up happening is that the app team ends up sort of building their own infrastructure on top of IT's infrastructure. Now I think we can shift that story around. There's an interesting conversation that a lot of IT departments and app dev teams are gonna be having over the next couple of years about how we really offload some of these infrastructure tasks from the dev team, make you more productive, give you better performance, availability, disaster recovery, and these kinds of capabilities.
>> Awesome. Well, Jared, congratulations to both of you for getting the release out. I'm sure it was a heavy lift, and it's always good to get it out in the world and let people play with it. And thanks for sharing a little bit more of a technical deep dive; I'm sure there are tons more resources for people who want to go down into the weeds. So thanks for stopping by.
>> Thank you.
>> All right. He's Jared, he's Kit, I'm Jeff. You're watching the Cube, we're in the Palo Alto studios. Thanks for watching, we'll see you next time.
[Music]
>> Hi, and welcome to a special Cube conversation. I'm Stu Miniman, and we're digging into the VMware vSphere 7 announcement. We've had conversations with some of the executives, some of the technical people, but we know that there's no better way to really understand a technology than to talk to some of the practitioners that are using it. So really happy to have joining me for the program Phil Buckley-Miller, who is an infrastructure designer with British Telecom, joining me digitally from across the pond. Phil, thanks so much for joining us.
>> Nice to see you.
>> All right, so Phil, let's start. Of course, British Telecom, I think most people know what BT is; it's a really sprawling company. Tell us a little bit about your group, your role, and what's your mandate.
>> OK. So my group is called Service Platforms. It's the bit of BT that services all of our multi-millions of customers. We have broadband, we have TV, we have mobile, we have DNS and email systems, and it's all about our customers. It's not the B2B part of BT; we specifically focus on those multi-million-customer services. In particular, my group does infrastructure, so we really go from the data center all the way up to about boot time or so, just past boot time, and the application developers look after that stage and above.
>> OK, great. We're definitely gonna want to dig in and talk about that boundary between the infrastructure teams and the application teams, but let's talk a little bit first about VMware. How long has your organization been doing VMware, and tell us what you see with the announcement VMware's making for vSphere 7.
>> Sure. Well, we've had a really great relationship with VMware for about twelve, thirteen years, something like that, and it's an absolutely key part of our infrastructure. It's written throughout BT, really, in every part of our operations, design and development; the whole ethos of the company is based around a lot of VMware products. One of the challenges that we've got right now is that application architectures are changing quite significantly at the moment, as you know, in particular with serverless and with containers and a whole bunch of other things like that. We're very comfortable with our ability to manage VMs, and have been for a while. We currently use, extensively, vSphere, NSX-T, vROps, Log Insight, Network Insight, and a whole bunch of other VMware constellation applications, and our operations teams know how to use them: they know how to optimize, they know how to capacity plan and troubleshoot. That's great, and it's been like that for half a decade at least; we've been really, really confident in our ability to deal with VMware environments. And along came containers and, like I say, multi-cloud as well, and what we were struggling with was the inability to have a single pane of glass, really, on all of that, and to use the same people and the same processes to manage a different kind of technology. So we've been working pretty closely with VMware on a number of different containerization products for several years now. I worked really closely with the vSphere Integrated Containers guys in particular, and now with the Pacific guys, with really the idea that when we bring in version 7 and the containerization aspects of version 7, we'll be in a position to have that single pane of glass, to allow our operations team to barely differentiate between what's a VM and what's a container. That's really the holy grail, right? So we'll be able to allow our developers to develop, our operations team to deploy and operate, and our designers to see the same infrastructure, whether that's on-premises, cloud, or off-premises, and be able to manage the whole piece that way.
>> OK. So, Phil, really interesting things you walked through here. You've been using containers in a virtualized environment for a number of years. I want to understand the organizational piece just a little bit, because it sounds like one team manages all the environments, but containers are a little bit different than VMs. If I think back, from an application standpoint, it was: let's stick it in a VM, I don't need to change it, and once I spin up a VM, often it's gonna sit there for months, if not years. As opposed to a containerization environment, where it's: I really want a pool of resources, I'm gonna create and destroy things all the time. So bring us inside that organizational piece. How much will there need to be interaction, and more interaction or change in policies, between your infrastructure team and your app dev team?
>> Well, yes, you're absolutely right. The time scales that we're talking about between VMs and containers are wildly different. As you say, we probably still have VMs in place now that were in place in 2018, certainly, that I imagine haven't really been touched. Whereas, as you say, with containers a lot of people talk about spinning them up all the time. There are parts of our architecture that require that, in particular the very client-facing, bursty stuff; it does require spinning up and spinning down pretty quickly. But some of our smaller containers do sit around for weeks, if not months; it really just depends on the development cycle aspects. The hard bit that we've really had was just visualizing it. There are a number of different products out there that allow you to see the behavior of your containers, and understand the resource requirements they're having at any given moment, and allow troubleshooting and so on, but they're new products, new things that we would have to get used to. And it also seems that there's an awful lot of competing products, quite a Venn diagram, in terms of functionality and the user's ability to do that. So, again, coming back to being able to manage through vSphere: to be able to have a list of VMs, and alongside it a list of containers, and to be able to use policies to define how they behave in terms of their networking, to be able to essentially put our deployments on rails by using, in particular, tag-based policies, means that we can take the onus of security, performance management and capacity management away from the developers, who don't really care about that a lot of the time, so they can just get on with their job, which is to develop new functionality and help our customers. That then means we have to be really responsible about defining those policies and making sure they're adhered to. But, again, we know how to do that with VMs in vSphere, so the fact that we can apply that straight away, just to a slightly different compute unit, which is really what we're talking about here, is ideal. And then to be able to extend that into multiple clouds as well, because we do use multiple clouds; we're AWS and Azure customers, and moving between them is an opportunity that we can't help but be excited about.
>> Yeah, Phil, I really like how you described the changing roles that are happening there in your organization. There are things that developers care about: they want to move fast, they want to be able to build new things. And there are things that they shouldn't have to worry about. We talked about some of the new world, and it's like, oh, can the platform underneath this take care of it? Well, there are some things platforms take care of, and there are some things that your team is going to need to understand. So maybe dig in a little bit: what are the drivers from your application portfolio? What is the business asking of your organization that's driving this change and being one of those tailwinds pushing you towards Kubernetes and the vSphere 7 technologies?
>> Well, it all comes down to the customers, right? Our customers want new functionality, they want new integrations, they want new content, and they want better stability and better performance.
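The tag-based policies Phil describes, where operations defines a policy once per tag and developers just tag their deployments, can be sketched like this. The tag names and policy fields are invented for the example, not BT's or VMware's actual configuration:

```python
# Toy sketch of tag-based policies: operations attaches policies to tags,
# developers only tag their deployments, and placement, networking, and
# security follow from the tag. All names are invented.

TAG_POLICIES = {
    "customer-facing": {"placement": "public-cloud", "network": "dmz"},
    "data-sensitive":  {"placement": "on-premises", "network": "internal",
                        "encrypted": True},
}

def policy_for(tags):
    """Merge the policies of every tag on a deployment."""
    merged = {}
    for tag in tags:
        merged.update(TAG_POLICIES.get(tag, {}))
    return merged

# The developer only supplies tags; the "rails" come from operations' table.
deployment = {"name": "order-journey", "tags": ["customer-facing"]}
print(policy_for(deployment["tags"]))

billing = {"name": "billing-db", "tags": ["data-sensitive"]}
print(policy_for(billing["tags"]))
```

This split is the point Phil makes: developers never touch the policy table, so security and capacity rules stay with the operations team while deployments keep moving quickly.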
And they want the ability to extend or contract capacity as needed as well. So those are the real drivers. Ultimately, we want to give our customers the best possible experience of our products and services, so we have to address that. From a development perspective, it's our developers that have the responsibility to design and deploy those, so in infrastructure we have to act as a firm foundation underneath all of that: one that allows them to know that what they spend their time developing and pushing out to our customers is something that can be trusted, that is performant, that we understand where the capacity requirements are coming from in the short term and the long term, and that is secure as well, which is obviously a big aspect of it. So really, we're just providing our developers with the best possible chance of giving our customers what will hopefully make them delighted.
>> Great. Phil, you've mentioned a couple of times that you're using public clouds as well as your VMware farm. I want to make sure you can explain a couple of things. Number one: when it comes to your team, especially your infrastructure team, how much are they involved with setting up some of the basic pieces, or managing things like performance, in the public cloud? And secondly, when you look at your applications, are some of your applications hybrid, going between the data center and the public cloud? I haven't talked to too many customers that are doing applications that live in any cloud and move things around. Maybe you could clarify those pieces as to what cloud really means to your organization and your applications.
>> Sure. Well, to us, cloud allows us to accelerate development, which is nice, because it means we don't have to do on-premises capacity lifts for new pieces of functionality; we can initially build in the cloud and test in the cloud. But very often, applications really make better sense on premises, especially in the TV environment, where people watch TV all the time. I mean, yes, there are peak hours and lighter hours of TV watching, and the same goes for broadband, really, but generally we have well more than an eight-hour application profile. So what that allows us to do is: where it makes sense, we run them inside our organization, or where we have to run them in our organization, for data protection reasons or whatever, then we can do that as well. But where, say for instance, we have a boxing match on, and we're going to see an enormous spike in the number of customers that want to sign up to our order journey to allow them to view it and gain access to it: well, why would you spend a lot of money on servers just for that level of additional capacity? So we do absolutely have hybrid applications, sorry, hybrid blocks. We have sub-application blocks, dozens of them, really, to support our platform. And what you would see, if you were to look at our full application structure for one of the platforms, as I mentioned, is that some of the smaller application blocks have to run inside, and some can run outside. What we want to be able to do is allow our operations team to define that, again by policy, as to where they run, and to have a system that allows us to transparently see where they're running, how they're running, and the implications of those decisions, so that we can tune them, maybe in the future as well, and that way we best serve our customers and give them what they need.
>> All right, great. Phil, the final question I have for you: you've been through a few iterations of looking at VMs, containers, public cloud. What advice would you give your peers, with the announcement of vSphere 7, on how they can look at things today, in 2020, versus what they might have looked at, say, a year or two ago?
>> Well, I'll be honest, I was a little bit surprised by vSphere 7. We knew that VMware were working on trying to make containers the same, both from a management and deployment perspective, as VMs. I mean, they're called VMware, after all. We knew that they were looking at it, so it's no surprise. But just quite how quickly they've managed to almost completely reinvent the application, really, is striking. If you look at the whole Tanzu stuff, the Mission Control stuff, I think a lot of people were blown away by just quite how happy VMware were to reinvent themselves from an application perspective, and to really leap forward. And this is the change between version 6 and 7. I've been following these since version 3 at least, and it's an absolutely revolutionary change in terms of the overall architecture and the aims of what they want to achieve with the application. And, luckily, the nice thing is that if you're used to version 6, it's really not that big a deal to move forward at all. It's not such a big change to processes and training and things like that. But, my word, there's an awful lot of work underneath, underneath the covers, and I'm really excited. I think other people in my position should really just take it as an opportunity to revisit what they can achieve, in particular with vSphere in combination with NSX-T. It's quite hard to appreciate, unless you've seen the slides about it and seen the products, just how revolutionary version 7 is compared to previous revisions, which have kind of evolved over a couple of years. So, yeah, I'm really excited to run it, and a lot of my peers at other companies that I speak with quite often are very excited about 7 as well. So, yeah, I'm really excited about the whole thing.
>> Well, Phil, thank you so much. Absolutely no doubt this is a huge move for VMware, the entire company, and their ecosystem, rallying around to help move to the next phase of where application developers and infrastructure need to go. Phil Buckley
joining us from British Telecom I'm Stu minimun thank you so much for watching the queue

Published Date : Apr 1 2020

