
Driving Business Results with Cloud Transformation - Aditi Banerjee and Todd Edmunds


 

>> Welcome back to the program. My name is Dave Vellante, and in this session we're going to explore one of the more interesting topics of the day: IoT for smart factories. And with me are Todd Edmunds, the global CTO of Smart Manufacturing, Edge and Digital Twins at Dell Technologies. That is such a cool title. (Todd laughs) I want to be you. And Dr. Aditi Banerjee, who's the Vice President and General Manager for Aerospace, Defense and Manufacturing at DXC Technology. Another really cool title. Folks, welcome to the program. Thanks for coming on. >> Thanks Dave. >> Thank you. Great to be here. >> Well- >> Nice to be here. >> Todd, let's start with you. We hear a lot about Industry 4.0, smart factories, IIoT. Can you briefly explain what Industry 4.0 is all about and why it's important for the manufacturing industry? >> Yeah, sure Dave. You know, it's been around for quite a while, and it's gone by multiple different names. As you said, Industry 4.0, smart manufacturing, industrial IoT, smart factory. But it all really means the same thing. It's really applying technology to get more out of the factories and the facilities that you have to do your manufacturing. So being much more efficient. Implementing really good sustainability initiatives. And so we really look at that by saying, "Okay, what are we going to do with technology to really accelerate what we've been doing for a long, long time?" So it's really not new. It's been around for a long time. What's new is that manufacturers are looking at this not from a one-off, two-off, individual use case point of view, but instead they're saying, "We really need to look at this holistically, thinking about a strategic investment in how we do this." Not to just enable one or two use cases, but to enable many, many use cases across the spectrum. I mean, there's tons of 'em out there. There's predictive maintenance, and there's OEE, overall equipment effectiveness, and there's computer vision.
And all of these things are starting to percolate down to the factory floor, but it needs to be done in a little bit different way. And really, to get those outcomes that they're looking for in smart factory, or Industry 4.0, or whatever you want to call it, and truly transform. Not just throw an Industry 4.0 use case out there, but to do the digital transformation that's really necessary, and to be able to stay relevant for the future. You know, I heard it once said that you have two options. Either you digitally transform and stay relevant for the future, or you don't, and fade into history like 52% of the companies that used to be on the Fortune 500 since 2000, right. And so really that's a key thing, and we're seeing that really, really being adopted by manufacturers all across the globe. >> Yeah, so Aditi, digital transformation is almost synonymous with business transformation. So is there anything you'd add to what Todd just said? >> Absolutely. I would really add that what really drives Industry 4.0 is the business transformation. What we are able to deliver in terms of improving the manufacturing KPIs and the KPIs for customer satisfaction, right. For example, improving the downtime, you know, or decreasing the maintenance cycle of the equipment, or improving the quality of products, right. So I think these are a lot of business outcomes that our customers are looking at while using Industry 4.0, and the technologies of Industry 4.0, to deliver these outcomes. >> So Aditi, if I could stay with you, and maybe this is a bit esoteric: when I first started researching IoT and industrial IoT, Industry 4.0, et cetera, I felt, you know, while there could be some disruptions in the ecosystem, I kind of came to the conclusion that large manufacturing firms, aerospace and defense companies, the firms building out critical infrastructure, actually had kind of an incumbent advantage and a great opportunity.
Of course, then I saw on TV that somebody's now building homes with 3D printers. It, like, blows your mind. So that's pretty disruptive. But the incumbents have to continue to invest in the future. They're well capitalized. They're pretty good businesses. Very good businesses. But there's a lot of complexities involved in kind of connecting the old house to the new addition that's being built, if you will, or the transformation that we're talking about. So my question is, how are your customers preparing for this new era? What are the key challenges that they're facing, and the blockers, if you will? >> Yeah, I mean, the customers are looking at Industry 4.0 for greenfield factories, right. That is where the investments are going directly into building the factories with the new technologies, with the new connectivities, right, for the machines, for example. Industrial IoT, having the right type of data platforms to drive computational analytics and outcomes, as well as looking at edge versus cloud types of technologies, right. Those are all getting built in the greenfield factories. However, for the installed-base factories, right, that is where our customers are looking at, how do I modernize these factories, right? How do I connect the existing machines? And that is where some of the challenges come in on, you know, the legacy system connectivity that they need to think about. Also, they need to start thinking about cybersecurity and operational technology security, right, because now you are connecting the factories to each other, right. So cybersecurity becomes top of mind, right. So there is definitely investment that is involved. Clients are creating roadmaps for digitizing and modernizing these factories, and investing in a very strategic way, right. So perhaps they start with an innovation program, and then they look at the business case and they scale it up, right.
>> Todd, I'm glad Aditi brought up security, because if you think about the operations technology folks, historically they air-gapped, you know, the systems. That's how they created security. That's changed. The business came in and said, "Hey, we've got to connect. We've got to make it intelligent." So that's got to be a big challenge as well. >> It absolutely is, Dave. And, you know, you can no longer just segment that, because really, to get all of those efficiencies that we talk about, that IoT and industrial IoT and Industry 4.0 promise, you have to get data out of the factory, but then you've got to put data back in the factory. So no longer is just firewalling everything really the answer. So you really have to have a comprehensive approach to security, but you also have to have a comprehensive approach to the cloud and what that means. And does it mean a continuum of cloud all the way down to the edge, right down to the factory? It absolutely does, because no one approach has the answer to everything. The more you go to the cloud, the broader the attack surface is. So what we're seeing is a lot of our customers approaching this from, kind of, that hybrid, you know, write once, run anywhere, on the factory floor down to the edge. And one of the things we're seeing, too, is helping distinguish between what is the edge, and bridging that gap between, as you talked about, Dave, IT and OT. And also helping with what Aditi talked about, the greenfield plants versus the brownfield plants, as they call them, the legacy ones, and modernizing those. It's great to start to delineate: what does that mean? Where's the edge? Where's the IT and the OT? We see that from a couple of different ways. We start to think about, really, two edges on a manufacturing floor. We talk about an industrial edge, or some people call it a far edge or a thin edge, that sits way down in that plant.
It consists of industrial-hardened devices that do that connectivity, the hard stuff, about how do I connect to this obsolete legacy protocol and what do I do with it, and create that next generation of data that has context. And then we see another edge evolving above that, which is much more of a data and analytics and enterprise-grade application layer that sits down in the factory itself and helps figure out where we're going to run this. Does it connect to the cloud? Do we run applications on-prem? Because a lot of times, that on-prem application needs to be done that way, because that's the only way it's going to work. Because of security requirements. Because of latency requirements, performance, and a lot of times, cost. It's really helpful to build that multiple-edge strategy, because then you consolidate all of those resources, applications, infrastructure and hardware into a centralized location. It makes it much, much easier to really deploy and manage that security. But it also makes it easier to deploy new applications and new use cases, and become the foundation for DXC's expertise in applications that they deliver to our customers as well. >> Todd, how complex are these projects? I mean, I feel like it's kind of the digital equivalent of building the Hoover Dam. So how long does a typical project take? I know it varies, but what are the critical success factors in terms of delivering business value quickly? >> Yeah, that's a great question. Like I said at the beginning, this is not new. Smart factory and Industry 4.0 are not new. People have been trying to implement the holy grail of smart factory for a long time. And what we're seeing is a switch, a little bit of a switch, or quite a bit of a switch, to where the enterprise and the IT folks are having a much bigger say, and have a lot to offer to be able to help with that complexity.
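The two-edge model Todd describes, a hardened industrial edge for connectivity and an enterprise-grade edge layer that decides whether a workload runs on-prem or in the cloud based on security, latency and cost, can be sketched as a simple placement rule. This is an illustrative sketch only; the names, fields and threshold are assumptions for the example, not a Dell or DXC design.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: int           # performance requirement (assumed field)
    data_must_stay_onsite: bool   # security/compliance requirement (assumed field)

def place_workload(w: Workload, cloud_round_trip_ms: int = 80) -> str:
    """Decide where a factory workload runs, following the reasoning above:
    security and latency constraints push it down to the on-prem edge."""
    if w.data_must_stay_onsite or w.max_latency_ms < cloud_round_trip_ms:
        return "enterprise-edge (on-prem)"
    return "cloud"

# A latency-bound vision workload lands on-prem; a reporting dashboard can go to the cloud.
print(place_workload(Workload("computer-vision-inspection", 20, False)))
print(place_workload(Workload("oee-dashboard", 500, False)))
```

The point of the sketch is only that the decision is made per workload, at the enterprise edge, rather than once for the whole plant.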
So instead of deploying a computer here and a gateway there and a server there... I mean, you can walk into any manufacturing plant and see servers sitting underneath someone's desk, or a PC in a closet somewhere running a critical production application. So we're seeing the enterprise have a much bigger say at the table, a much louder voice at the table, to say, "We've been doing this in the enterprise all the time. We know how to really consolidate, bring hyper-converged applications and hyper-converged infrastructure, to really accelerate these kinds of applications. Really accelerate the outcomes that are needed to really drive that smart factory." And start to bring those same capabilities down onto the factory floor. That way, you do it once to make it easier to implement, you can repeat that, you can scale that, and you can manage it much more easily. And you can then bring that all together, because you have the security in one centralized location. So we're seeing manufacturers... Yeah, that first use case may be fairly difficult to implement, and we've got to go down in and see exactly what their problems are. But when the infrastructure is done the correct way, when you think about how you're going to run it and how you're going to optimize the engineering, well, let's take what you've done in that one factory and make that repeatable across all the factories, not just the factory that we're in, but across the globe. That makes it much, much easier. You really do the hard work once and then repeat, almost like a cookie cutter. >> Got it, thank you. Aditi, what about the skillsets available to apply to these projects? You've got to have knowledge of digital, AI, data, integration. Is there a talent shortage to get all this stuff done? >> Yeah, I mean, definitely. Different types of skillsets are needed from a traditional manufacturing skillset, right. Of course, the basic knowledge of manufacturing is important.
But the digital skillsets, like, you know, IoT. Having a skillset in different protocols for connecting the machines, right, and the experience that comes with it. Data and analytics, security, augmented and virtual reality, programming. You know, again, looking at robotics and the digital twin. So, you know, it's a lot more connectivity, software and data-driven skillsets that are needed to bring smart factory to life at scale. And, you know, lots of firms are, you know, recruiting these types of resources with these skillsets to, you know, accelerate their smart factory implementations, as are consulting firms like DXC Technology and others. We recruit and we train our talent to provide these services. >> Got it. Aditi, I wonder if we could stay on you. Let's talk about the partnership between DXC and Dell. What are you doing specifically to simplify the move to Industry 4.0 for customers? What solutions are you offering? How are you working together, Dell and DXC, to bring these to market? >> Yeah. Dell and DXC have a very strong partnership, you know, and we work very closely together to create solutions, to create strategies, and on how we are going to jointly help our clients, right. So one area where we have worked closely together is edge compute, right, and how that impacts the smart factory. So we have worked pretty closely in that area. We've also looked at vision technologies, you know. How do we use those at the edge to improve the quality of products, right? So we have several areas that we collaborate in, and our approach is that we want to bring solutions to our clients, as well as help them scale those solutions with the right infrastructure, the right talent, and the right level of security. So we bring a comprehensive solution to our clients. >> So, Todd, last question. Kind of similar, but different. You know, why Dell and DXC? Pitch me. What's different about this partnership?
You know, where are you confident that, you know, you're going to deliver the best value to customers? >> Absolutely. Great question. You know, there's no shortage of bespoke solutions that are out there. There's hundreds of people that can come in and do individual use cases and do these things, and that's where it ends. What Dell and DXC Technology together bring to the table is that we do the optimization of the engineering of those previously bespoke solutions upfront, together. Right. The power of our scalable, enterprise-grade, structured, you know, industry-standard infrastructure, as well as our expertise in delivering packaged solutions that really accelerate, with DXC's expertise and reputation as a global trusted advisor. Being able to really scale and repeat those solutions is what DXC is so really, really good at. And Dell's infrastructure, and our, what, 30,000 people across the globe that are really, really good at that scalable infrastructure, to be able to repeat. And then it really lessens the risk that our customers have and really accelerates those solutions. So it's, again, not just one individual solution. It's all of the solutions that not just drive use cases, but drive outcomes with those solutions. >> Yeah, you're right. The partnership has gone... I mean, I first encountered it back in, I think it was 2010, May of 2010. We had you guys both on theCUBE... I think we were talking about converged infrastructure, and I had a customer on, and it was actually a manufacturing customer. It was quite interesting. And back then it was, how do we kind of replicate what's coming in the cloud? And you guys have obviously taken it into the digital world. Really want to thank you for your time today. Great conversation. And love to have you back. >> Thank you so much. >> Absolutely. >> It was a pleasure speaking with you. >> I agree. >> All right, keep it right there for more discussions that educate and inspire on theCUBE.

Published Date : Feb 9 2023


Jeff Boudreau and Travis Vigil, Dell


 

(bright music) >> Okay, we're back, with Jeff and Travis Vigil, to dig deeper into the news. Guys, again, good to see you. Travis, if you could, maybe before we get into the news, can you set the business context for us? What's going on out there? >> Yeah, thanks for that question, Dave. To set a little bit of the context, when you look at the data protection market, Dell has been a leader in providing solutions to customers for going on nearly two decades now. We have tens of thousands of people using our appliances. We have multiple thousands of people using our latest, modern, simple PowerProtect Data Manager software. And as Jeff mentioned, we have, you know, 1700 customers protecting 14 exabytes of data in the public clouds today. And that foundation gives us a unique vantage point. We talk to a lot of customers, and they're really telling us three things. They want simple solutions, they want us to help them modernize, and they want us, as the highest priority, to maintain that high degree of resiliency that they expect from our data protection solutions. So that's the backdrop to the news today. And as we go through the news, I think you'll agree that each of these announcements delivers on those pillars. And in particular, today we're announcing the PowerProtect Data Manager Appliance. We are announcing PowerProtect Cyber Recovery enhancements, and we are announcing enhancements to our APEX Data Storage Services. >> Okay, so three pieces. Let's dig into that. It's interesting: appliances. Everybody wants software, but then you talk to customers and they're like, "Well, we actually want appliances, because we just want to put it in and it works, and performs great." So what do we need to know about the appliance? What's the news there? >> Well, you know, part of the reason I gave you some of those stats to begin with is that we have this strong foundation of experience, but also intellectual property.
These are components that we've taken, that have been battle-tested in the market, and we've put them together in a new, simple, integrated appliance that really combines the best of the target appliance capabilities we have with that modern, simple software. And we've integrated it, you know, sort of taking all of those pieces and putting them together in a simple, easy-to-use and easy-to-scale interface for customers. >> So the premise that I've been putting forth for, you know, months now, probably well over a year, is that data protection is becoming an extension of your cybersecurity strategy. So I'm interested in your perspective on Cyber Recovery and the specific news that you have there. >> Yeah, you know, in addition to simplifying things via the appliance, we are providing solutions for customers no matter where they're deploying. And Cyber Recovery, especially when it comes to cloud deployments, is an increasing area of interest and deployment that we see with our customers. So what we're announcing today is that we're expanding our Cyber Recovery services to be available in Google Cloud. With this announcement, it means we're available in all three of the major clouds. And it really provides customers the flexibility to secure their data no matter if they're running, you know, on premises, in a colo, at the edge, or in the public cloud. And the other nice thing about this announcement is that you have the ability to use Google Cloud as a Cyber Recovery vault. That really allows customers to isolate critical data, and they can recover that critical data from the vault back to on-premises, or from that vault back to running their cyber protection, or their data protection solutions, in the public cloud. >> I always invoke my favorite Matt Baker line here: it's not a zero-sum game. But this is a perfect example where there's opportunities for a company like Dell to partner with the public cloud provider. You've got capabilities that don't exist there.
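The vault idea Travis describes, isolating copies of critical data so they can later be recovered back to on-premises or to a cloud deployment, can be illustrated with a toy model. This is a hypothetical sketch of the concept only, not Dell's Cyber Recovery implementation: data can only be pushed in while a replication window is open, stored copies are kept immutable, and restores can target either location.

```python
class CyberVault:
    """Toy model of an isolated recovery vault (illustrative only)."""

    def __init__(self):
        self.window_open = False
        self._copies = {}  # name -> immutable byte copy

    def open_window(self):
        self.window_open = True

    def close_window(self):
        self.window_open = False

    def replicate(self, name, payload):
        # Production can only push data in while the replication window is open;
        # the rest of the time the vault is effectively isolated.
        if not self.window_open:
            raise PermissionError("vault is isolated: replication window closed")
        self._copies[name] = bytes(payload)

    def restore(self, name, target):
        # Recovery works toward either target the interview mentions:
        # back to on-premises, or back to a data protection stack in the cloud.
        return target, self._copies[name]

vault = CyberVault()
vault.open_window()
vault.replicate("erp-db", b"critical data")
vault.close_window()
print(vault.restore("erp-db", "on-prem"))
```

The design point the toy captures is that isolation limits when an attacker can reach the copies, while recovery remains possible at any time.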
You've got the on-prem capabilities. We could talk about edge all day, but that's a different topic. Okay, so my other question, Travis, is how does this all fit into APEX? We hear a lot about APEX as a service; it's sort of the new hot thing. What's happening there? What's the news around APEX? >> Yeah, we've seen incredible momentum with our APEX solutions since we introduced data protection options into them earlier this year. And we're really building on that momentum with this announcement, you know, providing solutions that allow customers to consume flexibly. And so what we're announcing specifically is that we're expanding APEX Data Storage Services to include a data protection option. And like with all APEX offers, it's a pay-as-you-go solution. It really streamlines the process of customers purchasing, deploying, maintaining and managing their backup software. All a customer really needs to do is, you know, specify their base capacity, specify their performance tier, and tell us whether they want a one-year term or a three-year term, and we take it from there. We get them up and running so they can start deploying and consuming flexibly. And as with many of our APEX solutions, it's a simple user experience, all exposed through a unified APEX console. >> Okay, so you're keeping it simple, like, I think, large, medium, small, you know, we hear a lot about T-shirt sizes. I'm a big fan of that, 'cause you guys should be smart enough to figure out, you know, based on my workload, what I need. How different is this? I wonder if you guys could address this. Jeff, maybe you can- >> So, I'll start, and then, Travis, you jump in when I screw up here, so... >> Awesome. >> So first I'd say we offer innovative multi-cloud data protection solutions. We provide solutions that deliver the performance, efficiency and scale that our customers demand and require. We support, as Travis said, all the major public clouds.
We have a broad ecosystem of workload support, and I guess the great news is we're up to 80% more cost-effective than any of the competition. >> 80%? >> 80%. >> That's a big number. Travis, what's your point of view on this? >> Yeah, I think, number one, end-to-end data protection. We are that one-stop shop that I talked about. Whether it's a simplified appliance, whether it's deployed in the cloud, whether it's at the edge, whether it's integrated appliances, target appliances, or software, we have solutions that span the gamut, as a service. I mentioned the APEX solution as well. So really, we can provide solutions that help support customers and protect them, any workload, any cloud, anywhere that data lives, edge, core to cloud. The other thing that we hear, as a big differentiator for Dell, and Jeff touched on this a little bit earlier, is our intelligent cyber resiliency. We have a unique combination in the market where we can offer immutability, or protection against deletion, as sort of that first line of defense. But we can also offer a second level of defense, which is isolation, talking about data vaults or cyber vaults and Cyber Recovery. And more importantly, the intelligence that goes around that vault. It can look at detecting cyber-attacks, it can help customers speed time to recovery, and it really provides AI and ML to help with early diagnosis of a cyber-attack and fast recovery should a cyber-attack occur. And you know, if you look at customer adoption of that solution, specifically in the clouds, we have over 1300 customers utilizing PowerProtect Cyber Recovery. >> So I think it's fair to say that your portfolio has obviously been a big differentiator. Whenever I talk to, you know, your finance team, Michael Dell, et cetera, it's that end-to-end capability, your ability to manage throughout the supply chain. We actually just did an event recently with you guys where you went into what you're doing to make infrastructure trusted.
And so my take on that is, in a lot of respects, you're shifting, you know, the client's burden to your R&D. Now, they still have a lot of work to do, so it's not like they can go home and just relax, but that's a key part of the partnership that I see. Jeff, I wonder if you could give us the final thoughts. >> Sure. Dell has a long history of being a trusted partner within IT, right? So we have unmatched capabilities. Going back to your point, we have the broadest portfolio; we're a leader in every category that we participate in, and we have a broad and deep portfolio. We have scale, we have innovation that is just unmatched. Within data protection itself, we are the trusted market leader, no ifs, ands or buts. We're number one for both data protection software and appliances per IDC. And we were just named, for the 17th consecutive time, the leader in the Gartner Magic Quadrant. So the bottom line is customers can count on Dell. >> Yeah. And I think, again, we're seeing the evolution of data protection. It's not like the last 10 years; it's really becoming an adjacency and really a key component of your cyber strategy. I think those two parts of the organization are coming together. So guys, really appreciate your time. Thanks for (indistinct). >> Thank you, sir. Thanks, Travis. Good to see you. All right, in a moment I'm going to come right back and summarize what we learned today and what actions you can take for your business. You're watching "The Future of Multi-cloud Data Protection," made possible by Dell in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. Right back. (upbeat music) >> In our data-driven world, protecting data has never been more critical. To guard against everything from cyber incidents to unplanned outages, you need a cyber-resilient multi-cloud data protection strategy. >> It's not a matter of if you're going to get hacked, it's a matter of when.
And I want to know that I can recover and continue to recover each day. >> It is important to have a cyber security and a cyber resiliency plan in place, because the threat of cyber-attack are imminent. >> PowerProtects Data manager from Dell Technologies helps deliver the data protection and security confidence you would expect from a trusted we chose PowerProtect Data Manager because we've been on strategic partner with Dell Technologies, for roughly 20 years now. Our partnership with Dell Technologies has provided us with the ability to scale, and grow as we've transition from 10 billion in assets to 20 billion. >> With PowerProtect Data Manager, you can enjoy exceptional ease of use to increase your efficiency and reduce costs. >> Got installed it by myself, learn it by myself, with very intuitive >> While restoring a machine with PowerProtect Data Manager is fast. We can fully manage PowerProtect through the center. We can recover a whole machine in seconds. >> Data Manager offers innovation such as Transparent Snapshots to simplify virtual machine backups and it goes beyond backup and restore to provide valuable insights and to protected data, workloads and VMs. >> In our previous environment, it would take anywhere from three to six hours a night to do a single backup of each VM. Now we're backing up hourly and it takes two to three seconds with the Transparent Snapshots. >> With PowerProtect's Data Manager, you get the peace of mind knowing that your data is safe and available whenever you need it. >> Data is extreme important. We can't afford to lose any data. We need things just to work. >> Start your journey to modern data protection with Dell PowerProtect Data Manager. Visit dell.com/powerprotectdatamanager. >> We put forth the premise in our introduction that the worlds of data protection and cyber security must be more integrated. 
We said that data recovery strategies have to be built into security practices and procedures and by default, this should include modern hardware and software. Now in addition, to reviewing some of the challenges that customers face, which have been pretty well documented, we heard about new products that Dell Technologies is bringing to the marketplace. Specifically, address these customer concerns. There were three that we talked about today. First, the PowerProtect Data Manager Appliance, which is an integrated system. Taking advantage of Dell's history in data protection but adding new capabilities. And I want to come back to that in a moment. Second is Dell's PowerProtect Cyber Recovery for Google Cloud platform. This rounds out the big three public cloud providers for Dell, which joins AWS and Azure support. Now finally, Dell has made its target backup appliances available in APEX. You might recall earlier this year, we saw the introduction from Dell of APEX backup services. And then in May at Dell Technologies World, we heard about the introduction of APEX Cyber Recovery Services. And today, Dell is making its most popular backup appliances available in APEX. Now I want to come back to the PowerProtect Data Manager Appliance because it's a new integrated appliance. And I asked Dell off camera, really, what is so special about these new systems and what's really different from the competition because look, everyone offers some kind of integrated appliance. So I heard a number of items Dell talked about simplicity and efficiency and containers and Kubernetes. So I kind of kept pushing and got to what I think is the heart of the matter in two really important areas. One is simplicity. Dell claims that customers can deploy the system in half the time relative to the competition. So we're talking minutes to deploy and of course, that's going to lead to much simpler management. 
And the second real difference I heard, was backup and restore performance for VMware workloads. In particular, Dell has developed transparent snapshot capabilities to fundamentally change the way VMs are protected which leads to faster backup and restores with less impact on virtual infrastructure. Dell believes this new development is unique in the market, and claims that in its benchmarks, the new appliance was able to back up 500 virtual machines in 47% less time compared to a leading competitor. Now this is based on Dell benchmarks so hopefully these are things that you can explore in more detail with Dell to see if and how they apply to your business. So if you want more information go to the Data Protection page at Dell.com. You can find that at dell.com/dataprotection. And all the content here and all the videos are available on demand at thecube.net. Check out our series, on the blueprint for trusted infrastructure it's related and has some additional information. And go to siliconangle.com for all the news and analysis related to these and other announcements. This is Dave Vellante. Thanks for watching "The Future of Multi-cloud Protection." Made possible by Dell in collaboration with the Cube your leader in enterprise and emerging tech coverage. (upbeat music)

Published Date : Nov 17 2022



Dell Technologies | The Future of Multicloud Data Protection is Here 11-14


 

>>Prior to the pandemic, organizations were largely optimized for efficiency as the best path to bottom-line profits. Many CIOs tell the Cube privately that they were caught off guard by the degree to which their businesses required greater resiliency beyond their somewhat cumbersome disaster recovery processes. And the lack of that business resilience has actually cost firms, because they were unable to respond to changing market forces. And certainly we've seen this dynamic with supply chain challenges, and there's little doubt we're also seeing it in the area of cybersecurity generally, and data recovery specifically. Over the past 30-plus months, the rapid adoption of cloud to support remote workers and build in business resilience had the unintended consequence of expanding attack vectors, which brought an escalation of risk from cyber crime. While security in the public clouds is certainly world class, the reality of multi-cloud has brought with it multiple shared responsibility models, multiple ways of implementing security policies across clouds and on-prem. >>And at the end of the day, more, not less, complexity. But there's a positive side to this story. The good news is that public policy, industry collaboration and technology innovation are moving fast to accelerate data protection and cybersecurity strategies, with a focus on modernizing infrastructure, securing the digital supply chain, and, very importantly, simplifying the integration of data protection and cybersecurity. Today there's heightened awareness that the world of data protection is not only an adjacency to, but is becoming a fundamental component of, cybersecurity strategies. In particular, in order to build more resilience into a business, data protection people, technologies, and processes must be more tightly coordinated with security operations. Hello, and welcome to "The Future of Multi-Cloud Data Protection," made possible by Dell in collaboration with the Cube.
My name is Dave Ante and I'll be your host today. In this segment, we welcome into the cube, two senior executives from Dell who will share details on new technology announcements that directly address these challenges. >>Jeff Boudreau is the president and general manager of Dell's Infrastructure Solutions Group, isg, and he's gonna share his perspectives on the market and the challenges he's hearing from customers. And we're gonna ask Jeff to double click on the messages that Dell is putting into the marketplace and give us his detailed point of view on what it means for customers. Now, Jeff is gonna be joined by Travis Vhi. Travis is the senior Vice President of product management for ISG at Dell Technologies, and he's gonna give us details on the products that are being announced today and go into the hard news. Now, we're also gonna challenge our guests to explain why Dell's approach is unique and different in the marketplace. Thanks for being with us. Let's get right into it. We're here with Jeff Padre and Travis Behill. We're gonna dig into the details about Dell's big data protection announcement. Guys, good to see you. Thanks >>For coming in. Good to see you. Thank you for having us. >>You're very welcome. Right. Let's start off, Jeff, with the high level, you know, I'd like to talk about the customer, what challenges they're facing. You're talking to customers all the time, What are they telling you? >>Sure. As you know, we do, we spend a lot of time with our customers, specifically listening, learning, understanding their use cases, their pain points within their specific environments. They tell us a lot. Notice no surprise to any of us, that data is a key theme that they talk about. It's one of their most important, important assets. They need to extract more value from that data to fuel their business models, their innovation engines, their competitive edge. 
So they need to make sure that that data is accessible, it's secure in its recoverable, especially in today's world with the increased cyber attacks. >>Okay. So maybe we could get into some of those, those challenges. I mean, when, when you talk about things like data sprawl, what do you mean by that? What should people know? Sure. >>So for those big three themes, I'd say, you know, you have data sprawl, which is the big one, which is all about the massive amounts of data. It's the growth of that data, which is growing at an unprecedented rates. It's the gravity of that data and the reality of the multi-cloud sprawl. So stuff is just everywhere, right? Which increases that service a tax base for cyber criminals. >>And by gravity you mean the data's there and people don't wanna move it. >>It's everywhere, right? And so when it lands someplace, I think edge, core or cloud, it's there and that's, it's something we have to help our customers with. >>Okay, so just it's nuanced cuz complexity has other layers. What are those >>Layers? Sure. When we talk to our customers, they tell us complexity is one of their big themes. And specifically it's around data complexity. We talked about that growth and gravity of the data. We talk about multi-cloud complexity and we talk about multi-cloud sprawl. So multiple vendors, multiple contracts, multiple tool chains, and none of those work together in this, you know, multi-cloud world. Then that drives their security complexity. So we talk about that increased attack surface, but this really drives a lot of operational complexity for their teams. Think about we're lack consistency through everything. So people, process, tools, all that stuff, which is really wasting time and money for our customers. >>So how does that affect the cyber strategies and the, I mean, I've often said the ciso now they have this shared responsibility model, they have to do that across multiple clouds. 
Every cloud has its own security policies and frameworks and syntax. So maybe you could double-click on your perspective on that. >>Sure. I'd say the big challenge customers have seen is really inadequate cyber resiliency. And specifically, they're feeling very exposed. And today, with cyber attacks being more and more sophisticated, if something goes wrong, it is a real challenge for them to get back up and running quickly. And that's why this is such a big topic for CEOs and businesses around the world. >>You know, it's funny, I said this in my open: I think that prior to the pandemic, businesses were optimized for efficiency, and now they're like, wow, we have to actually put some headroom into the system to be more resilient. Are you hearing that? >>Yeah, we absolutely are. I mean, the customers really are asking us for help. It's one of the big things we're learning and hearing from them. And it's really about three things: one is about simplifying IT; two, it's really helping them to extract more value from their data; and the third big piece is ensuring their data is protected and recoverable regardless of where it is, going back to that data gravity and the multi-cloud world. Just recently, I don't know if you've seen it, but the Global Data Protection Index, the GDPI... >>Yes. Not to be confused with GDPR. >>Actually, that was released today, and it confirms everything we just talked about around customer challenges, but it also highlights the importance of having a very robust, cyber-resilient data protection strategy. >>Yeah, I haven't seen the latest, but I want to dig into it. You've done this many years in a row, and I like to look at the time series and see how things have changed. All right.
At a high level, Jeff, can you address why Dell, from your point of view, is best suited? >>Sure. So we believe there's a better way, or a better approach, on how to handle this. We think Dell is uniquely positioned to help our customers as a one-stop shop, if you will, for their cyber resilient multi-cloud data protection solutions and needs. We take a modern, a simple and a resilient approach. >>What does that mean? What do you mean by modern? >>Sure. So modern: we talk about our software-defined architecture, right? It's really designed to meet the needs not only of today, but really into the future. And we protect data across any cloud and any workload. So we have a proven track record doing this today. We have more than 1700 customers that trust us to protect more than 14 exabytes of their data in the cloud today. >>Okay, so you said modern, simple and resilient. What do you mean by simple? >>Sure. We wanna provide simplicity everywhere, going back to helping with the complexity challenge, and that's from deployment to consumption to management and support. So our offers will deploy in minutes. They are easy to operate and use, and we support flexible consumption models for whatever the customer may desire, so traditional subscription or as a service. >>And when you talk about resilient, I mean, I put forth that premise, but it's hard, because people say, well, that's gonna cost us more. Well, it may, but you're gonna also reduce your risk. So what's your point of view on resilience? >>Yeah, I think it's something all customers need. So we're gonna be providing a comprehensive and resilient portfolio of cyber solutions that are secure by design. We have some unique capabilities and a combination of things like built-in immutability, and physical and logical isolation. We have intelligence built in with AI-powered recovery.
And just one, I guess, fun fact for everybody: our cyber vault is the only solution in the industry that is endorsed by Sheltered Harbor, and it meets all the needs of the financial sector. >>So it's interesting: when you think about the NIST framework for cybersecurity, it's all about layers. You're sort of bringing that now to data protection, correct? >>Yeah. >>All right. In a minute we're gonna come back with Travis and dig into the news. We're gonna take a short break. Keep it right there. Okay, we're back with Jeff and Travis Vigil to dig deeper into the news. Guys, again, good to see you. Travis, if you could, maybe before we get into the news, can you set the business context for us? What's going on out there? >>Yeah, thanks for that question, Dave. To set a little bit of the context, when you look at the data protection market, Dell has been a leader in providing solutions to customers for going on nearly two decades now. We have tens of thousands of people using our appliances. We have multiple thousands of people using our latest, modern, simple PowerProtect Data Manager software. And as Jeff mentioned, we have 1700 customers protecting 14 exabytes of data in the public clouds today. And that foundation gives us a unique vantage point. We talk to a lot of customers, and they're really telling us three things. They want simple solutions, they want us to help them modernize, and they want us, as the highest priority, to maintain that high degree of resiliency that they expect from our data protection solutions. So that's the backdrop to the news today. And as we go through the news, I think you'll agree that each of these announcements delivers on those pillars. In particular, today we're announcing the PowerProtect Data Manager Appliance. We are announcing PowerProtect Cyber Recovery enhancements, and we are announcing enhancements to our APEX Data Storage Services. >>Okay, so three pieces.
Let's dig into that. It's interesting, appliance: everybody wants software, but then you talk to customers and they're like, well, we actually want appliances, because we just wanna put it in and it works, right? It performs great. So what do we need to know about the appliance? What's the news there? >>Well, you know, part of the reason I gave you some of those stats to begin with is that we have this strong foundation of experience, but also intellectual property, components that we've taken, that have been battle-tested in the market, and we've put them together in a new, simple, integrated appliance that really combines the best of the target appliance capabilities we have with that modern, simple software. And we've integrated it, you know, sort of taking all of those pieces and putting them together in a simple, easy-to-use and easy-to-scale interface for customers. >>So the premise that I've been putting forth for, you know, months now, probably well over a year, is that data protection is becoming an extension of your cybersecurity strategies. So I'm interested in your perspective on cyber recovery, the specific news that you have there. >>Yeah, you know, in addition to simplifying things via the appliance, we are providing solutions for customers no matter where they're deploying. And cyber recovery, especially when it comes to cloud deployments, is an increasing area of interest and deployment that we see with our customers. So what we're announcing today is that we're expanding our cyber recovery services to be available in Google Cloud. With this announcement, it means we're available in all three of the major clouds, and it really provides customers the flexibility to secure their data no matter if they're running, you know, on premises, in a colo, at the edge, or in the public cloud.
And the other nice thing about this announcement is that you have the ability to use Google Cloud as a cyber recovery vault. That really allows customers to isolate critical data, and they can recover that critical data from the vault back to on premises, or from that vault back to running their cyber protection or their data protection solutions in the public cloud. >>I always invoke my favorite Matt Baker here: it's not a zero-sum game. But this is a perfect example where there are opportunities for a company like Dell to partner with the public cloud provider. You've got capabilities that don't exist there. You've got the on-prem capabilities. We can talk about edge all day, but that's a different topic. Okay, so my other question, Travis, is how does this all fit into APEX? We hear a lot about APEX as a service; it's sort of the new hot thing. What's happening there? What's the news around APEX? >>Yeah, we've seen incredible momentum with our APEX solutions since we introduced data protection options into them earlier this year, and we're really building on that momentum with this announcement, you know, providing solutions that allow customers to consume flexibly. And so what we're announcing specifically is that we're expanding APEX Data Storage Services to include a data protection option. And as with all APEX offers, it's a pay-as-you-go solution that really streamlines the process of customers purchasing, deploying, maintaining and managing their backup software. All a customer really needs to do is specify their base capacity, specify their performance tier, and tell us whether they want a one-year term or a three-year term, and we take it from there. We get them up and running so they can start deploying and consuming flexibly. And as with many of our APEX solutions, it's a simple user experience, all exposed through a unified APEX console. >>Okay.
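The three inputs just described for the APEX data protection option (base capacity, performance tier, term length) can be sketched as a small, validated order record. This is a hypothetical illustration: the field names, units, and tier labels are assumptions, not Dell's actual ordering interface.

```python
# Hypothetical sketch of the APEX backup order inputs described above.
# Names, units, and tier labels are invented for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class ApexBackupSubscription:
    base_capacity_tb: int     # assumed unit; the transcript only says "base capacity"
    performance_tier: str     # assumed labels, e.g. "standard" or "performance"
    term_years: int           # the transcript mentions one-year or three-year terms

    def __post_init__(self) -> None:
        if self.base_capacity_tb <= 0:
            raise ValueError("base capacity must be positive")
        if self.term_years not in (1, 3):
            raise ValueError("term must be one or three years")

# Example order: 100 TB of base capacity on a three-year term.
order = ApexBackupSubscription(base_capacity_tb=100,
                               performance_tier="standard",
                               term_years=3)
```

The point of the validation is the same simplicity argument made in the interview: with only three constrained choices, there is very little a customer can get wrong at purchase time.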
So you're keeping it simple, like, I think, large, medium, small. We hear a lot about t-shirt sizes. I'm a big fan of that, 'cause you guys should be smart enough to figure out, based on my workload, what I need. How different is this? I wonder if you guys could address this. Jeff, maybe you can start. >>Sure, I'll start, and then, Travis, you jump in when I screw up here. So, awesome. So first I'd say we offer innovative multi-cloud data protection solutions that deliver the performance, efficiency and scale that our customers demand and require. We support, as Travis said, all the major public clouds. We have a broad ecosystem of workload support, and I guess the great news is we're up to 80% more cost-effective than any of the competition. >>80%? That's a big number, right? Travis, what's your point of view on this? >>Yeah.
It can look at detecting cyber attacks, it can help customers speed time to recovery and really provides AI and ML to help early diagnosis of a cyber attack and fast recovery should a cyber attack occur. And, and you know, if you look at customer adoption of that solution specifically in the clouds, we have over 1300 customers utilizing power protect cyber recovery. >>So I think it's fair to say that your, I mean your portfolio has obvious been a big differentiator whenever I talk to, you know, your finance team, Michael Dell, et cetera, that end to end capability that that, that your ability to manage throughout the supply chain. We actually just did a a, an event recently with you guys where you went into what you're doing to make infrastructure trusted. And so my take on that is you, in a lot of respects, you're shifting, you know, the client's burden to your r and d now they have a lot of work to do, so it's, it's not like they can go home and just relax, but, but that's a key part of the partnership that I see. Jeff, I wonder if you could give us the, the, the final thoughts. >>Sure. Dell has a long history of being a trusted partner with it, right? So we have unmatched capabilities. Going back to your point, we have the broadest portfolio, we have, you know, we're a leader in every category that we participate in. We have a broad deep breadth of portfolio. We have scale, we have innovation that is just unmatched within data protection itself. We have the trusted market leader, no, if and or buts, we're number one for both data protection software in appliances per idc and we would just name for the 17th consecutive time the leader in the, the Gartner Magic Quadrant. So bottom line is customers can count on Dell. >>Yeah, and I think again, we're seeing the evolution of, of data protection. It's not like the last 10 years, it's really becoming an adjacency and really a key component of your cyber strategy. 
I think those two parts of the organization are coming together. So guys, really appreciate your time. Thanks for Thank you sir. Thanks Travis. Good to see you. All right, in a moment I'm gonna come right back and summarize what we learned today, what actions you can take for your business. You're watching the future of multi-cloud data protection made possible by Dell and collaboration with the cube, your leader in enterprise and emerging tech coverage right back >>In our data driven world. Protecting data has never been more critical to guard against everything from cyber incidents to unplanned outages. You need a cyber resilient, multi-cloud data protection strategy. >>It's not a matter of if you're gonna get hacked, it's a matter of when. And I wanna know that I can recover and continue to recover each day. >>It is important to have a cyber security and a cyber resiliency plan in place because the threat of cyber attack are imminent. >>Power protects. Data manager from Dell Technologies helps deliver the data protection and security confidence you would expect from a trusted partner and market leader. >>We chose Power Protect Data Manager because we've been a strategic partner with Dell Technologies for roughly 20 years now. Our partnership with Dell Technologies has provided us with the ability to scale and grow as we've transitioned from 10 billion in assets to 20 billion. >>With Power Protect Data Manager, you can enjoy exceptional ease of use to increase your efficiency and reduce costs. >>Got installed it by myself, learned it by myself with very intuitive >>While restoring a machine with Power Protect Data Manager is fast. We can fully manage Power Protect through the center. We can recover a whole machine in seconds. >>Data Manager offers innovation such as Transparent snapshots to simplify virtual machine backups and it goes beyond backup and restore to provide valuable insights and to protected data workloads and VMs. 
In our previous environment, it would take anywhere from three to six hours a night to do a single backup of each VM. Now we're backing up hourly, and it takes two to three seconds with the transparent snapshots. >>With PowerProtect Data Manager, you get the peace of mind knowing that your data is safe and available whenever you need it. >>Data is extremely important. We can't afford to lose any data. We need things just to work. >>Start your journey to modern data protection with Dell PowerProtect Data Manager. Visit dell.com/powerprotectdatamanager. >>We put forth the premise in our introduction that the worlds of data protection and cybersecurity must be more integrated. We said that data recovery strategies have to be built into security practices and procedures and, by default, this should include modern hardware and software. Now, in addition to reviewing some of the challenges that customers face, which have been pretty well documented, we heard about new products that Dell Technologies is bringing to the marketplace that specifically address these customer concerns. There were three that we talked about today. First, the PowerProtect Data Manager Appliance, which is an integrated system taking advantage of Dell's history in data protection but adding new capabilities. And I want to come back to that in a moment. Second is Dell's PowerProtect Cyber Recovery for Google Cloud Platform. This rounds out the big three public cloud providers for Dell, which joins AWS and Azure support.
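The customer quote above, moving from a nightly three-to-six-hour backup job to hourly snapshot-based backups, translates directly into a smaller worst-case data-loss window (RPO). A rough, illustrative calculation, assuming the simplification that a failure can land just before the next backup completes:

```python
# Back-of-the-envelope RPO comparison using the figures quoted above.
# Modeling worst-case loss as interval + backup duration is a
# simplification for illustration, not a formal RPO definition.
def max_data_loss_hours(interval_hours: float, backup_hours: float) -> float:
    """Worst-case age of the newest restorable point when a failure hits."""
    return interval_hours + backup_hours

nightly = max_data_loss_hours(24.0, 6.0)     # one nightly job, up to 6 hours
hourly = max_data_loss_hours(1.0, 3 / 3600)  # hourly, ~3-second snapshots
print(f"worst case: {nightly:.0f}h vs {hourly:.2f}h")
```

Under these assumed numbers the exposure window shrinks by roughly a factor of 30, which is the practical meaning of "backing up hourly" in the testimonial.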
And I asked Dell off camera really what is so special about these new systems and what's really different from the competition because look, everyone offers some kind of integrated appliance. So I heard a number of items, Dell talked about simplicity and efficiency and containers and Kubernetes. So I kind of kept pushing and got to what I think is the heart of the matter in two really important areas. One is simplicity. >>Dell claims that customers can deploy the system in half the time relative to the competition. So we're talking minutes to deploy and of course that's gonna lead to much simpler management. And the second real difference I heard was backup and restore performance for VMware workloads. In particular, Dell has developed transparent snapshot capabilities to fundamentally change the way VMs are protected, which leads to faster backup and restores with less impact on virtual infrastructure. Dell believes this new development is unique in the market and claims that in its benchmarks the new appliance was able to back up 500 virtual machines in 47% less time compared to a leading competitor. Now this is based on Dell benchmarks, so hopefully these are things that you can explore in more detail with Dell to see if and how they apply to your business. So if you want more information, go to the data protectionPage@dell.com. You can find that at dell.com/data protection. And all the content here and other videos are available on demand@thecube.net. Check out our series on the blueprint for trusted infrastructure, it's related and has some additional information. And go to silicon angle.com for all the news and analysis related to these and other announcements. This is Dave Valante. Thanks for watching the future of multi-cloud protection made possible by Dell in collaboration with the Cube, your leader in enterprise and emerging tech coverage.

Published Date : Nov 17 2022

SUMMARY :

And the lack of that business And at the end of the day, more, not less complexity, Jeff Boudreau is the president and general manager of Dell's Infrastructure Solutions Group, Good to see you. Let's start off, Jeff, with the high level, you know, I'd like to talk about the So they need to make sure that that data data sprawl, what do you mean by that? So for those big three themes, I'd say, you know, you have data sprawl, which is the big one, which is all about the massive amounts it's something we have to help our customers with. Okay, so just it's nuanced cuz complexity has other layers. We talked about that growth and gravity of the data. So how does that affect the cyber strategies and the, And today as the world with cyber tax being more and more sophisticated, You know, it's funny, I said this in my open, I, I think that prior to the pandemic businesses that very, you know, the multi-cloud world just recently, I don't know if you've seen it, but the global data protected, Not to be confused with gdpr, Actually that was released today and confirms everything we just talked about around customer challenges, At, at a high level, Jeff, can you kind of address why Dell and from your point of We think Dell is uniquely positioned to help our customers as a one stop shop, if you will, It's really designed to meet the needs What, what do you mean by simple? We wanna provide simplicity everywhere, going back to helping with the complexity challenge, and that's from deployment So what's your point of view on resilience? Harbor that meets all the needs of the financial sector. So it's interesting when you think about the, the NIST framework for cybersecurity, it's all about about layers. 
And as Jeff mentioned, we have, you know, 1700 customers protecting 14 exabytes but then you talk to customers and they're like, Well, we actually want appliances because we just wanna put it in and it works, You know, part of the reason I gave you some of those stats to begin with is that we have this strong foundation of, So the premise that I've been putting forth for, you know, months now, probably well, well over a year, is an increasing area of interest and deployment that we see with our customers. it's sort of the new hot thing. All a customer really needs to do is, you know, specify their base capacity, I I'm a big fan of that cuz you guys should be smart enough to figure out, you know, based on my workload, We support as Travis and all the major public clouds. Travis, what's your point of view on of that solution specifically in the clouds, So I think it's fair to say that your, I mean your portfolio has obvious been a big differentiator whenever I talk to, We have the trusted market leader, no, if and or buts, we're number one for both data protection software in what we learned today, what actions you can take for your business. Protecting data has never been more critical to guard against that I can recover and continue to recover each day. It is important to have a cyber security and a cyber resiliency Data manager from Dell Technologies helps deliver the data protection and security We chose Power Protect Data Manager because we've been a strategic partner with With Power Protect Data Manager, you can enjoy exceptional ease of use to increase your efficiency We can fully manage Power Data Manager offers innovation such as Transparent snapshots to simplify virtual Now we're backing up hourly and it takes two to three seconds with the transparent With Power Protects Data Manager, you get the peace of mind knowing that your data is safe and available We need things just to work. 
Start your journey to modern data protection with Dell Power Protect Data manager. We put forth the premise in our introduction that the worlds of data protection in cybersecurity So I kind of kept pushing and got to what I think is the heart of the matter in two really Dell claims that customers can deploy the system in half the time relative to the


The Future of Multicloud Data Protection is Here FULL EPISODE V3


 

>>Prior to the pandemic, organizations were largely optimized for efficiency as the best path to bottom line profits. Many CIOs tell theCUBE privately that they were caught off guard by the degree to which their businesses required greater resiliency beyond their somewhat cumbersome disaster recovery processes. And the lack of that business resilience has actually cost firms, because they were unable to respond to changing market forces. And certainly we've seen this dynamic with supply chain challenges, and there's little doubt we're also seeing it in the area of cybersecurity generally, and data recovery specifically. Over the past 30 plus months, the rapid adoption of cloud to support remote workers and build in business resilience had the unintended consequence of expanding attack vectors, which brought an escalation of risk from cybercrime. While security in the public clouds is certainly world class, the result of multicloud has brought with it multiple shared responsibility models, multiple ways of implementing security policies across clouds and on-prem, and at the end of the day, more, not less, complexity. But there's a positive side to this story. The good news is that public policy, industry collaboration and technology innovation are moving fast to accelerate data protection and cybersecurity strategies, with a focus on modernizing infrastructure, securing the digital supply chain, and very importantly, simplifying the integration of data protection and cybersecurity. Today there's heightened awareness that the world of data protection is not only an adjacency to, but is becoming a fundamental component of, cybersecurity strategies. In particular, in order to build more resilience into a business, data protection people, technologies and processes must be more tightly coordinated with security operations. Hello, and welcome to "The Future of Multicloud Data Protection," made possible by Dell in collaboration with theCUBE.
My name is Dave Vellante and I'll be your host today. In this segment, we welcome into theCUBE two senior executives from Dell who will share details on new technology announcements that directly address these challenges. Jeff Boudreau is the president and general manager of Dell's Infrastructure Solutions Group, ISG, and he's gonna share his perspectives on the market and the challenges he's hearing from customers. And we're gonna ask Jeff to double click on the messages that Dell is putting into the marketplace and give us his detailed point of view on what it means for customers. Now Jeff is gonna be joined by Travis Vigil. Travis is the senior vice president of product management for ISG at Dell Technologies, and he's gonna give us details on the products that are being announced today and go into the hard news. Now, we're also gonna challenge our guests to explain why Dell's approach is unique and different in the marketplace. Thanks for being with us. Let's get right into it. We're here with Jeff Boudreau and Travis Vigil. We're gonna dig into the details about Dell's big data protection announcement. Guys, good to see you. Thanks for coming in. >>Good to see you. Thank you for having us. >>You're very welcome. Alright, let's start off, Jeff, with the high level. You know, I'd like to talk about the customer, what challenges they're facing. You're talking to customers all the time. What are they telling you? >>Sure. As you know, we spend a lot of time with our customers, specifically listening, learning, understanding their use cases, their pain points within their specific environments. They tell us a lot. No surprise to any of us, data is a key theme that they talk about. It's one of their most important assets. They need to extract more value from that data to fuel their business models, their innovation engines, their competitive edge.
So they need to make sure that that data is accessible, it's secure, and it's recoverable, especially in today's world with the increased cyber attacks. >>Okay. So maybe we could get into some of those challenges. I mean, when you talk about things like data sprawl, what do you mean by that? What should people know? >>Sure. So for those big three themes, I'd say, you know, you have data sprawl, which is the big one, which is all about the massive amounts of data. It's the growth of that data, which is growing at unprecedented rates. It's the gravity of that data and the reality of the multicloud sprawl. So stuff is just everywhere, right? Which increases that attack surface for cyber criminals. >>And by gravity you mean the data's there and people don't wanna move it. >>It's everywhere, right? And so when it lands someplace, think edge, core or cloud, it's there, and it's something we have to help our customers with. >>Okay, so it's nuanced, 'cause complexity has other layers. What are those layers? >>Sure. When we talk to our customers, they tell us complexity is one of their big themes, and specifically it's around data complexity. We talked about that growth and gravity of the data. We talk about multicloud complexity and we talk about multicloud sprawl. So multiple vendors, multiple contracts, multiple tool chains, and none of those work together in this, you know, multicloud world. Then that drives their security complexity. So we talk about that increased attack surface, but this really drives a lot of operational complexity for their teams. Think about a lack of consistency through everything: people, process, tools, all that stuff, which is really wasting time and money for our customers. >>So how does that affect the cyber strategies? I mean, I've often said the CISO, now they have this shared responsibility model, they have to do that across multiple clouds.
Every cloud has its own security policies and frameworks and syntax. So maybe you could double click on your perspective on that. >>Sure. I'd say the big challenge customers have seen is really inadequate cyber resiliency, and specifically, they're feeling very exposed. And today, with cyber attacks being more and more sophisticated, if something goes wrong it is a real challenge for them to get back up and running quickly, and that's why this is such a big topic for CEOs and businesses around the world. >>You know, it's funny, I said this in my open: I think that prior to the pandemic, businesses were optimized for efficiency, and now they're like, "Wow, we have to actually put some headroom into the system to be more resilient." Are you hearing that? >>Yeah, we absolutely are. I mean, the customers really are asking us for help, right? It's one of the big things we're learning and hearing from them. And it's really about three things: one is about simplifying IT; two, it's really helping them to extract more value from their data; and then the third big piece is ensuring their data is protected and recoverable regardless of where it is, going back to that data gravity in the multicloud world. Just recently, I don't know if you've seen it, but the Global Data Protection Index, the GDPI. >>Yes, not to be confused with GDPR. >>That was actually released today, and it confirms everything we just talked about around customer challenges, but it also highlights the importance of having a robust, cyber resilient data protection strategy. >>Yeah, I haven't seen the latest, but I want to dig into it. You've done this many years in a row; I like to look at the time series and see how things have changed. All right.
At a high level, Jeff, can you kind of address why Dell, from your point of view, is best suited? >>Sure. So we believe there's a better way, or a better approach, on how to handle this. We think Dell is uniquely positioned to help our customers, as a one stop shop, if you will, for the cyber resilient multicloud data protection solutions they need. We take a modern, a simple and a resilient approach. >>But what does that mean? What do you mean by modern? >>Sure. So modern: we talk about our software defined architecture, right? It's really designed to meet the needs not only of today, but really into the future. And we protect data across any cloud and any workload, and we have a proven track record doing this today: we have more than 1,700 customers that trust us to protect more than 14 exabytes of their data in the cloud today. >>Okay, so you said modern, simple and resilient. What do you mean by simple? >>Sure. We wanna provide simplicity everywhere, going back to helping with the complexity challenge, and that's from deployment to consumption to management and support. So our offers will deploy in minutes, they are easy to operate and use, and we support flexible consumption models for whatever the customer may desire, so traditional subscription or as a service. >>And when you talk about resilient, I mean, I put forth that premise, but it's hard, because people say, "Well, that's gonna cost us more." Well, it may, but you're gonna also reduce your risk. So what's your point of view on resilience? >>Yeah, I think it's something all customers need. So we're gonna be providing a comprehensive and resilient portfolio of cyber solutions that are secured by design. We have some unique capabilities in a combination of things like built-in immutability and physical and logical isolation.
We have intelligence built in with AI-powered recovery. And just one fun fact for everybody: our cyber vault is the only solution in the industry that is endorsed by Sheltered Harbor, meeting all the needs of the financial sector. >>So it's interesting: when you think about the NIST framework for cybersecurity, it's all about layers. You're sort of bringing that now to data protection, correct? >>Yeah. >>All right. In a minute we're gonna come back with Travis and dig into the news. We're gonna take a short break. Keep it right there. Okay, we're back with Jeff and Travis Vigil to dig deeper into the news. Guys, again, good to see you. Travis, if you could, before we get into the news, can you set the business context for us? What's going on out there? >>Yeah, thanks for that question, Dave. To set a little bit of the context, when you look at the data protection market, Dell has been a leader in providing solutions to customers for going on nearly two decades now. We have tens of thousands of people using our appliances. We have multiple thousands of people using our latest modern, simple Power Protect Data Manager software. And as Jeff mentioned, we have, you know, 1,700 customers protecting 14 exabytes of data in the public clouds today. And that foundation gives us a unique vantage point. We talk to a lot of customers, and they're really telling us three things. They want simple solutions, they want us to help them modernize, and they want us, as the highest priority, to maintain that high degree of resiliency that they expect from our data protection solutions. So that's the backdrop to the news today, and as we go through the news, I think you'll agree that each of these announcements delivers on those pillars. And in particular, today we're announcing the Power Protect Data Manager appliance.
We are announcing Power Protect Cyber Recovery enhancements, and we are announcing enhancements to our Apex Data Storage Services. >>Okay, so three pieces. Let's dig into that. It's interesting, appliance: everybody wants software, but then you talk to customers and they're like, "Well, we actually want appliances, because we just wanna put it in and it works." Right? Performs great. So what do we need to know about the appliance? What's the news there? >>Well, you know, part of the reason I gave you some of those stats to begin with is that we have this strong foundation of experience, but also intellectual property components that we've taken, that have been battle tested in the market, and we've put them together in a new simple, integrated appliance that really combines the best of the target appliance capabilities we have with that modern, simple software. And we've integrated it, you know, sort of taking all of those pieces, putting them together in a simple, easy to use and easy to scale interface for customers. >>So the premise that I've been putting forth for, you know, months now, probably well over a year, is that data protection is becoming an extension of your cybersecurity strategies. So I'm interested in your perspective on cyber recovery. You have specific news there? >>Yeah, you know, in addition to simplifying things via the appliance, we are providing solutions for customers no matter where they're deploying. And cyber recovery, especially when it comes to cloud deployments, is an increasing area of interest and deployment that we see with our customers. So what we're announcing today is that we're expanding our cyber recovery services to be available in Google Cloud with this announcement.
It means we're available in all three of the major clouds, and it really provides customers the flexibility to secure their data no matter if they're running, you know, on premises, in a colo, at the edge or in the public cloud. And the other nice thing about this announcement is that you have the ability to use Google Cloud as a cyber recovery vault. That really allows customers to isolate critical data, and they can recover that critical data from the vault back to on premises, or from that vault back to running their cyber protection or their data protection solutions in the public cloud. >>I always invoke my favorite Matt Baker here: it's not a zero sum game, but this is a perfect example where there's opportunities for a company like Dell to partner with the public cloud provider. You've got capabilities that don't exist there. You've got the on-prem capabilities. We could talk about edge all day, but that's a different topic. Okay, so my other question, Travis, is how does this all fit into Apex? We hear a lot about Apex as a service; it's sort of the new hot thing. What's happening there? What's the news around Apex? >>Yeah, we've seen incredible momentum with our Apex solutions since we introduced data protection options into them earlier this year, and we're really building on that momentum with this announcement, you know, providing solutions that allow customers to consume flexibly. And so what we're announcing specifically is that we're expanding Apex Data Storage Services to include a data protection option. And like with all Apex offers, it's a pay as you go solution that really streamlines the process of customers purchasing, deploying, maintaining and managing their backup software. All a customer really needs to do is, you know, specify their base capacity, they specify their performance tier, they tell us whether they want a one year term or a three year term, and we take it from there.
We get them up and running so they can start deploying and consuming flexibly. And as with many of our Apex solutions, it's a simple user experience, all exposed through a unified Apex console. >>Okay. So you're keeping it simple, like, I think, large, medium, small; you know, we hear a lot about t-shirt sizes. I'm a big fan of that, 'cause you guys should be smart enough to figure out, you know, based on my workload, what I need. How different is this? I wonder if you guys could address this. Jeff, maybe you can,
But we can also offer a second level of defense, which is isolation, talking, talking about data vaults or cyber vaults and cyber recovery. And the, at more importantly, the intelligence that goes around that vault. It can look at detecting cyber attacks, it can help customers speed time to recovery and really provides AI and ML to help early diagnosis of a cyber re attack and fast recovery should a cyber attack occur. And, and you know, if you look at customer adoption of that solution specifically in the clouds, we have over 1300 customers utilizing power protect cyber recovery. >>So I think it's fair to say that your, I mean your portfolio has obvious been a big differentiator whenever I talk to, you know, your finance team, Michael Dell, et cetera, that end to end capability that that, that your ability to manage throughout the supply chain. We actually just did a a, an event recently with you guys where you went into what you're doing to make infrastructure trusted. And so my take on that is you, in a lot of respects, you're shifting, you know, the client's burden to your r and d now they have a lot of work to do, so it's, it's not like they can go home and just relax, but, but that's a key part of the partnership that I see. Jeff, I wonder if you could give us the, the, the final thoughts. >>Sure. Dell has a long history of being a trusted partner with it, right? So we have unmatched capabilities. Going back to your point, we have the broadest portfolio, we have, you know, we're a leader in every category that we participate in. We have a broad deep breadth of portfolio. We have scale, we have innovation that is just unmatched within data protection itself. We are the trusted market leader, no if and or bots, we're number one for both data protection software in appliances per idc. And we would just name for the 17th consecutive time the leader in the, the Gartner Magic Quadrant. So bottom line is customers can count on Dell. 
>>Yeah, and I think again, we're seeing the evolution of data protection. It's not like the last 10 years; it's really becoming an adjacency and really a key component of your cyber strategy. I think those two parts of the organization are coming together. So guys, really appreciate your time. Thanks. >>Thank you, sir. Thanks, Dave. >>Travis, good to see you. All right, in a moment I'm gonna come right back and summarize what we learned today and what actions you can take for your business. You're watching "The Future of Multicloud Data Protection," made possible by Dell in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. Right back. >>In our data driven world, protecting data has never been more critical, to guard against everything from cyber incidents to unplanned outages. You need a cyber resilient, multicloud data protection strategy. >>It's not a matter of if you're gonna get hacked, it's a matter of when, and I wanna know that I can recover and continue to recover each day. >>It is important to have a cybersecurity and a cyber resiliency plan in place, because the threat of cyber attack is imminent. >>Power Protect Data Manager from Dell Technologies helps deliver the data protection and security confidence you would expect from a trusted partner and market leader. >>We chose Power Protect Data Manager because we've been a strategic partner with Dell Technologies for roughly 20 years now. Our partnership with Dell Technologies has provided us with the ability to scale and grow as we've transitioned from 10 billion in assets to 20 billion. >>With Power Protect Data Manager, you can enjoy exceptional ease of use to increase your efficiency and reduce costs. >>I installed it by myself, learned it by myself; it's very intuitive. >>Restoring a machine with Power Protect Data Manager is fast. We can fully manage Power Protect through the center. We can recover a whole machine in seconds.
>>Data Manager offers innovation such as transparent snapshots to simplify virtual machine backups, and it goes beyond backup and restore to provide valuable insights into protected data, workloads and VMs. >>In our previous environment, it would take anywhere from three to six hours at night to do a single backup of each VM. Now we're backing up hourly, and it takes two to three seconds with the transparent snapshots. >>With Power Protect Data Manager, you get the peace of mind knowing that your data is safe and available whenever you need it. >>Data is extremely important. We can't afford to lose any data. We need things just to work. >>Start your journey to modern data protection with Dell Power Protect Data Manager. Visit dell.com/power Protect Data Manager. >>We put forth the premise in our introduction that the worlds of data protection and cybersecurity must be more integrated. We said that data recovery strategies have to be built into security practices and procedures, and by default this should include modern hardware and software. Now, in addition to reviewing some of the challenges that customers face, which have been pretty well documented, we heard about new products that Dell Technologies is bringing to the marketplace that specifically address these customer concerns. There were three that we talked about today. First, the Power Protect Data Manager appliance, which is an integrated system taking advantage of Dell's history in data protection but adding new capabilities, and I want to come back to that in a moment. Second is Dell's Power Protect Cyber Recovery for Google Cloud Platform. This rounds out the big three public cloud providers for Dell, which joins AWS and Azure support.
You might recall earlier this year we saw the introduction from Dell of Apex backup services and then in May at Dell Technologies world, we heard about the introduction of Apex Cyber Recovery Services. And today Dell is making its most popular backup appliances available and Apex. Now I wanna come back to the Power Protect data manager appliance because it's a new integrated appliance. And I asked Dell off camera really what is so special about these new systems and what's really different from the competition because look, everyone offers some kind of integrated appliance. So I heard a number of items, Dell talked about simplicity and efficiency and containers and Kubernetes. So I kind of kept pushing and got to what I think is the heart of the matter in two really important areas. One is simplicity. >>Dell claims that customers can deploy the system in half the time relative to the competition. So we're talking minutes to deploy and of course that's gonna lead to much simpler management. And the second real difference I heard was backup and restore performance for VMware workloads. In particular, Dell has developed transparent snapshot capabilities to fundamentally change the way VMs are protected, which leads to faster backup and restores with less impact on virtual infrastructure. Dell believes this new development is unique in the market and claims that in its benchmarks the new appliance was able to back up 500 virtual machines in 47% less time compared to a leading competitor. Now this is based on Dell benchmarks, so hopefully these are things that you can explore in more detail with Dell to see if and how they apply to your business. So if you want more information, go to the data protectionPage@dell.com. You can find that at dell.com/data protection. And all the content here and other videos are available on demand@thecube.net. Check out our series on the blueprint for trusted infrastructure, it's related and has some additional information. 
And go to siliconangle.com for all the news and analysis related to these and other announcements. This is Dave Vellante. Thanks for watching "The Future of Multicloud Data Protection," made possible by Dell in collaboration with theCUBE, your leader in enterprise and emerging tech coverage.
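Taking the figures quoted in this episode at face value, a quick back-of-the-envelope calculation shows what the two headline claims imply. Note the midpoints used for the quoted ranges are an assumption on my part, not numbers from Dell:

```python
# Claim 1: transparent snapshots cut a per-VM backup from 3-6 hours to 2-3 seconds.
nightly_backup_s = 4.5 * 3600          # midpoint of the quoted 3-6 hour range
snapshot_backup_s = 2.5                # midpoint of the quoted 2-3 second range
speedup = nightly_backup_s / snapshot_backup_s
print(f"per-backup speedup: ~{speedup:,.0f}x")        # prints: per-backup speedup: ~6,480x

# Claim 2: 500 VMs backed up in 47% less time than a leading competitor.
competitor_time = 1.0                  # normalized competitor backup window
dell_time = competitor_time * (1 - 0.47)
throughput_gain = competitor_time / dell_time
print(f"relative throughput: ~{throughput_gain:.2f}x")  # prints: relative throughput: ~1.89x
```

In other words, "47% less time" translates to roughly 1.9x the backup throughput on the same 500-VM workload, and the per-VM snapshot claim is a three-to-four-order-of-magnitude change in backup window, which is why the vendor frames it as enabling hourly rather than nightly protection.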

Published Date : Oct 28 2022



The Future of Multicloud Data Protection is Here FULL EPISODE V1


 

>> Prior to the pandemic, organizations were largely optimized for efficiency as the best path to bottom line profits. Many CIOs tell theCUBE privately that they were caught off guard by the degree to which their businesses required greater resiliency beyond their somewhat cumbersome disaster recovery processes. And the lack of that business resilience has actually cost firms because they were unable to respond to changing market forces. And certainly, we've seen this dynamic with supply chain challenges. And there's little doubt we're also seeing it in the area of cybersecurity generally, and data recovery specifically. Over the past 30 plus months, the rapid adoption of cloud to support remote workers and build in business resilience had the unintended consequences of expanding attack vectors, which brought an escalation of risk from cybercrime. While security in the public cloud is certainly world class, the result of multicloud has brought with it multiple shared responsibility models, multiple ways of implementing security policies across clouds and on-prem. And at the end of the day, more, not less, complexity. But there's a positive side to this story. The good news is that public policy, industry collaboration and technology innovation is moving fast to accelerate data protection and cybersecurity strategies with a focus on modernizing infrastructure, securing the digital supply chain, and very importantly, simplifying the integration of data protection and cybersecurity. Today, there's heightened awareness that the world of data protection is not only an adjacency to, but is becoming a fundamental component of cybersecurity strategies. In particular, in order to build more resilience into a business, data protection people, technologies and processes must be more tightly coordinated with security operations. Hello, and welcome to "The Future of Multicloud Data Protection" made possible by Dell in collaboration with theCUBE.
My name is Dave Vellante and I'll be your host today. In this segment, we welcome into theCUBE two senior executives from Dell who will share details on new technology announcements that directly address these challenges. Jeff Boudreau is the President and General Manager of Dell's Infrastructure Solutions Group, ISG, and he's going to share his perspectives on the market and the challenges he's hearing from customers. And we're going to ask Jeff to double click on the messages that Dell is putting into the marketplace and give us his detailed point of view on what it means for customers. Now, Jeff is going to be joined by Travis Vigil. Travis is the Senior Vice-President of Product Management for ISG at Dell Technologies, and he's going to give us details on the products that are being announced today and go into the hard news. Now, we're also going to challenge our guests to explain why Dell's approach is unique and different in the marketplace. Thanks for being with us. Let's get right into it. (upbeat music) We're here with Jeff Boudreau and Travis Vigil, and we're going to dig into the details about Dell's big data protection announcement. Guys, good to see you. Thanks for coming in. >> Good to see you. Thank you for having us. >> You're very welcome. Alright, let's start off Jeff, with the high level. You know, I'd like to talk about the customer, what challenges they're facing? You're talking to customers all the time. What are they telling you? >> Sure, as you know, we spend a lot of time with our customers, specifically listening, learning, understanding their use cases, their pain points within their specific environments. They tell us a lot. No surprise to any of us that data is a key theme that they talk about. It's one of their most important assets. They need to extract more value from that data to fuel their business models, their innovation engines, their competitive edge. 
So, they need to make sure that that data is accessible, it's secure and it's recoverable, especially in today's world with the increased cyber attacks. >> Okay, so maybe we could get into some of those challenges. I mean, when you talk about things like data sprawl, what do you mean by that? What should people know? >> Sure, so for those big three themes, I'd say, you have data sprawl, which is the big one, which is all about the massive amounts of data. It's the growth of that data, which is growing at unprecedented rates. It's the gravity of that data and the reality of the multicloud sprawl. So stuff is just everywhere, right? Which increases that attack surface for cyber criminals. >> And by gravity, you mean the data's there and people don't want to move it. >> It's everywhere, right? And so when it lands someplace, think Edge, Core or Cloud, it's there. And it's something we have to help our customers with. >> Okay, so it's nuanced 'cause complexity has other layers. What are those layers? >> Sure. When we talk to our customers, they tell us complexity is one of their big themes. And specifically it's around data complexity. We talked about that growth and gravity of the data. We talk about multicloud complexity and we talk about multicloud sprawl. So multiple vendors, multiple contracts, multiple tool chains, and none of those work together in this multicloud world. Then that drives their security complexity. So, we talk about that increased attack surface. But this really drives a lot of operational complexity for their teams. Think about it: we're lacking consistency through everything. So people, process, tools, all that stuff, which is really wasting time and money for our customers. >> So, how does that affect the cyber strategies and the, I mean, I've often said the CISOs, now they have this shared responsibility model. They have to do that across multiple clouds. Every cloud has its own security policies and frameworks and syntax. 
So, maybe you could double click on your perspective on that. >> Sure. I'd say the big challenge customers have seen, it's really inadequate cyber resiliency and specifically, they're feeling very exposed. And in today's world, with cyber attacks being more and more sophisticated, if something goes wrong, it is a real challenge for them to get back up and running quickly. And that's why this is such a big topic for CEOs and businesses around the world. >> You know, it's funny. I said this in my open. I think that prior to the pandemic businesses were optimized for efficiency, and now they're like, "Wow, we have to actually put some headroom into the system to be more resilient." You know, are you hearing that? >> Yeah, we absolutely are. I mean, the customers really, they're asking us for help, right? It's one of the big things we're learning and hearing from them. And it's really about three things. One's about simplifying IT. Two, it's really helping them to extract more value from their data. And then the third big piece is ensuring their data is protected and recoverable regardless of where it is, going back to that data gravity and that very, you know, the multicloud world. Just recently, I don't know if you've seen it, but the Global Data Protected, excuse me, the Global Data Protection Index. >> GDPI. >> Yes. Jesus. >> Not to be confused with GDPR. >> Actually, that was released today and confirms everything we just talked about around customer challenges. But also it highlights the importance of having a robust, cyber resilient data protection strategy. >> Yeah, I haven't seen the latest, but I want to dig into it. I think this, I've done this many, many years in a row. I'd like to look at the time series and see how things have changed. All right. At a high level, Jeff, can you kind of address why Dell, from your point of view, is best suited? >> Sure. So, we believe there's a better way or a better approach on how to handle this. 
We think Dell is uniquely positioned to help our customers as a one stop shop, if you will, for their cyber resilient multicloud data protection needs. We take a modern, a simple and resilient approach. >> What does that mean? What do you mean by modern? >> Sure. So modern, we talk about our software defined architecture. Right? It's really designed to meet the needs not only of today, but really into the future. And we protect data across any cloud and any workload. So, we have a proven track record doing this today. We have more than 1,700 customers that trust us to protect more than 14 exabytes of their data in the cloud today. >> Okay, so you said modern, simple and resilient. What do you mean by simple? >> Sure. We want to provide simplicity everywhere, going back to helping with the complexity challenge. And that's from deployment to consumption, to management and support. So, our offers will deploy in minutes. They are easy to operate and use, and we support flexible consumption models for whatever the customer may desire. So, traditional subscription or as a service. >> And when you talk about resilient, I mean, I put forth that premise, but it's hard because people say, "Well, that's going to cost us more." Well, it may, but you're going to also reduce your risk. So, what's your point of view on resilience? >> Yeah, I think it's something all customers need. So, we're going to be providing a comprehensive and resilient portfolio of cyber solutions that are secure by design. And we have some unique capabilities and a combination of things like built in immutability, physical and logical isolation. We have intelligence built in with AI-powered recovery. And just one, I guess, fun fact for everybody is that our cyber vault is the only solution in the industry that is endorsed by Sheltered Harbor, that meets all the needs of the financial sector. >> So it's interesting when you think about the NIST framework for cybersecurity. 
It's all about layers. You're sort of bringing that now to data protection. >> Jeff: Correct. Yeah. >> All right. In a minute, we're going to come back with Travis and dig into the news. We're going to take a short break. Keep it right there. (upbeat music) (upbeat adventurous music) Okay, we're back with Jeff and Travis Vigil to dig deeper into the news. Guys, again, good to see you. Travis, if you could, maybe you, before we get into the news, can you set the business context for us? What's going on out there? >> Yeah. Thanks for that question, Dave. To set a little bit of the context, when you look at the data protection market, Dell has been a leader in providing solutions to customers for going on nearly two decades now. We have tens of thousands of people using our appliances. We have multiple thousands of people using our latest modern, simple PowerProtect Data Manager software. And as Jeff mentioned, we have 1,700 customers protecting 14 exabytes of data in the public clouds today. And that foundation gives us a unique vantage point. We talk to a lot of customers and they're really telling us three things. They want simple solutions. They want us to help them modernize. And they want us to, as the highest priority, maintain that high degree of resiliency that they expect from our data protection solutions. So, that's the backdrop to the news today. And as we go through the news, I think you'll agree that each of these announcements delivers on those pillars. And in particular, today we're announcing the PowerProtect Data Manager Appliance. We are announcing PowerProtect Cyber Recovery enhancements, and we are announcing enhancements to our APEX Data Storage Services. >> Okay, so three pieces. Let's dig into that. It's interesting, appliance, everybody wants software, but then you talk to customers and they're like, "Well, we actually want appliances because we just want to put it in and it works." >> Travis: (laughs) Right. >> It performs great. 
So, what do we need to know about the appliance? What's the news there? >> Well, you know, part of the reason I gave you some of those stats to begin with is that we have this strong foundation of experience, but also intellectual property components that we've taken, that have been battle tested in the market, and we've put them together in a new simple, integrated appliance that really combines the best of the target appliance capabilities we have with that modern, simple software. And we've integrated it, you know, sort of taking all of those pieces and putting them together in a simple, easy to use and easy to scale interface for customers. >> So, the premise that I've been putting forth for months now, probably well over a year, is that data protection is becoming an extension of your cybersecurity strategies. So, I'm interested in your perspective on cyber recovery. Your specific news that you have there. >> Yeah, you know, in addition to simplifying things via the appliance, we are providing solutions for customers no matter where they're deploying. And cyber recovery, especially when it comes to cloud deployments, is an increasing area of interest and deployment that we see with our customers. So, what we're announcing today is that we're expanding our cyber recovery services to be available in Google Cloud. With this announcement, it means we're available in all three of the major clouds, and it really provides customers the flexibility to secure their data no matter if they're running on-premises, in a colo, at the Edge, or in the public cloud. And the other nice thing about this announcement is that you have the ability to use Google Cloud as a cyber recovery vault, which really allows customers to isolate critical data, and they can recover that critical data from the vault back to on-premises, or from that vault back to running their cyber protection or their data protection solutions in the public cloud. 
>> I always invoke my favorite Matt Baker here. "It's not a zero sum game", but this is a perfect example where there's opportunities for a company like Dell to partner with the public cloud provider. You've got capabilities that don't exist there. You've got the on-prem capabilities. We could talk about Edge all day, but that's a different topic. Okay, so my other question, Travis, is how does this all fit into APEX? We hear a lot about APEX as a service. It's sort of the new hot thing. What's happening there? What's the news around APEX? >> Yeah, we've seen incredible momentum with our APEX solutions since we introduced data protection options into them earlier this year. And we're really building on that momentum with this announcement by providing solutions that allow customers to consume flexibly. And so, what we're announcing specifically is that we're expanding APEX Data Storage Services to include a data protection option. And like with all APEX offers, it's a pay-as-you-go solution. It really streamlines the process of customers purchasing, deploying, maintaining and managing their backup software. All a customer really needs to do is specify their base capacity. They specify their performance tier. They tell us whether they want a one year term or a three year term, and we take it from there. We get them up and running so they can start deploying and consuming flexibly. And as with many of our APEX solutions, it's a simple user experience, all exposed through a unified APEX Console. >> Okay, so you're keeping it simple, like I think large, medium, small. You know, we hear a lot about T-shirt sizes. I'm a big fan of that 'cause you guys should be smart enough to figure out, you know, based on my workload, what I need. How different is this? I wonder if you guys could address this. Jeff, maybe you can start. >> Sure, I'll start and then- >> Pitch me. >> You know, Travis, you jump in when I screw up here. >> Awesome. 
>> So, first I'd say we offer innovative multicloud data protection solutions that deliver the performance, efficiency and scale that our customers demand and require. We support, as Travis said, all the major public clouds. We have a broad ecosystem of workload support and I guess the great news is we're up to 80% more cost effective than any of the competition. >> Dave: 80%? >> 80% >> Hey, that's a big number. All right, Travis, what's your point of view on this? >> Yeah, I think number one, end-to-end data protection. We are that one stop shop that I talked about, whether it's a simplified appliance, whether it's deployed in the cloud, whether it's at the Edge, whether it's integrated appliances, target appliances, software. We have solutions that span the gamut as a service. I mentioned the APEX solution as well. So really, we can provide solutions that help support customers and protect them, any workload, any cloud, anywhere that data lives. Edge, Core to Cloud. The other thing that we hear as a big differentiator for Dell, and Jeff touched on this a little bit earlier, is our Intelligent Cyber Resiliency. We have a unique combination in the market where we can offer immutability or protection against deletion as sort of that first line of defense. But we can also offer a second level of defense, which is isolation, talking about data vaults or cyber vaults and cyber recovery. And more importantly, the intelligence that goes around that vault. It can look at detecting cyber attacks. It can help customers speed time to recovery. And really provides AI and ML to help early diagnosis of a cyber attack and fast recovery should a cyber attack occur. And if you look at customer adoption of that solution, specifically in the cloud, we have over 1300 customers utilizing PowerProtect Cyber Recovery. >> So, I think it's fair to say that your portfolio has obviously been a big differentiator. 
Whenever I talk to your finance team, Michael Dell, et cetera, that end-to-end capability, that your ability to manage throughout the supply chain. We actually just did an event recently with you guys where you went into what you're doing to make infrastructure trusted. And so my take on that is, in a lot of respects, you're shifting the client's burden to your R&D. Now, they still have a lot of work to do, so it's not like they can go home and just relax. But that's a key part of the partnership that I see. Jeff, I wonder if you could give us the final thoughts. >> Sure. Dell has a long history of being a trusted partner within IT, right? So, we have unmatched capabilities. Going back to your point, we have the broadest portfolio. We're a leader in every category that we participate in. We have a broad, deep portfolio. We have scale. We have innovation that is just unmatched. Within data protection itself, we are the trusted market leader. No ifs, ands or buts. We're number one for both data protection software and appliances per IDC, and we were just named for the 17th consecutive time the leader in the Gartner Magic Quadrant. So, bottom line is customers can count on Dell. >> Yeah, and I think again, we're seeing the evolution of data protection. It's not like the last 10 years. It's really becoming an adjacency and really, a key component of your cyber strategy. I think those two parts of the organization are coming together. So guys, really appreciate your time. Thanks for coming. >> Thank you, sir. >> Dave. >> Travis, good to see you. All right, in a moment I'm going to come right back and summarize what we learned today, what actions you can take for your business. You're watching "The Future of Multicloud Data Protection" made possible by Dell in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. Right back. >> Advertiser: In our data-driven world, protecting data has never been more critical. 
To guard against everything from cyber incidents to unplanned outages, you need a cyber resilient multicloud data protection strategy. >> It's not a matter of if you're going to get hacked, it's a matter of when. And I want to know that I can recover and continue to recover each day. >> It is important to have a cyber security and a cyber resiliency plan in place because the threat of cyber attacks is imminent. >> Advertiser: PowerProtect Data Manager from Dell Technologies helps deliver the data protection and security confidence you would expect from a trusted partner and market leader. >> We chose PowerProtect Data Manager because we've been a strategic partner with Dell Technologies for roughly 20 years now. Our partnership with Dell Technologies has provided us with the ability to scale and grow as we've transitioned from 10 billion in assets to 20 billion. >> Advertiser: With PowerProtect Data Manager, you can enjoy exceptional ease of use to increase your efficiency and reduce costs. >> I installed it by myself, learned it by myself. It was very intuitive. >> While restoring your machine with PowerProtect Data Manager is fast, we can fully manage PowerProtect through vCenter. We can recover a whole machine in seconds. >> Advertiser: Data Manager offers innovations such as transparent snapshots to simplify virtual machine backups, and it goes beyond backup and restore to provide valuable insights into protected data, workloads and VMs. >> In our previous environment, it would take anywhere from three to six hours a night to do a single backup of each VM. Now, we're backing up hourly and it takes two to three seconds with the transparent snapshots. >> Advertiser: With PowerProtect Data Manager, you get the peace of mind knowing that your data is safe and available whenever you need it. >> Data is extremely important. We can't afford to lose any data. We need things just to work. 
>> Advertiser: Start your journey to modern data protection with Dell PowerProtect Data Manager. Visit dell.com/powerprotectdatamanager >> We put forth the premise in our introduction that the worlds of data protection and cybersecurity must be more integrated. We said that data recovery strategies have to be built into security practices and procedures and by default, this should include modern hardware and software. Now, in addition to reviewing some of the challenges that customers face, which have been pretty well documented, we heard about new products that Dell Technologies is bringing to the marketplace that specifically address these customer concerns. And there were three that we talked about today. First, the PowerProtect Data Manager Appliance, which is an integrated system taking advantage of Dell's history in data protection, but adding new capabilities. And I want to come back to that in a moment. Second is Dell's PowerProtect Cyber Recovery for Google Cloud Platform. This rounds out the big three public cloud providers for Dell, which joins AWS and Azure support. Now finally, Dell has made its target backup appliances available in APEX. You might recall, earlier this year we saw the introduction from Dell of APEX Backup Services, and then in May at Dell Technologies World, we heard about the introduction of APEX Cyber Recovery Services. And today, Dell is making its most popular backup appliances available in APEX. Now, I want to come back to the PowerProtect Data Manager Appliance because it's a new integrated appliance, and I asked Dell off camera, "Really, what is so special about these new systems and what's really different from the competition?" Because look, everyone offers some kind of integrated appliance. So, I heard a number of items. Dell talked about simplicity and efficiency and containers and Kubernetes. So, I kind of kept pushing and got to what I think is the heart of the matter in two really important areas. One is simplicity. 
Dell claims that customers can deploy the system in half the time relative to the competition. So, we're talking minutes to deploy, and of course that's going to lead to much simpler management. And the second real difference I heard was backup and restore performance for VMware workloads. In particular, Dell has developed transparent snapshot capabilities to fundamentally change the way VMs are protected, which leads to faster backups and restores with less impact on virtual infrastructure. Dell believes this new development is unique in the market and claims that in its benchmarks, the new appliance was able to back up 500 virtual machines in 47% less time compared to a leading competitor. Now, this is based on Dell benchmarks, so hopefully these are things that you can explore in more detail with Dell to see if and how they apply to your business. So if you want more information, go to the data protection page at dell.com. You can find that at dell.com/dataprotection. And all the content here and other videos are available on demand at theCUBE.net. Check out our series on the blueprint for trusted infrastructure; it's related and has some additional information. And go to siliconangle.com for all the news and analysis related to these and other announcements. This is Dave Vellante. Thanks for watching "The Future of Multicloud Data Protection" made possible by Dell, in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. (upbeat music)
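The closing analysis quotes two vendor numbers: deployment in half the time, and 500 VMs backed up in 47% less time. As a quick sanity check on what a claimed percentage reduction means in practice, here is a minimal sketch; the baseline durations below are hypothetical figures for illustration, not numbers from Dell's benchmark:

```python
def time_after_reduction(baseline_minutes, reduction_pct):
    """Duration remaining after a claimed percentage reduction in time."""
    return baseline_minutes * (1 - reduction_pct / 100)

# "47% less time" means the new run takes 53% of the baseline.
# Hypothetical: if a competitor backed up 500 VMs in 100 minutes,
# the claim implies roughly 53 minutes for the same job.
print(time_after_reduction(100, 47))   # 53.0

# "Deploy in half the time" is simply a 50% reduction.
print(time_after_reduction(60, 50))    # 30.0
```

The point of the arithmetic is that both claims are only meaningful relative to the competitor baseline the vendor chose, which is why the transcript's advice to validate the benchmarks against your own environment matters.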

Published Date : Oct 27 2022



Darren Wolner, Lumen | VMware Explore 2022


 

(upbeat music) >> Welcome back, everyone, to theCUBE's coverage of VMware Explore 2022, formerly VMworld. We've been covering this event since 2010. I'm with Dave Nicholson, my cohost. We've got two sets here, live for three days, breaking down all the action, what's going on in the news, what announcements, what are the partners doing, you got the VMware execs, you got the customers, and you got the partner ecosystem, which is booming. We got Darren Wolner, Senior Director of Product Management at Lumen, SASE and SD-WAN, in the midst of it all. The internet is SD-WAN, this is all rocking. Welcome to theCUBE. Thanks for coming on. >> Hey. Thanks for having me, guys. I really appreciate being here. >> Well, we know the name change to Lumen from CenturyLink. You guys have been on many times on theCUBE talking about, you know, the connective tissue. You got infrastructure, platform, now SASE. Cloud's changing. We're calling it supercloud. Some people call it multicloud. But the game is still the same. You got an on-premise environment, you got edge, could be a building. And you got now cloud-native hyperscale, cloud players, now all connecting, kind of like the old branch office days, connect here. So a lot of the same kind of concepts, but done differently. Give us the quick update from Lumen. What are you guys seeing? What are some of the big trends? >> So the quick update from Lumen is that we just launched a new service called SASE that we're extremely excited about. And this new service from Lumen takes advantage of a lot of the infrastructure that you just mentioned. So we're able to take advantage of our cloud edge 60 plus nodes to help customers move their applications closer to where they're doing business. Major performance boosts. So even though all these customers want to move the workloads to the cloud to improve their efficiency, improve their performance, we are acting quickly to make sure that that experience is a positive one. 
So as things are evolving and changing, so is Lumen, and we're pushing towards that evolution in technology. >> Take a minute to explain, just kind of set the table to the situation of how you guys relate to your customers. You mentioned SASE, which is a service I want to get into. Okay, got connectivity. What are some of the use cases? Where does SASE fit in? What is the use case with the customers? Where are you seeing the most traction? >> And you need to define SASE. It's always a party foul to use an acronym without defining it immediately after the first time you used it, so. >> Okay, so I have to recover from that foul. So, absolutely. So SASE, we view SASE as a convergence of network and security. And what we're doing with SASE is that we're delivering this package of services that are cloud based, that customers can pick and choose whichever ones they want. And that's Secure Access Service Edge. And that is what we're very excited to talk about. >> I mean, basically it's connectivity, it's application security, it's edge. So it's end-to-end. So we all get the acronym. Nice play there. But when reality comes to the customer, what is the use case that you guys are seeing the most on? Lift and shift I get. Is it lift and shift and then cloud native to on-prem? What are some of the things specifically that you guys are selling into? >> Specifically, what we're seeing is that customers want to evolve their networks and move to cloud environments, but not everybody's ready to do it all at the same time. That's part of the reason why SASE has become so popular right now. Because we're enabling customers to pick and choose the order in which they want to move to cloud enabled services, and we're allowing them to choose one or choose them all. And from a use case perspective, as we've just gone through COVID, and everybody knows work from home has become an extremely important way of doing business, we want to give that flexibility. 
>> No one would've forecasted 100% work-from-home, VPN, move it, under-provisioned. (men laughing) So again, shock to the system. >> It is, it is, it is. It was, but with a solution like this, we're able to provide our customers with flexibility to run their businesses any way they want. If they want to be premise-based, we can support them. If they want to be remote, we can support them. That is a huge use case right now. >> I mean, all joking aside, the forcing function, necessity's the mother of invention, and the pandemic really kind of changed the game. How do you guys see security evolving? Because as you look at the security, you got Fortinet out there. I know you guys have a relationship with them. You got VMware. There's a lot of different tools and platforms emerging. We hear every CSO we talk to is like, hey, I want to take my 35 tools down to 24, and more platforms, and much more defensibility, not just point security. How do you discuss that with customers around the security conversation? >> So we're finding that our customers want a little bit more simplicity. You had mentioned that they want to bring down their numbers to something that's a little bit more manageable. With the service that we've just launched, we have single vendor solutions, and we're looking to simplify that path for the customer. And it's about simplicity, but it's also about optionality. We want to make sure that we can say yes to our customers. And whatever path they want to go down, from a software perspective, we're able to support them. And the flexibility of our platform allows that to happen. >> You know, networking, Dave, we always talk about the three major pillars: networking, compute, storage. They never go away. >> No. >> They'll always be around. Networking is now front and center, especially with the abstractions going on. You're starting to see supercloud discussions. 
You see companies buying more cloud native, like with AWS, to take that CapEx off, but now are putting all that energy into modern application development. Which now puts pressure on, okay, well what about network policies? So networking is in the fold again. It's always been there, it never left, but it's becoming different. How do you see the different conversations happening with the network component, with the cloud native trend that we're seeing here? >> Well, I think the network component is really table stakes. And what's happening is, as everybody is interested in moving to the cloud, services are becoming instant, right? Digitized. But with the network, customers are still looking for that level of support from a company like Lumen, and they know that we have a vast infrastructure. So the network conversation doesn't go away. It just evolves. What's happening is customers want to understand how they can better secure those networks. And then what's also happening is people want to use any device, anywhere, anytime. So the conversation about the network is important, but when you think about security, it's starting to move away from the network. It already has. >> There's no more perimeter. >> Exactly. So we need to be able to secure our customers wherever they are, however they want to use their devices. And for us, that path was SASE. >> So go into a little more depth in terms of how this is deployed. What is this thing that is SASE? >> Absolutely. >> Is this software living on the edge on people's servers? Does it include some sort of physical components and wizardry? >> Well... (laughs) >> Peel back-- >> Is it self-service? Is it installable? Does it need professional services? >> So, there is a little bit of wizardry. And what we put together is really an awesome digital platform where customers have the ability to go into the Lumen marketplace, and in five simple steps, purchase a SASE solution based on a few discrete choices that they need to make. 
And once they've provisioned that, once they've purchased that service, now they have those entitlements. We've created an all new application from the ground up called the Lumen SASE Manager where they're able to go in, take their entitlements, design, build, manage their network. So the customer can go through this journey, and it's relatively quick. And they have tons of flexibility to do that. However, if a customer prefers a seller-led journey, we're still going to help them do that as well. So really the spirit of SASE for us was to give ultimate flexibility to the customer. Consume exactly what you want, consume it the way you want to, but the simplicity factor with our digital approach I think is something that we feel is pretty game changing. >> So when one of those customers, let's say you have a campaign, "Thank you, SASE." What are those customers thanking you for? Give me an example of what a delighted customer would point to as, "I'm really glad we made the decision to do this with Lumen." Why would they be happy? >> Why would they be happy? Because the advantage of doing this with Lumen is not only that simplified digital approach, but we're selling them essentially a cookie, right? And that cookie has two layers, and it has cream filling. And what's going on is-- >> Tastes great. >> Definitely, definitely. But everybody has different tastes, and we'll get to that in a second. But the top layer is the infrastructure that Lumen provides. And we have a vast infrastructure, 450,000 route miles of fiber, 60 plus cloud edge nodes to bring compute closer to the customer. So that's a very important layer that we're providing. And then the other layer of the cookie is the management. Different customers have different needs. Not every business looks alike. So you're going to have some businesses who have invested in their security apparatus, and they may not need as much help from us.
So we're offering customers different levels of managed service wrapper so they can buy exactly what they need, no more, no less. So let's get to the cream filling. Everybody likes the cream filling, but not everybody likes the same kind. Every time you go down the supermarket aisle and you look at your favorite cream cookie, there's different types of flavors that are introduced from time to time. So what we want to do is to be able to say yes to our customers and give them as much variety in cream flavors as possible. And that's where the software comes in. If you have dedicated a lot of expertise to a certain platform, we want to be able to support that software platform. And I think the flexibility of the Lumen platform and the flexibility of Lumen SASE solutions allows us to give that flexibility back. >> So you're putting that wizardry at the edge, so the customer's environment, whatever they have, flexes with the connectivity? >> It does, yes. >> That's what you're getting at. I mean, at the end of the day, we need the network. Everybody wants more bandwidth. >> It's not going away. >> Faster, faster, faster. >> That's right. >> We need more bandwidth. >> That's right. >> But it could be smarter. But that also implies some potential overhead. So you got to understand the end to end. That's where I think the interesting SD-WAN tie-in comes in. How do you talk to customers about that piece? Is it simply you can have your cake and eat it too, and you lose weight with Lumen? I stole that line from Victoria from VMware. I want my cake and eat it too, and I want to lose weight. >> I mean, wouldn't that be a wonderful world if we could do that? Have our cake and lose weight. >> I want to make sure. Yeah.
>> But when it comes to SD-WAN, especially under our SASE umbrella, what we're looking to do is go down the road of simplicity and try to work out the amount of compute that a customer needs, and the amount of storage, I'm sorry, not storage, the amount of throughput that a customer needs. And we're getting these customers to make these decisions. They know what they have. They know what they want to run. We will consult with them. Whether they go through our digital experience, whether they go through our seller-led experience, there's always off ramps and a way to talk to a human being and make choices. So we're giving the customer enough information to make an informed decision, and we're here to support them if they need more. >> So you're customer-centric. You guys are good there. I mean, that's solid. Great track record there. I guess my final two questions are: one, how do I consume? I'm the customer. How do I consume? And what's on the roadmap going forward? I mean, look at the project management. You got the keys to the kingdom on the roadmap. And you can share if you want, but maybe you can't share some things. But what's the consumption model? Where do I find it? Is it the marketplace? Is it through channel partners and service providers? And then what's on the roadmap? >> Sure, absolutely. So you can consume this on dotcom through the Lumen marketplace. You could interact with the learn and the buy experience. And then once you've gone through that experience, you're going to consume it through the SASE manager. That's how you're going to use and interact with the service. That's how you're going to consume it. And then you're going to continue to utilize the SASE manager for reporting, access to portals, so forth and so on. You need to make a change to your service, not a problem. It's simple. You go back into the SASE manager, you add more seats to your ZTNA solution. 
You want to add another site, you go back into the SASE manager, you could purchase another site. We'll take care of all of it. Everything is automated. >> If you're a VMware customer, what's in it for them? >> This is great for VMware. It's the automation of the complete security stack. It's the automation of the SD-WAN portion. And we think that this total package is something that's going to be very appealing to VMware fans, VMware customers, and most importantly, when a VMware customer comes to us and says, "I have a ton of experience with VMware, and I don't want to move away from it, but I can really use the management and the infrastructure that you guys have," I'm able to say yes. >> And then you got the Aria coming out, now you got the cross-cloud, going to be very interesting. Okay, what's on the roadmap? Tell us what's the secret sauce. Reveal some secrets. >> Reveal some secrets. I dunno, there's a lot of people watching. >> They're shaking their head over there, "Don't say it! Don't say it!" (laughs) >> We have a lot of exciting things on the roadmap. I will tell you this because I think it's very important. The way we are developing services today has shifted. No longer can companies afford to roll out one product a year and wait. It takes you a year to roll that product out, and it's stale by the time it comes out, and then it takes you another year to fix it. We have moved to continuous development cycles. We are keeping track of what's going on in the market, what the hot trends are, what the hot services are, and as SASE continues to evolve, we will be able to quickly evolve. So while we do have some ideas of where we want to go on the roadmap, and I'm sure they're shaking their heads over there, what I love is we now have the ability to listen to what our customers want and act quickly. >> I call it the holy trinity. Network, storage, compute, get that software intelligence at the edge which is going to be really popular.
You guys are in a really perfect position. Thanks for coming on, sharing on theCUBE. >> Thank you so much, thank you. >> Okay, Darren's here on theCUBE breaking it down for Lumen, formerly CenturyLink, rebranded a few years ago. Connectivity is the key. You still got to connect, network, compute, storage, and you got the data center now, the cloud hybrid, now multicloud. This is the super CUBE, covering supercloud here at VMware Explore 2022. We'll be right back after this short break. (upbeat music)

Published Date : Aug 31 2022


Naina Singh & Roland Huß, Red Hat | Kubecon + Cloudnativecon Europe 2022


 

>> Announcer: "theCUBE" presents KubeCon and CloudNativeCon Europe 2022 brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain and KubeCon and CloudNativeCon Europe 2022. I'm Keith Townsend, my co-host, Paul Gillin, Senior Editor, Enterprise Architecture, for SiliconANGLE. We're going to talk, or continue to talk to amazing people. The coverage has been amazing, but also the city of Valencia is beautiful. I have to eat a little crow, I landed and I saw the convention center, Paul, have you got out and explored the city at all? >> Absolutely, my first reaction to Valencia when we were out in this industrial section was, "This looks like Cincinnati." >> Yes. >> But then I got on the bus second day here, 10 minutes to downtown, another world, it's almost a middle ages flavor down there with these little winding streets and just absolutely gorgeous city. >> Beautiful city. I compared it to Charlotte, no disrespect to Charlotte, but this is an amazing city. Naina Singh, Principal Product Manager at Red Hat, and Roland Huss, also Principal Product Manager at Red Hat. We're going to talk a little serverless. I'm going to get this right off the bat. People get kind of feisty when we call things like Knative serverless. What's the difference between something like a Lambda and Knative? >> Okay, so I'll start. Lambda is, like, a function as a service, right? Which is one of the definitions of serverless. Serverless is a deployment platform now. When we introduced serverless to containers through Knative, that's when the serverless got revolutionized, it democratized serverless. Lambda was proprietary-based, you write small snippets of code, run for a short duration of time on demand, and done. And then Knative which brought serverless to containers, where all those benefits of easy, practical, event-driven, running on demand, going up and down, all those came to containers.
So that's where Knative comes into the picture. >> Yeah, I would also say that Knative is based on containers from the very beginning, and so, it really allows you to run arbitrary workloads in your container, whereas with Lambda you have only a limited set of languages that you can use and you have a runtime contract there which is much easier with Knative to run your applications, for example, if it's coming in a language that is not supported by Lambda. And of course the most important benefit of Knative is it's run on top of Kubernetes, which allows you- >> Yes. >> To run your serverless platform on any other Kubernetes installation, so I think this is one of the biggest things. >> I think we saw about three years ago there was a burst of interest around serverless computing and really some very compelling cost arguments for using it, and then it seemed to die down, we haven't heard a lot about serverless, and maybe I'm just not listening to the right people, but what is it going to take for serverless to kind of break out and achieve its potential? >> Yeah, I would say that really the big advantage of course of Knative in that case is that you can scale down to zero. I think this is one of the big things that will really bring more people onto board because you really save a lot of money with that if your applications are not running when they're not used. Yeah, I think also that, because you don't have this vendor lock-in kind of thing, when people realize that you can run really on every Kubernetes platform, then I think that the journey of serverless will continue. >> And I will add that the event-driven applications, there hasn't been enough buzz around them yet. There is, but serverless is going to bring a new lease on life for them, right? The other thing is the ease of use for developers. With Knative, we are introducing a new programming model, the functions, where you don't even have to create containers, it will create the containers for you.
>> So you create the servers, but not the containers? >> Right now, you create the containers and then you deploy them in a serverless fashion using Knative. But the container creation was on the developers, and functions is going to be the third component of Knative that we are developing upstream, and Red Hat donated that project, is going to provide code-to-cloud capability. So you bring your code and everything else will be taken care of, so. >> So, I'd call a function or, it's funny, we're kind of circular with this. What used to be, I'd write a function and put it into a container, this server will provide that function not just call that function as if I'm developing kind of a low code no code, not no code, but a low code effort. So if there's a repetitive thing that the community wants to do, you'll provide that as a predefined function or as a server. >> Yeah, exactly. So functions really helps the developer to bring their code into the container, so it's really kind of a new (indistinct) on top of Knative- >> On top of. >> And of course, it's also a more opinionated approach. It's really coming closer to Lambda now because it also comes with a programming model, which means that you have a certain signature that you have to implement and other stuff. But you can also create your own templates, because at the end what matters is that you have a container at the end that you can run on Knative. >> What kind of applications is serverless really the ideal platform for? >> Yeah, of course the ideal application is an HTTP-based web application that has no state and that has a very non-uniform traffic shape, which means that, for example, if you have a business where you only have spikes at certain times, like maybe for Super Bowl or Christmas, when selling some merchandise like that, then you can scale up from zero very quickly to an arbitrary high, depending on the load.
And this is, I think, the big benefit over, for example, Kubernetes Horizontal Pod Autoscaling where it's more like indirect measures of scaling based on CPU or memory, but here, it directly relates one to one to the traffic that is coming in, to concurrent requests. Yeah, so this helps a lot for non-uniform traffic shapes that I think this has become one of the ideal use cases. >> Yeah. But I think that is one of the most used or defined ones, but I do believe that you can write almost all applications. There are some, of course, that would not be the right load, but as long as you are handling state through an external mechanism. Let's say, for example you're using a database to save the state, or you're using a physical volume mount to save the state, it increases the density of your cluster because when they're running, the containers would pop up, when your application is not running, the container would go down, and the resources can be used to run any other application that you want to run, right? >> So, when I'm thinking about Lambda, I kind of get the event-driven nature of Lambda. I have an S3 bucket, and if an S3 event is driven, then my functions as the server will start, and that's kind of the listening servers. How does that work with Knative or a Kubernetes-based thing? 'Cause I don't have an event-driven thing that I can think of that kicks off, like, how can I do that in Kubernetes? >> So I'll start. So it is exactly the same thing. In Knative world, it's the container that's going to come up and your servers in the container, that will do the processing of that same event that you are talking. So let's say the notification came from S3 server when the object got dropped, that would trigger an application. And in world of Kubernetes, Knative, it's the container that's going to come up with the servers in it, do the processing, either find another server or whatever it needs to do.
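The scale-to-zero and request-concurrency autoscaling behavior described in this exchange is typically configured through annotations on a Knative Service resource. The manifest below is a rough sketch only; the service name, image, and numbers are invented for illustration and are not from the interview:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                    # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # let the revision scale all the way down to zero when idle
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "10"
        # target concurrent in-flight requests per pod, instead of the
        # CPU/memory signals a Horizontal Pod Autoscaler would react to
        autoscaling.knative.dev/target: "100"
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # any container image
```

Applied with `kubectl apply -f service.yaml`, Knative Serving creates a revision and scales its pods between zero and ten based on incoming request concurrency rather than resource utilization.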
>> So Knative is listening for the event, and when the event happens, then Knative executes the container. >> Exactly. >> Basically. >> So the concept of Knative source which is kind of adapted to the external world, for example, for the S3 bucket. And as soon as there is an event coming in, Knative will wake up that server, will transmit this event as a cloud event, which is another standard from the CNCF, and then when the server is done, then the server spins down again to zero so that the server is only running when there are events, which is very cost effective and which people really actually like to have this kind of way of dynamic scaling up from zero to one and even higher like that. >> Lambda has been sort of synonymous with serverless in the early going here, is Knative a competitor to Lambda, is it complementary? Would you use the two together? >> Yeah, I would say that Lambda is an offering from AWS, so it's a cloud service there. Knative itself is a platform, so you can run it in the cloud, and there are other cloud offerings like from IBM, but you can also run it on-premise for example, that's the alternative. So you can also have hybrid scenarios where you really can put one part into the cloud, the other part on-prem, and I think there's a big difference in that you have a much more flexibility and you can avoid this kind of vendor lock-in compared to AWS Lambda. >> Because Knative provides specifications and performance tests, so you can move from one server to another. If you are on an IBM offering that's using Knative, and if you go to a Google offering- >> A google offering. >> That's on Knative, or a Red Hat offering on Knative, it should be seamless because they're both conforming to the same specifications of Knative. Whereas if you are in Lambda, there are custom deployments, so you are only going to be able to run those workloads only on AWS.
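The "cloud event" mentioned above refers to the CNCF CloudEvents specification, which in its HTTP binary content mode carries an event's context attributes as `ce-*` headers. As a hedged sketch of the receiving side (this is not Knative's actual implementation, and the header values are made up), a service could extract those attributes like this:

```python
REQUIRED_ATTRS = ("ce-id", "ce-source", "ce-specversion", "ce-type")

def parse_cloudevent_headers(headers: dict) -> dict:
    """Extract CloudEvents context attributes from HTTP headers
    (binary content mode), e.g. ce-id, ce-type, ce-source."""
    lowered = {k.lower(): v for k, v in headers.items()}
    missing = [h for h in REQUIRED_ATTRS if h not in lowered]
    if missing:
        raise ValueError(f"not a CloudEvent, missing headers: {missing}")
    # strip the "ce-" prefix so callers see plain attribute names
    return {k[3:]: v for k, v in lowered.items() if k.startswith("ce-")}

# Hypothetical headers for an S3-style "object created" notification:
event = parse_cloudevent_headers({
    "Ce-Id": "1234",
    "Ce-Source": "/mybucket",
    "Ce-Specversion": "1.0",
    "Ce-Type": "com.example.s3.object.created",
    "Content-Type": "application/json",
})
print(event["type"])  # com.example.s3.object.created
```

A Knative Eventing source or broker sets headers like these when it POSTs the event to the subscribed service; the function above only shows the attribute-extraction step.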
>> So KnativeCon, co-located event as part of KubeCon, I'm curious as to the level of effort in the user interaction for deploying Knative. 'Cause when I think about Lambda or Cloud Run or one of the other functions-as-a-service offerings, there is no backend that I have to worry about. And I think this is where some of the debate becomes over serverless versus some other definition. What's the level of lifting that needs to be done to deploy Knative in my Kubernetes environment? >> So if you like... >> Is this something that comes as part of the OpenShift install or do I have to like, you know, I have to... >> Go ahead, you answer first. >> Okay, so actually for OpenShift, it's a layered product. So you have this catalog of operators that you can choose from, and OpenShift Serverless is one part of that. So it's really kind of a one click install where you also get a default configuration, you can flexibly configure it as you like. Yeah, we think that's a good user experience and of course you can go to these cloud offerings like Google Cloud Run or IBM Code Engine, they just have everything set up for you. And there are different alternatives: you have (indistinct) charts, you can install Knative in different ways, you also have options for the backend systems. For example, we mentioned that when an event comes in, then there's a broker in the middle of something which dispatches all the events to the servers, and there you can have a different backend system like Kafka or AMQ. So you can have a very production grade messaging system which really is responsible for delivering your events to your servers. >> Now, Knative has recently, I'm sorry, did I interrupt you? >> No, I was just going to say that Knative, when we talk about, we generally just talk about the serverless deployment model, right? And the Eventing gets eclipsed in. That Eventing which provides this infrastructure for producing and consuming events is an inherent part of Knative, right?
So you install Knative, you install Eventing, and then you are ready to connect all your disparate systems through Events. With CloudEvents, that's the specification we use for consistent and portable events. >> So Knative recently admitted to the, or accepted by the Cloud Native Computing Foundation, incubating there. Congratulations, it's a big step. >> Thank you. >> Thanks. >> How does that change the outlook for Knative adoption? >> So we get a lot of support now from the CNCF which is really great, so we could be part of this conference, for example which was not so easy before that. And we see really a lot of interest and we also heard before the move that many contributors were not, started into looking into Knative because of this kind of not being part of a neutral foundation, so they were kind of afraid that the project would go away anytime like that. And we see the adoption really increases, but slowly at the moment. So we are still ramping up there and we really hope for more contributors. Yeah, that's where we are. >> CNCF is almost synonymous with open source and trust. So, being in CNCF and then having this first KnativeCon event as part of KubeCon, we are hoping, and it's a recent addition to CNCF as well, right? So we are hoping that these events and these interviews will catapult more interest into serverless. So I'm really, really hopeful and I only see positive from here on out for Knative. >> Well, I can sense the excitement. KnativeCon sold out, congratulations on that. >> Thank you. >> I can talk about serverless all day, it's a topic that I really love, it's a fascinating way to build applications and manage applications, but we have a lot more coverage to do today on "theCUBE" from Spain. From Valencia, Spain, I'm Keith Townsend along with Paul Gillin, and you're watching "theCUBE," the leader in high-tech coverage. (gentle upbeat music)

Published Date : May 19 2022


Jeanna James, AWS | VeeamON 2022


 

(bright upbeat music) >> Welcome back to theCUBE's coverage of VeeamON 2022. We're here at the Aria in Las Vegas. This is day two, Dave Vellante with David Nicholson. You know with theCUBE, we talked about the cloud a lot and the company that started the cloud, AWS. Jeanna James is here. She's the Global Alliance Manager at AWS and a data protection expert. Great to see you. Thanks for coming on theCUBE again. >> Thanks so much for having me, Dave. It's great to be here in person with everyone. >> Yes, you know, we've done a few events live more than a handful. Thanks a lot to AWS. We've done a number. We did the DC Summits. Of course, re:Invent was huge out here last year. That was right in between the sort of variant Omicron hitting. And it was a great, great show. We thought, okay, now we're back. And of course we're kind of back, but we're here and it's good to have you. So Veeam, AWS, I mean, they certainly embrace the cloud. What's your relationship there? >> Yeah, so Veeam is definitely a strong partner with AWS. And as you know, AWS is really a, you know, we have so many different services, and our customers and our partners are looking at how can I leverage those services and how do I back this up, right? Whether they're running things on premises and they want to put a copy of the data into Amazon S3, Amazon S3 Infrequent Access or Amazon S3 Glacier Deep Archive, all of these different technologies, you know, Veeam supports them to get a copy from on-prem into AWS. But then the great thing is, you know, it's nice to have a copy of your data in the cloud but you might want to be able to do something with it once it gets there, right? So Veeam supports things like Amazon EC2 and Amazon EKS and EKS Anywhere. So those customers can actually recover their data directly into Amazon EC2 and EKS Anywhere. >> So we, of course, talked a lot about ransomware and that's important in that context of what you just mentioned.
What are you seeing with the customers when you talk to them about ransomware? What are they asking AWS to do? Maybe we could start unpacking that a bit. >> Yeah, ransomware is definitely a huge topic today. We're constantly having that conversation. And, you know, five years ago there was a big malware attack that was called the NotPetya virus. And at that time it was based on Petya which was a ransomware virus, and it was designed to go in and, you know, lock in the data but it also went after the backup data, right? So it held all of that data hostage so that people couldn't recover. Well, NotPetya was based on that but it was worse because it was the seek and destroy virus. So with the ransomware, you can pay a fee and get your data back. But with this NotPetya, it just went in, it propagated itself. It started installing on servers and laptops, anything it could touch and just deleting everything. And at that time, I actually happened to be in the hospital. So hospitals, all types of companies got hit by this attack. And my father had been rushed to the emergency room. I happened to be there. So I saw live what really was happening. And honestly, these network guys were running around shutting down laptops, taking them away from doctors and nurses, shutting off desktops. Taping up pictures that said, do not turn on, right? And then, the nurses and staff were having to kind of take notes. And it was just, it was a mess, it was bad. >> Putting masks on the laptops essentially. >> Yeah, so just-- >> Disinfecting them or trying to. Wow, unplugging things from the network. >> Yes, because, you know, and that attack really demonstrated why you really need a copy of the data in the cloud or somewhere besides tape, right?
So what happened at that time is if you lose 10 servers or something, you might be able to recover from tape, but if you lose a hundred or a thousand servers and all of your laptops, all in hours, literally a matter of hours, that is a big event, it's going to take time to recover. And so, you know, if you put a copy of the backup data in Amazon S3 and you can turn on that S3 Object Lock for immutability, you're able to recover in the cloud. >> So, can we go back to this hospital story? 'Cause that takes us inside the disaster potential. So they shut everything down, basically shut down the network so they could figure out what's going on and then fence it off, I presume. So you got, wow, so what happened? First of all, did they have to go manual, I mean? >> They had to do everything manually. It was really a different experience. >> Going back to the 1970s, I mean. >> It was, and they didn't know really how to do it, right? So they basically had kind of yellow notepads and they would take notes. Well, then let's say the doctor took notes, well, then the nurse couldn't read the notes. And even over the PA, you know, there was an announcement and it was pretty funny. Don't send down lab work request with just the last name. We need to know the first name, the last name, and the date of birth. There are multiple Joneses in this hospital so yeah (giggles). >> This is going to sound weird. But so when I was a kid, when you worked retail, if there was a charge for, you know, let's say $5.74 and, you know, they gave you, you know, amount of money, you would give them, you know, the penny back, count up in your head that's 75, give them a quarter and then give them the change. Today, of course, it works differently. The computer tells you, how much change to give. It's like they didn't know what to do. They didn't know how to do it manually 'cause they never had the manual process. >> That's exactly right. Some of the nurses and doctors had never done it manually. 
>> Wow, okay, so then technically they have to figure out what happened, so that takes some time. However they do that. That's kind of not your job, right? I dunno if you can help with that or not. Maybe Amazon has some tooling to do that, probably does. And then you've got to recover from somewhere, not tape ideally. That's like the last resort. You put it on a Chevy Truck, Chevy Truck Access Method, called CTAM, ship it in. That takes days, right? If you're lucky. So what's the ideal recovery? I presume it's a local copy somewhere. >> So the ideal-- >> It's fenced. >> In that particular situation, right? They had to really air gap so they couldn't even recover on those servers and things like that-- >> Because everything was infected on-prem. >> Because everything was just continuing to propagate. So ideally you would have a copy of your data in AWS and you would turn on Object Lock, which is the immutability, a very simple check mark in Veeam to enable that. And then you would be able to kick off your restores in Amazon EC2 and start running your business so. >> Yeah, this ties into the discussion of the ransomware survey where, you know, NotPetya was not seeking to extort money, it was seeking to just simply arrive and destroy. In the ransomware survey, some percentage of clients who paid ransom never got their data back anyway. >> Oh my. >> So you almost have to go into this treating-- >> Huge percentage. >> Yeah, yeah, yeah. >> Like a third. >> Yeah, when you combine the ones where there was no request for ransom, you know, for any extorted funds, and then the ones where people paid but got nothing back. I know Maersk Line, the shipping company, is a well studied example of what happened with NotPetya. And it's kind of chilling because what you describe, people running around shutting down laptops because they're seeing all of their peers' screens go black. >> Yes, that's exactly what's happening. >> And then you're done.
So that end point is done at that point. >> So we've seen this, I always say there are these milestones in attacks. I mean, Stuxnet proved what a nation state could do and others learned from that, NotPetya, now SolarWinds. And people are freaking out about that because it's like maybe we haven't seen the last of that 'cause that was highly stealth, not a lot of, you know, Russian language in the malware. They would delete a lot of the malware. So very highly sophisticated island hopping, self-forming malware. So who knows what's next? We don't know. And so you're saying the ideal is to have an air gap that's physically separate. Maybe you can have one locally as well, we've heard about that too, and then you recover from that. What are you seeing in terms of your customers recovering from that? Is it taking minutes, hours, days? >> So that really depends on the customer's SLAs, right? And so with AWS, we offer multiple tiers of storage classes that provide different SLA recovery times, right? So if you're okay with data taking longer to recover, you can use something like Amazon S3 Glacier Deep Archive. But if it's mission critical data, you probably want to put it in Amazon S3 and turn on that Object Lock for immutability's sake. So nothing can be overwritten or deleted. And that way you can kick off your recoveries directly in AWS. >> One of the demos today that we saw, the recovery was exceedingly fast with a very small data loss, so that's obviously a higher level SLA. You get what you pay for. A lot of businesses need that. I think they said four minutes of data loss, which is good. I'm glad they didn't say zero data loss 'cause there's really no such thing. So you've got experience, Jeanna, in the data protection business. How have you seen data protection evolve in the last decade and where do you see it going? Because let's face it, I mean when AWS started, okay, it had S3, 15 years ago, 16 years ago, whatever it was.
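The trade-off Jeanna outlines, pick a storage tier by how fast you need the data back and lock the copy against overwrites, can be sketched roughly like this (the storage class names and the Object Lock request shape follow the public S3 API, but treat the retrieval-time threshold and retention values as illustrative assumptions):

```python
def choose_storage_class(rto_hours: float) -> str:
    """Rough tiering rule: Glacier Deep Archive restores can take on the
    order of 12 hours, so tighter recovery objectives stay in S3 Standard."""
    return "DEEP_ARCHIVE" if rto_hours >= 12 else "STANDARD"

def object_lock_config(retention_days: int) -> dict:
    """Request body shaped for s3.put_object_lock_configuration(...) -- the
    immutability 'check mark': objects can't be overwritten or deleted
    until the retention window passes."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE",
                                      "Days": retention_days}},
    }

# Mission-critical backups: keep them hot, locked for 30 days.
print(choose_storage_class(rto_hours=1.0))  # STANDARD
print(object_lock_config(30))
```

With boto3 the second dict would be passed as the `ObjectLockConfiguration` argument; note that Object Lock typically has to be enabled when the bucket is created.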
Now, it's got all these tools as you mentioned. So you've learned, you've innovated along with your customers. You listened to your customers. That's your whole thing, customer obsession. >> That's right. >> What are they telling you? What do you see as the future? >> Definitely, we see more and more containerization. So you'll see with the Kasten by Veeam product, right? The ability to protect Amazon EKS, and Amazon EKS Anywhere, we see customers really want to take advantage of the ability to containerize and not have to do as much management, right? So much of what we call undifferentiated heavy lifting, right? So I think you'll see continued innovation in the area of containerization, you know, serverless computing. Obviously with AWS, we have a lot going on with artificial intelligence and machine learning. And, you know, the backup partners, they really have a unique capability in that they do touch a lot of data, right? So I think in the future, you know, things around artificial intelligence and machine learning and data analytics, all of those things could certainly be very applicable for folks like Veeam. >> Yeah, you know, we give a lot of, we acknowledge that backup is different from recovery but we often fall prey to making the mistake of saying, oh, well your data is available in X number of minutes. Well, that's great. What's it available to? So let's say I have backed up to S3 and it's immutable. By the way, my wife keeps calling me and saying she wants mutability for me. (Jeanna laughs) I'm not sure if that's a good thing or not. But now I've got my backup in S3, which begs the question, okay, well, now what do I do with it? Well, guess what, you mentioned EC2. >> That's right. >> The ability exists to create a restore environment so that not only is the data available but the services are actually online and available-- >> That's right-- >> Which is what you want with EKS and Kasten.
>> So if the customer is running, you know, Kubernetes, they're able to recover as well. So yes, definitely, I see more and more services like that where customers are able to recover their environment. It might be more than just a server, right? So things are changing. It's not just one, two, three, it's the whole environment. >> So speaking of the future, one of the last physical theCUBE interviews that Andy Jassy did with us, John Furrier and myself, we were asking about the edge and he had a great quote. He said, "Oh yeah, we look at the data center as just another edge node." I thought that was good classic Andy Jassy depositioning. And so it was brilliant. But nonetheless, we've talked a little bit about the edge. I was interviewing Verizon last week, and they told me they're putting Outposts everywhere, like leaning in big time. And I was saying, okay, but Outposts, you know, what can you do with Outposts today? Oh, you can run RDS. And, you know, there's a few ecosystem partners that support it, and he's like, oh no, we're going to push Amazon. So what are you seeing at the edge in terms of data protection? Are customers giving you any feedback at this point? >> Definitely, so edge is a big deal, right? Because some workloads require that low latency, and things like Outposts allow customers to take advantage of the same API sets that they love in, you know, AWS today, like S3, right? For example. So they're able to deploy an Outpost and meet some of those specific guidelines that they might have around compliance or, you know, various regulations, and then have that same consistent operational stance whether they're on-prem or in AWS. So we see that as well as the Snowball devices, you know, they're being really hardened so they can run in areas that don't have connected, you know, interfaces to the internet, right? So you've got them running in, like, ships or, you know, airplanes, or a field somewhere out in the middle of nowhere, right?
So lots of interesting things going on there. And then of course with IoT and the internet of things and so many different devices out there, we just see a lot of change in the industry and how data is being collected, how data's being created, so a lot of excitement. >> Well, so the partners are key for Outposts obviously 'cause you can't do it all yourself. It's almost, okay, Amazon now in a data center or an edge node. It's like me skating. It's like, hmm, I'm kind of out of my element there but I think you're learning, right? So, but partners are key to be able to support that model. >> Yes, definitely our partners are key. Veeam, of course, supports Outposts. They support the Snowball Edge devices. They do a lot. Again, they pay attention to their customers, right? Their customers are moving more and more workloads into AWS. So what do they do? They start to support those workloads, right? Because the customers also want that consistent, like we say, the consistent APIs with AWS. Well, they also want the consistent data protection strategy with Veeam. >> Well, the cloud is expanding. It's no longer just a bunch of remote services somewhere out there in the cloud. It's going to data centers. It's going out to the edge. It's going to Local Zones. You guys just announced a bunch of new Local Zones. I'm sure there are a lot of Outposts in there, expanding your regions. Super cloud is forming right before our eyes. Jeanna, thanks so much for coming to theCUBE. >> Thank you. It's been great to be here. >> All right, and thank you for watching theCUBE's coverage. This is day two. We're going all day here, myself, Dave Nicholson, cohost. Check out siliconangle.com for all the news, thecube.net, wikibon.com. We'll be right back right after this short break. (bright upbeat music)

Published Date : May 18 2022

Matt Coulter, Liberty Mutual | AWS re:Invent 2021


 

(upbeat music) >> Good afternoon and welcome back to Las Vegas. You're watching theCUBE's coverage of AWS re:Invent 2021. My name is Dave Vellante. theCUBE goes out to the events. We extract the signal from the noise. Very few physical events this year, doing a lot of hybrid stuff. It's great to be back in hybrid event... Physical event land, 25,000 people here. Probably a few more registered than that. And then on the periphery, got to be another at least 10,000 people that came in, flew in and out, to see what's happening. A bunch of VCs checking things out, a few parties last night and so forth. A lot of action here. It's like re:Invent is back. Matt Coulter is here. He's a technical architect at Liberty Mutual. Matt, thanks for flying in from Belfast. Good to see ya. >> Dave, and thanks for having me today. >> Pleasure. So what's your role as a technical architect? Maybe describe that, we'll get into it a little bit. >> Yeah so I am here to empower and enable our developers across the globe to rapidly deliver business value and solve problems for our customers in a well-architected way that doesn't introduce problems or risks, you know, later down the line. So instead of thinking of me as someone who directly, every day, builds software, I try to create the environment where other people can rapidly build software. >> That's, you know, it's interesting, because you're a developer, right? You could just say, like, "Hey, I code." That's what normally you would say, but you're actually creating frameworks and business models so that others can learn, teach them how to fish, so to speak. >> Yeah, because I can only scale a certain amount. Whereas if I can teach, there's 5,000 people in Liberty Mutual's tech organization. So if I can teach the 5,000 to be 5% better, it's way more than me, even if I 10Xed. >> When did you first touch the Cloud? >> Personally, it would have been four or five years ago. That's when I started in the Cloud. >> What was that experience like for you?
>> Oh, it was hard. It was very different to anything that we'd done in the past. So it's because you... Traditionally, you would have just written your small piece of code. You would have had a big application that was out there, it had been out there maybe 20 years, it was deployed, and you were just adding a couple of lines. Whereas when you start putting stuff into the Cloud, it's out there. It's on the internet for anyone there to try and hack or try to get into. It was a bit overwhelming, the amount that you needed to learn. So it was- >> Was it worth it? >> Oh yeah. Completely. (laughing) So that's the thing, I would never go back to the way we did things before. And that's why I'm so passionate, enthusiastic about the stuff I've been doing. Because to me, the amount of benefits you can get, like now we can deliver things. We have teams going out there and doing discovery and framing with the business. And they're pushing well-architected products three days later into production. That was unheard of before, you know, this year. >> Yeah. So you were part of Werner's keynote this morning. Of course that's always one of the keynotes that's most anticipated at re:Invent. It's on the sort of last day. He's awesome. This is, you know, the 10th year of re:Invent. He sort of did a look back. He started out (chuckles) he's just a cool guy and very passionate. But talk about what your role was in the keynote. >> Yeah so I had a section towards the end of the keynote, and I got to talk about Liberty Mutual's serverless-first journey. I actually went through, from 2014 through to the current day, all the major Cloud milestones that we've hit. And I talked through some of the impact it's had on our business and the impact it's had on our developers. And yeah, it's just been this incredible journey where, as I said, it was hard at the start.
So we had to spark this culture within our company that we were going to empower and enable our developers and we were going to get them excited about doing this. And that's why we needed to make it safe. So there was a lot of work that went on at the start to make the Cloud safe for our developers to experiment. And then the past two years have been, now that it's safe, okay? Let's see what it can do. Let's go. >> Yeah so Liberty Mutual has been around many, many years, Boston-based, you know, East Coast-based, my home city. I don't live in Boston but I consider it my city. And so talk about your business a little bit because you're an established company. I don't know, probably a hundred years old, right? And all these other newbies nipping at your business, right? Coming in with low-cost products. Maybe not bringing as much protection as you dig into it. But regardless, you've got to compete with them technically. So what are some of the drivers in your business and how are you using the Cloud to sort of defend your turf and grow? >> Yeah so first of all, we're 109 years old. (laughing) Yeah. So absolutely, there's an entire insurtech market of people here gunning for the big Liberty Mutual because we've been here for so long. And our whole thing is we're focused on our customers. So we want to be there for people in their time of need. Because at a point in time whenever you need insurance, typically something is going wrong. And that's why we're building innovative solutions like a serverless call center we built, that after a natural disaster, it can automatically process claims in less than four minutes. So instead of having to wait on hold for maybe an hour, you can just text or pick up the phone, and four minutes later your claims are through. And that's how we're using technology, always focused on the customer. >> That's unbelievable. Think about that experience. To me, I mean, I've filed claims before and it's, it's kind of time consuming.
And you're saying you've compressed that to minutes? Days, weeks, you know, and now you've compressed that to minutes? >> Yeah. >> Tell us more about how you did that. >> And that's because it's a fully serverless solution that was built. So it doesn't require, like, people to scale. It can scale to whatever number of our customers need to make a claim at that point, because that would typically be the bottleneck if there's some kind of natural disaster. So that means that if something happens we can just switch it on. And customers can choose not to use it. You can always choose to say I want to speak to a person. But now with this technology, we can just make it easy and just go. Everything, all the information we know in the back end, we just use it and actually make things better for you. >> You're talking about the impact that it had on your business and developers. So how do you quantify that? Maybe start with the business. Maybe share some of the ways in which you measure that. >> Yeah, so I mean, in terms of how we measure the impact of the Cloud on our business, we're always looking at our profitability and we're always looking, as I say, at our customers. And ideally, I want our Cloud bill to go down as our number of customers goes up, because that's why we're using the serverless-first mindset, we call it. We don't want to build anything we don't have to build. We want to take the best that's out there and just piece it together and produce these products for our customers. So yeah, that's having an impact on our business because now developers aren't spending weeks, months, years doing all this configuration. And they can actually sit down with the business and understand how we write insurance. So now we can start being innovative with our products and talking about the real business instead of everything else. >> When you say you want your Cloud bill to go down, you know, it reminds me like in the old days of IT budgeting, right?
It was always slash, do more with less, cut, cut, cut, right? And it was kind of going in cycles. But with the Cloud, a lot of customers that I talk to, they were like, it might be going down as a percentage of revenues, but actually it might be going up as you launch more projects because they're driving revenue. There's a tighter tie between revenue and Cloud bill. How do you look at that? >> Yeah. So I mean, with every project, you have to look at the worth-based development upfront and whether or not it's going to hold its own in the market. And the key thing is, with the serverless products that are being released now, they cost pennies if they're low scale. So you can actually launch a new product into the market and it maybe only costs you $20 to see if that thing would fit in the market. So by the time you're getting into the big bills, you know whether or not you've got a market fit and you can decide whether you want to pivot. >> Oh wow. So you've compressed, that's another business metric. You've compressed the time to get certainty around product market fit, right? Which is huge because you really can't go to market until you have product market fit (laughing) >> Exactly. You have to thoroughly understand if it's going to work. >> Right, because if you go to the market and you've got 50% churn. (laughing) Well, you don't want to be worried about the go-to-market. You got to get back to the product so you can test that and you can generate. >> So that's why, yeah, as I said, we have developers who can go out and do discovery and framing on a potential product and deliver it three days later, which (chuckles) >> How has the Cloud affected developer satisfaction or passion? I guess it's... I mean we're in AWS Cloud. Our developers, we tell them "Okay, you got to go back on-prem." They would say, "I quit." (laughing) How has it affected their lives? >> Yeah, it's completely different for them, it's way better.
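Matt's point that a low-scale serverless product "maybe only costs you $20" to test follows from pay-per-use pricing; a back-of-envelope sketch (the per-request and per-GB-second rates below are assumed, ballpark Lambda-style numbers, not quoted prices):

```python
# Back-of-envelope pay-per-use cost: a request fee plus compute (GB-seconds).
# Default rates are ASSUMED Lambda-style numbers for illustration only.
def monthly_serverless_cost(requests: int, avg_ms: float, memory_gb: float,
                            per_million_requests: float = 0.20,
                            per_gb_second: float = 0.0000166667) -> float:
    """Ballpark monthly bill for a pay-per-use function (free tiers ignored)."""
    compute_gb_seconds = requests * (avg_ms / 1000.0) * memory_gb
    return (requests / 1_000_000 * per_million_requests
            + compute_gb_seconds * per_gb_second)

# A prototype doing 100k requests/month at 200 ms and 512 MB:
print(f"${monthly_serverless_cost(100_000, 200, 0.5):.2f}")  # pennies, not dollars
```

The bill only becomes material once traffic does, which is exactly why product-market-fit experiments get cheap.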
So now we have way more ownership over, you know, everything we ever did. So it feels like you're truly a part of Liberty Mutual and you're solving Liberty's problems now. Because it's not a case of like, "Okay, let's put in a request to stand up a server, it's going to take six months. And then let's do some big long acquisition." It's a case of like, "Let's actually get down into the nitty gritty of what we're going to build." And that's- >> How do you use the Cloud Development Kit? Maybe you could talk about that. I mean, explain what it is. It's a framework. But explain it from your perspective. >> Yeah so the Cloud typically, it started off, and a lot of it was done by Cloud infrastructure engineers who created these big YAML files. That's how they defined all the stuff that's going to be deployed. But that's not typically the development language that most developers use. The CDK is in, like, Java, TypeScript, .NET, Python. The languages developers already know and love. And it means that they can use everything they already know from all of their previous development experience and bring it to the Cloud. And you see some benefits like, you get, I talked about this morning, a 1500-line YAML file was reduced to 14 lines of TypeScript. And that's what we're talking about with the cognitive difference for a developer using CDK versus anything else. >> Cognitive abstraction, >> Right? >> Yeah. And so it just simplifies your life and you spend more time doing cool stuff. >> Yeah, we can write an abstraction for our specific needs once. And then everybody can use that abstraction. And if we want to make a change and make it better, everyone benefits instead of everybody doing the same thing all the time. >> So for people who are unfamiliar, what do you need? You need an AWS account, obviously. You got to get a command-line interface, I would imagine. Maybe some Node.js up and running, or is it- >> Yeah. So that's it.
You need an AWS account, and then you need to install CDK, which is from Node Package Manager. And then from there, it depends on which way you want to start. You could use my project, CDK Patterns, which has a whole array of working patterns that you can clone. You just have to type, like, one command and you've got a pattern, and then CDK deploy, and you'll have something working. >> Okay so what do you do day-to-day? You sort of, you evangelize folks to come in and get trained? Is there just like a backlog of people that want your time? How do you manage that? >> So I try to be the place that I'm needed the most based on impact to the business. And that's why I try to go in. Liberty's split up into different areas and I try to go into those areas, understand where they are versus where they need to be. And then if I can do that across everywhere, you can see the common themes. And then I can see where I can have the most impact across the board instead of focusing on one micro place. So there's a variety of tools and techniques that I would use, you know, to go through that, but that's the crux of it. >> So you look at your business across the portfolio, so you have a portfolio view. And then you do a gap analysis essentially, say "Okay, where can I approach this framework and technology from a developer standpoint, add value?" >> Yeah, like I could go into every single team with every single project, draw it all out in, like, what we call a Wardley map, and then you can draw a line and then say, "Everything below this line is undifferentiated heavy lifting. I want you to migrate that. And here's how you're going to do it, I've already built the tools for that." And that's how we can drive those conversations. >> So, you know, it's funny, I spent a lot of time in the insurance business, not in the business but consulting with heads of application development and looking at portfolios. And you know, they did their thing.
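The write-once abstraction Matt described, one blessed pattern that every team stamps out, can be sketched without the CDK libraries themselves (plain Python stands in for a construct here; the function and field names are invented for illustration):

```python
# A team-wide "pattern": one function stamps out resources with hardened
# defaults, so improving it once improves every stack that uses it.
def secure_bucket(name: str, retention_days: int = 30) -> dict:
    """Plain-Python stand-in for a reusable CDK construct (names invented)."""
    return {
        "type": "bucket",
        "name": name,
        "encryption": "aws:kms",
        "versioning": True,
        "retention_days": retention_days,
    }

# Teams reuse the abstraction instead of re-writing the same config:
claims = secure_bucket("claims-archive")
quotes = secure_bucket("quotes-archive", retention_days=90)
print(claims["encryption"], quotes["retention_days"])  # aws:kms 90
```

The point is the leverage: hardening `secure_bucket` once upgrades every stack that uses it, which is the same economics as a shared CDK construct.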
But you know, a lot of people sort of question, "Can developers in an insurance company actually become cool Cloud native developers?" You're doing it, right? So that's going to be an amazing transformation for your colleagues and your industry. And it's happening as we look around here (indistinct) >> And that's the thing, in Liberty I'm not the only one. So there's Tommy Gloklin, he's an AWS hero, and there's Diali Mikan, who's an AWS hero. And Diali is in Workgrid but we're still all the same family. >> So what does it mean to be an AWS hero? >> Yeah so this is something that AWS has to invite you to join. So basically, it's about impacting the community. It's not... There's not like a checklist of items you can go through and you're a hero. You have to be nominated internally through AWS, and then you have to have the right intentions. And yeah, just follow through. >> Dave: That's awesome. Yeah, so our producer, Lynette, is looking for an Irish limerick. You know, I say I'm half Irish, it's through my marriage. Dad, you didn't know that, did you? And every year we have a St Patrick's Day party and my daughter comes up with limericks. So I don't know, if you have one that you want to share. If you don't, that's fine. >> I have no limericks for now. I'm so sorry. (laughing) >> There once was a producer from, where are you from? (laughing) So where do you want to take this, Matt? What does your future look like with this program? >> So right now, today, I actually launched a book called The CDK Book. >> Dave: Really? Awesome. >> Yeah, so me and three other heroes got together and put everything we know about CDK and distilled it into one book. But the... I mean there's two sides, there's inside Liberty. The goal, as I've mentioned, is to get our developers to the point that they're talking about real insurance problems rather than tech. And then outside Liberty, in the community, the goal is things like CDK Day, which is a global conference that I created and run.
And I want to just grow those farther and farther throughout the world so that eventually we can start learning, you know, cross-business, cross-market, cross-domain, instead of just internally one company. >> It's impressive how tuned in you are to the business. Do you feel like the Cloud almost forces that alignment? >> It does. It definitely does. Because when you move quickly, you need to understand what you're doing. You can't bluff almost, you know. Like everything you're building you're demonstrating that every two weeks or faster. So you need to know the business to do it. >> Well, Matt, congratulations on all the great work that you've done and the keynote this morning. You know, true tech hero. We really appreciate your time coming on theCUBE. >> Thank you, Dave, for having me. >> Our pleasure. And thank you for watching. This is Dave Vellante for theCUBE at AWS re:Invent. We are the leader in global tech coverage. We'll be right back. (light upbeat music)

Published Date : Dec 3 2021

Webb Brown | KubeCon + CloudNativeCon NA 2021


 

>> Welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 21, live from Los Angeles. Lisa Martin, with Dave Nicholson. And we've got a CUBE alum back with us. Webb Brown is back. The co-founder and CEO of Kubecost. Welcome back! >> Thank you so much. It's great to be back. It's been right at two years. A lot's happened in our community and ecosystem, as well as with our open source project and company. So awesome with that. >> Give the audience an overview in case they're not familiar with Kubecost. And then talk to us about this explosive growth that you've seen since we last saw you in person. >> Yeah, absolutely. So Kubecost provides cost management solutions purpose-built for teams running Kubernetes and Cloud Native, right? So everything we do is built on open source. All of our products can be installed in minutes. We give teams visibility into spend, then help them optimize it and govern it over time. So it's been a busy two years since we last talked. We have grown the team about, you know, 5x, so like right around 20 people today. We now have thousands of mostly medium and large sized enterprises using the product. You know, that's north of a 10x growth since we launched just before, you know, KubeCon San Diego, now managing billions of dollars of spend and, you know, I feel like we're just getting started. So it's an incredibly exciting time for us as a company and also just great to be back in person with our friends in the community. >> This community is such a strong community. And it's great to see people back here. I agree. >> Absolutely, absolutely. >> So Kubecost, obviously you talk about cost optimization, but really, you're an insight engine in the sense that if you're looking at costs, you have to measure that against what you're getting for that cost. >> Absolutely. So what are some of the insights that your platform or that your tool set offers?
>> Yeah, absolutely. So, you know, we think about our product as, first and foremost, visibility and monitoring, then insights and optimization, and then governance. If you talk to most teams today, they're still kind of getting that visibility, but once you do, it quickly leads into: how do we optimize? And then we're going to give you insights at every part of the stack, right? So at the infrastructure layer, thinking about things like Spot and RIs and savings plans, et cetera. At the Kubernetes orchestration layer, thinking about things like auto scaling and setting requests and limits, et cetera. All the way up to the application layer, with all of that being purpose-built for Cloud Native and Kubernetes. So the way we work is, you deploy our product in your environment; anywhere you're running Kubernetes, 1.11 or above, we'll run. And we're going to start dynamically generating these insights in minutes, and they're real time. And again, they scale to the largest Kubernetes clusters in the world. >> And you said you've had a thousand or so customers in the medium to large enterprise. These are large organizations, probably brand names I imagine we are familiar with, that are leaning on Kubecost to help get that visibility that before they did not have the ability to get. >> Absolutely, absolutely. So our users, of our thousands of users, skew heavily towards medium and large sized enterprises. Working with some amazing companies like Adobe, who just have such high scale and complex and sophisticated infrastructure. So, you know, I think this is very natural and what we expect, which is, as you start spending more resources, missing visibility and having unoptimized infrastructure start to be more costly. >> Absolutely. >> And we typically see, once that gets into the multiple head count, right?
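The infrastructure-layer insight Webb mentions, weighing Spot against reserved and on-demand capacity, can be sketched in a few lines. This is an illustrative toy, not Kubecost's actual logic, and the hourly prices are made up for the example:

```python
# Hypothetical hourly prices for a single node type; real numbers would
# come from the cloud provider's pricing or billing APIs.
PRICES = {"on_demand": 0.096, "reserved_1yr": 0.060, "spot": 0.029}

def cheapest_option(interruptible: bool) -> str:
    """Return the lowest-cost purchase option a workload can safely use.

    Spot capacity can be reclaimed at any time, so it is only
    considered when the workload tolerates interruption.
    """
    options = dict(PRICES)
    if not interruptible:
        del options["spot"]  # stateful services should avoid Spot
    return min(options, key=options.get)
```

A fault-tolerant batch job lands on Spot; a stateful service falls back to the reserved rate. The real decision also weighs interruption rates and commitment terms, which this sketch ignores.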
And it starts to make sense to spend some time optimizing and monitoring and putting the learning in place, so you can manage it more effectively as time goes on. >> Do you have any metrics, or any X-factor ranges, of the costs that you've actually saved customers? >> Yeah. I mean, we've saved multiple customers many millions of dollars at this point. >> So we're talking big. >> Really big. So yeah, we're now managing more than $2 billion of spend. So some really big savings on a per customer basis, but it's really common that we're saving north of 30%, sometimes up to 70%, on your Kubernetes and related spend. And so we're giving you insights into your Kubernetes cluster, and again, the full stack there, but also giving you visibility and insights into external things, like external disk or cloud storage buckets or cloud SQL, that sort of stuff, external cloud services. >> Taking those blinders off. >> Exactly. And giving you that unified, real time picture, again, that accurately reflects everything that's going on in your system. >> So when these insights are produced or revealed, are the responses automated? Or are they then manually applied? >> Yeah, that's a great question. We support both, and we support both in different ways. By default, when you deploy Kubecost, and today it's a Helm install, it can be running in your cluster in minutes or less; it's deployed in read only mode. And by the way, you don't share any data externally; it's all in your local environment. So we start generating these insights right when you install in your environment. >> Let me ask you about, I'm sorry to interrupt, but when you say you're generating an insight, are you just giving an answer and guidance? Or are you providing the reader background on what leads to that insight? >> Yeah.
You know, is that a philosophical question of, do you need to provide the user rationale for the insight? >> Yeah, absolutely. And I think we're doing this today and we'll do more, but one example is, if you just look at this notion of setting requests and limits for your applications in Kubernetes: in simple form, if you set a request too high, you're potentially wasting money, because the Kubernetes scheduler is reserving that resource for you. If you set it too low, you're at risk of being CPU throttled, right? So communicating that symbiotic relationship, and the risk on either side, really helps the team understand why do I need to strike this balance, right? It's not just cost; it's performance and reliability as well. So we're absolutely giving that background. And again, out of the box we're read only, but we also have automation in our product with our cluster controller. So you can dynamically do things like right-size your infrastructure, or move workloads to Spot, et cetera. But we also have integrations with a bunch of tooling in this ecosystem. So like Prometheus native, Alertmanager native; we just launched an integration with Spinnaker and Armory, where you can dynamically, at the time of deployment, right-size and have insights. So you can expect to see more from us there. But we very much think about automation as twofold. One, building trust in Kubecost and our insights, and adopting them over time. But then two is meeting you where you are with your existing tooling, whether it's your CI/CD pipeline, observability, or your existing kind of workflow automation system. >> Meeting customers where they are is critical these days. >> Absolutely. I think, especially in this market, right? Where we have the potential to have so much interoperability, and all these things working in harmony. And also, you know, there's a lot of booths back here, right?
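The request-versus-limit trade-off Webb describes can be put in numbers. The sketch below is illustrative, not Kubecost's implementation; it just expresses the two failure modes, reserved-but-idle CPU that wastes money, and usage above the request that risks throttling:

```python
def request_gap(request_mcpu: int, usage_p99_mcpu: int, price_per_cpu_hr: float) -> dict:
    """Compare a container's CPU request (in millicores) against its
    observed p99 usage.

    Over-requesting wastes money, since the scheduler reserves the full
    request; under-requesting risks CPU throttling under load.
    """
    idle_mcpu = max(request_mcpu - usage_p99_mcpu, 0)
    return {
        "wasted_per_hour": round(idle_mcpu / 1000 * price_per_cpu_hr, 4),
        "throttle_risk": usage_p99_mcpu > request_mcpu,
    }
```

For example, a 2000m request against 500m of p99 usage pays for 1.5 idle CPUs every hour, while a 200m request against the same usage wastes nothing but flags a throttling risk.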
So we have complex tech stacks, and in certain cases we feel like when we bring you to our UIs or APIs or automation or CLIs, we can do things more effectively. But oftentimes when we bring that data to you, we can be more effective; again, that's bringing your data to Chronosphere or Prometheus or Grafana, all of the tooling that you're already using on a daily, regular basis. >> Bringing that data into the tool is just another example of the value in data, that organizations can actually harness that value and unlock it. >> Webb: Yeah. >> There's so much potential there for them to be more competitive, for them to be able to develop products and services faster. >> Absolutely. Yeah, I think you're just seeing the coming of age with cost metrics in that equation. We now live in a world with Kubernetes as this amazing innovation platform, where as an engineer I can go spin up some pretty costly resources really fast, and that's a great thing for innovation, right? But it also kind of pushes some of the accountability or awareness down to the individual IC, who needs to be aware of what things generally cost, at a minimum in a directional way, so they can make informed decisions, again, when they think about this cost, performance, reliability trade-off. >> Lisa: Where are your customer conversations? Are your target users DevOps folks? I was just wondering where finance might be in this whole game. >> Yeah, it's a great question. Given the fact that we are kind of open source first and started with open source, 95% of the time when we start working with an infrastructure engineering team or DevOps team, they've already installed our product.
They're already familiar with what we're doing. But then increasingly, and increasingly fast, finance is being brought into the equation, and management is being brought into the equation. And I think it's a function of what we were talking about, where 70% of teams grew their Kubernetes spend over the last year, and 20% of them more than doubled. So these are starting to be real expense items, where finance is increasingly aware of what's going on. So yeah, they're coming into the picture, but it typically starts with working with the infrastructure team that's actually putting some of these insights into action, or hooking us into their pipelines. >> When you think of developers going out and grabbing resources, and you think of an insight tool that looks at controlling cost, that could seem like an inhibitor. But really, if you're talking about how to efficiently use whatever resources you have access to in terms of dollars, you could sell this to the developers on that basis. It's like, look, you have these 10 things that you want to be able to do. If you don't optimize using a tool like this, you're only going to be able to do four of them. >> Without a doubt. Yeah. And, you know, us as a founding team, all engineers, we were the ones getting those questions: how have we already spent our budget on just this project, when we have these three others we want to do, right? Or why are costs going up as quickly as they are? What are we spending on this application? Instead of that being a manual lift, like, let me go do a bunch of analysis and come back with answers.
It's tools where not only can management answer those questions themselves, but engineering teams can make informed opportunity cost and optimization decisions themselves, whether it's tooling and automation doing it for them, or them applying things directly. >> Lisa: So a lot of growth. You talked about the growth in employees, the growth in revenue. What lies ahead for Kubecost? What are some of the things that are coming on the horizon that you're really excited about? >> Yeah, we very much feel like we're just getting started, you know, just like we feel this ecosystem and community is, right? Like there's been tons of progress all around, but wow, it's still early days. So we did raise five and a half million dollars from First Round, who is an amazing group to work with, at the end of last year. So by growing the engineering team, we're able to do a lot more. We've got a bunch of really big things coming across all parts of our product. One thing we're really excited about, that's in limited availability right now, is our first hosted solution. It's our first SaaS solution. And this is critically important to us, in that we want to give teams the option: if you want to own and control your data and never egress anything outside of your cluster, you can do that with our deployed product. You can do that with our open source. You can truly lock down namespace egress and never send a byte out. Or, if you'd like the convenience of us managing it for you, and being kind of stewards of your data, we're going to have a great offering there too. So that's in limited availability today.
We're going to have a lot more announcements coming there, but we see those being at feature parity between our enterprise offerings and our hosted solution. And just a lot more coming with visibility, some more GPU insights and metrics coming quickly, a lot more with automation coming, and then more integrations for governance. Again, we kind of talked about Spinnaker and things like that. A lot more really interesting ones coming. >> So five and a half million raised in the last round of funding. Where are you going to be applying that? What are some of the growth engines that you want to tune with that money? >> Yeah, so first and foremost, it was really growing the engineering team, right? So we've, you know, 4x'd the engineering team in the last year, and just have an amazing group of engineers. We want to continue to do that. >> Webb: We're kind of super early on the marketing and sales side. We're going to start thinking about that more and more. You know, our approach first off was: we want to solve a really valuable problem, and do it in a way that is super compelling. And we think that when you do that, good things happen. I think that's some of our Google background, which is like, you build a great search engine and good things generally happen. So we're just super focused on, again, working with great users, building great products that meet them where they are, and solving problems that are really important to them. >> Lisa: Awesome. Well, congratulations on all the trajectory of success since we last saw you in person. >> Thank you. >> Great to have you back on the show. Looking forward to it. So folks can go to www.kubecost.com to learn more and see some of those announcements coming down the pike. >> Absolutely, yeah. >> Don't you make it two years before you come back. >> Webb: I would love to be back.
I hope we're back bigger than ever next year, but it has been such a pleasure, last time and this time. Thank you so much for having me. I love being part of the show and the community at large. >> It's a great community, and we appreciate you sharing all your insights. >> Thank you so much. >> All right. For Dave Nicholson, I'm Lisa Martin, coming to you live from Los Angeles. This is theCUBE's coverage of KubeCon and CloudNativeCon 21. We'll be back with our next guest shortly. We'll see you there.

Published Date : Oct 15 2021



Business Update from Keith White, SVP & GM, GreenLake Cloud Services Commercial Business


 

(electronica music) >> Hello everybody. This is Dave Vellante, and we are covering HPE's big GreenLake announcements. We've got wall-to-wall coverage, a ton of content. We've been watching GreenLake since the beginning. And one of the things we said early on was: let's watch the cadence of innovations that HPE brings to the market, because that's what a cloud company does. So, we're here to welcome you. Keith White is here as the Senior Vice President and General Manager of GreenLake Cloud Services. He runs the commercial business. Keith, thanks for coming on. Help me kick off. >> Thanks for having me. It's awesome to be here. >> So you guys got some momentum: orders, 40% growth year on year. You got a lot of momentum, customer growth. >> Yeah, it's fantastic. It's 46%. >> Thank you for that clarification. 46. Big difference from 40 to 46. >> No, I think what we're seeing is the momentum happen in the marketplace, right? We have a scenario where we're bringing the cloud experience to the customer on their premises. They get to have it automated, self-serve, easy to consume. They pay for what they use. They can have it in their data center. They can have it at the edge. They can have it at the colo, and we can manage it all for them. And so they're really getting that true cloud experience, and we're seeing it manifest itself in a variety of different customer scenarios. You know, we talked at Discover about a lot of work that we're doing on the hybrid cloud side of the house, and a lot of work that we're doing on the edge side of things with our partners. But, you know, it's exciting to see the explosion of data, and how now we're providing this data capability for our customers. >> What are the big trends you're hearing from customers? And how is that informing what you're doing with GreenLake?
I mean, I feel like in a lot of ways, Keith, with what happened last year, you guys were in a better position maybe than most. But what are you hearing, and how is that informing your go-forward? >> Yeah, I think it's really three things with customers, right? First off: hey, we're trying to accelerate our digital transformation, and it's all becoming about the data. So help us monetize the data, help us protect that data, help us analyze it to make decisions. So, number one, it's all about data. Number two is: wow, this pandemic, you know, we need to look for cost savings. We still need to move our business forward, we've got to accelerate our business, but help me find some cost savings with respect to what I can do. And third, what we're hearing is: hey, there are a lot of different things happening with our workforce. They're working from home. They're working hybrid. Help us make sure that we can stay connected to those folks, but also in a secure way, making sure that they have all the tools and resources they need. So those are three of the big themes that we're seeing that GreenLake really helps address, with the data capabilities we're delivering now, with all the hybrid cloud capabilities, with the cost savings that we get with respect to our platform, as well as with solutions such as VDI or workforce enablement that we create from a solution standpoint. >> So what's the customer reaction? I mean, everybody now who has a big on-premises estate has an as-a-service capability. Customers are saying, oh yeah, oh yeah. How do you make it not "me too" in the customer conversations? >> Yeah. I think it turns into, you know, you have to bring the holistic solution to the customer. So yes, there's technology there, and we're hearing from some of the competitors out there: yeah, we're doing as a service as well. But maybe it's a little bit of storage here.
Maybe it's a little bit of networking there. Customers need that end-to-end solution. And so, as you've seen us announce over time, we've got the building blocks, of course: compute, storage and networking. But everything runs in a virtual machine, everything runs in a container, or everything runs on the bare metal itself. And that package that we've created for customers means that they can do whatever solution, or whatever workload, they want. So, if you're a hospital and you're running Epic for your electronic medical records, you can go that route. If you're upgrading SAP and you're using virtual machines at a very large scale, you can use GreenLake for that as well. So, as you go down the list, there are just so many opportunities with respect to bringing those solutions to our customers. And then you bring in our Pointnext capabilities to support that. You bring in our advisory and professional services, along with our ecosystem, to help enable that. You bring in our HPE Financial Services to help fund that digital transformation. And you've got the complete package. And that's why customers are saying: hey, you guys are now partners of ours. You're not just a hardware provider; you're a partner helping us solve our business problems and helping us accelerate our business. >> So what should people expect today? You guys got some announcements. What should people look for? >> Well, I think, as we've talked about, now we're providing much more capability around the data side of the house, because data is the gold, if you will, of a customer's environment. So first off, we want to do analytics. We want an open platform that provides a unified set of analytics capabilities. And this is where we have a real strong sweet spot, with respect to some of the software that we've built around Ezmeral, but also with the hardware capabilities.
As you know, we have everything all the way up to the Cray supercomputers that are doing the analytics for weather or financial data. So I think that's one of the key things. The second is: you've got to protect that data. And so if it's going to be on prem, I want to know that it's protected and secured. So how do I back it up? How do I have a disaster recovery plan? How do I watch out for ransomware attacks as well? So we're providing some capabilities there. And then, lastly, because of all the experience we have with our customers now implementing these hybrid solutions, they're saying: hey, help me with this edge-to-cloud framework, and how do I go and implement that on my own? And so we've taken all that experience, and we've bucketed it into our edge-to-cloud adoption framework, to provide that capability for our customers. So we're really excited about, again, talking about solutions, talking about accelerating your business, not just talking about technology. >> I said up at the top, Keith, that one of the ways I was evaluating you was the pace and the cadence of the innovations. Is that fair? How do you guys think about that internally? You're pushing yourselves to go faster, I'm sure you are, but what's that conversation like? >> I think it's a great question, because in essence, we're now pivoting the company holistically to being a cloud services and a software company. And that's really exciting, and we're seeing that happen internally. But this pace of innovation is really built on what customers are asking us for. So now we've grown to over 1200 customers worldwide, over $5 billion of total contract value, signing some large deals in a variety of solutions and workloads and verticals, et cetera. What we're now seeing is: hey, this is what we need. Help me with my internal IT out to my business groups.
Help me with my edge strategy as I build the factory of the future. Or help me with the data and analytics that I'm trying to accomplish for, say, diagnosis of x-rays, and capabilities such as Carestream, if you will. So it's exciting to see them come to us and say: these are the capabilities that we're requiring. And we've got our foot on the gas to provide that innovation, and we're miles ahead of the competition. >> All right, we've got an exciting day ahead. We've got all kinds of technology discussions, solution discussions. We're going to hear from the analyst community. Really bringing you the full package of announcements here. Keith, thanks for helping me set this up. >> Always. Yeah. Thanks so much for having me. >> I look forward to today. And thank you for watching. Keep it right there. Tons of content coming your way. You're watching theCUBE's coverage of HPE's big GreenLake announcement. Right back. (electronica music)

Published Date : Sep 28 2021



2021 084 Meena Gowdar


 

(bright music) >> Welcome to this session of the AWS EC2 15th birthday event. I'm your host, Lisa Martin. I'm joined by Meena Gowdar, the principal product manager for AWS Outposts at AWS. Meena, welcome to the program. >> Thanks, Lisa. It's great to be joining here today. >> So you were the first product manager hired to lead the development of the Outposts service. Talk to us about back in the day: the vision of Outposts at that time. >> Yeah, the Outposts vision has always been to extend the AWS experience to customers' on-premises locations, and provide a truly consistent hybrid experience, with the same AWS services, APIs and suite of tools available in the region. So we launched Outposts to support customers' workloads that cannot migrate to the region. These are applications that are sensitive to latency, such as manufacturing workloads and financial trading workloads. Then there are applications that do heavy edge data processing, like image-assisted diagnostics in hospitals, for example, or smart cities that are fitted with cameras and sensors that gather so much data. And then another use case was data residency requirements, where data needs to remain within certain jurisdictions. Now the AWS cloud is available in 25 regions, and we have seven more coming, but that doesn't cover every corner of the world, and customers want us to be closer to their end users. So Outposts allows them to bring the AWS experience wherever they want us to be. To answer your question about the use case evolution: along the way, in addition to the few that I just mentioned, we've seen a couple of surprises. The first one is application migration. It is an interesting trend from large enterprises that could run applications in the cloud, but must first rearchitect their applications to be cloud ready. These applications need to go through modernization while remaining in close proximity to other dependent systems.
So by using Outposts, customers can modernize and containerize using AWS services while they continue to remain on premises, before moving to the region. Here, Outposts acts as a launchpad, helping them make that leap to the region. We were also surprised by the different types of data residency use cases that customers are bringing to Outposts. For example, iGaming: as sports betting is a growing trend in many countries, it is also heavily regulated, requiring providers to run their applications within state boundaries. Outposts allows application providers to standardize on a common AWS infrastructure and deploy the application in as many locations as they want to scale. >> So a lot of evolution in a short time-frame. And I know that, as we're here talking about the EC2 15th birthday, Amazon EC2 is core to AWS, but it's also at the core of Outposts. How does EC2 work on Outposts? >> The simple answer is EC2 works on Outposts just the same as it does in the region, giving customers access to the same APIs, tools, and metrics that they are familiar with. With Outposts, customers access the capacity just like they would access it in an availability zone. Customers can extend their VPC from the region and launch EC2 instances using the same APIs, just like they would in the region. So they also get the benefit of all the tools, like auto scaling, CloudWatch metrics, and Flow Logs, that they are already familiar with. The other thing that I also want to share is, at GA, we launched Outposts with Gen 5 Intel Cascade Lake processor based instances; that's because they run on the AWS Nitro System. The Nitro System allows us to extend the AWS experience to a customer's location in a secure manner, and bring all the capabilities to manage and virtualize the underlying compute, storage and network capabilities, just the way we do in the region.
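The "same APIs" point can be made concrete with a small sketch. Launching an EC2 instance onto an Outpost is an ordinary RunInstances call whose subnet happens to live on the Outpost; nothing in the call shape changes. The AMI and subnet IDs below are hypothetical placeholders, not real resources:

```python
def run_instances_kwargs(ami_id: str, instance_type: str, subnet_id: str,
                         count: int = 1) -> dict:
    """Build RunInstances parameters.

    Nothing here is Outpost-specific: the only difference from an
    in-region launch is that subnet_id refers to a VPC subnet that was
    created on the Outpost.
    """
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "SubnetId": subnet_id,
    }

# Hypothetical IDs for illustration only.
params = run_instances_kwargs("ami-0example", "c5.large", "subnet-0exampleoutpost")
# With boto3 this would be: boto3.client("ec2").run_instances(**params)
```

The symmetry is the point: tooling written against the regional EC2 API works unchanged once the VPC is extended to the Outpost.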
So, staying true to that Outposts product vision, customers can experience the same EC2 feature sets, like EC2 placement groups, On-Demand Capacity Reservations, sharing through Resource Access Manager, IAM policies, and security groups. So it really is the same EC2. >> I imagine having that same user experience was a big advantage for customers that were, in the last 18 months, rapidly transforming and digitizing their businesses. Do any customer examples pop up for you that really speak to: we kept this user experience the same, and it really helped customers pivot quickly when the pandemic struck? >> It almost feels like we haven't missed a beat. Outposts being a fully managed service that can be rolled into a customer's data center has been a huge differentiator, especially at a time when customers have to be nimble and ready to respond to their customers or end users. If anything, we've seen the adoption accelerate in the last 12 to 18 months, and that is reflected in our global expansion. We currently support 60 countries worldwide, and we've seen customers deploying Outposts and migrating more applications to run on Outposts worldwide. >> Right. So lots of evolution going on, as I mentioned a minute ago. Talk to me about some of the things that you're most excited about. What do you think is coming down the pike in the next 6 to 10 months? >> We're excited about expanding the core EC2 instance offerings, especially bringing our own Graviton Arm processor based instances to Outposts. Because of the AWS Nitro System, most EC2 instances that launch in the region will also become available on Outposts. Again, back to the vision to provide a consistent hybrid experience for AWS customers. We're also excited about the 1U and 2U Outposts server form factors, which we will launch later this year. The Outposts servers will support both Intel Ice Lake processor based instances and Graviton processor based instances.
So customers who can't install the, you know, 42U form factor Outposts can now bring the AWS experience to retail stores, back offices, and other remote locations that are not traditional data centers. So we're very excited about the next couple of years and what we are going to be launching for customers. >> Excellent. Meena, thank you for joining me today for the EC2 15th birthday, talking about the vision of Outposts. Again, you were the first product manager hired to lead the development of that. Pretty exciting. What's gone on since then, the unique use cases that have driven its evolution, and some of the things that are coming down the pike. Very exciting. Thank you for your time. >> Thank you, Lisa. >> For Meena Gowdar, I'm Lisa Martin. Thanks for watching. (bright music)

Published Date : Aug 20 2021

Breaking Analysis: Can anyone tame the identity access beast? Okta aims to try...


 

>> From "theCUBE" studios in Palo Alto and Boston, bringing you data-driven insights from "theCUBE" and ETR. This is Breaking Analysis with Dave Vellante. >> Chief Information Security Officers cite trust as the number one value attribute they can deliver to their organizations. And when it comes to security, identity is the new attack surface. As such, identity and access management continues to be the top priority among technology decision makers. It also happens to be one of the most challenging and complicated areas of the cybersecurity landscape. Okta, a leader in the identity space, has announced its intent to converge privileged access and Identity Governance in an effort to simplify the landscape and re-imagine identity. Our research shows that interest in this type of consolidation is very high, but organizations believe technical debt, compatibility issues, expense, and lack of talent are barriers to reaching cyber nirvana with their evolving Zero-Trust networks. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we'll explore the complex and evolving world of identity access and privileged account management, with an assessment of Okta's market expansion aspirations, fresh data from ETR, and input from my colleague Eric Bradley. Let's start by exploring identity and why it's fundamental to digital transformations. Look, the pandemic accelerated digital, and digital raises the stakes in cybersecurity. We've covered this extensively, but today we're going to drill into identity, which is one of the hardest nuts to crack in security. If hackers can steal someone's identity, they can penetrate networks. If that someone has privileged access to databases, financial information, HR systems, transaction systems, the backup corpus, well, you get the point. There are many bespoke tools to support a comprehensive identity access management and privileged access system.
Single sign-on, identity aggregation, de-duplication of identities, identity creation, the governance of those identities, group management. Many of these tools are open source. So you have lots of vendors, lots of different systems, and often many dashboards. Practitioners tell us that it's the paper cuts that kill them: patches that aren't applied, open ports, orphan profiles that aren't disabled. They'd love to have a single dashboard, but it's often not practical for large organizations because of the bespoke nature of the tooling and the skills required to manage it. Now, adding to this complexity, many organizations have different identity systems for privileged accounts, the general employee population, and customer identity. For example, around 50 percent of ETR respondents in a recent survey use different systems for workforce identity and consumer identity. Now, this is often done because the consumer identity is a totally different journey. The consumer is out in the wild and takes an unknown, nonlinear path, and then enters the known space inside a brand's domain. The employee identity journey is known throughout: you go from onboarding, to increasing responsibilities and more access, to off-boarding. Privileged access may even have different attributes, usually things like no email and/or no shared credentials. And we haven't even touched on the other identity consumers in the ecosystem, like selling partners, suppliers, machines, etcetera. Like I said, it's complicated, and meeting the needs of auditors is stressful and expensive for CSOs. Open chest wounds, such as sloppy histories of privileged access approvals, obvious role conflicts, missing data, inconsistent application of policy, and the list goes on. The expense of securing digital operations goes well beyond the software and hardware acquisition costs. So there's a real need, and often a desire, to converge these systems. But technical debt makes it difficult.
Companies have spent a lot of time, effort, and money on their identity systems, and they can't just rip and replace. So they often build by integrating piece parts, or they add on to their quasi-integrated monolithic systems. And then there's the whole Zero-Trust concept. It means a lot of different things to a lot of different people, but folks are asking, if I have Zero-Trust, does it eliminate the need for identity? And what does that mean for my architecture going forward? So, let's take a snapshot of some of the key players in identity and PAM, Privileged Access Management. This is an X-Y graph that we always like to show. It shows the Net Score, or spending velocity, spending momentum, on the vertical axis, and market share, or presence in the ETR dataset, on the horizontal axis. It's not like revenue market share. It's just mentioned market share, if you will. So it's really presence in the dataset. Now, note the chart insert, the table, which shows the actual data for Net Score and Shared N, which informs the position of the dot. The red dotted line there indicates an elevated level. Anything over that 40 percent mark we consider the strongest spending velocity. Now, within this subset of vendors that we've chosen, most of them pure plays in this identity space, you can see there are six above that 40 percent mark, including Zscaler, which tops the charts, and Okta, which has been at or near the top for several quarters. There's an argument, by the way, to be made that Okta and Zscaler are on a collision course as Okta expands its TAM, but let's just park that thought for a moment. You can see Microsoft with a highly elevated spending score and a massive presence on the horizontal axis, CyberArk and SailPoint, which Okta is now aiming to disrupt, and Auth0, which Okta officially acquired in May of this year. More on that later.
Now, below that 40 percent mark you can see Cisco, which has largely acquired companies in order to build its security portfolio. For example, Duo, which focuses on access and multi-factor authentication. Now, a word of caution: Cisco and Microsoft in particular are overstated because this includes their entire portfolio of security products, whereas the others are more closely aligned as pure plays in identity and privileged access. ThycoticCentrify is pretty close to that 40 percent mark and came about as a result of the two companies merging in April of this year. More evidence of consolidation in this space. BeyondTrust is close to the red line as well, which is really interesting because this is a company whose roots go back to the VAX/VMS days in the mid 1980s, and many of you don't even know what VAX/VMS is. It was the minicomputer standard, and the company has evolved to provide more modern PAM solutions. Ping Identity is also notable in that it essentially emerged after the dot-com bust in the early 2000s as an identity solution provider for single sign-on, SSO, and multi-factor authentication, MFA, solutions. It IPO'd in the second half of 2019, just prior to the pandemic. It's got a $2 billion market cap, down from its highs of around $3 billion earlier this year and last summer. And like many of the remote work stocks, they've bounced around, as the reopening trade and lofty valuations have weighed on many of these names, including Okta and SailPoint. Although CyberArk actually acted well after its August 12th earnings call, as its revenue growth about doubled year on year. So it's a hot space, and a big theme this year is around Okta's acquisition of Auth0 and its announcement at Oktane 2021, where it entered the PAM market and announced its thrust to converge its platform around PAM and Identity Governance and Administration.
Now, I spoke earlier this week with Diya Jolly, who's the Chief Product Officer at Okta, and I'll share some of her thoughts later in this segment. But first let's look at some of the ETR data from a recent drill down study that our friends over there conducted. This data is from a drill down that was conducted early this summer, asking organizations how important it is to have a single dashboard for access management, Identity Governance, and privileged access. This goes directly to the Okta strategy that was announced this year at its Oktane user conference. Basically 80 percent of the respondents want this. So this is no surprise. Now let's stay on this theme of convergence. ETR asked security pros if they thought convergence between access management and Identity Governance would occur within the next three years. And as you can see, 89% believe this is going to happen. They either strongly agree, agree, or somewhat agree. I mean, it's almost as though the CSOs are willing this to occur. And this seemingly bodes well for Okta, which in April announced its intent to converge PAM and IGA. Okta's Diya Jolly stressed to me that this move was in response to customer demand, and this chart confirms that, but there's a deeper analysis worth exploring. Traditional tools of identity, single sign-on (SSO) and multi-factor authentication (MFA), are being commoditized. And the most obvious example of this is OAuth, or Open Authorization. You know, log in with Twitter, Google, LinkedIn, Amazon, Facebook. Now, Okta currently has around a $35 billion market cap as of today, off from its highs, which were well over $40 billion earlier this year. Okta's previously stated total addressable market was around $55 billion. So CEO Todd McKinnon had to initiate a TAM expansion play, which is the job of any CEO, right? Now, this move does that. It increases the company's TAM by probably around $20 billion to $30 billion in our view.
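The OAuth commoditization mentioned above, the "log in with Twitter, Google, LinkedIn" pattern, boils down to a standard authorization-code redirect: the button just sends the browser to the provider's authorization endpoint with a handful of query parameters. A minimal sketch follows; the identity provider endpoint, client ID, and redirect URI are hypothetical placeholders, not any real provider's values.

```python
from urllib.parse import urlencode

# Sketch of the first leg of an OAuth 2.0 authorization-code flow: the
# redirect URL that a "Log in with ..." button produces. All endpoint,
# client_id, and redirect_uri values here are hypothetical placeholders.

def build_authorization_url(auth_endpoint, client_id, redirect_uri, scope, state):
    params = {
        "response_type": "code",   # authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,            # opaque CSRF-protection token
    }
    return f"{auth_endpoint}?{urlencode(params)}"

url = build_authorization_url(
    "https://idp.example.com/authorize",
    "my-client-id",
    "https://app.example.com/callback",
    "openid profile",
    "xyz123",
)
```

After the user consents, the provider redirects back to `redirect_uri` with a one-time code, which the application exchanges for tokens; because every major consumer platform implements this same flow, basic SSO has become table stakes rather than a differentiator.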
Moreover, the number one criticism of Okta is, "Your price is too high." That's a good problem to have, I say. Regardless, Okta has to think about adding more value to its customers and prospects, and this move both expands its TAM and supports its longer-term vision to enable a secure, user-controlled, ubiquitous digital identity, supporting federated users and data within a centralized system. Now, the other thing Jolly stressed to me is that Okta is heavily focused on the user experience, making it simple and consumer-grade easy. At Oktane 21, she gave a keynote laying out the company's vision. It was a compelling presentation designed to show how complex the problem is and how Okta plans to simplify the experience for end users, service providers, brands, and the overall technical community across the ecosystem. But look, there are a lot of challenges the company faces to pull this off. So let's dig into that a little bit. Zero-Trust has been the buzzword, and it's a direction the industry is moving towards, although there are skeptics. Zero-Trust today is aspirational. It essentially says you don't trust any user or device, and the system can ensure the right people or machines have the proper level of access to the resources they need, all the time, with a fantastic user experience. So you can see why I called this nirvana earlier. In previous Breaking Analysis segments, we've laid out a map for protecting your digital identity, your passwords, your crypto wallets, how to create air gaps. It's a bloody mess. So ETR asked security pros if they thought a hybrid of access management and Zero-Trust networks could replace their PAM systems, because if you can achieve Zero-Trust in a world with no shared credentials and real-time access, a direction which Diya Jolly clearly told me Okta is headed, then in theory, you can eliminate the need for Privileged Access Management. Another way of looking at this is, you do for every user what you do for PAM users.
And that's how you achieve Zero-Trust. But you can see from this picture that there's more uncertainty here, with nearly 50 percent of the sample not in agreement that this is achievable. Practitioners in Eric Bradley's round tables tell us that you'll still need the PAM system to do things like session auditing and credential checkouts, among other things. But much of the PAM functionality could be handled by this Zero-Trust environment, we believe. ETR then asked the security pros how difficult it would be to replace their PAM systems. And this is where it gets interesting. You can see by this picture, the enthusiasm wanes quite a bit when the practitioners have to think about the challenges associated with replacing Privileged Access Management systems with a new hybrid. Only 20 percent of the respondents see this as something that is easy to do, likely because they are smaller and don't have a ton of technical debt. So the obvious question is why? What are the difficulties and challenges of replacing these systems? Here's a diagram that shows the blockers. 53 percent say gaps in capabilities. 26 percent say there's no clear ROI, i.e., it's too expensive. And 11 percent, interestingly, said they want to stay with best of breed solutions, presumably handling much of the integration of the bespoke capabilities on their own. Now, speaking with Eric Bradley, he shared that there's concern about "rip and replace" and the ability to justify that internally. There's also a significant buildup in technical debt, as we talked about earlier. One CSO on an Eric Bradley ETR Insights panel explained that the big challenge Okta will face here is the inertia of entrenched systems from the likes of SailPoint, Thycotic, and others. Specifically, these companies have more mature stacks and have built in connectors to legacy systems over many years, and processes are wired to these systems and would be very difficult to change, with skill sets aligned to them as well.
One practitioner told us that he went with SailPoint almost exclusively because of their ability to interface with SAP. Further, he said that he believed Okta would be great at connecting to other cloud API enabled systems, but there's a large market of legacy systems for which Okta would have to build custom integrations, and that would be expensive and would require a lot of engineering. Another practitioner said, "We're not implementing Okta, but we strongly considered it." The reason they didn't go with it was that the company had a lot of on-prem legacy apps, and so they went with Microsoft Identity Manager. But that didn't make the grade, because the user experience was subpar. So they're still searching for a solution that can be good at both cloud and on-prem. Now, a third CSO said, quote, "I've spent a lot of money writing custom connectors to SailPoint," and he stressed "a lot of money," he said that several times. "So, who is going to write those custom connectors for me? Will Okta do it for free? I just don't see that happening," end quote. Further, this individual said, quote, "It's just not going to be an easy switch. And to be clear, SailPoint is not our PAM solution. That's why we're looking at CyberArk," unquote. So the complexity and fragmentation continues. And personally, I see this as a positive trend for Okta, if it can converge these capabilities. Now, I pressed Okta's Diya Jolly on these challenges and the difficulties of replacing the entrenched stacks of the competitors. She fully admitted this was a real issue, but her answer was that Okta is betting on the future of microservices and cloud disruption. Her premise is that Okta's platform is better suited for this new application environment. They're essentially betting on organizations modernizing their application portfolios, and Okta believes that it will ultimately be a tailwind for the company.
Now let's look at the age old question of best of breed versus incumbent slash integrated suite. ETR, in its drill down study, asked customers, when thinking about identity and access management solutions, do you prefer best of breed, an incumbent that you're already using, or the most cost efficient solution? The respondents were asked to force rank one, two, and three, and you can see the incumbent just edged out best of breed with a 2.2 score versus 2.1, with the most cost-effective choice at 1.7. Now, overall, I would say this is good news for Okta. Yes, they face the issues that we brought up earlier, but as digital transformations lead to modernizing much of the application portfolio with containers and microservices, Okta will be in a position, assuming it continues to innovate, to pick up much of this business. And to the point earlier, where the CSO told us they're going to use both SailPoint and CyberArk: when ETR asked practitioners which vendors are in the best position to benefit from the Zero-Trust trend, the answers were, not surprisingly, all over the place. Lots of Okta came up. Zscaler came up a lot too, hmm. There's that collision course. But plenty of SailPoint, Palo Alto, Microsoft, Netskope, Thycotic, Centrify, Cisco, all over the map. So now let's look specifically at how practitioners are thinking about Okta's latest announcements. This chart shows the results of the question: are you planning to evaluate Okta's recently announced Identity Governance and PAM offerings? 45 to nearly 50 percent of the respondents either were already using or planned to evaluate, with just around 40 percent saying they had no plans to evaluate. So again, this is positive news for Okta in our view. A huge portion of the market is going to take a look at what Okta's doing. Combined with the underlying trends that we shared earlier related to the need for convergence, this is good news for the company.
Now, even if the blockers are too severe to overcome, Okta will be on the radar, and is on the radar, as you can see from this data. And as with the Microsoft MIM example, the company, Okta that is, will be seen as increasingly strategic and could get another bite at the apple. Moreover, Okta's acquisition of Auth0 is strategically important. One of the other things Jolly told me is they see initiatives starting with devs and then handed over to IT to implement, and then the reverse, where IT may be the starting point and then goes to devs to productize the effort. The Auth0 acquisition gives Okta a play in both games, because as we've reported earlier, Okta wasn't strong with the devs; Auth0, that was their wheelhouse. Now Okta has both. Now, on the one hand, when you talk to practitioners, they're excited about the joint capabilities and the gaps that Auth0 fills. On the other hand, it takes out one of Okta's main competitors, and customers like competition. So I guess I look at it this way. Many enterprises will spend more money to save time, and that's where Okta has traditionally been strong. Premium pricing, but there's clear value in that it's easier, fewer resources are required, and skill sets are scarce. So boom, good fit. Other enterprises look at the price tag of an Okta, and they actually have internal development capabilities, so they prefer to spend engineering time to save money. That's where Auth0 has seen its momentum. Now, Todd McKinnon and company can have it both ways because of that acquisition. If the price of Okta classic is too high, here's a lower cost solution with Auth0 that can save you money, if you have the developer talent and the time. It's a compelling advantage that's unique. Okay, let's wrap. The road to Zero-Trust networks is long and arduous.
The goal is to understand, support, and enable access for different roles, safely and securely, across an ecosystem of consumers, employees, partners, suppliers, all the consumers (laughs softly) of your touch points to your security system. You've got to simplify the user experience. Today's kluge of passwords, password management, and security exposures is just not going to cut it in the digital future. Supporting users in a decentralized, no-moat world, the queen has left her castle, as I often say, is compulsory. But you must have federated governance. And there's always going to be room for specialists in this space, especially for industry specific solutions, for instance within healthcare, education, government, etcetera. Hybrids are the reality for companies that have any on-prem legacy apps. Now, Okta has put itself in a leadership position, but it's not alone. Complexity and fragmentation will likely remain. This is a highly competitive market with lots of barriers to entry, which is both good and bad for Okta. On the one hand, unseating incumbents will not be easy. On the other hand, Okta is both scaling and growing rapidly, revenues are growing almost 50% per annum, and with its convergence agenda and Auth0, it can build a nice moat around its business and keep others out. Okay, that's it for now. Remember, these episodes are all available as podcasts wherever you listen; just search breaking analysis podcast, and please subscribe. Thanks to my colleague, Eric Bradley, and our friends over at ETR. Check out ETR's website at "etr.plus" for all the data and all the survey action. We also publish a full report every week on "wikibon.com" and "siliconangle.com". So make sure you check that out and browse the breaking analysis collection. There are nearly a hundred of these episodes on a variety of topics, all available free of charge. Get in touch with me. You can email me at "david.vellante@siliconangle.com" or "@dvellante" on Twitter. Comment on our LinkedIn posts.
This is Dave Vellante for "theCUBE" Insights powered by ETR. Have a great week, everybody. Stay safe, be well, and we'll see you next time. (upbeat music)

Published Date : Aug 20 2021

"MINI-MASTER CLASS" w/ Raj Pai


 

>> Hello, I'm John Furrier with theCUBE. We're here with Raj Pai, Vice President of EC2 Product Management at AWS. Raj, thanks for coming on for this quick CUBE conversation. Congratulations on the 15th birthday of EC2. You've got the keys to the kingdom of one of the hottest products, the most important product. You look at the billings: EC2 is the highest, it's always what everyone focuses on. It's the compute, with a lot of other goodness in the Amazon cloud. Thanks for coming on. >> Thank you. Thanks for having me. >> So, can you break down the Graviton2 processor overview? Why is custom silicon important, and why should architects and developers understand the opportunity with Graviton2 and the other opportunities within AWS? What's the magic? What should they think about as they architect their cloud? >> Yeah. So, I mean, I think why it's important is what you said. So much of the workload that customers are running at the end of the day is running on EC2, whether it's running on EC2 directly or running on one of the other AWS services that's built on EC2. And when we're able to innovate and deliver a very significant price performance advantage, that just lowers their costs. It's hardly ever the case in this industry that you're able to go and do a pretty simple migration and get a 40% price performance improvement, and that's huge, and I think that's why this is, you know, raising a lot of interest: customers have found it relatively easy to go and do this migration and get that benefit. >> That's awesome, Raj. I've got to ask you: EC2 offers more than 400 instance types with different combinations of compute, memory, networking, and storage, which is obviously the backbone of the cloud.
A lot of people are coming in learning about cloud. What does it mean that there are all these instances? Is it just more combinations for different workloads? Why 400 instance types? What does that mean for someone learning about cloud? Would you explain the difference between instance types, all 400 of them? >> Yeah. So, I mean, when you think about an instance type, it's essentially a configuration of a virtual machine. There's a certain amount of memory, there's a certain amount of processing power, and there could be a certain amount of disk. And workloads need different ratios of these dimensions, these characteristics. So by offering selection across a wide variety of instances, we're really able to optimize the compute that a particular workload needs. Customers can essentially increase their performance and have a more optimized price for what they want to get done. So ultimately, that's what it's about: having the right form factor for a given workload. And the more configurations that we have, the more we're able to tune for those workloads. >> It's like having a driver and a car: you want the driver type to match the road, match the engine. So the instance has to match the profile of the app, the workload. Is that kind of where you're getting at? >> Yeah. And you know, one of the things that we're also investing in at the same time is tools to enable customers to realize and learn what the right instance is. So, you know, we launched, about a year ago, a capability called Compute Optimizer that lets customers look at their workloads, you know, in flight essentially, and it makes recommendations saying, hey, instead of this instance, you know, you could move to this other instance type and save 50%, as an example. So, you know, part of it is creating the selection, and the other part of it is creating the tools.
So customers know what the right fit is for them, so that they can really optimize their spend. >> Well Raj, here's the ask-me-anything guru question, but it's a simple one. What is Graviton2, at the end of the day, when someone asks you, what is Graviton2? >> Yeah. So Graviton2 is a processor. It's a chip, it's a CPU, and it's Arm based. So, you know, with Graviton, just like you have Intel and AMD processors, these are the circuitry in the computer that does the work. Right? And with Graviton, we support Arm, which is a different architecture set, but one that has been around long enough that it's pretty ubiquitous across mobile devices and servers now. So the operating systems, you know, all the Linux operating systems, the tools that you know, they all work and are able to run on Graviton2. So this means that when you have applications, you can very easily take them from the same AMD or Intel x86 platform and move them over, and just get the efficiencies that Graviton2 offers, with a lower power envelope and higher performance. >> There it is, a mini master class here from Raj Pai, Vice President of EC2 Product Management, laying down the Graviton2 knowledge. And for folks learning about cloud, architects really want to know the difference: it's a 40% performance improvement, a lower power envelope, and around 20% lower cost, I believe, something in that territory. So basically higher performance, lower cost, better power. So for workloads that demand it, you've got the option. Raj, thank you for sharing. >> Thank you. >> All right, I'm John Furrier with theCUBE. Thanks for watching.
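Raj's right-sizing point, matching an instance's vCPU and memory ratios to the workload the way Compute Optimizer recommends, can be sketched as a simple picker: filter the catalog to entries that satisfy the workload's requirements, then take the cheapest. The instance family names below are real EC2 families, but the hourly prices are made-up illustrative numbers, not actual AWS pricing.

```python
# Sketch of right-sizing: choose the cheapest instance type that meets a
# workload's vCPU and memory needs. Prices are illustrative, not real.
CATALOG = [
    # (name, vcpus, memory_gib, hypothetical_usd_per_hour)
    ("c6g.large",  2,  4, 0.068),   # compute optimized, Graviton2
    ("m6g.large",  2,  8, 0.077),   # general purpose, Graviton2
    ("m5.xlarge",  4, 16, 0.192),   # general purpose, x86
    ("r6g.xlarge", 4, 32, 0.202),   # memory optimized, Graviton2
]

def right_size(need_vcpus, need_mem_gib):
    """Return the name of the cheapest catalog entry meeting both requirements."""
    fits = [row for row in CATALOG
            if row[1] >= need_vcpus and row[2] >= need_mem_gib]
    return min(fits, key=lambda row: row[3])[0] if fits else None

print(right_size(2, 8))   # moderate-memory workload -> m6g.large
print(right_size(4, 32))  # memory-heavy workload   -> r6g.xlarge
```

A real recommendation engine would also weigh observed utilization, architecture compatibility (Arm versus x86, as discussed above), and purchase options, but the core idea is this same filter-then-minimize step over instance configurations.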

Published Date : Aug 13 2021


Unpacking IBM's Summer 2021 Announcement | CUBEconversation


 

(upbeat music) >> There are many constants in the storage business: relentlessly declining costs per bit, innovations that perpetually battle the laws of physics, a seemingly endless flow of venture capital, and very intense competition. And there's one other constant in the storage industry, Eric Herzog. And he joins us today in this CUBE video exclusive to talk about IBM's recent storage announcements. Eric, welcome back to theCUBE. Great to see you, my friend. >> Great Dave, thank you very much. Of course, IBM always loves to participate with theCUBE and everything you guys do. Thank you very much for inviting us to come today. >> Really our pleasure. So we're going to cover a lot of ground. IBM Storage made a number of announcements this month around data resilience. You've got a new as-a-service model. You've got performance enhancements. Eric, can you give us the top-line summary of the hard news? >> Yeah. Top line: IBM is enhancing data and cyber resiliency across all non-mainframe platforms. We already have it on the mainframe of course, and we're changing CapEx to OpEx with our storage as a service. Those are the key takeaways and the hot-ticket items from an end-user perspective. >> So maybe we could start with the cyber piece. I mean, wow, the last 18 months have been incredible, and you're just seeing new levels of threats. The work-from-home pivot has created greater exposure. Organizations are rethinking hybrid. You're seeing the ascendancy of some of the hot cyber startups, but you're also seeing that not only have the attack vectors widened, the techniques are different. You know, threat hunting has become much more important, and with your response to threats you have to be really careful, the whole ransomware thing. So what are some of the big trends that you guys are seeing that are informing how you approach the market? >> Well, first of all, it's gotten a lot worse.
In fact, Fortune magazine just released the Fortune 500 a couple of weeks ago, and they had a public survey of CEOs, and they asked, "What's the number one threat to your business? No list, just, what's the number one threat?" Cybersecurity was number one for 66% of the Fortune 500 Chief Executive Officers. Not CIOs, not CTOs, but literally the CEOs of the biggest companies in the world. However, it's not just big companies. It hits the midsize and small companies; everyone is open now to cyber threats and cyber attacks. >> Yeah, for sure. And it's (chuckles) across the board. Let's talk about your solution, the announcement that you made here. Safeguarded Copy, I think, is what the branding is. >> Yeah. So what we've done is we've got a number of different technologies within our storage portfolio. For example, with our Spectrum Protect product, we can do anomalous pattern detection in backup data sets. Why would that matter? If I'm going to hold theCUBE for ransom and I don't get control of your secondary storage, snaps, replicas, and backups, you can just essentially say, "I'm not paying you." You could just do a recovery, right? So we have anomalous protection there. We do encryption; we encrypt at rest with no performance penalty with our FlashSystem family. We do air gapping, and in the case of Safeguarded Copy, it's a form of air gapping. So we have physical air gapping with tape; logical air gapping to a remote location, with snaps or replicas to your Cloud provider; and then local logical on-prem, which is what Safeguarded Copy does. We've had this technology for many years now on the mainframe platform, and we brought it down to the non-mainframe environments, Linux, UNIX, and the Windows Server world, by putting Safeguarded Copy on our FlashSystem portfolio.
You probably take those snaps at different intervals, you mix that up, et cetera. How do you manage the copies? How do you ensure, if I have to do a recovery, that you've got a consistent data set? >> Yeah. So a couple of things. First of all, on a single FlashSystem array we can create, for the full array, up to 15,000 immutable copies. Essentially they're WORM: you can't delete them, you can't change them. On a per-volume basis, you can have 255. This is all managed with our Copy Services Manager, which can automate the entire process: creation, deletion, frequency, and even recovery mode. So for example, I could have volume one, and on volume one perhaps I need to make immutable copies every four hours. That's six a day, so with 255 copies per volume I can go for about six weeks and still be making those immutable copies. But with our Copy Services Manager, you can set it to be only 30 days or 60 days; you can set the frequency, and once you set it up, it's all automated. And you can even integrate with IBM's QRadar, which is threat detection and breach software from the security division of IBM. And when certain threats hit, it can actually automatically kick off a safeguarded copy. So what we do is make sure you've got that incredibly rapid recovery. And in fact, you can get air gapping remotely. We have this on the mainframe, and a number of large global Fortune 500s actually do double air gapping: local logical, right, so they can do recovery in just a couple of hours if they have an attack, and then they take that local logical and either go remote logical, okay, which gives them a second level of protection, or they'll go out to tape. So you can use this in a myriad of ways; you can have multiple protections. We even, by the way Dave, have three separate admin levels. So you can have three different types of admins: one admin can't delete, one admin can. So that way you're also safe from what I'll call industrial espionage.
So you never know if someone's going to be stealing stuff from the inside; with multiple administrative capabilities, it makes it more difficult for someone to steal your data and then sell it to somebody. >> So, okay. Right. Because immutable is sort of, well, you're saying that you can set it up so that only one admin has control over that, is that right? If you want it... >> There's three admins with different levels of control. >> Right. >> And the whole point of having three admins with different levels of control is you have that extra security from an internal IT perspective versus one person. Again, think of the old war movies, you know, nuclear war movies, thank God it's never happened, where two guys turn the key. So you've got some protection; we've got multiple admin levels to do that as well. So it's a great solution with the air gapping. It's rapid recovery because it's local, but it is fully logically air gapped, separated from the host. It's immutable, it's WORM, Write Once, Read Many: can't delete, can't change, can't do anything. And you can automate all the management with our Copy Services Manager software that works with Safeguarded Copy. >> You talked about earlier, you could detect anomalous behavior. So presumably this can help with detecting threats, is that right? >> Well, that's what our Spectrum Protect product does. My key point was we have all levels of data resiliency across the whole portfolio, whether it be encrypting data at rest; with our VTLs, we can encrypt in-flight. We have safeguarded copy on the mainframe, safeguarded copy on FlashSystems, any type of storage, including our competitors' storage. You could air gap it to tape, right? With our Spectrum Virtualize software in our SAN Volume Controller, you could actually air gap out to a Cloud for 500 arrays that aren't even ours. So what we've done is put in a huge set of data and cyber resiliency across the portfolio.
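As a sanity check on the copy-retention arithmetic mentioned a bit earlier (up to 255 immutable copies per volume, at whatever frequency you schedule), here is a minimal sketch. The helper names are mine, not part of Copy Services Manager, and the math is simple rolling-window arithmetic, not IBM's actual scheduling logic.

```python
def retention_days(copy_limit: int, interval_hours: float) -> float:
    """Rolling history (in days) before the oldest immutable copy
    must expire to stay under the per-volume copy limit."""
    copies_per_day = 24.0 / interval_hours
    return copy_limit / copies_per_day

def min_interval_hours(copy_limit: int, target_days: float) -> float:
    """Smallest copy interval that still keeps `target_days` of history
    within the copy limit."""
    return 24.0 * target_days / copy_limit

# 255 copies per volume, one safeguarded copy every 4 hours:
print(retention_days(255, 4))                  # 42.5 -> about six weeks
# To keep 90 days of history under the same 255-copy limit:
print(round(min_interval_hours(255, 90), 2))   # ~8.47 hours between copies
```

The takeaway matches the interview: an aggressive four-hour schedule buys roughly six weeks of rolling history, so longer retention targets mean spacing the copies out or expiring them on a policy like the 30- or 60-day windows mentioned above.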
One thing that I've noticed, Dave, that's really strange: storage is intrinsic to every data center, whether you're big, medium, or small, and yet when most people think about a cybersecurity strategy from a corporate perspective, they usually don't even think about storage. I've been shocked; I've been in meetings with CEOs and VPs and they said, "Oh, you're right, storage is a risk." I don't know why they don't think of it. And clearly many of the security channel partners, right, you have channel partners that are very focused on security, and security consultants, they often don't think about the storage gaps. So we're trying to make sure, A, we've got broad coverage: primary storage, secondary storage, backup, you know, all kinds of things that we can do. And we make sure that we're talking to the end users, as well as the channel, to realize that if you don't have data resilience in storage, you do not have a corporate cybersecurity strategy, because you just left out the storage part. >> Right on. Eric, are you seeing any use case patterns emerge in the customer base? >> Well, the main use case is prioritizing workloads. Obviously, as you do the immutable copies, you chew up capacity. Right now there's a good reason to do that. So you've got these immutable copies, but what they're doing is prioritizing workloads. What are the workloads I absolutely have to have up and going rapidly? What are other workloads that are super important, but where I could maybe do remote logical air gapping? What ones can I put out to tape, where I have a true physical air gap? But of course tape can take a long recovery time. So they're prioritizing their applications, workloads, and use cases to figure out what they need to have a safeguarded copy of and what they could do otherwise. And by the way, they're trying to do that as well. You know, with our FlashSystem products, we can encrypt data at rest with no performance penalty.
So if you were getting, you know, 30,000 database records, and they were taking, you know, 10 seconds for the sake of argument, when you encrypt, normally you slow that down. Well, guess what: when you encrypt with our FlashSystem product, there's no slowdown. So in fact, you know, it's interesting Dave, we have a comprehensive and free cyber resiliency assessment. No charge to the end user, no charge to a business partner if they want to engage with us. And we will look, based on the NIST framework, at any gaps. So for example, if theCUBE said these five databases are our most critical databases, then as part of our cyber resilience assessment we'd say, "Ah, well, we noticed that you're not encrypting those. Why are you not encrypting those?" And by the way, that cyber resilience assessment works not only for IBM storage, but for any storage estate they've got. So if they're homogeneous in their storage estate, we can evaluate that; if they're heterogeneous, we'd evaluate that. And it is vendor agnostic and conforms to the NIST framework, which of course is adopted all over the world. And it's a great thing for people to get free, no obligation. You don't have to buy a single thing from IBM. It's just a free assessment of their storage and what cybersecurity exposure they have in their storage estate. And that's a free thing that we offer that includes safeguarded copy, encryption, air gapping, all the various functionality. And we'll say, "Why are you not encrypting? Why are you not air gapping? If it's that important, why are you leaving these things exposed?" So that's what our free cyber resilience assessment does. >> Got to love those freebies, take advantage of those for sure. A lot of organizations will charge big bucks for those. You know, maybe not ridiculously huge bucks, but you're talking tens of thousands, sometimes up to hundreds of thousands of dollars for that type of assessment.
So that's, you've got to take advantage of that if you're a customer out there. You know, I wanted to ask you about, just to kind of shift topics here, the as-a-service piece of it. So you guys announced your as-a-service for storage; a lot of people have also done that. What do we need to know about the IBM solution, and what's different from the others? Maybe a two-part question, but what's the first part: what do we need to know? >> A couple of things. From an overall strategy perspective, you don't buy storage. It's a full OpEx model. IBM retains legal title. We own it. We'll do the software upgrades as needed. We may even go ahead and swap the physical system out. You buy an SLA, a tier if you will. You buy capacity and performance; we own it. So let's take an easy one: our tier two. We give you our worst-case performance, at 2,250 IOPS per terabyte. Our competitors, by the way, when you look at their contracts and what they're putting out there, will give you their best-case number. So if their tier two is 2,250, that's the best case. With us it's our worst case, which means if your applications or workloads get 4,000 IOPS per terabyte, it's free. We don't charge you for that. We give you the worst-case scenario, and our numbers are higher than our competition's. So we make sure that we're differentiated: a true OpEx model. It's not a modified lease model, so it truly converts CapEx into operational expense. We have a base, as everybody does, but we have a variable. And guess what? The base price and the variable price are the same. So if you don't use the variable, we don't charge you. We bill you for the quarter in arrears. Every feature and function that's on our FlashSystem technology, such as Safeguarded Copy, which we just talked about, AI-based tiering, data-at-rest encryption with no performance penalty, data compression with no performance penalty, all those features you get, all of them. All we're doing is giving you an option.
We still let you buy CapEx. We will let you lease with IBM Global Financial Services. And guess what? You can do a full OpEx model. The technology, though, our FlashCore Modules, our Spectrum Virtualize software, is all the same. So it's all the same feature function. It's not some sort of stripped-down model. We even offer, Dave, a 100% availability option. We give six nines of availability as a default, versus five nines, which is five minutes and 26 seconds of downtime a year. Several of our competitors, guess what they give? Four nines. If you want five or six, you've got to pay for it. We just give you six as a default, a differentiator, but then we're the only vendor to offer a 100% availability guarantee. Now that is an option, it's the one option. But since we're already at six nines when our competitors are at four or five nines, we already have better availability with our storage as a service than the competition does. >> So let me just make sure I'm clear on this. So you've got six nines as part of the service. That's... >> Absolutely. >> Fundamental. And I get, I can pay up for the 100% availability option. And... >> Yes you can. >> So what does that mean, practically? You're putting in redundancies and... >> Right, right. So we have a technology known as HyperSwap. We have several public references, by the way, at ibm.com. We've been shipping HyperSwap on the mainframe for probably eight or nine years now. We brought it to our FlashSystem product probably five years ago. As I mentioned, we've got public references. You don't pay for the software, by the way; you do have to have a dual-node cluster, and HyperSwap allows you to do that. But you can do that as a service. You can buy it, you can do it as CapEx, right? When you need the additional FlashSystem to go with it, again, the software is free. So you're not paying for the software.
You just have to pay for the additional system-level componentry, but you can do that as a service and have it completely be an OpEx model as well. We even assign a technical account manager to every account. Every account gets a technical account manager; a concierge service, if you will, comes with every OpEx version of our storage as a service. >> So what does that mean? What does that concierge do? Just paying attention to (indistinct) >> The concierge service will do a quarterly review with you. So let's say theCUBE bought 10,000 other analyst firms in the industry. You're now the behemoth, and you at theCUBE are using IBM storage as a service. You call up your technical account manager and say, "Guess what? We just bought these companies. We're going to convert them all to storage as a service. A, we need a higher tier, could you upgrade the tier; B, we have a one-year contract, but you know what, we'd like to extend it to two; C, we think we need more capacity." You tell your technical account manager, and they'll take care of all of that for you, as well as giving you best practices. For example, if you decide you want to do Safeguarded Copy, which you can do because it's built into our Spectrum Virtualize software, which is part of our storage as a service, we can give you best practices on that, and he or she would tell you about our integration with our security division's QRadar. So those are various best practices. The technical account manager makes sure the software is always up to date, right? All the little things that you would have to do yourself if you owned it, we take care of, because we legally own it, which is what allows you to buy it as a service. So it is a true OpEx model from a financial perspective. >> And the terms of the contracts are what? One, two and three years? >> One to five. >> Yeah. Okay. >> If you don't renew and you don't cancel, we'll automatically re-up you at the exact tier you're at, at the exact same price.
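For reference, the availability tiers being compared in this exchange (four, five, and six nines) translate into annual downtime budgets you can compute directly. This little sketch is my arithmetic, not an IBM SLA document: five nines comes out to roughly five and a quarter minutes a year, close to the figure quoted above, and six nines to about half a minute.

```python
def annual_downtime_seconds(nines: int) -> float:
    """Seconds of allowed downtime per year at 99.9...% availability,
    where `nines` is the number of nines (4 -> 99.99%, 6 -> 99.9999%)."""
    unavailability = 10.0 ** (-nines)          # 1 - availability
    seconds_per_year = 365.25 * 24 * 3600      # average year incl. leap days
    return unavailability * seconds_per_year

for n in (4, 5, 6):
    budget = annual_downtime_seconds(n)
    print(f"{n} nines -> {budget:8.1f} s/year ({budget / 60:.1f} min)")
```

The gap between tiers is a factor of ten each step, which is why moving a default SLA from four or five nines up to six is a meaningful differentiator rather than a rounding error.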
Several of our competitors, by the way, if you keep the system installed past the contract like that, will actually charge you a premium until you sign a contract. We do not. So if you have a contract based on tier two, right? We go by SLA: tier one, tier two, tier three. So if I have a tier-two contract at theCUBE, and you forgot to get the contract done at the end of two years, but you still want it, you can go on for the next two quarters. Our business partner, I should say, would ask, "Dave, don't you want to sign a contract? You said you like it." Obviously you would, but we will let you stay. You just say you want to keep it without a contract, and we don't charge you a premium. Our competitors, if you don't have a contract, charge you a premium if you keep it installed without putting a contract in place. So little things like that clearly differentiate what we do. We don't charge a premium if you go above the base; one of the competitors, in fact, does when you go into the variable space, okay? And by the way, we provide 50% extra capacity. We over-provision. The other competitors usually do 25%; we do 50%. No charge, it's just part of the service. So the other vendors, if you go into the variable space, they raise the price. So if it's $5, you know, for X capacity, which is your base, and then you go above that, they charge you $7.50. We don't. It's $5 at the base and $5 at the variable. Now obviously your variable can be very big or very small, but whatever the variable is, we charge you the same rate; we do not charge you a bigger price. A couple of competitors, when you go into the variable world, charge you more. Guess what that gets you to do? Raise your base capacity. (Eric laughs) >> Yeah. I mean, the math should be the opposite of that, in my view. If you make a commitment to a vendor, say, okay, I'm going to commit to X. You have a nice chart on this, actually, in your deck.
If I'm going to commit to X, and then I'm going to add on, I would think the add-on price per bit should be the same or lower. It shouldn't be higher, right? And I get what you're saying there. They're forcing you to jack up the base, but then you're taking all the risk. That's not a shared-risk model. I get... >> And that's why we made sure that we don't do that. In fact, Dave, you know, we don't charge you a premium if you go beyond your contract period and say, "I still want to do it, but I haven't done the contract yet." The other guys charge you a premium if you go beyond your contract period. We don't do that either. So we try to be end-user friendly, customer friendly, and we've also factored in that our business partners can participate in this program. At least one of our competitors came out with a program and, guess what, partners could not participate. It was all direct. And that company happens to have about 80% of their business through the channel, and their partners were basically cut out of the model, which, by the way, is what a lot of Cloud providers had done in the past as well. So it was not a channel-friendly model. We're channel friendly, we're end-user friendly; it's all about ease of use. In fact, when you need more capacity, it takes about 10 minutes to get the new capacity up and going. That's it. >> How long does it take to set up? How long does it take to set up initially? And how long does it take to get new capacity? >> So, first of all, we deploy either in a Colo facility that you've contracted with, including Equinix, which is part of our press release, or we install on your site. The technical account manager is assigned; he would call up theCUBE and say, "When is it okay for us to come install the storage?" We install it. You don't install anything. You just say, "Here's your space, go ahead and install." We do the installation.
You then of course do the normal allocation of the capacity: this goes to Oracle, this goes to SAP, this goes to Mongo or Cassandra, right? You do that part, but we install it. We get it up and going, we get it turned on, and we hook it up to your switching infrastructure if you've got switching infrastructure; we do all of that. And then when you need more capacity, we use our Storage Insights Pro, which automatically monitors capacity, performance, and potential tech support problems. So we give you 50% extra, right? If you drop that to 25%, so you now don't have 50% extra anymore, you only have 25% extra, the technical account manager would call you and say, "Dave, do you know that we'd like to come install extra capacity at no charge to get you back up to that 50% margin?" So we always call, because it's on your site or in your Colo facility, right? We own the asset, but we set it up, and you know, it takes a week or two, whatever it takes to ship to whatever location. Now, by the way, our storage as a service for 2021 will be in North America and Europe only. We are expanding our storage as a service into Asia, into Latin America, et cetera, but not until 2022. So we'll start out with North America and Europe first. >> So I presume part of that is figuring out just the compensation models, right? And so how did you solve that? I mean, you don't seem to be struggling with that like some do; I think there are some people dipping their toes in the water. Was that because, you know, IBM's got experience with, like, SaaS pricing? How were you thinking about that, and how did you deal with kind of the internal (indistinct) >> Sure. So, first of all, we've had our storage utility model for several years. >> Right? >> Our storage utility model has been sort of a hybrid, part CapEx and part OpEx. So first of all, we were already halfway to an OpEx model with our storage utility model. That's item number one.
It also gave us the experience of the billing. So for example, we bill you for a full quarter. We don't send you a monthly bill; we send you a quarterly bill. And guess what, we always bill you in arrears. So for example, since theCUBE is going to be a customer this quarter, we will send you a bill for this quarter in October; for the October quarter, we'll send you a bill in January. Okay? And if it goes up, it goes up. If it goes down, it goes down. And if you don't use any variable, there's no bill, because what we do is the base you pay for once a year; the variable you pay for on a quarterly basis. So if you are within the base, we don't send you a bill at all, because there's no bill. You didn't go into the variable capacity area at all. >> I love that. >> When you have a variable, it can go up and down. >> Is that unique? Do some competitors try to charge you up front, like if it's a one-year term? (Dave laughs) >> Everybody charges, everybody bills yearly on the base capacity. Pretty much everyone does that. >> Okay, so upfront you pay for the base? Okay. >> Right. And the variable can be zero. If you really only use the base, then there is no variable. It's a pay-for-what-you-use model; we only bill for what you use. So if you don't use any of the variable, we never charge you for variable. Now, you know, because you guys have written about it, storage grows exponentially, so the odds of customers ending up needing some of the variable are moderately high. The other thing we've done is we didn't just look at what we've done with our storage utility model; we actually looked at Cloud providers. And in fact, not only IBM storage, but almost every one of our competitors does a comparison to Cloud pricing. And when you do apples to apples, Cloud vendors are more expensive than storage as a service, not just from us, but from pretty much everyone. So let's take an example. We're six nines by default. Okay?
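The billing mechanics Herzog just walked through, base capacity billed once a year and variable usage billed quarterly in arrears at the same rate, with no quarterly bill at all if you stay inside the base, can be sketched in a few lines. The function name, the $/TB rate, and the peak-usage metering rule are illustrative assumptions of mine, not IBM's actual contract terms.

```python
def quarterly_variable_bill(base_tb: float,
                            peak_used_tb: float,
                            rate_per_tb: float) -> float:
    """Variable charge for one quarter, billed in arrears.

    Usage at or below the committed base incurs no variable charge,
    and overage is billed at the SAME per-TB rate as the base
    (no premium for going above the commitment).
    """
    overage_tb = max(0.0, peak_used_tb - base_tb)
    return overage_tb * rate_per_tb

# 100 TB base at an assumed $5/TB: staying under the base costs nothing,
# while 30 TB of overage bills at the same rate as the base.
print(quarterly_variable_bill(100, 90, 5.0))    # 0.0   -> no bill at all
print(quarterly_variable_bill(100, 130, 5.0))   # 150.0 -> 30 TB * $5
```

The design choice worth noticing is the single rate: with the $5 base / $7.50 overage scheme attributed to competitors above, the overage line would multiply by a higher rate, which is exactly what pressures customers into inflating their base commitment.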
So as you know, most Cloud providers provide three or four nines as the default. They'll let you get five or six nines, but guess what? They charge you extra. So that's item number one. Second thing, performance: as you know, the performance of Cloud storage is usually very weak, but you can make it faster if you want to. They charge extra for that. We're sitting at 2,250 IOPS per terabyte. That's incredible performance if you've got 100 terabytes, okay? And for your applications and workloads, that's the worst case, by the way, which differentiates us from our competitors, who usually quote the best case. We quote you the worst case, and our worst case, by the way, is almost always higher than their best cases in each of the tiers. So at their middle tier, our worst case is usually better than their best case. But the point is, if you get 4,000 IOPS per terabyte and you're on a tier-two contract, it's a tier-two contract. And in fact, let's say that theCUBE has a five-year deal, and we base this on our FlashSystem technology. So let's say tier two, for the sake of argument, is a FlashSystem 7200. We come out, two years after theCUBE has it installed, with the FlashSystem 7400. And let's say the FlashSystem 7400 delivers not 2,250 IOPS per terabyte but 5,000. If we choose to replace it, because remember it's our physical property, we own it, if we choose to replace that 7200 with a 7400, and now you get 5,000 IOPS per terabyte, it's free. You signed a tier-two contract for five years. So two years later, if we decide to put a different physical system there and it's faster, or has more software features, we don't charge you for any of that. You signed an SLA for tier two. >> You haven't paid for capacity, right? All right. >> You are paying for the capacity; (indistinct) performance, you don't pay for that. If we swap it out and the array is physically faster and has got five new software features.
You pay nothing; you pay what your original contract was based on, the capacity. >> What I'm saying is, you're learning from the Cloud providers, 'cause you are a Cloud provider. But you know, a lot of the Cloud providers always sort of talk about how they lower prices. They lower prices, but, well, you worked at storage companies your whole life, and they lower prices on a regular basis because of the cost curve. And so... >> Right. The cost of storage declines; I mean, the average price decline in the storage industry is between 15 and 25%, depending on the year, every single year. >> Right. >> As you know, you used to be with one of those analyst firms that used to track it by the numbers. So you've seen the numbers. >> For sure. Absolutely. >> On average it drops 15 to 25% every year. >> So, what's driving this then? Is it the shift from CapEx to OpEx? Is it just a more convenient, Cloud-like model? How do you see that? >> So what's happened in IT overall is, of course, it started with people like salesforce.com well over 10 years ago, and of course it swept the software industry: software as a service. So once that happened, you now see infrastructure as a service: servers, switches, storage. And IBM, with our storage as a service, is providing that storage capability. So that as-a-service model, getting off of the traditional licensing in the software world, which still is out there but is now mostly software as a service, has moved into the infrastructure space. From our perspective, we are giving our business partners and our customers the choice. You still want to buy it? No problem. You want to lease it? No problem. You want a full OpEx model? No problem. So for us, we're able to offer any of the three options. The as-a-service model that started in software has moved now into the systems world.
So people often want to change that CapEx into OpEx. We even see Global Fortune 500s where one division is doing one thing and a different division might do something else, or they might do it differently by geography. In a certain geography, they buy our FlashSystem products; in other geographies they lease them; and in other geographies it's as a service. We are delivering the same feature, function, and benefit, from a performance, availability, and software-function perspective. We just give them a different way to procure. Do you want CapEx, do you want leasing, or OpEx? You pick what you want; we'll deliver the right solution for you. >> So, you've got the optionality, and that's great. You've thought that out. But the reason I'm asking, Eric, and this is not just for you, it's for everybody: is this a check-off item, or is this going to be the prevailing way in which storage is consumed? So if you had a guess, let's go far out, so we're not making any near-term forecast: end of the decade, is this going to be the dominant model, or is it going to be, you know, one of a few? >> It will be one of a few, but it'll be a big few. It'll be one of the biggest. So for the sake of argument, there will still be CapEx, there will still be OpEx, and there will still be leasing. But I will bet you, you know, at the end of this decade, 40 to 50% will be on the OpEx model, and the other two will have the other 50%. I don't think it's going to move to everything, because remember, it's a little easier in the software world. In the systems world, you've got to put the storage, the servers, or the networking on the prem, right? Otherwise you're not truly, you know, you've got to make it a true OpEx model. There are legal restrictions: you have to make it OpEx; if not, then, you know, based on a country's practice, depending on the country you're in, they could say, "Well, no, you really bought that.
It's not really a service model." So there are legal constraints that the software world is easier to get through and easier to bypass, right? And remember, now everything is software as a service, but go back to when salesforce.com was started: everyone in the enterprise was doing ELAs, and all the small companies were buying some sort of contract, right, or buying on a (indistinct) basis. It took a while for that to change. Now, obviously, the predominant model is software as a service, but I would argue, given when salesforce.com started, which was, you know, 2007 or so, it took a good 10 years for software as a service to become the dominant model. So I think, A, it won't take 10 full years, because the software world has blazed a trail now for the systems world. But I do think you'll see, we're sitting here now halfway through 2021, that you're going to have a huge percentage. Like I said, the dominant percentage will be OpEx, but the other two will still be there as well. >> Right. >> By the way, you know, in software, almost no one's doing ELAs these days, right? A few people still do, but it's very rare, right? It's all software as a service. So we see that over time doing the same thing on the infrastructure side, but we do think it will be slower. And we'll offer all three as long as customers want it. >> I think you're right. I think it's going to be mixed. Like, do I care more about my income statement or my balance sheet? And different companies, or individual divisions, are going to have different requirements. Eric, we've got to leave it there. Thanks much for your time and taking us through this announcement. Always great to see you. >> Great. Thank you very much. We really appreciate our time with theCUBE. >> All right. Thank you for watching this CUBE conversation. This is Dave Vellante and we'll see you next time. (upbeat music)

Published Date : Jul 29 2021



Clemens Reijnen, Sogeti, part of Capgemini | IBM Think 2021


 

>> Narrator: From around the globe. It's theCUBE with digital coverage of IBM Think 2021 brought to you by IBM. >> Well, hi everybody, John Walls here on theCUBE as we continue our IBM Think initiative. And today talking with Clemens Reijnen, who is the Global CTO Cloud and DevOps Leader at Capgemini. And Clemens, thanks for joining us here on theCUBE. Good to see you today. >> Thank you. Thank you very much. Nice to be here. >> Yeah, tell us a little bit about Capgemini, if you will, first off, for our viewers at home who might not be familiar with your services. Tell us a little bit about that, and maybe a little bit more about your specific responsibilities there. >> So who doesn't know Capgemini in the greater world and the IT world, unless you've been living under a stone. Capgemini is a worldwide system integrator with offerings in all kinds of spaces and all areas. My responsibility is mainly around cloud and DevOps, taking care that countries and delivery centers have the right knowledge around cloud and the right capabilities around DevOps, and supporting our customers with their journey to the cloud and into a digital organization. >> Yeah. Everybody's talking about digital these days. >> Everybody, yeah. >> And it's magical digital transformation that's occurring, that's been going on for quite some time. What does that look like to you? And when you start defining digital organizations and digital transformations, what are the kinds of things that you're talking about with organizations in terms of that kind of migration path? >> Yeah. So it's quite interesting to start the discussion about what a digital landscape looks like for an organization that wants to start transforming into a digital organization. And when you are looking at that, I always start the discussion with business capabilities.
An organization wants to create business capabilities, either to interact and engage with their workforce and to make them work in the most efficient way. And what they are using for that are all kinds of different digital channels. And those digital channels, they can be a mobile app. I'm working with my mobile app to connect with my work. I'm calling, I'm using Zoom, I'm using Teams and that kind of stuff. We're also using chatbots for IT devices. And that's what the normal workforce expects nowadays: they all have to have those digital channels to interact with the business. That's also on the other side, the customer side: organizations want to engage and grow on the customer side and have their nice interactions there. And again, they are using those digital channels, all the different digital channels, maybe IoT, maybe APIs, to interact with those customers, to bring them the engagement and interaction they really want to have. And in that transformation part, definitely, they are looking at what kind of challenges they have with working with customers like this and working with their workforce. Now everybody's working from home, challenges with maybe the connections and that kind of stuff. But they are also starting to leverage, and that's where the transformation and migration start, their on-prem systems, their legacy systems, to move those kinds of capabilities and enrich them with cloud-native capabilities, with all kinds of enterprise solutions like the ones from IBM, for example, to expose that to their digital channels, to their organizations. And that's what the landscape looks like. And then we have the discussion with organizations: How do you want to engage with your customers? What kind of digital channels do you need? What are the business systems you have, and how can we enrich them and expose them to the outside world, with all the enterprise solutions around them? >> And when you talk about a process like this, which sounds holistic, right?
You're looking at, what do you have? Where do you want to go? What are your business needs? Which all makes great sense. But then all of a sudden you start hitting speed bumps along the way. There are always challenges in terms of deployments. There are always challenges in terms of decisions and those things. So what are you hearing again from the customer side about, what are my pain points? What are my headaches here? I know I want to make this jump, but how do I get there? And I have these obstacles in my way. >> Yeah, definitely. And the ones I explained already, on the workforce side and on the customer side: you want to have the engagement there, you want to have the interactions there. And then you have that whole digital landscape, which comes with some interesting challenges. How do I implement this landscape in the right, scalable way? How do I expose my data in such a way that it is secure? How do I leverage all the capabilities from the platforms I'm using? And how do I make all these moving parts consistent and compliant with the regulations I need to work towards? How do I make it secure? So those are definitely big enterprise challenges, like compliance, security and that kind of stuff, but also technology challenges. How do I adopt those kinds of technologies? How do I make it scalable? How do I make it really an integrated solution on its own? So that my platform is not only working for the digital channels we know right now, but is also ready for the digital channels we don't know yet that will start to come. That's the biggest challenge there for me. >> Yeah. I want to get into that a little bit later too, 'cause you raised a great point. Well, let's just jump right in now. We know what the here and now is, but you just talked about building for the future, building for a more expansive footprint or kinds of capabilities that frankly we're not even aware of right now.
So how do you plan for that kind of flexibility, that kind of agility, when it's a bit unpredictable? >> Yeah. And that's what every organization tries to be: agile, flexible, resilient, and you need to build your systems to conform to that. And we normally start with: you need to have a clear foundation. When, for example, you are using the cloud, and every organization is using the cloud for it, you want to have that foundation set up in such a way that those digital channels can connect really easily to it. And then the capabilities, the business capabilities, are created by product teams; product and feature teams are creating those kinds of capabilities on top of that cloud foundation. And in that foundation, you want to put everything in place that makes it easy for those teams to focus on that business functionality, on those business capabilities. You want to make it very easy for them to do the right thing. I always love to say that the easy thing should be the right thing; that's what you want to put in your cloud foundation. And that's where you are harnessing your security: every application landing on the foundation is secure. You are embracing a standard way of working, although not every DevOps team likes that, they want to self-organize and that kind of stuff. But when you are having 50 or a 100 DevOps teams, you want to have some kind of standardization and provide them a way. And again, the easy way should be the right way: provide them templates, provide them technologies, so that they can really focus very quickly on those kinds of business capabilities. So the cloud foundation is the base that needs to be in place. >> Now, you've been doing this for a long time, and the conversation used to be, shall we move to the cloud? Can we move to the cloud? Now it's about, how fast can we move to the cloud? How much do we move to the cloud?
So looking at that kind of change in paradigm, if you will, what are organizations having to consider in terms of the scale, the depth, the breadth of their offering now? Because innovation, as you know, can happen at a much faster pace than it could have just a very short time ago. >> Yeah. And then I'm reflecting again back to: the easy thing should be the right thing. That's what you want to do for your DevOps. >> I love that concept. (laughs) >> And that's what you should focus on as an organization. For example, what we've put in place: we put a lot of standardization, a lot of knowledge, in what we call an Inner Source library. And in that Inner Source library, for example, we put all kinds of scripts, all kinds of templates, all kinds of standardization for teams who want to deploy OpenShift on their platform or want to start working with certain Cloud Paks. So they can set it up very easily, conforming to the standards of their organization, and start moving from there. And then in the cloud foundation, you have your cloud management, and the IBM Cloud Manager, because organizations are definitely going towards hybrid scenarios; different organizational units want to start using different clouds. And also for the migration part, you want to grow from there. And standardization, Inner Source, and having those templates ready is key for organizations now to speed up and be ready to start juggling around with workloads on any cloud where you want to, and that's the idea. >> Sure. Now, so Red Hat's involved in this, you had IBM involved as well, obviously your partnership working with them. Talk about that kind of merger of resources, if you will, and in terms of what the value proposition is to your clients at the end of the day, to have that kind of firepower working on their behalf. >> Yeah. And IBM, for example, is for us a very important partner.
Definitely on the hybrid multi-cloud scenarios, where we can leverage OpenShift on those kinds of platforms for our customers. And we created, like I said, templates and scripts. We use the IBM Garage projects to create deployments for our teams, in a kind of self-servicing way, to deploy those OpenShift clusters on top of the cloud platform of their choice. And then, for sure, with the multi-cloud manager from IBM we can manage all of that in the landing zone, and that's actually the whole idea. You want to give the flexibility and the speed to your DevOps teams, so that the right thing is the easy thing to do, and then manage it from your cloud foundation, so that they are comfortable that when they're putting workloads on that whole hybrid multi-cloud platform, it is managed and organized in the right way. And that's definitely where IBM Red Hat OpenShift comes into play. Because they have already such a great tool set ready, they really think DevOps. That's what I really like. And also with the migrations, it comes with a lot of DevOps capabilities in there, not plain lift and shift, but also the modernization immediately in there. And that's what I like about our partnership with IBM: they are DevOps-minded also. That's cool. >> Yeah. What about the speed here? Just in general, just about almost the pace of change and what's happening in that space. 'Cause it used to be these kinds of things took forever, it seemed like, or evolutions, transitions used to take a long period of time. It's not the case anymore now that things are happening at relatively lightning speed. So when you're talking with an organization about the kinds of changes they could make and the speed at which they can do that, marry those up for me in those conversations that you're having. And if I'm a CIO out there and I'm thinking about, how am I going to flip this switch? Convince me right now. (Clemens laughing) What are the key factors?
And how easy, how right will this be for me? >> So as a CIO, you want to have a scalable and flexible organization. Probably at this moment you're sitting with your on-prem system, with probably a very large relational database with several components around it. And now you want to fuel those digital channels. The great way, with IBM, with Red Hat, is that we can deploy OpenShift container solutions everywhere, and then start to modernize those small components around that big relational database. And when we start to do that, we can do that really at lightning speed. We have a factory model up and running, where we can take in the application landscape of a customer, look at it and say, "Okay, this one is quite easy. We run it through our modernization street and it runs in a container." And from there, you start to untangle the hairball of your whole application landscape and start to move those components. And you definitely want to prioritize them. And that's where you have discussions with the business: which is most valuable to move first, and which one to move there. And that's actually what we put in place: the factory model to analyze the application landscape of a customer, have the discussions with those customers, and then say, "Okay, we are going to move these workloads first, then we are going to analyze the next of these, and then we are going to move those." And we really start moving their workloads to the cloud fast, so that they can start to reach those digital channels they want. >> Well, a great process. And I love your analogies, by the way, what you said about the hairball there. (Clemens and John laughing) I totally get it. Hey Clemens, thank you for the time today. I appreciate hearing about the Capgemini story and about your partnership with IBM. Thank you very much. >> Thank you very much. >> All right.
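The prioritization step in the factory model Clemens describes, scoring each application on business value and ease of modernization and moving the best candidates first, can be sketched roughly like this (the scoring scheme, weights, and application names are illustrative assumptions, not Capgemini's actual method):

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    business_value: int   # 1 (low) .. 5 (high), from discussions with the business
    migration_ease: int   # 1 (hard, e.g. a big relational DB) .. 5 (drops straight into a container)

def prioritize(landscape: list[App]) -> list[App]:
    """Move the high-value, easy-to-containerize workloads first."""
    return sorted(landscape, key=lambda a: a.business_value * a.migration_ease, reverse=True)

# Hypothetical landscape for illustration.
landscape = [
    App("order-entry-ui", business_value=5, migration_ease=4),
    App("legacy-reporting", business_value=2, migration_ease=1),
    App("core-relational-db", business_value=5, migration_ease=2),
]
for app in prioritize(landscape):
    print(app.name)
```

The point of even a crude score like this is that it turns the business discussion ("which is most valuable to move first") into an ordered backlog the factory can work through.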
So well, we have learned one thing the easy thing is the right thing and that's the Capgemini way of getting things done. You've been watching part of the IBM Think initiative here on theCUBE. (upbeat music)

Published Date : May 12 2021



Wim Coekaerts, Oracle | CUBEconversations


 

(bright upbeat music) >> Hello everyone, and welcome to this exclusive Cube Conversation. We have the pleasure today to welcome, Wim Coekaerts, senior vice president of software development at Oracle. Wim, it's good to see you. How you been, sir? >> Good, it's been a while since we last talked but I'm excited to be here, as always. >> It was during COVID though and so I hope to see you face to face soon. But so Wim, since the Barron's Article declared Oracle a Cloud giant, we've really been sort of paying attention and amping up our coverage of Oracle and asking a lot of questions like, is Oracle really a Cloud giant? And I'll say this, we've always stressed that Oracle invests in R&D and of course there's a lot of D in that equation. And over the past year, we've seen, of course the autonomous database is ramping up, especially notable on Exadata Cloud@Customer, we've covered that extensively. We covered the autonomous data warehouse announcement, the blockchain piece, which of course got me excited 'cause I get to talk about crypto with Juan. Roving Edge, which for everybody who might not be familiar with that, it's an edge cloud service, dedicated regions that you guys announced, which is a managed cloud region. And so it's clear, you guys are serious about cloud. These are all cloud first services using second gen OCI. So, Oracle's making some moves but the question is, what are customers doing? Are they buying this stuff? Are they leaning into these new deployment models for the databases? What can you tell us? >> You know, definitely. And I think, you know, the reason that we have so many different services is that not every customer is the same, right? One of the things that people don't necessarily realize, I guess, is in the early days of cloud lots of startups went there because they had no local infrastructure. It was easy for them to get started in something completely new. 
Our customers are mostly enterprise customers that have huge data centers, in many cases; they have lots of real estate locally. And when they think about cloud, they're wondering, how can we create an environment that doesn't cause us to have two ops teams and two ways of managing things? And so, they're trying to figure out exactly what it means to take their real estate and either move it wholesale to the cloud over a period of years, or they say, "Hey, some of these things need to be local, maybe even for regulatory purposes." Or just because they want to keep some data locally within their own data centers, but then they have to move other things remotely. And so, there are many different ways of solving the problem. And you can't just say, "Here's one cloud, this is where you go and that's it." So, we basically say, if you're on-prem, we provide you with cloud services on-premises, like dedicated regions or Oracle Exadata Cloud@Customer and so forth, so that you get the benefits of what we built for cloud and spent a lot of time on, but you can run them in your own data center. Or people say, "No, no, no. I want to get rid of my data centers, I'll do it remotely." Okay, then you do it in Oracle cloud directly. Or you have a hybrid model where you say, "Some stays local, some is remote." The nice thing is you get the exact same API, the exact same way of managing things, no matter how you deploy it. And that's a big differentiator. >> So, is it fair to say that you guys have, I think of it as a purpose-built cloud, 'cause I talk to a lot of customers. I mean, take an insurance app like Claims, and customers tell me, "I'm not putting that into the public cloud." But you're making a case that it actually might make sense in your cloud, because you can support those mission-critical applications with the exact same experience, same API, same...
I can get, you know, take RAC for instance, I can't get, you know, Real Application Clusters in an Amazon cloud, but presumably I can get them in your cloud. So, is it fair to say you have a purpose-built cloud specifically for the most demanding applications? Is that the right way to look at it, or not necessarily? >> Well, it's interesting. I think the thing to be careful of is, I guess, purpose-built cloud might for some people mean, "Oh, you can only do things if it's Oracle-centric." Right, and so I think that fundamentally, Oracle cloud provides a generic cloud. You can run anything you want, any application, any deployment model that you have. Whether you're an Oracle customer or not, we provide you with a full cloud service, right? However, given that we know, and have known obviously for a long time, how our products run best, when we designed OCI gen two, when we designed the networking stack, the storage layer and all that stuff, we made sure that it would be capable of running our more complex environments, because our advantage is, Oracle customers have a place where they can run Oracle the best. Right, and so obviously the context of purpose-built fits that model, where yes, we've made some design choices that allow us to run RAC inside OCI and allow us to deploy Exadatas inside OCI, which you cannot do in other clouds. So yes, it's purpose-built in that sense, but I would caution that it sometimes might imply that it's unique to Oracle products, and I guess one way to look at it is, if you can run Oracle, you can run everything else, right? Because it's such a complex suite of products that if you can run that, then it'll support any other (mumbling). >> Right. Right, it's like New York City. You make it there, you can make it anywhere. If I can run the most demanding mission-critical applications, well, then I can run a web app, for instance, okay.
I got a question on tooling, 'cause there's a lot of tooling; like, sometimes it makes my eyes bleed when I look at all this stuff. And doesn't... Square the circle for me: doesn't autonomous, an autonomous database or Autonomous Linux for instance, eliminate the need for all these management tools? >> You know, it does. It eliminates the need for the management at the lower level, right. So, with Autonomous Linux, what we offer and what we do is, we automatically patch the operating system for you and make sure it's secure from a security-patching point of view. We eliminate the downtime, so when we do it, you don't have to restart applications. However, we don't necessarily know what the app is that's installed on top of it. You know, people can deploy their own applications, they can run third-party applications, they can use it for development environments and so forth. So, there's sort of the core operating-system layer, and on the database side, you know, we take care of database patching and upgrades and storage management and all that stuff. So the same thing: if you run your own application inside the database, we can manage the database portion, but we don't manage the application portion, just like on the operating system. And so, there's still a management level that's required no matter what, a level above that. And the other thing, and I think this is what a lot of the stuff we're doing is based on, is you still have tons of stuff on-premises that needs full management. You have applications that you migrate that are not running Autonomous Linux; it could be a Windows application that's running, or it could be something on a different Linux distribution, or you could still have some databases installed that you manage yourself, where you don't want to use autonomous, or you're on a third party. And so we want to make sure that we can address all of them with a single set of tools, right.
>> Okay, so I wonder, can you give us just an overview, just briefly, of the products that comprise your cloud services management solution? What's in that portfolio? How should we think about it? >> Yeah, so it basically starts with Enterprise Manager on-premises, right? Which has been the tool that our Oracle database customers in particular have been using for many years, and it's widely used by our customer base. And so you have those customers, most of their real estate is on-premises, and they can use Enterprise Manager locally. They have it running and they don't want to change; they can keep doing that, and we keep enhancing it, as you know, with newer versions of Enterprise Manager getting better. So, then there's the transition to cloud, and what we've been doing over the last several years is basically, well, one aspect is looking at the things people like in Enterprise Manager and making sure that we provide similar functionality in Oracle cloud. So, we have Performance Hub for looking at how the database performance is working. We have APM for Application Performance Monitoring. We have Logging Analytics that looks at all the different log files and helps make sense of them for you. We have Database Management. So, a lot of the functionality that people like in Enterprise Manager around the database, we've built into Oracle cloud, and, you know, a number of other things that are coming, like Operations Insights, to look at how databases are performing and how we can potentially do consolidation and stuff. So we've basically looked at what people have been using on-premises, how we can replicate that in Oracle cloud, and then also, when you're in a cloud, how you can make use of all the base services that a cloud vendor provides: telemetry, logging and so forth.
And so, it's a broad portfolio and what it allows us to do with our customers is say, "Look, if you're predominantly on-prem, you want to stay there, keep using Enterprise Manager. If you're starting to move to Oracle cloud, you can first use EM, look at what's happening in the cloud and then switch over, start using all the management products we have in the cloud and let go of the Enterprise Manager instance on-premise. So you can gradually shift, you can start using more and more. Maybe you start with analytics first and then you start with insights and then you switch to database management. So there's a whole suite of possibilities. >> (indistinct) you mentioned APM, I've been watching that space, it's really evolved. I mean, you saw, you know, years ago, Splunk came out with sort of log analytics, maybe simplified that a little bit, now you're seeing some open source stuff come out. You're seeing a lot of startups come out, you saw Cisco made an acquisition with AppD and that whole space is transforming it seems that the future is all about that end to end visibility, simplifying the ability to remediate problems. And I'm thinking, okay, you just mentioned, you guys have a lot of these capabilities, you got Autonomous, is that sort of where you're headed with your capabilities? >> It definitely is and in fact, one of the... So, you know, APM allows you to say, "Hey, here's my web browser and it's making a connection to the database, to a middle tier" and it's hard for operations people in companies to say, hey, the end user calls and says, "You know, my order entry system is slow. Is it the browser? Is it the middle tier that they connect to? Is it the database that's overloaded in the backend?" And so, APM helps you with tracing, you know, what happens from where to where, where the delays are. Now, once you know where the delay is, you need to drill down on it. And then you need to go look at log files. And that's where the logging piece comes in. 
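The drill-down Wim describes, working out whether the browser, the middle tier, or the database owns the delay, amounts to subtracting each tier's downstream wait from its total span time. A toy sketch of that arithmetic (the tier names and millisecond timings are invented for illustration, not real APM output):

```python
# Hypothetical span timings (ms) for one slow order-entry request,
# browser -> middle tier -> database. A parent span's time includes
# the time it spent waiting on the tier below it.
spans = {"browser": 1900, "middle_tier": 1800, "database": 1500}

def self_times(spans: dict[str, int], order: list[str]) -> dict[str, int]:
    """Each tier's own contribution: its span minus its child's span."""
    result = {}
    for tier, child in zip(order, order[1:] + [None]):
        child_time = spans.get(child, 0) if child else 0
        result[tier] = spans[tier] - child_time
    return result

blame = self_times(spans, ["browser", "middle_tier", "database"])
slowest = max(blame, key=blame.get)
print(blame, "-> drill into", slowest)
```

Whichever tier carries the largest self time is where you go digging next, and that next step is its log files.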
And what happens very often is that these log files are very difficult to read. You have networking log files and you have database log files and you have reslog files and you almost have to be an expert in all of these things. And so, then with Logging Analytics, we basically provide sort of an expert dashboard system on top of that, that allows us to say, "Hey! When you look at logging for the network stack, here are the most important errors that we could find." So you don't have to go and learn all the details of these things. And so, the real advantages of saying, "Hey, we have APM, we have Logging Analytics, we can tie the two together." Right, and so we can provide a solution that actually helps solve the problem, rather than, you need to use APM for one vendor, you need to use Logging Analytics from another vendor and you know, that doesn't necessarily work very well. >> Yeah and that's why you're seeing with like the ELK Stack it's cool, you're an open source guy, it's cool as an open source, but it's complicated to set up all that that brings. So, that's kind of a cool approach that you guys are taking. You mentioned Enterprise Manager, you just made a recent announcement, a new release. What's new in that new release? >> So Enterprise Manager 13.5 just got released. And so EM keeps improving, right? We've made a lot of changes over over the years and one of the things we've done in recent years is do more frequent updates sort of the cloud model frequent updates that are not just bug fixes but also introduce new functionality so people get more stuff more frequently rather than you know, once a year. And that's certainly been very attractive because it shows that it's a lively evolving product. And one of the main focus areas of course is cloud. 
And so a lot of work that happens in Enterprise Manager is hybrid cloud, which basically means I run Enterprise Manager and I have some stuff in Oracle cloud, I might have some other stuff in another cloud vendor's environment, and so we can actually see which databases are where and provide you with one consolidated view and one tool, right? And of course it supports Autonomous Database and Exadata cloud servers and so forth. So you can, from EM, see both your databases on-premises and also how it's doing in Oracle cloud as you potentially migrate things over. So that's one aspect. And then the other one is in terms of operations and automation. One of the things that we started doing again with Enterprise Manager in the last few years is making sure that everything has a REST API. So we try to make the experience with Enterprise Manager be very similar to how people work with a cloud service. Most folks now writing automation tools are used to calling REST APIs. EM in the early days didn't have REST APIs, now we're making sure everything works that way. And one of the advantages is that we can do extensibility without having to rewrite the product, in that we just add the API calls in the agent, and it makes it a lot easier to become part of the modern system. Another thing that we introduced last year, but that we're evolving with more dashboards and so forth, is the Grafana plugin. So even though Enterprise Manager provides lots of cool tools, a lot of cloud operations folks use a tool called Grafana. And so we provide a plugin that allows customers to have Grafana dashboards, but the data actually comes out of Enterprise Manager. So that allows us to integrate EM into a more cloudy world, in a cloud environment. I think the other important part is making sure that, again, Enterprise Manager has sort of a cloud feel to it.
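The point about everything having a REST API is what lets automation tools drive EM like a cloud service. The sketch below only shows the shape of such a call being constructed; the endpoint path, resource names, and parameters are hypothetical, not Enterprise Manager's documented API, so treat it as an illustration of the pattern rather than working EM code.

```python
# Build an authenticated REST request for a hypothetical management
# operation. Automation tooling would then hand this to an HTTP client;
# no network call is made here.

from urllib.parse import urlencode

def build_em_request(base_url, resource, action, params, token):
    """Return (url, headers) for a hypothetical management REST call."""
    query = urlencode(sorted(params.items()))  # stable parameter order
    url = f"{base_url}/api/{resource}/{action}?{query}"
    headers = {
        "Authorization": f"Bearer {token}",  # token auth, as cloud APIs do
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_em_request(
    "https://em.example.com", "databases", "list",
    {"status": "up"}, "secret-token")
```

The same pattern is why a Grafana plugin is straightforward: a dashboard panel is just another client issuing read-only calls against those endpoints.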
So when you do patching and upgrades, it's near zero downtime, which basically means that we do all the upgrades for you without having to bring EM down. Because even though it's a management tool, it's used for operations. So if there were downtime for patching Enterprise Manager for an hour, then for that hour, it's a blackout window for all the monitoring we do. And so we want to avoid that from happening, so now EM is upgrading even though all the events are still happening and being processed, and then we do a very short switch. So that helps our operations people to be more available. >> Yes. I mean, I've been talking about Automated Operations since, you know, lights out data centers since the eighties back in (laughs). I remember (indistinct) data center one time, lights out, there were StorageTek libraries in there and so... But there were a lot of unintended consequences around, you know, automated ops, and so people were sort of scared to go there, at least lean in too much, but now with all this machine intelligence... So you're talking about ops automation, you mentioned the REST APIs, the Grafana plugins, the Cloud feel, is that what you're bringing to the table that's unique, is that unique to Oracle? >> Well, the integration with Oracle in that sense is unique. So one example is you mentioned the word migration, right? And so database migration tends to be something, you know, customers obviously take very seriously. You go from one place, you have to move all your data to another place that runs in a slightly different environment. And so how do you know whether that migration is going to work? And you can't migrate a thousand databases manually, right? So automation, again, it's not just... Automation is not just to say, "Hey, I can do an upgrade of a system or I can make sure that nothing is done by hand when you patch something." It's more about having a huge fleet of servers and a huge fleet of databases.
How can you move something from one place to another and automate that? And so with EM, you know, we start with sort of the prerequisite phase. So we're looking at the existing environment: how much memory does it need? How much storage does it use? Which version of the database does it have? How much data is there to move? Then on the target side, we see whether the target can actually run in that environment. Then we go and look at, you know, how do you want to migrate? Do you want to migrate everything from a sort of a physical model, or do you want to migrate it from a logical model? Do you want to do it while your environment is still running, so that you start backing up the data to the target database while your existing production system is still running? Then we do a short switch afterwards, or you say, "No, I want to bring my database down. I want to do the migrate and then bring it back up." So there's different deployment models that we can let our customers pick. And then when the migration is done, we have a ton of health checks that can validate whether the target database will run basically the exact same way. And then you can say, "I want to migrate 10 databases or 50 databases" and it'll work. It's all automated out of the box. >> So you're saying, I mean, you've looked at the prevailing way you've done migrations, historically you'd have to freeze the code and then migrate, and it would take forever, it was a function of the number of lines of code you had. And then a lot of times, you know, people would say, "We're not going to freeze the code" and then they would almost go out of business trying to merge the two. You're saying in 2021, you can give customers the choice, you can migrate, you could change the, you know, refuel the plane while you're in midair? Is that essentially what you're saying? >> That's a good way of describing it, yeah. So your existing database is running and we can do a logical backup and restore.
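The prerequisite phase described above (check memory, storage, version on the source against the target before anything moves) reduces to a fleet-scale compatibility check. This toy version collects every mismatch rather than failing on the first one, which is what makes "migrate 50 databases and it'll work" reportable; the check names and numbers are invented for illustration.

```python
# Compare what a source database needs against what a target offers,
# returning the full list of problems so an operator sees everything
# at once before scheduling the migration.

def precheck(source, target):
    problems = []
    if source["memory_gb"] > target["memory_gb"]:
        problems.append("not enough memory on target")
    if source["storage_gb"] > target["storage_gb"]:
        problems.append("not enough storage on target")
    if source["version"] > target["version"]:
        problems.append("target database version is older than source")
    return problems

src = {"memory_gb": 64, "storage_gb": 500, "version": (19, 0)}
tgt = {"memory_gb": 32, "storage_gb": 2000, "version": (21, 0)}
issues = precheck(src, tgt)  # ["not enough memory on target"]
```

Running this over a thousand-database fleet is a loop; the hard part the interview stresses is having the checks encoded at all, instead of in a DBA's head.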
So while transactions are happening, we're still migrating it over, and then you can do a cutover. It makes the transition a lot easier. But the other thing is that in the past, migrations would typically be two things. One is one database version to the next, more upgrades than migrations. Then the second one is that old hardware on a different CPU architecture is moving to newer hardware and a new CPU architecture. Those were sort of the typical migrations that you had prior to Cloud. And from a sysadmin's point of view, or a DBA's, it was all something you could touch, that you could physically touch the boxes. When you move to cloud, it's this nebulous thing somewhere in a data center that you have no access to. And that by itself creates a barrier to a lot of admins and DBAs from saying, "Oh, it'll be okay." There's a lot of concern. And so by baking in all these tests and the prerequisites and all the dashboards to say, you know, "This is what you use. These are the features you use. We know that they're available on the other side so you can do the migration." It helps solve some of these problems and remove the barriers. >> Well, that was kind of the same vision when you guys came up with it, I don't know, quite a while ago now. And it took a while to get there, you know, you had gen one and then gen two, but that is, I think, unique to Oracle. I know maybe some others are trying to do that as well, but you were really the first to do that and so... I want to switch topics to talk about security. It's a hot topic. You guys, you know, like many companies, are really focused on security. Does Enterprise Manager bring any of that over? I mean, the prevailing way to do security oftentimes is to write scripts, and, you know, custom security policy scripts are fragile, they break. What can you tell us about security? >> Yeah. So there's really two things, you know. One is, we obviously have our own best security practices.
How we run a database inside Oracle for our own world, we've learned about that over the years. And so we sort of baked that knowledge into Enterprise Manager. So we can say, "Hey, if you install this way, we do the install and the configuration based on our best practice." That's one thing. The other one is there's STIG, there's PCI and there's HIPAA, those are the main ones. And so customers can do it their own way. They can download the documentation and do it manually. But what we've done, and we've done this for a long time, is basically bake those policies into Enterprise Manager. So you can say, "Here's my database, this needs to be PCI compliant or it needs to be HIPAA compliant," and you push a button and then we validate the policies in those documents or in those prescribed files. And we make sure that the database is compliant with that. And so we take that manual work and all that stuff basically out of the picture, we say, "Push this button and we'll take care of it." >> Now, Wim, just a quick sidebar here, last time we talked, it was under a year ago. It was definitely during COVID and it's still during COVID. We talked about the state of the penguin. So I'm wondering, you know, what's the latest update for Linux, any Linux developments that we should be aware of? >> Linux, we're still working very hard on Autonomous Linux and that's something where we can really differentiate and solve a problem. Of course, one of the things to mention is that Enterprise Manager can do HIPAA compliance on Oracle Linux as well. So the security practices are not just for the database, it can also go down to the operating system. Anyway, so on the Autonomous Linux side, you know, management in Oracle Cloud, OS Management, is evolving.
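The push-button compliance validation described above can be pictured as a rule table evaluated against a database's settings. The profiles and rule names below are invented stand-ins; real STIG, PCI, and HIPAA checks run to hundreds of detailed controls, which is exactly why baking them into the tool beats hand-written scripts.

```python
# Each compliance profile is a set of required settings. Validation
# returns every rule the database fails, so the operator gets a full
# report from one "button push" instead of maintaining fragile scripts.

PROFILES = {
    "PCI": {"encryption": True, "audit_enabled": True},
    "HIPAA": {"encryption": True, "audit_enabled": True,
              "password_policy": True},
}

def violations(profile, settings):
    """Return the sorted list of rules `settings` fails for `profile`."""
    rules = PROFILES[profile]
    return sorted(rule for rule, required in rules.items()
                  if settings.get(rule) != required)

db = {"encryption": True, "audit_enabled": False, "password_policy": False}
violations("HIPAA", db)  # ['audit_enabled', 'password_policy']
```

The same table-driven approach extends down to the operating system, which is the Oracle Linux point made in the answer.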
We're spending a lot of time on integrating log capturing, so that if something were to go wrong, we can analyze a log file on the fly and send you a notification saying, "Hey, you know, there was this bug, here's the cause, and here's potentially a fix for it." That's Autonomous Linux, and we're putting a lot of effort into that. And then also sort of IT operations management, where we can look at the different applications that are running. So you're running a web server on a Linux environment or you're running some Java processes, we can see what's running. We can say, "Hey, here's the CPU utilization over the past week or the past year." And then, how is this evolving? Say, if something suddenly spikes, we can say, "Well, that's normal, because every Monday morning at 10 o'clock there's a spike, or this is abnormal." And then you can start drilling this down. And this comes back to, over time, integration with whether it's APM or Logging Analytics, we can tie the dots, right? We can connect them, we can say, "Push this thing, then click on that link." We give you the information. So it's that integration with the entire cloud platform that's really happening now. >> Integration, there's that theme again. I want to come back to migration, and I think you did a good job of explaining how you sort of make that non-disruptive and, you know, your customers, I think, you know, generally you're pushing, you know, that experience which makes people more comfortable. But my question is, why do people want to migrate if it works and it's on prem? Are they doing it just because they want to get out of the data center business? Or is it a better experience in the cloud? What can you tell us there? >> You know, it's a little bit of everything. You know, one is, of course, the idea that data center maintenance costs are very high.
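The "every Monday at 10 o'clock there's a spike" distinction above is a baseline-anomaly test: compare the current reading against history for the same weekly slot, and only alarm when it falls far outside that baseline. The data and threshold here are illustrative, not what any Oracle product actually computes.

```python
# Flag a measurement as abnormal only if it is more than n standard
# deviations away from the historical values for the same weekly slot,
# so a regular Monday-morning spike is treated as normal.

from statistics import mean, pstdev

def is_abnormal(history, current, n_sigmas=3):
    """history: past readings for this same slot (e.g. Mondays 10:00)."""
    mu, sigma = mean(history), pstdev(history)
    return abs(current - mu) > n_sigmas * max(sigma, 1e-9)

monday_10am = [80, 82, 79, 81, 80]  # CPU % is always high in this slot
is_abnormal(monday_10am, 81)  # False: matches the weekly pattern
is_abnormal(monday_10am, 99)  # True: outside even the usual spike
```

Once a reading is flagged, the "drill down" step hands off to the APM and Logging Analytics integration the conversation keeps returning to.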
The other one is that when you run your own data center, you have these problems yourself; as a cloud vendor, we have these problems too, but we're in this business. If you buy a server, then in three years that server basically is depreciated, overtaken by new versions, and you have to do migration stuff. And so one of the advantages with cloud is you push a button, you have a new version of the hardware, basically, right? So the refreshes happen on a regular basis. You don't have to go and recycle that yourself. Then the other part is the subscription model. It's a lot easier to pay for what you use, rather than you have a data center, whether it's used or not, you pay for it. So there's the cost advantages and predictability: what you need, you pay for, and you can say, "Oh, next year we need to get X more VMs." And it's easier to scale that, right? We take care of dealing with capacity planning. You don't have to deal with capacity planning of hardware, we do that as the cloud vendor. So there's all these practical advantages you get from doing it remotely, and that's really what the appeal is. >> Right. So, as it relates to Enterprise Manager, did you guys have to, like, tear down the code and rebuild it? Was it an entire redo? How did you achieve that? >> No, no, no. So, Enterprise Manager keeps evolving and, you know, we changed the underlying technologies here and there, piecemeal, not sort of a wholesale replacement. And so in 13.5 there's a lot of new stuff, but it's built on the existing EM core. And so we're just, you know, improving certain areas. One of the things is, stability is important for our customers, obviously. And so by picking things piecemeal, we replace one engine rather than the whole thing. It allows us to introduce change more slowly, right. And then it's well-tested as a unit, and then we go on to the next thing.
And then the other one is, I mentioned earlier, a lot of the automation and extensibility comes from REST APIs. And so instead of basically rewriting everything, we just provide a REST endpoint and we make all the new features that we build automatically be REST enabled. So that makes it a lot easier for us to introduce new stuff. >> Got it. So if I want to poke around with this new version of Enterprise Manager, can I do that? Is there a place I can go, do I have to call a rep? How does that work? >> Yeah, so for information you can just go to oracle.com/enterprise manager. That's the website that has all the data. The other thing is, if you're already playing with Oracle Cloud or you use Oracle Cloud, we have Enterprise Manager images in the marketplace. So if you have never used EM, you can go to Oracle Cloud, push a button in the marketplace and you get a full Enterprise Manager installation in a matter of minutes. And then you can just start using that as well. >> Awesome. Hey, I wanted to ask you about, you know, people forget that you guys are the stewards of MySQL, and we've been looking at MySQL Database Cloud service with HeatWave. Did you name that? And so I wonder if you could talk about what you're doing with regard to managing HeatWave environments? >> So, HeatWave is the MySQL option that helps with analytics, right? And it really accelerates MySQL usage by 100x, and in some cases more, and it's transparent to the customer. So as a MySQL user, you connect with standard MySQL applications and APIs and SQL and everything. And the HeatWave part is all done within the MySQL server. The engine itself says, "Oh, this SQL query we can offload to the backend HeatWave cluster," which then does in-memory operations and blazingly fast returns it to you. And so the nice thing is that it turns every single MySQL database into also a data warehouse, without any change whatsoever in your application. So it's been widely popular and it's quite exciting.
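The "no application change" claim above rests on MySQL's secondary-engine mechanism: a table is marked for and loaded into the in-memory cluster, and the optimizer then offloads eligible queries on its own. The statements generated below follow that syntax as I understand it (RAPID being HeatWave's engine name); treat them as a sketch and verify against the MySQL HeatWave documentation before use.

```python
# Produce the DDL that enables HeatWave for one table. After these run,
# the same SELECTs are issued unchanged; the server decides per query
# whether to offload to the HeatWave cluster.

def heatwave_load_statements(table):
    return [
        f"ALTER TABLE {table} SECONDARY_ENGINE = RAPID;",  # mark for HeatWave
        f"ALTER TABLE {table} SECONDARY_LOAD;",            # load into memory
    ]

stmts = heatwave_load_statements("orders")
```

The design point worth noting is that the offload decision lives in the server, not the client, which is what makes an existing OLTP application double as a data warehouse without code changes.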
I didn't personally name it HeatWave, that was not my decision, but it sounds very cool. >> That's very cool. >> Yeah, it's a very cool name. >> We love MySQL, we started our company on the LAMP stack, so like many... >> Oh? >> Yeah, yeah. >> Yeah, yeah. That's great. So, yeah. And so with HeatWave, or MySQL in general, we're basically doing the same thing as we have done for the Oracle Database. So we're going to add more functionality in our database management tools to also look at HeatWave. So whether it's doing things like performance hub or generic database management and monitoring tools, we'll expand that, you know, in the near future. >> That's great. Well, Wim, it's always a pleasure. Thank you so much for coming back on "The Cube" and letting me ask all my Columbo questions. It was really a pleasure having you. (mumbling) >> It's good to be here. Thank you so much. >> You're welcome. And thank you for watching, everybody, this is Dave Vellante. We'll see you next time. (bright music)

Published Date : Apr 27 2021

IBM24 Clemens Reijnen VTT


 

(upbeat music) >> Narrator: From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Well, hi everybody, John Walls here on theCUBE as we continue our IBM Think initiative. And today talking with Clemens Reijnen, who is the Global CTO Cloud and DevOps Leader at Capgemini. And, Clemens, thanks for joining us here on theCUBE. Good to see you today. >> Thank you. Thank you very much. Nice to be here. >> Yeah, tell us a little bit about Capgemini, if you will, first off, for our viewers at home who might not be familiar with your services. Tell us a little bit about that and maybe a little bit more about your specific responsibilities there. >> So who doesn't know Capgemini in the greater world and the IT world, unless we lived under a stone. So Capgemini is a worldwide system integrator with offerings in all kinds of spaces and all areas there. My responsibility is mainly around cloud and DevOps, and taking care that countries and delivery centers have the right knowledge around cloud and the right capabilities around DevOps, and to support our customers with their journey to the cloud, into a digital organization. >> Yeah. Everybody's talking about digital these days. >> Everybody, yeah. >> And it's magical digital transformation that's occurring, that's been going on for quite some time. What does that look like to you? And when you start defining digital organizations and digital transformations, what are the kinds of things that you're talking about with organizations in terms of that kind of migration path? >> Yeah. So it's quite interesting to just start the discussion about how does a digital landscape look like for an organization that wants to start transforming to a digital organization. And then when you are looking at that, I'm always starting the discussion with business capabilities.
An organization wants to create business capabilities, either to interact and engage with their workforce and to make them work in the most efficient way. And what they are using for that are all kinds of different digital channels. And those digital channels, they can be a mobile app. I'm working with my mobile app to connect with my work. I'm calling, I'm using Zoom, I'm using Teams and that kind of stuff. We're also using chatbots for IT devices. And that's what the normal workforce expects nowadays: to have all those digital channels to interact with the business. Then there's also the other side, the customer side, and organizations want to engage and grow on the customer side and have their nice interactions there. And again, they are using those digital channels, all the different digital channels, maybe IoT, maybe APIs, to interact with those customers, to bring them the engagement and interaction they really want to have. And in that transformation part, definitely, they are looking at what kind of challenges they have with working with customers like this and working with their workforce. Now everybody's working from home, with challenges around maybe the connections and that kind of stuff. But they also started to leverage, and that's where the transformation and migration start, their on-prem systems, their legacy systems, to move those kinds of capabilities and enrich them with cloud native capabilities, with all kinds of enterprise solutions like the ones from IBM, for example, to expose that to their digital channels, to their organizations. And that's the landscape, how it looks. And then we have the discussion with organizations: How do you want to engage with your customers? What kind of digital channels do you need? What are the business systems you have, and how can we enrich them and expose them to the outside world with all the enterprise solutions around you? >> And when you talk about a process like this, which sounds holistic, right?
You're looking at, what do you have? Where do you want to go? What are your business needs? Which all makes great sense. But then all of a sudden you start hitting speed bumps along the way. There are always challenges in terms of deployments. There are always challenges in terms of decisions and those things. So what are you hearing again from the customer side about, what are my pain points? What are my headaches here? As I know, I want to make this jump, but how do I get there? And I have these obstacles in my way. >> Yeah, definitely. And the ones I explained already, on the workforce side and on the customer side: you want to have the engagement there, you want to have interactions there. And then you have that whole digital landscape, which comes with some interesting challenges. Then, how do I implement this landscape in the right, scalable way? How do I expose my data in such a way that it is secure? How do I leverage all the capabilities from the platforms I'm using? And how do I make all these moving parts consistent, compliant with the regulations I need to work towards? How do I make it secure? So those are definitely big enterprise challenges, like compliance, security and that kind of stuff, but also technology challenges. How do I adopt those kinds of technologies? How do I make it scalable? How do I make it really an integrated solution on its own? So that my platform is not only working for the digital channels we know right now, but is also ready for the digital channels we don't know yet that will start to come. That's the biggest challenge there for me. >> Yeah. I want to get into that a little bit later too. 'Cause you raised a great point. Well, let's just jump right now. We know what the here and now is, but you just talked about building for the future, building for a more expansive footprint or kinds of capabilities that frankly we're not even aware of right now.
So how do you plan for that kind of flexibility, that kind of agility, when it's a bit unpredictable? >> Yeah. And that's what every organization tries to be: agile, flexible, resilient, and you need to build your systems conforming to that. And, well, we normally start with: you need to have a clear foundation. And with a foundation, for example, when you are using the cloud for it, and every organization is using cloud for it, you want to have that foundation in such a way that those digital channels can connect really easily to it. And then the capabilities, the business capabilities created, are done by product teams; product and feature teams are creating those kinds of capabilities on top of that cloud foundation. And in that foundation, you want to put everything in place that makes it easy for those teams to focus on that business functionality, on those business capabilities. You want to make it very easy for them to do the right thing. I always love to say that that's what you want to put in your cloud foundation. And that's where you are harnessing your security: every application landing on the foundation is secure. You are embracing a standard way of working, although not every DevOps team likes that, they want to be self-organizing and that kind of stuff. But when you are having 50 or a 100 DevOps teams, you want to have some kind of standardization and provide them a way. And again, the easy way should be the right way: provide them templates, provide them technologies, so that they can really focus very quickly on those kinds of business capabilities. So the cloud foundation is the base that needs to be in place. >> Now, you've been doing this for a long time, and the conversation used to be, shall we move to the cloud? Can we move to the cloud? Now it's about how fast can we move to the cloud? How much do we move to the cloud?
So looking at that kind of change in paradigm, if you will, what are organizations having to consider in terms of the scale, the depth, the breadth of their offering now? Because innovation, as you know, can happen at a much faster pace than it could have just a very short time ago. >> Yeah. And then I'm reflecting again back to: the easy thing should be the right thing. That's what you want to do for your DevOps. >> I love that concept. (laughs) >> And that's where you should focus on as an organization. For example, what we've put in place: we put a lot of standardization, a lot of knowledge, in what we call an Inner Source library. And in that Inner Source library, for example, we put all kinds of scripts, all kinds of templates, all kinds of standardization for teams who want to deploy OpenShift on their platform or want to start working with certain cloud packs. So that they can set it up very easily, conforming to the standards of your organization, and start moving from there. And then in the cloud foundation, you have your cloud management and the IBM Cloud Manager, because organizations are definitely going towards the hybrid scenarios; different organizational units want to start using different clouds in there. And also for the migration part, you want to have that grow from there. And standardization, Inner Source, and having those templates ready, it's key for organizations now to speed up and be ready to start juggling around with workloads on any cloud where you want to, and that's the idea. >> Sure. Now, so Red Hat's involved in this, IBM involved as well, obviously your partnership working with them. Talk about that kind of merger of resources, if you will, and in terms of what the value proposition is to your clients at the end of the day, to have that kind of firepower working on their behalf. >> Yeah. And that's, for example, IBM is for us a very important partner.
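The Inner Source library idea above, where "the easy way should be the right way", amounts to a self-service lookup: a team names a capability and a target cloud and receives the pre-approved template instead of hand-rolling its own. The library contents and naming below are invented for illustration.

```python
# A minimal self-service catalog: (capability, cloud) -> approved template.
# Anything not in the catalog is rejected, which is how standardization
# is enforced without slowing teams down on the common paths.

LIBRARY = {
    ("openshift", "ibm"): "templates/openshift-ibm-cloud.yaml",
    ("openshift", "aws"): "templates/openshift-aws.yaml",
    ("cloudpak", "ibm"): "templates/cloudpak-ibm-cloud.yaml",
}

def request_template(capability, cloud):
    """Return the approved template path, or raise if none is approved."""
    try:
        return LIBRARY[(capability, cloud)]
    except KeyError:
        raise ValueError(f"no approved template for {capability} on {cloud}")

request_template("openshift", "ibm")  # 'templates/openshift-ibm-cloud.yaml'
```

The design choice is that the catalog, not the individual team, carries the compliance and security knowledge, so 50 or 100 DevOps teams all land on the same vetted setup.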
Definitely on the hybrid multi-cloud scenarios, where we can leverage OpenShift on those kinds of platforms for our customers. And we created, as I said, templates, scripts. We use the IBM Garage projects for it to create deployments for our teams in a kind of self-servicing way, to deploy those OpenShift clusters on top of the cloud platform of their choice. And then, for sure, with the Multicloud Manager from IBM, we can manage that actually in the landing zone, and that's actually the whole idea. You want to give the flexibility and the speed to your DevOps teams, to make doing the right thing the easy thing, and then manage it from your cloud foundation, so that they are comfortable that when they're putting the workloads in that whole multi hybrid cloud platform, it is managed, organized, all in the right way. And that's definitely where IBM Red Hat OpenShift comes into play. And because they already have such great tool sets ready, it really thinks DevOps. That's what I really like. And also with the migrations, it comes with a lot of DevOps capabilities in there: not plain lift and shift, but also the modernization immediately in there. And that's what I like about our partnership with IBM, is just, they have DevOps in mind also. That's cool. >> Yeah. What about the speed here? Just in general, just about, almost, the pace of change and what's happening in that space. 'Cause it used to be these kinds of things took forever, it seemed like, or evolutions, transitions were to take a long period of time. It's not the case anymore now, things are happening at relatively lightning speed. So when you're talking with an organization about the kinds of changes they could make and the speed at which they can do that, marry those up for me in those conversations that you're having. And if I'm a CIO out there and I'm thinking about how am I going to flip this switch, convince me right now. (Clemens laughing) What are the key factors?
And how easy, how right, will this be for me? >> So as a CIO, you want to have your scalable and your flexible organization. Probably at this moment, you're sitting with your on-prem system, with probably a very large relational database with several components around there. And now you want to fuel those digital channels there. The great way with IBM, with Red Hat, is that we can deploy OpenShift container solutions everywhere and then start to modernize those small components around that big relational database. And when we are starting to do that, we can do that really at light speed. And there, we have a factory model up and running, where we can put in the application landscape of a customer and look at it and say, "Okay, this one is quite easy. We are running it through our modernization street and it runs into a container." And from there, you start to untangle actually the hairball of your whole application landscape and start to move those components. And you definitely want to prioritize them. And that's where you have discussions with the business: which is most valuable to move first, and which one to move there. And that's actually what we put in place: the factory model to analyze an application landscape of a customer, having the discussions with those customers and then saying, "Okay, we are going to move these workloads first. Then we are going to analyze the complex ones, and then we are going to move those." And we really start moving their workloads to the cloud fast, so that they can start to reach those digital channels they want to have. >> Well, a great process. And I love your analogy, by the way, about the hairball there. (Clemens and John laughing) I totally get it. Hey, Clemens, thank you for the time today. I appreciate hearing the Capgemini story and about your partnership with IBM. Thank you very much. >> Thank you very much. >> All right.
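The factory model described above, triaging an application landscape and moving the easy, valuable workloads first, can be sketched as a simple scoring pass. The weights and the landscape are invented for illustration; a real assessment would score many more dimensions (data gravity, dependencies, compliance).

```python
# Score each application by how easy it is to containerize and how
# much the business values the move, then order the migration queue
# by descending score.

def migration_order(apps):
    """apps: list of (name, ease 0-10, business_value 0-10).
    Returns application names in suggested migration order."""
    return [name for name, ease, value in
            sorted(apps, key=lambda a: -(0.5 * a[1] + 0.5 * a[2]))]

landscape = [
    ("big relational DB", 2, 9),    # valuable but hard to untangle
    ("stateless web tier", 9, 6),   # easy win, goes first
    ("batch reporting", 6, 3),
]
migration_order(landscape)
# ['stateless web tier', 'big relational DB', 'batch reporting']
```

Ordering this way is what lets the "modernization street" show results fast while the discussions with the business about the hard components continue.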
So, well, we have learned one thing: the easy thing is the right thing, and that's the Capgemini way of getting things done. You've been watching part of the IBM Think initiative here on theCUBE. (upbeat music)

Published Date : Apr 16 2021


Rob Emsley, Dell Technologies and Stephen Manley, Druva | CUBEConversations


 

>> Overnight, COVID completely exposed those companies that were really not ready for the digital age. There was a mad rush to the cloud in an effort to reshape the very notion of business resiliency and enable employees to remain productive so that they could continue to serve customers. Data protection was at the heart of this shift, and cloud data protection has become a fundamental staple of organizations' operating models. Hello everyone, this is Dave Vellante, and welcome to this Cube Conversation. I'm joined by two longtime friends of theCUBE: Rob Emsley is the director of product marketing at Dell Technologies, and Stephen Manley is the chief technology officer at Druva. Guys, great to have you on the program. Thanks for being here. >> Yeah, great to be here, Dave. This is the high point of my day, Dave. >> All right, I'm glad to hear it, Stephen. It's been a while since we've seen you guys face-to-face. Maybe it'll happen before '22. >> But we haven't aged a bit, Dave. >> Ditto. Listen, we've been talking for years about this shift to the cloud, but in the past 12 months, boy, we've seen the pace of workloads that have moved to the cloud really accelerate. So Rob, maybe you could start it off. How do you see the market, and perhaps what are some of the blind spots that people need to think about when they're moving workloads so fast to the cloud? >> Yeah, good question, Dave. We've spoken a number of times about how our focus has significantly shifted over the last couple of years. Only a couple of years ago our focus was very much on on-premises data protection, but over the last couple of years more workloads have shifted to the cloud, customers have started adopting SaaS applications, and all of these environments are creating data that is so critical for these customers to protect. So we've definitely found that more and more of our conversations have been centered around: what can you do for me when it
comes to protecting workloads in the cloud environment. >> Yeah. Now of course, Stephen, this is kind of your wheelhouse. How are you thinking about these market shifts? >> Yeah, it's interesting. In the data protection market, heck, the data market in general, you see these sort of cycles happen, and for a long time we had a cycle where applications and environments were consolidating a lot. It was all VMs and Oracle and SQL, and we seem to be exploding out the other way now: there's a massive sprawl of different types of applications in different places. Like Rob said, you've got Microsoft 365, and you've got Salesforce, and you've got workloads running in the cloud. The world looks different. And you add on top of that the new security threats as people move into the cloud. A number of years ago we talked about how ransomware was an emerging threat; we're way past emerging, into there's a ransomware attack every six seconds, and everybody wakes up terrified about it. So we really see the market has shifted, both in terms of what the apps are and in terms of what the threats and the focus have become. >> Right. Well, thanks for that. There's some hard news which we're going to get to, but before we do, Rob, Stephen was mentioning the SaaS apps, and we've been sort of watching that space for a while, but a lot of people will ask: why do I need a separate data protection layer? Doesn't my SaaS provider protect my data? Don't they replicate it? They're cloud vendors, why do I need to buy yet another backup product? >> Yeah, there's a fairly common misconception, Dave, that both SaaS application vendors and cloud vendors inherently are providing all of the data protection that you need. The reality is that they're not. When you think about a lot of the data within those environments, certainly they're focused on providing
availability. And availability is absolutely one thing that you can, for the most part, rely on the cloud vendors to deliver to you. But when it comes to actually protecting yourself from accidental deletion, protecting yourself from cyber threats and cybercrime that may infect your data through malicious acts, that's really where you need to supplement the environment that the cloud providers give you with best-in-class data protection solutions. And this is really where we're looking to introduce new innovations into the market, to really help customers with their cloud-based data protection. >> Yeah. Now, you've got some news here, but let's kind of dig in, if we could, to the innovations behind that. Maybe Rob, you could kick it off, and then Stephen, we'll bring you in. >> Yeah. So the first piece of news that we're really happy to announce is the introduction of a new Dell EMC PowerProtect Backup Service, which is a new cloud data protection solution powered by Druva, hence the reason that Stephen and I are here today. It's designed to deliver additional protection without increasing IT complexity. >> So, "powered by Druva," what does that mean? Can you add some color to that? >> Absolutely. When we really started looking at the expansion of our PowerProtect portfolio, we already had the ability to deliver both on-premises protection and to deliver that same software within the public cloud, from a PowerProtect software delivery model. But what we really didn't have within the portfolio was a cloud data protection platform. We looked at what was available in the market, we looked at our ability to develop it ourselves, and we decided that the best path to bring capabilities to our customers as soon as we possibly could was to partner with Druva. When we really
looked at the capabilities that Druva has been delivering for many years, the capabilities that they have across many dimensions of cloud-based workloads, we were already engaged with them. Probably about six months ago we first introduced Druva as an option to be resold by our sales force and partners, and now we're pleased to introduce a Dell EMC-branded service, PowerProtect Backup Service. >> Okay, so just one more point of clarification, then Stephen, I want to bring you in. We're talking about this includes SaaS apps as well? I'm talking 365, the Google apps, which we use extensively, CRM, Salesforce, for example. What platforms are you actually connecting to and providing protection for? >> Yeah, so the real priority for us was to expand our PowerProtect portfolio to support a variety of SaaS applications. You mentioned real major ones, with respect to Microsoft 365, Google Workspace, as well as Salesforce. But the other thing that we also get with PowerProtect Backup Service is the ability to provide a cloud-based data protection service that supports endpoints, such as laptops and desktops, but also the ability to support hybrid workloads. So for some customers, the ability to use PowerProtect Backup Service to give them support for virtual machine backups, both VMware and Hyper-V, but also application environments like Oracle and SQL. And last but not least, one of the things that the backup service also provides when it comes to virtual machines is not only virtual machines on-premises, but also virtual machines within the public cloud, specifically VMware Cloud on AWS. >> So Stephen, I remember I was talking to Jaspreet several years ago, and I've always liked the Druva model, but it felt at the time like you were a little ahead of your time. Boy, the market has really come to you. Maybe you could just tell us a little bit more about, just generally, cloud-based data
protection, and the sort of lowdown on your platform. >> Yeah, and again, I think you're right, the market has absolutely swept in this direction. Like we were talking about, with applications in so many places, and endpoints in so many places, and data centers and remote offices, with data sprawled everywhere, we find customers are looking for a solution that can connect to everything. I don't want seven different backup solutions, one for each of those things; I want one centralized solution. So kind of a data-protection-as-a-service becomes really appealing, because instead of setting all of these things up on your own, well, it's just built in for you. And then the fact that it's as-a-service helps with things like the ransomware protection, because it's off-site, in another location, under another account. So we really see customers saying this is appealing because it helps keep my costs down, it helps keep my complexity down, there are fewer moving parts. And one of the nicest things is, as I move to the cloud, I get that one fixed cost. I'm not dealing with the, "Oh wow, this bill is not what I was expecting." It just comes in with what I was carrying. So it really comes down to: as you go to the cloud, you want a platform that's got everything built in, something that, and let's face it, with Dell EMC this has always been the case, is that storage of last resort, that backup that you can trust. You want something with a history. Like you said, you've been talking to Jaspreet for a while; Druva is a company that's got a proven track record that your data is going to be safe and it's going to be recoverable. And you're going to want someone that can innovate quickly, so that as more new cloud applications arise, we're there to help you protect them as they emerge. >> So talk a little bit more about the timing. I mean, we talked earlier about how COVID really forced the shift to
the cloud, and you guys clearly have skated to the puck. You also referenced sort of new workloads, and I'm just wondering how you see that from a timing standpoint, and at this moment in time, why this is such a right fit. >> Yeah, we've seen a lot of customers over the last, again, 12 months or so really accelerate their shift to things like SaaS applications, Microsoft 365, and we're not just talking Exchange Online and OneDrive, but SharePoint Online, Microsoft Teams, really going all in. Because they're finding that, as I'm distributed, as I have a remote workforce, my endpoints became more important again, but also the ability to have collaboration became important. And the more I depend on those tools to collaborate, the more I'm depending on them to replace what used to be in-person meetings, where we could have a whiteboard and discuss things, and it's done through online collaboration tools. Well, I need to protect that, not just because the data is important, but because that's now how my business is running. So that entire environment is important, and that's really accelerated people coming and looking for solutions, because they've realized how important these environments and this data are. >> So Stephen, you mentioned you guys obviously have a track record, but you've got some vision too, and I want to sort of poke at that a little bit. I mean, essentially, is what you're building an abstraction layer that is essentially my data protection cloud? Is that how we should think about this? And you've got reference pricing. I've seen your pricing; it's clean. It looks to me, anyway, like true cloud pricing: dial it up, dial it down, pay as you go, consume it as you wish. Maybe talk about that a little bit. >> Yeah, I mean, I think if you think about the future of consumption, so many customers are looking for different choices than what many vendors have provided them in the
past. I think the days of going through a long procurement cycle and working through purchasing in order to get a big capital expense approved, that's just not the way that many of our customers are looking to operate now. So one of the things that we're looking at across the portfolio, whether it be on-premises solutions or cloud-based services, is to provide all of that capability as a service. I think that will be a real future point of arrival for us, as we really rotate to offer that across all of our capabilities, Dave, whether it be in the domain of storage or in the domain of data protection. The concept of everything-as-a-service is really something which is going to become more of the norm, versus the exception. >> So what does a customer have to do to be up and running? What's that experience like? Do they just log on and everything's sort of there for them? What do they see? What's the experience like? >> Yeah, well, that's one of the great things about PowerProtect Backup Service. Once the customer has worked through their Dell Technologies sales team or their Dell Technologies partner, they effectively get an activation code to sign up and set up their credentials with PowerProtect Backup Service. And once they actually do that, one of the things that they don't have to worry about is the deployment of the infrastructure. The infrastructure is always on, ready to go. So what they do is simply point PowerProtect Backup Service at the data sources that they wish to protect. And that's one of the great advantages of a SaaS-based data protection platform; it's one of the things that makes it very easy to get customers up and running
with PowerProtect Backup Service. >> So I'm guessing you have a roadmap. You may be holding out on us on some of the other things that you're doing in this space, but what can you tell us about other things you might be doing, or that might be coming? What can we expect? >> Well, Dave, one of the things that we always talk about is the power of the portfolio. So with the addition of PowerProtect Backup Service, it's not the only news that we're making with respect to cloud data protection. I mentioned earlier that we have the ability to deploy our on-premises solutions in the public cloud with PowerProtect Data Manager and our PowerProtect virtual appliances. With this announcement that brings the backup service into the portfolio, we're also pleased to expand our support of the public cloud with full support of Google Cloud Platform, making PowerProtect Data Manager available in the Google Marketplace. And last but not least, our Cloud Snapshot Manager offering is now also fully integrated with our PowerProtect virtual appliances, to allow customers to store AWS snapshots in a deduplicated fashion within AWS S3. That's an excellent capability that we've introduced to reduce the cost of storing AWS infrastructure backups for longer periods of time. So really, we've continued to double down on bringing new cloud data protection capabilities to our customers, wherever they may be. >> Yeah, nice. Now Stephen, you guys must be stoked to have a partner like Dell, a massive distribution channel. I wonder if you could give us any final thoughts on the relationship, and how you see the future unfolding. >> Yeah, I mean, obviously I've got history with Dell and EMC and Rob, and one of the things I think Dell's always been fabulous at is giving customers the flexibility to protect their data when they want, how they want, where they
want, with the investment protection that, if it shifts over time, they'll be there for them, going all the way back to the Data Protection Suite and all those fantastic things we've done historically. So it's really great to align with somebody that's got the same kind of values we do. At Druva it's that same model: wherever you want to protect your data, wherever it is, we're going to be there for you. So it was great that Dell and Druva both saw this demand from our customers, and we said, this is the right match. This is how we're going to help people keep their data safe as they start, and continue, and extend their journeys to the cloud. So Dell proposes the PowerProtect Backup Service powered by Druva, and everybody wins: Dell's customers are safer, Dell completes its offering, and let's face it, it does help Druva accelerate our momentum. And it's a lot of fun just hanging out with the people I used to work with, especially Rob. It's good seeing him again. >> Well, you guys both have kind of alluded to the portfolio and the optionality that Dell brings to its customers, but Rob, a lot of times optionality brings complexity, and this seems to be a really strong step in the direction of simplifying the world for your customers. Rob, I'll give you the last word. >> Yeah, for sure. I mean, we've always said that it's not a one-size-fits-all world. One of the things that this evolution of our PowerProtect portfolio brings is an excellent added option for our customers. Many of the customers, if not almost all of the customers, that we currently sell to have a requirement for SaaS application protection. Many of them now, especially after the last year, have an added sensitivity to endpoint protection. So those two things alone, I think, are
two things that all Dell Technologies customers can really take advantage of with the introduction of PowerProtect Backup Service. This is just a continued evolution of our capabilities to bring innovative data protection for multi-cloud workloads. >> That last point is a great point about the endpoints, because you've got remote workers, so exposed. Guys, thanks so much for sharing the announcement details and the relationship, and really good luck with the offering. We'll be watching. >> Thanks, Dave. >> Thanks, Dave. >> And thank you for watching this Cube Conversation. This is Dave Vellante for theCUBE. We'll see you next time.
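Stephen's "one centralized solution that can connect to everything" idea, endpoints, SaaS apps, and VMs all registered once with a single as-a-service control plane instead of seven siloed backup tools, might be sketched like this. This is purely illustrative: the source kinds and the API shape are invented for the sketch, not Druva's or Dell's actual interfaces.

```python
# Illustrative-only model of a data-protection-as-a-service control plane:
# heterogeneous data sources register once, and one pass protects them all.

from dataclasses import dataclass, field


@dataclass
class Source:
    name: str
    kind: str  # e.g. "m365", "salesforce", "vm", "endpoint" (hypothetical labels)


@dataclass
class ProtectionService:
    sources: list = field(default_factory=list)

    def register(self, name, kind):
        # The customer simply points the service at a data source to protect.
        self.sources.append(Source(name, kind))

    def backup_all(self):
        # One policy pass over every registered source, instead of one tool per silo.
        return [f"backed up {s.kind}:{s.name}" for s in self.sources]


svc = ProtectionService()
svc.register("exchange-online", "m365")
svc.register("crm-prod", "salesforce")
svc.register("erp-vm-01", "vm")
svc.register("laptop-4711", "endpoint")
print("\n".join(svc.backup_all()))
```

The design point the sketch tries to capture is that adding a new source type changes only the registration, not the customer's operating model: the "deployment of the infrastructure" stays on the service side.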

Published Date : Apr 6 2021


Matt Burr, General Manager, FlashBlade, Pure Storage | The Convergence of File and Object


 

From around the globe, it's theCUBE, presenting The Convergence of File and Object, brought to you by Pure Storage. >> We're back with The Convergence of File and Object, a special program made possible by Pure Storage and co-created with theCUBE. So in this series we're exploring that convergence between file and object storage. We're digging into the trends, the architectures, and some of the use cases for unified fast file and object storage, UFFO. With me is Matt Burr, who's the vice president and general manager of FlashBlade at Pure Storage. Hello Matt, how you doing? >> I'm doing great. Morning Dave, how are you? >> Good, thank you. Hey, let's start with a little 101, kind of the basics. What is unified fast file and object? >> Yeah, so look, I think you've got to start with first principles: talking about the rise of unstructured data. When we think about unstructured data, you think about the projections: 80% of data by 2025 is going to be unstructured data, whether that's machine-generated data or AI and ML type workloads. You start to see this, I don't want to say it's a boom, but it's sort of a renaissance for unstructured data, if you will, where we move away from what we've traditionally thought of as general-purpose NAS and file shares to things that focus on fast object, taking advantage of S3, cloud-native applications that need to integrate with applications on site. AI workloads and ML workloads tend to share data across multiple data sets, and you really need to have a platform that can deliver both highly performant and scalable fast file and object from one system. >> So talk a little bit more about some of the drivers that bring forth that need to unify file and object. >> Yeah, I mean, look, there's a real challenge in managing bespoke infrastructure or architectures around general-purpose NAS and DAS, etc. So if you think about how an architect
sort of looks at an application, they might say, well, okay, I need to have fast DAS storage proximal to the application, but that's going to require a tremendous amount of DAS, which is a tremendous amount of drives. Hard drives are historically pretty unwieldy to manage, because you're replacing them relatively consistently at multi-petabyte scale. So you start to look at things like the complexity of DAS, you start to look at the complexity of general-purpose NAS, and you start to look at, quite frankly, something that a lot of people don't really want to talk about anymore, but actual data center space. Consolidation matters: the ability to take something that's the size of a microwave, like a modern FlashBlade or a modern UFFO device, and replace something that might be the size of three or four or five refrigerators. >> So Matt, why is now the right time for this? I mean, for years nobody really paid much attention to object. S3 obviously changed that course. Most of the world's data is still stored in file formats, and you get there with NFS or SMB. Why is now the time to think about unifying object and file? >> Well, because we're moving to things like a contactless society. The things that we're going to do are going to require a tremendous amount more compute power, network, and quite frankly, storage throughput. And I can give you two sort of real primary examples here. Warehouses are being taken over by robots, if you will. It's not a war, it's sort of a friendly advancement in how do I store a box in a warehouse. We have a customer who focuses on large big-box distribution warehousing, and a box that carried an object two weeks ago might have a different box size two weeks later. Well, that robot needs to know where the space is in the data center in order
to put it, but also needs to be able to process: hey, I don't want to put the thing that I'm going to access the most in the back of the warehouse; I'm going to put that thing in the front of the warehouse. All of those types of data, sort of real-time, you can think of the robot as almost an edge device, it's processing unstructured data in real time, and it's object. So it's the emergence of these new types of workloads. And I'll give you the opposite example, the other end of the spectrum: ransomware. Today we'll talk to customers, and they'll say quite commonly, hey, anybody can sell me a backup device; I need something that can restore quickly. If you had the ability to restore something at 270 terabytes an hour, or 250 terabytes an hour, that's much faster when you're dealing with a ransomware attack. You want to get your data back quickly. >> You know, I was actually going to ask you about that later, but since you brought it up, what is the right, I guess call it architecture, for ransomware? I mean, explain how unified object and file would support me. I get the fast recovery, but how would you recommend a customer go about architecting a ransomware-proof system? >> Yeah, well, with FlashBlade and with FlashArray there's an actual feature called SafeMode, and that SafeMode actually protects the snapshots and the data from being a part of the ransomware event. And so if you're in a ransomware situation like this, you're able to leverage SafeMode. What happens in a ransomware attack is you can't get access to your data, and so the bad guy, the perpetrator, is basically saying, hey, I'm not going to give you access to your data until you pay me X in Bitcoin, or whatever it might be. With SafeMode, those snapshots are actually protected outside of the ransomware blast zone, and you can bring back
those snapshots. Because what's your alternative if you're not doing something like that? Your alternative is either to pay and unlock your data, or you have to start restoring from tape or slow disk, and that could take you days or weeks to get your data back. So leveraging SafeMode in either the FlashArray or the FlashBlade product is a great way to go about architecting against ransomware. >> I've got to put on my, I'm thinking like a customer now. So SafeMode, that's an immutable mode, right? Can't change the data. Can an administrator go in and change that mode? Can he turn it off? Do I still need an air gap, for example? What would you recommend there? >> Yeah, so there are still RBAC, or role-based access control, policies around who can access that SafeMode. >> Right, okay. So anyway, a subject for a different day. I want to actually bring up, if you don't object, a topic that I think used to be really front and center and is now becoming front and center again. Wikibon just produced a research note forecasting the future of flash and hard drives, and those of you who follow us know we've done this for quite some time. If you could bring up the chart here, you can see, and we see this happening again. Originally we forecast the death of quote-unquote high-spin-speed disk drives, which is kind of an oxymoron. But you can see on this chart, the hard disk has had a magnificent journey, but it peaked in manufacturing volume in 2010, and the reason why that is so important is that volumes now are steadily dropping. You can see that. And we use Wright's Law to explain why this is a problem. Wright's Law essentially says that as your cumulative manufacturing volume doubles, your cost to manufacture declines by a constant percentage. Now, I won't go into too much detail on that, but suffice it to say that flash volumes are growing very
rapidly, HDD volumes aren't, and so flash, because of consumer volumes, can take advantage of Wright's Law and that constant reduction. And that's what's really important for the next generation, which is always more expensive to build. And so this kind of marks the beginning of the end. Matt, what do you think? What does the future hold for spinning disk, in your view? >> Well, I can give you the answer on two levels. On a personal level, it's why I come to work every day: the eradication, or extinction, of an inefficient thing. I like to say that inefficiency is the bane of my existence, and I think hard drives are largely inefficient. I'm willing to accept the long-standing argument that we've seen this transition in block, and we're starting to see it repeat itself in unstructured data, and I'm willing to accept the argument that cost is a vector here, and it most certainly is. HDDs have been considerably cheaper than flash storage, even to this day, up to this point. But we're starting to approach the point where you reach a 3x differentiator between the cost of an HDD and an SSD, and that really is the point in time when you begin to pick up a lot of volume and velocity. And so that tends to map directly to what you're seeing here, which is a slow decline, which I think is going to become even more rapid, probably starting around next year, where you start to see SSDs really replacing HDDs at a much more rapid clip, particularly on the unstructured data side. And it's largely around cost. The workloads that we talked about, robots in warehouses, or other types of advanced machine learning and artificial intelligence type applications and workflows, they require a degree of performance that a hard drive just can't deliver. We are seeing sort of
the creative, innovative disruption of an entire industry right before our eyes. It's a fun thing to live through. >> Yeah, and we would agree. I mean, the premise there is it doesn't have to be less expensive. We think it will be by the second half, or early second half, of this decade. But even if it's around a 3x delta, the value of SSD relative to spinning disk is going to overwhelm, just like with your laptop. It got to the point where you said, why would I ever have a spinning disk in my laptop? We see the same thing happening here. And we're talking about raw capacity. Put in compression and dedupe and everything else that you really can't do with spinning disks, because of the performance issues, that you can do with flash. Okay, let's come back to UFFO. Can we dig into the challenges, specifically, that this solves for customers? Give us some examples. >> Yeah, so if we think about the examples, the robotic one, I think, is the marker for kind of the modern side of what we see here. But what we're seeing from a trend perspective, and not everybody's deploying robots; there are many companies that aren't going to be in the robotic business or even thinking about sort of future-oriented type things. But what they are doing is: greenfield applications are being built on object, generally not on file and not on block. And so the rise of object as, let's call it, the next great protocol for modern workloads, this is that modern application coming to the forefront. And that could be anything from financial institutions right down through, we've even seen it in oil and gas,
and we're also seeing it across healthcare. So as companies, as industries, take this opportunity to modernize, they're modernizing not on things that leverage archaic disk technology; they're really focusing on object. But they still have file workflows that they need to be able to support. So having the ability to deliver those things from one device, in a capacity orientation or a performance orientation, while at the same time dramatically simplifying the overall administration of your environment, both physically and non-physically, is a key driver. >> So the great thing about object is it's simple. It's kind of a get-put metaphor. It scales out, because it's got metadata associated with the data, and it's cheap. The drawback is you don't necessarily associate it with high performance, and as well, most applications don't speak in that language. They speak in the language of file or, as you mentioned, block. So I see real opportunities here. If I have some data that's not necessarily frequently accessed every day, but I want to, whether it's end of quarter or machine learning, apply some AI to that data, I want to bring it in and then apply a file format for performance reasons. Is that right? Maybe you could unpack that a little bit. >> Yeah, I think you described it well, but I don't think object necessarily has to be slow. You brought up a good point with metadata: being able to scale to billions of objects is of value. I think people do traditionally associate object with slow, but it's not necessarily slow anymore. We did a sort of
unofficial survey of our customers and our employee base, and when people described object, they thought of it as law firms storing a Word doc. I think there's a lack of understanding, a misnomer, around what modern object has become. And performant object, particularly at scale, when we're talking about billions of objects, is the next frontier. Is it at pace, performance-wise, with the other protocols? No, but it's making leaps and bounds. >> So talk a little bit more about some of the verticals that you see. When I think of financial services, I think transaction processing, but of course they have tons of unstructured data. Are there any patterns you're seeing by vertical market? >> We're not, and that's the interesting thing. As a company with a block heritage, or a block DNA, those patterns were pretty easy to spot. There were a certain number of databases that you really needed to support: Oracle, SQL, some Postgres work, et cetera, and then the modern databases around Cassandra and things like that. You knew that there were going to be VMware environments. You could sort of see the trends and where things were going. Unstructured data is such a broader, horizontal thing. Inside of oil and gas, for example, you have specific applications and bespoke infrastructures for those applications. Inside of media and entertainment, the same thing. The trend that we're seeing, the commonality, is the modernization of object as a starting point for all of the net-new workloads within those industry verticals. That's the most common request we see: what's your object roadmap, what's your object strategy, where do you think
object is going? So there's no single path; it's really just a wide-open field in front of us, with common requests across all industries. >> The amazing thing about Pure, just as a bit of a quasi armchair historian of the industry, is that Pure was really the only company in many, many years to be able to achieve escape velocity and break through a billion dollars. 3PAR couldn't do it, Isilon couldn't do it, Compellent couldn't do it; I could go on. But Pure was able to achieve that as an independent company. And so you become a leader. You look at the Gartner Magic Quadrant, you're a leader in there. If you've made it this far, you've got to have some chops. And of course it's very competitive; there are a number of other storage suppliers that have announced products that unify object and file. So I'm interested in how Pure differentiates. Why Pure? >> It's a great question, and one that, having been a longtime Puritan, I take pride in answering. It's actually a really simple answer: it's business model innovation, and technology, the technology that goes behind how we do what we do. And I don't mean just the product. Innovation is product, but it's also having a better support model, for example, or, on the business model side, Evergreen Storage, where we look at your relationship to us as a subscription. We're going to take the thing that you've had, and we're going to modernize that thing in place over time, such that you're not re-buying that same terabyte or petabyte of storage that you've already paid for. So, sort of three legs of the stool that have made Pure clearly differentiated, and I think the market has recognized that. You're right, it's hard to break through to a billion dollars, but I look forward to the
day that we have two billion-dollar products. And I think with that rise in unstructured data, growing to 80% of data by 2025, and the massive transition that you guys have noted in your HDD slide, I think it's a huge opportunity for us on the unstructured data side of the house. >> You know, the other thing I'd add, Matt, and I've talked to Coz about this, is it's simplicity first. I've asked them, why don't you do this, why don't you do that, and the answer is always the same: that adds complexity. We put simplicity for the customer ahead of everything else, and I think that's served you very, very well. What about the economics of unified file and object? If you're bringing additional value, presumably there's a cost to that, but there's also got to be a business case behind it. What kind of impact have you seen with customers? >> Yeah, look, I'll go back to something I mentioned earlier, which is just the reclamation of floor space, power, and cooling. People want to search for the sexier element, if you will, when it comes to looking at how you derive value from something, but the reality is, if you're reducing your power consumption by a material percentage, power bills matter in big data centers. Customers typically are facing a paradigm of: I want to go to the cloud, but the cloud is turning out to be more expensive than I thought it was going to be; or, I've figured out what I can use in the cloud, I thought it was going to be everything, but it's not going to be everything, so hybrid is where we're landing; but I want to be out of the data center business, and I don't want a team of 20 storage people to administer my storage. So there's this very tangible value around: hey, if I could manage
multiple petabytes with one full-time engineer, because the system, to your and Coz's point, was radically simpler to administer and didn't require someone to be running around swapping drives all the time, would that be a value? The answer is yes, 100% of the time. And then you start to look at the UFFO side from a product perspective. Hey, if I have to manage a bespoke environment for this application, and a bespoke environment for that application, and another bespoke environment for yet another application, I'm managing four different things. And can I actually share data across those four different things? There are ways to share data, but for most customers it just gets too complex. How do you even know what your gold master copy of the data is if you have it in four different places, or you try to have it in four different places, and it's four different siloed infrastructures? So when you get to how you measure value in UFFO, it's actually being able to have all of that data concentrated in one place, so that you can share it from application to application. >> Got it. We've just got a couple minutes left, and I'm interested in the update on FlashBlade, you know, generally, but I also have a specific question. Look, getting file right is hard enough; you just announced SMB support for FlashBlade, and I'm interested in how that fits in. I think it's kind of obvious with file and object converging, but give us the update on FlashBlade, and maybe you could address that specific question. >> Yeah, so look, we're tremendously excited about the growth of FlashBlade. We found workloads we never expected to find. The rapid-restore workload was one that was actually brought to us by a customer, and it has become one of our top two, three, four workloads,
so we're really happy with the trend we've seen. And mapping back to thinking about HDDs and SSDs, we're well on a path to building a billion-dollar business here, so we're very excited about that. But to your point, you don't just snap your fingers and get there. We've learned that doing file and object is harder than block, because there are more things that you have to go do. For one, you're basically focused on three protocols: SMB, NFS, and S3, not necessarily in that order. But to your point about SMB, we are on the path to releasing full native SMB support in the system. That will allow us to service customers we have a limitation with today, where they'll have an SMB portion of their NFS workflow. We do great on the NFS side, but we didn't have the ability to plug into the SMB component of their workflow, so that's going to open up a lot of opportunity for us on that front. And we continue to invest significantly across the board in areas like security, which has become more than just a hot button. Security's always been there, but it feels like it's blazing hot today, and so over the next couple of years we'll be looking at developing some pretty material security elements of the product as well. So, well on a path to a billion dollars is the net on that, and we're fortunate to have SMB coming, and we're looking forward to introducing it to those customers that have NFS workloads today with an SMB component. >> Yeah, nice tailwind, good TAM expansion strategy. Matt, thanks so much. We're out of time, but really appreciate you coming on the program. >> We appreciate you having us, and thanks much, Dave. Good to see you. >> All right, good to see you. And you're watching The Convergence of
File and Object. Keep it right there; we'll be back with more right after this short break. [Music]
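The get-put metaphor Dave describes can be made concrete with a toy sketch. Everything below is invented for illustration (the class, keys, and metadata fields are not any Pure Storage or S3 API), but it shows the flat keyspace plus per-object metadata that lets object stores index billions of objects:

```python
# Toy object store illustrating the "get-put metaphor": a flat
# namespace of keys, each object carrying user metadata alongside
# its bytes. All names here are hypothetical, for illustration only.
class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # key -> (bytes, metadata dict)

    def put(self, key, data, **metadata):
        # Objects are written whole; there is no seek/append as with file.
        self._objects[key] = (data, metadata)

    def get(self, key):
        data, _ = self._objects[key]
        return data

    def head(self, key):
        # Metadata can be queried without moving the data itself, which
        # is what keeps metadata operations cheap at very large scale.
        _, metadata = self._objects[key]
        return metadata

store = ToyObjectStore()
store.put("scans/0001.tif", b"\x00" * 1024, patient="anon-17", modality="CT")
print(store.head("scans/0001.tif"))     # {'patient': 'anon-17', 'modality': 'CT'}
print(len(store.get("scans/0001.tif")))  # 1024
```

The split between `head` (metadata only) and `get` (the payload) is the design choice that makes object storage cheap to scan and catalog, while per-object performance remains the frontier Matt refers to.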

Published Date : Jan 28 2021


Eric Herzog, IBM & Sam Werner, IBM | CUBE Conversation, October 2020


 

(upbeat music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a CUBE conversation. >> Hey, welcome back everybody. Jeff Frick here with theCUBE, coming to you from our Palo Alto studios today for a CUBE conversation. We've got a couple of CUBE alumni veterans who've been on a lot of times. They've got some exciting announcements to tell us about today, so we're excited to jump into it. So let's go. First we're joined by Eric Herzog. He's the CMO and VP of worldwide storage channels for IBM Storage, and has been on theCUBE many times. Eric, great to see you. >> Great, thanks very much for having us today. >> Jeff: Absolutely. And joining him, I think all the way from North Carolina, Sam Werner, the VP and offering manager, business line executive for storage at IBM. Sam, great to see you as well. >> Great to be here, thank you. >> Absolutely. So let's jump into it. So Sam, you're in North Carolina; I think that's where the Red Hat people are. You guys have Red Hat, a lot of conversations about containers, containers are going nuts. We know containers are going nuts, and it was Docker and then Kubernetes, and really a lot of traction. Wonder if you can reflect on what you see from your point of view and how that impacts what you guys are working on. >> Yeah, you know, it's interesting. Everybody hears about containers constantly. Obviously it's a hot part of digital transformation. What's interesting about it, though, is most of those initiatives are being driven out of business lines. I spend a lot of time with the people who do infrastructure management, particularly the storage teams, the teams that have to support all of that data in the data center. And they're struggling, to be honest with you.
These initiatives are coming at them from application developers, and they're being asked to figure out how to deliver the same level of SLAs, the same level of performance, governance, security, recovery times, availability. And it's a scramble for them, to be quite honest. They're trying to figure out how to automate their storage, and how to leverage the investments they've made as they go through a digital transformation. And keep in mind, a lot of these initiatives are accelerating right now because of this global pandemic we're living through. I don't know that the strategy's necessarily changed, but there's been an acceleration. So all of a sudden these storage people are trying to get up to speed, or being thrown right into the mix. So we're working directly with them. You'll see, in some of our announcements, we're helping them, you know, get on that journey and provide the infrastructure their teams need. >> And a lot of this is driven by multicloud and hybrid cloud, which we're seeing a really aggressive move to. Before, it was kind of this rush to public cloud, and then everybody figured out, "Well, maybe public cloud isn't necessarily right for everything." And it's kind of this horses for courses, if you will, with multicloud and hybrid cloud, another kind of complexity thrown into the storage mix that you guys have to deal with. >> Yeah, and that's another big challenge. Now in the early days of cloud, people were lifting and shifting applications trying to get lower capex, and they were also starting to deploy DevOps in the public cloud in order to improve agility. And what they found is there were a lot of challenges with that: where they thought lifting and shifting an application would lower their capital costs, the TCO actually went up significantly. And where they started building new applications in the cloud,
they found they were becoming trapped there, and they couldn't get the connectivity they needed back into their core applications. So now we're at this point where they're trying to really transform the rest of it, and they're using containers to modernize the rest of the infrastructure and complete the digital transformation. They want to get into a hybrid cloud environment. What we found is enterprises get two and a half times more value out of their IT when they use a hybrid multicloud infrastructure model versus an all-public-cloud model. So what they're trying to figure out is how to piece those different components together. So you need a software-driven storage infrastructure that gives you the flexibility to deploy in a common way and automate in a common way, both in a public cloud and on premises. And that's what we're working on at IBM with our colleagues at Red Hat. >> So Eric, you've been in the business a long time, and it's amazing as it just continues to evolve, this kind of unsexy thing under the covers called storage, which is so foundational. Data was once maybe a liability, because I have to buy a bunch of storage; now it's the core asset of the company, and in fact the valuations of a lot of companies are based on the value of their data and what they can do with it. So clearly you've got a couple of aces in the hole; you always do. So tell us what you guys are up to at IBM to take advantage of the opportunity. >> Well, what we're doing is launching a number of solutions for various workloads and applications, built with a strong container element. For example, a number of solutions around modern data protection and cyber resiliency. In fact, it was actually only a year ago last week that Sam and I were on stage, and one of our developers did a demo of us protecting data in a container environment.
So now we're extending that beyond what we showed a year ago. We have other solutions that involve what we do with AI, big data, and analytic applications in a container environment. What if I told you that, instead of having to replicate and duplicate and have another set of storage right with the OpenShift Container configuration, you could connect to an existing external exabyte-class data lake? So that not only could your container apps get to it, but the existing apps, whether they be bare-metal or virtualized, all of them could get to the same data lake. Wow, that's a concept: saving time, saving money, one pool of storage that'll work for all those environments. And now that containers are being deployed in production, that's something we're announcing as well. So we've got a lot of announcements today across the board, most of which are container, and some of which are not; for example, LTO-9, the latest high-performance and high-capacity tape. We're announcing some solutions around there. But the bulk of what we're announcing today is really about what IBM is doing to continue to be the leader in container storage support. >> And it's great, 'cause you talked about a couple of very specific applications that we hear about all the time. One obviously on the big data and analytics side, as that continues to chase the goal of ultimately getting the right information to the right people at the right time so they can make the right decision. And the other piece you talked about was business continuity and data replication, to bring people back. And one of the hot topics we've talked to a lot of people about now is kind of this shift in the security threat around ransomware, and the fact that these guys are a little bit more sophisticated and will actually go after your backup before they let you know that they're into your primary storage.
So these are two really important market areas where we see continued activity with all the people we talk to every day. You must be seeing the same thing. >> Absolutely, we are indeed. You know, containers are the wave. I'm a native Californian, and I'm coming to you from Silicon Valley, and you don't fight the wave, you ride it. So at IBM we're doing that. We've been the leader in container storage. As you know, way back when, we invented the hard drive, which is the foundation of almost this entire storage industry, and we were responsible for that. So we're making sure that as containers become the coming wave, we are riding that in and doing the right things for our customers, and for our channel partners that support those customers, whether they be existing customers or, obviously with this move to containers, some people searching for a new vendor. And that's something that's going to go right into our wheelhouse because of the things we're doing. And some of our capabilities, for example, with our FlashSystems, with our Spectrum Virtualize: we're actually going to be able to support CSI snapshots not only for IBM Storage, but our Spectrum Virtualize product supports over 500 different arrays, most of which aren't ours. So if you've got that old EMC VNX2, or that HPE 3PAR, or a Nimble, or all kinds of other storage, if you need CSI snapshot support, you can get it from IBM, with our Spectrum Virtualize software that runs on our FlashSystems, which of course cuts capex and opex in a heterogeneous environment, but gives them that advanced container support that they don't get because they're on an older product from, you know, another vendor. We're making sure that we can pull our storage, and even our competitors' storage, into the world of containers and do it in the right way for the end user. >> That's great. Sam, I want to go back to you and talk about the relationship with Red Hat.
I think it was about a year ago, I don't have my notes in front of me, when IBM purchased Red Hat. Clearly you guys have been working very closely together. What does that mean for you? You've been in the business for a long time, you've been at IBM for a long time, to have a partner, you know, kind of embed with you, with Red Hat bringing some of their capabilities into your portfolio. >> It's been an incredible experience, and I always say my friends at Red Hat, because we spend so much time together. We're now looking at leveraging a community that's really on the front edge of this movement to containers. They bring that, along with their experience around storage and containers, together with the years and years of enterprise-class storage delivery that we have in the IBM Storage portfolio. And we're bringing those pieces together. This is a case of truly one plus one equals three. An example you'll see in this announcement is the integration of our data protection portfolio with their container-native storage. We allow you, in any environment, to take a snapshot of that data. This move towards modern data protection is all about doing data protection in a different way, which is about leveraging snapshots: taking instant copies of data that are application aware, allowing you to reuse and mount that data for different purposes, and to be able to protect yourself from ransomware. Our data protection portfolio has industry-leading ransomware protection and detection in it, so we'll actually detect it before it becomes a problem. We're taking that industry-leading data protection software and integrating it into Red Hat container-native storage, giving you the ability to solve one of the biggest challenges in this digital transformation, which is backing up your data, now that you're moving towards stateful containers and persistent storage. So that's one area we're collaborating.
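The snapshot-and-restore pattern Sam describes can be sketched with a toy model. This is illustrative only, assuming a dict-backed volume with invented names; it is not how Spectrum Protect Plus or Red Hat container-native storage implement snapshots internally:

```python
# Minimal sketch of snapshot-based protection: point-in-time copies
# of a volume that can later be rolled back to, e.g. after a
# ransomware event encrypts the live data. Toy model for illustration.
import copy

class Volume:
    def __init__(self):
        self.data = {}
        self.snapshots = []  # list of (label, frozen point-in-time copy)

    def snapshot(self, label):
        # An "instant copy": freeze the current state under a label.
        self.snapshots.append((label, copy.deepcopy(self.data)))

    def restore(self, label):
        # Roll the live data back to a prior point in time.
        for snap_label, state in reversed(self.snapshots):
            if snap_label == label:
                self.data = copy.deepcopy(state)
                return
        raise KeyError(label)

vol = Volume()
vol.data["orders.db"] = "customer records"
vol.snapshot("pre-upgrade")

# Simulate ransomware encrypting the live copy...
vol.data["orders.db"] = "x9f!!ENCRYPTED!!"

# ...and recover from the point-in-time snapshot.
vol.restore("pre-upgrade")
print(vol.data["orders.db"])  # customer records
```

Real implementations use copy-on-write rather than full deep copies, which is what makes the copies "instant"; the recovery semantics are the same.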
We're working on ensuring that our storage arrays, that Eric was talking about, integrate tightly with OpenShift, and that they also work, again, with OpenShift Container Storage, the cloud-native storage portfolio from Red Hat. So we're bringing these pieces together. And on top of that, we're doing some really interesting things with licensing. We allow you to consume the Red Hat storage portfolio along with the IBM software-defined storage portfolio under a single license, and you can deploy the different pieces you need under that one license. So you get this ultimate investment protection and the ability to deploy anywhere. So I think we're adding a lot of value for our customers and helping them on this journey. >> Yeah, Eric, I wonder if you could share your perspective on multicloud management. I know that's a big piece of what you guys are behind, and it's a big piece of kind of the real world as we've gotten through the hype and now we're into production. It is a multicloud world, and you've got to manage this stuff; it's all over the place. I wonder if you could speak to how that challenge factors into your design decisions and how you guys think about the future.
The other thing we've done is integrate the capability of IBM Spectrum Virtualize, running in a FlashSystem, to also support OCP, the OpenShift Container Platform, in a clustered environment. So what we can do there is on-premises: if there really was an earthquake in Silicon Valley right now, and that OpenShift is sitting on a server, the server just got crushed by the roof when it caved in. So you want to make sure you've got disaster recovery. So what we can do is take that OpenShift Container Platform cluster and support it with our Spectrum Virtualize software running on our FlashSystem, just like we can do with heterogeneous storage that's not ours; in this case, we're doing it with Red Hat. And then what we can do is provide disaster recovery and business continuity to different cloud vendors, not just to IBM Cloud but to several cloud vendors. We can give them the capability of replicating and protecting that cluster to a cloud configuration. So if there really was an earthquake, they could then go to the cloud, recover that Red Hat cluster to a different data center, and run it. So we're not only doing the integration with the Multicloud Manager, which is multicloud-centric, allowing ease of use with our Spectrum Protect Plus; in case of a really tough situation, a fire in a data center, an earthquake, a hurricane, whatever, the Red Hat OpenShift cluster can be replicated out to a cloud with our Spectrum Virtualize software. So both are multicloud examples, because in the first case, of course, the Multicloud Manager is designed for, and does support, multiple clouds. In the second example, we support multiple clouds with our Spectrum Virtualize for Public Cloud software, so you can take that OpenShift cluster, replicate it, and deal not just with one cloud vendor but with several.
So showing that multicloud management is important, and then leveraging that in this launch with a very strong element of container centricity. >> Right. >> Yeah, I just want to add, and I'm glad you brought that up, Eric, this whole multicloud capability with Spectrum Virtualize. I could see the same for our Spectrum Scale family, which is our storage infrastructure for AI and big data. We actually, in this announcement, have containerized the client, making it very simple to deploy in a Kubernetes cluster. But one of the really special things about Spectrum Scale is its active file management. This allows you to build out a file system not only on-premises for your Kubernetes cluster; you can actually extend that to a public cloud, and it automatically will extend the file system. If you were to go into a public cloud marketplace, and it's available in more than one, you can go in there and click deploy; for example, in AWS Marketplace, click deploy and it will deploy your Spectrum Scale cluster. You've now extended your file system from on-prem into the cloud. If you need to access any of that data, you can access it, and it will automatically be cached locally, and we'll manage all the file access for you. >> Yeah, it's an interesting kind of paradox between the complexity of what's going on in the back end and really trying to deliver simplicity on the front end. Again, this ultimate goal of getting the right data to the right person at the right time. You just had a blog post recently, Eric, where you talked about how every piece of data isn't equal. And I think it's really highlighted in this conversation we just had about recovery: how you prioritize and how you think about your data, because the relative value of any particular piece might be highly variable, which should drive the way that you treat it in your system.
So I wonder if you can speak a little bit to helping people think about data in the right way. They have all their operational data, which they've always had, but now they've got all this unstructured data coming in like crazy, and all data isn't created equal, as you said. And if there is an earthquake or a ransomware attack, you need to be smart about what you have available to bring back quickly, and maybe what's not quite so important. >> Well, I think the key thing, let me go to a couple of modern data protection terms. These are two very technical terms: one is the recovery time. How long does it take you to get that data back? And the second one is the recovery point: at what point in time are you recovering the data from? And the reason those are critical is, when you look at your datasets, whether you replicate, you snap, or you do a backup, the key thing you've got to figure out is: what is my recovery time? How long is it going to take me? What's my recovery point? Obviously in certain industries you want to recover as rapidly as possible, and you also want to have the absolute most recent data. So then, once you know what it takes you to do that from an RPO and an RTO perspective, recovery point objective and recovery time objective, you need to look at your datasets and figure out what it takes to run the company if there really was a fire and your data center was destroyed. So you take a look at those datasets and see which are the ones you need to recover first to keep the company up and rolling. So let's take an example: the sales database or the support database. I would say those are pretty critical to almost any company, whether you be a high-tech company, a furniture company, or a delivery company. However, there also is probably a database of assets. For example, IBM is a big company. We have buildings all over. Well, guess what?
We don't lease a chair or a table or a whiteboard. We buy them. Those are physical assets that the company has to, you know, do write-downs on and all this other stuff; they need to track it. If we close a building, we need to move the desks to another building. Even if we're leasing the building, the furniture is ours, right? So does an asset database need to be recovered instantaneously? Probably not. So we should focus on another thing. So let's take a bank. Banks are both online and brick and mortar. I happen to be a Wells Fargo person. So guess what? There are Wells Fargo banks, two of them, in the city I'm in, okay? So now, it's not the brick and mortar of the Wells Fargo building or the desks in there, but the financial assets and their high-velocity trading apps. Those things need to be recovered almost instantaneously. And that's what you need to do when you're looking at datasets: figure out what's critical to the business to keep it up and rolling, then what's the next most critical. And you do it in basically the way you would tier anything. What's the most important thing, what's the next most important thing. It doesn't matter how you approach your job, or how you used to approach school: which classes do I have to get an A in, and which classes can I afford not to get an A in? Depending on what your major was and all that sort of stuff, you're setting priorities, right? And with datasets, since data is the most critical asset of any company, whether it's a Global Fortune 500 or whether it's Herzog Cigar Store, that data is the most valuable. So you've got to make sure you recover what you need as rapidly as you need it. But you can't recover all of it at once; there's just no way to do that. So that's why you really rank the importance of the data. The same goes for malware and ransomware.
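The tiering logic Eric describes can be sketched in a few lines of code. This is a hypothetical illustration of the idea, not an IBM tool; the dataset names, RTO/RPO figures, and tier thresholds are all invented:

```python
# Hypothetical sketch of ranking datasets into recovery tiers by their
# recovery time objective (RTO) and recovery point objective (RPO).
# Thresholds and datasets are invented for illustration.

def recovery_tier(rto_minutes, rpo_minutes):
    """Smaller RTO/RPO means more critical, so a lower tier number."""
    if rto_minutes <= 5 and rpo_minutes <= 5:
        return 1   # near-instant recovery: trading apps, sales DB
    if rto_minutes <= 240:
        return 2   # same-business-day recovery
    return 3       # archive-class: asset databases, etc.

datasets = {
    "trading-app": (1, 0),        # (RTO minutes, RPO minutes)
    "sales-db":    (5, 5),
    "support-db":  (60, 15),
    "asset-db":    (1440, 1440),
}

# Recovery plan: restore the lowest-numbered tiers first.
plan = sorted(datasets, key=lambda d: recovery_tier(*datasets[d]))
for name in plan:
    print(name, "-> tier", recovery_tier(*datasets[name]))
```

Running this prints the trading app and sales database first (tier 1) and the asset database last (tier 3), which is exactly the prioritization exercise described above.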
If you have a malware or ransomware attack, certain data you need to recover as soon as you can. In fact, there was an example, Jeff, here in Silicon Valley as well. You've probably read about the University of California, San Francisco, which ended up having to pay over a million dollars of ransom because some of the data related to COVID research was held hostage. UCSF is the health care center for the University of California in Northern California. They were working on COVID, and guess what? The stuff was held for ransom. They had no choice but to pay, and they really did pay, around the end of June of this year. So, okay, you don't really want to do that. >> Jeff: Right. >> So you need to look at everything from malware and ransomware to the importance of the data. And that's how you figure this stuff out, whether it be in a container environment, a traditional environment, or a virtualized environment. And that's why data protection is so important. And with this launch, not only are we doing the data protection we've been doing for years, but we're now taking it to the heart of the new wave, which is the wave of containers. >> Yeah, let me add just quickly on that, Eric. So think about those different cases you talked about. For your mission critical data, you're probably going to want snapshots that can be recovered near instantaneously. And then, for some of your data, you might decide you want to store it out in the cloud. And with Spectrum Protect, we just announced our ability to store data out in Google Cloud, in addition to AWS, Azure, IBM Cloud, and various on-prem object stores, which we already supported. And then in this announcement we're talking about LTO-9 tape. And you've also got to be smart about which data you need to keep for long periods of time according to regulation, or which is just important to archive.
You're not going to beat the economics nor the safety of storing data out on tape. But like Eric said, if all of your data is out on tape and you have an event, you're not going to be able to restore it quickly enough, at least the mission critical things. And so those are the things that need to be in snapshots. And that's one of the main things we're announcing here for Kubernetes environments: the ability to quickly take application-aware snapshot backups of your mission critical data in your Kubernetes environments, so it can very quickly be recovered. >> That's good. So I'll give you the last word and then we're going to sign off. We are out of time, but I do want to get this in. It's 2020, and if I didn't ask the COVID question, I would be in big trouble. So, you know, you've all seen the memes and the jokes about COVID really being an accelerant to digital transformation, not necessarily a change in direction, but certainly a huge accelerant. I mean, I'm sure you guys have a product roadmap that's baked pretty far in advance, but I wonder if you can speak, from your perspective, to what COVID's acceleration of digital transformation has done in terms of what you're seeing with your customers, the demand, and the validation of the move to these better types of architectures. Let's start with you, Sam. >> Yeah, you know, I think I said this before, but the strategy really hasn't changed for the enterprises, though of course it is accelerating. And I see storage teams more quickly getting into trouble trying to solve some of these challenges. So we're working closely with them. They're looking for more automation. They have fewer people in the data center on-premises, so they're looking to do more automation and simplify the management of the environment. We're doing a lot around Ansible to help them with that.
We're accelerating our roadmaps around that sort of integration and automation. They're looking for better visibility into their environments, so we've made a lot of investments in our Storage Insights SaaS platform, which gives them complete visibility into their data center, and not just their data center; we also give them visibility into the storage they're deploying in the cloud. So we're making it easier for them to monitor, manage, and automate their storage infrastructure. And then of course, if you look at everything we're doing in this announcement, it's about enabling our software and our storage infrastructure to integrate directly into these new Kubernetes initiatives. That way, as this digital transformation accelerates and application developers are demanding more and more Kubernetes capabilities, they're able to deliver the same SLAs and the same level of security and governance that their customers expect from them, but in this new world. So that's what we're doing. If you look at our announcement, you'll see that across the sets of capabilities that we're delivering here. >> Eric, we'll give you the last word, and then we're going to go to Eric's Cigar Shop as soon as this is over. (laughs) >> So it's clearly all about storage made simple in a Kubernetes environment, in a container environment, whether it's block storage, file storage, or object storage. IBM's goal is to offer ever more sophisticated services for the enterprise and, at the same time, make them easier and easier to use and to consume. If you go back to the old days, a storage admin managed X amount of gigabytes, maybe terabytes. Now the same admin is managing 10 petabytes of data. So the data explosion is real across all environments: container environments, even old bare metal, and of course the not-quite-so-new-anymore virtualized environments. The admins need to manage all that more and more easily, with automated point and click.
Use AI-based automated tiering, for example with our Easy Tier technology, which automatically moves data to the fastest tier when it's hot, and when it's not as hot, when it's cool, pushes it down to a slower tier, but it's all automated. You point and you click. Let's take our migration capabilities. We built them into our software. I buy a new array, I need to migrate the data: you point, you click, and we do an automatic, transparent migration in the background, on the fly, without taking the servers or the storage down. And we always favor the application workload. So if the application workload is heavy at certain times of day, we slow the migration. At night, for the sake of argument, if it's a company that isn't truly running heavily 24 by 7 and the workload slows down, we accelerate the migration. All about automation. We've done it with Ansible, and here in this launch we've done it with additional integration with other platforms. So our Spectrum Scale, for example, can use the OpenShift management framework to configure and grow our Spectrum Scale or Elastic Storage System clusters. We've done it, in this case, with our Spectrum Protect Plus, as you saw, with integration into the multicloud manager. So for us, it's storage made simple: incredible new features all the time, but at the same time we make sure that it's easier and easier to use. And in some cases, like with Ansible, it's not even the real storage people; God forbid that DevOps guy messes with the storage and loses that data, wow. So if you're using something like Ansible and that Ansible framework, we make sure that essentially the DevOps guy, the test guy, the analytics guy basically doesn't lose the data and screw up the storage. And that's a big, big issue. So it's all about storage made simple, in the right way, with incredible enterprise features that we make easy to use. We're trying to make everything essentially like your iPhone, that easy to use.
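The workload-aware migration behavior Eric describes, slowing the background migration when the application is busy and accelerating it when the application goes quiet, can be sketched abstractly. This is a hypothetical illustration of the idea, not IBM's implementation; the IOPS threshold and the rates are invented:

```python
# Hypothetical sketch of workload-aware migration throttling:
# slow background data migration while application IO is heavy,
# accelerate it when the application goes quiet (e.g. overnight).
# Threshold and rates are invented for illustration.

def migration_rate_mbps(app_iops, busy_threshold=10_000,
                        slow_rate=50, fast_rate=500):
    """Pick a background-migration rate that always favors the app."""
    if app_iops >= busy_threshold:
        return slow_rate    # business hours: throttle the migration
    return fast_rate        # quiet period: accelerate the migration

# Simulated IOPS samples across a day: busy daytime, quiet night.
samples = [25_000, 18_000, 12_000, 4_000, 900, 300]
rates = [migration_rate_mbps(iops) for iops in samples]
print(rates)
```

The busy samples get the throttled 50 MB/s rate and the quiet ones get the accelerated 500 MB/s rate, mirroring the "slow it by day, speed it up at night" policy described above.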
That's the goal. And with a lot fewer storage admins in the world than there used to be, and incredible storage growth every single year, you'd better make it easy for the same person to manage all that storage. 'Cause it's not shrinking. Someone who's sitting on 50 petabytes today will be at 150 petabytes next year, and five years from now they'll be sitting on an exabyte of production data, and they're not going to hire tons of admins. It's going to be the same two or four people that were doing the work. Now they've got to manage an exabyte, which is why this storage made simple is such a strong effort for us: integration with the Kubernetes frameworks, what we've done with OpenShift, heck, even what we used to do in the old days with vCenter Ops from VMware, VASA, VAAI, all those old VMware tools. We made sure of tight integration, easy to use, easy to manage, but with sophisticated features to go with it. Simplicity is really about how you manage storage. It's not about making your storage dumb. People want smarter and smarter storage. So you make it smarter, but you make it just as easy to use at the same time. >> Right. >> Well, great summary. And I don't think I could do a better job, so I think we'll just leave it right there. So congratulations to both of you and the teams for these announcements, after a whole lot of hard work and sweat went in over the last little while, and continued success. And thanks for the check-in; always great to see you. >> Thank you. We love being on theCUBE, as always. >> All right, thanks again. He's Eric, he was Sam, I'm Jeff. You're watching theCUBE. We'll see you next time, thanks for watching. (upbeat music)

Published Date : Nov 2 2020


Caitlin Gordon promo v2


 

(upbeat music) >> From theCube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >> Hi, Lisa Martin here with Caitlin Gordon, the VP of product marketing for Dell Technologies. Caitlin, welcome back to theCube, we are excited to see you again. >> I'm very excited to be here again. >> So data protection is in the news; what's going on? >> Yeah, it's been a busy year. We had, obviously, our PowerProtect DD appliance launch last year. And then this year we had announcements on the software side; we had announcements at VMworld, some more at Dell Technologies World. And now today we're announcing even more, which is the new PowerProtect DP series appliances, the new integrated appliances; it's really exciting. So we now have our PowerProtect DD, the next generation of Data Domain, and we have our PowerProtect DP appliances, the integrated appliances. And that's all about combining both protection storage and protection software in a single, converged, all-in-one offering. It's really popular with our customers today because of the simplicity, the ability to really modernize your data protection in a very simple way and get up and running quickly. And in fact, it's the fastest growing part of the backup appliance market. >> I have read that the integrated appliance market is growing twice as fast as the target appliance market. So give us a picture of what customers can expect from the new DP series. >> Yeah, it's quite similar to our DD series from last year: there are four models in the new DP series. And it's really all about getting better performance and better efficiency. We've got new hardware-assisted compression and denser drives, and all that gives us the ability to get faster backups and faster recovery. In fact, you get 38% faster backups, 45% faster recovery, 30% more logical capacity, and 65 to 1 deduplication, which is just incredible.
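As a rough illustration of what a deduplication ratio like 65 to 1 means for capacity planning, the back-of-the-envelope math is simple division. The figures below are invented for illustration, not Dell sizing guidance:

```python
# Hypothetical back-of-the-envelope dedup math: at a 65:1 data
# reduction ratio, the physical capacity needed to hold a given
# amount of logical (pre-dedup) backup data shrinks accordingly.

def physical_tb_needed(logical_tb, dedup_ratio=65):
    """Physical capacity required for a given logical capacity."""
    return logical_tb / dedup_ratio

logical = 1300  # TB of logical backup data (invented figure)
print(physical_tb_needed(logical))  # 1300 TB logical fits in 20.0 TB
```

In other words, at that ratio 1300 TB of logical backups would land on only 20 TB of physical capacity, which is why the ratio matters so much for appliance economics.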
And 60,000 IOPS for instant access, so it really ups the game both in performance and in efficiency. >> Those are big numbers. You mentioned the DD launch last year; contrast that with what you're announcing now. What's the significance of the DP series? >> This is exciting for us because it does a couple of things. It expands our PowerProtect appliance family with the new DP series of integrated appliances. But at the same time, we're also announcing other important PowerProtect enhancements. On the software side, PowerProtect Data Manager, which we've been enhancing and talking about all year, also has some new improvements: the ability to deploy it in Azure and in AWS GovCloud for in-cloud protection, plus the enhancements that we've done with VMware that we talked about not that long ago at VMworld, about being able to integrate with storage-based policy management, really automating and simplifying VMware protection, and the ability to support Kubernetes as well. So not only is this an exciting appliance launch for us, but it also marks yet more enhancements on the PowerProtect Data Manager side. And all that together means that with PowerProtect, you really have a one-stop shop for all of your data protection needs, no matter where the data lives, no matter what the SLA, whether it's a physical or virtual appliance, whether it's target or integrated. You've got them all in the PowerProtect family now. >> Excellent. All right, last question for you, Caitlin. We know Dell Technologies is focused on three big waves: cloud, VMware, and cyber recovery. Anything else you want to add here? >> Cyber resiliency, cyber recovery: ransomware has really risen to the top of the list, unfortunately, for many organizations, and PowerProtect Cyber Recovery is a really important enhancement that we also have with this announcement today.
We've had this offering in market for a couple of years, but there's an exciting new enhancement here: it is now the first cyber recovery solution endorsed by Sheltered Harbor. And if you're not familiar with PowerProtect Cyber Recovery, it provides an automated, air-gapped solution for data isolation, and then CyberSense provides the analytics and the forensics for discovering, diagnosing, and remediating those attacks. So it's really all about protecting from, or recovering from, ransomware attacks, which unfortunately have become all too common for our customers today. >> Excellent news, Caitlin. Thanks for sharing what's new. Congratulations to you and the Dell team. >> Thank you so much, Lisa. >> For Caitlin Gordon, I'm Lisa Martin. You're watching theCube. (upbeat music)

Published Date : Oct 27 2020


Vaughn Stewart, Pure Storage | VMworld 2020


 

>> Narrator: From around the globe, it's theCUBE. With digital coverage of VMworld 2020 brought to you by VMware and its ecosystem partners. >> Welcome back, I'm Stuart Miniman and this is theCUBES's coverage of VMworld 2020. Our 11th year doing the show and happy to welcome back to the program one of our CUBE's alums. Somebody that's is going to VMworld longer than we have been doing it for theCUBE. So Vaughn Stewart he is the Vice President of Technology Alliances with Pure Storage Vaughn, nice to see you. How you doing? >> Hey, Stu. CUBE thanks for having me back. I miss you guys I wish we were doing this in person. >> Yeah, we all wish we were in person but as we've been saying all this year, we get to be together even while we're apart. So we look to you on little screens and things like that rather than bumping into each other at some of the after parties or the coffee shops all around San Francisco. So Vaughn, obviously you know Pure Storage long, long, long partnership with VMware. I think back the first time that I probably met with the Pure team, in person, it probably was around Moscone, having a breakfast having a lunch, having a briefing or the likes. So just give us the high level. I know we've got a lot of things to dig into. Pure and VMware, how's the partnership going these days? >> Partnership is growing fantastic Pure invests a lot of engineering resources in programs with VMware. Particularly the VMware design partner programs for vVols, Container-Native Storage et cetera. The relationship is healthy the business is growing strong. I'm very excited about the investments that VMware is making around VMware Cloud Foundation as a replatforming of what's going on MPREM to help better enable hybrid cloud and to support Tanzu and Kubernetes platforms. So a lot going on at the infrastructure level that ultimately helps customers of all to adopt cloud native workloads and applications. >> Wonderful. Well a lot of pieces to unpack that. 
Of course, Tanzu is a big piece of what they're talking about. But let's start with what you mentioned, VCF. What is it on the infrastructure side that is driving your customer adoption these days, and what are some of the latest integrations that you're doing? >> Yeah, you know, VCF has really caught the attention of our mid-size to enterprise customers. The focus of this replatforming, to use the VMworld phrase, is on simplifying lifecycle management and giving you a greater means to connect to the public cloud. I don't know if you're aware, but all VMware public cloud offerings are built on the VCF architectural framework. So now bringing that back on-prem allows customers, on a per-workload-domain basis, to extend to a hybrid cloud capability. It's a really big advancement over the base vSphere infrastructure, which architecturally hasn't had a significant advancement in a number of years. What's really big around VCF, besides the hybrid connectivity, is a couple of new tools: SDDC Manager and vSphere Lifecycle Manager. These tools can manage the infrastructure from bare metal up to workload domains, and then from workload domains you're handing off to what you can consider delegated vCenter Servers, right? So the owner of a workload, if you will, can then go ahead and provision virtual machines or containers based on whatever is required to run their workloads. So for us, the big gain of this is the advancement in VMware management. They are bringing their strength in providing simplicity and end-to-end hardware and application management to disaggregated architectures, where the focus of that capability has been with HCI over, say, the past five or six years. And so this really helps close that last gap, if you will, and completes a 360-degree view of providing simplified management across dissimilar architectures, and it's consistent and standardized by VMware.
So HCI, disaggregated architectures, public cloud: it all operates the same. >> So Vaughn, you made a comment about not a lot of changes. If I remember, our friends at VMware made the statement that vSphere 7 was the biggest architectural change in over a decade, of course bringing in Kubernetes as a major piece of the Tanzu discussion. Your team's been pretty busy in the Kubernetes space too, with the recent acquisition of Portworx to help accelerate that. So maybe let's talk a little bit about cloud native and what you're hearing from your customers. (chuckles) And yeah, Dave Vellante had a nice interview with the Pure and Portworx CEOs. Give the VMworld audience a little bit of an update on where you all fit in the Kubernetes space. >> Yeah, and actually, there was a lot in what you shared there, connecting the VCF piece through to vSphere 7 and a lot of changes there, driving into Tanzu and containers. So maybe we'll jump around here a bit, but look, we're really excited. We've been working with VMware, and in addition with all of our application partners, and you are seeing nearly every traditional enterprise application being replatformed to support containers. I'd love to share more details with you, but there are a lot of NDAs I'd be breaking. The wave of enterprise adoption of containers is right upon us, and so the timing for VMware Tanzu is ideal. Our focus has always been around providing a rich set of data services, one that provides faster provisioning, simplified fleet management, and the ability to move containers and those data services between different clouds and different cloud platforms, be it on-prem or in the public cloud space. We've had a lot of success doing that with Pure Service Orchestrator; version 6.0 enables CSI-compliant persistent storage capabilities, and it does support Tanzu today. The addition, or I should say the acquisition, of Portworx is really interesting.
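The CSI-compliant persistent storage mentioned above boils down to a consistent way for applications to request storage regardless of the backend. As a rough, hypothetical sketch, a Kubernetes PersistentVolumeClaim that targets a CSI-backed StorageClass can be built as a plain manifest; the class name `pure-block` here is invented for illustration and is not necessarily what Pure Service Orchestrator actually registers:

```python
# Hypothetical sketch: build a Kubernetes PersistentVolumeClaim manifest
# that requests storage from a CSI-backed StorageClass. The class name
# "pure-block" is invented for illustration.
import json

def make_pvc(name, storage_class, size_gi):
    """Return a PVC manifest as a plain dict, ready to serialize."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = make_pvc("postgres-data", "pure-block", 100)
print(json.dumps(pvc, indent=2))
```

The point of the sketch is the control-plane idea: the application asks for "100Gi from class pure-block" and never names an array, a LUN, or a cloud; the CSI driver behind the StorageClass does the provisioning.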
Because now we're bringing on an enhanced set of data services that not only run on Pure Storage products, but run universally, regardless of the storage platform or the cloud architecture. The capabilities within Portworx are above and beyond what we had in PSO, so this is a great expansion of our capabilities. And ultimately we want to help customers, whether they want to do containers solely on Tanzu, or mix Tanzu with, say, Amazon EKS, or they've got some department that does development on OpenShift, whatever it might be. The focus of storage vendors is obviously to help customers make that data available on these platforms through a consistent control plane. >> Yeah, Vaughn, it's a great acquisition, and I think a nice fit. Anybody that's been talking to Pure for the last year or so has heard it: how do we take storage and make it more cloud native, if you will. Obviously you've got a great partnership with VMware, but as you said, it's in Amazon and some of the other hyperscale clouds and those storage services too. No matter where a customer is, that core value, of course we know, is the software underneath. And that's what Portworx is: not only Pure's hardware, but other hardware, other clouds, and the like. So it's a really interesting space. You know, Vaughn, you and I have been covering this since the early days of VMware: hey, this software is kind of a big deal, and (chuckles) cloud in many ways is an extension of what we're doing. I know we used to joke: how many years was it that VMworld was storage world? >> Ooh, yeah. >> There was talk about big architectural changes, you know, and when vVols finally came out, it was years of hard work by many of the big companies, including your previous and current employers. What's the latest? My understanding is that there are some updates when it comes to the underlying vVols. What do the storage people need to know?
>> Yeah, great question. VMworld has always really been infrastructure world, right? It is a showcase for storage, but it's also been a showcase for the compute vendors and every Intel partner. From a storage perspective, a lot is going on this year that should really excite both VMware admins and those who are storage-centric in their day-to-day jobs. Let's start with the recent news: vVols has been promoted within VCF to being principal storage. For those of you who maybe are unfamiliar with the term 'principal storage': VMware Cloud Foundation supports any form of storage that's supported by vSphere, but the SDDC Manager tool that I was sharing with you earlier, which really excites large-scale organizations with its end-to-end simplicity and management, had a smaller, less robust support list when it comes to provisioning external storage. And so it had two tiers: principal and secondary. Principal meant SDDC Manager could provision and deprovision sub-tenants. So the recent news brings vVols, both on Fibre Channel and iSCSI, up to that principal tier. Pure Storage is a VMware design partner around vVols; we are one of the most adopted vVols storage platforms, and we are really leaning in on VCF. So we are very happy to see that come to fruition for our customers. Part of why VMware partners with Pure Storage around VCF is that they want VCF enabled on any fabric, and, you know, some vendors offer only Ethernet forms of connectivity. But with Pure Storage, we don't care what your fabric is, right? We just want to provide the data services, be it Ethernet, Fibre Channel, or next-generation NVMe over Fabrics. That last point segues into another recent announcement from VMware, which is the support for NVMe over Fabrics within vSphere 7. This is key because NVMe over Fabrics allows the IO path to move away from a SCSI-based form of communication to a memory-based form of communication.
And this unleashes a new level of performance, a way to better support those business and mission critical applications, or a way to drive greater density into a smaller form factor and footprint within your data center. Obviously, fabric upgrades tend not to happen in conjunction with hypervisor upgrades, but the ability to provide customers a roadmap and a means to continually evolve their infrastructure non-disruptively is key there. It would be remiss of me not to point out one kind of orthogonal element, which is the new vMotion capabilities in vSphere 7. Customers have tried for a number of years, probably from vSphere 4 through 6, to virtualize more performance-centric and resource-intense applications, and they've had some challenges around scale, particularly with the ability to non-disruptively move a workload. VMware rewrote vMotion for vSphere 7 so it can tackle these larger, more performance-centric workloads. And when you combine that with the addition of NVMe over Fabrics support, I think you're truly at a time where you can say almost every workload can run on a VMware platform, right? From the traditional consolidation where you started, to performance-centric AI and machine learning workloads. >> Yeah, a lot of pieces you just walked through, Vaughn. I'm glad about the NVMe over Fabrics piece especially; I just want to drill down one level there. As you said, there are a lot of pieces to make sure this all works: the standards are done, the software is there, the hardware, the various interconnects. And then, okay, when is the customer actually ready to upgrade? How much of that is just, you know, hitting the update button, and how much of it requires a refresh? And we understand the testing and purchasing cycles there.
So how many customers are you talking to that are like, "Okay, I've got all the pieces, we're ready to roll, we're implementing in 2020"? And what does that roadmap look like for the typical enterprise, which I know is a bit of an oxymoron? (laughs) >> So we've got a handful. I think that's a fair way to give you a size without giving you an exact number. We have a handful of customers who have NVMe over Fabrics deployments today. The deployments tend to be application- or workload-centric rather than ubiquitous across the data center, which I think does present an opportunity for VMware adoption to happen a little bit earlier than across the entire data center, because most VMware architectures today are based on top-of-rack switching. Whether that switching is Fibre Channel or Ethernet based, I think the ability to then upgrade that switch is attractive: either you've got modern hardware and it just needs a firmware update, or you've got to replace that hardware to implement NVMe over Fabrics. Particularly since you can do so in a non-disruptive manner with FlashArray or with FlashStack. We expect to see the adoption really start to take hold in 2021, but you probably won't see large market gains until 2022 or '23. >> Well, that's super helpful, Vaughn. Pure Storage has customers with some of the most demanding performance environments out there, so they are the early adopters you would expect to take up this new technology. All right, I guess last piece: listening to the keynote and looking at all the announcements, VMware obviously has a big push into the cloud native space, and they've made a whole lot of acquisitions. We touched on it a little bit before, but what's your take on what you're hearing from your customers, and where they are with adopting and really modernizing and accelerating their businesses today?
>> I think for the majority of our customers — and again, I would consider us more commercial- or mid-market-centric up through enterprise, particularly enterprise — they've adopted cloud native technologies, particularly in developing their own internal or customer-facing applications. So I don't think the technology is new. Where it's newer is this re-platforming of enterprise applications, and I think that's what's driving the timeline for VMware. We have a number of Pivotal deployments — very large-scale Pivotal deployments — that run on Pure. And as our audience hopefully knows, Pivotal is what VMware Tanzu has been rebranded as. So we've had success there. We've had success in the test-and-development and web-facing application spaces. But now this is a broader initiative from VMware: supporting enterprise apps along with the cloud native, disaggregated applications that have been built over the last, say, five to 10 years, but providing it all through a single management plane. So I'm bullish, I'm really bullish. I think they're in a unique position compared to the rest of our technology partners: they own the enterprise virtualization real estate, and so their ability to successfully add cloud native applications to that is a powerful mix. For us the opportunity is great. I want to thank you for focusing on the fact that we've been able to deliver performance, but performance is found on any flash product. And it's not to demote our performance by any means, but when you look at our customers and why they make repeat purchases, it's around simplicity, it's around the native integration with VMware, and the extending of that value prop through our capabilities — whether it's end-to-end infrastructure management or data protection extending into the hybrid cloud. That's where Pure Storage customers fall in love with Pure Storage.
And so it's a combination of performance, simplicity and, ultimately, you know, economics. As we know, economics drive most technical decisions, not the actual technology itself. >> Well, Vaughn Stewart, thank you so much for the update, and congratulations on all the new things being brought out in the partnership. >> Thank you, Stu, I appreciate being on theCUBE. Big shout out to VMware — congratulations on VMworld 2020. Look forward to seeing everybody soon. >> All right, stay tuned for more coverage of VMworld 2020. I'm Stu Miniman, and thank you for watching theCUBE. (bright upbeat music)

Published Date : Sep 30 2020


Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud


 

>> All right, we're five minutes after the hour, so all aboard — who's coming aboard? Welcome, everyone, to the tutorial track for our Launchpad event. For the next couple of hours, we've got a series of videos and experts on hand to answer questions about our new product, Docker Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Mills; I run curriculum development for Mirantis. And — >> I'm Bruce Matthews. I'm the Western regional Solutions Architect for Mirantis in the USA, and welcome, everyone, to this lovely Launchpad event. >> We're lucky to have you with us, Bruce; at least somebody on the call knows something about Docker Enterprise Container Cloud. Speaking of people that know about Docker Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and discuss the product. So that's us, I guess. As for Docker Enterprise Container Cloud: this is Mirantis's brand-new product for bootstrapping Docker Enterprise Kubernetes clusters at scale. Anything to add, Bruce? >> No, just that I think what we're trying to do is give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing: to provide some mini training and education in a very condensed period. So — >> Yeah, that's exactly what you're going to see. In the series of videos we have today, we're going to focus on your first steps with Docker Enterprise Container Cloud, from installing it to bootstrapping your regional and child clusters, so that by the end of the tutorial content today, you're going to be prepared to spin up your first Docker Enterprise clusters using Docker Enterprise Container Cloud. So, just a little bit of logistics for the session.
We're going to run through these tutorials twice. We're going to do one run-through, starting seven minutes ago, up until, I guess, 10:15 Pacific time. Then we're going to run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning — 10:15 Pacific time, we're going to do the whole thing over again. So if you want to see the videos twice, or you've got friends and colleagues you want to pull in for a second chance to see this stuff, we're going to do it all, all twice, in this session. Any logistics I should add, Bruce? >> No, I think that's pretty much what we had to nail down here. But let's zoom right into those, uh, feature films. >> Let's do it. And like I said, don't be shy: feel free to ask questions in the chat; our engineers, and Bruce and myself, are standing by to answer your questions. So let me just tee up the first video here. And here we go. Our first video is going to be about installing the Docker Enterprise Container Cloud management cluster. I like to think of the management cluster as your mothership, right? This is what you're going to use to deploy all those little child clusters that you're going to use as, like, commodity clusters downstream. So the management cluster is always our first step. Let's jump in there now.
The deployment is broken up into five phases. The first phase is preparing a big strap note on this dependencies on handling with download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the adduce environment. The fourth configuring the deployment, defining things like the machine types on the fifth phase. Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node, just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now we're just checking through AWS to make sure that the account we want to use we have the correct credentials on the correct roles set up and validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just going to check that we can, from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next, we're going to run it. I'm in. Deploy it. Changing into that big struck folder. Just making see what's there. Right now we have no license file, so we're gonna get the license filed. Oh, okay. Get the license file through the more antis downloads site, signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, Once we've done that, we can now go ahead with the rest of the deployment. See that the follow is there. Uh, huh? That's again checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. All right, The next big step is valid in all of our AWS credentials. 
So the first thing is, we need those root credentials, which we're going to export on the command line. This is to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running an AWS policy create. Part of that is running our bootstrap script to create the necessary policy files on top of AWS — just generally preparing the environment using a CloudFormation script, as you'll see in a second. It gives us the new policy confirmations; we're just waiting for it to complete. And there, it's done. If we have a look at the AWS console, you can see that the creation has completed. Now we can go and get the credentials that we created. In the IAM console, go to the new user that's been created, go to the section on security credentials, and create new keys. Download that information — namely the access key ID and the secret access key. These are then exported on the command line. Okay, a couple of things to note: ensure that you're using the correct AWS region, and ensure that in the config file you put in the correct AMI for that region. You'll see it come together in a second. Okay — that's the access key and the secret access key. Right, let's kick it off. This process takes between thirty and forty-five minutes. It handles all the AWS dependencies for you, and as we go through, I'll show you how you can track it, and we'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS, and then the local cluster is shut down — essentially moving itself over. Okay, the local cluster is built; we're just waiting for the various objects to get ready.
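The credential-export step Sean walks through can be sketched on the command line like this. The environment variable names are the standard ones AWS tooling reads; the key values are placeholders, and the commented-out bootstrap invocation is illustrative, since the exact script name and arguments come from the Mirantis docs for your release:

```shell
# Export the access keys created for the bootstrap user (values are placeholders)
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
export AWS_DEFAULT_REGION="us-west-1"   # must match the region whose AMI you put in the config

# With the license file in place and the config file edited, kick off the deployment.
# (Script name is illustrative -- use the bootstrap command from the docs.)
# ./bootstrap.sh deploy    # takes roughly 30-45 minutes end to end
```

The region export is the one that bites people: if it disagrees with the AMI configured in the deployment file, the machine builds fail, as the video warns.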
Standard Kubernetes objects here. Okay, so we've sped up this process a little bit just for demonstration purposes. There we go — the first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds we'll see those instances in the AWS console on the right. The failures that you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. Okay, and there we go: the bastion host has been built, and three instances for the management cluster have now been created. We're going through the process of preparing those nodes, and now copying everything over. See that — the scaling up of controllers in the bootstrap cluster is indicating that we're starting all of the controllers in the new cluster. Almost there. Just waiting for Keycloak to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration for authentication. As soon as this is completed, the last phase will be to deploy StackLight — the logging and monitoring tool set — into the new cluster. There we go: the StackLight deployment has started. Coming to the end of the deployment now — final phase of the deployment — and we are done. Okay, you'll see at the end they're providing us the details for the UI login. There's a Keycloak login; you can modify the initial default password as part of the configuration setup, per the documentation. The console's up and we can log in. Thank you very much for watching. >> Excellent. So in that video our wonderful Field CTO, Sean O'Mara, bootstrapped up a management cluster for Docker Enterprise Container Cloud. Bruce, where exactly does that leave us? Now we've got this management cluster installed — what's next?
>> So primarily, it's the foundation for being able to deploy either regional clusters, which will then allow you to support child clusters. The next piece of what we're going to show — I think with Sean O'Mara doing this — is the child cluster capability, which allows you to then deploy your application services on the local cluster that's being managed by the management cluster we just created with the bootstrap. >> Right, so this cluster isn't yet for workloads. This is just for bootstrapping up the downstream clusters — those are what we're going to use for workloads. >> Exactly. Yeah. And I just wanted to point out, since Sean O'Mara isn't around to actually answer questions: I could listen to that guy read the phone book and it would be interesting, but anyway, you can tell him I said that. >> He's watching right now, Bruce. Good. Um, cool. So, just to make sure I understood what Sean was describing there: that bootstrap node that you ran Docker Enterprise Container Cloud from to begin with — that's actually creating a kind deployment, a Kubernetes-in-Docker deployment, locally. That then hits the AWS API, in this example, to make those EC2 instances, and it makes a three-manager Kubernetes cluster there, and then it copies itself over to those Kubernetes managers. >> Yeah, and that's sort of where the transition happens. You can actually see it in the output when it says "I'm pivoting" — I'm pivoting from my local kind deployment of the cluster API to the cluster that's being created inside of AWS or, quite frankly, inside of OpenStack or on bare metal. The targeting is abstracted. >> And those are the three environments that we're looking at right now, right: AWS, bare metal and OpenStack environments. So does that kind cluster on the bootstrapper go away afterwards? You don't need it afterwards — it's just temporary?
To get things bootstrapped — and then you manage things from the management cluster, on AWS in this example? >> Yeah. The seed cluster, post-bootstrap, is not required anymore, and there's no interplay between them after that. So there are no dependencies on any of the clouds that get created thereafter. >> Yeah, that actually reminds me of how we bootstrapped Docker Enterprise back in the day: via a temporary container that would bootstrap all the other containers and then go away. So it's a similar temporary, transient bootstrapping model. Cool. Excellent. What all do we configure there? It looked like there wasn't a ton, right? It looked like you had to set up some AWS parameters, like credentials and region and stuff like that, but other than that, it looked heavily scriptable — there wasn't a ton of point-and-click there. >> Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint. The config file that's generated — the template — is fairly straightforward and targeted towards a small, medium or large deployment. And by editing that single file and then gathering the license file and all the things that Sean went through, it makes it fairly easy to script. >> And if I understood correctly as well, that three-manager footprint for your management cluster — that's the minimum, right? We always insist on high availability for this management cluster, because boy, you do not want to lose it. >> Right, right. And you know, there's all kinds of persistent data that needs to be available regardless of whether one of the nodes goes down or not. So we're taking care of all of that for you behind the scenes, without you having to worry about it as a developer.
>> I think that's a theme that will come back throughout the rest of this tutorial session today: there's a lot of expertise baked into Docker Enterprise Container Cloud in terms of implementing best practices for you — the defaults are just the best practices of how you should be managing these clusters, and we'll see more examples of that as the day goes on. Any interesting questions you want to call out from the chat, Bruce? >> Well, there was one that we had responded to earlier, about the fact that it's a management cluster that can then deploy either a regional cluster or a local child cluster. The child clusters, in each case, host the application services. >> Right. So at this point we've got, in some sense, the simplest architecture for Docker Enterprise Container Cloud: we've got the management cluster, and we're going to go straight to child clusters. In the next video there's a more sophisticated architecture, which we'll also cover today, that inserts another layer between those two — regional clusters — if you need to manage regions, like across AWS regions, with Docker Enterprise Container Cloud. >> Yeah, and that local support for the child clusters makes it a lot easier for you to manage the individual clusters themselves, and to take advantage of our observability support systems — StackLight and things like that — for each one of the clusters locally, as opposed to having to centralize them. >> So, a couple of good questions in the chat here. Someone was asking for the instructions to do this themselves. I strongly encourage you to do so. That's all in the docs, which I think Dale helpfully — thank you, Dale — provided links for; it's all publicly available right now. So just head on into the docs via the links Dale provided here. You can follow this example yourself. All you need is a Mirantis license for this and your AWS credentials.
There was a question from an attendee here about deploying this to Azure. Not at GA — not at this time. >> Yeah, although that is coming. That's going to be in a very near-term release. >> I didn't want to make promises for product, but I'm not too surprised that Azure's going to be targeted very soon. Cool. Okay. Any other thoughts on this one, Bruce? >> No, just that the fact that we're running through these individual pieces of the steps will, I'm sure, help you folks. If you go to the link that the gentleman put into the chat, giving you the step-by-step, it makes it fairly straightforward to try this yourselves. >> I strongly encourage that, right? That's when you really start to internalize this stuff. Okay, but before we move on to the next video, let's just make sure everyone has a clear picture in your mind of where we are in the lifecycle here. Creating this management cluster — stop me if I'm wrong — is something you do once, right? That's when you're first setting up your Docker Enterprise Container Cloud environment or system. What we're going to see next is creating child clusters, and this is what you're going to be doing over and over and over again: when you need to create a cluster for this dev team, or, you know, whatever other team it is that needs commodity Docker Enterprise clusters, you create these easily and often. So: once to set up Docker Enterprise Container Cloud; child clusters, which we're going to see next, we're going to do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a Docker Enterprise cluster for workloads as a child cluster under Docker Enterprise Container Cloud. >> Hello. In this demo we will cover the deployment experience of creating a new child cluster, the scaling of the cluster, and how to update the cluster when a new version is available. We begin the process by logging onto the UI as a normal user called Mary.
Let's go through the navigation of the UI so you can see how to switch projects — Mary only has access to Development — get a list of the available projects that you have access to, and see what clusters have been deployed at the moment: none yet. There are the SSH keys associated with Mary and her team, the cloud credentials that allow you to create and access the various clouds that you can deploy clusters to, and finally the different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simple: add an SSH key, give it a name, and copy and paste the public key into the upload-key block — or we can upload the key if we have the file available on our local machine. A simple process. So, to create a new cluster, we define the cluster, add manager nodes, and add worker nodes to the cluster. Again, very simply: we go to the Clusters tab, hit the Create Cluster button, and give the cluster a name. Then select the provider — we only have access to AWS in this particular deployment, so we'll stick to AWS — and select the region, in this case US West 1. Release version 5.7 is the current release, and we attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR IP address information. We could change this should we wish to; we'll leave it at the defaults for now. Then: which StackLight components would I like to deploy into my cluster? For this I'm enabling StackLight with logging, and I can set up the retention sizes and retention times and, even at this stage, add any custom alerts for the watchdogs. Consider email alerting, for which I will need my SMTP host details and authentication details, and Slack alerts. Now I'm defining the cluster. All that's happened is that the cluster has been defined; I now need to add machines to that cluster.
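The SSH-key step is the one piece you prepare outside the UI. A minimal sketch of generating a key pair whose public half gets pasted into the upload-key block (the file name and comment are arbitrary choices for illustration):

```shell
# Generate an Ed25519 key pair with no passphrase; the public half is what
# gets pasted into the UI so Mary can reach the cluster machines over SSH.
ssh-keygen -t ed25519 -f ./mary-key -N "" -C "mary@example.com"

# Print the public key to copy into the "upload key" dialog
cat ./mary-key.pub
```

The private half (`./mary-key`) stays on Mary's machine and is what she would pass to `ssh -i` when jumping onto a node later.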
I'll begin by clicking the Create Machine button within the cluster definition. Select Manager, and select the number of machines — three is the minimum. Select the instance size that I'd like to use from AWS and, very importantly, ensure I use the correct AMI for the region. I then decide on the root device size. There we go: my three machines are creating. I now need to add some workers to this cluster, so I go through the same process, this time selecting Worker. I'll just add two. Once again, the AMI is extremely important — it will fail if we don't pick the right AMI, for an Ubuntu machine in this case — and the deployment has started. We can go and check on the build status by going back to the Clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see "pending": the cluster is still in the process of being built. If we click on the events, we get a list of actions that have been completed as part of the setup of the cluster. So you can see here we've created the VPC, we've created the subnets, and we've created the internet gateway — a necessary AWS component — and we have no warnings at this stage. This will then run for a while. We're one minute in; we can click through and check the status of the machine builds individually, so we can check the machine info — details of the machines that we've assigned — and see any events pertaining to each machine, like this one. All normal: the Kubernetes components are waiting for the machines to start. Going back to Clusters — okay, right, we're moving ahead now. We can see it's in progress: five minutes in, a NAT gateway at this stage, and the machines have been built and assigned; they pick up their IPs. There we go — a machine has been created; see the event detail and the AWS ID for that machine. Now we're speeding things up a little bit.
This whole process, end to end, takes about fifteen minutes. Running the clock forward, you'll notice, as the machines continue to build, they go from "in progress" to "ready". As soon as we have "ready" on all three managers and both workers, we reach the point where the cluster itself is being configured. And there we go: the cluster has been deployed. So once the cluster is deployed, we can navigate around our environment. Clicking into Configure Cluster, we could modify the cluster, and we can get the endpoints for Alertmanager and see here that Grafana and Prometheus are still building in the background — but the cluster is available, and you would be able to put workloads on it. The next step is to download the kubeconfig so that I can put workloads on it: it's again the three little dots on the right for that particular cluster. I hit Download Kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right, now that the build has fully completed, we can check out the cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. Click the single sign-on button to use the SSO, and give Mary's password and username once again. This is an unlicensed cluster; we could license it at this point, or just skip it. And there we have the UCP dashboard — you can see it's been up for a little while and we have some data on it. Going back to the console, we can now go to Grafana, which has been automatically preconfigured for us. We can switch between and utilize a number of different dashboards that have already been instrumented within the cluster — for example, Kubernetes cluster information, namespaces, deployments, nodes. So we look at nodes.
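Once the kubeconfig is downloaded, using it is ordinary kubectl practice. A minimal sketch — the file name is just whatever the UI handed you, and the kubectl calls are commented out since they need the live child cluster:

```shell
# Point kubectl at the kubeconfig downloaded from the Container Cloud UI
# (file name is illustrative)
export KUBECONFIG="$PWD/mary-child-cluster-kubeconfig.yaml"

# Against the live cluster you would then verify the nodes and start deploying:
# kubectl get nodes                              # the managers and workers just built
# kubectl create deployment web --image=nginx    # a first trivial workload
```

Because `KUBECONFIG` is just an environment variable, you can keep one file per child cluster and switch between them per shell, which fits the "commodity clusters" model described earlier.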
We can get a view of the resource utilization of this cluster — there's very little running in it — and a general dashboard of the Kubernetes cluster. All of this is configurable: you can modify these for your own needs, or add your own dashboards, and they're scoped to the cluster, so they're available to all users who have access to this specific cluster. All right. To scale the cluster and add a node is as simple as the process of adding a node in the first place. We go to the cluster, go into the details for the cluster, and select Create Machine. Once again, we need to ensure that we put in the correct AMI and any other options we'd like. You can create different-sized machines, so it could be a larger node, or bigger disks. You'll see that the worker has been added, starting from the provisioning state, and shortly we'll see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, select the node we'd like to remove, and just hit Delete on that node. Worker nodes will be removed from the cluster using a cordon-and-drain method, to ensure that your workloads are not affected. Updating a cluster: when an update is available, the Update button will become available in the menu for that particular cluster, and it's as simple as clicking the button and validating which release you would like to update to. In this case the next available release is 5.7.1. Here I'm kicking off the update, and in the background we will cordon and drain each node, slowly going through the process of updating it, and the update will complete — depending on what the update is — as quickly as possible. There we go: the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt — in fact, two have completed already — and in a few minutes we'll see that the upgrade has been completed. There we go: upgrade done.
If your workloads are all built using proper cloud-native community standards, there will be no impact. >> Excellent. So at this point we've now got a cluster ready to start taking our Kubernetes workloads — we can start deploying our apps to that cluster. Watching that video, the thing that jumped out to me first was the inputs that go into defining this workload cluster. We have to make sure we're using an appropriate AMI — that kind of defines the substrate we're going to be deploying our cluster on top of — but there are very few requirements, as far as I could tell, on top of that AMI, because Docker Enterprise Container Cloud is going to bootstrap all the components that you need. So all we have is a really simple base box that we're deploying these things on top of. One thing that didn't get dug into too much in the video, but is sort of implied — Bruce, maybe you can comment on this — is that release Sean had to choose for his cluster when creating it. And that release was also the thing we had to touch when we wanted to upgrade the cluster. If you had really sharp eyes, you could see at the end there that when you're doing the release upgrade, it listed out a stack of components — Docker Engine, Kubernetes, Calico, all the different bits and pieces that go into one of these commodity clusters we deploy. And so, as far as I can tell, that's what we mean by a release in this sense, right? It's the validated stack of containerization and orchestration components that we've tested out and made sure work well together in production environments.
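The cordon-and-drain behavior described for node removal and upgrades is standard Kubernetes practice, and it's worth seeing what the platform is automating. A sketch expressed as a plain-kubectl helper — the node name, flags and function name are illustrative, and nothing runs against a cluster until you call it:

```shell
# Gracefully retire a node the same way the platform does: cordon, drain, delete.
# (Illustrative sketch -- Container Cloud performs these steps for you.)
retire_node() {
  local node="$1"
  kubectl cordon "$node"            # mark unschedulable; no new pods land here
  kubectl drain "$node" \
    --ignore-daemonsets \
    --delete-emptydir-data \
    --grace-period=60               # evict pods gracefully, honoring disruption budgets
  kubectl delete node "$node"       # finally remove it from the cluster
}

# Example, against a live cluster with kubectl configured:
# retire_node worker-2
```

Because `drain` respects PodDisruptionBudgets, a node removal that would leave a workload below its required replica count is refused — which is exactly the scenario Bruce describes a little later.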
>> Yeah, and that's really the focus of our effort: to ensure that any CVEs in any part of the stack are taken care of, that fixes are documented and upstreamed to the open source communities, and that we test for scalability, reliability and high-availability configuration for the clusters themselves — the hosts of your containers. And I think one of the key benefits we provide is the ability to let you know, online, "Hey, we've got an update for you, and it fixes something that maybe you had asked us to fix." That all comes to you online as you're managing your clusters, so you don't have to think about it. It just comes as part of the product. >> You just have to click on "Yes, please give me that update." And it's not just the individual components, but again, it's that validated stack, right? Not just that components X, Y and Z work, but that they all work together effectively — scalably, securely, reliably. Cool. So at that point, once we've created that workload child cluster, of course we bootstrap good old Universal Control Plane — Docker Enterprise — on top of it. Sean had the classic comment there, you know: yeah, you'll see a few warnings and errors or whatever when you're setting up UCP. Don't worry, right? Just let it do its job, and it will converge all its components after just a minute or two. We sped things up a little bit in that video — we didn't wait for the progress spinners to complete — but really, in real life, that whole process is quick; you spin up one of those clusters quite fast.
>>Yeah, and I think the thoroughness with which it goes through its process and retries — as was evident when we went through the initial video of the bootstrapping as well — the processes themselves are self-healing as they go. They will try and retry and wait for each event to complete properly, and once it has completed properly, then they go to the next step. >>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things down — don't do that. Just let it heal, let it take care of itself. That's the beauty of these managed solutions: they bake in a lot of subject matter expertise, right? The decisions being made by those containers as they're bootstrapping themselves reflect the expertise of the Mirantis crew that has been developing this tooling for years and years now. One cool thing there that I really appreciated, actually, that it adds on top of Docker Enterprise, is that automatic Grafana deployment as well. Docker Enterprise, as I think everyone knows, has had some fairly high-level statistics baked into its dashboards for years now. But our customers always wanted to double-click on that, to be able to go a little bit deeper, and Grafana really addresses that with its built-in dashboards. That's really nice to see. >>Yeah, and all of the alerts and data are actually captured in a Prometheus database underneath that you have access to, so you're able to add new alerts that then go out to, say, Slack and say, "Hi, you need to watch your disk space on this machine," those kinds of things. And this is especially helpful for folks who want to manage the application service layer but don't necessarily want to manage the operations side of the house.
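Bruce's example — a disk-space alert routed out to Slack — corresponds to a Prometheus Alertmanager route along these lines. This is a sketch: the receiver name, channel, and webhook URL are placeholders, not values from the product.

```yaml
# Minimal Alertmanager configuration fragment: route all alerts to a
# Slack receiver. Replace the api_url with your own incoming webhook.
route:
  receiver: slack-ops
receivers:
  - name: slack-ops
    slack_configs:
      - api_url: https://hooks.slack.com/services/...   # your Slack webhook
        channel: "#cluster-alerts"
```

The alert rules themselves (e.g. "disk usage above 90%") live in Prometheus; Alertmanager only decides where notifications go.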
So it gives them a tool set where they can easily say, "Here, can you watch these for us?" And Mirantis can actually help do that with you. So >>yeah, yeah — I mean, that's just another example of baking in that expert knowledge, right? So you can leverage it without a long runway of learning how to do that sort of thing; you get it out of the box right away. There was another thing, actually, that could slip by really quickly if you weren't paying close attention, but Shawn mentioned it in the video: when you use Docker Enterprise Container Cloud to scale your cluster — particularly pulling a worker out — it doesn't just tear that worker down and forget about it. It uses good Kubernetes best practices to cordon and drain the node. So you aren't going to disrupt your workloads; you're not going to have a bunch of containers instantly crash. You can really carefully manage the migration of workloads off that node. That's baked right into how Docker Enterprise Container Cloud handles cluster scaling. >>Right. And the Kubernetes scaling methodology is adhered to, with all of the proper techniques that ensure it will tell you: "Wait — you've got a container that actually needs three instances of itself, and you don't want to take that node out, because then you'd only be able to have two. We can't do that. We can't allow that." >>Okay, very cool. Further thoughts on this video, or should we go to the questions? >>Let's go to the questions >>that people have. There's one good one here, down near the bottom, regarding whether an API is available to do this. So in all these demos we're clicking through this web UI — yes, this is all API-driven. You could do all of this, automate all of this away, as part of your CI/CD pipeline. Absolutely — that's kind of the point.
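The scale-down behavior Bill described a moment ago — cordon first, then drain, rather than tearing the node down — can be sketched with standard kubectl. To keep the sketch runnable anywhere, this wrapper only prints the commands it would issue; drop the echoes to run them against a real cluster (the node name is a placeholder).

```shell
# Sketch of the cordon-and-drain sequence used when removing a worker.
drain_worker() {
  node="$1"
  # 1. Mark the node unschedulable, so no new pods are placed on it.
  echo "kubectl cordon ${node}"
  # 2. Evict running pods gracefully, honoring PodDisruptionBudgets;
  #    daemonset pods are skipped since they cannot be rescheduled.
  echo "kubectl drain ${node} --ignore-daemonsets --delete-emptydir-data"
}

drain_worker worker-3
```

Only after the drain completes is the machine itself removed, which is why workloads migrate cleanly instead of crashing.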
I keep calling them commodity clusters. What I mean by that is clusters that you can create and throw away easily and automatically. So everything you see in these demos is exposed via API. >>Yeah, and in addition, through the standard kubectl CLI as well. So if you're not a programmer but you still want to do some scripting to set things up and deploy your applications, you can use the standard tool sets that are available to accomplish that. >>There's a good question on scale here. So, just how many clusters, and what sort of scale of deployments, can this kind of thing support? Our engineers report back that in practice we've done up to as many as two hundred clusters, and we've deployed with two hundred fifty nodes in a cluster. So we're talking, like I said, hundreds of nodes, hundreds of clusters managed by Docker Enterprise Container Cloud. And those downstream clusters are, of course, subject to the usual constraints for Kubernetes — like the default limit of around a hundred pods per node, or something like that. There are a few limits on how many pods you can run on a given cluster, and those come to us not from Docker Enterprise Container Cloud but from the underlying Kubernetes distribution.
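The per-node pod cap Bill mentions is a stock Kubernetes setting, not something Container Cloud imposes: in an unmodified distribution it is the kubelet's `maxPods` value, which defaults to 110 and can be raised per node.

```yaml
# Standard kubelet configuration fragment (upstream Kubernetes, not
# Container Cloud specific): the default ceiling on pods per node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
```

Raising it trades density against per-node resource pressure, so it is usually left at the default.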
Then we use that to deploy one child clustering work, classroom, uh, for more sophisticated deployments where we might want to manage child clusters across multiple regions. We're gonna add another layer into our architectural we're gonna add in regional cluster management. So this idea you're gonna have the single management cluster that we started within the first video. On the next video, we're gonna learn how to spin up a regional clusters, each one of which would manage, for example, a different AWS uh, US region. So let me just pull out the video for that bill. We'll check it out for me. Mhm. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectures of you how to set up the management environment, prepare for the deployment deployment overview and then just to prove it, to play a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case AWS on the LCN components on the D you speak Cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need a regional cluster? Different platform architectures, for example aws who have been stack even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager we also Machine Manager were held. Mandel are managed as well as the actual provider logic. Mhm. Okay, we'll begin by logging on Is the default administrative user writer. 
Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. And you see, it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster similar to what we're going to deploy now, also only has three managers once again, no workers. But as a comparison, here's a child cluster This one has three managers, but also has additional workers associate it to the cluster. All right, we need to connect. Tell bootstrap note. Preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine. All right. A few things we have to do to make sure the environment is ready. First thing we're going to see go into route. We'll go into our releases folder where we have the kozberg struck on. This was the original bootstrap used to build the original management cluster. Yeah, we're going to double check to make sure our cube con figures there once again, the one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything is working. A condom. No damages waken access to a swell. Yeah. Next we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I. So that's found under the templates AWS directory. We don't need to edit anything else here. But we could change items like the size of the machines attempts. We want to use that The key items to ensure where you changed the am I reference for the junta image is the one for the region in this case AWS region for utilizing this was no construct deployment. 
We have to make sure we're pointing in the correct open stack images. Yeah, okay. Set the correct and my save file. Now we need to get up credentials again. When we originally created the bootstrap cluster, we got credentials from eight of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we're just exporting the AWS access key and I d. What's important is CAAs aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our cube conflict that we want to use for the management cluster. When we looked at earlier Yeah, now we're exporting that. Want to call the cluster region Is Frank Foods Socrates Frankfurt yet trying to use something descriptive It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed. Um, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at W s and waiting for that bastard and no to get started. Please. The best you nerd Onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy. Dr. Enterprise, this is probably the longest face. Yeah, seeing the second that all the nerds will go from the player deployed. Prepare, prepare. Yeah, You'll see their status changes updates. He was the first night ready. Second, just applying second already. Both my time. No waiting from home control. Let's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running the date of the U. S. All my stay. Ah, now we're playing Stockland. Switch over is done on. Done. 
Now I will build a child cluster in the new region very, very quickly to find the cluster will pick. Our new credential has shown up. We'll just call it Frankfurt for simplicity a key and customs to find. That's the machine. That cluster stop with three managers. Set the correct Am I for the region? Yeah, Do the same to add workers. There we go test the building. Yeah. Total bill of time Should be about fifteen minutes. Concedes in progress. It's going to expect this up a little bit. Check the events. We've created all the dependencies, machine instances, machines, a boat shortly. We should have a working cluster in Frankfurt region. Now almost a one note is ready from management. Two in progress. Yeah, on we're done. Clusters up and running. Yeah. >>Excellent. So at this point, we've now got that three tier structure that we talked about before the video. We got that management cluster that we do strapped in the first video. Now we have in this example to different regional clustering one in Frankfurt, one of one management was two different aws regions. And sitting on that you can do Strap up all those Doctor enterprise costumes that we want for our work clothes. >>Yeah, that's the key to this is to be able to have co resident with your actual application service enabled clusters the management co resident with it so that you can, you know, quickly access that he observation Elson Surfboard services like the graph, Ana and that sort of thing for your particular region. A supposed to having to lug back into the home. What did you call it when we started >>the mothership? >>The mothership. Right. So we don't have to go back to the mother ship. We could get >>it locally. Yeah, when, like to that point of aggregating things under a single pane of glass? That's one thing that again kind of sailed by in the demo really quickly. But you'll notice all your different clusters were on that same cluster. Your pain on your doctor Enterprise Container Cloud management. 
Uh, court. Right. So both your child clusters for running workload and your regional clusters for bootstrapping. Those child clusters were all listed in the same place there. So it's just one pane of glass to go look for, for all of your clusters, >>right? And, uh, this is kind of an important point. I was, I was realizing, as we were going through this. All of the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer of everything so that you only have managers, you don't have workers and that at the child cluster layer below the regional or the management cluster itself, that's where you have the worker nodes. And those are the ones that host the application services in that three tiered architecture that we've now defined >>and another, you know, detail for those that have sharp eyes. In that video, you'll notice when deploying a child clusters. There's not on Lee. A minimum of three managers for high availability management cluster. You must have at least two workers that's just required for workload failure. It's one of those down get out of work. They could potentially step in there, so your minimum foot point one of these child clusters is fine. Violence and scalable, obviously, from a >>That's right. >>Let's take a quick peek of the questions here, see if there's anything we want to call out, then we move on to our last want to my last video. There's another question here about, like where these clusters can live. So again, I know these examples are very aws heavy. Honestly, it's just easy to set up down on the other us. We could do things on bare metal and, uh, open stack departments on Prem. That's what all of this still works in exactly the same way. >>Yeah, the, uh, key to this, especially for the the, uh, child clusters, is the provision hers? Right? 
See you establish on AWS provision or you establish a bare metal provision or you establish a open stack provision. Or and eventually that list will include all of the other major players in the cloud arena. But you, by selecting the provision or within your management interface, that's where you decide where it's going to be hosted, where the child cluster is to be hosted. >>Speaking off all through a child clusters. Let's jump into our last video in the Siri's, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. So why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent. Provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and supports high performance workloads like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another opera visor. Lay it between so continue on the theme Why Communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p G A s G p us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. Uh, we can handle utilization in the scheduling. Better Onda we increase the performances and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project will add the bare metal hosts, including the host name. I put my credentials I pay my address the Mac address on then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. So well again. Was the operator thing. 
We'll go and we'll create a project for our machines to be a member off helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. So the first thing we had to be in post, Yeah, many of the machine A name. Anything you want, que experimental zero one. Provide the IAP my user name type my password. Okay. On the Mac address for the common interface with the boot interface and then the i p m I i p address These machines will be at the time storage worker manager. He's a manager. Yeah, we're gonna add a number of other machines on will. Speed this up just so you could see what the process looks like in the future. Better discovery will be added to the product. Okay. Okay. Getting back there we have it are Six machines have been added, are busy being inspected, being added to the system. Let's have a look at the details of a single note. Yeah, you can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. I see. Okay, let's go and create the cluster. Yeah, So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So we'll credit custom. We'll give it a name, but if it were selecting bare metal on the region, we're going to select the version we want to apply. No way. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of dress range on update the address range that we want to use for the cluster. Check that the sea ideal blocks for the Cuban ladies and tunnels are what we want them to be. Enable disabled stack light. Yeah, and soothe stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here. We're focused on building communities clusters, so we're gonna put the count of machines. You want managers? 
We're gonna pick the label type manager and create three machines is the manager for the Cuban eighties. Casting Okay thing. We're having workers to the same. It's a process. Just making sure that the worker label host level are I'm sorry. On when Wait for the machines to deploy. Let's go through the process of putting the operating system on the notes validating and operating system deploying doctor identifies Make sure that the cluster is up and running and ready to go. Okay, let's review the bold events waken See the machine info now populated with more information about the specifics of things like storage and of course, details of a cluster etcetera. Yeah, yeah, well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build? And that brings us to the end of this particular demo. You can see the process is identical to that of building a normal child cluster we got our complaint is complete. >>All right, so there we have it, deploying a cluster to bare metal. Much the same is how we did for AWS. I guess maybe the biggest different stepwise there is there is that registration face first, right? So rather than just using AWS financials toe magically create PM's in the cloud. You got a point out all your bare metal servers to Dr Enterprise between the cloud and they really come in, I guess three profiles, right? You got your manager profile with a profile storage profile which has been labeled as allocate. Um, crossword cluster has appropriate, >>right? And And I think that the you know, the key differentiator here is that you have more physical control over what, uh, attributes that love your cat, by the way, uh, where you have the different attributes of a server of physical server. 
So you can, uh, ensure that the SSD configuration on the storage nodes is gonna be taken advantage of in the best way the GP use on the worker nodes and and that the management layer is going to have sufficient horsepower to, um, spin up to to scale up the the environments, as required. One of the things I wanted to mention, though, um, if I could get this out without the choking much better. Um, is that Ah, hey, mentioned the load balancer and I wanted to make sure in defining the load balancer and the load balancer ranges. Um, that is for the top of the the cluster itself. That's the operations of the management, uh, layer integrating with your systems internally to be able to access the the Cube Can figs. I I p address the, uh, in a centralized way. It's not the load balancer that's working within the kubernetes cluster that you are deploying. That's still cube proxy or service mesh, or however you're intending to do it. So, um, it's kind of an interesting step that your initial step in building this, um and we typically use things like metal L B or in gen X or that kind of thing is to establish that before we deploy this bear mental cluster so that it can ride on top of that for the tips and things. >>Very cool. So any other thoughts on what we've seen so far today? Bruce, we've gone through all the different layers. Doctor enterprise container clouds in these videos from our management are regional to our clusters on aws hand bear amount, Of course, with his dad is still available. Closing thoughts before we take just a very short break and run through these demos again. >>You know, I've been very exciting. Ah, doing the presentation with you. I'm really looking forward to doing it the second time, so that we because we've got a good rhythm going about this kind of thing. So I'm looking forward to doing that. 
But I think that the key elements of what we're trying to convey to the folks out there in the audience that I hope you've gotten out of it is that will that this is an easy enough process that if you follow the step by steps going through the documentation that's been put out in the chat, um, that you'll be able to give this a go yourself, Um, and you don't have to limit yourself toe having physical hardware on prim to try it. You could do it in a ws as we've shown you today. And if you've got some fancy use cases like, uh, you you need a Hadoop And and, uh, you know, cloud oriented ai stuff that providing a bare metal service helps you to get there very fast. So right. Thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out. So, like I said we're going to take a very short, like, three minute break here. Uh, take the opportunity to let your colleagues know if they were in another session or they didn't quite make it to the beginning of this session. Or if you just want to see these demos again, we're going to kick off this demo. Siri's again in just three minutes at ten. Twenty five a. M. Pacific time where we will see all this great stuff again. Let's take a three minute break. I'll see you all back here in just two minutes now, you know. Okay, folks, that's the end of our extremely short break. We'll give people just maybe, like one more minute to trickle in if folks are interested in coming on in and jumping into our demo. Siri's again. Eso For those of you that are just joining us now I'm Bill Mills. I head up curriculum development for the training team here. Moran Tous on Joining me for this session of demos is Bruce. Don't you go ahead and introduce yourself doors, who is still on break? That's cool. We'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? Okay, >>Very well. So let's kick off our second session here. I e just interest will feel for you. Thio. 
Let it run over here. >>Alright. Hi. Bruce Matthews here. I'm the Western Regional Solutions architect for Marantz. Use A I'm the one with the gray hair and the glasses. Uh, the handsome one is Bill. So, uh, Bill, take it away. >>Excellent. So over the next hour or so, we've got a Siris of demos that's gonna walk you through your first steps with Dr Enterprise Container Cloud Doctor Enterprise Container Cloud is, of course, Miranda's brand new offering from bootstrapping kubernetes clusters in AWS bare metal open stack. And for the providers in the very near future. So we we've got, you know, just just over an hour left together on this session, uh, if you joined us at the top of the hour back at nine. A. M. Pacific, we went through these demos once already. Let's do them again for everyone else that was only able to jump in right now. Let's go. Our first video where we're gonna install Dr Enterprise container cloud for the very first time and use it to bootstrap management. Cluster Management Cluster, as I like to describe it, is our mother ship that's going to spin up all the other kubernetes clusters, Doctor Enterprise clusters that we're gonna run our workloads on. So I'm gonna do >>I'm so excited. I can hardly wait. >>Let's do it all right to share my video out here. Yeah, let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster on the first regional clusters. To support AWS deployments, the management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case AWS and the Elsom components on the UCP cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap note on its dependencies on handling the download of the bridge struck tools. 
The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the ideas environment, the fourth configuring the deployment, defining things like the machine types on the fifth phase, Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node. Just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now, we're just checking through aws to make sure that the account we want to use we have the correct credentials on the correct roles set up on validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just gonna check that we can from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next we're going to run it. Yeah, I've been deployed changing into that big struck folder, just making see what's there right now we have no license file, so we're gonna get the license filed. Okay? Get the license file through more antis downloads site signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. Yeah, see what the follow is there? Uh huh. Once again, checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. Alright. Next big step is violating all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. 
This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running in AWS policy create. So it is part of that is creating our food trucks script. Creating this through policy files onto the AWS, just generally preparing the environment using a cloud formation script, you'll see in a second, I'll give a new policy confirmations just waiting for it to complete. And there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created. Good day. I am console. Go to the new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media access Key I. D and the secret access key, but usually then exported on the command line. Okay, Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Okay, thanks. Is key. So you could X key Right on. Let's kick it off. So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you. Um, as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the AWS side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS at the end of the process. That cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Yeah, okay. Local clusters boat. Just waiting for the various objects to get ready. Standard communities objects here. Yeah, you mentioned Yeah. So we've speed up this process a little bit just for demonstration purposes. 
Okay, there we go. So first note is being built the bastion host just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for AWS to create the instance. Okay. Yeah. Beauty there. Movies. Okay, sketch. Hello? Yeah, Okay. Okay. On. There we go. Question host has been built on three instances for the management clusters have now been created. Okay, We're going through the process of preparing. Those nodes were now copying everything over. See that scaling up of controllers in the big strapped cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Right? Okay. Just waiting for key. Clark. Uh huh. So finish up. Yeah. No. Now we're shutting down. Control this on the local bootstrap node on preparing our I. D. C configuration, fourth indication. So once this is completed, the last phase will be to deploy stack light into the new cluster, that glass on monitoring tool set, Then we go stack like deployment has started. Mhm. Coming to the end of the deployment mountain. Yeah, they were cut final phase of the deployment. And we are done. Yeah, you'll see. At the end, they're providing us the details of you. I log in. So there's a key Clark log in. Uh, you can modify that initial default possible is part of the configuration set up where they were in the documentation way. Go Councils up way can log in. Yeah. Yeah. Thank you very much for watching. >>All right, so at this point, what we have we got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there to make sure everyone caught that, uh, as advertised. That's darker. Enterprise container cloud management cluster. That's not rework loans. are gonna go right? 
That is the tool that you're going to use to start spinning up downstream commodity Docker Enterprise clusters. >> And the seed host that we're talking about — the kind cluster — actually doesn't have to exist after the bootstrap succeeds. It sort of copies itself from the seed host to the targets in AWS, spins that up, boots the actual clusters, and then it goes away, because it's no longer necessary. >> So there aren't really any requirements on that bootstrapping node, hardly, right? It just has to be able to reach AWS — hit that API to spin up those EC2 instances — because, as you just said, it's just a Kubernetes-in-Docker cluster, and that bootstrap node is just going to get torn down after the setup finishes. You no longer need it; everything you're going to do, you're going to drive from the single pane of glass provided to you by your management cluster, Docker Enterprise Container Cloud. Another thing that I think is sort of interesting there is that the config is fairly minimal. Really, you just need to provide it things like the AWS region and the AMI, and that's what's going to spin up that management cluster. >> Right. There's a YAML file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have defaults set. But you then have the option of going in and defining a different AMI for a different region, for example, or a different size of instance from AWS. >> One thing that people often ask about is the cluster footprint. In that example you saw, they were spinning up a three-manager management cluster as mandatory, right? No single-manager setup at all — we want high availability for Docker Enterprise Container Cloud management. So again, just to make sure everyone is on board with the lifecycle stage that we're at right now. 
That's the very first thing you're going to do to set up Docker Enterprise Container Cloud, and you're going to do it, hopefully, exactly once. Now you've got your management cluster running, and you're going to use that to spin up all your other workload clusters, day to day, as needed. Why don't we have a quick look at the questions, and then let's take a look at spinning up some of those child clusters. >> Okay, I think they've actually been answered. >> Yeah, for the most part. One thing I'll point out that came up again — Dale helpfully pointed it out earlier — is that if you want to try any of this stuff yourself, it's all in the docs. Have a look at the chat; there are links to step-by-step instructions for each and every thing we're doing here today. I really encourage you to do that — taking this out for a drive on your own really helps internalize these ideas. So after the launchpad today, please give this stuff a try on your own machines. Okay, so at this point, like I said, we've got our management cluster. We're not going to run workloads there; we're going to start creating child clusters — that's where all of our workloads are going to go, and that's what we're going to learn how to do in our next video. Cue that up for us. >> I so love Shawn's voice. >> Yeah, I'd watch him read the phone book. >> All right, here we go. Now that we have our management cluster set up, let's create our first child workload cluster. >> Hello. In this demo we will cover the deployment experience of creating a new child cluster, scaling the cluster, and updating the cluster when a new version is available. We begin the process by logging onto the UI as a normal user called Mary. Let's go through the navigation of the UI. You can switch projects — Mary only has access to development — and get a list of the available projects that you have access to. 
We can see what clusters have been deployed at the moment; the SSH keys associated with Mary and her team; the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to; and finally the different releases that are available to us. We can also switch from dark mode to light mode, depending on your preferences. Right, let's set up some SSH keys for Mary so she can access the nodes and machines. Again, very simply: add an SSH key, give it a name, and copy and paste the public key into the upload key block — or we can upload the key if we have the file available on our machine. A very simple process. So, to create a new cluster, we define the cluster, add management nodes, and add worker nodes to the cluster. Again, very simply: we go to the clusters tab, hit the create cluster button, and give the cluster a name. Then select the provider — we only have access to AWS in this particular deployment, so we'll stick to AWS — and select the region, in this case US West 1. Release version 5.7 is the current release, and we attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR and IP address information. We can change this should we wish to; we'll leave it default for now. And then: which components of StackLight would I like to deploy into my cluster? For this, I'm enabling StackLight and logging, and I can set the retention sizes and retention times, and even at this stage add any custom alerts for the watchdogs — configure email alerting, for which I will need my smarthost details and authentication details, and Slack alerts. Now I've defined the cluster — all that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Select manager, and select the number of machines — three is the minimum. 
Select the instance size that I'd like to use from AWS and, very importantly, ensure I use the correct AMI for the region. I can then decide on the root device size. There we go — my three machines are busy creating. I now need to add some workers to this cluster, so I go through the same process, this time just selecting worker. I'll just add two. Once again, the AMI is extremely important; the deployment will fail if we don't pick the right AMI — for an Ubuntu machine, in this case. And the deployment has started. We can go and check on the build status by going back to the clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see "pending" there — the cluster is still in the process of being built. If we click on the events, we get a list of actions that have been completed as part of the setup of the cluster. So you can see here: we've created the VPC, we've created the subnets, and we've created the Internet gateway — the necessary pieces on the AWS side — and we have no warnings at this stage. Okay, this will then run for a while. We're one minute in; we can click through and check the status of the machine builds individually, so we can check the machine info, details of the machines that we've assigned, and see any events pertaining to each machine. Errors like this one are normal — the Kubernetes components are just waiting for the machines to start. Go back to the clusters. Okay, we're moving ahead now; we can see it's in progress, five minutes in — new NAT gateway — and at this stage the machines have been built and assigned their IPs on the AWS side. There we go: a machine has been created; see the event detail and the AWS ID for that machine. We're speeding things up a little bit; this whole process end to end takes about fifteen minutes. As we run the clock forward, you'll notice the machines continue to build through "in progress". 
We'll go from "in progress" to "ready". As soon as we're ready on all three machines — the managers and both workers — we can go on, and we can see we've reached the point where the cluster itself is being configured. And there we go: the cluster has been deployed. Once the cluster is deployed, we can navigate around our environment. Looking into configure cluster, we can modify the cluster and get the endpoints for Alertmanager. You can see here that Grafana and Prometheus are still building in the background, but the cluster is available, and you would be able to put workloads on it at this stage. To download the kubeconfig so that I can put workloads on it, it's again the three little dots on the right for that particular cluster. I hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right, now that the build is fully completed, we can check out cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. Click the "sign in with Keycloak" button to use SSO, and we give Mary's password and username once again. This is an unlicensed cluster; we could license it at this point, or just skip it — and there we have the UCP dashboard. You can see it's been up for a little while, and we have some data on the dashboard. Going back to the console, we can now go to Grafana, which has been automatically preconfigured for us. We can switch between and utilize a number of different dashboards that have already been instrumented within the cluster — so, for example, Kubernetes cluster information, the namespaces, deployments, nodes. If we look at nodes, we get a view of the resource utilization of this cluster — there's very little running in it — and a general dashboard of the Kubernetes cluster. 
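The kubeconfig step above is worth making concrete: once the file is downloaded from the UI, pointing `kubectl` at the new child cluster is a single export. The filename below is hypothetical — use whatever name the download gave you:

```shell
# Point kubectl at the downloaded kubeconfig (hypothetical filename).
export KUBECONFIG="$PWD/kubeconfig-demo-cluster.yaml"

# From here, standard kubectl commands target the child cluster, e.g.:
# kubectl get nodes
# kubectl apply -f my-workload.yaml
```

Because `KUBECONFIG` is just an environment variable, you can keep one kubeconfig per child cluster and switch between them per shell session.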
All of this is configurable: you can modify these dashboards for your own needs, or add your own dashboards and scope them to the cluster so they're available to all users who have access to that specific cluster. All right — scaling the cluster and adding a node is as simple as the process of adding a node in the first place. We go to the cluster, go into the details for the cluster, and select create machine. Once again, we need to ensure we put the correct AMI in, plus any other options we'd like. You can create different-sized machines, so it could be a larger node with bigger root disks. You'll see the worker has been added in the provisioning state, and shortly we'll see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, select the node we would like to remove, and just hit delete on that node. Worker nodes are removed from the cluster using a cordon-and-drain method, to ensure that your workloads are not affected. Updating a cluster: when an update is available, the update button becomes available in the menu for that particular cluster. It's as simple as clicking the button and validating which release you would like to update to — in this case, the available release is 5.7.1. I confirm, kicking off the update, and in the background we cordon and drain each node and slowly go through the process of updating it. The update completes, depending on what the update is, as quickly as possible. There we go: the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt — two, in fact; one has completed already. And in a few minutes we'll see that the upgrade has been completed. There we go — done. If your workloads are built using proper cloud-native Kubernetes standards, there will be no impact. >> All right, there we have it. 
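The cordon-and-drain behavior described above is standard Kubernetes node-maintenance practice, which the platform automates for you. As a rough sketch of what that automation corresponds to in plain `kubectl` terms (the node name is hypothetical):

```shell
# What the platform automates on node removal or update, sketched with
# standard kubectl commands. Hypothetical node name:
NODE="demo-worker-2"

# Mark the node unschedulable, then evict its pods so workloads
# reschedule elsewhere before the node is touched:
# kubectl cordon "$NODE"
# kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# The node is then rebuilt or deleted; if it comes back after an
# update, it is made schedulable again:
# kubectl uncordon "$NODE"
```

This is why well-behaved, cloud-native workloads (multiple replicas, disruption budgets) see no impact during scale-down or upgrades: pods are evicted and rescheduled before each node goes away.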
We've got our first workload cluster spun up and managed by Docker Enterprise Container Cloud. I loved Shawn's classic warning there: when you're spinning up an actual Docker Enterprise deployment, you'll see little errors and warnings popping up. Just don't touch it — leave it alone and let Docker Enterprise's self-healing properties take care of all those very transient, temporary glitches. They resolve themselves and leave you with a functioning workload cluster within minutes. >> And if you think about it, that video was not very long at all, and that's how long it would take you if someone came to you and said, "Hey, can you spin up a Kubernetes cluster for development team A over here?" It literally would take you a few minutes to accomplish that. And that was with AWS, obviously, which is sort of a transient resource in the cloud — but you could do exactly the same thing with resources on-prem, or physical resources, and we'll be going through that later in the process. >> Yeah, absolutely. One thing that is present in that demo, but that I'd like to highlight a little bit more because it just kind of glides by, is this notion of a cluster release. When Shawn was creating that cluster, and also when he was upgrading that cluster, he had to choose a release. The demo didn't really explain it — what does that mean? Well, in Docker Enterprise Container Cloud we have release numbers that capture the entire stack of containerization tools that we'll be deploying to that workload cluster. That's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine — all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production Kubernetes-adopting enterprise environments. >> Yep. From the bottom of the stack to the top, we actually test it for scale. 
We test it for CVEs, test it for all of the various things that would result in issues with you running your application services. And I've got to tell you, from having managed Kubernetes deployments and things like that: if you're the one doing it yourself, it can get rather messy. So this makes it easy. >> Bruce, you were saying a second ago that it'd take you at least fifteen minutes to install your release cluster. Well, sure — but what about all the other bits and pieces you need? It's not just about pressing the button to install it, right? It's making the right decisions about which components work well and are best tested to be successful working together as a stack. This release mechanism in Docker Enterprise Container Cloud lets us package up that expert knowledge and make it available in a really straightforward fashion: a series of pre-configured release numbers. And, Bruce, as you were pointing out earlier, these get delivered to us as updates in a kind of transparent way. When Shawn wanted to update that cluster, a little "update cluster" button appeared when an update was available. All you've got to do is click it; it tells you, "Here's your new stack of Kubernetes components," and it goes ahead and bootstraps those components for you. >> Yeah, it actually even displays a little header at the top of the screen that says you've got an update available — do you want me to apply it? >> Absolutely. Another couple of cool things I think are easy to miss in that demo: I really like the onboard Grafana that comes along with this stack. We've had Prometheus metrics in Docker Enterprise for years and years now, but they were fairly high level in previous versions of Docker Enterprise. Having those detailed dashboards that Grafana provides, I think, is a great value add. 
People always wanted to be able to zoom in a little bit on those cluster metrics, and Grafana provides that out of the box for us. >> Yeah. The joining of the Mirantis and Docker teams actually allowed us to take the best of what Mirantis had in the OpenStack environment for monitoring, logging, and alerting, and to do that integration in a very short period of time — so now we've got it straight across the board for both the Kubernetes world and the OpenStack world, using the same tool sets. >> One other thing I want to point out about that demo, which I think there were some questions about last go-round: that demo was all about creating a managed workload cluster. The Docker Enterprise Container Cloud manager used those AWS credentials we provisioned it with to actually create new EC2 instances, install Docker Engine, install Docker Enterprise — all that stuff — on top of those fresh new VMs, created and managed by Docker Enterprise Container Cloud. There's nothing unique to AWS about that; you can do it on OpenStack and on bare metal as well. There's another flavor here, though — a way to do this for all of our long-time Docker Enterprise customers that have been running Docker Enterprise for years and years. If you've got existing UCP endpoints — existing Docker Enterprise deployments — you can plug those in to Docker Enterprise Container Cloud and use it to manage those pre-existing workload clusters. You don't always have to bootstrap straight from Docker Enterprise Container Cloud; plugging in external clusters is fine. >> Yep. The kubeconfig elements of the UCP environment — the bundling capability — actually give us a very straightforward methodology, and there are instructions on our website for exactly how to bring in and import a UCP cluster. 
So it makes it very convenient for our existing customers to take advantage of this new release. >> Absolutely. Cool — any more thoughts on this one before we jump into the next video? >> I think we should press on. >> Time marches on here, so let's carry on. Just to recap where we are right now: in the first video we created a management cluster, which is what we use to create all our downstream workload clusters — and that's what we did in this video. This is maybe the simplest architecture, because it does everything in one region on AWS. A pretty common use case, though, is wanting to spin up workload clusters across many regions. To do that, we're going to add a third layer in between the management and workload cluster layers: our regional cluster managers. This is going to be a regional management cluster that exists per region, and those regional managers will then be the ones responsible for spinning up workload clusters across all those different regions. Let's see it in action in our next video. >> Hello. In this demo we will cover the deployment of an additional regional management cluster. We'll include a brief architectural overview, how to set up the management environment and prepare for the deployment, a deployment overview, and then, just to prove it, we'll deploy a regional child cluster. So, looking at the overall architecture: the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider — in this case AWS — and the LCM components, and the child cluster is the cluster or clusters being deployed and managed. Okay, so why do you need a regional cluster? 
To support different platform architectures — for example AWS, OpenStack, even bare metal; to simplify connectivity across multiple regions; to handle complexities like VPNs or one-way connectivity through firewalls; and also to help with availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, with their components, including items like the LCM cluster manager and the machine manager, how Helm-managed components are handled, as well as the actual provider logic. Okay, we'll begin by logging on as the default administrative user. Once we're in, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the management cluster, which is the master controller. You can see it only has three nodes — three managers, no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers, once again no workers. As a comparison, here's a child cluster: this one has three managers but also has additional workers associated with the cluster. All right, we need to connect to the bootstrap node — preferably the same node that was used to create the original management cluster. Ours is on AWS, but it could be any machine. A few things we have to do to make sure the environment is ready. First, we sudo to root. Then we go into our releases folder, where we have the kaas-bootstrap directory — the original bootstrap used to build the original management cluster. We double-check that our kubeconfig is there — the one created after the original cluster was built — and that it is the correct one and does point to the management cluster. We're also checking that we can reach the images, that everything's working, and that we can download our images and access them as well. 
Next, we're going to edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI. That's found under the templates/aws directory. We don't need to edit anything else here, though we could change items like the size of the machines or the instance types we want to use. The key item to change is the AMI reference, so that the Ubuntu image is the one for the region — in this case, the AWS region we're utilizing. If this were an OpenStack deployment, we would have to make sure we were pointing at the correct OpenStack images. Okay: set the correct AMI and save the file. We also need to set up credentials again. When we originally created the bootstrap cluster, we got credentials from AWS; if we hadn't done this, we would need to go through the AWS setup. So we just export the AWS access key ID and secret key. What's important is that KAAS_AWS_ENABLED equals true. Now we set the region for the new regional cluster — in this case, Frankfurt — and export the kubeconfig that we want to use for the management cluster we looked at earlier. Then we export what we want to call the cluster: the region is Frankfurt, so the cluster is called Frankfurt — try to use something descriptive that's easy to identify. And after this, we just run the bootstrap script, which completes the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster's — there are fewer components to be deployed — but to make it watchable, we've sped it up. So: we're preparing our bootstrap cluster on the local bootstrap node. Almost ready. We've started preparing the instances at AWS and are waiting for the bastion node to get started. There's the bastion node, and we're also starting to build the actual management machines. They're now provisioning, and we've reached the point where they're actually starting to deploy Docker Enterprise. 
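The environment setup narrated above amounts to a handful of exports before re-running the bootstrap script. The sketch below reconstructs the variable names from the narration (`KAAS_AWS_ENABLED` is spoken in the demo; the others are plausible readings) — verify the exact names against the product documentation, and treat the values as placeholders:

```shell
# Reconstructed from the narration -- verify variable names in the docs.
export KAAS_AWS_ENABLED=true                     # "kaas aws enabled equals true"
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"  # placeholder credentials
export AWS_SECRET_ACCESS_KEY="placeholder-secret"

export REGION="eu-central-1"                     # Frankfurt
export KUBECONFIG="$PWD/kubeconfig"              # points at the management cluster
export REGIONAL_CLUSTER_NAME="frankfurt"         # something descriptive

# Then run the bootstrap script to deploy the regional cluster
# (script/target names are illustrative):
# ./bootstrap.sh deploy_regional
```

Because the kubeconfig points at the existing management cluster, the script registers the new regional cluster there rather than creating a second management layer.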
This is probably the longest phase. We'll see in a second that all the nodes go from "deploy" to "prepare", and we'll see their status change as things update: the first node is ready, the second is just applying, and one after another they all become ready. Now we're moving the management of the cluster from the bootstrap instance into the new cluster; it's running that move for us. Almost there. Now we're deploying StackLight — and that's done. And done. Now we'll build a child cluster in the new region, very quickly. We define the cluster — our new credential has shown up, and we'll just call it Frankfurt for simplicity — add the SSH key, and the cluster is defined. Then the machines: the cluster starts with three managers, and we set the correct AMI for the region. Same again to add workers. There we go — that's building. Total build time should be about fifteen minutes. You can see it's in progress; we'll speed this up a little bit. Check the events: we've created all the dependencies and machine instances, and the machines are building. Shortly we should have a working cluster in the Frankfurt region. Almost there — one node is ready, two in progress... and we're done. The cluster is up and running. >> Excellent, there we have it. We've got our three-layered Docker Enterprise Container Cloud structure in place now, with our management cluster — in which we bootstrap everything else — our regional clusters, which manage individual AWS regions, and the child clusters sitting underneath them. >> Yeah, and you can actually see in the hierarchy the advantages that presents for folks who have multiple geographic locations where they'd like to distribute their clusters, so that you can access them more readily, co-resident with your development teams. And one of the other things I think is really unique about it is that we provide that same operational support system capability throughout. 
So you've got StackLight monitoring the StackLight that's monitoring the StackLight, down to the actual child clusters — all through that single pane of glass that shows you all your different clusters, whether they're workload clusters like the child clusters or regional clusters managing different regions. Cool. All right, well, time marches on, folks. We've only got a few minutes left, and I've got one more video — our last video for the session. We're going to walk through standing up a child cluster on bare metal. Everything we've seen so far has been AWS-focused, just because it's easy to demo on AWS, but we don't want to leave you with the impression that that's all we do: we cover AWS, bare metal, and OpenStack deployments as well in Docker Enterprise Container Cloud. Let's see it in action with a bare metal child cluster. >> We are on the home stretch. >> Right. >> Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So, why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent; it provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI; and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things by ensuring we don't need to add the complexity of another hypervisor layer in between. So, continuing on the theme — why Kubernetes on bare metal? Again: no hypervisor overhead, no virtualization overhead, and direct access to hardware items like FPGAs and GPUs. We can be much more specific about the resources required on the nodes, with no need to cater for additional overhead, and we can handle utilization and scheduling better. 
We increase the performance and simplicity of the entire environment, as we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project, add the bare metal hosts — including the hostname, IPMI credentials, IPMI address, and MAC address — and then provide a machine-type label to determine what type of machine it is relative to its use. Okay, let's get started. Logged in as the operator, we'll create a project for our machines to be members of; that helps with scoping later on, for security. Then I begin the process of adding machines to that project. For the first host we give the machine a name — anything you want, in this case "bare metal zero one" — provide the IPMI username and password, the MAC address for the PXE boot interface, and then the IPMI IP address. These machines come in two types — storage/worker and manager — and this one's a manager. We're going to add a number of other machines, and we'll speed this up just so you can see what the process looks like. (In the future, better discovery will be added to the product.) Okay, getting back to it: our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. We can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, time to create the cluster. We're going to deploy a bare metal child cluster, and the process is pretty much the same as for any other child cluster: create cluster, give it a name, select bare metal as the provider along with the region, select the version we want to apply, and add the SSH keys. Then we're going to give the load 
balancer host IP that we'd like to use out of the address range, update the address range that we want to use for the cluster, check that the CIDR blocks for the Kubernetes services and tunnels are what we want them to be, enable or disable StackLight, and set the StackLight settings to finish defining the cluster. And then, as for any other machine, we need to add machines to the cluster. Here we're focused on building Kubernetes clusters, so we put in the count of machines we want as managers — we pick the label type "manager" and create three machines as managers for the Kubernetes cluster. Then we do the same to add workers, just making sure that the worker label and host type are set correctly, and we wait for the machines to deploy. It goes through the process of putting the operating system on the nodes, validating that operating system, deploying Docker Enterprise, and making sure that the cluster is up and running, ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about the specifics of things like storage, details of the cluster, etcetera. Now watch the machines go through the various stages, from prepared to deployed, and then the cluster build — and that brings us to the end of this particular demo. As you can see, the process is identical to that of building a normal child cluster, and our deployment is complete. >> There we have it: a child cluster on bare metal, for folks that wanted to play with this stuff on-prem. 
>> It's been an interesting journey, taking it from the mothership: we started out building a management cluster, then populated it with a child cluster, then created a regional cluster to spread the management of our clusters geographically, and finally provided a platform for supporting AI needs and big data needs. Thank goodness we're now able to put things like Hadoop on bare metal, in containers — pretty exciting. >> Yeah, absolutely. So with this Docker Enterprise Container Cloud platform, hopefully this commoditizes spinning up Kubernetes clusters — Docker Enterprise clusters that can be spun up and used quickly — taking provisioning times from however many months it used to take to get new clusters spun up for your teams down to minutes. We saw those clusters get built in just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Docker Enterprise Container Cloud. Of course, there are many, many more things to discuss about this and all of Mirantis' products. If you'd like to learn more, and if you'd like to get your hands dirty with all of this content, please see us at training.mirantis.com, where we can offer you workshops on our entire line of products, in a number of different formats, in a hands-on, interactive fashion. Thanks, everyone. Enjoy the rest of the launchpad event. >> Thank you all. Enjoy.

Published Date : Sep 17 2020

SUMMARY :

In this demo session, Mirantis' western regional solutions architect and field CTO Sean O'Mara walk through Docker Enterprise Container Cloud end to end. They bootstrap a management cluster and the first regional cluster to support AWS deployments, then spin up child clusters for workloads, noting that the bootstrapper pivots control into the new cluster and goes away afterwards, so there are no lingering dependencies. The management cluster is always deployed highly available, ships with operational tooling such as StackLight monitoring out of the box, and presents a single pane of glass for the regional services, so teams don't have to go back to the mothership. Along the way they stress that the processes are self-healing, so the worst thing you can do is panic at the first warning and start tearing things down. The same workflow is then repeated on bare metal: enroll hosts, create the cluster, add three managers and the workers, and watch the build complete, bringing provisioning time for new Docker Enterprise clusters down from months to minutes.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MaryPERSON

0.99+

SeanPERSON

0.99+

Sean O'MaraPERSON

0.99+

BrucePERSON

0.99+

FrankfurtLOCATION

0.99+

three machinesQUANTITY

0.99+

Bill MilksPERSON

0.99+

AWSORGANIZATION

0.99+

first videoQUANTITY

0.99+

second phaseQUANTITY

0.99+

ShawnPERSON

0.99+

first phaseQUANTITY

0.99+

ThreeQUANTITY

0.99+

Two minutesQUANTITY

0.99+

three managersQUANTITY

0.99+

fifth phaseQUANTITY

0.99+

ClarkPERSON

0.99+

Bill MillsPERSON

0.99+

DalePERSON

0.99+

Five minutesQUANTITY

0.99+

NanPERSON

0.99+

second sessionQUANTITY

0.99+

Third phaseQUANTITY

0.99+

SeymourPERSON

0.99+

Bruce Basil MatthewsPERSON

0.99+

Moran TousPERSON

0.99+

five minutesQUANTITY

0.99+

hundredsQUANTITY

0.99+

Why Use IaaS When You Can Make Bare Metal Cloud-Native?


 

>>Hi, Oleg. So great of you to join us today. I'm really looking forward to our session. So let's get started: if I can get you to give a quick intro to yourself, and then share with us what you're going to be discussing today. >>Hi, Jake. My name is Oleg Elbow. I'm a product architect on the Docker Enterprise Container Cloud team. Today I'm going to talk about running Kubernetes on bare metal with Container Cloud. My goal is to tell you about this exciting feature, why we think it's important, and what we actually did to make it possible. >>Brilliant. Thank you very much. So let's get started. From my understanding, Kubernetes clusters are typically run in virtual machines in clouds; for example, public cloud AWS, or private cloud, maybe OpenStack-based, or VMware vSphere. So why would you go off and run it on bare metal? >>Well, Docker Enterprise Container Cloud can already run Kubernetes in the cloud, as you know, and the idea behind Container Cloud is to enable us to manage multiple Docker Enterprise clusters. But we want to bring innovation to Kubernetes, and instead of spending a lot of resources on the hypervisor and virtual machines, we just go all in for Kubernetes directly on bare metal. >>Fantastic. So it sounds like you're suggesting then to run Kubernetes directly on bare metal. >>That's correct. >>Fantastic, and without a hypervisor layer. >>Yes. We all know the reasons to run Kubernetes in virtual machines: in the first place, it's mutual isolation of workloads. But virtualization comes with a performance hit and additional complexity. When you run Kubernetes directly on the hardware, it's a perfect opportunity for developers: they can see performance boosts of up to 30% for certain container workloads.
This is because the virtualization layer adds a lot of overhead, and even with enhanced placement-awareness technologies like NUMA or processor pinning, it's still overhead. By skipping the virtualization, we just remove this overhead and gain this boost. >>Excellent. A 30% performance boost sounds very appealing. Are there any other value points or positives that you can pull out? >>Yes. Besides the hypervisor overhead, virtual machines also have a static resource footprint. They take up memory and CPU cycles, and overall reduce the density of containers per host. Without virtual machines, you can run up to 16% more containers on the same host. >>Excellent, really great numbers there. >>One more thing to point out: direct use of bare metal makes it easier to use special-purpose hardware, like graphics processors, virtual network functions for network interfaces, or field-programmable gate arrays (FPGAs) for custom circuits, and you can share them between containers more efficiently. >>Excellent; some really great value points you've pulled out there. So a 30% performance boost, a 16% density boost, and it can support specialized hardware a lot more easily. But let's talk now about the applications. What sort of applications do you think would benefit from this the most? >>Well, I'm thinking primarily high-performance computation and deep learning will benefit, which is more common than you might think now that artificial intelligence is creeping into a lot of different applications. It really depends on memory capacity and performance, and they also use special devices like FPGAs for custom circuits widely, so all of it is applicable to machine learning, really. >>And that whole AI piece is really exciting. We're seeing this become more commonplace across a whole host of sectors.
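As an editor's aside, the density argument here is simple arithmetic: each VM's static footprint (guest kernel, agents) consumes RAM that could otherwise run containers. The sketch below models that; the helper and all numbers in it are illustrative, not Mirantis benchmarks.

```python
# Back-of-the-envelope density model: per-VM overhead eats into the RAM
# available for containers. All figures are illustrative assumptions.

def containers_per_host(host_ram_gb, container_ram_gb, vms=0, vm_overhead_gb=0.0):
    """How many containers fit once static VM overhead is subtracted."""
    usable = host_ram_gb - vms * vm_overhead_gb
    return int(usable // container_ram_gb)
```

For example, a 256 GB host running 1 GB containers inside eight VMs that each reserve 4 GB for themselves fits 224 containers, versus 256 on bare metal, roughly the double-digit density gain Oleg describes.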
So you're talking telcos, pharma, banking, etcetera, and not just IT today. >>Yeah, that's indeed very exciting. But creating Kubernetes clusters on bare metal, unfortunately, is not very easy. >>So it sounds like there may be some challenges or complexities around it, and this is, I guess, the reason why there aren't many products out there today for Kubernetes on bare metal. Could you talk to us then about some of the challenges that this might entail? >>Well, there are quite a few challenges. First and foremost, there is no one way to manage bare metal infrastructure nowadays. Many vendors have their own solutions that are not always compatible with each other and don't necessarily cover all aspects of it. So we've worked on an open source project called Metal Kube and integrated it into Docker Enterprise Container Cloud to do this unified bare metal management for us. >>And did I hear you say that's open source? >>The project is open source; we added a lot of our special sauce to it. What it does, basically, is enable us to manage hardware servers just like cloud server instances. >>That's very interesting, but could you go into a bit more detail? Specifically, what do you mean by "as cloud instances"? >>Of course. Generally, it means managing them through some sort of API, or programming interface. This interface has to cover all aspects of the server life cycle: hardware configuration, operating system management, network configuration, storage configuration. With the help of Metal Kube, we extend the Kubernetes API to enable it to manage bare metal hosts and all these aspects of their life cycle. The Metal Kube project uses OpenStack Ironic and wraps it in the Kubernetes API, and Ironic does all the heavy lifting of provisioning. It does it in a very cloud-native way.
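For readers curious what "extending the Kubernetes API to manage bare metal hosts" looks like in practice, here is a sketch of the kind of custom resource the open source Metal Kube (Metal3) project defines. The field names follow the project's `BareMetalHost` resource, but treat the exact schema as an assumption rather than Container Cloud's own API; the host details are placeholders.

```python
# Sketch of a Metal3-style BareMetalHost custom resource, built as a plain
# dict (it would normally be serialized to YAML and applied with kubectl).
# Field names follow the open source project; the schema is an assumption.

def bare_metal_host(name, bmc_address, credentials_secret, boot_mac):
    """Compose a BareMetalHost manifest enrolling one physical server."""
    return {
        "apiVersion": "metal3.io/v1alpha1",
        "kind": "BareMetalHost",
        "metadata": {"name": name},
        "spec": {
            # BMC endpoint and the Secret holding its IPMI credentials
            "bmc": {"address": bmc_address, "credentialsName": credentials_secret},
            # MAC of the NIC used to network-boot the host during provisioning
            "bootMACAddress": boot_mac,
            "online": True,  # ask the controller to power the host on
        },
    }
```

Once such an object is applied, a controller reconciles it the same way a cloud provider reconciles a VM request, which is exactly the "manage hardware like cloud instances" idea above.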
It configures servers using cloud-init, which is very familiar to anyone who deals with the cloud, and power is managed transparently through the IPMI protocol. It does a lot to hide the differences between different hardware hosts from the user, and in Docker Enterprise Container Cloud we made everything so the user doesn't really feel the difference between a bare metal server and a cloud VM. >>So, Oleg, are you saying that you can actually take a machine that's turned off and turn it on using these commands? >>That's correct. That's IPMI, the Intelligent Platform Management Interface. It gives you the ability to interact directly with the hardware: you can manage and monitor things like power consumption, temperature, voltage and so on. But what we use it for is to manage the boot source and the actual power state of the server. So we have a group of servers that are available, and we can turn them on when we need them, just as if we were spinning up a VM. >>Excellent. So that's how you get around the fact that while all cloud VMs are the same, the hardware is all different. But I would assume you would have different server configurations in one environment, so how would you get around that? >>Yeah, that's an excellent question. Some elements of the bare metal management API that we developed are there specifically to enable operators to handle a wider range of hardware configurations. For example, we make it possible to configure multiple network interfaces on the host. We support flexible partitioning of hard disks and other storage devices. We also make it possible to boot remotely using the Unified Extensible Firmware Interface (UEFI) for modern systems, or just good old BIOS for the legacy ones. >>Excellent, thanks for sharing that. Now let's take a look at the rest of the infrastructure next.
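The IPMI operations Oleg describes map naturally onto the stock `ipmitool` CLI (its `lanplus` interface talks IPMI-over-LAN to a remote BMC). The sketch below only composes the argument lists, nothing is executed, and the host address and credentials are placeholders.

```python
# Sketch: composing ipmitool command lines for remote power control over IPMI.
# Nothing is executed here; pass a list to subprocess.run() to actually use it.

def ipmi_cmd(host, user, password, *action):
    """Build an ipmitool invocation against a remote BMC via IPMI-over-LAN."""
    return ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
            *action]

def power_on(host, user, password):
    """Power a dormant server on, as if spinning up a VM."""
    return ipmi_cmd(host, user, password, "chassis", "power", "on")

def power_status(host, user, password):
    """Query the server's current power state."""
    return ipmi_cmd(host, user, password, "chassis", "power", "status")
```

A provisioning controller would run commands like these behind the scenes, which is why the user never has to touch the hardware directly.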
So what about things like networking and storage? How are those managed? >>Oh, Jake, those are some important details. From the networking standpoint, the most important thing for Kubernetes is load balancing. We use some proven open source technologies, such as NGINX and MetalLB, to handle that for us. And for storage, that's a bit more of a tricky part. There are a lot of different storage solutions out there, so we decided to go with Ceph, and the Rook operator for Ceph. Ceph is a very mature and stable distributed storage system with incredible scalability; we actually run pretty big clusters in production with Ceph. And Rook makes the life cycle management for Ceph very robust and cloud-native, with health checking and self-correction, that kind of stuff. So for any Kubernetes cluster that Docker Enterprise Container Cloud provisions on bare metal, you can potentially have a Ceph cluster installed in that cluster, providing storage that is accessible from any node in the cluster to any pod in the cluster. So that's our cloud-native storage component. >>Wonderful. But would that then mean that you'd have to have additional hardware, so more hardware for the storage cluster? >>Not at all. Actually, we use a converged storage architecture in Docker Enterprise Container Cloud: the workloads and Ceph share the same machines and are actually managed by the same Kubernetes cluster. At some point in the future, we plan to add even more flexibility to this Ceph configuration and enable shared Ceph, where all Kubernetes clusters will use a single Ceph backend; that's another way for us to optimize bare metal usage. >>Excellent, so thanks for covering the infrastructure part. What would be good is if we can get an understanding of the look and feel for the operators and the users of the system. So what do they see? >>Yeah, okay.
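On the load-balancing piece: MetalLB in Layer 2 mode is what lets a bare metal cluster hand out `LoadBalancer` IPs from an operator-defined pool, the way a cloud would. The sketch below generates such a pool definition as a plain dict; the structure mirrors MetalLB's address-pool configuration format, but it is illustrative rather than a drop-in config file, and the address range is a placeholder.

```python
# Sketch of a MetalLB-style Layer 2 address pool. The keys mirror MetalLB's
# address-pool config format; treat the exact layout as an assumption.

def l2_address_pool(name, start_ip, end_ip):
    """Define a pool of IPs that LoadBalancer services may be assigned from."""
    return {
        "address-pools": [{
            "name": name,
            "protocol": "layer2",                       # ARP/NDP announcement mode
            "addresses": [f"{start_ip}-{end_ip}"],      # inclusive range
        }]
    }
```

Serialized to YAML and handed to MetalLB, a pool like this replaces the cloud provider's load-balancer API on bare metal.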
As you know, Docker Enterprise Container Cloud provides a web-based user interface that enables you to manage clusters, and the bare metal management is integrated into this interface and provides a very smooth user experience. As an operator, you add or enroll bare metal hosts pretty much the same way you add cloud credentials for any other provider, for any other platform. >>Excellent. I mean, Oleg, it sounds really interesting. Would you be able to share some kind of demo with us? It would be great to see this in action. >>Of course, let's see what we have here. So, first of all, you take a bunch of bare metal servers, you prepare them and connect them to the network as described in the docs, and you bootstrap Container Cloud on top of three of these bare metal servers. Once you're through, you have Container Cloud up and running, and you log into the UI. Let's start here. I'm using the generic operator user for now; it's possible to integrate it with your identity system, with the customer's identity system, and get real users there. So first of all, let's create a project. It will hold all of our clusters, and once we've created it, we just switch to it. The first step for an operator is to add some bare metal hosts to the project; as you see, it's empty. To add the bare metal host, you just need a few parameters. First, a name that will allow you to identify the server later. Then a username and password to access the IPMI controls of the server. Next, and it's very important, the hardware address of the first Ethernet port; it will be used to remotely boot the server over the network. Then the IP address of the IPMI endpoint. And last, but not least, the bucket to assign the bare metal host to; it's a label that is assigned to it.
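So the enrollment form just described takes five inputs: a name, IPMI credentials, the boot MAC address, the IPMI endpoint IP, and a bucket label. Here is a sketch of validating those inputs; the checks and the bucket names are inferred from the demo narration, not the product's actual validation rules.

```python
import re

# Sketch: validate the five host-enrollment inputs from the demo. Bucket
# names (manager/worker/storage) come from the narration; the regex checks
# are deliberately simple and illustrative.

BUCKETS = {"manager", "worker", "storage"}
MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$", re.IGNORECASE)
IPV4_RE = re.compile(r"^(\d{1,3}\.){3}\d{1,3}$")

def validate_host(name, ipmi_user, ipmi_password, boot_mac, ipmi_ip, bucket):
    """Return a list of problems; an empty list means the form would submit."""
    errors = []
    if not name:
        errors.append("name is required")
    if not (ipmi_user and ipmi_password):
        errors.append("IPMI credentials are required")
    if not MAC_RE.match(boot_mac):
        errors.append("boot MAC address is malformed")
    if not IPV4_RE.match(ipmi_ip):
        errors.append("IPMI endpoint IP is malformed")
    if bucket not in BUCKETS:
        errors.append("bucket must be one of " + ", ".join(sorted(BUCKETS)))
    return errors
```

The MAC address matters most here: it is what the provisioning system uses to network-boot the right machine, so a typo means the wrong (or no) server boots.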
Right now we offer just three default labels, or buckets: manager hosts, worker hosts, and storage hosts. Depending on the hardware configuration of the server, you assign it to one of these three groups; you will see how this is used later in the form. Note that at least six servers are required to deploy a managed Kubernetes cluster, just as for the cloud providers. There is some information available now about the servers; it's the result of inspection, and you can look it up. Now we move on to creating a cluster. You need to provide the name for the cluster and select the release of Docker Enterprise Engine, and the next step is provider-specific information. You need to specify the address of the cluster API endpoint here, and the range of addresses for services that will be installed in the cluster for the user workloads. The Kubernetes network parameters can be changed as well, but the defaults are usually okay. Now you can enable or disable StackLight, the monitoring system for the Kubernetes cluster, and provide some custom parameters for it. Finally, you click Create to create the cluster. It's an empty cluster that we need to add some machines to, and we need at least three manager nodes. The form is very simple: you just select the role of the Kubernetes node, either manager or worker, and you need to select the label bucket from which the bare metal host will be picked. We go with the manager label for manager nodes and the worker label for the workers. While the cluster is deploying, let's check out some machine information. The storage data here, the names of the disks, are taken from the bare metal host hardware inspection data that we checked before. Now we wait for the servers to be deployed, which includes the operating system and Kubernetes itself. So, yeah, that's our user interface. If operators need to, they can actually use the Docker Enterprise Container Cloud API for some more sophisticated configurations, or to integrate with an external system, for example a configuration database. All the bare metal tasks can be executed through the Kubernetes API by changing the custom resources describing the bare metal hosts and objects.
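The demo notes that at least six servers, three of them managers, are needed before a managed cluster can be built. A quick sketch of checking an enrolled inventory against those minimums; the counts come from the narration, and the inventory's data shape is an assumption.

```python
from collections import Counter

# Sketch: check an inventory of enrolled bare-metal hosts against the demo's
# stated minimums (three managers, six servers total). Illustrative only.

def can_build_cluster(hosts, min_managers=3, min_total=6):
    """True when the enrolled hosts satisfy the manager and total minimums."""
    by_bucket = Counter(h["bucket"] for h in hosts)
    return by_bucket["manager"] >= min_managers and len(hosts) >= min_total
```

Three managers is the usual odd-numbered quorum for a highly available control plane, which is why the manager minimum is checked separately from the raw host count.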
Um, if operators need to, they can actually use Dr Enterprise Container Container cloud FBI for some more sophisticated, sophisticated configurations or to integrate with an external system, for example, configuration database. Uh, all the burr mental tasks they just can be executed through the carbonated C. P. I and by changing the custom resources customer sources describing the burr mental notes and objects >>Mhm, brilliant. Well, thank you for bringing that life. It's always good. Thio See it in action. I guess from my understanding, it looks like the operators can use the same tools as develops or developers but for managing their infrastructure, then >>yes, Exactly. For example, if you're develops and you use lands, uh, to monitor and manage your cluster, uh, the governmental resources are just another set of custom resources for you. Uh, it is possible to visualize and configure them through lands or any other developer to for kubernetes. >>Excellent. So from what I can see, that really could bridge the gap, then between infrastructure operators on develops and developer teams. Which is which is a big thing? >>Yes, that's that's Ah, one of our aspirations is to unify the user experience because we've seen a lot of these situations when infrastructure is operated by one set of tools and the container platform uses agnostic off it end users and offers completely different set of tools. So as a develops, you have to be proficient in both, and that's not very sustainable for some developers. Team James. >>Sure. Okay, well, thanks for covering that. That's great. E mean, there's obviously other container platforms out there in the market today. It would be great if you could explain only one of some of the differences there and in how Dr Enterprise Container Cloud approaches bare metal. >>Yeah, that's that's a That's an excellent question, Jake. Thank you. 
So, in Container Cloud, unlike in other container platforms, bare metal management is tightly integrated into the product. It's integrated at the UI and API level, and at the backend implementation level. Other platforms typically rely on the user to provision the bare metal hosts before they can deploy Kubernetes on them. This leaves the operating system management, hardware configuration, and hardware management mostly with a dedicated infrastructure operations team. Docker Enterprise Container Cloud can help reduce this burden and these infrastructure management costs by automating it, effectively removing that part of the responsibility from the infrastructure operators. And that's because Container Cloud on bare metal is essentially a full-stack solution: it includes the hardware configuration and covers operating system life cycle management, especially the security updates, the CVE updates. Right now, the only out-of-the-box operating system that we support is Ubuntu. We're looking to expand this, and as you know, Docker Enterprise Engine makes it possible to run Kubernetes on many different platforms, including even Windows. We plan to leverage this flexibility in Docker Enterprise Container Cloud to its full extent, to expand the range of operating systems that we support. >>Excellent. Well, Oleg, we're running out of time, unfortunately, but I've thoroughly enjoyed our conversation today. You've pulled out some excellent points: you talked about potentially up to a 30% performance boost and up to a 16% density boost, you talked about how it can help with specialized hardware and make that a lot easier, and we also talked about some of the challenges that you can solve by using Docker Enterprise Container Cloud, such as persistent storage and load balancing.
There's obviously a lot here, but thank you so much for joining us today. It's been fantastic, and I hope that we've given some food for thought to go out and try deploying Kubernetes on bare metal. So thanks, Oleg. >>Thank you for coming. Bye, Jake.

Published Date : Sep 14 2020

SUMMARY :

Jake talks with Oleg Elbow, product architect on the Docker Enterprise Container Cloud team, about running Kubernetes directly on bare metal. Skipping the hypervisor removes virtualization overhead, yielding up to a 30% performance boost for certain container workloads and up to 16% more containers per host, and it makes special-purpose hardware such as GPUs and FPGAs easier to share between containers, which particularly benefits high-performance computing and machine learning. Because there is no one way to manage bare metal infrastructure, Mirantis built on the open source Metal Kube project, which wraps OpenStack Ironic in the Kubernetes API, provisions servers with cloud-init, and controls power and boot source through IPMI. Networking uses NGINX and MetalLB for load balancing; storage uses Ceph managed by the Rook operator in a converged architecture, so no extra storage hardware is needed. Operators enroll hosts and build clusters through the same web UI and API used for cloud providers, so bare metal hosts become just another set of custom resources, bridging the gap between infrastructure operators and DevOps teams. The session closes with Container Cloud's full-stack approach: hardware configuration, OS life cycle management, and CVE updates, with Ubuntu supported out of the box today and more operating systems planned.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
OlegPERSON

0.99+

Oleg ElbowPERSON

0.99+

30%QUANTITY

0.99+

JakePERSON

0.99+

FBIORGANIZATION

0.99+

IBMORGANIZATION

0.99+

todayDATE

0.99+

JakeyPERSON

0.99+

AWSORGANIZATION

0.99+

bothQUANTITY

0.99+

firstQUANTITY

0.99+

first stepQUANTITY

0.98+

three groupsQUANTITY

0.98+

oneQUANTITY

0.98+

one setQUANTITY

0.98+

BJ KimPERSON

0.98+

WindowsTITLE

0.97+

up to 30%QUANTITY

0.97+

Doctor EnterpriseORGANIZATION

0.96+

IranORGANIZATION

0.93+

threeQUANTITY

0.91+

singleQUANTITY

0.91+

BenPERSON

0.91+

OndaORGANIZATION

0.9+

JamesPERSON

0.9+

EsoORGANIZATION

0.89+

three managerQUANTITY

0.87+

BurnettORGANIZATION

0.86+

One more thingQUANTITY

0.84+

three defaultQUANTITY

0.84+

eachQUANTITY

0.83+

upto 16% moreQUANTITY

0.81+

60% densityQUANTITY

0.79+

single selfQUANTITY

0.76+

up to 60%QUANTITY

0.75+

Zengin ICSTITLE

0.73+

IaaSTITLE

0.73+

six serversQUANTITY

0.72+

HarborORGANIZATION

0.68+

P GTITLE

0.68+

EnterpriseTITLE

0.67+

Dr EnterpriseORGANIZATION

0.67+

I. P MTITLE

0.64+

threeOTHER

0.64+

upQUANTITY

0.63+

Dr Enterprise Container CloudORGANIZATION

0.63+

DoctorORGANIZATION

0.6+

CubanOTHER

0.58+

Coburn eightiesORGANIZATION

0.58+

toolsQUANTITY

0.56+

ThioPERSON

0.55+

BhuttoORGANIZATION

0.55+

CloudTITLE

0.54+

Doc Enterprise ContainerTITLE

0.5+

Doctor Enterprise ContainerTITLE

0.5+

ZatzPERSON

0.49+

TeamPERSON

0.49+

Container CloudTITLE

0.36+