

Bob Thome, Tim Chien & Subban Raghunathan, Oracle


 

>>Earlier this week, Oracle announced the new X9M generation of Exadata platforms for its Cloud@Customer and legacy on-prem deployments. The company also made some enhancements to its Zero Data Loss Recovery Appliance, ZDLRA, something we've covered quite often since its announcement. We had a video exclusive with Juan Loaiza, the executive vice president of mission-critical database technologies at Oracle. We did that on the day of the announcement and got his take on it. And I asked Oracle, hey, can we get some subject matter experts, some technical gurus, to dig deeper and get more details on the architecture, because we want to better understand some of the performance claims that Oracle is making. With me today is Subban Raghunathan, the vice president of product management for the Exadata database machine. Bob Thome is the vice president of product management for Exadata Cloud@Customer. And Tim Chien is the senior director of product management for ZDLRA. Folks, welcome to this power panel, and welcome to theCUBE. >>Thank you, Dave. >>Subban, can we start with you? Juan and I talked about the X9M that Oracle just launched a couple of days ago. Maybe you could give us a recap: what do we need to know? I'm especially interested in the big numbers once more, so we can understand the claims you're making around this announcement, and then we can dig into that. >>Absolutely, very excited to do that. In a nutshell, we have the world's fastest database machine for both OLTP and analytics, and we made that even faster. Not just simply faster: for OLTP we made it 70% faster, and we took the OLTP IOPS all the way up to 27.6 million read IOPS, and mind you, this is being measured at the SQL layer. For analytics we did pretty much the same thing, an 87% increase, and we broke through the one-terabyte-per-second barrier. Absolutely phenomenal stuff.
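As a rough sanity check, the previous-generation figures implied by those uplifts can be back-computed. The only inputs here are the 70% and 87% uplifts and the 27.6 million IOPS and 1 TB/s figures quoted above; the arithmetic is just a back-of-envelope sketch, not an Oracle-published comparison:

```python
# Back-of-envelope: infer the implied prior-generation figures from the
# stated X9M uplifts. Inputs are the numbers quoted in the discussion.

x9m_read_iops = 27.6e6      # read IOPS, measured at the SQL layer
oltp_uplift = 0.70          # "70% faster" for OLTP

x9m_scan_tb_s = 1.0         # analytics scan throughput, TB/s
analytics_uplift = 0.87     # "87% increase" for analytics

prior_iops = x9m_read_iops / (1 + oltp_uplift)
prior_scan = x9m_scan_tb_s / (1 + analytics_uplift)

print(f"implied prior-gen OLTP:      {prior_iops / 1e6:.1f}M read IOPS")
print(f"implied prior-gen analytics: {prior_scan:.2f} TB/s")
```

That puts the prior generation at roughly 16.2 million read IOPS and about 0.53 TB/s, consistent with the uplifts as quoted.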
Now, while all those numbers by themselves are fascinating, here's something that's even more fascinating in my mind: 80% of the product development work for Exadata X9M was done during COVID, which means all of us were remote. And what that meant was extreme levels of teamwork between the development teams, manufacturing teams, procurement teams, software teams, the works. I mean, everybody coming together as one to deliver this product. I think it's kudos to everybody who touched this product in one way or the other. Extremely proud of it. >>Thank you for making that point. And I'm laughing because you set the same bar for mission-critical OLTP performance: you had the world record, and now you're adding on top of that. But, okay. So there are customers that still, you know, want to build it themselves, and they try to build their own Exadata. What they do is buy their own servers, storage, and networking components. When I talk to them, they'll say, look, they want to maintain their independence; they don't want to get locked in to Oracle. Or maybe they believe it's cheaper; maybe they're focused on CapEx because the CFO has them in a headlock. Or they might talk about wanting a platform that can support, you know, horizontal apps, maybe not Oracle stuff. Or maybe they're just trying to preserve their jobs, I don't know. But why shouldn't these customers roll their own, and why can't they get similar results just using standard off-the-shelf technologies? >>Great question. It's going to require a somewhat involved answer, but let's just look at the statistics to begin with. Oracle's Exadata was first productized and delivered to the market in 2008, and at that point in time we already had industry leadership across a number of metrics.
Today, we are at the 11th generation of Exadata, and we are way ahead of the competition, like 50X faster, a hundred X faster, right? I mean, we are talking orders of magnitude. How did we achieve this? I think the answer to your question lies in what we are doing at the engineering level to make these magical numbers come to the fore. First, it starts with the hardware. Oracle has its own hardware server design team, where we embed capabilities for performance, reliability, security, and scalability down at the hardware level, and the database, which is a user-level process, talks to the hardware directly. >>The only reason we can do this is because we own the source code for pretty much everything in between, starting with the database, going into the operating system and the hypervisor, and, as I just mentioned, the hardware. We also work on the firmware elements of this entire thing. The key to making Exadata the best Oracle database machine lies in that engineering, where we take the operating system and make it fit like tongue and groove with the hardware, and then do the same with the database. And because we have this deep insight into the workloads that are running at any given point in time on the compute side of Exadata, we can do micromanagement at the software layers of how traffic flows through the entire system, and do things like prioritize OLTP transactions on a very specific queue on the RDMA over Converged Ethernet fabric. We're able to do Smart Scan, using the compute elements in the storage tier to offload SQL processing.
We take the columnar formats of data and extend them into flash, just a whole bunch of things we've been doing over the last 12 years, because we have this deep engineering. You can try to cobble together a system that sort of looks like an Exadata, it's got a network, a storage tier, and a compute tier, but you're not going to achieve anything close to what we are doing. The biggest deal in my mind, apart from the performance and the high availability, is the security, because we are testing the stack top to bottom. When you're trying to build your own best-of-breed kind of stuff, you're not going to be able to do that, because you're dependent on the server vendor to do something, HP to do something else, or Dell to do something else, and a Brocade switch to do something. It's not possible. We can do this; we've done it, we've proven it, we've delivered it for over a decade. End of story, as far as I'm concerned. >>You know, I remember when Oracle purchased Sun, and I know a big part of that purchase was to get Java, but I remember saying at the time it was a brilliant acquisition. I was looking at it from a financial standpoint: I think you paid seven and a half billion for it, and when Safra was able to get back to sort of pre-acquisition margins, you got the Oracle uplift in terms of revenue multiples. So from that standpoint it was a no-brainer. But the other thing is, back in the Unix days, HP and Oracle were the standard in terms of all the benchmarks and performance. And even then, I'm sure you worked closely with HP to get the stuff to work together, to make sure it would recover according to your standards, but you couldn't actually do that deep engineering you just described. Now, earlier, Subban, you stated that with X9M you get OLTP reads at 27 million IOPS.
You've got 19 microseconds latency, so pretty impressive stuff, impressive numbers, and you kind of just went there. But how are you measuring these numbers versus other performance claims from your competitors? Are you stacking the deck? Can you share how you measure? >>Sure. So, as I said, we are measuring it at the SQL layer. This is not some kind of IOmeter or micro-benchmark that's looking at just a flash subsystem or just a persistent memory subsystem. This is measured at the compute node, running an entire set of transactions: how many times can you finish that, right? That's how it's being measured. Now, most people cannot measure it like that, because of the sheer number of vendors involved in their particular solution. You've got servers from vendor A, storage from vendor B, the storage network from vendor C, the operating system from vendor D. How do you tune all of these things on your own? You cannot, right? I mean, there are only certain bells and whistles and knobs available for you to tune. So that's how we are measuring: the 19 microseconds is at the SQL layer. >>What that means is a real-world customer running a real-world workload is guaranteed to get that kind of latency. None of the other suppliers can make that claim. This is real-world capability. Now let's take a look at that 19 microseconds. We boast, and we say, hey, we are an order of magnitude, two orders of magnitude faster than everybody else when it comes down to latency, and one might think this is voodoo magic. While it is magical, the magic is really grounded in deep engineering and deep physics and science. The way we implement this is, first of all, we put the persistent memory tier in the storage, and that way it's shared across all of the database instances that are running on the compute tier.
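The value of taking the operating system out of the read path, which is where that 19-microsecond figure comes from, can be sketched with a toy latency budget. Every per-step number below is an illustrative assumption for the sketch, not a measurement from this discussion; only the structural idea, that removing the OS removes whole categories of latency, comes from the conversation:

```python
# Toy latency-budget comparison of two remote-read paths.
# Per-step costs are illustrative guesses (microseconds), not measurements.

kernel_read_path = {
    "syscall entry/exit":           1.0,
    "kernel block/network stack":   4.0,
    "context switch (sleep/wake)":  3.0,
    "interrupt handling":           2.0,
    "wire + media access":         10.0,
}

rdma_pmem_read_path = {
    # NIC-to-NIC transfer straight into persistent memory sitting on the
    # storage server's memory bus; no OS involvement on either end.
    "post RDMA read (user space)":   1.0,
    "wire + media access":          10.0,
    "completion poll (user space)":  1.0,
}

kernel_total = sum(kernel_read_path.values())
rdma_total = sum(rdma_pmem_read_path.values())

print(f"kernel-mediated read: ~{kernel_total:.0f} us")
print(f"RDMA-to-PMEM read:    ~{rdma_total:.0f} us")
```

The absolute numbers are invented; the structural point is that the direct path simply has fewer line items, which is why it can approach the latency of the media itself.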
Then we have this ultra-fast 100-gigabit RDMA over Converged Ethernet fabric. >>With this, what we have been able to do at the hardware level, between two network interface cards resident on that fabric, is create paths that enable high-priority, low-latency communication between any two endpoints on that fabric. And then, given that we implemented persistent memory in the storage tier, with that persistent memory sitting on the memory bus of the processor in the storage tier, we can perform a remote direct memory access operation from the compute tier to memory address spaces in the persistent memory of the storage tier, without the involvement of the operating system on either end. No context switches, no interrupt processing latencies and all of that. So it's hardware-to-hardware communication with security built in, which is immutable; all of this is built into the hardware itself, so there's no software involved. You perform a read, the data comes back in 19 microseconds, boom. End of story. >>Yeah. So that's key to my next topic, which is security, because you're not getting the OS involved, and very often, if I can get access to the OS, I get privileges; as a hacker I can really take advantage of that. But before I go there: Oracle talks about how a huge percentage of the Fortune 100 companies run their mission-critical workloads on Exadata. So that's not only important to those companies; they're serving consumers like me, right? I'm going to my ATM or I'm swiping my credit card. And Juan mentioned that you use a layered security model. I sort of inferred, anyway, that having this stuff in hardware and not having to involve access to the OS actually contributes to better security. But can you describe this in a bit more detail? >>So yeah, what Juan was talking about was this layered security. Said differently,
it is defense in depth, and that's been our mantra and philosophy for several years now. So what does that entail? As I mentioned earlier, we design our own servers. We do this for performance; we also do it for security. We've got a number of features built into the hardware that make sure we've got immutable areas of firmware. Let me give you an example: if you take an Oracle x86 server, just a standard x86 server, not even expressed in the form of an Exadata system, even if you had superuser privileges sitting on top of an operating system, you cannot modify the BIOS as a user or as a superuser. That has to be done through the system management network. So we put gates and protection modes, et cetera, right in the hardware itself. >>Now, of course, the security of that hardware goes all the way back to the fact that we own the design. We've got a global supply chain, but we are making sure that our supply chain is protected and monitored, and we also protect the last mile of the supply chain: we can detect any tampering of firmware that occurred while the hardware shipped from our factory to the customer's dock. We know that something's been tampered with the moment it comes up at the customer site. So that's the hardware. Let's take a look at the operating system. Oracle Linux: we own Oracle Linux, the entire source code, and what ships on Exadata is the Unbreakable Enterprise Kernel. The kernel and the operating system itself have been reduced by eliminating all unnecessary packages from the operating system bundle.
These things include like print servers, web servers, a whole bunch of stuff that you're not absolutely going to use at all on exit data. Why ship those? Because the moment you ship more stuff than you need, you are increasing the, uh, the target, uh, that attackers can get to. So on AXA data, there are only 701 packages. So compare this 5,413 packages on a standard Linux, 701 and exit data. So we reduced the attack surface another aspect on this, when we, we do our own STIG, uh, ASCAP benchmarking. If you take a standard Linux and you run that ASCAP benchmark, you'll get about a 30% pass score on exit data. It's 90 plus percent. >>So which means we are doing the heavy lifting of doing the security checks on the operating system before it even goes out to the factory. And then you layer on Oracle database, transparent data encryption. We've got all kinds of protection capabilities, data reduction, being able to do an authentication on a user ID basis, being able to log it, being able to track it, being able to determine who access the system when and log back. So it's basically defend at every single layer. And then of course the customer's responsibility. It doesn't just stop by getting this high secure, uh, environment. They have to do their own job of them securing their network perimeters, securing who has physical access to the system and everything else. So it's a giant responsibility. And as you mentioned, you know, you as a consumer going to an ATM machine and withdrawing money, you would do 200. You don't want to see 5,000 deducted from your account. And so all of this is made possible with exited and the amount of security focus that we have on the system >>And the bank doesn't want to see it the other way. So I'm geeking out here in the cube, but I got one more question for you. Juan talked about X nine M best system for database consolidation. So I, I kinda, you know, it was built to handle all LTP analytics, et cetera. 
So I want to push you a little bit on this, because I can make an argument that this is kind of a Swiss army knife versus the best screwdriver or the best knife. How do you respond to that concern? And how do you respond to the concern that you're putting too many eggs in one basket? What do you tell people who fear that you're consolidating workloads to save money, but you're also expanding the blast radius? Isn't that a problem? >>Very good question. So yes, this is an interesting problem, and it is a balancing act. As you correctly pointed out, you want the economies of scale you get when you consolidate more and more databases, but at the same time, when something happens, when hardware fails or there's an attack, you want to make sure you have business continuity. So what are we doing on Exadata? First of all, as I mentioned, we design our own hardware, and we build reliability into the system at the hardware layer. That means redundancy: redundant fans, redundant power supplies. We even have the ability to isolate faulty cores on the processor, and there's a tremendous amount of sweeping going on by the system management stack, looking for problem areas and trying to contain them as much as possible within the hardware itself. >>Then you take it up to the software layer. We use that reliability to build high availability. What that implies, and this is fundamental to the Exadata architecture, is the entire scale-out model. In a base system, you cannot go smaller than two database nodes and three storage cells. Why is that? Because you want high availability for your database instances. If something happens to one server, hardware, software, whatever, you've got another server ready to take on that load, and with Real Application Clusters you can then switch over between the two. Why three storage cells?
We want to make sure you have duplicate copies of data, because you at least want one additional copy of your data in case something happens to the disk holding the only copy, right? And the reason we have three is that you can then stripe data across these three different storage servers and deliver high availability. >>Now take that up to the rack level. When you're really talking about the blast radius, you want to make sure that if something physically happens to a data center, you have infrastructure available for business continuity, which is why we have the Maximum Availability Architecture. Components like GoldenGate and Active Data Guard, and other ways by which we can keep two distant systems in sync, are extremely critical for delivering these high-availability paths. That makes the whole equation of how many eggs in one basket versus containment of the blast radius a lot easier to grapple with, because business continuity is paramount to us. I mean, Oracle the enterprise is running on Exadata; our high-value cloud customers are running on Exadata. And I'm sure Bob's going to talk a lot more about the cloud piece of it. So I think we have all the tools in place to go after that optimization of how many eggs in one basket versus blast radius. It's a question of working through the solution and the criticalities of that particular instance. >>Okay, great. Thank you for that detail, Subban. We're going to give you a break: go take a breath, get a drink of water. Maybe we'll come back to you if we have time. Let's go to Bob. Bob Thome, Exadata Cloud@Customer X9M. Earlier this week, Juan said, kind of cocky, why are we even bothering to compare Exadata and your Cloud@Customer against Outposts or Azure Stack? Can you elaborate on why that is? >>Sure.
You know, first of all, I want to say I love AWS Outposts. You know why? It affirms everything that we've been doing for the past four and a half years with Cloud@Customer. It affirms that running cloud services in customers' data centers is a large and important market, large and important enough that AWS felt the need to provide these customers with an AWS option, even if it only supports a sliver of the functionality they provide in the public cloud. And that's what they're doing: they're giving it a sliver, and they're not exactly leading with the best they could offer. So for that reason, that reason alone, there's really nothing to compare, and so we give them the benefit of the doubt and we actually compare against their public cloud solutions. >>Another point: most customers looking to deploy Oracle Cloud@Customer are looking for a performant, scalable, secure, and highly available platform on which to deploy their most critical databases. Most often those are Oracle databases. Does Outposts run Oracle Database? No. Does Outposts run a comparable database? Not really. Does Outposts run Amazon's top OLTP and analytics database services, the ones that are tops in their public cloud? No. So we couldn't find anything that runs on Outposts that's worth comparing against Exadata Cloud@Customer, which is why the comparisons are against their public cloud products. And even with that, we're still looking at numbers like 50 times, a hundred times slower, right? So then there's the Azure Stack. One of the key benefits that customers love about the cloud, and that I think is really underappreciated, is that it's a single-vendor solution, right? You have a problem with a cloud service, could be IaaS, PaaS, SaaS, doesn't matter,
and there's a single vendor responsible for fixing your issue. Azure Stack is missing big here, because they're a multi-vendor cloud solution, like AWS Outposts. Also, they don't exactly offer the same services in the cloud that they offer on-prem, and from what I hear, it can be a management nightmare requiring specialized administrators to keep that beast running. >>Okay, well, thanks for that. I'll grant you that, first of all, granted, Oracle was the first with that same vision. I always tell people, you know, if they say, well, we were first, I'm like, well, actually, no, Oracle was first. Having said that, Bob, I hear you that right now Outposts is a 1.0 version; it doesn't have all the bells and whistles. But neither did your cloud when you first launched it. So let's let it bake for a while, and we'll come back in a couple of years and see how things compare, if you're up for it. >>Just remember that we're still in the oven too, right? >>Okay, all right, good. I love it, I love the chutzpah. Juan also talked about Deutsche Bank. I saw that Deutsche Bank announcement: how they're working with Oracle, modernizing their infrastructure around database, building other services around that, and kind of building their own sort of version of a cloud for their customers. How does Exadata Cloud@Customer fit into that whole Deutsche Bank deal? Is this solution unique to Deutsche Bank? Do you see other organizations adopting Cloud@Customer for similar reasons and use cases? >>Yeah, I'll start with that. First, I want to say that I don't think Deutsche Bank is unique. They want what all customers want: they want to be able to run their most important workloads.
The ones running today in their data centers on Exadata and other high-end systems, in a cloud environment where they can benefit from things like cloud economics, cloud operations, and cloud automation. But they can't move to public cloud: they need to maintain the service levels, the performance, the scalability, the security, and the availability that their business has come to depend on. Most clouds can't provide that. Actually, Oracle's cloud can, our public cloud can, because our public cloud does run Exadata, but still, even with that, they can't do it, because as a bank they're subject to lots of rules and regulations. They cannot move their 40 petabytes of data to a point outside the control of their data center. >>They have thousands of interconnected databases and applications. It's like a rat's nest, right? And many large customers have this problem. How do you move that to the cloud? You can move it piecemeal: I'm going to move these apps and, you know, not move those apps. But you end up with these situations where some pieces are up here and some pieces are down there, and the thing just dies because of the long latency over a WAN connection. It just doesn't work, right? You can also shut it down: let's shut it down on Friday and move everything all at once. Unfortunately, when you're looking at the estates most customers have, you're not going to be able to; you're going to be down for a month, right? Who can tolerate that? So it's a big challenge, and Exadata Cloud@Customer lets them move to the cloud without losing control of their data, >>and without having to untangle those thousands of interconnected databases. So, you know, that's why these customers are choosing Exadata Cloud@Customer.
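That "dies over a WAN" point is easy to quantify with a toy round-trip model. The round-trip count and latencies below are illustrative assumptions, not figures from the interview; only the structural claim, that chatty database traffic cannot survive a long-haul link, comes from the conversation:

```python
# Toy model: a chatty job issuing many sequential database round trips.
# All numbers are illustrative assumptions.

round_trips = 100_000   # sequential SQL calls in the job
lan_rtt_ms = 0.2        # app and database in the same data center
wan_rtt_ms = 30.0       # app on-prem, database in a distant cloud region

lan_total_s = round_trips * lan_rtt_ms / 1000
wan_total_s = round_trips * wan_rtt_ms / 1000

print(f"same data center: ~{lan_total_s:.0f} s of pure network wait")
print(f"over the WAN:     ~{wan_total_s:.0f} s of pure network wait")
```

Under these assumptions, the same job goes from about 20 seconds of network wait to about 50 minutes, a 150x slowdown, which is why splitting tightly coupled databases across a WAN fails and why keeping the cloud inside the data center matters.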
More importantly, it sets them up for the future. With Exadata Cloud@Customer, they can run not just in their data center, but also in public cloud adjacent sites, giving them a path to moving some work out of the data center and ultimately into the public cloud. As I said, they're not unique. Other banks are watching, and some are acting, and it's not just banks. Just last week, Telefónica, the telco in Spain, announced their intent to migrate the bulk of their Oracle databases to Exadata Cloud@Customer. This will be the key cloud platform running in their data centers to support both new services as well as mission-critical and operational systems. And one last important point: Exadata Cloud@Customer can also run Autonomous Database. Even if customers aren't ready to adopt it today, a lot of them are interested in it. They see it as a key piece of the puzzle moving forward, and they know they can easily start to migrate to Autonomous in the future as they're ready, which of course will drive additional efficiencies and additional cost savings. >>So, Bob, I've got a question for you, because Oracle's playing both sides, right? You've got a true public cloud now, and obviously you have a huge on-premises estate. When I talk to companies that don't own a cloud, whether it's Dell or HPE or Cisco, et cetera, they make the point, and I agree with them, by the way, that the world is hybrid; not everything's going into the cloud. However, I have a lot of respect for the folks at Amazon as well, and they believe, long term, they're on record saying this, that ultimately all workloads are going to be running in the cloud. Now, I guess it depends on how you define the cloud; the cloud is expanding and all that other stuff.
But my question to you, because again, you're kind of on both sides here: are hybrid solutions like Cloud@Customer a stepping stone to the cloud, or is cloud in your data center sort of a continuous, permanent, essential play? >>That's a great question. As I recall, people debated this a few years back when we first introduced Cloud@Customer, and at that point some people, I'm talking about even internal to Oracle, saw this as a stopgap measure to let people leverage cloud benefits until they're really ready for the public cloud. But I think over the past four and a half years the thinking has changed a bit on this, and everyone kind of agrees that Cloud@Customer may be a stepping stone for some customers, but others see it as the end game. Not every workload can run in the public cloud, at least not given today's regulations and the issues faced by many of these regulated industries. These industries move very, very slowly, and customers are content to, and in many cases required to, retain complete control of their data. They will be running with that data under their control, in the data center, for the foreseeable future. >>I've got another question, if I can take a little tangent, because the other thing I hear from the on-prem, don't-own-a-cloud folks is that it's actually cheaper to run on-prem, because they're getting better at automation, et cetera. And you get the exact opposite from the cloud guys; they roll their eyes: are you kidding me? It's way cheaper to run it in the cloud. So which is more cost-effective? Is it one of those "it depends," Bob?
Um, I think that there's, there's customers who look at it and they say, oh, on-premise sheet is cheaper. And there's customers who look at it and say, the cloud is cheaper. If you, um, you know, there's a lot of ways that you may incur savings in the cloud. A lot of it has to do with the cloud economics, the ability to pay for what you're using and only what you're using. If you were to kind of, you know, if you, if you size something for your peak workload and then, you know, on prem, you probably put a little bit of a buffer in it, right? >>If you size everything for that, you're gonna find that you're paying, you know, this much, right? All the time you're paying for peak workload all the time with the cloud, of course, we support scaling up, scaling down. We supply, we support you're paying for what you use and you can scale up and scale down. That's where the big savings is now. There's also additional savings associated with you. Don't have the cloud vendors like work. Well, we manage that infrastructure for you. You no longer have to worry about it. Um, we have a lot of automation, things that you use to either, you know, probably what used to happen is you used to have to spend hours and hours or years or whatever, scripting these things yourselves. We now have this automation to do it. We have, um, you eyes that make things ad hoc things, as simple as point and click and, uh, you know, that eliminates errors. And, and it's often difficult to put a cost on those things. And I think the more enlightened customers can put a cost on all of those. So the people that were saying it's cheaper to run on prem, uh, they, they either, you know, have a very stable workload that never changes and their environment never changes, um, or more likely. They just really haven't thought through the, all the hidden costs out there. >>All right, you got some new features. Thank you for that. By the way, you got some new features in, in cloud, a customer, a what are those? 
Do I have to upgrade to X9M to get them? >>All right. So, you know, we're always introducing new features for Cloud@Customer, but two significant things we've rolled out recently are operator access control and elastic storage expansion. As we discussed, many organizations using Exadata Cloud@Customer are attracted by the cloud economics and the operational benefits, but they're required by regulations to retain control and visibility of their data, as well as of any infrastructure that sits inside their data center. With operator access control enabled, cloud operations staff members must request access to a customer system. The customer IT team grants a designated person specific access, to a specific component, for a specific period of time, with specific privileges, and they can then view and audit the activity in real time. And if they see something they don't like, you know, hey, what's this guy doing? It looks like he's stealing my data, or doing something I don't like, boom. >>They can kill that operator's access, the session, the connections, everything, right away. This gives everyone, especially customers that need to regulate remote access to their infrastructure, the confidence they need to use the Exadata Cloud@Customer service. And the other thing that's new is elastic storage expansion. Customers can add additional storage servers to their system, either at initial deployment or after the fact, and this provides two important benefits. The first is that they can right-size their configuration: if they need only the minimum compute capacity, they don't need the maximum number of storage servers to get that capacity. They no longer need to subscribe to a fixed shape; we used to have fixed shapes, with hundreds of unnecessary database cores just to get the storage capacity. They can select a smaller system, >>
The second benefit is kind of key for many customers: you're out of storage? Guess what, you can add more. And when you're out of storage, that's really important. Now, to the last part of that question: do you need a new Exadata Cloud@Customer X9M system to get these features? No, they're available for all Gen 2 Exadata Cloud@Customer systems. That's really one of the best things about cloud: the service you subscribe to today just keeps getting better and better. And unless there's some technical limitation, which is rare, most new features are available even for the oldest Cloud@Customer systems. >>Cool. My last question for you, Bob, is another one on security. Obviously, again, we talked to Subban about this; it's a big deal. How can customer data be secure in the cloud if somebody other than the customer's own vetted employees is managing the underlying infrastructure? Is that a concern you hear a lot, and how do you handle that? >>You know, it is something we hear, because a lot of these customers have big security teams, and it's their job to be concerned about that kind of stuff. Security, however, is one of the biggest but least appreciated benefits of cloud. Cloud vendors such as Oracle hire the best and brightest security experts to ensure that their clouds are secure, something only the largest customers can afford to do. If you're a small shop, you're not going to be able to hire some of this expertise, so you're better off being in the cloud. Customers running in the Oracle cloud can also use Oracle's Data Safe tool, which we provide, and which lets you inspect your databases to ensure that everything is locked down and your data is secure. But your question is actually a little bit different.
>>It was about potential internal threats to a company's data, given that the cloud vendor's employees, not the customer's, have access to the infrastructure that sits beneath the databases. Really, the first and most important thing we do to protect customers' data is to encrypt the database by default. Actually, Subban listed a whole laundry list of things, but that's the one thing I want to point out: we encrypt your database. It's encrypted. Yes, it sits on our infrastructure. Yes, our operations persons can actually see those data files sitting on the infrastructure, but guess what? They can't see the data. The data is encrypted. All they see is kind of a big encrypted blob, so they can't access the data themselves. And, as you'd expect, we have very tight controls over operations access to the infrastructure. They need to securely log in using mechanisms designed to prevent unauthorized access. >>All access is logged, and suspicious activities are investigated. But that still may not be enough for some customers, especially the ones I mentioned earlier in regulated industries, and that's why we offer operator access control. As I mentioned, that gives customers complete control over access to the infrastructure: the when, the what ops can do, and how long they can do it. Customers can monitor in real time, and if they see something they don't like, they stop it immediately. Lastly, I just want to mention Oracle's Database Vault feature. This prevents administrators from accessing data, protecting it from rogue operators, whether they be from Oracle or from the customer's own IT staff. This database option, Database Vault, is included when running a License Included service on Exadata Cloud@Customer, so you basically get it with the service. >>Got it. Thank you so much; that's unbelievable, Bob.
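Bob's "big encrypted blob" point is the standard data-at-rest encryption argument: whoever holds the storage but not the key sees only ciphertext. A toy sketch of the idea using a simple SHA-256 counter keystream as a stand-in for the real AES-based encryption a database would use; do not use this construction for actual security:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 over key||counter. Illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: the same operation both ways

plaintext = b"SSN=123-45-6789, balance=10000"
tde_key = b"held by the database, not the infrastructure operator"
datafile = encrypt(tde_key, plaintext)

# The operations person can see the file on the infrastructure...
assert isinstance(datafile, bytes)
# ...but without the key it is just an opaque blob, not the data:
assert datafile != plaintext
# The database, holding the key, reads it back fine:
assert decrypt(tde_key, datafile) == plaintext
```

The same separation is why key custody, not file visibility, is the thing to audit.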
I mean, we've got a lot to unpack there, but we're going to give you a break now and go to Tim, Tim Chien. Zero Data Loss Recovery Appliance: we always love that name. We think the big guy named it, but nobody will tell us. We've been talking about security, and there's been a lot of news around ransomware attacks in every industry around the globe. Any knucklehead with a high school diploma can become a ransomware attacker: go on the dark web, get ransomware as a service, stick it in, take a piece of the vig, and hopefully get arrested. When you think about the database, how do you deal with the ransomware challenge? >>Yeah, Dave, that's an extremely important and timely question. We are hearing this from our customers. We talk about HA and backup strategies, and ransomware has been coming up more and more. And the unfortunate thing is that these ransoms are actually paid, in the hope of regaining the ability to access the data again. What that tells me is that today's recovery solutions and processes are not sufficient to get these systems back in a reliable and timely manner, so you have to pay the ransom to get even a hope of getting the data back. Now, for databases this can have a huge impact, because we're talking about transactional workloads. Even a compromise of just a few minutes, a blip, can affect hundreds or even thousands of transactions. This can literally represent hundreds of lost orders if you're a big manufacturing company, or millions of dollars' worth of financial transactions in a bank. And that's why protecting databases at a transaction level is especially critical for ransomware, and that's a huge contrast to traditional backup approaches. >>Okay, so how do you approach that? What do you do specifically for ransomware protection for the database?
>>Yeah, so we have the Zero Data Loss Recovery Appliance, for which we announced the X9M generation. It is really the only solution in the market that offers that transaction-level protection, which allows all transactions to be recovered with zero RPO. Zero, again. And this is only possible because Oracle has very innovative and unique technology called real-time redo, which captures all the transactional changes from the databases at the appliance, where they are then stored. Moreover, the appliance validates all these backups as they are received; you want to make sure that you can recover them after you've sent them, right? So it's not just a file-level integrity check on a file system. It's actual database-level validation that the Oracle blocks and the redo I mentioned can be restored and recovered as a usable database. Any kind of malicious attack on or modification of that backup data, in transit or even after it's stored on the appliance, would be immediately detected and reported by that validation. So this allows administrators to take action, such as removing that system from the network. It's a huge leap in terms of what customers can get today. The last thing I just want to point out is what we call our cyber vault deployment. A lot of customers in the industry are creating what we call air-gapped environments, where they have a separate location where their backup copies are stored, physically network-separated from the production systems. This prevents ransomware from possibly infiltrating that last good copy of backups. So you can deploy Recovery Appliance in a cyber vault and have it synchronize at randomized times, when the network's available, to keep it in sync. That, combined with our transaction-level, zero-data-loss validation, is a nice package and really a game changer in protecting and recovering your databases from modern-day cyber threats.
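Tim's description combines two ideas worth separating: backups built from daily changes, and block-level validation that what is stored can actually be recovered, so tampering is detected. A minimal sketch of how those pieces fit together; the block-and-checksum scheme here is invented for illustration and is not the appliance's actual format:

```python
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

class RecoveryStore:
    """Toy incremental-forever store: one base copy plus per-day deltas."""
    def __init__(self, base: dict):
        # base maps block_id -> block bytes; checksums are taken on ingest
        self.base = dict(base)
        self.deltas = []                                  # list of {block_id: bytes}
        self.sums = {b: checksum(d) for b, d in base.items()}

    def ingest_incremental(self, changed: dict):
        # Only the changed blocks are sent each day ("just the changes").
        for block_id, data in changed.items():
            self.sums[block_id] = checksum(data)
        self.deltas.append(dict(changed))

    def virtual_full(self) -> dict:
        # Synthesize a full backup by replaying deltas over the base.
        image = dict(self.base)
        for delta in self.deltas:
            image.update(delta)
        return image

    def validate(self) -> bool:
        # Block-level validation: every block must match its ingest checksum,
        # so silent modification of stored backup data is detected.
        return all(checksum(d) == self.sums[b]
                   for b, d in self.virtual_full().items())

store = RecoveryStore({0: b"alpha", 1: b"bravo", 2: b"charlie"})
store.ingest_incremental({1: b"bravo-v2"})      # day 1: one block changed
store.ingest_incremental({2: b"charlie-v2"})    # day 2: another block changed

full = store.virtual_full()
assert full == {0: b"alpha", 1: b"bravo-v2", 2: b"charlie-v2"}
assert store.validate()                         # untampered backup validates

store.deltas[0][1] = b"ransomware!"             # simulate malicious modification
assert not store.validate()                     # ...and it is detected
```

The real appliance validates at the Oracle block and redo level rather than with bare hashes, but the shape of the argument is the same: validation must cover the recoverable image, not just individual files.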
>>Okay, great. Thank you for clarifying that air gap piece, because there was some confusion about that. Every data protection and backup company that I know has a ransomware solution; it's like the hottest topic going. You've got newer players in recovery and backup like Rubrik and Cohesity, who raised a ton of dough. Dell has solutions, HPE just acquired Zerto to deal with this problem, IBM has stuff, Veeam seems to be doing pretty well, and Veritas has a range of recovery solutions. They're all out there. What's your take on these and their strategy, and how do you differentiate? >>Yeah, it's a pretty crowded market, like you said. I think the first thing you really have to keep in mind and understand is that these new and up-and-coming vendors started in the copy data management space, what we call CDM, and they're not traditional backup and recovery designs. The purpose of CDM products is to provide fast point-in-time copies for test/dev, non-production use, and that's a valid problem that needs a solution. So you create this one-time copy, and then you create snapshots after applying incremental changes to that copy, and a snapshot can be quickly restored and presented as if it were a fully populated copy. This is all done through block pointers in the underlying storage. So all of this sounds really cool and modern, right? It's new and up-and-coming, and lots of people in the market are doing it. Well, it's really not that modern, because storage snapshot technologies have been around for years. What these new vendors have been doing is essentially repackaging the old technology for backup and recovery use cases, with a somewhat easier-to-use automation interface wrapped around it. >>Yeah. So, you mentioned copy data management. Last year, Actifio...
They started that whole space, from what I recall. At one point they were valued at more than a billion dollars. They were acquired by Google, and, as I say, they kind of created that category. So fast-forward a bit, nine months, a year, whatever it's been: do you see that Google Actifio offering in customer engagements? Is that something you run into? >>We really don't. It was really popular and well known some years ago, but we really don't hear about it anymore. After the acquisition, if you look at all the collateral and the marketing, they are really a CDM and backup solution exclusively for Google Cloud use cases; they're not being positioned for on-premises or any other use cases outside of Google Cloud. That's, what, 90-plus percent of the market that isn't addressable now by Actifio. So really, we don't see them in any of our engagements at this time. >>I want to come back and push a little bit on some of the tech that you said is really not that modern. I mean, they certainly position it as modern, and a lot of the engineers building these new backup and recovery capabilities came from the hyperscalers. Whether it's copy data management or, quote-unquote, modern backup and recovery, it's kind of a nice all-in-one data management solution that seems pretty compelling. How does Recovery Appliance specifically stack up? A lot of people think it's a niche product for really high-end use cases. Is that fair? How do you see it? >>Yeah. So I think it's so important to understand, again, that the fundamental use of this technology is to create data copies for test/dev. And that's really different from operational backup and recovery, in which you must have the ability to do full and point-in-time recovery in any production outage or DR situation.
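The CDM snapshot mechanics Tim described earlier, a copy populated through storage block pointers with incremental changes layered on top, can be sketched as a toy copy-on-write model. This is an illustration of the general technique, not any vendor's on-disk format:

```python
class SnapshotVolume:
    """Toy copy-on-write snapshots: a snapshot is a map of saved blocks, not a full copy."""
    def __init__(self, blocks):
        self.blocks = list(blocks)       # live block data
        self.snapshots = {}              # name -> {index: preserved old block}

    def snapshot(self, name):
        # Cheap to create: record nothing up front; old blocks are saved on write.
        self.snapshots[name] = {}

    def write(self, index, data):
        # Copy-on-write: preserve the old block for every open snapshot
        # that has not already saved it.
        for saved in self.snapshots.values():
            saved.setdefault(index, self.blocks[index])
        self.blocks[index] = data

    def read_snapshot(self, name, index):
        # Reading a snapshot just follows pointers back to old blocks.
        # The data still lives on this (backup) storage, which is why running
        # production on an "instantly recovered" snapshot performs differently
        # from a true full restore to production storage.
        saved = self.snapshots[name]
        return saved.get(index, self.blocks[index])

vol = SnapshotVolume([b"a1", b"b1", b"c1"])
vol.snapshot("monday")
vol.write(1, b"b2")

assert vol.read_snapshot("monday", 1) == b"b1"   # snapshot sees the old block
assert vol.blocks[1] == b"b2"                    # live volume sees the new one
```

The instant "restore" is the `read_snapshot` path: nothing is copied back, so the application keeps depending on the backup tier's storage.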
More importantly, after you recover and your applications are back in business, performance must continue to meet service levels as before. When you look at a CDM product, you restore a snapshot with that product, and the application is brought up on that restored snapshot; what happens? Your production application is now running on actual read-writable snapshots on backup storage. Remember, they don't restore all the data back to production-level storage; they're restoring it as a snapshot, onto their storage. And so you have a huge difference in performance now, running these applications on that instantly recovered, if you will, database. So to meet these true operational requirements, you have to fully restore the files to production storage, period. Recovery Appliance was first and foremost designed to accomplish this: it's an operational recovery solution. We accomplish that, as I mentioned, with this real-time transaction protection. We have incremental-forever backup strategies, so you're taking just the changes every day, and you can create these virtual full backups that are quickly, and fully, restored at 24 terabytes an hour. We validate and document that performance very clearly on our website. And of course, we provide that continuous recovery validation for all the backups stored on the system. So it's a very nice, complete solution. It scales to meet your demands: hundreds to thousands of databases. These CDM products might seem great, and they work well for a few databases, but then you put a real enterprise load of hundreds of databases on them, and we've seen a lot of times where it just buckles; it can't handle that kind of load at that scale. And this is important, because customers read the marketing and read the collateral: "Hey, instant recovery."
Why wouldn't I want that? Well, you know, it always sounds better than it is, right? And so we have to educate them about exactly what that means for the database, especially for backup and recovery use cases, which are not really handled well by those products. >>I know I'm way over; I had a lot of questions on this announcement, and I was going to let you go, Tim, but you just mentioned something that gave me one more question, if I may. You talked about supporting hundreds of thousands of databases and petabytes. Do you have real-world use cases that actually leverage the appliance in these types of environments? Where does it really shine? >>Yeah, let me give you two quick ones. We have a company, Energy Transfer, a major natural gas and pipeline operator in the U.S., so they are a big part of our country's critical infrastructure services. We know ransomware and these kinds of threats are very much viable; we saw the Colonial Pipeline incident that happened, right? An attack on critical services. Energy Transfer was running lots of databases, and their legacy backup environments just couldn't keep up with their enterprise needs. They had backups taking well over a day and restores taking several hours, so they had problems and couldn't meet their SLAs. They moved to the Recovery Appliance, and now they're seeing backups complete, with that incremental forever, in just 15 minutes. That's like a 48 times improvement in backup time. >>And they're also seeing restores completing in about 30 minutes, versus several hours, so it's a huge difference for them. They also get that nice recovery validation and monitoring by the system; they know the health of their enterprise at their fingertips. The second quick one is a global financial services customer.
They have over 10,000 databases globally, and they really couldn't find a solution other than a throw-more-hardware kind of approach to fix their backups, and that didn't fix the failures and the issues. So they moved to Recovery Appliance, and they saw their failed backup rates go down dramatically. They saw four times better backup and restore performance. They also have a very nice centralized way to monitor and manage the system, a real-time view, if you will, of data protection health for their entire environment. They can show this to executive management and auditing teams, which is great for compliance reporting. They now have north of 50 recovery appliances today across that global enterprise. >>Love it. Thank you for that. Guys, great power panel. We have a lot of Oracle customers in our community, and the best way to help them is to ask you a bunch of questions and get the experts to answer. So I wonder if you could bring us home. Maybe you could just give us the top takeaways that you want our customers, and our audience, to remember from this announcement. >>Sure. I want to actually pick up from where Tim left off and talk about a real customer use case; this is hot off the press. One of the largest banks in the United States decided they needed to apply a software performance update on 3,000 of their database instances, spanning 68 Exadata clusters. A massive undertaking, correct? They finished the entire task in three hours. Three hours to update 3,000 databases across 68 Exadata clusters. Talk about availability: try doing this on any other infrastructure; no way anyone is going to be able to achieve this. So that's in terms of availability.
We are engineering in all of the aspects of database management: performance, security, availability. Being able to provide redundancy at every single level is all part of the design philosophy and how we are engineering this product. And as far as we are concerned, this journey goes on forever. >>We are just going to continue down this path of increasing performance and increasing the security of the infrastructure, as well as of the Oracle database, and keep going. While these have been great results that we've delivered with Exadata X9M, the journey is on. And to our customers: the biggest advantage you're going to get from the kind of performance metrics we are driving with Exadata is consolidation. Consolidate more; move more database instances onto the Exadata platform; gain the benefits from that consolidation. Reduce your operational expenses, reduce your capital expenses, reduce your management expenses. Bring all of those down, and your total cost of ownership is guaranteed to go down. Those are my key takeaways, Dave. >>Guys, you've been really generous with your time. Subban, Bob, Tim, I appreciate you taking my questions and your willingness to go toe to toe. Really, thanks for your time. >>You're welcome, David. Thank you. >>Thank you. >>And thank you for watching this video exclusive from theCUBE. This is Dave Vellante, and we'll see you next time. Be well.

Published Date : Oct 4 2021


Stefanie Chiras, Red Hat | Red Hat Summit 2021 Virtual Experience


 

(ambient music) >> Hello and welcome back to theCUBE's coverage of Red Hat Summit 21, virtual. I'm John Furrier, host of theCUBE. This year it's virtual again, soon to be in real life, post-COVID. As the fall comes into play, we're going to start to see life come back, and the digital transformation continues to accelerate. And we've got a great guest: Stefanie Chiras, Senior Vice President and General Manager at Red Hat, and a CUBE alumni. Great to see you, Stefanie. Thanks for coming on. >> No, it's my pleasure, John. Thanks for having me. I'm thrilled to be here with you, and I look forward to doing it in person soon. >> I can't wait. A lot of people are getting their vaccines; some say that by the fall pretty much everyone 12 and over will be vaccinated. We're going to start to see the onboarding of real life again, but it's never going to be the same. Digital business at the speed of online and offline is being redefined and re-imagined; it's not the old offline/online paradigms, and you're starting to see that come together. That's the focus, and that's the top story in the technology industry. That really brings together the topic I'd like to talk to you about, which is edge computing and RHEL and Linux. This is the topic where all the action is. Obviously, hybrid operating models have pretty much been agreed upon by the industry; that is the way it is. Multicloud is on the horizon, but edge, part of the distributed system, is where the action is. It's a natural extension to the open hybrid cloud, which you guys have been pioneering. Take me through your thoughts on this edge computing dynamic with RHEL. >> Yeah. So as you said, we have been on this open hybrid cloud strategy for eight years or so, very focused on providing customers choice, both in where they run, what they run, and how they run their applications. And the beauty of this strategy is that it endures, because it's able to adapt to new technologies coming in. And as you said, edge is where things are happening now.
It's enabling customers to do so many new and different things. You take all of the dynamics happening in technology, with data being produced everywhere, even new architectures and compute capabilities that can bring compute right out there to the data. You get 5G networks coming in, and incredible advances in telco and networking. You pull that all together, and now you've created a dynamic where the technology can really make edge a viable place to extend how open hybrid cloud can reach and deliver value. And our goal is to bring our platform and our ecosystem to do everything from the core of your data center, out to public clouds, multiple public clouds, and now all the way out to the edge. >> You know, we talk about edge, we talk decentralization, distributed computing. These are the paradigms that are getting re-imagined, if you will, and expanded. You guys specifically talk about this idea that the digital-fast economy requires a new kind of infrastructure. Talk about this, because some say virtual first, media first, data first, video first, developer first; everything's a "first" thing, but this focuses on the new normal. Take us through this new economy. >> It's really about how you focus on being able to deliver digitally, with decisions near the data, and to be able to adapt to that. It's thinking about how you take footprints, and now your footprint out at the edge becomes a part of that. One of the things that's really exciting about edge is that it does have some specific use case requirements, and we're seeing some things come back. I mean, we've talked in the past about heterogeneous computing and heterogeneous architectures and the possibilities that exist there. Now, at the edge, we're seeing different architectures show up, which is great to see.
Being able to bring a platform that allows the use of those different architectures out at the edge to deliver value is a great thing. In addition, we're seeing bare metal come back out at the edge. You can really imagine spaces where, out at the edge, you have new architectures with bare-metal deployments, and you're operating containers that touch directly onto that bare metal. It brings a whole new paradigm for how to deliver value, but now we can bring the consistency of Linux, RHEL, and OpenShift with containers to bridge across that whole space. >> So heterogeneous computing, distributed computing, multi-vendor: if you weave those keywords together, you have to have a supporting operating model that allows different services, cloud services, network services, and application services to work together. This kind of puts an emphasis on a control plane, a software platform that can bring this together. This is the core, if I understand the Red Hat strategy properly; you guys are going right at this point. Is that true? >> Yeah, that's absolutely right. When everything else is changing, what stays the same to help keep you efficient and consistent across it? And that's where we focus on the platforms. As open hybrid cloud changes with different optionalities, our focus is to bring that single common control plane that provides consistency, so you can develop once and reuse, but make it adaptable to how you want to leverage that application: as a container, as a VM, on bare metal, out at the edge, on multiple public clouds. It's really about expanding the landscape that open hybrid cloud can touch. And you'll see in other discussions that one of the new places we're going at the edge is that managed services also become part of that paradigm. So it really is our focus to be that common control plane and provide accessibility while still delivering consistency.
And let's face it, consistency down at the operating system level is what starts to deliver things like security. And boy, it's a critical topic today, right? To make sure that as you expand and distribute, and you've got compute running out there with data, security is top of mind. >> I have to ask you, we've been having many conversations in the open source community, the Linux Foundation, CNCF, KubeCon, CloudNativeCon, and other communities, and the common thread is... I want to get your reaction to this statement. The statement is, "Edge computing's foundation must be open across the board." Talk about that. What's your reaction to that, and how does it relate to Red Hat and what you guys are doing at the edge and with RHEL? >> I mean, we really believe open source brings the compatibility and standardization that allow innovation to grow. In any new technology, fragmentation causes the death of that technology. So our focus is, it will have to be, I mean, we firmly believe it absolutely has to be built on an open platform with standards, so that the ecosystem can build on it. And the ecosystem around edge is complex: you have multiple hardware capabilities and multiple vendors, and any edge deployment will be multi-vendor. So how do you pull all of that together in an ecosystem? It is about having that foundation be open, accessible, and able to be built upon by everyone.
You know, 5G certainly is not so much a consumer technology as it is also a business technology, with the kind of throughputs you're seeing. So, both consumer and industrial enterprise capabilities are emerging. What's your position on that? >> I mean, I think edge is one of those things that it's been hard for people to wrap their heads around a bit, because what we deal with as edge in our own personal lives, whether that be in our connected home or our mobile phone, that's one view of what edge does and one set of value that it delivers. But from a separate lens, edge is everything from how telco is deployed to how data is aggregated in from sensors and how decisions are made. I mean, we're seeing it in spaces whether it be in manufacturing and adding AI onto manufacturing floors, or in vehicles; I mean, vehicles are becoming sort of mobile software centers now. So, there is a whole shift in edge that runs from industry 4.0 and kind of operational-transformation edge all the way into the things that we see every day, which is more the global space and how our homes are connected. And I think now we're starting to see a real maturity in how the world views edge, to be able to compartmentalize what enterprise edge is able to do, how edge can change operational technologies, as well as how edge can change our daily lives. >> Great vision and great insights. Definitely awesome thought leadership there. I totally agree. I think it's exciting to see the confluence of so many awesome technologies, and a bright future with the technology platforms and with society. Open is now de facto in everything, not just in tech but in truth, whether it's journalism or reporting, society and security. Again, trust. Open, trust, technology, they all come together. The confluence of all of those is going on. So, I think you've got a great read on that. So thanks for sharing. Red Hat Summit. What's new? 
Tell us what's new here, what's being talked about that no one's heard before, and what's the existing stuff that's getting better. >> Yeah, I'd love to. So we are really doubling down on edge within our portfolio. You probably saw in November, we had some announcements, both in OpenShift as well as in RHEL, in order to add features and capabilities that deliver specifically for edge use cases. Things like the ability to do updates and rollbacks in a RHEL deployment. We are continuing to drive things into our products that cater to the needs of edge deployments. As part of that, we are engaged with a whole lot of customers today deploying their edge, and that's across industries, from telco to energy to transportation. And so, as we look at all of those cases that we've been engaged with, delivering value to customers, we are bringing forward the Red Hat Edge brand. It's going to be our collection point to shine a spotlight on how the features and functions in our portfolio can come together and be used in edge deployments. It'll be our space where we can showcase use cases where we're seeing success with customers, but really to pull it together, 'cause it is a portfolio story and it's an ecosystem story. How do we pull that together in one spot? And in order to support that, here at Summit we are announcing some really key additions to RHEL 8.4 that focus on the specific needs of what edge is driving. You'll see things like the ability in RHEL to create streamlined OS image generation, and we can simply manage that into container images. That container magic, right? To be able to repeatably deploy an image, repeatably deploy an application out to the edge, that has become a key need in these edge deployments. So we've simplified that so operations teams can really meet the scale of their fleets and deploy in a super consistent way. We've added capabilities. 
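The streamlined OS image generation described above is driven by Image Builder blueprints. A minimal, hypothetical blueprint for an edge image might look like this (the blueprint name, packages, and user shown here are illustrative, not taken from the interview):

```toml
# Hypothetical Image Builder blueprint for a small edge image.
name = "edge-minimal"
description = "Minimal RHEL image for an edge gateway"
version = "0.0.1"

# Packages baked into the image; the edge host only needs a
# container engine plus whatever the workload requires.
[[packages]]
name = "podman"
version = "*"

# An administrative user created at image-build time.
[[customizations.user]]
name = "edgeadmin"
groups = ["wheel"]
```

Pushing the blueprint and starting a compose, e.g. `composer-cli blueprints push edge-minimal.toml` followed by `composer-cli compose start edge-minimal rhel-edge-commit`, would produce an rpm-ostree commit that can be wrapped in a container image and deployed repeatably across a fleet; the exact commands and available image types vary by RHEL release.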
Image Builder we had brought out already, but we've added capabilities to create customized installation media, which simplifies bare-metal deployments. And as I mentioned, out at the edge it's really small bare-metal deployments where you can bring that container right onto the bare-metal. You can imagine a lot of situations where that brings a lot of value. We introduced podman as our container engine in RHEL 8.0, and we've added new automatic updates to it. So, again, getting back to security fixes: it's simple to ensure that you have the latest security fixes and application updates. And we're continuing to add changes and updates to Universal Base Image. Universal Base Image is a collection of user-space packages that are available to the community, fully redistributable. The goal of those user-space packages is to enable developers to create container images with those packages included, and then they can redistribute them. When they're run on OpenShift or they're run on RHEL, we can really match that user space to the host, and we can stand behind that matching and support it. But it allows for a lot of freedom and flexibility with Universal Base Image, to really expand where we can go and help folks create, deploy, and develop their applications. We're also moving into, I think, one of the things you see in edge, which is a real industry slant. We're starting to see edge deployments take on real industry flavors. And so we are engaging in some spots, things like automotive and industrial and operational technology. How do we engage in those industry verticals? How do we engage with the right partners? One of the things that's key that we're looking at, 'cause it is core to what we do, is things like functional safety. 
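The podman automatic-update capability mentioned above works by labeling a container so podman knows to check the registry for a newer image, and running it under systemd so the unit can be restarted when one appears. A hypothetical sketch, assuming podman 2.0 or later (the service name and image URL are illustrative):

```ini
# Hypothetical systemd unit for a podman-managed container with
# automatic updates enabled. The io.containers.autoupdate=registry
# label tells podman-auto-update to compare the running image with
# the registry copy and restart this unit when a newer image exists.
[Unit]
Description=Edge web service (illustrative)
Wants=network-online.target
After=network-online.target

[Service]
Restart=always
ExecStartPre=-/usr/bin/podman pull registry.example.com/edge/web:latest
ExecStart=/usr/bin/podman run --rm --name edge-web \
    --env PODMAN_SYSTEMD_UNIT=%n \
    --label io.containers.autoupdate=registry \
    registry.example.com/edge/web:latest
ExecStop=/usr/bin/podman stop -t 10 edge-web

[Install]
WantedBy=multi-user.target
```

Enabling the bundled `podman-auto-update.timer` then periodically pulls newer images and restarts the unit, which is how the latest security fixes land without manual intervention; in practice such units are usually generated with `podman generate systemd --new` rather than written by hand.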
And we're working with a company called Axeda, who's a leader in this space for functional safety: how do we bring that level of security and certification into the RHEL space when it's deployed out there at the edge? So, it's an exciting space, everything from the technology to the partnerships to how we engage in industry verticals. But this is a... I'm really excited to have the Red Hat... >> I can tell. Super excited. You know, a little industry trivia: theCUBE has been around for 11 years now. We've been to all of Red Hat's events and IBM's events for many, many years. But I actually interviewed Arvind, who is now the CEO of IBM, which now owns Red Hat, at Red Hat Summit in San Francisco, like three years ago. And he had a smile on his face, and he announced the acquisition shortly after, 'cause I was hitting him with some cloud native questions. A lot of this stuff is kind of hitting today, and you just laid it out. RHEL, if I get this right, and of course I'm connecting the dots here in real time, is an operating system that hits bare-metal, open hybrid cloud, edge, public cloud, and across the enterprise. It's an operating system. Okay. So, okay. We all know that. Okay, you apply that to a cloud operating model, you have some system software. So the question, by the way, is what's going to power the next-gen cloud, which I think is what Arvind wants and you guys hope. So the question for you, Stefanie, is: what applications do you hope to create on top of that, and what do you have today that RHEL is powering? Because if you have great systems software like RHEL, that's enabling applications. I'm assuming that's cloud services, that's new cloud native. Take us through that part of the stack. What's your vision? >> Yeah, absolutely. And I think one of the key things I would touch on is that it's part of the reason we build our portfolio the way we do, right? 
We have RHEL, of course, for the kind of Linux deployments you described, but RHEL CoreOS is part of OpenShift, and that consistency delivers into the platform, and then both of those can serve the applications that you need to deploy. And we are really excited to be able to do things like work with the transportation industry, folks like Alstom, who really bring edge capabilities all the way out into the rails of the train systems. From high-speed trains to metros to monorails, they have built their whole strategy on RHEL and Ansible Automation Platform. It's about the platform, just as you said: that operating system delivering the flexibility to pull the applications on top. And those applications could be anything from things that require functional safety, right, like in vehicles, as an example, to artificial intelligence, which goes out into manufacturing. But having that stable platform underneath, whether you're using RHEL or OpenShift, that consistency opens up the world to how applications can be deployed on it. And I am super excited about what AI and machine learning out at the edge can do, and what being able to bring really hardened security capabilities out to the edge opens up for new technologies and businesses. >> That's super exciting. And I think the edge is a great exclamation point around any debate anyone might've had around what the distributed architecture is going to look like. It's pretty clear now what the landscape is from an enterprise standpoint. And given that, what should people know about the edge? What's the update? What's the modern takeaway now that, I mean, obviously COVID has proven that there's a lot of edge applications that were kind of under-forecast or accelerated: working at home, dealing with network security, you name it. It's been kind of over-amplified, for sure. 
But now that COVID is kind of coming to an end, there's light at the end of the tunnel, and it's going to be still a hybrid world. I mean, hybrid everything, not just hybrid cloud, hybrid everything. So edge now cannot be ignored. What should people take away from Red Hat Summit this year? >> Absolutely. I think it's the possibilities that edge can bring. And there are different stages of maturity. Telco is a beautiful example of how to deploy edge, and telco as a market continues to pioneer what is done in edge. You see a lot of embedded edge, right? Things that you or your business may deploy that you purchase from a company, and it's more embedded at an appliance level. And then there's what the enterprise will do with edge specifically for their businesses. What I think you'll see is a catch-up across all of these spaces, and those three are complementary, right? You may consume some of your edge from a partner as a full solution. You may build some of your own edge as you expand your data center and distribute it. And of course you'll leverage what's being done by the telcos. So what I think you'll see is a balance of multiple types of edge being deployed and the different values they can deliver. >> Stefanie, final question for you. And thanks for taking the time. Great conversation and interview here for Red Hat Summit. As the General Manager, you're constantly talking to customers. I know that. Personally, you've told me that. Many stories off-camera. But you also have to look inside the organization, run the business, keep an eye on the product roadmap, and make sure everything's pumping on all cylinders. What is the customer telling you right now? What's the common pattern people are talking about, things that they're looking to do, projects they're funding? And what's the most important story that we should be covering? 
And what's the most important story people aren't talking about? >> So one of the things I'm really seeing, as you mentioned at the beginning, is that we've been talking about open hybrid cloud for a long time. There was a period of time where hybrid cloud was happening to folks, or it was a bit ad hoc: some developers were using it here and there. Now, hybrid cloud is intentional. Customers are very intentional, taking a strategic view of what they deploy where, how they deploy it, and taking advantage of the optionality that hybrid can provide. So that's one of the things I'm most excited about. I think the next step will be a balancing act: how do they expand that out, how do they balance a managed-services addition into their hybrid cloud, how do they manage that while also having VMs and a large VM deployment on prem? To me, the biggest thing being looked at now is how companies make these decisions in a strategic way that is holistic rather than making point decisions. And I am seeing that transition in the customers I talk to. It's not "how do I deal with hybrid cloud," it's "how do I make hybrid cloud work for me and really deliver value to me, and how do I make those decisions as a company." And honestly, that requires kind of what you talked about earlier. It requires those customers to have the structure, the organizational structure, the communication, the transparency, the openness that you've talked about. That takes a strategy like open hybrid cloud a long way. So it's both the people and the process and the technology coming together. >> You know, Stefanie, we do so many interviews in theCUBE, and you've been on so many times. You go back and look and say, "You know, in that year, 2010, we were talking about this." I was talking to a friend, and we were just talking about 2015. That was the big conversation then, moving to the cloud, you know. Startups were all there, born in the cloud. 
So, you know, the early generation was all about the startup cloud. They all got that. 2015 was like "move to the cloud." This year, the conversation isn't about moving to the cloud, it's about scale and all those enterprise requirements now that are coming from hybrid. Now that that's been decided, you're starting to see that operating model connect. So it's not so much moving to the cloud, it's "I've moved to the cloud, and now I've got to run at enterprise-grade scale operationally." What's your reaction to that? >> Absolutely. I mean, to me, I love the intentionality that I'm seeing now in customers, but when it comes down to it, it's about speed of deploying applications, it's about having the security and the stability to deploy with confidence so you can go out and scale it. So to me, it is speed, stability, and scale. Those three come together. And how do you pull that together with all of the choices we have today and the technologies today to deliver value and competitive differentiation? >> Open source is winning, and you guys are doing a great job. Stefanie, thank you for coming on and spending so much time chatting here in theCUBE for Red Hat Summit. Thanks for your time. >> Well, my pleasure, John. Good to see you. >> Okay. Great to see you. This is theCUBE's coverage of Red Hat Summit 21, virtual. I'm John Furrier with theCUBE. Thanks for watching. (ambient music)

Published Date : Apr 27 2021
