

Bob Thome, Tim Chien & Subban Raghunathan, Oracle


 

>>Earlier this week, Oracle announced the new X9M generation of Exadata platforms for its Cloud at Customer and on-premises deployments. The company also made some enhancements to its Zero Data Loss Recovery Appliance, ZDLRA, something we've covered quite often since its announcement. We had a video exclusive with Juan Loaiza, the executive vice president of mission-critical database technologies at Oracle, on the day of the announcement and got his take on it. And I asked Oracle, hey, can we get some subject matter experts, some technical gurus, to dig deeper and get more details on the architecture, because we want to better understand some of the performance claims that Oracle is making. With me today is Subban Raghunathan, the vice president of product management for Exadata Database Machine; Bob Thome, the vice president of product management for Exadata Cloud at Customer; and Tim Chien, the senior director of product management for ZDLRA. Folks, welcome to this power panel and welcome to theCUBE.

>>Thank you, Dave.

>>Subban, can we start with you? Juan and I talked about the X9M that Oracle just launched a couple of days ago. Maybe you could give us a recap: what do we need to know? I'm especially interested in the big numbers once more, so we can understand the claims you're making around this announcement, and then we can dig into that.

>>Absolutely, very excited to do that. In a nutshell, we have the world's fastest database machine for both OLTP and analytics, and we made that even faster. Not just simply faster: for OLTP we made it 70% faster and took OLTP read IOPS all the way up to 27.6 million, and mind you, this is being measured at the SQL layer. For analytics we did pretty much the same thing, an 87% increase, and we broke through the one-terabyte-per-second barrier. Absolutely phenomenal stuff.
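Some quick arithmetic puts those headline claims in context. The 8 KiB I/O size below is an assumption for illustration (typical of OLTP database block reads); the announcement itself states IOPS and scan rate, not block size:

```python
# Back-of-the-envelope conversion of the X9M headline numbers.
# The 8 KiB I/O size is an assumption, not from the announcement.
READ_IOPS = 27_600_000            # claimed SQL-layer read IOPS
IO_SIZE_BYTES = 8 * 1024          # assumed OLTP read size

bandwidth_gb_s = READ_IOPS * IO_SIZE_BYTES / 1e9
print(f"~{bandwidth_gb_s:.0f} GB/s of random-read traffic")   # ~226 GB/s

# The analytics claim is stated directly as a scan rate:
scan_tb_s = 1.0                   # "broke through the one-terabyte-per-second barrier"
print(f"~{scan_tb_s * 3600:.0f} TB scanned per hour at that rate")
```

At an assumed 8 KiB per read, 27.6 million IOPS works out to roughly 226 GB/s of sustained random-read bandwidth, which is why the SQL-layer qualifier matters: these are full database reads, not raw device operations.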
Now, while all those numbers by themselves are fascinating, here's something even more fascinating in my mind: 80% of the product development work for Exadata X9M was done during COVID, which means all of us were remote. What that meant was extreme levels of teamwork between the development teams, manufacturing teams, procurement teams, software teams, the works. Everybody came together as one to deliver this product, and kudos to everybody who touched it in one way or the other. I'm extremely proud of it.

>>Thank you for making that point. And I'm laughing because it's like you said the same thing about mission-critical OLTP performance: you had the world record, and now you're adding on top of that. Okay. But there are customers that still, you know, roll their own. They try to build their own Exadata by buying their own servers, storage, and networking components. When I talk to them, they'll say they want to maintain their independence, they don't want to get locked in to Oracle, or maybe they believe it's cheaper. Maybe they're focused on CapEx because the CFO has them in a headlock, or they want a platform that can support horizontal apps, maybe non-Oracle stuff, or maybe they're just trying to preserve their jobs, I don't know. But why shouldn't these customers roll their own, and why can't they get similar results just using standard off-the-shelf technologies?

>>Great question. It's going to require a somewhat involved answer, but let's just look at the statistics to begin with. Oracle's Exadata was first productized and delivered to the market in 2008, and at that point in time we already had industry leadership across a number of metrics.
Today, we are at the 11th generation of Exadata, and we are way far ahead of the competition, like 50X faster, a hundred X faster; I mean, we are talking orders of magnitude. How did we achieve this? The answer to your question lies in what we are doing at the engineering level to make these numbers come to the fore. First, it starts with the hardware. Oracle has its own hardware server design team, where we build in capabilities for performance, reliability, security, and scalability down at the hardware level, and the database, which is a user-level process, talks to the hardware directly.

>>The only reason we can do this is because we own the source code for pretty much everything in between, starting with the database, going into the operating system and the hypervisor, and, as I just mentioned, the hardware; we also work on the firmware elements of this entire stack. The key to making Exadata the best Oracle database machine lies in that engineering, where we take the operating system and make it fit like tongue and groove with the hardware, and then do the same with the database. And because we have deep insight into the workloads running at any given point in time on the compute side of Exadata, we can do micromanagement at the software layers of how traffic flows through the entire system, and do things like prioritize OLTP transactions on a very specific queue on the RDMA network.
They call them the longer I used formats of data, extend them into flash, just a whole bunch of things that we've been doing over the last 12 years, because we have this deep engineering, you can try to cobble a system together, which sort of looks like an extra data. It's got a network and it's got storage, tiering compute here, but you're not going to be able to achieve anything close to what we are doing. The biggest deal in my mind, apart from the performance and the high availability is the security, because we are testing the stack top to bottom. When you're trying to build your own best of breed kind of stuff. You're not going to be able to do that because it depended on the server that had to do something and HP to do something else or Dell to do something else and a Brocade switch to do something it's not possible. We can do this, we've done it. We've proven it. We've delivered it for over a decade. End of story. For as far as I'm concerned, >>I mean, you know, at this fine, remember when Oracle purchased Sohn and I know a big part of that purchase was to get Java, but I remember saying at the time it was a brilliant acquisition. I was looking at it from a financial standpoint. I think you paid seven and a half billion for it. And it automatically, when you're, when Safra was able to get back to sort of pre acquisition margins, you got the Oracle uplift in terms of revenue multiples. So then that standpoint, it was a no brainer, but the other thing is back in the Unix days, it was like HP. Oracle was the standard. And, and in terms of all the benchmarks and performance, but even then, I'm sure you work closely with HP, but it was like to get the stuff to work together, you know, make sure that it was going to be able to recover according to your standards, but you couldn't actually do that deep engineering that you just described now earlier, Subin you, you, you, you stated that the X sign now in M you get, oh, LTP IO, IOP reads at 27 million IOPS. 
You've also got 19-microsecond latency, so pretty impressive stuff, impressive numbers, and you kind of went there already. But how are you measuring these numbers versus other performance claims from your competitors? Are you stacking the deck? What can you share with us there?

>>Sure. As I said, we are measuring at the SQL layer. This is not some kind of IOmeter run or a micro-benchmark looking at just the flash subsystem or just the persistent memory subsystem. This is measured at the compute node, running an entire set of transactions: how many times can you finish that, right? Now, most people cannot measure it like that, because of the number of disparate vendors involved in their solutions. You've got servers from vendor A, storage from vendor B, the storage network from vendor C, the operating system from vendor D. How do you tune all of these things on your own? You cannot, right? There are only certain bells and whistles and knobs available for you to tune. So that's how we are measuring: the 19 microseconds is at the SQL layer.

>>What that means is that a real-world customer running a real-world workload is guaranteed to get that kind of latency. None of the other suppliers can make that claim; this is real-world capability. Now let's take a look at that 19 microseconds. We boast that we are an order of magnitude, two orders of magnitude, faster than everybody else when it comes to latency, and one might think this is all magic. While it is magical, the magic is really grounded in deep engineering and deep physics. The way we implement it is, first of all, we put the persistent memory tier in the storage, and that way it's shared across all of the database instances that are running on the compute tier.
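The distinction Subban draws, timing the whole transaction a client sees at the SQL layer rather than micro-benchmarking a single device, can be sketched like this. This is a minimal illustrative stand-in, not Oracle's actual harness, and the absolute timings are whatever the local machine produces:

```python
import statistics
import time

def run_transaction(store: dict, key: str) -> str:
    # Stand-in for a full SQL-layer round trip: locate the row and
    # do the post-processing the caller actually waits for.
    # A device micro-benchmark would time only the innermost access.
    value = store[key]
    return value.upper()

store = {f"k{i}": f"v{i}" for i in range(1000)}

latencies_us = []
for i in range(10_000):
    start = time.perf_counter_ns()
    run_transaction(store, f"k{i % 1000}")
    latencies_us.append((time.perf_counter_ns() - start) / 1000)

# Report what a real client would see end to end, not a component number.
print(f"median end-to-end latency: {statistics.median(latencies_us):.2f} us")
```

The point of the sketch is the measurement boundary: the timer wraps the complete operation the application depends on, so the reported number already includes every layer in between.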
Then we have this ultra-fast hundred-gigabit Ethernet fabric, RDMA over Converged Ethernet.

>>With this, what we have been able to do is, at the hardware level between two network interface cards resident on that fabric, create paths that enable high-priority, low-latency communication between any two endpoints. And given that we implemented persistent memory in the storage tier, sitting on the memory bus of the processor there, we can perform a remote direct memory access operation from the compute tier to memory address spaces in the persistent memory of the storage tier, without the involvement of the operating system on either end: no context switches, no interrupt processing latencies, none of that. It's hardware-to-hardware communication, with security built in, which is immutable; all of this is built into the hardware itself, with no software involved. You perform a read, the data comes back in 19 microseconds, boom. End of story.

>>Yeah. So that's key to my next topic, which is security, because you're not getting the OS involved, and very often, if I can get access to the OS as a hacker, I get privilege and can really take advantage of that. But before I go there: Oracle talks about how a huge percentage, 87% of the Fortune 100 companies, run their mission-critical workloads on Exadata. That's not only important to the companies; they're serving consumers like me, right? I'm going to my ATM or I'm swiping my credit card. Juan mentioned that you use a layered security model, and I sort of inferred that having this stuff in hardware, without access to the OS, actually contributes to better security. Can you describe this in a bit more detail?

>>So yes, what Juan was talking about was this layered security. Said differently:
It is defense in depth, and that's been our mantra and philosophy for several years now. So what does that entail? As I mentioned earlier, we design our own servers. We do this for performance, and we also do it for security. We've got a number of features built into the hardware that give us immutable areas of firmware. Let me give you an example: if you take an Oracle x86 server, just a standard x86 server, not even expressed in the form of an Exadata system, even with super-user privileges on top of the operating system you cannot modify the BIOS. That has to be done through the system management network. So we put gates, protection modes, et cetera, right into the hardware itself.

>>Now, of course, the security of that hardware goes all the way back to the fact that we own the design. We've got a global supply chain, but we make sure that supply chain is protected and monitored, and we also protect the last mile: we can detect whether any tampering of firmware occurred while the hardware was being shipped from our factory to the customer's dock. So we know, the moment it comes up at the customer, if something has been tampered with. That's the hardware. Now take the operating system. We own the entire source code of Oracle Linux, and what ships on Exadata is the Unbreakable Enterprise Kernel. The kernel and the operating system itself have been slimmed down by eliminating all unnecessary packages from the bundle we deliver on Exadata. Let's put some real numbers on that: a standard Linux distribution has about 5,000-plus packages.
These include things like print servers and web servers, a whole bunch of stuff you're absolutely never going to use on Exadata. Why ship those? The moment you ship more than you need, you increase the target surface that attackers can get to. So on Exadata there are only 701 packages: compare 5,413 packages on a standard Linux with 701 on Exadata. We've reduced the attack surface. Another aspect: we do our own STIG and SCAP benchmarking. If you take a standard Linux and run the SCAP benchmark, you'll get about a 30% pass score. On Exadata, it's 90-plus percent.

>>That means we are doing the heavy lifting of security-hardening the operating system before it even leaves the factory. And then you layer on the Oracle Database: transparent data encryption, data redaction, authentication on a user-ID basis, the ability to log and track access and determine who accessed the system and when. So it's basically defend at every single layer. And then of course there's the customer's responsibility. It doesn't stop with getting this highly secure environment; they have to do their own job of securing their network perimeter, controlling who has physical access to the system, and everything else. It's a shared responsibility. And as you mentioned, as a consumer going to an ATM and withdrawing money, if you withdraw 200 you don't want to see 5,000 deducted from your account. All of this is made possible with Exadata and the amount of security focus we have on the system.

>>And the bank doesn't want to see it go the other way, either. So, I'm geeking out here on theCUBE, but I've got one more question for you. Juan talked about X9M as the best system for database consolidation, built to handle OLTP, analytics, et cetera.
So I want to push you a little bit on this, because I can make an argument that this is kind of a Swiss Army knife versus the best screwdriver or the best knife. How do you respond to that concern? And how do you respond to the concern that you're putting too many eggs in one basket? You're consolidating workloads to save money, but aren't you also widening the blast radius? Isn't that a problem?

>>Very good question. So yes, this is an interesting problem, and it is a balancing act, as you correctly pointed out. You want the economies of scale you get when you consolidate more and more databases, but at the same time, when something happens, when hardware fails or there's an attack, you want to make sure you have business continuity. So on Exadata, first of all, as I mentioned, we design our own hardware and build reliability into the system at the hardware layer. That means redundancy: redundant fans, redundant power supplies; we even have the ability to isolate faulty cores on the processor. And there's a tremendous amount of sweeping going on by the system management stack, looking for problem areas and trying to contain them as much as possible within the hardware itself.

>>Then you take it up to the software layer, where we use that reliability to build high availability. What that implies, and it's fundamental to the Exadata architecture, is this entire scale-out model. Our base system cannot go smaller than two database nodes and three storage cells. Why is that? Because you want high availability for your database instances: if something happens to one server, hardware, software, whatever, you've got another server ready to take on that load, and with Real Application Clusters you can then switch over between the two. Why three storage cells?
We want to make sure you have duplicate copies of data: you at least want one additional copy in case something happens to the disk holding the only copy, right? The reason we have three cells is so you can stripe data across three different servers and deliver high availability.

>>Now take that up to the rack level. When you're really talking about blast radius, you want to make sure that if something physically happens to a data center, you have infrastructure available elsewhere for business continuity, which is why we have the Maximum Availability Architecture. With components like GoldenGate and Active Data Guard, we can keep two distant systems in sync, which is extremely critical for delivering these high-availability paths. That makes the whole equation, how many eggs in one basket versus containing the blast radius, a lot easier to grapple with, because business continuity is paramount to us. I mean, Oracle the enterprise runs on Exadata; our high-value cloud customers run on Exadata. And I'm sure Bob's going to talk a lot more about the cloud piece of it. So I think we have all the tools in place to go after that optimization of eggs in one basket versus blast radius; it's a question of working through the solution and the criticalities of that particular instance.

>>Okay, great. Thank you for that detail, Subban. We're going to give you a break; go take a breath, get a drink of water, and maybe we'll come back to you if we have time. Let's go to Bob. Bob Thome: Exadata Cloud at Customer X9M. Earlier this week, Juan said, kind of cocky, why are we even bothering comparing Exadata against Outposts or Azure Stack? Can you elaborate on why that is?

>>Sure.
You know, first of all, I want to say I love AWS Outposts, and you know why? It affirms everything we've been doing for the past four and a half years with Cloud at Customer. It affirms that running cloud services in customers' data centers is a large and important market, large and important enough that AWS felt the need to provide these customers with an AWS option, even if it only supports a sliver of the functionality they provide in the public cloud. And that's what they're doing: they're giving it a sliver, and they're not exactly leading with the best they could offer. So for that reason alone there's really nothing to compare, and we give them the benefit of the doubt and actually compare against their public cloud solutions.

>>Another point: most customers looking to deploy Oracle Cloud at Customer are looking for a performant, scalable, secure, and highly available platform for their most critical databases, and most often those are Oracle databases. Is Outposts tuned for an Oracle database? No. Does Outposts run a comparable database? Not really. Does Outposts run Amazon's top OLTP and analytics database services, the ones that are top in their public cloud? No. We couldn't find anything that runs on Outposts worth comparing against Exadata Cloud at Customer, which is why the comparisons are against their public cloud products. And even with that, we're looking at numbers like 50 times, a hundred times slower. So then there's Azure Stack. One of the key benefits customers love about the cloud, and I think it's really underappreciated, is that it's a single-vendor solution. You have a problem with a cloud service, whether IaaS, PaaS, or SaaS, it doesn't matter.
And there's a single vendor responsible for fixing your issue. Azure Stack misses big here, because it's a multi-vendor cloud solution, like AWS Outposts. Also, they don't exactly offer the same services on-prem that they offer in the cloud, and from what I hear it can be a management nightmare, requiring specialized administrators to keep that beast running.

>>Okay, well, thanks for that. I'll grant you that Oracle was first with that same-same vision; I always tell people, when they say they were first, well, actually, no, Oracle was first. Having said that, Bob, I hear you that right now Outposts is a 1.0 version. It doesn't have all the bells and whistles, but neither did your cloud when you first launched it. So let's let it bake for a while, and we'll come back in a couple of years and see how things compare, if you're up for it.

>>Just remember that we're still in the oven too, right?

>>Okay, good, I love the chutzpah. Juan also talked about Deutsche Bank. I saw that announcement: they're working with Oracle, modernizing their infrastructure around the database, building other services around that, and kind of building their own version of a cloud for their customers. How does Exadata Cloud at Customer fit into that whole Deutsche Bank deal? Is this solution unique to Deutsche Bank, or do you see other organizations adopting Cloud at Customer for similar reasons and use cases?

>>Yeah, I'll start with that. First, I want to say that I don't think Deutsche Bank is unique. They want what all customers want: to be able to run their most important workloads.
The ones running today in their data centers on Exadata and other high-end systems, they want those in a cloud environment where they can benefit from cloud economics, cloud operations, and cloud automation, but they can't move to public cloud. They need to maintain the service levels, the performance, the scalability, the security, and the availability their business has come to depend on. Most clouds can't provide that. Actually, Oracle's public cloud can, because our public cloud runs Exadata, but even so they can't do it, because as a bank they're subject to lots of rules and regulations: they cannot move their 40 petabytes of data to a point outside the control of their data center.

>>They also have thousands of interconnected databases and applications; it's like a rat's nest, right? Many large customers have this problem: how do you move that to the cloud? You can move it piecemeal, these apps but not those apps, but you end up with some pieces up there and some pieces down here, and the thing just dies because of the long latency over a WAN connection. It just doesn't work. Or you can shut it all down, say on a Friday, and move everything at once; but with the estate sizes most of these customers have, you're going to be down for a month, and who can tolerate that? So it's a big challenge, and Exadata Cloud at Customer lets them move to the cloud without losing control of their data, and without having to untangle those thousands of interconnected databases. That's why these customers are choosing Exadata Cloud at Customer.
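The data-gravity problem Bob describes is easy to quantify. A rough calculation, where the link speed and sustained utilization are my assumptions, not figures from the interview:

```python
# How long would pushing ~40 PB over a WAN take?
# Link speed and utilization are illustrative assumptions.
DATA_PB = 40
LINK_GBPS = 10          # assumed dedicated 10 Gb/s circuit
UTILIZATION = 0.8       # assumed sustained efficiency

data_bits = DATA_PB * 1e15 * 8
seconds = data_bits / (LINK_GBPS * 1e9 * UTILIZATION)
days = seconds / 86_400
print(f"~{days:.0f} days of continuous transfer")   # roughly 463 days
```

Over a year of continuous, perfectly reliable transfer under these assumptions, before even considering the interconnected applications, which is why "just move it to the public cloud" is not an option for an estate like this.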
More importantly, it sets them up for the future. With Exadata Cloud at Customer they can run not just in their data center but also in public-cloud-adjacent sites, giving them a path to moving some work out of the data center and ultimately into the public cloud. As I said, they're not unique: other banks are watching, and some are acting, and it's not just banks. Just last week Telefónica, the telco in Spain, announced its intent to migrate the bulk of its Oracle databases to Exadata Cloud at Customer. This will be the key cloud platform running in their data centers to support both new services and mission-critical operational systems. And one last important point: Exadata Cloud at Customer can also run Autonomous Database. Even if customers aren't ready to adopt it today, a lot of them are interested in it; they see it as a key piece of the puzzle going forward, and they know they can easily start to migrate to Autonomous in the future as they're ready. That, of course, will drive additional efficiencies and additional cost savings.

>>So, Bob, I've got a question for you, because Oracle's playing both sides, right? You've got a true public cloud now, and obviously a huge on-premises estate. When I talk to companies that don't own a cloud, whether it's Dell or HPE or Cisco, they make the point, and I agree with them by the way, that the world is hybrid; not everything's going into the cloud. However, I have a lot of respect for the folks at Amazon as well, and they're on record saying they believe that long-term, ultimately, all workloads are going to be running in the cloud. Now, I guess it depends on how you define the cloud; the cloud is expanding and all that.
But my question to you, because again you're on both sides here: are hybrid solutions like Cloud at Customer a stepping stone to the cloud, or is cloud in your data center a continuous, sort of permanent, essential play?

>>That's a great question. As I recall, people debated this a few years back when we first introduced Cloud at Customer, and at that point some people, even internal to Oracle, saw it as a stopgap measure to let people leverage cloud benefits until they were really ready for the public cloud. But over the past four and a half years the thinking has changed a bit, and everyone kind of agrees that Cloud at Customer may be a stepping stone for some customers, but others see it as the end game. Not every workload can run in the public cloud, at least not given today's regulations and the issues faced by many regulated industries. These industries move very, very slowly, and customers are content to, and in many cases required to, retain complete control of their data, and they will be running with that data under their control, in their data centers, for the foreseeable future.

>>I've got another question, kind of a tangent, because the other thing I hear from the on-prem, don't-own-a-cloud folks is that it's actually cheaper to run on-prem, because they're getting better at automation, et cetera. And you get the exact opposite from the cloud guys; they roll their eyes: are you kidding me? It's way cheaper to run it in the cloud. So which is more cost-effective? Is it one of those "it depends," Bob?

>>You know, the great thing about numbers is that you can twist them to show anything you want, right? Give me a spreadsheet and I can sell you on anything.
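Bob's spreadsheet quip can be made concrete with a toy model of the trade-off Dave is asking about. Every price and load shape below is invented for illustration; the point is only that the answer flips with the workload's shape:

```python
# Toy model: fixed on-prem sizing versus cloud pay-for-use.
# All numbers are invented for illustration.
hourly_load = [20] * 18 + [100] * 6        # cores needed: quiet day, 6-hour peak

ONPREM_COST_PER_CORE_HOUR = 0.8            # assumed amortized hardware + ops
CLOUD_COST_PER_CORE_HOUR = 1.0             # assumed higher unit price

peak = max(hourly_load)
# On-prem capacity is sized for the peak and paid for around the clock.
onprem_cost = peak * ONPREM_COST_PER_CORE_HOUR * len(hourly_load)
# Cloud capacity scales with the load, so you pay only for cores used.
cloud_cost = sum(h * CLOUD_COST_PER_CORE_HOUR for h in hourly_load)

print(f"on-prem (peak-sized): {onprem_cost:.0f}")   # 1920
print(f"cloud (pay-per-use):  {cloud_cost:.0f}")    # 960
```

With a spiky load, the cloud wins even at a higher unit price; make `hourly_load` flat and the spreadsheet flips the other way, which is exactly the stable-workload caveat Bob raises next.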
I think there are customers who look at it and say on-prem is cheaper, and there are customers who look at it and say the cloud is cheaper. There are a lot of ways you may incur savings in the cloud, and a lot of it has to do with cloud economics: the ability to pay for what you're using, and only what you're using. On-prem, if you size something for your peak workload, you probably put a little bit of a buffer on top of that, right?

>>If you size everything for that, you're going to find that you're paying for peak workload all the time. With the cloud, of course, we support scaling up and scaling down, and you pay for what you use; that's where the big savings is. There are also additional savings that come with the cloud vendor managing the infrastructure for you: you no longer have to worry about it. We have a lot of automation for things you used to spend hours, or years, scripting yourselves, and UIs that make ad hoc tasks as simple as point and click, which eliminates errors. It's often difficult to put a cost on those things, but the more enlightened customers can put a cost on all of them. So the people saying it's cheaper to run on-prem either have a very stable workload that never changes in an environment that never changes, or, more likely, they just haven't thought through all the hidden costs.

>>All right, thank you for that. By the way, you've got some new features in Cloud at Customer. What are those?
Do I have to upgrade to X9M to get them?

>>So, you know, we're always introducing new features for Cloud at Customer, but two significant things we've rolled out recently are operator access control and elastic storage expansion. As we discussed, many organizations using Exadata Cloud at Customer are attracted by the cloud economics and operational benefits, but are required by regulations to retain control and visibility of their data, as well as of any infrastructure that sits inside their data center. With operator access control enabled, cloud operations staff members must request access to a customer system. The customer's IT team grants a designated person specific access to a specific component, for a specific period of time, with specific privileges, and can then view and audit the operator's actions in real time. If they see something they don't like, hey, what's this guy doing? It looks like he's stealing my data, boom: they can kill that operator's access, the session, the connections, everything, right away. This gives everyone, especially customers that need to regulate remote access to their infrastructure, the confidence they need to use the Exadata Cloud at Customer service. And the other thing that's new is elastic storage expansion. Customers can add additional storage servers to their system, either at initial deployment or after the fact, and this provides two important benefits. The first is that they can right-size their configuration: if they need only the minimum compute capacity, they don't need to subscribe to a fixed shape, as we used to have, with hundreds of unnecessary database cores just to get the storage capacity. They can select a smaller system.
and then incrementally add on that storage. The second benefit is key for many customers: if you run out of storage, guess what, you can add more. That's really important. Now, to the last part of your question: do you need a new Exadata Cloud@Customer X9M system to get these features? No, they're available on all Gen 2 Exadata Cloud@Customer systems. That's really one of the best things about cloud: the service you subscribe to today just keeps getting better. Unless there's some technical limitation, which is rare, most new features are available even on the oldest Cloud@Customer systems. >>Cool. My last question for you, Bob, is another one on security. Obviously, we talked to Subban about this; it's a big deal. How can customer data be secure if it's in the cloud, when somebody other than their own vetted employees is managing the underlying infrastructure? Is that a concern you hear a lot, and how do you handle that? >>It comes up, because a lot of these customers have big security teams, and it's their job to be concerned about that kind of thing. Security, however, is one of the biggest but least appreciated benefits of cloud. Cloud vendors such as Oracle hire the best and brightest security experts to ensure that their clouds are secure, something only the largest customers can afford to do. If you're a small shop, you're not going to be able to hire that kind of expertise, so you're better off being in the cloud. Customers running in the Oracle cloud can also use Oracle's Data Safe tool, which we provide, and which lets you inspect your databases and ensure that everything is locked down and your data is secure. But your question is actually a little bit different.
>>It was about potential internal threats to a company's data, given that the cloud vendor's employees, not the customer's, have access to the infrastructure that sits beneath the databases. The first and most important thing we do to protect customers' data is encrypt the database by default. Subban listed a whole laundry list of things, but that's the one I want to point out: we encrypt your database. Yes, it sits on our infrastructure. Yes, our operations staff can see the data files sitting on that infrastructure. But guess what: they can't see the data. The data is encrypted; all they see is a big encrypted blob, so they can't access the data themselves. And, as you'd expect, we have very tight controls over operations access to the infrastructure. Operators must securely log in using mechanisms designed to prevent unauthorized access, all access is logged, and suspicious activities are investigated. That still may not be enough for some customers, especially the ones I mentioned earlier in regulated industries, and that's why we offer operator access control. As I mentioned, that gives customers complete control over access to the infrastructure: the when, the what ops can do, and for how long. Customers can monitor in real time, and if they see something they don't like, they stop it immediately. Lastly, I just want to mention Oracle's Database Vault feature. It prevents administrators from accessing data, protecting it from rogue operators, whether they be from Oracle or from the customer's own IT staff. Database Vault is included when running a License Included service on Exadata Cloud@Customer, so you basically get it with the service. >>Got it. Thank you, Bob.
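Bob's point, that operations staff can see the stored data files but only as an encrypted blob, can be illustrated with a short sketch. To be clear, this is not Oracle's Transparent Data Encryption; it is a toy, stdlib-only stream cipher (a SHA-256 counter keystream) meant only to show why holding the bytes on disk is useless without the key. The function and variable names are invented for the example.

```python
import hashlib
import os

def _keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || counter blocks.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream (toy stream cipher, not TDE).
    ks = _keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt = encrypt  # XOR stream ciphers are their own inverse

key = os.urandom(32)                # held by the customer, not the operator
row = b"ACCT=1234, BALANCE=5000"
stored_blob = encrypt(key, row)     # all the infrastructure operator can see
assert stored_blob != row           # the blob is not the readable data
assert decrypt(key, stored_blob) == row  # only the key holder recovers it
```

The operational point is the same one Bob makes: whoever manages the physical storage sees `stored_blob`, while the key, and therefore the data, stays with the customer.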
I mean, we've got a lot to unpack there, but we're going to give you a break now and go to Tim, Tim Chien. Zero Data Loss Recovery Appliance: we always love that name. We think the big guy named it, but nobody will tell us. We've been talking about security, and there's been a lot of news around ransomware attacks in every industry around the globe. Any knucklehead with a high school diploma can become a ransomware attacker: go on the dark web, get ransomware as a service, stick in a USB stick, take a piece of the vig, and hopefully get arrested. When you think about databases, how do you deal with the ransomware challenge? >>Yeah, Dave, that's an extremely important and timely question. We are hearing this from our customers. We talk about HA and backup strategies, and ransomware has been coming up more and more. And the unfortunate thing is that these ransoms are actually paid, in the hope of regaining access to the data. What that tells me is that today's recovery solutions and processes are not sufficient to get these systems back in a reliable and timely manner, so you pay the ransom to get even a hope of getting the data back. Now, for databases this can have a huge impact, because we're talking about transactional workloads. Even a compromise of just a few minutes, a blip, can affect hundreds or even thousands of transactions. That can literally represent hundreds of lost orders if you're a big manufacturing company, or millions of dollars' worth of financial transactions in a bank. And that's why protecting databases at a transaction level is especially critical for ransomware, and a huge contrast to traditional backup approaches. >>So how do you approach that? What do you do specifically for ransomware protection for the database?
>>Yeah, so we have the Zero Data Loss Recovery Appliance, for which we announced the X9M generation. It is really the only solution in the market that offers that transaction level of protection, allowing all transactions to be recovered with zero RPO. This is only possible because Oracle has a very innovative and unique technology called real-time redo, which captures all the transactional changes from the databases and stores them on the appliance. Moreover, the appliance validates all of these backups and redo. You want to make sure you can recover them after you've sent them, right? So it's not just a file-level integrity check on a file system; it's actual database-level validation that the Oracle blocks and the redo I mentioned can be restored and recovered as a usable database. Any kind of malicious attack or modification of that backup data, in transit or even once it's stored on the appliance, would be immediately detected and reported by that validation. This allows administrators to take action, such as removing that system from the network, and it's a huge leap in terms of what customers can get today. The last thing I want to point out is what we call a cyber vault deployment. A lot of customers in the industry are creating air-gapped environments: a separate location where their backup copies are stored, physically network-separated from the production systems, which prevents ransomware from infiltrating that last good copy of backups. You can deploy the Recovery Appliance in a cyber vault and have it synchronize at random times, when the network is available, to keep it in sync. That, combined with our transaction-level, zero-data-loss validation, is a nice package, and really a game changer in protecting and recovering your databases from modern-day cyber threats.
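The validation Tim describes, checking that stored backup pieces are still recoverable content rather than merely checking that the files exist, can be sketched in miniature. This is only an illustration of content-hash validation under invented names; the appliance's real checks work at the Oracle block and redo level, which this toy does not reproduce.

```python
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def ingest_backup(blocks):
    # On ingest, record a content hash for each block alongside the data.
    return [{"data": b, "sha256": checksum(b)} for b in blocks]

def validate_backup(stored):
    # Re-hash every stored block; any mismatch means the copy was modified
    # after it was written (e.g. encrypted by ransomware) and gets reported.
    return [i for i, rec in enumerate(stored)
            if checksum(rec["data"]) != rec["sha256"]]

backup = ingest_backup([b"redo-001", b"redo-002", b"redo-003"])
assert validate_backup(backup) == []           # a clean copy passes

backup[1]["data"] = b"encrypted-by-attacker"   # simulate tampering
assert validate_backup(backup) == [1]          # tampering is detected
```

A plain file-system integrity check would only confirm the files are present; continuous re-validation of content, as sketched here, is what turns silent corruption into an alert an administrator can act on.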
>>Okay, great. Thank you for clarifying that air gap piece, because there was some confusion about that. Every data protection and backup company I know has a ransomware solution; it's about the hottest topic going. You've got newer players in recovery and backup like Rubrik and Cohesity, who have raised a ton of dough. Dell has solutions, HPE just acquired Zerto to deal with this problem, IBM has offerings, Veeam seems to be doing pretty well, and Veritas has a range of recovery solutions. They're all out there. What's your take on them and their strategies, and how do you differentiate? >>Yeah, it's a pretty crowded market, like you said. I think the first thing you have to understand is that these new, up-and-coming vendors started in what we call the copy data management, or CDM, space; they are not traditional backup and recovery products. The purpose of CDM products is to provide fast point-in-time copies for test/dev, non-production use, and that's a valid problem that needs a solution. So you create a one-time copy, then you create snapshots after applying incremental changes to that copy, and a snapshot can be quickly restored and presented as if it were a fully populated copy. This is all done through block pointers in the underlying storage. All of this sounds really cool and modern, right? New and up-and-coming, with lots of people in the market doing it. Well, it's really not that modern, because storage snapshot technology has been around for years. What these new vendors have been doing is essentially repackaging old technology for backup and recovery use cases, with an easier-to-use automation interface wrapped around it. >>Yeah. So on copy data management: last year, Actifio.
They started that whole space, as I recall. At one point they were valued at more than a billion dollars, and they were acquired by Google. So fast-forward a bit, nine months, a year, whatever it's been: do you see the Google Actifio offering in customer engagements? Is that something you run into? >>We really don't. It was popular and well known some years ago, but we really don't hear about it anymore. After the acquisition, if you look at all the collateral and the marketing, they are really a CDM and backup solution exclusively for Google Cloud use cases, and they're not being positioned for on-premises or any use cases outside of Google Cloud. That's 90-plus percent of the market that isn't addressable now by Actifio, so we really don't see them in any of our engagements at this time. >>I want to come back and push a little bit on the tech that you said is really not that modern. They certainly position it as modern, and a lot of the engineers building these new backup and recovery capabilities came from the hyperscalers. Whether it's copy data management or the quote-unquote modern backup and recovery, this kind of all-in-one data management solution seems pretty compelling. How does Recovery Appliance specifically stack up? A lot of people think it's a niche product for really high-end use cases. Is that fair? How do you see it, Tim? >>Yeah. I think it's so important to understand, again, that the fundamental use of this technology is to create data copies for test/dev, and that's really different from operational backup and recovery, in which you must have the ability to do full and point-in-time recovery in any production outage or DR situation.
And then, more importantly, after you recover and your applications are back in business, performance must continue to meet service levels as before. When you look at a CDM product, you restore a snapshot and the application is brought up on that restored snapshot. What happens? Your production application is now running on read-writable snapshots on backup storage. Remember, they don't restore all the data back to production-level storage; they restore it as a snapshot onto their own storage, so you have a huge difference in performance when running applications on that instantly recovered, if you will, database. To meet true operational requirements, you have to fully restore the files to production storage, period. The Recovery Appliance was first and foremost designed to accomplish this; it's an operational recovery solution. We accomplish that, as I mentioned, with real-time transaction protection, and we have incremental-forever backup strategies, so you're taking just the changes every day, and you can create virtual full backups that are quickly restored, fully restored if you will, at 24 terabytes an hour. We validate and document that performance very clearly on our website, and of course we provide continuous recovery validation for all the backups stored on the system. So it's a very nice, complete solution, and it scales to meet your demands, to hundreds of thousands of databases. These CDM products might seem great, and they work well for a few databases, but then you put a real enterprise load of hundreds of databases on them, and we've seen a lot of times where it just buckles; it can't handle that kind of load at that scale. And this is important, because customers read the marketing and the collateral: hey, instant recovery.
Why wouldn't I want that? Well, it always sounds better than it is. So we have to educate customers about exactly what that means for database backup and recovery use cases, which are not handled well by those products. >>I know I'm way over; I had a lot of questions on this announcement, and I was going to let you go, Tim, but you just mentioned something that gave me one more question, if I may. You talked about supporting hundreds of thousands of databases, petabytes. Do you have real-world use cases that actually leverage the appliance in these types of environments? Where does it really shine? >>Yeah, let me give you two quick ones. We have a customer, Energy Transfer, the major natural gas and pipeline operator in the U.S., so they are a big part of our country's critical infrastructure services. We know ransomware and these kinds of threats are very much viable; we saw the Colonial Pipeline incident, an attack on critical services. Energy Transfer was running lots of databases, and their legacy backup environment just couldn't keep up with their enterprise needs. They had backups taking well over a day and restores taking several hours, so they had problems and couldn't meet their SLAs. They moved to the Recovery Appliance, and now they're seeing backups complete, with that incremental forever, in just 15 minutes. That's a 48-times improvement in backup time. They're also seeing restores complete in about 30 minutes, versus several hours, so it's a huge difference for them. And they get that nice recovery validation and monitoring by the system; they know the health of their enterprise at their fingertips. The second quick one is a global financial services customer.
They have over 10,000 databases globally, and they really couldn't find a solution other than a throw-more-hardware kind of approach to fix their backups, which didn't fix the failures and the issues. So they moved to the Recovery Appliance, and they saw their failed backup rates go down dramatically. They saw four times better backup and restore performance, and they also have a very nice centralized way to monitor and manage the system: a real-time view, if you will, of data protection health for their entire environment. They can show this to executive management and auditing teams, which is great for compliance reporting. And so they now have north of 50 Recovery Appliances deployed across their global enterprise. >>Love it. Thank you for that. Guys, great power panel. We have a lot of Oracle customers in our community, and the best way to help them is for me to ask you a bunch of questions and get the experts to answer. So I wonder if you could bring us home. Maybe you could give us the top takeaways that you want our customers and our audience to remember from this announcement. >>Sure. I want to pick up from where Tim left off and talk about a real customer use case, hot off the press. One of the largest banks in the United States decided they needed to run a performance software update on 3,000 of their database instances, spanning 68 Exadata clusters, a massive undertaking. They finished the entire task in three hours. Three hours to update 3,000 databases across 68 Exadata clusters. Talk about availability: try doing this on any other infrastructure; no one is going to be able to achieve it. So that's on the availability front.
We are engineering all aspects of database management. Performance, security, availability, the ability to provide redundancy at every single level: it's all part of the design philosophy and how we're engineering this product, and as far as we're concerned, that journey never ends. We're going to continue down this path of increasing performance and strengthening the security of the infrastructure as well as of the Oracle database, and keep going. While these have been great results that we've delivered with Exadata X9M, the journey is on. And to our customers: the biggest advantage you're going to get from the kind of performance metrics we're driving with Exadata is consolidation. Consolidate more; move more database instances onto the Exadata platform; gain the benefits from that consolidation; reduce your operational expenses, your capital expenses, your management expenses. Bring all of those down, and your total cost of ownership is guaranteed to go down. Those are my key takeaways, Dave. >>Guys, you've been really generous with your time. Subban, Bob, Tim, I appreciate you taking my questions and your willingness to go toe to toe. Really, thanks for your time. >>You're welcome, David. Thank you. >>And thank you for watching this video exclusive from theCUBE. This is Dave Vellante, and we'll see you next time. Be well.

Published Date : Oct 4 2021



Adrian and Adam Keynote


 

>>Welcome, everyone. Good morning and good evening to all of you around the world. I am so excited to welcome you to Launchpad, our annual conference for customers, for partners, and for our own colleagues here at Mirantis. This is meant to be a forum for learning, for sharing, for discovery, one of openness. We're incredibly excited to have you here with us. I want to take a few minutes this morning to open the conference and share with you, first and foremost, where we're going as a company. What is our vision? Then I also want to give you an update on what we have been up to for the past year, especially with two important acquisitions, Docker Enterprise and then Kontena and Lens, and on some of the latest developments at Mirantis. And then I'll close with an exciting announcement we have today, which we hope is going to be interesting and valuable for all of you. But let me start with our mission. What are we here to do? It's very simple: we want to help you ship code faster. This is something we're very excited about, something we have achieved for many of you around the world, and we want to double down on it. We feel this is a mission that's very much worthwhile, relevant, and important to you. Now, how do we do that? How do we help you ship code faster? There are three things we believe in. In this world of cloud, choice is incredibly important. We all know that developers want to use the latest tools, that cloud technology is evolving very quickly, and that new innovations appear very, very quickly, and we want to make them available to you. So choice is very important. At the same time, consuming choice can be difficult, so our mission is to make choice simple for you, to give developers and operators simplicity. And then finally, underpinning everything we do is security.
These are the three big things we invest in and believe in: choice, simplicity, and security. And the foundation technology we're betting on to make that happen for you is Kubernetes. Many of you, many of our customers, use Kubernetes today, and they use it at scale, and this is something we want to double down on. The fundamental benefit, the key promise we want to deliver for you, is speed, and we feel this is very relevant, important, and valuable in the world we're in today. So you might also be interested in what our priorities have been since we acquired Docker Enterprise. What has happened over the past year at Mirantis? There are three very important things we focused on as a company. The first one is customer success. When we acquired Docker Enterprise, the first thing we did was listen to you, connect with our most important customers, and find out your sentiment. What did you like? What were you concerned about? What needed to improve? How could we create more value and a better experience for you? Customer success has been at the top of our list of priorities ever since. And here is what we've heard, what you've told us. You told us that you very much appreciated the technology and got a lot of value out of it, but that at the same time there are some things we can do better. Specifically, you wanted better SLAs and a better support experience. You also wanted more clarity on the roadmap, and you wanted a deeper alignment and a deeper relationship between your needs and requirements and our technical development, the key people in our development organization, our most important engineers. Those three things were very, very important to you, and they became very important to us. We've taken that to heart, and over the past 12 months we believe that, as a team, we have dramatically improved the customer support experience.
We introduced new SLAs with ProdCare. We've rolled out a roadmap to many, many of our customers. We've taken your requirements into consideration, and we've built better and deeper relationships with so many of you. The evidence that we've actually made progress is a significant increase in the workloads on and usage of our platforms. We were fortunate to be able to build better, stronger relationships and take companies like Visa, Société Générale, Nationwide, Bosch, AXA XL, GlaxoSmithKline, Standard & Poor's, Apple, and AT&T to the next level of growth. So many of our customers around the world have, over the past 12 months, experienced better support, strong SLAs, a deeper relationship, and a lot more clarity on our roadmap and our vision forward. The second very big priority for us over the last year has been product innovation. This is something we're very excited about, that we've invested most of our resources in, and we've delivered some strong proof points. Docker Enterprise 3.1 was the first release we shipped as Mirantis, as the unified company. It had some big innovative features: Windows support, AI and machine learning use cases, and a significant number of improvements in stability and scalability. Earlier this year, we were very excited to acquire Lens and the Kontena team. Lens is by far the most popular Kubernetes IDE in the world today, and every day 600 new users start using Lens to manage their Kubernetes clusters, to deploy applications on top of Kubernetes, and to dramatically simplify the Kubernetes experience for operators and developers alike. That is a very big step forward for us as a company.
And then finally, this week at this conference, we are announcing our latest product, which we believe is a huge step forward for Docker Enterprise, and which we call Docker Enterprise Container Cloud. You will hear a lot more about it during this conference. The third vector of development, the third priority for us as a company over the past year, was to become more and more developer-centric. As we've seen over the past 10 years, developers really move the world forward. They create innovation; they create new software. And while our platform is often managed, run, and maybe even purchased by IT architects, operators, and IT departments, the actual end users are developers. We made it our mission as a company to become closer and closer to developers, to better understand their needs, and to make our technology as easy and fast to consume as possible for them. So as a company we're becoming more and more developer-centric, and the two core products that fit together extremely well to make that happen are Lens, targeted squarely at a new breed of Kubernetes developers sitting on the desktop and managing Kubernetes environments and the applications on top, on any cloud platform, anywhere, and Docker Enterprise Container Cloud, a new and radically innovative container platform we're bringing to market this week. So with that as background, what is the fundamental problem we solve for you, our customers? What are the pain points we can help you resolve? We see two very, very big trends in the world today that you are experiencing. On one side, we see the power of cloud emerging, with more features, more innovation, more capabilities coming to market every day. But with those new features and innovations there is also exponential growth in cloud complexity, and that complexity is becoming increasingly difficult to navigate for developers and operators alike.
And at the same time, we see the pace of change continuing to accelerate, both in the economy and in the technology as well. So when you put these two things together: on one hand, you have more and more complexity; on the other hand, you have faster and faster change. This makes for a very, very daunting task for enterprises, developers and operators to actually keep up and move with speed. And this is exactly the central problem that we want to solve for you. We want to empower you to move with speed in the middle of rising complexity and change, and do it successfully and with confidence. So with that in mind, we are announcing this week at Launchpad a big new concept to take the company forward and take you with us to create value for you. And we call this Your Cloud Everywhere, which empowers you to ship code faster. Docker Enterprise Container Cloud is a linchpin of Your Cloud Everywhere. It's a radical new container platform which gives you, our customers, a consistent experience on public clouds and private clouds alike, which enables you to ship code faster on any infrastructure, anywhere, with a cohesive cloud fabric that meets your security standards, that offers a choice of private and public clouds, and that offers you a simple, extremely easy and powerful experience for developers. All of this is underpinned by Kubernetes as the foundational technology we're betting on going forward to help you achieve your goals. At the same time, Lens, the Kubernetes IDE, also fits very well into the Your Cloud Everywhere concept, and it's a second very strong linchpin to take us forward, because it creates the developer experience. It supports developers directly on their desktop, enabling them to manage Kubernetes workloads and to test, develop and run Kubernetes applications on any infrastructure, anywhere. So Docker Enterprise Container Cloud and Lens complement each other perfectly. 
So I'm very, very excited to share this with you today and to open the conference for you. And with this, I want to turn it over to my colleague Adam Parco, who runs product development at Mirantis, to share a lot more detail about Docker Enterprise Container Cloud: why we're excited about it, why we feel it is a radical step forward for you, and why we feel it can add so much value to your developers and operators who want to embrace the latest Kubernetes technology and the latest container technology on any platform, anywhere. I look forward to connecting with you during the conference, and we wish you all the best. Bye bye. >>Thanks, Adrian. My name is Adam Parco, and I am vice president of engineering and product development at Mirantis. I'm extremely excited to be here today and to present to you Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is a major leap forward. It turbocharges our platform. It is your cloud everywhere. It has been completely designed and built around helping you to ship code faster. The world is moving incredibly quickly. We have seen unpredictable and rapid changes. It is the goal of Docker Enterprise Container Cloud to help navigate this insanity by focusing on speed and efficiency. To do this requires three major pillars: choice, simplicity and security. The less time between a line of code being written and that line of code running in production, the better. When you decrease that cycle time, developers are more productive, efficient and happy. The code is higher quality, contains fewer defects, and when bugs are found, they are fixed quicker and more easily. And in turn, your customers get more value sooner and more often. Increasing speed and improving developer efficiency is paramount. To do this, you need to be able to cycle through coding, running, testing, releasing and monitoring, all without friction. We enable this by offering containers as a service through a consistent, cloud-like experience. 
Developers can log into Docker Enterprise Container Cloud and, through self-service, create a cluster. No IT tickets. No infrastructure-specific experience required. Need a place to run a workload? Simply create it; nothing quicker than that. The clusters are presented consistently no matter where they're created. Integrate your pipelines and start deploying secure images everywhere, instantly. You can't have cloud speed if you start to get bogged down by managing, so we offer fully automated lifecycle management. Let's jump into the details of how we achieve cloud speed. The first is cloud choice. Developers, operators, admins, users: they all want, in fact mandate, choice. Choice is extremely important to efficiency, speed and ultimately the value created. You have cloud choice throughout the full stack. Choice allows developers and operators to use the tooling and services they're most familiar with and most efficient with, or perhaps simply allows them to integrate with any existing tools and services already in use, allowing them to integrate and move on. Docker Enterprise Container Cloud isn't restrictive; it's open and flexible. The next important choice we offer is in orchestration. We hear time and time again from our customers that they love Swarm, that it's simple enough for the majority of their applications, that it just works, and that they have the skills and knowledge to effectively use it. They don't need to be, or find, Kubernetes experts to get immediate value, so we will absolutely continue to offer this choice in orchestration. Our existing customers can rest assured their workloads will continue to run great, as always. On the other hand, we can't ignore the popularity, the growth, the enthusiasm and the community ecosystem that has exploded around Kubernetes. So we will also be including a fully conforming, tested and certified Kubernetes. Going down the stack, you can't have choice or speed without your choice in operating system. This ties back to developer efficiency. 
We want developers to be able to leverage their operating system of choice. We're initially supporting full stack lifecycle management for Ubuntu, with other operating systems like Red Hat to follow shortly. Lastly, all the way down at the bottom of the stack is your choice in infrastructure. Choice in infrastructure is in our DNA. We have always promoted no lock-in and the flexibility to run where needed. Initially we're supporting OpenStack, AWS and full lifecycle management of bare metal. We also have a roadmap for VMware and other public cloud providers. We know there's no single solution for the unique and complex requirements our customers have. This is why we're doubling down on being the most open platform. We want you to truly make this your cloud. If done wrong, all this choice at speed could become extremely complex. This is where cloud simplification comes in. We offer a simple and consistent as-a-service cloud experience, from installation to day-two ops. Clusters are created using a single pane of glass no matter where they're created, giving a simple and consistent interface. Clusters can be created on bare metal, in private data centers and, of course, on public cloud. Applications will always have specific operating requirements: for example, data protection, security, cost efficiency, edge, or leveraging specific services on public infrastructure. Being able to create a cluster on the infrastructure that makes the most sense, while maintaining a consistent experience, is incredibly powerful to developers and operators. This helps developers move quickly, by being able to leverage the infra and services of their choice, and operators, by leveraging available compute with the most efficient infra available. Now that we have users self-creating clusters, we need centralized management to support this increase in scale. Docker Enterprise Container Cloud is the single pane of glass for observability and management of all your clusters. 
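As an aside, the self-service, provider-agnostic workflow described here can be pictured in a few lines of code. Every name below is invented for illustration; this is a sketch of the idea, not Docker Enterprise Container Cloud's actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch of self-service, multi-cloud cluster creation behind
# a single pane of glass. Names are invented for illustration only.

@dataclass
class ClusterSpec:
    name: str
    provider: str                     # "aws" | "openstack" | "baremetal"
    nodes: int
    orchestrator: str = "kubernetes"  # or "swarm": choice in orchestration

class ManagementPlane:
    """Single pane of glass: every cluster, whatever the infra, in one view."""

    SUPPORTED = ("aws", "openstack", "baremetal")

    def __init__(self):
        self._clusters = []

    def create_cluster(self, spec: ClusterSpec) -> ClusterSpec:
        # Self-service: no IT ticket, same declarative flow for every provider.
        if spec.provider not in self.SUPPORTED:
            raise ValueError(f"unsupported provider: {spec.provider}")
        self._clusters.append(spec)
        return spec

    def list_clusters(self):
        # Consistent presentation regardless of where the clusters run.
        return [(c.name, c.provider, c.orchestrator) for c in self._clusters]

plane = ManagementPlane()
plane.create_cluster(ClusterSpec("dev", provider="aws", nodes=3))
plane.create_cluster(ClusterSpec("edge", provider="baremetal", nodes=5, orchestrator="swarm"))
```

The point of the sketch is the shape of the workflow: one declarative spec, one management plane, any infrastructure underneath.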
We have day-two ops covered, to keep things simple and you moving fast. From this single pane of glass, you can manage the full stack lifecycle of your clusters from the infra up, including Docker Enterprise, as well as the fully automated deployment and management of all components deployed through it. What I'm most excited about is Docker Enterprise Container Cloud as a service. What do I mean by as a service? Docker Enterprise Container Cloud is fully self-managed and continuously delivered. It is always up to date, always security patched, always available, with new features and capabilities pushed often and directly to you: a truly as-a-service experience, anywhere you want it run. Security is of utmost importance to Mirantis and our customers. Security can't be an afterthought, and it can't be added later. With Docker Enterprise Container Cloud, we're maintaining our leadership in security. We're doing this by leveraging the proven security in Docker Enterprise. Docker Enterprise has the best and the most complete security certifications and compliance, such as DISA STIG and FIPS 140-2. These security certifications allow us to run in the world's most secure locations. We are proud and honored to have some of the most security-conscious customers in the world, from industries like insurance, finance and health care, as well as public, federal and government agencies. With Docker Enterprise Container Cloud, we put security as our top concern, but importantly, we do it with speed. You can't move fast with security in the way, so to solve this we've added what we're calling invisible security: security enabled by default and configured for you as part of the platform. Docker Enterprise Container Cloud is multi-tenant with granular RBAC throughout, in conjunction with Docker Enterprise, Docker Trusted Registry and Docker Content Trust. 
We have a complete end-to-end secured software supply chain: only run the images that have gone through the appropriate channels and that you have authorized to run, on the most secure container engine in the industry. Lastly, I want to quickly touch on scale. Today, cluster sprawl is a very real thing. There are test clusters, staging clusters and, of course, production clusters. There are also different availability zones, different business units and so on. There are clusters everywhere. These clusters are also running all over the place. We have customers running Docker Enterprise on-premise, embracing public cloud (and not just one cloud), and they might also have some bare metal. So cloud sprawl is also a very real thing. All these clusters on all these clouds are a maintenance and observability nightmare. This is a huge friction point to scaling. Docker Enterprise Container Cloud solves these issues and lets you scale quicker and more easily. A little recap of what's new: we've added multi-cluster management, so you can deploy and attach all your clusters wherever they are. Multi-cloud, including public, private and bare metal: deploy your clusters to any infra. Self-service cluster creation: no more IT tickets to get resources, incredible speed. Automated full stack lifecycle management from the infra up, including Docker Enterprise Container Cloud itself, as a service. Centralized observability, with a single pane of glass for your clusters, their health and your apps. And most importantly, for our existing Docker Enterprise customers: you can, of course, add your existing clusters to Docker Enterprise Container Cloud and start leveraging the many benefits it offers immediately. So that's it. Thank you so much for attending today's keynote. This was very much just a high-level introduction to our exciting release. There is so much more to learn about and try out. I hope you are as excited as I am to get started today with Docker Enterprise Container Cloud. Please attend the tutorial tracks. Up next is Miska, with Lens, the world's most popular Kubernetes IDE. Thanks again, and I hope you enjoy the rest of our conference.
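The image gatekeeping mentioned in the keynote (only run images that came through authorized channels) reduces, at its core, to verifying content digests against a trusted list. Here is a minimal stand-in for that idea, deliberately simplified and not Docker Content Trust's real signing mechanics:

```python
import hashlib

# Simplified stand-in for a signed-image gate: the engine admits an image
# only if its content digest appears in an explicitly authorized set.

def digest(image_bytes: bytes) -> str:
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

# Digests published through the trusted channel (e.g. a trusted registry).
authorized = {digest(b"registry/app:1.0 layer contents")}

def admit(image_bytes: bytes) -> bool:
    """True only for images whose digest was explicitly authorized."""
    return digest(image_bytes) in authorized
```

An unmodified image passes the gate; any tampering changes the digest, so the same check rejects it.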

Published Date : Sep 15 2020

Jitesh Ghai, Informatica | CUBE Conversation, July 2020


 

>> Narrator: From theCUBE Studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hello and welcome back to this CUBE Conversation. I'm John Furrier here in theCUBE Studios, your host for our remote interviews as part of our coverage as we continue to get the interviews during COVID-19. Great talk and session here about data warehouses, data lakes, data everything, hybrid cloud, and back on theCUBE for a return, a CUBE alumni, virtual alumni: Jitesh Ghai, senior vice president and general manager of data management at Informatica. Great to see you come back. We had a great chat about privacy and data at scale in the last session. Great to see you again. >> Likewise, John, great seeing you. It is always a pleasure to join you and discuss some of the prevailing topics in the space of data. >> Well, it's great that you're available remotely, and thanks for coming back again, because we want to dig into the digital transformation aspect of the challenges that your customers have, specifically around data warehouses and data lakes, because this has become a big topic. What are the biggest challenges that you guys see your customers facing with digital transformation? >> Yeah, great question. Really, it comes down to ensuring every digital transformation is data-driven. There is a data workstream to help inform thoughtful insights that drive decisions to embark on, and realize outcomes from, the transformation. And for that you need a healthy, productive, modern, agile, flexible data and analytics stack. And so what we are enabling our customers to realize is a modern cloud-native, cloud-first data and analytics stack, built on modern architectures of data lakes and data warehouses, all in the cloud. >> So you mentioned the data warehouse, modern cloud and the data lake. Tell us more about that. What's going on there? How do customers approach that? 
Because it's not the old-fashioned way, and data lakes have been around for a while too; by the way, some people call it the data swamp when they don't take care of it. Talk about those two things and how customers attack that strategic imperative to get it done right. >> Yeah, there's been a tremendous amount of disruption and innovation in the data and analytics stack. And what we're really seeing, I think you mentioned it: 15, even 20 years ago, there were these things called data marts that the finance teams would report against, for financial reporting, regulatory compliance, et cetera. Then there were these things called data warehouses that were bringing together data from across the enterprise for comprehensive enterprise views, to run the business as well as to perform reporting. And then with the advent of big data, about five years ago, we had Hadoop-based data lakes, which, as you mentioned, were in many cases data swamps, because of the lack of governance, the lack of cataloging, and the lack of insight into what is in the lake and who should and shouldn't access the lake. And very quickly that itself got disrupted, from Hadoop to Spark. And very quickly customers realized that, hey, you know what, managing these 100- or several-hundred-node Hadoop and Spark lakes on-premise is extremely expensive: the hardware is extremely expensive, the people are extremely expensive, and there's maintaining and patching, et cetera, et cetera. And so the demand very rapidly shifted to cloud-first, cloud-native data lakes. Equally, we're seeing customers realize the benefits of cloud-first, cloud-native: the flexibility, the elasticity, the agility. And we're seeing them realize their data warehouses and reporting in the cloud as well, for the same elastic benefits for performance as well as for economics. >> So what are the critical capabilities needed to be successful with a modern data warehouse or a data lake that lasts, that can scale and keep providing value? 
What are those critical capabilities required to be successful? >> For sure, exactly. First and foremost, it's cloud-first, cloud-native. But why is Informatica uniquely positioned and excited to enable this modernization of the data and analytics stack in the cloud? It comes down to foundational capabilities that we're recognized as a leader in, across the three Magic Quadrants of metadata management, data integration and data quality. Oftentimes, when folks are prototyping, they immediately start hand-coding and putting some data together through some basic ingestion capability, and they think that they're building a data lake or populating a data warehouse. But to truly build a system of record, you need comprehensive data management, integration and data quality capabilities. And that's really what we're offering to our customers, cloud-first and cloud-native, so that it's not just your data lakes and data warehouses that are cloud-first, cloud-native; so is your data management stack, and you get the same flexibility, agility and resiliency benefits. >> I don't think many people truly understand how important what you just said is: the cloud-native capabilities. In addition to some of those things, it's really imperative to be built for the future. So with that, can you give me a couple of examples of customers that you can showcase to illustrate the success of having the critical capabilities from Informatica? >> Yeah, what we've found is that being data-driven requires organizations to bring data together from various applications and various sources of data, on-premise and in the cloud: from SaaS apps, from cloud PaaS databases, as well as from on-premise databases and on-premise applications. And that's typically done in a data lake architecture. It's in that architecture that you have multiple zones of curation: you have a landing zone, a prep zone, and then certified datasets that you can democratize. 
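The zone-by-zone curation just outlined can be pictured as a toy pipeline. The helper names below are invented for illustration, and Informatica's actual services are far richer than this sketch:

```python
# Toy sketch of data lake curation zones: records land raw, are
# standardized in a prep zone, and only quality-checked rows are
# certified for democratized consumption. Names invented for illustration.

landing, prep, certified = [], [], []

def ingest(record: dict) -> None:
    landing.append(record)  # landing zone: raw, untouched

def prepare() -> None:
    # prep zone: standardize keys so downstream checks are uniform
    for record in landing:
        prep.append({k.strip().lower(): v for k, v in record.items()})

def certify(required=("id", "email")) -> None:
    # certified zone: only complete records are published for analytics
    for record in prep:
        if all(record.get(field) for field in required):
            certified.append(record)

ingest({"ID": 1, "Email": "a@example.com"})  # complete record
ingest({"ID": 2, "Email": None})             # fails the quality check
prepare()
certify()
```

Only the first record survives into the certified zone; the incomplete one stops at prep, which is the whole point of governed curation.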
And we spoke about some of this previously, under the topic of data governance and privacy. What we are enabling with these capabilities of metadata management, data integration and data quality is onboarding all of this data, comprehensively processing it, and getting it ready for analytics teams and data science teams. Kelly Services, for example, is managing the recruitment of over half a million candidates using greater data-driven insights within their data lake architecture, leveraging our integration, quality and metadata management capabilities to realize these outcomes. AXA XL is doing very similar things with their data lake and data warehousing architecture to inform their data science teams for more productive underwriting. So, a tremendous amount of data-driven insights. Being a data-driven organization really comes down to this foundational architecture of cloud data warehousing and data lakes, and the associated cloud-first, cloud-native data management that we're enabling our customers with. >> Okay, Jitesh, I've got to put you on the spot on this one. Pretend for a minute I'm a customer. I say: okay, I'm comfortable with my old-fashioned, my grandfather's data warehouse. We've had it for years; it spits out the reports it needs to spit out. A data lake, I'm really not sure about. I've got a bunch of servers; maybe we'll put our toe in the water there and try it out, but I'm good right now. I'm not sure I'm ready to go there. My boss is pushing, and I'm telling them I'm good. I've got a cloud strategy with Microsoft, I've got a cloud strategy with AWS on paper, and we're going to go that way eventually, but I'm not going to move; I need to just stay where I'm at. What do you say to that customer? First of all, I don't think anyone's really in that position unless they're truly in the legacy world, or maybe they're locked in, but for the most part they're saying: hey, I'm not ready to move. 
>> We see both; we see the spectrum. To us, data management being cloud-first and cloud-native necessitates that your capabilities support hybrid architectures. So there is a class of customers, potentially for regulatory compliance reasons, typically in financial services, where their data estate is decidedly on-premise, in old-fashioned data centers. For those customers, we have market-leading capabilities that we've had for many, many years, and that's fine; that works too. But we're naturally seeing organizations, even banks and financial services, awaken to all the obvious benefits of a cloud-first strategy and start to modernize various pieces. First, it was just decommissioning data centers and moving their application, analytics and data estate to the cloud, as in bring-your-own-license, as we refer to it. That very quickly has modernized to: I want to leverage the PaaS data offerings within AWS, within Azure, within GCP; I want to leverage this modern data warehouse from Snowflake. And that's when customers realize the acceleration of value they can get by unshackling themselves from the burden of managing servers, the software, the operating system, as well as the associated applications and databases that need to be administered, upgraded, et cetera, abstracting away all of that so that they can really focus on the problem of data: collecting it, processing it, and enabling the larger lines of business to be data-driven, enabling those digital transformations that we were speaking about earlier. >> Well, I know you mentioned Snowflake. I think they're actually a hot company in Silicon Valley. They filed to go public. Everyone I've talked to loves working with them; they're easy to use, and I think they're eating into Redshift a little bit on the Amazon side. 
Certainly anyone using old-school data warehouses looks at Snowflake and says it's great. How does a customer who wants to get to that kind of experience set up for it? There's some setup that you guys do. We've had many conversations with some of the leaders at Informatica about this, and with your board members: you've got to set the foundation, and you've got to get this done right. Take us through what it takes to do that. I mean, timetable: are we talking months, weeks, days? Is the migration a year? It depends on how big it is, but say I do want to take that step to set my company up for these kinds of large cloud-scale, cloud-native benefits. 
We have a way of converting that and repurposing it within our cloud-first cloud-native metaphors, so that they get the benefit of continued value from their existing estate, but within a modern cloud-first cloud-native paradigm, that's elastic that serverless and so forth. >> Jitesh, always great to speak with you. You've got a great thought leadership, just an expertise, but also leading a big group within Informatica around data warehouses and data management in general, that you're the GM as well, you've got a PNL responsibility. Thanks for coming on. I do want to ask you while I got you here to react to some of the news, and how it means what it means for the enterprise. So I just did a panel session on Sunday. My new, "meet the analysts segment show" I'm putting together around the EU's recent decision to shoot down the privacy shield law in the UK, mainly because of the data sharing. GDPR is kicking in, California is doing something here. It kind of teases out the broader trend of data sharing, right? And responsibility. Well, I'm going to surveil you. You're going to say, it's not necessarily related to Informatica, so to speak, but it does kind of give a tell sign that, this idea of having your data to be managed so you can have kinds of the policies you need to be adaptive to. It turns out no one knows what's going on. I got data over here. I got data over there. So it's kind of data all over the place. And you know, one law says this, the other law contradicts it, tons of loopholes, but it points out what can happen when data gets out of control. >> Yeah, and then that's exactly right. 
And that's why, when I say metadata management is a critical foundational capability to build these modern data and analytics architectures, it's because metadata management enables cataloging and understanding where all your data is and how it's proliferating. It also enables governance as a result, because metadata management gives you technical metadata and business metadata, and the combination of all of these different types of metadata enables you to have an organized view of your data estate, enables you to plan how you want to process, manage and work with the data, and who you can and cannot share that data with. And that's the governing framework that enables organizations to be data-driven, to democratize data, but within a governance framework. So, extremely critical: to democratize data, to be more data-driven, you also need to govern data. And that's how metadata management, with integration and quality, really brings things together. >> And to have a user experience that's agile and modern and contemporary, you've got to have the compliance and governance, but you've got to enable the application developers, or the use cases, not to be waiting. You've got to be fast. >> That's exactly right. In this new modern world of digital transformation and a faster pace, everybody wants to be data-driven. And that spans a spectrum from deeply technical data engineers, data analysts and data scientists all the way to nontechnical business users who want to do some ad hoc analytics and want the data when they want it. And that's critical. We have built that on a foundation of intelligent metadata, what we call the CLAIRE engine, and we have built fit-for-use, deliberate experiences for the appropriate personas: the deeply technical ones wanting more technical experiences, all the way to nontechnical business users who just want data in a simple data-marketplace type of shopping paradigm. 
So it's critical to meet the UX requirements, the user experience requirements, for this varied group of data consumers. >> Great to have you on. I'll let you have the last word. Talk to the people who are watching this who may be a customer of yours, or may need to become a customer of Informatica. What's your pitch? Why Informatica? Give the pitch. >> Informatica is laser-focused, singularly focused, on the problem of data management. We are independent and neutral, so we work with your corporate standard, whether it's AWS, Azure or GCP, and with your best-of-breed selections, whether it's Snowflake or Databricks. And in many cases we see the Global 2000 select multiple cloud vendors: one division goes with AWS and another goes with Azure. So the world of data analytics is decidedly multi-cloud. And while we recognize that data is proliferating everywhere, and there are multiple technologies and multiple PaaS offerings from various cloud vendors where data may reside, including on-premise, and while all of that might be fragmented, you want a single data management capability within your organization that brings together metadata management, integration and quality, and that is increasingly automating the job of data management by leveraging AI and ML. So that in this Data 4.0 world, Informatica is enabling AI-powered data management, so that you can get faster insights, be more data-driven and deliver more business outcomes. >> Jitesh Ghai, senior vice president and general manager of data management at Informatica. You're watching our virtual coverage and remote interviews with all the Informatica thought leaders, experts, senior executives and customers here on theCUBE. I'm John Furrier. Thanks for watching. (upbeat music)
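Jitesh's core point in this conversation, that combining technical and business metadata is what lets you catalog data and decide who may see it, can be illustrated with a toy catalog. The structure and field names below are invented for illustration; this is not Informatica's CLAIRE engine or its APIs:

```python
# Toy metadata catalog: technical metadata (where a field lives, its type)
# joined with business metadata (classification, owner) answers governance
# questions like "where is my sensitive data, and who may see it?"
# Structure and field names are invented for illustration.

catalog = {
    "crm.customers.email": {
        "store": "cloud-dw",          # technical: physical location
        "type": "string",             # technical: schema
        "classification": "PII",      # business: sensitivity
        "owner": "marketing",         # business: stewardship
    },
    "sales.orders.amount": {
        "store": "data-lake",
        "type": "decimal",
        "classification": "internal",
        "owner": "finance",
    },
}

def sensitive_fields(cat: dict) -> list:
    """Governance query: list every PII field, wherever it proliferates."""
    return sorted(k for k, v in cat.items() if v["classification"] == "PII")

def can_share(cat: dict, field: str, team: str) -> bool:
    """Democratize freely, except PII, which stays with its owning team."""
    meta = cat[field]
    return meta["classification"] != "PII" or team == meta["owner"]
```

With both kinds of metadata in one place, "democratize within a governance framework" becomes a query rather than a guess.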

Published Date : Jul 22 2020

Mike Ferris, Red Hat | IBM Think 2020


 

>>From theCUBE Studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >>Welcome back. I'm Stu Miniman, and we're here with theCUBE's coverage of IBM Think 2020, the global experience reaching all of the participants of the event where they are. I'm happy to welcome back one of our Cube alumni, Mike Ferris, who is the vice president of corporate development and strategy at Red Hat. Mike, it's great to see you. >>Likewise. Happy to be here. >>All right, so Mike, lots of things to talk about from a few weeks back. Of course, the management changes happened, and we're fresh off of Red Hat Summit, where I had the pleasure of talking to a lot of your peers, your new boss, and many of the customers. But for our audience, bring us up to speed. Back in 2019, the largest software acquisition ever was completed, with IBM buying Red Hat, and there have been some management changes, some people switching roles, and you've got a new title. So bring our audience up to speed. >>Sure, absolutely. It's been an exciting several months as we've gone through this. Of course, we knew things were going to happen; they were announced clearly with Ginni's retirement quite a while ago. But certainly with the Arvind announcement, and then with Jim Whitehurst becoming president of IBM and Paul Cormier becoming CEO of Red Hat, it's been an exciting several months trying to go through this and understand what would change and, frankly, what would not change. I'll say from Red Hat's perspective, having been with Red Hat for coming up on 20 years, not a lot has really changed.
We're still focused on our mission of being the leading enterprise open source software company: taking both of our platforms, Red Hat Enterprise Linux and now OpenShift, forward in the market, partnering around middleware components, and hardening our management as well as our storage elements. >>So, you know, our mission hasn't changed, and that's one of the key aspects of this. I'll say that with Arvind now as CEO of IBM, Jim Whitehurst as president of IBM, and Paul Cormier as CEO of Red Hat, we've got a really strong leadership group in place at IBM that understands what Red Hat is, what we mean to the customer, and just as importantly, what we mean to the open source community. That type of drive is certainly something that we think will help ensure that the value we've delivered to customers from day one, back when we launched Red Hat Enterprise Linux, or Red Hat Advanced Server, frankly, is something we'll be able to continue to deliver in the community and with the customers as we move forward. >>Yeah. Mike, it's interesting when we look out on the ecosystems out there. We understand that for customers it can sometimes be challenging to say, hey, I listen to 10 different vendors and they all say the same words: I've got multi-hybrid cloud, digital modernization, things like that. With our hat on as an analyst firm, we say, okay, everybody does things a little bit differently. If you look at the big cloud players, they are all playing different games. When we looked at the IBM strategy pre-acquisition of Red Hat, and at Red Hat, they lined up pretty well. At Summit it was open hybrid cloud. When I look at IBM, maybe a little bit more talk of multicloud than hybrid.
Well, but hybrid has long been a piece of it. >>Yeah, okay. Give us a little bit of the inside view, with your strategy hat on. There's been strong alignment, obviously, between IBM and Red Hat for decades. But there are some places where you need to make sure that people understand that Red Hat still plays across all the clouds. And of course IBM has services that span many places, but they also have products and services that are particular to IBM. >>Absolutely. And I think it's important to note, and this is well established, that one of the core justifications and reasons for the acquisition was really around Red Hat's position, not just in open source, but in the hybrid cloud. We've been talking about that for many years, in fact before most of the vendors had picked it up. But just as importantly, if you look back at Arvind Krishna's announcements on, frankly, the day that he was named CEO, he started talking about things like IBM's focus being hybrid and AI, how those things come together, and who the participants are in that value being delivered. From Red Hat's perspective, as we've said, we've been talking about hybrid and delivering on hybrid for many years now, and that's now being pushed as part of the overall IBM message. So by leveraging that value and extending it throughout the ecosystem that IBM brings, throughout the software that IBM has, and their services, we think we've got a good opportunity to take that message broader in the market. And with both Paul and Jim, CEO of Red Hat and president of IBM, working together, we'll be able to take that and leverage that capability throughout all of IBM. >>Yeah.
I'm glad you brought up the AI piece, because so often we're talking about platforms and we're talking about infrastructure. And while that is my background, we understand that the reason infrastructure exists is the application, and one of the most important pieces of the application is the data. Red Hat, of course, has a strong history close to applications and data; you've got an operating system as one of the core pieces of what you're doing. And when I think about IBM and its strengths, the first thing I probably think of is services, but the second thing I think of is all of the business productivity applications, the databases, all these applications that IBM has built over the years. I'm wondering if we can click down one notch from hybrid cloud and AI: how are IBM and Red Hat helping customers build all of those new applications and go through those transformations to really be modern enterprises? >>Yeah, so certainly if you look at Red Hat's history, we focused very much on building the platforms, and again, whether that was Red Hat Enterprise Linux, OpenShift, or JBoss, our focus has been on how we can make a standardized platform that will work across the industry regardless of use case or industry vertical. IBM has both platforms and a lot of investment in capabilities in the higher-level value services, as well as specializations of these applications and platforms for specific vertical industries. And a lot of what they've been able to bring to the table, with their investments in Watson and AI as well as a lot of their data services, has certainly started to come to fruition.
>>And when we start taking these two in combination and applying, for example, a focus on developers and developer tools, being able to bring value not just to the operations folks but also to the developer side, and putting a lot of the AI capabilities across that, we're starting to see accelerated value and accelerated use. And if you layer that on top of a hybrid approach, we've got a very strong message that crosses everything from existing applications to net new applications being developed, from the DevOps cycle all the way through the operations cycle at the bottom end, where customers are actually trying to deploy across multiple platforms and multiple infrastructures and keep everything consistently managed, secured, and operated. That's really the overall message that we're seeing as we talk about this together with IBM. >>All right. So, Mike, you touched on some of the products that Red Hat offers in the portfolio. It was a real focus at Summit not to talk about the announcements, though a week before Summit some came out; OpenShift 4.4 was a big one. Give us the update on the Red Hat portfolio and on where IBM is helping Red Hat scale. >>Yeah, so certainly you've touched on some of the big ones. OpenShift itself, with the 4.4 release, brings a lot of new capabilities to those customers: better management, better capabilities in what they can do with monitoring, service mesh, et cetera. But certainly also things like what we're doing with OpenShift virtualization, which was another announcement.
There, we're actually bringing a game-changing capability to the market, enabling customers that have both existing virtualized environments and new, migrated, or transformed container-native environments to run those on the same platform, with the same management infrastructure. We see that as huge: being able to simplify the management capabilities, understand cost, and control those environments in a much more consistent way. Secondly, one of the big things that's been happening is really around Advanced Cluster Management, what we're calling ACM. This is a good example of how Red Hat and IBM have worked together: taking existing IBM capabilities, what they had called multi-cluster management, or MCM, and bringing those not just into Red Hat as part of our platforms, but also having Red Hat take the step of open sourcing them and making them part of the industry standard through the open source community. So we're able to take that type of value that IBM had matured, take it through Red Hat into the open source community, and simultaneously deliver it to our customers. >>
A red hat I think has always done a really good job of highlighting those partnerships. It's way easy on the outside to talk about the competitive nature of the industry. >>And I remember a few years ago, a red hat made, you know, a strong partnership with AWS. You mentioned, you know, Scott Guthrie from Microsoft. Well, okay. Not Satya Nadella. Okay. Love it last year, but Microsoft long partner. Oh, okay. Of course, with IBM back to the earliest days, uh, and with red hat or, uh, you know, in the much more recent days, uh, there was those partnerships. So critically important. ACM definitely an area, uh, we want to watch it. It was really question we had had, if you look at last year, Microsoft announced Azure, uh, there are lots of solutions announced as to how am I going to manage in this multicloud world. Um, because it's not, my piece is everywhere. It's now I need to manage a lot of things that are out of my control from different vendors and hopefully we learned a lot of the lessons from the multi-vendor era that will be fixed in the multi cloud era. >>Oh, absolutely. And you know, arc was part of our discussion with Scott Guthrie last week or Paul's discussion and you'll see a demo of that. But I would also expect that you'll see more things coming from us markers as well. Right. You know, this is about building a platform, a hybrid platform that works in a multicloud world and being able to describe that in a very consistent way. Manage it. You were at entitled it in a very consistent way of across all the vendors, inclusive of both self and managed services, only one option. And so we're very focused on doing that. Um, IBM, certainly AXA assisting in that, helping grow it. But overall this focus is really about red has perspective about making that hybrid, right? the leading hybrid platform, the leading Coobernetti's. Okay. uh, in the industry. And that's, that's really where starting from with OpenShift. >>All right. 
So, so Mike, we started out the discussion talking about some of the changes and you know, where red hat stays, red hat and where the company is working together. Obviously the leadership changes. Oh, we're a big piece. Uh, congratulations you, you got, you know, a new role. I've seen quite a few people, uh, with some new titles. Uh, you know, w which is always nice to see. Uh, the, the people that have been working for a long time. The other area where seems from the outside there coordinated effort is around the covert response. So, you know, I've seen the, the public letters from, from Arvin Krishna of course. red hat and Paul Cormier's letter. Well, he is there. Uh, IBM was one of the first companies that we had heard from, uh, that said, Hey, you know, we're not going to RSA conference this year. >>We're moving digital, uh, with the events. So no real focus on them boys. And then of course boarding customers. Yeah. How does that covert response happen? And am I right from the outside that it looks like there, there is a bit of United right attack, this global pandemic response. It is a, you know, I think there's two levels to this. Certainly between red hat and IBM were well coordinated. Um, within, within red hat we have, uh, we have teams that are specifically dedicated to making sure, yeah, our associates and more importantly, uh, our customers and the overall communities are well-served through this. As you said earlier in the interview, uh, certainly we hold back on any significant product announcements at summit, including with some of our partners merely because we wanted to maintain this focus on how can we help everyone through this very unfortunate experience. >>Um, and so, you know, as obviously a lot of us, all of us are sitting at home now globally. 
Uh, the focus is very much how do we stay connected or we keep the business flowing as much as possible through this and, and, and keep people safe and secure in their environments and make sure that we serve both the customers and the associates. Yes. Awesome away. So there's a lot of sensitivity and we want to make sure that, you know, the industry and the overall world knows, uh, that we're very focused on keeping people healthy and moving forward as we, as we work through this together as a world. Yeah. Well, Mike Ferris, thank you so much for the update. It's been been a pleasure catching up. Great. Thanks dude. Appreciate it. All right. Stay tuned for lots more coverage from IBM. Think 20, 20. The global digital experience. Okay. To a minimum. And thank you. We're watching. Thank you.

Published Date : May 5 2020
