Oracle Aspires to be the Netflix of AI | Cube Conversation


 

(gentle music playing) >> For centuries, we've been captivated by the concept of machines doing the job of humans. And over the past decade or so, we've really focused on AI and the possibility of intelligent machines that can perform cognitive tasks. Now in the past few years, with the popularity of machine learning models ranging from the recent ChatGPT to BERT, we're starting to see how AI is changing the way we interact with the world. How is AI transforming the way we do business? And what does the future hold for us there? At theCUBE, we've covered Oracle's AI and ML strategy for years, which has really been used to drive automation into Oracle's Autonomous Database. We've talked a lot about MySQL HeatWave's in-database machine learning, and AI pushed into Oracle's business apps. Oracle tends to lead in AI, not by competing as a direct AI player per se, but rather by embedding AI and machine learning into its portfolio to enhance its existing products and bring new services and offerings to the market. Now, last October at CloudWorld in Las Vegas, Oracle partnered with Nvidia, which is the go-to AI silicon provider for vendors. And they announced a pretty significant investment to deploy tens of thousands more Nvidia GPUs to OCI, the Oracle Cloud Infrastructure, and build out Oracle's infrastructure for enterprise-scale AI. Now, Oracle CEO Safra Catz said something to the effect of: this alliance is going to help customers across industries — from healthcare, manufacturing, and telecom to financial services — overcome the multitude of challenges they face. Presumably she was talking about driving more automation and more productivity. Now, to learn more about Oracle's plans for AI, we'd like to welcome in Elad Ziklik, who's the vice president of AI services at Oracle. Elad, great to see you. Welcome to the show. >> Thank you. Thanks for having me. >> You're very welcome. So first let's talk about Oracle's path to AI. I mean, it's the hottest topic going. For years you've been incorporating machine learning into your products and services. Could you tell us what you've been working on and how you got here? >> So, great question. As you mentioned, most of our original foray into AI was embedding AI and using AI to make our applications and databases better. So inside MySQL HeatWave, inside our Autonomous Database, we've been driving AI, and of course all our SaaS apps. So Fusion, our large enterprise business suite for HR applications and CRM and ERP and whatnot, has AI built into it. Most recently, NetSuite, our small and medium business SaaS suite, started using AI for things like automated invoice processing and whatnot. And over the last, I would say, two years, we've started exposing and bringing these capabilities into the broader OCI, Oracle Cloud Infrastructure, so that developers, ISVs, and customers can start using our AI capabilities to make their apps, their experiences, and their business workflows better, and not just consume these as embedded inside Oracle. And this recent partnership that you mentioned with Nvidia is another step in bringing the best AI infrastructure capabilities into this platform, so you can actually build any type of machine learning workflow or AI model that you want on Oracle Cloud. >> So when I look at the market, I see companies out there like DataRobot or C3 AI, there's maybe a half dozen that sort of pop up on my radar anyway.
And my premise has always been that most customers don't want to become AI experts; they want to buy applications with AI embedded, or they want AI to manage their infrastructure. So my question to you is, how does Oracle help its OCI customers support their business with AI? >> So it's a great question. I think what most customers want is business AI. They want AI that works for the business. They want AI that works for the enterprise. I call it the last mile of AI. And they want this thing to work. The majority of them don't want to hire large and expensive data science teams to go and build everything from scratch. They just want the business problem solved by applying AI to it. My best analogy is Lego. So if you think of Lego, Lego has these millions of Lego blocks that you can use to build anything that you want. But the majority of people, like me or like my kids, they want the Lego Death Star kit or the Lego Eiffel Tower kit. They want a thing that just works, and it's very easy to use. It's still Lego blocks — you still need to put some pieces together — but it just works for the scenario that you're looking for. So that's our focus. Our focus is making it easy for customers to apply AI where they need to, in the right business context. Whether it's embedding it inside the business applications, like adding forecasting capabilities to your supply chain management or financial planning software, whether it's adding chatbots into the line of business applications, integrating these things into your analytics dashboard, even all the way to a new platform piece we call ML applications that allows you to take a machine learning model and scale it for the thousands of tenants that you may have. 'Cause this is a big problem for most of the ML use cases. It's very easy to build something for a proof of concept or a pilot or a demo. But then if you need to take this and deploy it across your thousands of customers or your thousands of regions or facilities, then it becomes messy. So this is where we spend our time: making it easy to take these things into production in the context of the business application or the business use case that you're interested in. >> So you mentioned chatbots, and I want to talk about ChatGPT, but my question here is different — we'll talk about that in a minute. So when you think about these chatbots, the ones that are conversational, my experience anyway is they're just meh, they're not that great. But the ones that actually work pretty well have a conditioned response. Now they're limited, but they say, which of the following is your problem? And if one of the following is your problem, you can maybe solve your problem. But this is clearly a trend, and it helps the line of business. How does Oracle think about these use cases for your customers? >> Yeah, so I think the key here is exactly what you said. It's about task completion. The general purpose bots are interesting, but as you said, they're still limited. They're getting much better, and I'm sure we'll talk about ChatGPT. But I think what most enterprises want is around task completion. I want to automate my expense report processing. So today inside Oracle we have a chatbot where I submit my expenses, the bot asks a couple of questions, I answer them, and then I'm done. I don't need to go to our fancy application and manually submit an expense report. I do this via Slack.
And the key is around managing the right expectations of what this thing is capable of doing. I have a story from, I think, five or six years ago, when the technology was much more primitive than it is today. One of the telco providers I was working with wanted to roll out a chatbot that does real-time translation. It was for a support center, for their call centers. And what they wanted to do is, hey, we have English-speaking employees 24/7; if somebody's calling and their native tongue is different — like Hebrew in my case, or Chinese or whatnot — then we'll give them a chatbot that they will interact with, that will translate this on the fly, and everything would work. And when they rolled it out, the feedback from customers was horrendous. Customers said, the technology sucks, it's not good. I hate it, I hate your company, I hate your support. And what they did is they changed the narrative. Instead of you going to a support center, assuming you're going to talk to a human, and instead getting a crappy chatbot, they said: hey, if you want to talk to a Hebrew-speaking person, there's a four-hour wait, please leave your phone number and we'll call you back. Or you can try a new, amazing, Hebrew-speaking, AI-powered bot, and it may help your use case. Do you want to try it out? And some people said, yeah, let's try it out, and pressed one to try it out. And the feedback, even though it was the exact same technology, was amazing. People were like, oh my God, this is so innovative, this is great. Even though it was the exact same experience that they had hated a few weeks earlier. So I think the key lesson that I picked up from this experience is that it's all about setting the right expectations and working around the right use case. If you are replacing a human, the bar is different than if you are just helping or augmenting something that otherwise would take a lot of time. And I think this is our focus: picking the tasks that people want to accomplish, or that enterprises want to accomplish for their customers and their employees, and using chatbots to make those specific ones better, rather than saying, hey, this is going to replace all humans everywhere and just be better than them. >> Yeah, I mean, to the point you mentioned expense reports, I'm in a Twitter thread and one guy says, my favorite part of business travel is filling out expense reports. It's an hour of excitement to figure out which receipts won't scan. We can all relate to that. It's just the worst. When you think about companies that are building custom AI-driven apps, what can they do on OCI? What are the best options for them? Do they need to hire an army of machine intelligence experts and AI specialists? Help us understand your point of view there. >> So over the last, I would say, two or three years, we've developed a full suite of machine learning and AI services for pretty much every use case that you would expect right now: from applying natural language processing to understanding customer support tickets or social media or whatnot, to computer vision services that can understand and detect objects, count objects on shelves, or detect cracks in a pipe or defective parts, all the way to speech services that can actually transcribe human speech. And most recently we've launched a new document AI service.
It can actually look at unstructured documents like receipts or invoices or government IDs, or even proprietary documents — loan applications, student application forms, patient intake forms, and whatnot — and completely automate them using AI. So if you want to do one of the things that are, I would say, common bread and butter for any industry, whether it's financial services or healthcare or manufacturing, we have a suite of services that any developer can go and use, and easily customize with their own data. You don't need to be an expert in deep learning or large language models. You can just use our AutoML capabilities and build your own version of the models — just go ahead and use them. And if you do have proprietary, complex scenarios that you need to build custom from scratch, we actually have the most cost-effective platform for that. So we have OCI Data Science, as well as built-in machine learning platforms inside the databases — inside Oracle Database and MySQL HeatWave — that allow data scientists, Python-wielding people who actually like to build and tweak and control and improve, to have everything they need to go and build machine learning models from scratch, deploy them, and monitor and manage them at scale in a production environment. And most of it is brand new. We did not have these technologies four or five years ago; we started building them, and over the last couple of years they've reached enterprise scale. >> So what are some of the state-of-the-art tools that AI specialists and data scientists need if they're going to go out and develop these new models? >> So I think it's on three layers. There's an infrastructure layer where the Nvidias of the world come into play. For some of these things, you want a massively efficient, massively scaled infrastructure. So we are the most cost-effective and performant large-scale GPU training environment today. We're going to be first to onboard the new Nvidia H100s, the new super-powerful GPUs for large language model training. So we have that covered for you in case you need it, 'cause you want to build these ginormous things. Then you need a data science platform — a platform where you can open a Python notebook, use all these fancy open source frameworks, create the models that you want, and then click a button to deploy, and it infinitely scales wherever you need it. And in many cases you just need what I call the applied AI services. You need the Lego sets — the Lego Death Star, the Lego Eiffel Tower. So we have a suite of these sets for typical scenarios, whether it's cognitive services — again, understanding images or documents — all the way to solving particular business problems. An anomaly detection service or a demand forecasting service would be the equivalent of these Lego sets. So if that's the business problem you're looking to solve, we have services out there where you can bring your data, call an API, train a model, get the model, and use it in your production environment. So wherever you want to play — from infrastructure, to embedding this inside applications, to SaaS at the top, and everything in the middle — we have the tools for you to go and engage. >> So when you think about the data pipeline, and the data life cycle, and the specialized roles that came out of kind of the (indistinct) era, if you will, I want to focus on two: developers and data scientists.
So the developers, they hate dealing with infrastructure, and they've got to deal with infrastructure. Now they're being asked to secure the infrastructure; they just want to write code. And the data scientists, they're spending all their time trying to figure out, okay, what's the data quality? And they're wrangling data, and they don't spend enough time doing what they want to do. So there's been a lack of collaboration. Have you seen that change? Are these approaches allowing collaboration between data scientists and developers on a single platform? Can you talk about that a little bit? >> Yeah, that is a great question. One of the biggest sets of scars that I have on my back from building these platforms in other companies is exactly that. Every persona had a set of tools, and these tools didn't talk to each other, and the handoff was painful. And most machine learning efforts evaporate or die on the floor because of this problem. It's very rare that they're unsuccessful because the algorithm wasn't good enough. In most cases, somebody builds something and then can't take it to production, can't integrate it into the business application. You can't take the data out, train, create an endpoint, and integrate it back — it's too painful. So the way we are approaching this is focused exactly on this problem. We have a single set of tools, so that if you publish a model as a data scientist, developers — and even business analysts sitting inside a business application — can consume it. We have a single model store, a single feature store, a single management experience across the various personas that need to play in this. And we spend a lot of time building — borrowing a phrase that the Cerner folks used, and I really liked it — insight highways, to make it easier to bring these insights to where you need them inside applications: inside our own SaaS applications, but also inside custom, third-party, and even first-party applications. And this is where a lot of our focus goes, just because we have dealt with so much pain doing this inside our own SaaS that we've now built the tools, and we're making them available for others, to make this process of building a machine-learning-driven insight in your app easier. And it's not just the model development, and it's not just the deployment; it's the entire journey of taking the data, building the model, training it, deploying it, looking at the real data that comes from the app, and creating this feedback loop in a more efficient way. That's our focus area — exactly this problem. >> Well, thank you for that. So, last week we had our Supercloud 2 event, and I had Juan Loaiza on, and he spent a lot of time talking about how open Oracle is in its philosophy, and I got a lot of feedback. They were like, Oracle, open? I don't really think so. But the truth is, if you think about Oracle Database, it never met a hardware platform it didn't like. So in that sense it's open. But my point is, a big part of machine learning and AI is driven by open source tools and frameworks. What's your open source strategy? What do you support from an open source standpoint? >> So I'm a strong believer that nobody actually knows where the next leapfrog — the next industry-shifting innovation in AI — is going to come from.
If you looked six months ago, nobody foresaw DALL-E, the magical text-to-image generation, and the explosion it brought to art and design type experiences. If you looked six weeks ago, I don't think anybody foresaw ChatGPT and what it can do for a whole bunch of industries. So to me, assuming that a customer or partner or developer would want to lock themselves into only the tools that a specific vendor can produce is ridiculous. 'Cause nobody knows — if anybody claims that they know where the innovation is going to come from in a year or two, let alone in five or ten, they're just wrong or lying. So our strategy for Oracle is what I call the Netflix of AI. If you think about Netflix, they produced a bunch of high-quality shows on their own. A few years ago it was House of Cards; last month my wife and I binge-watched Ginny & Georgia. But they also curated a lot of shows that they found around the world and brought them to their customers. So it started with things like Seinfeld or Friends, more recently it was Squid Game, and there's a famous Israeli TV series called Fauda that Netflix bought. They bought it as-is, and they gave it the Netflix value: you have captioning, you have the ability to change the playback speed, you have it inside your app, you can download it and watch it offline, and everything. But nobody at Netflix was involved in the production of those first seasons. Now, if these things are a hit and they're great, then the third season or the fourth season will get the full Netflix production value — high budget, high-value location shooting, or whatever. But you as a customer don't care whether the producer and director and screenwriter is a Netflix employee or somebody else's employee. It is fulfilled by Netflix. I believe that we will become — we are looking to become — the Netflix of AI. We are building a bunch of AI in a bunch of places where we think it's important and we have some competitive advantage, like healthcare with the Cerner partnership or whatnot. But I want to bring the best AI software and hardware to OCI and do a fulfillment-by-Oracle on that. So you'll get the Oracle security and identity and single bill and everything you'd expect from a company like Oracle, but we don't have to be building the data science and the models for everything. So this means open source: we recently announced a partnership with Anaconda, the leading provider of Python distribution in the data science ecosystem — a joint strategic partnership to bring all that goodness to Oracle customers. We're in the process of doing the same with Nvidia and all their software libraries — not just the hardware, but software like Triton, as well as healthcare-specific pieces — and with other leading AI ISVs that we are in the process of partnering with to get their stuff into OCI and into Oracle, so that you can truly consume the best AI hardware and the best AI software in the world on Oracle. 'Cause that is what I believe our customers want: the ability to choose from any open source engine, and honestly from any ISV-type solution that is AI-powered, and use it in their experiences. >> So you mentioned ChatGPT. I want to talk about some of the innovations that are coming. As an AI expert, you see ChatGPT and, on the one hand, I'm sure you weren't surprised. On the other hand, maybe the reaction in the market and the hype is somewhat surprising.
You know, they say that we tend to over-hype things in the early stages and under-hype them in the long term — you kind of used the internet as an example. What's your take on that premise? >> So, I think that this type of technology is going to be an inflection point in how software is developed. I truly believe this. I think this is an internet-style moment, and the way software interfaces and software applications are developed will dramatically change over the next year, two, or three because of this type of technology. I think there will be industries that will be shifted. I think education is a good example — I saw this thing open on my son's laptop, so I think education is going to be transformed. The design industry, like images or whatever, has already been transformed. But I think that for mass adoption — beyond the hype, beyond the peak of inflated expectations, if I'm using Gartner terminology — certain things need to happen. One is this thing needs to become more reliable. Right now it is a complete black box that sometimes produces magic and sometimes produces just nonsense. And it needs to have better explainability and better lineage: how did you get to this answer? 'Cause I think enterprises are going to really care about the things that they surface with their customers or use internally. So I think that's one thing that's going to come. And the other thing is that I think we're going to see industry-specific large language models, or industry-specific ChatGPTs — something like how OpenAI did Copilot for writing code. I think we will start seeing this type of app solving specific business problems: understanding contracts, understanding healthcare, writing doctors' notes on behalf of doctors so they don't have to spend time manually recording and analyzing conversations. And I think that will become the sweet spot of this thing. There will be companies — whether it's OpenAI or Microsoft or Google or, hopefully, Oracle — that will use this type of technology to solve specific, very high-value business needs. And I think this will change how interfaces happen. So going back to your expense report: the world of, I'm going to go into an app and click on seven buttons in order to get some job done — that world is gone. I'm going to say, hey, please do this and that, and I expect an answer to come out. I've seen a recent demo about marketing and sales, where a customer sends an email saying they're interested in something, and a ChatGPT-powered thing just produces the answer. I think this is how the world is going to evolve. Yes, there's a ton of hype; yes, it looks like magic — and right now it is magic — but it's not yet dependable for most enterprise scenarios. But in the next 6, 12, 24 months, this will start getting more dependable, and it's going to change how these industries are run. I think it's an internet-level revolution. That's my take. >> It's very interesting. And it's going to change the way in which we interact. Instead of accessing the data center through APIs, we're going to access it through natural language processing, and that opens up technology to a huge audience. Last question — it's a two-part question. The first part is what you guys are working on for the future, but the second part is: we've got data scientists and developers in our audience, and they love the new shiny toy.
So give us a little glimpse of what you're working on for the future, and what would you say to them to persuade them to check out Oracle's AI services? >> Yep. So I think there are two main things that we're doing. One is around healthcare. With our recent acquisition, we are spending a significant effort on revolutionizing healthcare with AI — many scenarios, from patient care using computer vision and cameras, through automating and improving insurance claims, to research and pharma. We are making the best models, from leading organizations and from internal work, available to hospitals, researchers, and insurance providers everywhere. And we truly are looking to become the leader in AI for healthcare. So I think that's a huge focus area. And the second part is, again, going back to the enterprise AI angle. If you have a business problem that you want to apply AI to solve, we want to be your platform. You could use others if you want to build everything yourself, complicated and whatnot — we have a platform for that as well. But if you want to apply AI to solve a business problem, we want to be your platform. We want to be, again, the Netflix of AI kind of a thing, where we are the place for the greatest AI innovations, accessible to any developer, any business analyst, any user, any data scientist on Oracle Cloud. And we're making a significant effort on these two fronts, as well as developing a lot of the missing pieces and building blocks that we see are needed in this space, to make it truly a great experience for developers and data scientists. And what would I recommend? Get started, try it out. Here's a shameless sales plug: we have a free tier for all of our AI services, so it typically costs you nothing. I would highly recommend just going and trying these things out. Go play with it. If you are a Python-wielding developer and you want to try a little bit of AutoML, go down that path. If you're not even there and you're just like, hey, I have this customer feedback and I want to see if I can understand it, apply AI, visualize it, and do some cool stuff — we have services for that. My recommendation is — and I think ChatGPT got us here, 'cause I see people that have nothing to do with AI, that can't even spell AI, going and trying it out — I think this is the time. Go play with these things, go play with these technologies, and find what AI can do to you or for you. And I think Oracle is a great place to start playing with these things. >> Elad, thank you. Appreciate you sharing your vision of making Oracle the Netflix of AI. Love that, and really appreciate your time. >> Awesome. Thank you. Thank you for having me. >> Okay. Thanks for watching this CUBE Conversation. This is Dave Vellante. We'll see you next time. (gentle music playing)
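
[Editor's note: to make Elad's "bring your data, call an API, train a model" description concrete, here is a minimal Python sketch of calling one of OCI's applied AI services on a piece of customer feedback, as he suggests trying. The `oci` SDK package is real, but the specific client, model, and method names below are assumptions based on the SDK's conventions, and the feedback text is invented — treat this as a sketch to check against the current OCI documentation, not a verified recipe.]

```python
# Minimal sketch: analyzing customer feedback with an OCI applied AI service.
# Requires `pip install oci` and a standard config file at ~/.oci/config.
# Client/model/method names are assumptions based on OCI SDK conventions.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
client = oci.ai_language.AIServiceLanguageClient(config)

# A made-up piece of customer feedback, as in Elad's example.
documents = [
    oci.ai_language.models.TextDocument(
        key="feedback-1",
        text="Checkout keeps timing out and support never called back.",
        language_code="en",
    )
]
details = oci.ai_language.models.BatchDetectLanguageSentimentsDetails(
    documents=documents
)
response = client.batch_detect_language_sentiments(details)

# One API call in, structured sentiment out -- no model building required.
for doc in response.data.documents:
    print(doc.key, [(a.text, a.sentiment) for a in doc.aspects])
```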

Published Date: Jan 24, 2023


Juan Loaiza, Oracle | Building the Mission Critical Supercloud


 

(upbeat music) >> Welcome back to Supercloud 2, where we're gathering a number of industry luminaries to discuss the future of cloud services. We'll be focusing on various real-world practitioners today — their challenges, their opportunities — with an emphasis on data, self-service infrastructure, and how organizations are evolving their data and cloud strategies to prepare for that next era of digital innovation. And we really believe that support for multiple cloud estates is a first step of any Supercloud. In that regard, Oracle surprised some folks with its Azure collaboration for the Oracle Database and Exadata database services. And to discuss the challenges of developing a mission-critical Supercloud, we welcome Juan Loaiza, who's the executive vice president of Mission Critical Database Technologies at Oracle. Juan, you're a many-time CUBE alum, so welcome back to the show. Great to see you. >> Great to see you, and happy to be here with you. >> Yeah, thank you. So a lot of people felt that Oracle was resistant to multicloud strategies and preferred to really have everything run just on Oracle Cloud Infrastructure, OCI. Maybe that was a misperception, maybe you guys were misunderstood, or maybe you had a change of heart. Take us through the decision to support multiple cloud platforms. >> Now, we've supported multiple cloud platforms for many years, so I think that was probably a misperception. With the Oracle database, we partnered up with Amazon very early on in their cloud, when they had kind of the first cloud out there. And we had the Oracle database running on their cloud. We have backup, we have a lot of stuff running. So, yeah, part of the philosophy of Oracle has always been that we partner with every platform. We're very open; we started with SQL and APIs. As we develop new technologies, we push them into the SQL standard. So that's always been part of the ecosystem at Oracle. That's how we think we get an advantage: by being more open. I think if we try to create this isolated little world, it actually hurts us and hurts customers. So for us it's a win-win to be open across the clouds. >> So Supercloud is this concept that we put forth to describe a platform — or some people think it's an architecture; if you have an opinion, I'd love to hear it — that provides a programmatically consistent set of services hosted on heterogeneous cloud providers. And so we look at the Oracle database service for Azure as fitting within this definition. In your view, is this accurate? >> Yeah, I would broaden it. I see it as a little bit more than that. We just think that services should be available from everywhere, right? It's a little bit like if you go back to the pre-internet world: there were things like AOL and CompuServe, and those were kind of islands. If you were on AOL, you really didn't have access to anything on CompuServe, and vice versa. And the cloud world has evolved a little bit like that. We just think that's the wrong model. These clouds are part of the world, and they need to be interconnected like all the rest of the world. It's been that way for a long time with telephones, the internet, everything — everything's interconnected, and everything should work seamlessly together. So that's what we believe: if you're running, let's say, an application in one cloud and you want to use a service from another cloud, it should be completely simple to do that. It shouldn't be, I can only use what's in AOL or CompuServe or whatever else. It should not be isolated.
Well, we've got a long way to go before that Nirvana exists, but one example is the Oracle database service with Azure. So what exactly does that service provide? I'm interested in how consistent the service experience is across clouds. Did you create a purpose-built PaaS layer to achieve this common experience? Or is it off-the-shelf Terraform? Is there unique value in the PaaS layer? Let's dig into some of those questions. I know I just threw six at you. >> Yeah, I mean, what we're trying to do is very simple. Starting, for example, with the Oracle database: we want to make that seamless to use from anywhere you're running — whether it's on-prem, on some other cloud, anywhere else — you should be able to seamlessly use the Oracle database, and it should look like the internet. There's no friction. There's not a lot of hoops you've got to jump through just because you're trying to use a database that isn't local to you. So it's pretty straightforward. And in terms of things like Azure, it's not easy to do, because all these clouds have a lot of very unique technologies. So what we've done at Oracle is we've said, "Okay, we're going to make the Oracle database look exactly like it would if it were running on Azure." That means we'll use the Azure security systems, the identity management systems, the networking, things like monitoring and management. So we'll push all these technologies in. For example, when we have a monitoring event or we have alerts, we'll push those into the Azure console. So as a user, it looks to you exactly as if that Oracle database were running inside Azure. Also, the networking is a big challenge across these clouds, so we've basically made that whole thing seamless. We create a super-high-bandwidth network between Azure and Oracle. We make sure that's extremely low latency — under two milliseconds round trip, all within the local metro region. So it's very fast, very high bandwidth, very low latency. And we take care of establishing the links and making sure that it's secure and all that kind of stuff. So at a high level, it looks to you like the database is local — even the look and feel of the screens: it's the Azure colors, it's the Azure buttons, it's the Azure layout of the screens. So it looks like you're running there, and we take care of all the technical details underlying that, which is a lot — it has taken a lot of work to make it work seamlessly. >> And the magic of that abstraction, Juan — does it happen at the PaaS layer? Could you take us inside that a little bit? Is there intelligence in there that helps you deal with latency, or are there any kind of purpose-built functions for this service? >> You could think of it as happening at a lot of different layers. It happens at the identity management layer, at the networking layer, at the database layer, at the monitoring layer, at the management layer. So all those things have been integrated. It's not one thing that you just go and do; you have to integrate all these different services together. You can access files in Azure from the Oracle database — again, that's completely seamless, just as if it were local to our cloud: you get your Azure files in your kind of S3 equivalent. So yeah, it's not one thing. There's a whole lot of pieces to the ecosystem, and what we've done is work on each piece separately to make sure that it's completely seamless and transparent, so you don't have to think about it — it just works.
So you kind of answered my next question, which is about the technical hurdles. It sounds like the technical hurdle is that integration across the entire stack. That's the sort of architecture that you've built. What was the catalyst for this service? >> Yeah, the catalyst is just fulfilling our vision of an open cloud world. Like I said, Oracle, from the very beginning, has believed in open standards. Customers should have choice; customers should be able to use whatever they want from wherever they want. And we saw that in the new world of cloud, that had broken down: everybody had their own authentication system, management system, monitoring system, networking system, configuration system. And it became very difficult — there was a lot of friction to using services across clouds. So we said, "Well, okay, we can fix that." It's work — a significant amount of work — but we know how to do it, so let's just go do it and make it easy for customers. >> So, given that Oracle's main focus is on mission-critical workloads: you talked about this low-latency network, but you still have physical distances. So how are you managing that latency? What's the experience been for customers across Azure and OCI? >> Yeah, so it's a good point. Latency can be an issue. The good thing about clouds is we have a lot of cloud data centers — we have dozens and dozens of cloud data centers around the world, and Azure has dozens and dozens of cloud data centers. And in most cases, they're in the same metro region, because there are kind of natural metro regions within each country where you want to put your cloud data centers. So most of our data centers are actually very close to the Azure data centers. There's northern Virginia, there's London, there's Tokyo, Seoul — natural places where everybody puts their data centers, et cetera. And so that's the real key: that allows us to put in a very-high-bandwidth, low-latency network. The real problems with latency come when you're trying to go across a long physical distance — if you're trying to connect across the Pacific, or across the country, or something like that, then you can get in trouble with latency. Within the same metro region, it's extremely fast. It tends to be around one — at the highest, two — milliseconds round trip, through all the routers and connections and gateways and everything else. With everything taken into consideration, what we guarantee is that it's always less than two milliseconds, which is a very low latency time. So that tends not to be a problem, because it's extremely low latency. >> I was going to ask you about that less than two milliseconds. So, earlier in the program we had Jack Greenfield, who runs architecture for Walmart, and he was explaining what we call their Supercloud, which runs across Azure, GCP, and their on-prem. They have this thing called the triplet model. So my question to you is: with that guarantee of less than two milliseconds, do you have situations where you're bringing, you know, Exadata Cloud@Customer on-prem to achieve that? Or is this just across clouds?
And from a customer point of view, they don't really see the network. Also, remember that SQL is actually designed to have very low bandwidth and latency requirements. It is a language. You don't go to the database and say, do this one little thing for me; you send it a SQL statement that can actually access lots of data while in the database. So the real latency requirement of a SQL database is within the database: I need to access all that data fast, so I need very fast access to storage and very fast access across nodes. That's what Exadata gives you. But you send one request, and that request can do a huge amount of work and then return one answer. That's kind of the design point of SQL. So SQL inherently has low bandwidth requirements — it was used back in the eighties, when we had 10-megabit networks, and the biggest companies in the world ran on it back then. Right now we're talking hundreds of gigabits, so it's really not much of a challenge. When you're designed to run on 10 megabits, and somebody says, okay, I'm going to give you 10,000 times what you were designed for, it's a pretty low hurdle to jump. >> What about the deployment models? How do you handle this? Is it a single global instance across clouds, or do you sort of instantiate in each — you've got Exadata in Azure and Exadata in OCI? What does the deployment model look like? >> It's pretty straightforward. The customer decides where they want to run their application and database. There are natural places where people go: if you're in Tokyo, you're going to choose the local Tokyo data centers for both Microsoft and Oracle; if you're in London, you're going to do that; if you're in California, you're going to choose maybe San Jose, something like that. So a customer just chooses. We both have data centers in that metro region. So they create their service on Azure, and then they go to our console — which looks just like an Azure console — and say, all right, create me a database. And then we choose the closest Oracle data center, which is generally a few miles away, and it all gets created. So from a customer point of view, it's very straightforward. >> I'm always in awe of how simple you make things sound. All right, what about security? You talked a little bit before about identity access, how you're sort of abstracting the Azure capabilities away so that you've simplified it for your customers. But are there any other specific security things that you need to do? How much did you have to abstract the underlying primitives of Azure or OCI to present that common experience to customers? >> Yeah, so there are really two big things. One is the identity management: my name is X on Azure, and I have this set of privileges. Oracle has its own identity management system, right? So what we didn't want is for you to have to bridge these things yourself — it's a giant pain to do that. So we do what we call federating across these identity management systems. You put your credentials into Azure, and then you automatically get to use the exact same credentials and identity in the Oracle cloud. So again, you don't have to think about it; it just works. And then the second part is the whole bridging of the network. Within a cloud, you generally have a virtual network that's private to your company. And so at Oracle, we bridge the private network that you created in, for example, Azure to the private network that we create for you in Oracle.
So it is still a private network, without you having to do a whole bunch of work. It's just like if you were in your own data center: other people can't get into your network. So it's secured at the network level, at the identity management level, and at the encryption level. And again, we did a lot of work to make that seamless for customers, and they don't have to worry about it, because we did the work. That's really as simple as it gets. >> That's what Supercloud's supposed to be all about. All right, we were talking earlier about sort of the misperception around multicloud — your view of open, I think, which is that you run the Oracle database wherever the customer wants to run it. So you've got this database service across OCI and Azure; customers today run Oracle database in AWS; you've got MySQL HeatWave, which you announced on AWS; Google touts a bare-metal offering where you can run Oracle on GCP. Do you see a day when you extend an OCI-Azure-like arrangement across multiple clouds? Would that bring benefits to customers, or will the world of database generally remain largely fenced, with maybe a few exceptions like what you're doing with OCI and Azure? I'm particularly interested in your thoughts on egress fees as maybe one of the barriers to this happening, and why maybe these stovepipes exist today and into the future. What are your thoughts on that? >> Yeah, we're very open to working with everyone else out there. Like I said, we've always been big believers that customers should have choice and should be able to run wherever they want. That's been kind of a founding principle of Oracle. We have Azure — we did a partnership with them — we're open to doing other partnerships, and you're going to see other things coming down the pipe. On the topic of egress: yeah, the large egress fees — it's pretty obvious what goes on with that. Various vendors like to have large egress fees because they want to keep things locked into their cloud. It's not a very customer-friendly thing to do, and I think everybody recognizes that it's really trying to coerce, or put a lot of friction on, moving data out of a particular cloud. And that's not what we do. We have very, very low egress fees. So we don't really do that, and we don't think anybody else should do it. But I think customers, at the end of the day, will win that battle. They're going to go back to their vendor and say, well, I have choice in clouds, and if you're going to impose these limits on me, maybe I'll make a different choice. That's ultimately how these things get resolved. >> So do you think other cloud providers are going to take a page out of what you're doing with Azure and provide similar solutions? >> Yeah, well, I've talked to a lot of customers, and this is what they want, right? There's really no doubt: no customer wants to be locked into a single ecosystem. There's nobody out there who wants that. And as the competition starts seeing an open ecosystem evolving, they're going to say, okay, I'd rather go there than to the closed ecosystem, and that's going to put pressure on the closed ecosystems. So that's the nature of competition. That's what will ultimately tip the balance on these things. >> So Juan, even though you have this capability of distributing a workload across multiple clouds, as in our Supercloud premise, it's still something that's relatively new.
It's a big decision that many people might consider somewhat of a risk. So I'm curious: who's driving the decisions for your initial customers? What do they want to get out of it? What's the decision point there? >> Yeah, I mean, this is generally driven by customers that want a specific technology in a cloud. As for the risk, I haven't seen a lot of people worry too much about it. Everybody involved in this is a very well-known, very reputable firm. Oracle's been around for over 40 years; we run most of the world's largest companies. I think customers understand we're not going to build a solution that's going to put their technology and their business at risk. And the same thing with Azure and others. So I don't see customers too worried about this being a risky move, because it's really not. And you know, everybody understands networking; at the end of the day, networking works. I mean, how does the internet work? It's a known quantity. It's not like it's some brand-new invention. What we're really doing is breaking down the barriers to interconnecting things — automating 'em, making 'em easy. So there's not a whole lot of risk here for customers. And like I said, every single customer in the world loves an open ecosystem. It's just not a question. If you ask a customer, would you rather run your technology or your business on a closed ecosystem or an open ecosystem — it's not even worth asking the question. It's a no-brainer. >> All right, so we've got to go. My last question: what do you think of the term "Supercloud"? Do you think it'll stick? >> We'll see. There are a lot of terms out there, and it's always fun to see which terms stick. It's a cool term. I like it, but the public are actually the decision makers on what sticks and what doesn't. It's very hard to predict. >> Yeah, well, it's been a lot of fun having you on, Juan. Really appreciate your time, and always good to see you. >> All right, Dave, thanks a lot. It's always fun to talk to you. >> You bet. All right, keep it right there. More Supercloud 2 content from theCUBE community. Dave Vellante for John Furrier. We'll be right back. (upbeat music)
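
[Editor's note: Juan's point that SQL is inherently a low-bandwidth protocol — one short request, lots of server-side work, one short answer — is easy to illustrate. The Python sketch below uses the python-oracledb driver; the connection details and the `orders` schema are invented placeholders, not anything from the interview.]

```python
# Sketch of SQL's "one request, one answer" shape across the ~2 ms
# Azure-to-OCI link, using python-oracledb (pip install oracledb).
# Connection details and the `orders` schema are invented placeholders.
import oracledb

conn = oracledb.connect(
    user="app",
    password="secret",
    dsn="dbhost.example.com/orclpdb1",  # Oracle endpoint reached from Azure
)
cur = conn.cursor()

# A few hundred bytes cross the inter-cloud link; the scan of millions of
# rows happens inside the database and its storage tier, not on the network.
cur.execute("""
    SELECT region, COUNT(*), SUM(amount)
    FROM   orders
    WHERE  order_date >= DATE '2023-01-01'
    GROUP  BY region
""")

# Only a handful of aggregated rows travels back, which is why cross-cloud
# latency stays tolerable for this kind of workload.
for region, n, revenue in cur:
    print(region, n, revenue)
```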

Published Date: Jan 12, 2023


AMD & Oracle Partner to Power Exadata X9M


 

(upbeat jingle) >> The history of the Exadata platform is really unique. And from my vantage point, it started earlier this century as a skunkworks inside of Oracle called Project Sage, back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not Eastern Pacific Yacht Club, for all you sailing buffs; rather, it stands for Extreme Performance Yield Computing — the enterprise-grade version of AMD's Zen architecture, which has been a linchpin of AMD's success in terms of penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today Juan Loaiza, who's executive vice president of mission-critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on theCUBE in your first appearance — thanks for coming on. Juan, let's start with you. You've been on theCUBE a number of times, as I said, and you've talked about how Exadata is a top platform for the Oracle database. We've covered that extensively. What's different and unique, from your point of view, about Exadata Cloud Infrastructure X9M on OCI? >> So as you know, Exadata is designed top-down to be the best possible platform for database. It has a lot of unique capabilities: we make extensive use of RDMA and smart storage, and we take advantage of everything we can in the leading hardware platforms. X9M is our next-generation platform, and it does exactly that. We always want to get the best that we can from the available hardware that our partners like AMD produce. And so that's what X9M is: it's faster, more capacity, lower latency, more I/Os — pushing the limits of the hardware technology. We don't want to be the limit; the database software should not be the limit. It should be the actual physical limits of the hardware. That's what X9M's all about. >> Why AMD chips in X9M, Juan? >> We're introducing AMD chips because we think they provide outstanding performance, both for OLTP and for analytic workloads. And it's really that simple: we just think the performance is outstanding in the product. >> Mark, your career is quite amazing. I could riff on history for hours, but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud? >> Well, thanks. It's really the basis of the great partnership that we have with Oracle on Exadata X9M, and that is that the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to x86, with a very strong roadmap that we've executed on schedule to our commitments. And this third generation does all of that. It uses a seven-nanometer CPU core that was designed to really bring throughput and really high efficiency to computing, and just deliver raw capabilities. And so for Exadata X9M, it's really leveraging all of that. It's really a balanced processor, and it's implemented in a way to really optimize high performance. That is the whole focus of AMD. It's where we reset the company focus years ago.
And again, it's great to see the super-smart database team at Oracle really partner with us and understand those capabilities; it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor. >> Yeah, it's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes into play. You think about a processor, and I'll say, when Juan's team first looked at it, there are general benchmarks, and the benchmarks are impressive — but they're general benchmarks. They show the base processing capability. The partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. And that's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be — where is there tuning that we could do to really boost the performance above the baseline that you get in the generic benchmarks. And that's what the teams have done. So, for instance, you look at optimizing latency for RDMA, you look at optimizing throughput on OLTP and database processing. When you go through the workloads, take the traces, break it down, and find the areas that are bottlenecking, then you can adjust — we have thousands of parameters that can be adjusted for a given workload. And that's the beauty of the partnership. We have the expertise in CPU engineering; the Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20% to 50% gains on specific workloads. It is really exciting to see. >> Mark, last question for you: how do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. First off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. So our current third generation of EPYC — EPYC is what we call our server offerings — is the 7003 series, the third gen in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway, ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, and it's going to have expanded memory capabilities, because there's CXL — Compute Express Link — which will open up even more memory opportunities. And I could go on. So that's the beauty of a deep partnership: it enables us to really take that learning going forward. It pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward. >> Yeah, you guys have been obviously very forthcoming — you have to be with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah, I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10, 15 years ago when multicore processors came out.
>> Yeah, you guys have been obviously very forthcoming. You have to be with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah. I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10, 15 years ago when multicore processors came out, and then we were on that for a while and things started stagnating. But in the last two or three years, AMD has been leading this; there's been a dramatic acceleration in innovation, so it's very exciting to be part of this, and customers are getting a big benefit from it. >> All right. Hey, thanks for coming back on The Cube today. Really appreciate your time. >> Thanks. Glad to be here. >> All right, and thank you for watching this exclusive Cube conversation. This is Dave Vellante from The Cube and we'll see you next time. (upbeat jingle)

Published Date : Sep 22 2022


Oracle Announces MySQL HeatWave on AWS


 

>> Oracle continues to enhance MySQL HeatWave at a very rapid pace. The company is now in its fourth major release since the original announcement in December 2020. One of the main criticisms of MySQL HeatWave is that it only runs on OCI, Oracle Cloud Infrastructure, and is seen as a lock-in to Oracle's cloud. Oracle recently announced that HeatWave is now available in the AWS cloud, and it announced its intent to bring MySQL HeatWave to Azure. So MySQL HeatWave on AWS is a significant TAM expansion move for Oracle because of the momentum AWS's cloud continues to show. And evidently the HeatWave engineering team has taken the development effort from OCI and is bringing that to AWS with a number of enhancements that we're going to dig into today. The senior vice president of MySQL HeatWave at Oracle, Nipun Agarwal, is back with me on a Cube conversation to discuss the latest HeatWave news, and we're eager to hear any benchmarks relative to AWS or any others. Nipun has been leading the HeatWave engineering team for over 10 years and holds over 185 patents in database technology. Welcome back to the show, and good to see you. >> Thank you. Very happy to be back. >> Now for those who might not have kept up with the news, to kick things off, give us an overview of MySQL HeatWave and its evolution so far. >> So MySQL HeatWave is a fully managed MySQL database service offering from Oracle. Traditionally, MySQL has been designed and optimized for transaction processing, so when customers of MySQL had to run analytics or machine learning, they would extract the data out of MySQL into some other database or service to do analytics processing or machine learning. MySQL HeatWave provides all these capabilities built into a single database service, which is MySQL HeatWave. So customers of MySQL don't need to move the data out; with the same database they can run transaction processing, analytics, mixed workloads, machine learning, all with very good performance and very good price performance. Furthermore, one of the design points of HeatWave is a scale-out architecture, so the system continues to scale and perform very well even when customers have very large data sizes. >> So we've seen some interesting moves by Oracle lately. The collaboration with Azure, we've covered that pretty extensively. What was the impetus here for bringing MySQL HeatWave onto the AWS cloud? What were the drivers that you considered? >> So one of the observations is that a very large percentage of users of MySQL HeatWave are AWS users who are migrating off Aurora, so already we see that a good percentage of MySQL HeatWave customers are migrating from AWS. However, there are some AWS customers who are still not able to migrate to MySQL HeatWave on OCI. One reason is the exorbitant egress costs: in order to migrate the workload from AWS to OCI, the egress charges are very high, which becomes prohibitive for the customer. The second example we have seen is that the latency of accessing a database which is outside of AWS is very high. So there's a class of customers who would like to get the benefits of MySQL HeatWave but were unable to do so, and with this support for MySQL HeatWave inside of AWS, these customers can now get all of the benefits of MySQL HeatWave without having to pay the high egress fees or suffer the poor latency that comes from leaving AWS.
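To make the "single database" point above concrete: the same connection can serve a transactional write and an analytic read, with no ETL to a separate system. The following is a minimal sketch using the mysql-connector-python package; the endpoint, credentials, and orders table are hypothetical placeholders, and on a real HeatWave deployment the offload of qualifying analytic queries happens transparently.

import mysql.connector  # pip install mysql-connector-python

# Hypothetical endpoint and credentials for a MySQL HeatWave instance.
cnx = mysql.connector.connect(
    host="heatwave.example.com",
    user="app_user",
    password="app_password",
    database="sales",
)
cur = cnx.cursor()

# OLTP-style write: a single-row transactional insert.
cur.execute(
    "INSERT INTO orders (customer_id, amount) VALUES (%s, %s)",
    (42, 99.50),
)
cnx.commit()

# Analytics-style read on the same database: no extract into a
# separate warehouse is needed.
cur.execute(
    "SELECT customer_id, SUM(amount) FROM orders "
    "GROUP BY customer_id ORDER BY SUM(amount) DESC LIMIT 10"
)
for customer_id, total in cur:
    print(customer_id, total)

cur.close()
cnx.close()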
>> Okay, so you're basically meeting the customers where they are. So was this a straightforward lift and shift from Oracle Cloud Infrastructure to AWS? >> No, it is not, because one of the design goals we have with MySQL HeatWave is that we want to provide our customers with the best price performance regardless of the cloud. So when we decided to offer MySQL HeatWave on AWS, we optimized MySQL HeatWave for it as well. One of the things to point out is that this is a service where the data plane, control plane and console are natively running on AWS, and the benefit of doing so is that we can optimize MySQL HeatWave for the AWS architecture. In addition to that, we have also announced a bunch of new capabilities as a part of the service, which will also be available to the MySQL HeatWave customers on OCI, but we just announced them and we're offering them as a part of the MySQL HeatWave offering on AWS. >> So I just want to make sure I understand that. It's not like you just wrapped your stack in a container and stuck it into AWS to be hosted. You're saying you're actually taking advantage of the capabilities of the AWS cloud natively? And I think you've made some other enhancements as well that you're alluding to. Can you maybe elucidate on those? >> Sure. So for starters, we have taken the MySQL HeatWave code and optimized it for the AWS infrastructure, with its compute and network, and as a result customers get very good performance and price performance with MySQL HeatWave in AWS. That's the first thing. Second, we have designed a new interactive console for the service, which means that customers can now provision their instances with the console, but in addition they can also manage their schemas and run queries directly from the console. Autopilot is integrated into the console, and we have introduced performance monitoring, so there are a lot of capabilities which we have introduced as a part of the new console. The third thing is that we have added a bunch of new security features, exposing some of the security features which were part of the MySQL Enterprise Edition as a part of the service, which gives customers a choice of using these features to build more secure applications. And finally, we have extended MySQL Autopilot for a number of OLTP use cases. In the past, MySQL Autopilot had a lot of capabilities for analytics, and now we have augmented it to offer capabilities for OLTP workloads as well. >> There was something in your press release called auto thread pooling. It says it provides higher and sustained throughput at high concurrency by determining the optimal number of transactions which should be executed. What is that all about, the auto thread pooling? It seems pretty interesting. How does it affect performance? Can you help us understand that?
>> Yes, and this is one of the capabilities I was alluding to, which we have added in MySQL Autopilot for transaction processing. So here is the basic idea. If you have a system where a large number of OLTP transactions are coming in at a high degree of concurrency, in many existing MySQL-based systems it can lead to a state where a few transactions are executing but a bunch of them get blocked. With auto thread pooling, what we basically do is workload-aware admission control, and what this does is figure out the right scheduling for all of these transactions, so that transactions are either executing, or as soon as something frees up they can start executing; there's no transaction which is blocked. The advantage to the customer of this capability is twofold. One, you get significantly better throughput compared to a service like Aurora at high levels of concurrency. At high concurrency, for instance, MySQL HeatWave, because of auto thread pooling, offers up to 10 times higher throughput compared to Aurora. That's the first benefit: better throughput. The second advantage is that the throughput of the system never drops, even at high levels of concurrency, whereas in the case of Aurora the throughput goes up, but then at high concurrency — say starting at a level of 500 or so, depending upon the underlying shape they're using — the throughput just drops, whereas with MySQL HeatWave the throughput never drops. Now, the ramification for the customer is that if the throughput is not going to drop, the user can start off with a small shape, get the performance, and be assured that even if the workload increases they will never get performance which is worse than what they were getting at lower levels of concurrency. So this leads to customers provisioning a shape which is just right for them, and if they need to, they can go with a larger shape, but they don't overpay. So those are the two benefits: better performance, sustained regardless of the level of concurrency.
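The admission-control idea is easy to sketch: cap the number of transactions executing at once and queue the rest, so the engine stays busy without collapsing under contention. The following is a conceptual toy in Python, not HeatWave's actual scheduler; MAX_RUNNING and the simulated work are purely illustrative.

import threading
import time

MAX_RUNNING = 8  # illustrative cap on concurrently executing transactions
admission = threading.Semaphore(MAX_RUNNING)
completed = 0
lock = threading.Lock()

def run_transaction(txn_id):
    global completed
    # Workload-aware admission control: rather than letting every
    # session contend (and block) inside the engine, excess work
    # waits here until an execution slot frees up.
    with admission:
        time.sleep(0.01)  # stand-in for the actual transaction work
        with lock:
            completed += 1

# Simulate a burst of 200 concurrent sessions.
threads = [threading.Thread(target=run_transaction, args=(i,)) for i in range(200)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{completed} transactions in {time.time() - start:.2f}s")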
>> So how do we quantify that? I know you've got some benchmarks. How can you share comparisons with other cloud databases? I'm especially interested in Amazon's own databases, which are obviously very popular. And are you publishing those again on GitHub, as you have done in the past? Take us through the benchmarks. >> Sure. So benchmarks are important because they give customers a sense of what performance and what price performance to expect. We have run a number of benchmarks, and yes, all these benchmarks are available on GitHub for customers to take a look at. We have performance results on all three classes of workloads: OLTP, analytics and machine learning. So let's start with OLTP. For OLTP, primarily because of the auto thread pooling feature, we show that for TPC-C on a 10 gigabyte dataset at high levels of concurrency, HeatWave offers up to 10 times better throughput, and this performance is sustained, whereas in the case of Aurora the performance really drops. So that's the first result: on a 10 gigabyte TPC-C at high concurrency, the throughput is 10 times better than Aurora. For analytics, we have done a comparison of MySQL HeatWave in AWS with Redshift, Snowflake and Google BigQuery. We find that the price performance of MySQL HeatWave compared to Redshift is seven times better. So MySQL HeatWave in AWS provides seven times better price performance than Redshift. That's a very interesting result to us, which means that customers of Redshift are really going to take the service seriously, because they're going to get seven times better price performance, and this is all running in AWS, so compared... >> Okay, carry on. >> And then I was going to say, compared to Snowflake, in AWS we offer 10 times better price performance, and compared to Google BigQuery, we offer 12 times better price performance. And this is based on a 4 terabyte TPC-H workload; the results are available on GitHub. And then the third category is machine learning, and for machine learning training, the performance of MySQL HeatWave is 25 times faster compared to Redshift ML. So for all three workloads we have benchmark results, and all of these scripts are available on GitHub.
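For readers who want the arithmetic behind claims like these: price performance in such benchmarks is typically the cost to complete the workload, i.e. cluster price per hour times runtime, with lower being better. A worked example with made-up numbers, not the actual inputs behind the published figures:

# Price performance = (cluster cost per hour) x (hours to finish the
# benchmark). The numbers below are invented for illustration only.
systems = {
    "Service A": {"cost_per_hour": 32.0, "runtime_hours": 1.0},
    "Service B": {"cost_per_hour": 16.0, "runtime_hours": 3.5},
}

scores = {
    name: s["cost_per_hour"] * s["runtime_hours"] for name, s in systems.items()
}
baseline = scores["Service A"]
for name, score in scores.items():
    print(f"{name}: ${score:.2f} per run, {score / baseline:.1f}x the baseline cost")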
>> Okay, so you're comparing MySQL HeatWave on AWS to Redshift and Snowflake on AWS, and you're comparing MySQL HeatWave on AWS to BigQuery, obviously running on Google. You know, one of the things Oracle has done in the past, where I've always tried to call fouls, is to, like, double your price for running the Oracle database — not HeatWave, but Oracle Database — on AWS, and then show how much cheaper it is on Oracle, and I'd be like, okay, come on. But they're not doing that here. You're taking MySQL HeatWave on AWS, and I presume you're using the same pricing for whatever EC2 instances, storage, reserved instances you're using. That's apples to apples on AWS, and you have to obviously do some kind of mapping for Google, for BigQuery. Can you just verify that for me? >> We are being more than fair, on two dimensions. The first is that when I'm talking about the price performance for analytics with MySQL HeatWave, the cost I'm quoting for MySQL HeatWave is the cost of running transaction processing, analytics and machine learning — a fully loaded cost for MySQL HeatWave — whereas when I'm talking about Redshift or Snowflake, I'm just talking about the cost of those databases for running analytics; it does not include the source database, which may be Aurora or some other database. So that's the first aspect: for HeatWave it's the cost for running all three kinds of workloads, whereas for the competition it's only for running analytics. The second thing is that for those services, whether it's Redshift or Snowflake, we're talking about a one-year, fully paid up front cost. That's what most customers would pay: many customers will sign a one-year contract and pay all the costs ahead of time because they get a discount. So we're using that price, and in the case of Snowflake, the cost we're using is their standard edition price, not the enterprise edition price. So yes, we are being more than fair in this comparison. >> Yeah, I think that's an important point. I saw an analysis by Marc Staimer on Wikibon where he was doing the TCO comparisons, and I mean, if you have to use two separate databases with two separate licenses, and you have to do ETL and all the labor associated with that, that's a big deal, and you're not even including that aspect in your comparison. So that's pretty impressive. To what do you attribute that? You know, given that, unlike OCI, within the AWS cloud you don't have as much control over the underlying hardware. >> So look, hardware is one aspect, but there are three things which give us this advantage. The first is that we have designed HeatWave with a scale-out architecture. So we came up with new algorithms; one of the design points for HeatWave is a massively partitioned architecture, which leads to a very high degree of parallelism. That's the first part. The second thing is that although we don't have control over the hardware, the second design point for HeatWave is that it is optimized for commodity cloud and commodity infrastructure. So we characterize what compute we get, how much network bandwidth we get, how much object store bandwidth we get in AWS, and we have tuned HeatWave for that. That's the second point. And the third thing is MySQL Autopilot, which provides machine learning based automation. What it does is that, as the user's workload is running, it learns from it and improves various parameters in the system, so the system keeps getting better as it processes more and more queries. And it is these three things, as a result of which we get a significant edge over the competition.
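The "massively partitioned architecture" Nipun mentions can be illustrated in miniature: hash-partition the rows so each partition can be scanned independently, then combine the partial results at the end. A toy sketch of the idea, not HeatWave's actual algorithms:

from concurrent.futures import ThreadPoolExecutor

NUM_PARTITIONS = 8

def partition(rows, num_partitions):
    # Hash-partition rows by key so each partition can be processed
    # independently -- the basis of scale-out query processing.
    parts = [[] for _ in range(num_partitions)]
    for key, value in rows:
        parts[hash(key) % num_partitions].append((key, value))
    return parts

def scan_partition(part):
    # Stand-in for a partition-local aggregation.
    return sum(value for _, value in part)

rows = [(f"customer-{i}", i % 100) for i in range(100_000)]
parts = partition(rows, NUM_PARTITIONS)

with ThreadPoolExecutor(max_workers=NUM_PARTITIONS) as pool:
    partials = list(pool.map(scan_partition, parts))

print("total:", sum(partials))  # partial results combined at the end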
>> Interesting. I mean, look, any ISV can go on any cloud and take advantage of it, and I love it; we live in a new world. How about machine learning workloads? What did you see there in terms of performance and benchmarks? >> Right. So for machine learning we offer three capabilities: training, which is fully automated, inference, and explanations. One of the things which many of our customers coming from the enterprise told us is that explanations are very important to them, because customers want to know why the system chose a certain prediction. So we offer explanations for all models which have been trained by HeatWave. That's the first thing. Now, one of the interesting things about training is that training is usually the most expensive phase of machine learning, so we have spent a lot of time improving the performance of training. We have a bunch of techniques which we have developed inside of Oracle to improve the training process. For instance, we have meta-learning and proxy models, which really give us an advantage; we use adaptive sampling; and we have invented techniques for parallelizing the hyperparameter search. So as a result of a lot of this work, our training is about 25 times faster than Redshift ML, and all the data stays inside the database — all this processing is being done inside the database, so it's much faster. And I want to point out that there is no additional charge for HeatWave customers, because we're using the same cluster; you're not invoking a separate service. So all of these machine learning capabilities are offered at no additional charge inside the database, with performance that is significantly faster than the competition. >> Are you taking advantage of — or is there any need, or not need, but any advantage that you can get — by exploiting things like Graviton? We've talked about that a little bit in the past. Or Trainium? You just mentioned training, so the custom silicon that AWS is doing, are you taking advantage of that? Do you need to? Can you give us some insight there? >> So there are two things, right? We're always evaluating what choices we have from a hardware perspective, so obviously it's right for us to look at these, and all the things you mention, we have considered them. But there are two things to consider. One is that HeatWave is a memory-intensive system, so memory is the dominant cost; the processor is a portion of the cost, but memory is the dominant cost. What we have evaluated and found is that the current shape which we are using is going to provide our customers with the best price performance. That's the first thing. The second thing is that there are opportunities at times when we can use a specialized processor for accelerating part of the workload, but then it becomes a matter of the cost to the customer. The advantage of our current architecture is that on the same hardware, customers are getting very good OLTP performance, very good analytics performance and very good machine learning performance. If we went with a specialized processor, it might accelerate, say, machine learning, but then it's an additional cost which the customers would need to pay. We are very sensitive to the customer's request, which is usually to provide very good performance at a very low cost, and we feel that the current design provides customers very good performance and very good price performance. >> So part of that is architectural, the memory-intensive nature of HeatWave; the other is AWS pricing. If AWS pricing were to flip, it might make more sense for you to take advantage of something like Graviton. Okay, great. Thank you. Now back to the benchmarks. Benchmarks are sometimes artificial, right? A car can go from 0 to 60 in two seconds, but I might not be able to experience that level of performance. Do you have any real-world numbers from customers that have used MySQL HeatWave on AWS, and how they look at performance? >> Yes, absolutely. The MySQL HeatWave service on AWS has been in beta since November, so we have a lot of customers who have tried the service. And what we have found is that many of these customers are planning to migrate from Aurora to MySQL HeatWave, and what they find is that the performance difference is actually much more pronounced than what I was talking about, because with Aurora, real-world performance is often much poorer than the benchmark numbers suggest. So in some of these cases, customers found improvements from 60 times up to 240 times: HeatWave was up to 240 times faster and much less expensive. And the third thing, which is noteworthy, is that customers don't need to change their applications. So if you ask for the top three reasons why customers are migrating, it's because of this: no change to the application, much faster, and cheaper. In some cases, one customer found that the performance of their applications for complex queries was about 60 to 90 times faster; another customer found that the performance of HeatWave compared to Aurora was 139 times faster. So yes, we do have many such examples from real workloads from customers who have tried it, and across all of them what we find is that HeatWave offers better performance, lower cost, and a single database that is compatible with all existing MySQL-based applications and workloads.
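One of the training optimizations Nipun mentioned above, adaptive sampling, can be sketched as: train on a growing sample and stop once the validation score stops improving, rather than always paying for the full dataset. A toy illustration using scikit-learn on synthetic data, not HeatWave's implementation; the 0.002 plateau threshold is an arbitrary choice.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

sample_size = 1_000
prev_score = 0.0
while True:
    n = min(sample_size, len(X_train))
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    score = model.score(X_val, y_val)
    # Stop growing the sample once extra data stops helping much.
    if score - prev_score < 0.002 or n == len(X_train):
        break
    prev_score = score
    sample_size *= 2

print(f"trained on {n} rows, validation accuracy {score:.3f}")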
>> Really impressive. The analysts I talk to are all gaga over HeatWave, and I can see why. Okay, last question, maybe two in one. What's next, in terms of new capabilities that customers are going to be able to leverage, and any other clouds that you're thinking about? We talked about that upfront, but... >> So in terms of capabilities, as you have seen, we have been non-stop attending to feedback from customers and reacting to it, and we have also been innovating organically. That's something which is going to continue, so yes, you can fully expect that the pace will not slow down and we'll continue to innovate. And with respect to the other clouds: yes, we are planning to support MySQL HeatWave on Azure, and that is something which will be announced in the near future. >> Great. All right, thank you. Really appreciate the overview. Congratulations on the work, and really exciting news that you're moving MySQL HeatWave into other clouds. It's something that we've been expecting for some time, so it's great to see you guys making that move, and as always, great to have you on the Cube. >> Thank you for the opportunity. >> All right. And thank you for watching this special Cube conversation. I'm Dave Vellante, and we'll see you next time.

Published Date : Sep 14 2022


AMD Oracle Partnership Elevates MySQL HeatWave


 

(upbeat music) >> For those of you who've been following the cloud database space, you know that MySQL HeatWave has been on a technology tear over the last 24 months, with Oracle claiming record breaking benchmarks relative to other database platforms. So far, those benchmarks remain industry leading as competitors have chosen not to respond, perhaps because they don't feel the need to, or maybe they don't feel that doing so would serve their interest. Regardless, the HeatWave team at Oracle has been very aggressive about its performance claims, making lots of noise, challenging the competition to respond, publishing their scripts to GitHub. But so far, there are no takers. Customers seem to be picking up on these moves by Oracle, though, and it's likely the performance numbers resonate with them. Now, the other area we want to explore, which we haven't thus far, is the engine behind HeatWave, and that is AMD. AMD's EPYC processors have been the powerhouse on OCI, running MySQL HeatWave since day one. And today we're going to explore how these two technology companies are working together to deliver these performance gains and some compelling TCO metrics. In fact, a recent Wikibon analysis from senior analyst Marc Staimer made some TCO comparisons in OLAP workloads relative to AWS, Snowflake, GCP, and Azure databases; you can find that research on wikibon.com. And with that, let me introduce today's guests, Nipun Agarwal, senior vice president of MySQL HeatWave, and Kumaran Siva, who's the corporate vice president for strategic business development at AMD. Welcome to theCUBE, gentlemen. >> Welcome. Thank you. >> Thank you, Dave. >> Hey Nipun, you and I have talked a lot about this. You've been on theCUBE a number of times talking about MySQL HeatWave. But for viewers who may not have seen those episodes, maybe you could give us an overview of HeatWave and how it's different from competitive cloud database offerings. >> Sure. So MySQL HeatWave is a fully managed MySQL database service offering from Oracle. It's a single database which can be used to run transaction processing, analytics and machine learning workloads. In the past, MySQL has been designed and optimized for transaction processing, so customers of MySQL, when they had to run analytics or machine learning, would need to extract the data out of MySQL into some other database or service to run analytics or machine learning. MySQL HeatWave offers a single database for running all kinds of workloads, so customers don't need to extract data into some other database. In addition to having a single database, MySQL HeatWave is also very performant compared to other cloud databases, and it is very price competitive. So the advantages are: single database, very performant, and very good price performance. >> Yes. And you've published some pretty impressive price performance numbers against competitors. Maybe you could describe those benchmarks and highlight some of the results, please. >> Sure. So one thing to note is that the performance of any database is going to vary — the performance advantage is going to vary based on the size of the data and the specific workloads, so the mileage varies; that's the first thing to know. So what we have done is publish multiple benchmarks. We have benchmarks on TPC-H and TPC-DS, and we have benchmarks on different data sizes, because based on the customer's workload, the mileage is going to vary, so we want to give customers a broad range of comparisons so that they can decide for themselves.
So in a specific case, where we are running on a 30 terabyte TPC-H workload, HeatWave is about 18 times better price performance compared to Redshift. 18 times better compared to Redshift, about 33 times better price performance compared to Snowflake, and 42 times better price performance compared to Google BigQuery. This is on 30 terabyte TPC-H. Now, if the data size is different, or the workload is different, the characteristics may vary slightly, but this is just to give a flavor of the kind of performance advantage MySQL HeatWave offers. >> And then my last question before we bring in Kumaran. We've talked about the secret sauce being the tight integration between hardware and software, but would you add anything to that? What is that secret sauce in HeatWave that enables you to achieve these performance results, and what does it mean for customers? >> So there are three parts to this. One is HeatWave has been designed with a scale-out architecture in mind, so we have invented and implemented new algorithms for scale-out query processing for analytics. The second aspect is that HeatWave has been really optimized for cloud — commodity cloud — and that's where AMD comes in. So for instance, many of the partitioning schemes we have for processing in HeatWave, we optimize them for the L3 cache of the AMD processor. The thing which is very important to our customers is not just the sheer performance but the price performance, and that's where we have had a very good partnership with AMD, because not only does AMD help us provide very good performance, but also very good price performance. In all these numbers which I was showing, a big part of it is because we are running on AMD, which provides very good price performance. So that's the second aspect. And the third aspect is MySQL Autopilot, which provides machine learning based automation. So it's really these three things — a combination of new algorithms designed for scale-out query processing, optimization for commodity cloud hardware, specifically AMD processors, and third, MySQL Autopilot — which gives us this performance advantage.
>> Yeah, absolutely. So in the case of HeatWave, as Nipun alluded to, we have very large L3 caches, right? In our very top end parts, like the Milan-X devices, we can go all the way up to 768 megabytes of L3 cache, and that gives you just enormous performance gains. And that's part of what we're seeing with HeatWave today — and note that they're currently on the second generation Rome-based product, the 7002 series product line, running with the 64 cores, but as time goes on, they'll be adopting the next generation Milan as well. And the other part of it is, as our chiplet architecture has evolved — from the first generation Naples way back in 2017, where we had multiple memory domains and a sort of NUMA architecture — today we've really optimized that architecture. We use a common I/O die that has all of the memory channels attached to it. And what that means is that these scale-out applications like HeatWave are able to really scale very efficiently as they go from a small domain of CPUs to, for example, the entire chip, all 64 cores. That scaling has been a key focus for AMD, and being able to design and build architectures that can take advantage of that, and then have applications like HeatWave that scale so well on it, has been a key aim of ours. >> And Gen 3 moving up the Italian countryside. Nipun, you've taken the somewhat unusual step of posting the benchmark parameters, making them public on GitHub. Now, HeatWave is relatively new, and people felt that when Oracle gained ownership of MySQL it would let it wilt on the vine in favor of Oracle database, so you lost some ground, and now you're getting very aggressive with HeatWave. What's the reason for publishing those benchmark parameters on GitHub? >> So the main reason for us to publish price performance numbers for HeatWave is to communicate to our customers a sense of the benefits they're going to get when they use HeatWave. But we want to be very transparent, because as I said, the performance advantages for customers may vary based on the data size and the specific workloads. So one of the reasons for us to publish all these scripts on GitHub is transparency: we want customers to take a look at the scripts, know what we have done, and be confident that we stand by the numbers we are publishing, and they're very welcome to try these numbers themselves. In fact, we have had customers who have downloaded the scripts from GitHub and run them on our service to validate them. The second aspect is that in some cases there may be deviations between what we are publishing and what the customer would like to run in their production deployments, so it provides an easy way for customers to take the scripts, modify them in ways which suit their real world scenario, and run them to see what the performance advantages are. So that's the main reason: first, transparency, so the customers can see what we are doing and trust the comparison, and B, if they want to modify the scripts to suit their needs and then see what the performance of HeatWave is, they're very welcome to do so.
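A customer reproducing or tweaking the published scripts ultimately needs a timing harness for a like-for-like comparison. Here is a minimal sketch of that step; run_query is a hypothetical stand-in for executing each benchmark query against your own instance (for example via mysql-connector-python), and the query list would come from the published scripts, possibly modified to match your workload.

import statistics
import time

def run_query(sql):
    # Hypothetical stand-in: execute `sql` against your own database
    # instance and return when the result set is fully consumed.
    time.sleep(0.05)

# Placeholder queries; substitute those from the published scripts.
queries = ["SELECT ...", "SELECT ..."]

for sql in queries:
    timings = []
    for _ in range(5):  # repeat each query to smooth out noise
        start = time.perf_counter()
        run_query(sql)
        timings.append(time.perf_counter() - start)
    print(f"{sql[:30]}: median {statistics.median(timings) * 1000:.1f} ms")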
>> So have customers done that? Have they taken the benchmarks? I mean, if I were a competitor, honestly, I wouldn't get into that food fight because of the impressive performance, unless I had to. Have customers picked up on that, Nipun? >> Absolutely. In fact, we have had many customers who have benchmarked the performance of MySQL HeatWave against other services, and the fact that the scripts are available gives them a very good starting point. They've also tweaked those queries in some cases to see what the delta would be. And in some cases, customers got back to us saying, hey, the performance advantage of HeatWave is actually slightly higher than what was published — what is the reason? And the reason was that the customers were trying the latest version of the service, while our benchmark results had been posted, let's say, two months back. The service had improved in those two to three months, so customers actually saw better performance. So yes, absolutely. We have seen customers download the scripts, try them, modify them to some extent, and then do the comparison of HeatWave with other services. >> Interesting. Maybe a question for both of you: how is the competition responding to this? They haven't said, "Hey, we're going to come up with our own benchmarks," which is very common; you oftentimes see that. Although, for instance, Snowflake hasn't responded to Databricks, so that's not their game. But if customers are actually putting a lot of faith in the benchmarks and using them for buying decisions, then it's inevitable. How have you seen the competition respond to the MySQL HeatWave and AMD combo? >> So maybe I can take the first crack from the database service standpoint. When customers have more choice, it is invariably an advantage for the customer, because then the competition is going to react, right? The way we have seen the reaction is that we do believe the other database services are going to take a closer look at price performance, because if you're offering such good price performance, the vendors are already looking at it, and there are instances where they have offered, let's say, discounts to customers to at least close the gap to some extent. And the second thing would be in terms of capability. One of the things which I should have mentioned even earlier on is that not only does MySQL HeatWave on AMD provide very good price performance on, say, a small cluster, but it does so all the way up to a cluster size of 64 nodes, which has about 1000 cores. The point is that HeatWave performs very well both on a small system as well as at huge scale-out, and this is again one of those things which is a differentiation compared to other services, so we expect that even other database services will have to improve their offerings to provide the same good scale factor, which customers are now starting to expect with MySQL HeatWave.
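The "scale factor" Nipun refers to is usually quantified as parallel efficiency: speedup relative to a single node, divided by the node count. A small worked example with illustrative timings, not measured HeatWave figures:

# Parallel efficiency = speedup / node_count, where speedup is the
# single-node runtime divided by the N-node runtime. The timings
# below are illustrative, not measured results.
runtimes = {1: 640.0, 8: 85.0, 16: 44.0, 32: 23.5, 64: 12.8}

base = runtimes[1]
for nodes, t in runtimes.items():
    speedup = base / t
    efficiency = speedup / nodes
    print(f"{nodes:>2} nodes: speedup {speedup:5.1f}x, efficiency {efficiency:.0%}")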
>> Kumaran, anything you'd add to that? I mean, you guys are an arms dealer, you love all your OEMs, but at the same time, you've got chip competitors, silicon competitors. How do you see the competitive-- >> I'd say the broader answer and the big picture for AMD is that we're very maniacally focused on our customers, right? And OCI and Oracle are huge and important customers for us, and this particular use case is extremely interesting, both in that it takes advantage of our architecture very well and that it pulls out some of the value that AMD brings. I think from a big picture standpoint, our aim is to execute, to bring out generations of CPUs — kind of, you know, say what we do and do what we say. And from that point of view, we're hitting the schedules that we commit to, and being able to bring out the latest technology in a TCO value proposition that generationally keeps OCI and HeatWave ahead. That's the crux of our partnership here. >> Yeah, the execution's been obvious for the last several years. Kumaran, staying with you, how would you characterize the collaboration between the AMD engineers and the HeatWave engineering team? How do you guys work together? >> No, I'd say we're in a very, very deep collaboration. There are a few aspects where we've actually been working together very closely on the code: being able to optimize for the large L3 cache that AMD has, so as to take advantage of that, and also to take advantage of the scaling. Our architecture is chiplet based, so we have the CPU cores on what we call CCDs, and for the inter-CCD communication there are opportunities to optimize at the application level, and that's something we've been engaged with. In the broader engagement, we are going back now for multiple generations with OCI, and there's a lot of input that now resonates in the product line itself. So we value this very close collaboration with HeatWave and OCI. >> Yeah, and the cadence, Nipun — you and I have talked about this quite a bit. The cadence has been quite rapid; it's like a constant cycle, and every couple of months I turn around there's something new on HeatWave. But a question again for both of you: what new things do you think organizations and customers are going to be able to do with MySQL HeatWave if you look out the next 12 to 18 months? Is there anything you can share at this time about future collaborations? >> Right. Look, 12 to 18 months is a long time; there's going to be a lot of innovation, a lot of new capabilities coming out in MySQL HeatWave. But even based on what we are currently offering, the trend we are seeing is that customers are bringing more classes of workloads. We started off with OLTP for MySQL, then it went to analytics, then we increased it to mixed workloads, and now we offer machine learning as well. So one thing we are seeing is more and more classes of workloads coming to MySQL HeatWave. And the second is scale: the kind of data volumes people are using HeatWave for, to process these mixed workloads, analytics, machine learning and OLTP, is increasing. Now, along the way we are making it simpler to use and more cost-effective to use. For instance, last time when we talked, we had introduced real-time elasticity, and that's something which is a very, very popular feature, because customers want the ability to scale out or scale down very efficiently. That's something we provided. We provided support for compression. So all of these capabilities are making it more efficient for customers to run a larger part of their workloads on MySQL HeatWave, and we will continue to make it richer in the next 12 to 18 months.
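The value of real-time elasticity plus compression shows up in simple sizing arithmetic: pick the smallest cluster whose memory fits the compressed working set, and resize as the data grows or shrinks. A back-of-envelope sketch; the per-node capacity, price, and compression ratio below are hypothetical, not HeatWave's actual figures.

import math

NODE_CAPACITY_GB = 500     # hypothetical usable memory per node
NODE_COST_PER_HOUR = 10.0  # hypothetical price per node

def nodes_needed(data_gb, compression_ratio=2.0):
    # With compression, the in-memory footprint shrinks, so fewer
    # nodes are needed for the same raw data size.
    footprint = data_gb / compression_ratio
    return max(1, math.ceil(footprint / NODE_CAPACITY_GB))

for data_gb in (400, 2_000, 10_000):
    n = nodes_needed(data_gb)
    print(f"{data_gb:>6} GB raw -> {n:>3} nodes, ${n * NODE_COST_PER_HOUR:.0f}/hour")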
>> Thank you. Kumaran, anything you'd add to that? We'll give you the last word as we've got to wrap it. >> No, absolutely. In the next 12 to 18 months we will have our Zen 4 CPUs out, so this could potentially go into the next generation of the OCI infrastructure. This would be with the Genoa and then Bergamo CPUs, taking us to 96 and 128 cores, with 12 channels of DDR5. This capability, when applied to an application like HeatWave, you can see will potentially open up another order of magnitude of use cases, right? And we're excited to see what customers can do with that. It certainly will make this service, and the cloud in general — this cloud migration — I think even more attractive. So we're pretty excited to see how things evolve in this period of time. >> Yeah, the innovations are coming together. Guys, thanks so much, we've got to leave it there. Really appreciate your time. >> Thank you. >> All right, and thank you for watching this special Cube conversation. This is Dave Vellante, and we'll see you next time. (soft calm music)

Published Date : Sep 14 2022

Oracle & AMD Partner to Power Exadata X9M


 

[Music] >> The history of Exadata in the platform is really unique, and from my vantage point it started earlier this century as a skunkworks inside of Oracle called Project Sage, back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. And I remember the Oracle HP Database Machine, which was announced at Oracle OpenWorld almost 15 years ago. And then Exadata kept evolving. After the Sun acquisition it became a platform that had tightly integrated hardware and software, and today Exadata keeps evolving, almost like a chameleon, to address more workloads and reach new performance levels. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure, and introduced the ability to run the Autonomous Database service or the Exadata Database service. You know, Oracle often talks about what they call stock exchange performance levels — kind of no description needed — and sort of related capabilities. The company, as we know, is fond of putting out benchmarks and comparisons with previous generations of product, and sometimes competitive products, that underscore the progress that's being made with Exadata, such as 87 percent more IOPS, with metrics for latency measured in microseconds, mics, instead of milliseconds, and many other numbers that are industry-leading and compelling, especially for mission-critical workloads. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not Eastern Pacific Yacht Club, for all your sailing buffs; rather it stands for Extreme Performance Yield Computing, the enterprise grade version of AMD's Zen architecture, which has been a linchpin of AMD's success in terms of penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today Juan Loaiza, who's executive vice president of mission critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on theCube in your first appearance, thanks for coming on. >> Yep, happy to be here. Thank you. >> All right, Juan, let's start with you. You've been on theCube a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle database. We've covered that extensively. What's different and unique from your point of view about Exadata Cloud Infrastructure X9M on OCI? >> Yeah, so as you know, Exadata is designed top down to be the best possible platform for database. It has a lot of unique capabilities, like we make extensive use of RDMA and smart storage. We take advantage of everything we can in the leading hardware platforms. X9M is our next generation platform and it does exactly that. We always want to get all the best that we can from the available hardware that our partners like AMD produce, and so that's what X9M is: it's faster, more capacity, lower latency, more IOPS, pushing the limits of the hardware technology. We don't want to be the limit; the database software should not be the limit, it should be the actual physical limits of the hardware, and that's what X9M is all about. >> Why, Juan, AMD chips in X9M? >> Yeah, so we're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads, and it's really that simple: we just think the performance is outstanding in the product.
>> Yeah, Mark, your career is quite amazing. I've been around long enough to remember the transition to CMOS from emitter-coupled logic in the mainframe era, back when you were at IBM; that was an epic technology call at the time. I was of course steeped as an analyst at IDC in the PC era, and like many witnessed the tectonic shift that Apple's iPod and iPhone caused. And the timing of you joining AMD is quite important in my view, because it coincided with the year that PC volumes peaked and marked the beginning of what I call a stagflation period for x86. I could riff on history for hours, but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud? >> Well, thanks, and it's really the basis of, I think, the great partnership that we have with Oracle on Exadata X9M, and that is that the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to x86, with a very, very strong roadmap that we've executed on schedule to our commitments, and this third generation does all of that. It uses a seven nanometer CPU core that was designed to really bring throughput, bring really high efficiency to computing, and just deliver raw capabilities. And so for Exadata X9M, it's really leveraging all of that. It's implemented in up to 64 cores per socket, it's got anywhere from 128 to 160 lanes of PCIe Gen 4 I/O connectivity, so you can really attach all of the necessary infrastructure and storage that's needed for Exadata performance, and also memory — you have to feed the beast for those analytics and for the OLTP that Juan was talking about — so it has eight channels of high performance DDR4 memory. So it's really a balanced processor, and it's implemented in a way to really optimize high performance. That is the whole focus of AMD; it's where we reset the company's focus years ago. And again, it's great to see the super smart database team at Oracle really partner with us, understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor. >> Yeah, it's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes to play. You think about a processor, and I'll say, when Juan's team first looked at it, there are general benchmarks, and the benchmarks are impressive, but they're general benchmarks. They showed the base processing capability, but the partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. And that's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, where is there tuning that we could in fact really boost the performance above that baseline that you get in the generic benchmarks. And that's what the teams have done. So for instance, you look at optimizing latency to RDMA,
you look at optimizing throughput on OLTP and database processing. When you go through the workloads and you take the traces and you break it down and you find the areas that are bottlenecking, then you can adjust; we have thousands of parameters that can be adjusted for a given workload. And that's again the beauty of the partnership. So we have the expertise on the CPU engineering, the Oracle Exadata team knows innately what the customers need to get the most out of their platform, and when the teams came together we actually achieved anywhere from 20 percent to 50 percent gains on specific workloads. It's really exciting to see. >> Okay, so I want to follow up on that. Is that different from the competition? How are you driving customer value? You mentioned some percentage improvements — are you measuring primarily with latency? How do you look at that? >> Well, you know, we are differentiated in a number of factors. We bring a higher core density — the highest core density, certainly in x86 — and moreover, we've led the industry in how to scale those cores. We have a very high performance fabric that connects them together, so as a customer needs more cores, we scale anywhere from 8 to 64 cores. But the trick is, as you add more cores, you want the scaling to be as close to linear as possible, and so that's a differentiation we have, and we enable that with that balanced computer of CPU, I/O and memory that we design. But the key is, we pride ourselves at AMD on being able to partner in a very deep fashion with our customers. We listen very well, and I think that's what we've had the opportunity to do with Juan and his team. We appreciate that, and that is how we got the kind of performance benefits that I described earlier. It's working together almost like one team, and bringing that best possible capability to the end customers. >> Great, thank you for that. Juan, I want to come back to you. Can both the Exadata Database service and the Autonomous Database service take advantage of the Exadata Cloud X9M capabilities that are in that platform? >> Yeah, absolutely. You know, Autonomous is basically our self-driving version of the Oracle database, but fundamentally it is the same database core, so both of them will take advantage of the tremendous performance that we're getting. Now, you know, when Mark talks about 64 cores, that's per chip; we have two chips — it's a two-socket server — so it's a 128-way processor, and then from our point of view there are two threads per core, so from the database point of view it's a 256-way processor. So there's a lot of raw performance there, and we've done a lot of work with the AMD team to make sure that we deliver that to our customers for all the different kinds of workloads, including OLTP and analytics, but also including our Autonomous Database. So yes, absolutely, it all takes advantage of it. >> Now, Juan, you know I can't let you go without asking about the competition. I've written extensively about the big four hyperscale clouds, specifically AWS, Azure, Google and Alibaba, and I know — don't hate me — sometimes it angers some of my friends at Oracle, IBM too, that I don't include you in that list. But I see Oracle specifically as different, and really the cloud for the most demanding applications and top performance databases, not the commodity cloud, which of course angers all my friends at those four companies. So I'm ticking everybody
>> Now, Juan, I can't let you go without asking about the competition. I've written extensively about the big four hyperscale clouds, specifically AWS, Azure, Google, and Alibaba, and I know, don't hate me, it sometimes angers some of my friends at Oracle, IBM too, that I don't include you in that list. But I see Oracle specifically as different, and really the cloud for the most demanding applications and top-performance databases, and not the commodity cloud, which of course angers all my friends at those four companies, so I'm ticking everybody off. So how does Exadata Cloud Infrastructure X9M compare to the likes of AWS, Azure, Google, and other database cloud services in terms of OLTP and analytics, value, performance, cost, however you want to frame it? >> Yeah, so our architecture is fundamentally different. We've architected our database for the scale-out environment. So for example, we've moved intelligence into the storage, we've put in remote direct memory access, we've put persistent memory into our product. So we've done a lot of architectural changes that they haven't, and you're starting to see a little bit of that. If you look at some of the things that Amazon and Google are doing, they're starting to realize that, hey, if you're going to achieve good results, you really need to push some database processing into the storage. So they're taking baby steps toward that, roughly 15 years after we've had a product, and at some point they're going to realize you really need RDMA, you really need more direct access to those capabilities. So they're slowly getting there, but we're well ahead. And the way this is delivered is better availability, better performance, lower latency, higher IOPS, and this is why our customers love our product. If you look at the global Fortune 100, over 90 percent of them are running Exadata today, and even in our cloud, over 60 of the global 100 are running Exadata in the Oracle cloud, because of all the differentiated benefits that they get from the product. So yeah, we're well ahead in the database space. >> Mark, last question for you. How do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. Well, first off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. So our current third generation EPYC, that is really what we call our EPYC server offerings, is the 7003 series third gen in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway and ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, and it's going to have expanded memory capabilities, because there's CXL, Compute Express Link, that will open up even more memory opportunities, and I could go on. So that's the beauty of a deep partnership: it enables us to really take that learning forward. It pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward. >> Yeah, you guys have been obviously very forthcoming, you have to be, with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah, I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10, 15 years ago when multi-core processors came out, and then we were on that for a while, and then things started stagnating. But in the last two or three years, and AMD has been leading this, there's been a dramatic acceleration in innovation in this space. So it's very exciting to be part of this, and customers are getting a big benefit from it. >> All right, gents, hey, thanks for coming back on theCUBE today, really appreciate your time. >> Thanks, glad to be here.
>> All right, thank you for watching this exclusive CUBE conversation. This is Dave Vellante from theCUBE, and we'll see you next time. (music)

Published Date : Sep 13 2022

Video exclusive: Oracle adds more wood to the MySQL HeatWave fire


 

(upbeat music) >> When Oracle acquired Sun in 2009, it paid $5.6 billion net of Sun's cash and debt. Now, I argued at the time that Oracle got one of the best deals in the history of enterprise tech, and I got a lot of grief for saying that, because Sun had a declining business, it was losing money, and its revenue was under serious pressure as it tried to hang on for dear life. But Safra Catz understood that Oracle could pare Sun's lower-profit and lagging businesses, like its low-end x86 product lines, and even if Sun's revenue was cut in half, because Oracle has such a high revenue multiple as a software company, it could almost instantly generate $25 to $30 billion in shareholder value on paper. In addition, it was a catalyst for Oracle to initiate its highly differentiated engineered systems business, and was actually the precursor to Oracle's cloud. Oracle saw that it could capture high-margin dollars that used to go to partners like HP, its original Exadata partner, and get paid for the full stack across infrastructure, middleware, database, and application software when it eventually got really serious about cloud. Now, there was also a major technology angle to this story. Remember Sun's tagline, "the network is the computer"? Well, they should have just called it cloud. Through the Sun acquisition, Oracle also got a couple of key technologies: Java, the number one programming language in the world, and MySQL, a key ingredient of the LAMP stack, that's Linux, Apache, MySQL, and PHP, Perl, or Python, on which the internet is basically built, and which is used by many cloud services, like Facebook, Twitter, WordPress, Flickr, and Amazon Aurora, among many other examples, including, by the way, MariaDB, which is a fork of MySQL created by MySQL's creator, basically in protest of Oracle's acquisition; the drama is Oscar worthy. It gets even better. In 2020, Oracle began introducing a new version of MySQL called MySQL HeatWave, and since late 2020 it's been in sort of a super cycle, rolling out three new releases in less than a year and a half in an attempt to expand its TAM and compete in new markets. Now, we covered the release of MySQL Autopilot, which uses machine learning to automate management functions, and we also covered the benchmarketing that Oracle produced against Snowflake, AWS, Azure, and Google. And Oracle's at it again with HeatWave, adding machine learning into its database capabilities, along with the previously available integrations of OLAP and OLTP. This, of course, is in line with Oracle's converged database philosophy, which, as we've reported, is different from other cloud database providers, most notably Amazon, which takes the right-tool-for-the-right-job approach and chooses database specialization over a one-size-fits-all strategy. Now, we've asked Oracle to come on theCUBE and explain these moves, and I'm pleased to welcome back Nipun Agarwal, who's the senior vice president for MySQL Database and HeatWave at Oracle. And today, in this video exclusive, we'll discuss machine learning, other new capabilities around elasticity and compression, and then any benchmark data that Nipun wants to share. Nipun's been a leading advocate of the HeatWave program. He's led engineering in that team for over 10 years, and he has over 185 patents in database technologies. Welcome back to the show, Nipun. Great to see you again. Thanks for coming on. >> Thank you, Dave. Very happy to be back. >> Yeah, now for those who may not have kept up with the news, maybe to kick things off you could give us an overview of what MySQL HeatWave actually is, so that we're all on the same page. >> Sure, Dave. MySQL HeatWave is a fully managed MySQL database service from Oracle, and it has a built-in query accelerator called HeatWave, and that's the part which is unique. So with MySQL HeatWave, customers of MySQL get a single database which they can use for transaction processing, for analytics, and for mixed workloads, because traditionally MySQL has been designed and optimized for transaction processing. So in the past, when customers had to run analytics with a MySQL-based service, they would need to move the data out of MySQL into some other database for running analytics, so they would end up with two different databases, and it would take some time to move the data out of MySQL into this other system. With MySQL HeatWave, we have solved this problem, and customers now have a single MySQL database for all their applications, and they can get the good performance of analytics without any changes to their MySQL application.
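To make that concrete, here is a minimal sketch of what the single-database model looks like from an application. The endpoint, credentials, and `orders` table are hypothetical; the `SECONDARY_LOAD` step follows HeatWave's documented DDL, but treat all the details as assumptions rather than anything stated in the interview:

```python
# A minimal sketch: one MySQL HeatWave connection serving both OLTP writes and
# analytic queries. Endpoint, credentials, and table are made up.
import mysql.connector

conn = mysql.connector.connect(
    host="myheatwave.mysql.example.oraclecloud.com",  # assumed endpoint
    user="admin", password="...", database="shop",
)
cur = conn.cursor()

# One-time step: load the table into the HeatWave cluster (real HeatWave DDL,
# shown here as an assumption about the deployment).
cur.execute("ALTER TABLE orders SECONDARY_LOAD")

# OLTP write, handled by the MySQL (InnoDB) side as usual.
cur.execute("INSERT INTO orders (customer_id, amount) VALUES (%s, %s)",
            (42, 99.90))
conn.commit()

# Analytic aggregate over the same table; with HeatWave enabled, the optimizer
# offloads eligible queries to the in-memory accelerator automatically.
cur.execute("SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id")
print(cur.fetchall())
```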
>> Now it's no secret that a lot of times, you know, queries are not written in the most efficient way, and critics of MySQL HeatWave will claim that this product is very memory and cluster intensive, that it has a heavy footprint that adds to cost. How do you answer that, Nipun? >> Right, so for offering any database service in the cloud there are two dimensions, performance and cost, and we have been very cognizant of both of them. So it is indeed the case that HeatWave is an in-memory query accelerator, which is why we get very good performance, but it is also the case that we have optimized HeatWave for commodity cloud services. For instance, we use the least expensive compute, we use the least expensive storage. So what I would suggest for customers who would like to know the price-performance advantage of HeatWave: compared to any database we have benchmarked against, Redshift, Snowflake, Google BigQuery, Azure Synapse, HeatWave is significantly faster and significantly lower priced on a multitude of workloads. So not only is it an in-memory database optimized for that, but we have also optimized it for commodity cloud services, which makes it much lower priced than the competition. >> Well, at the end of the day, it's customers that sort of decide what the truth is. So to date, what's been the customer reaction? Are they moving from other clouds, from on-prem environments? Both? Why? You know, what are you seeing? >> Right, so we are definitely seeing a whole bunch of migrations of customers who are running MySQL on-premise to the cloud, to MySQL HeatWave. That's definitely happening. What is also very interesting is that a very large percentage of customers, more than half the customers who are coming to MySQL HeatWave, are migrating from other clouds. We have a lot of migrations coming from AWS Aurora, migrations from Redshift, migrations from RDS MySQL, Teradata, SAP HANA, right. So we are seeing migrations from a whole bunch of other databases and other cloud services to MySQL HeatWave. And the main reasons we are told why customers are migrating from other databases to MySQL HeatWave are lower cost, better performance, and no change to their application, because many of these services, like AWS Aurora, are compatible with MySQL.
So when customers try MySQL HeatWave, not only do they get better performance at a lower cost, but they find that they can migrate their application without any changes, and that's a big incentive for them. >> Great, thank you, Nipun. So can you give us some names? Are there some real-world examples of these customers that have migrated to MySQL HeatWave that you can share? >> Oh, absolutely, I'll give you a few names. Stutor.com, an educational SaaS provider based out of Brazil: they were using Google BigQuery, and when they migrated to MySQL HeatWave, they found a 300X, right, 300 times improvement in performance, and it lowered their cost by 85 percent. Another example is Neovera. They offer cybersecurity solutions, and they were running their application on an on-premise version of MySQL. When they migrated to MySQL HeatWave, their application improved in performance by 300 times and their cost was reduced by 80%, right. So by going from on-premise to MySQL HeatWave, they reduced their cost by 80% and improved performance by 300 times. We are Glass, another customer based out of Brazil: they were running on AWS EC2, and when they migrated, within hours they found that there was a significant improvement, like over a 5X improvement in database performance, and they were able to accommodate a very large virtual event which had more than a million visitors. Another example, Genius Sonority: they are a game designer in Japan, and when they moved to MySQL HeatWave, they found a 90 times improvement in performance. And there are many, many more, with a lot of migrations, again, from Aurora, Redshift, and many other databases as well. And consistently what we hear is that customers are getting much better performance at a much lower cost without any change to their application. >> Great, thank you. You know, when I ask that question, a lot of times I get, "Well, I can't name the customer name," but I've got to give Oracle credit: a lot of times you guys have the names at your fingertips. You're not the only one, but it's somewhat rare in this industry. So, okay, you got some good feedback from those customers that did migrate to MySQL HeatWave. What else did they tell you that they wanted? Did they share a wishlist, some of the white space that you guys should be working on? What did they tell you? >> Right, so as customers are moving more data into MySQL HeatWave, as they're consolidating more data into MySQL HeatWave, they want to run other kinds of processing on this data. A very popular one is machine learning. So we have had multiple customers who told us that they wanted to run machine learning on data which is stored in MySQL HeatWave, and for that they had to extract the data out of MySQL. So that was the first piece of feedback we got. Second thing is, MySQL HeatWave is a highly scalable system. What that means is that as you add more nodes to a HeatWave cluster, the performance of the system improves almost linearly. But currently customers need to perform some manual steps to add nodes to a cluster or to reduce the cluster size. So that was the other feedback we got, that people wanted this to be automated. Third thing is, we have shown in previous results that HeatWave is significantly faster and significantly lower priced compared to competitive services. So we got feedback from customers asking whether we can trade off some performance to get an even lower cost, and that's what we have looked at. And then finally, we have some results on various data sizes with TPC-H. Customers wanted to see if we can offer some more data points as to how HeatWave performs on other kinds of workloads. And that's what we've been working on for the past several months.
>> Okay, Nipun, we're going to get into some of that, but first, how did you go about addressing these requirements? >> Right, so the first thing is we are announcing support for in-database machine learning, meaning that customers who have their data inside MySQL HeatWave can now run training, inference, and prediction all inside the database, without the data or the model ever having to leave the database. So that's how we addressed the first one. Second thing is we are offering support for real-time elasticity, meaning that customers can scale up or scale down to any number of nodes. This requires no manual intervention on the part of the user, and for the entire duration of the resize operation, the system is fully available. Third, in terms of cost, we have doubled the amount of data that can be processed per node. If you look at a HeatWave cluster, the size of the cluster determines the cost. So by doubling the amount of data that can be processed per node, we have effectively halved the cluster size required for running a given workload, which means it reduces the cost to the customer by half. And finally, we have also run the TPC-DS workload on HeatWave and compared it with other vendors, so now customers can have another data point in terms of the performance and the cost comparison of HeatWave with other services.
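Here is a rough sketch of what that in-database flow looks like from a client, using HeatWave ML's `sys.ML_*` stored procedures (train, load, batch-predict). The schema, table, and column names are made up, and the exact procedure signatures should be checked against the current HeatWave ML documentation rather than taken from this interview:

```python
# A hedged sketch of in-database ML with HeatWave ML: train, load, and score
# without the data or model leaving MySQL. Table/column names are made up.
import mysql.connector

conn = mysql.connector.connect(host="...", user="admin", password="...",
                               database="ml_demo")
cur = conn.cursor()

# Train a classifier on data already in the database; training is automated,
# so no hyperparameters are required from the user.
cur.execute("CALL sys.ML_TRAIN('ml_demo.loans_train', 'approved', "
            "JSON_OBJECT('task', 'classification'), @model)")

# Load the model and score a whole table of new rows, all server-side.
cur.execute("CALL sys.ML_MODEL_LOAD(@model, NULL)")
cur.execute("CALL sys.ML_PREDICT_TABLE('ml_demo.loans_new', @model, "
            "'ml_demo.loans_scored')")

# Explanations (e.g. sys.ML_EXPLAIN_ROW) follow the same in-database pattern.
cur.execute("SELECT * FROM ml_demo.loans_scored FETCH FIRST 5 ROWS ONLY")
print(cur.fetchall())
```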
>> All right, and I promise I'm going to ask you about the benchmarks, but I want to come back and drill into these a bit. How is HeatWave ML different from competitive offerings? Take Redshift ML, for instance. >> Sure, okay, so this is a good comparison. Let's start with, say, Redshift ML. There are some systems, like Snowflake, which don't offer any processing of machine learning inside the database at all; they expect customers to write a whole bunch of code, in say Python or Java, to do machine learning. Redshift ML does have integration with SQL; that's a good start. However, when customers of Redshift need to run machine learning and they invoke Redshift ML, it makes a call to another service, SageMaker, so the data needs to be exported to a different service. The model is generated, and the model also lives outside Redshift. With HeatWave ML, the data always resides inside the MySQL database service. We are able to generate models, train the models, run inference, and run explanations, all inside the MySQL HeatWave service. So the data and the model never have to leave the database, which means that both the data and the models can now be secured by the same access control mechanisms as the rest of the data. So that's the first part: there is no need for any ETL. The second aspect is the automation. Training is a very important part of machine learning, and it impacts the quality of the predictions. So traditionally, customers would employ data scientists to influence the training process so that it's done right, and even in the case of Redshift ML, users are expected to provide a lot of parameters to the training process. So the second thing we have worked on with HeatWave ML is that it is fully automated. There is absolutely no user intervention required for training. Third is in terms of performance. One of the things we are very, very sensitive to is performance, because performance determines the eventual cost to the customer. So again, in some benchmarks we have published, and these are all available on GitHub, we are showing how HeatWave ML is 25 times faster than Redshift ML, and here's the kicker, at 1% of the cost. So four benefits: the data all remains secure inside the database service, it's fully automated, it's much faster, and it's much lower cost than the competition. >> All right, thank you, Nipun. Now, there's a lot of talk these days about explainability and AI. You know, the system can very accurately tell you that it's a cat, or for you Silicon Valley fans, it's a hot dog or not a hot dog, but it can't tell you how it got there. So what is explainability, and why should people care about it? >> Right, so when we were talking to customers about what they would like from a machine learning based solution, one of the pieces of feedback we got is that enterprises are a little slow, or averse, to adopting machine learning, because it seems to be, you know, like magic, right? And enterprises have the obligation to be able to explain, to provide an answer to their customers as to why the database made a certain choice. With a rule-based solution it's simple: it's a rule-based thing, and you know what the logic was. So the reason explanations are important is that customers want to know why the system made a certain prediction. One of the important characteristics of HeatWave ML is that any model generated by HeatWave ML can be explained, and we can do both global, or model, explanations, as well as local explanations. So when the system makes a specific prediction using HeatWave ML, the user can find out why the system made that prediction. For instance, if someone is being denied a loan, the user can figure out what were the attributes, the features, which led to that decision. So this ensures fairness, and many times there is also a need for regulatory compliance, where users have a right to know. So we feel that explanations are very important for enterprise workloads, and that's why every model generated by HeatWave ML can be explained. >> Now I've got to give Snowflake some props, you know, this whole idea of separating compute from storage, but also bringing the database to the cloud and driving elasticity. That's been a key enabler and has solved a lot of problems, in particular the snake-swallowing-the-basketball problem, as I often say. But what about elasticity, and elasticity in real time? How is your approach to an elastic cloud database service, and there are a lot of companies chasing this, different from what others are promoting these days? >> Right, so a couple of characteristics. One is that we have now fully automated the process of elasticity, meaning that if a user wants to scale up or scale down, the only thing they need to specify is the eventual size of the cluster, and the system completely takes care of it transparently. But then there are a few characteristics which are very unique. For instance, we can scale up or scale down to any number of nodes, whereas in the case of Snowflake, the number of nodes someone can scale up or scale down to are powers of two. So if a user needs 70 CPUs, well, their choice is either 64 or 128. So by providing this flexibility with MySQL HeatWave, customers get a custom fit: they can get a cluster which is optimized for their specific workload. That's the first thing, the flexibility of scaling up or down to any number of nodes. The second thing is that after the operation is completed, the system is fully balanced, meaning the data across the various nodes is fully balanced. That is not the case with many solutions. For instance, in the case of Redshift, after the resize operation is done, the user is expected to manually balance the data, which can be very cumbersome. And the third aspect is that while the resize operation is going on, the HeatWave cluster is completely available for queries, for DMLs, for loading more data. That is, again, not the case with Redshift. With Redshift, suppose the operation takes 10 to 15 minutes; during that window of time, the system is not available for writes, and for a big part of that chunk of time, the system is not even available for queries, which is very limiting. So the advantages we have are: it's fully flexible, the system ends up in a balanced state, and the system is completely available for the entire duration of the operation.
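The 70-CPU example works out as below; a trivial, purely illustrative sketch of the rounding penalty under a powers-of-two sizing model:

```python
# Sketch of the sizing point above: a powers-of-two service makes you pick the
# nearest power of two, while an any-size service provisions the exact count.
import math

def nearest_powers_of_two(needed: int) -> tuple[int, int]:
    """The powers of two just below and just above the requested size."""
    below = 1 << math.floor(math.log2(needed))
    above = 1 << math.ceil(math.log2(needed))
    return below, above

needed = 70
below, above = nearest_powers_of_two(needed)
print(f"need {needed}: powers-of-two choices are {below} or {above}; "
      f"an any-size service provisions exactly {needed}")
# need 70: powers-of-two choices are 64 or 128; exactly 70 otherwise
```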
>> Yeah, I guess you've got that hyper-granularity. Sometimes they say, "Well, t-shirt sizes are good enough," but then I think of myself: some t-shirts fit me better than others. So, okay, I saw in the announcement that you have this lower price point for customers. How did you actually achieve this? Could you give us some details around that, please? >> Sure, so there are two things in this announcement which lower the cost for customers. The first thing is that we have doubled the amount of data that can be processed by a HeatWave node. If we have doubled the amount of data which can be processed by a node, the cluster size required by customers reduces to half, and that's why the cost drops to half. The way we have managed to do this is by two things. One is support for Bloom filters, which reduces the amount of intermediate memory, and the second is that we compress the base data. Those are the two techniques we have used to process more data per node. The second way we are lowering the cost for customers is by supporting pause and resume of HeatWave. Many times you find customers of HeatWave and other services who want to run some queries or workloads for some duration of time, but then they don't need the cluster for a few hours. Now, with the support for pause and resume, customers can pause the cluster and the HeatWave cluster instantaneously stops. And when they resume, not only do we fetch the data very quickly from the object store, but we also preserve all the statistics which are used by Autopilot. So both the data and the metadata are fetched extremely fast from the object store. With these two capabilities, we feel that it'll drive down the cost for our customers even more.
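A quick sketch of why doubling per-node capacity halves the bill, with made-up numbers (the per-node capacities here are placeholders, not HeatWave's actual figures):

```python
# Illustrative only: cluster size (and therefore cost) scales with how much
# data each node can process, so doubling per-node capacity halves the nodes.
import math

data_tb = 32                       # hypothetical workload size
old_capacity_tb_per_node = 0.8     # placeholder per-node capacity
new_capacity_tb_per_node = 1.6     # doubled via Bloom filters + compression

old_nodes = math.ceil(data_tb / old_capacity_tb_per_node)   # 40
new_nodes = math.ceil(data_tb / new_capacity_tb_per_node)   # 20
print(f"nodes: {old_nodes} -> {new_nodes} (cost roughly halves)")
```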
>> Got it, thank you. Okay, I promised I was going to get to the benchmarks; let's have it. How do you compare with others, specifically cloud databases? And how do we know these benchmarks are real? My friends at EMC, back in the day, were brilliant at doing benchmarks. They would produce these beautiful PowerPoint charts, but it was kind of opaque. What do you say to that? >> Right, so there are multiple things I would say. The first thing is that this time we have published two benchmarks, one for machine learning and the other for SQL analytics. All the benchmarks, including the scripts we have used, are available on GitHub. So we have full transparency, and we invite and encourage customers or other service providers to download the scripts, to download the benchmarks, and see if they get any different results, right. What we are seeing, we have published for other people to try and validate. That's the first part. Now, for machine learning, there hasn't been a precedent for enterprise benchmarks, so we took open data sets and we have published benchmarks for those, right? Both for classification as well as for regression, we have run the training times, and that's where we find that HeatWave ML is 25 times faster than Redshift ML at one percent of the cost. So it's fully transparent and available. For SQL analytics, in the past we have shown comparisons with TPC-H; we would show TPC-H across various databases, across various data sizes. This time we decided to use TPC-DS. The advantage of TPC-DS over TPC-H is that it has more queries, the queries are more complex, the schema is more complex, and there is a lot more data skew. So it represents a different class of workloads, which is very interesting. These are queries derived from the TPC-DS benchmark. The numbers we have published this time are for 10 terabyte TPC-DS, and we are comparing with all four major services: Redshift, Snowflake, Google BigQuery, and Azure Synapse. And in all the cases, HeatWave is significantly faster and significantly lower priced. Now, one of the things I want to point out is that when we are doing the cost comparison with other vendors, we are being overly fair. For instance, the cost of HeatWave includes the cost of both the MySQL node as well as the HeatWave node, and with this setup, customers can run transaction processing and analytics as well as machine learning. So the price captures all of it. Whereas with the other vendors, the comparison is only for the analytic queries, right? If customers wanted to run OLTP, you would need to add the cost of that database, or if customers wanted to run machine learning, you would need to add the cost of that service. Furthermore, in the case of HeatWave we are quoting pay-as-you-go pricing, whereas for other vendors, like Redshift, where applicable, we are quoting one-year, fully-paid-upfront rates. So it's a very fair comparison. In terms of the numbers, though, for price performance on TPC-DS, we are about 4.8 times better compared to Redshift, 14.4 times better compared to Snowflake, 13 times better than Google BigQuery, and 15 times better than Synapse. So across the board, we are significantly faster and significantly lower priced. And as I said, all of these scripts are available on GitHub for people to try for themselves.
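Since price performance is a ratio of cost for the same work, the benchmark numbers above translate directly into cost fractions, which is where figures like 1/5th and 1/14th come from; a one-line sketch using the quoted numbers:

```python
# Price performance is (competitor cost) / (HeatWave cost) for the same work,
# so a 14.4x advantage means roughly 1/14th the cost. Numbers from the talk.
for competitor, advantage in [("Redshift", 4.8), ("Snowflake", 14.4),
                              ("BigQuery", 13.0), ("Synapse", 15.0)]:
    print(f"vs {competitor}: HeatWave cost is roughly "
          f"1/{advantage:.0f} of the competitor's")
```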
>> Okay, all right, I get it. So I think what you're saying is, you could have said, "this is what it's going to cost to do both analytics and transaction processing on a competitive platform versus what it takes to do that on Oracle MySQL HeatWave," but you're not doing that. You're saying, let's take them head on in their sweet spot of analytics, or OLTP separately, and you're saying you still beat them. Okay, so you've got this one database service in your cloud that supports transactions and analytics and machine learning. How much do you estimate you're saving companies with this integrated approach versus the alternative of what I called upfront the right tool for the right job, and admittedly having to use ETL tools? How can you quantify that? >> Right, so, okay, the numbers tell the story. At the end of the day, in a cloud service, price performance is the metric which gives a sense of how much customers are going to save. For instance, for a TPC-DS workload, if we are 14 times better in price performance than Snowflake, it means that our cost is going to be 1/14th of what customers would pay for Snowflake. Now, in addition, the other costs, in terms of migrating the data, having to manage two different databases, having to pay for another service for, say, machine learning, that's all extra, and that depends upon what tools customers are using or what other services they're using for transaction processing or for machine learning. But these numbers by themselves, right, they're very, very compelling. If we are 1/5th the cost of Redshift, right, or 1/14th of Snowflake, these numbers by themselves are very, very compelling, and that's the reason we are seeing so many of these migrations from these databases to MySQL HeatWave. >> Okay, great, thank you. Our last question: in the Q3 earnings call for fiscal '22, Larry Ellison said that "MySQL HeatWave is coming soon on AWS," and that caught a lot of people's attention. That's not like Oracle. I mean, people might say maybe that's an indication that you're not having success moving customers to OCI, so you've got to go to other clouds, which by the way I applaud, but any comments on that? >> Yep, this is very much like Oracle. If you look at one of the big reasons for the success of the Oracle database, and why the Oracle database is the most popular database, it's because the Oracle database runs on all the platforms, and that has been the case from day one. Very akin to that, the idea is that there's a lot of value in MySQL HeatWave, and we want to make sure that we can offer the same value to the customers of MySQL running on any cloud, whether it's OCI, whether it's AWS, or any other cloud. So this shows how confident we are in our offering, and we believe that in other clouds as well, customers will find significant advantage in having a single database which is much faster and much lower priced than the alternatives they currently have. So this shows how confident we are about our products and services. >> Well, that's great. I mean, obviously for you, you're in the MySQL group. You love that, right? The more places you can run, the better it is for you, of course, and your customers. Okay, Nipun, we've got to leave it there. As always, it's great to have you on theCUBE, really appreciate your time. Thanks for coming on and sharing the new innovations. Congratulations on all the progress you're making here. You're doing a great job. >> Thank you, Dave, and thank you for the opportunity. >> All right, and thank you for watching this CUBE conversation with Dave Vellante for theCUBE, your leader in enterprise tech coverage. We'll see you next time. (upbeat music)

Published Date : Mar 29 2022

Video Exclusive: Oracle EVP Juan Loaiza Announces Lower Priced Entry Point for ADB


 

(upbeat music) >> Oracle is in the midst of an acceleration of its product cycles. It really has pushed new capabilities across its database platforms, and of course the cloud, in an effort to maintain its position as the gold standard for cloud database. We've reported pretty extensively on Exadata, most recently the X9M, which increased database IOPS and throughput. Organizations running mission-critical OLTP, analytics, and mixed workloads tell us that they've seen meaningfully improved performance and lower costs, which you expect in a technology cycle. I often say, if Oracle calls you out by name, it's a compliment, and it means you've succeeded. So just a couple of weeks ago, Oracle turned up the heat on MongoDB with a Mongo-compatible API, in an effort to persuade developers to run applications in an autonomous database and on OCI, Oracle Cloud Infrastructure. There was a big emphasis by Oracle on ACID-compliant transactions and automatic scaling, as well as access to multiple data types. This caught my attention because in the early days of NoSQL, there was a lot of chatter from folks about not needing ACID capability in the database anymore. Funny how that comes around. And anyway, you see Oracle investing; they spend money in R&D. We've always said that they're protecting their moat. Now, on social media I've seen some criticisms, like Oracle still is not adding enough new logos, and Oracle of course will dispute that and give you some examples. But to me, what's most impressive is the big-name customers that Oracle gets to talk about in public: Deutsche Bank, Telefonica, Experian, FedEx, I mean, dozens and dozens and dozens. I work with a lot of companies, and the quality of the customers Oracle puts in front of analysts like myself is very, very high. At the top of the list, I would say. And they're big-spending customers. And as we've said many times, when it comes to mission-critical workloads, Oracle is the king. One of the executives behind that success is a longtime CUBE alum, Juan Loaiza, who's executive vice president of mission-critical technologies at Oracle. And we've invited him back on today to talk about some news and Oracle's latest developments in database. Juan, welcome back to the show, and thanks for coming on today and talking about today's announcement. >> I'm very happy to be here today with you. >> Okay, so what are you announcing, and how does this help organizations, particularly those with existing Exadata Cloud@Customer installations? >> Yeah, the big thing we're announcing is for our very successful Cloud@Customer platform: we're extending the capabilities of our autonomous database running on it. Specifically, we're allowing much smaller configurations, so customers can start small and grow with our autonomous database on our Cloud@Customer platform. >> So let's get into the granularity a little bit and double-click on this. Can you go over how customers carve up VM clusters for different workloads? What's the tangible benefit to them? >> Yeah, so it's pretty straightforward. We deploy our Cloud@Customer system anywhere the customer wants it, let's say in their data center, and then through our cloud APIs and GUIs they can carve it up into pieces, into basically VMs. They can say, hey, I want a VM with eight CPUs to do this, I want a VM with 20 CPUs to do that, I want a 500-CPU VM to do something else. And that's what we call a VM cluster, because in Cloud@Customer it is a highly available environment, so you don't just get one VM, you get a cluster of highly available VMs. So you carve it up, you hand it out to different parts of a company. You might have development on one, testing on another one, some production sales on one VM, marketing on a different VM. And then you run your databases in there, and that's kind of how it works. It's all done completely through our GUI, and it's very, very simple, because they use the same cloud APIs and GUIs that we use in the public cloud; it is the same APIs and GUIs that we use in the public cloud.
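For a sense of what carving up a Cloud@Customer system looks like programmatically, here is a hedged sketch using the OCI Python SDK. The OCIDs and field values are placeholders, and the exact request model should be verified against the SDK reference; none of this comes from the interview itself:

```python
# A hedged sketch: creating a VM cluster on Exadata Cloud@Customer via the
# OCI Python SDK. OCIDs and field values are placeholders, not real ones.
import oci

config = oci.config.from_file()  # reads ~/.oci/config
db_client = oci.database.DatabaseClient(config)

details = oci.database.models.CreateVmClusterDetails(
    compartment_id="ocid1.compartment.oc1..example",
    exadata_infrastructure_id="ocid1.exadatainfrastructure.oc1..example",
    vm_cluster_network_id="ocid1.vmclusternetwork.oc1..example",
    display_name="dev-vmcluster",
    cpu_core_count=8,                 # start small, resize later
    gi_version="19.0.0.0",            # Grid Infrastructure version
    ssh_public_keys=["ssh-rsa AAAA... ops@example.com"],
)
response = db_client.create_vm_cluster(details)
print(response.data.lifecycle_state)  # e.g. PROVISIONING
```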
>> Yeah, I was going to say, sounds like cloud. So what about prerequisites? What do customers have to do to take advantage of the new capabilities? Can they run it on an Exadata Cloud@Customer that they installed a couple of years ago? Do they have to upgrade the hardware? What migration pain is involved? >> Yeah, there's no pain. So it's just, (coughs) excuse me, they can take their existing system, they get our free software update, and they can just deploy autonomous database as a VM in their existing Exadata cloud system. >> Oh nice, okay. What's the bottom line, dollars? Our audience is always interested in cutting costs; it's one of the reasons they're moving to the cloud, for example. So how does autonomous database on VM clusters on Exadata Cloud@Customer help cut their costs? >> Well, it's pretty straightforward. Previous to this, a customer would have had to dedicate a whole system to either autonomous database or to non-autonomous databases; you had to choose one or the other. So on a system-by-system basis, you chose: I want this thing autonomous, or I don't want it autonomous. Now you carve out the VMs and say, for this VM I want autonomous, for that VM I want to run a regular, customer-managed database. So this lets customers start small, with any size they want. They could start with two CPUs and run an autonomous database, and all they pay for is the two CPUs that they use. >> Let's talk a little about traction. I mean, I remember we covered the original Exadata announcement quite a long time ago, and it's obviously evolved and taken many forms. Look, it's hard to argue that it hasn't been a big success; it has, for Oracle and your target customers. Does this announcement make Exadata Cloud@Customer more attractive for smaller companies? In other words, does it expand the TAM for ADB, and if so, how? >> Yeah, absolutely. I mean, our Exadata cloud platform is extremely successful. We have thousands of deployments; on our Exadata platform we have almost 90% of the global Fortune 100, and thousands of smaller customers. In the cloud, we now have up to 40% of the global 100, the hundred biggest companies in the world, running on it. So it's been an extremely successful platform, and Cloud@Customer is super key: a lot of customers can't move their data to the public cloud, so we bring the public cloud to them with our Cloud@Customer offering. So the big customers are the Fortune 100, but we have thousands of smaller customers also, and the nice thing about this offering is we can start with literally two CPUs. So you can be a very small customer and still run our autonomous database on our Cloud@Customer platform. >> Well, everybody cares about security and governance. I mean, especially the big guys, but the little guys in many ways as well: they want the capabilities of the large companies, but they can't necessarily afford them.
So I want to talk about security and governance in particular, because it's especially important for mission-critical apps. How does this all change the security and governance paradigm? What do customers need to know there? >> Yeah, so the beauty of autonomous database, which is the thing we're talking about today, is that Oracle deals with all the security. The OS, the hardware, firmware, VMs, the database itself, all the interfaces to the VM and to the database, all of that is done by Oracle. That's incredibly important, because there's a constant stream of security alerts coming out, and it's very difficult for customers to keep up with this stuff. I mean, it's hard for us, and we have thousands of engineers. So we take that whole burden away from customers: you just don't have to think about it, we deal with it. Once you deploy an autonomous database, it is always secure, because anytime a security alert comes out, we will apply it, and we do it in an online fashion also. For smaller customers it's particularly hard, because to keep up with all the security you need a giant team of security experts; even the biggest customers struggle with that, and a small customer is going to really struggle. It's just too much: you have to look at the entire stack, all the different components, switches, firmware, OS, VMs, database, everything. It's just very difficult to keep up. So we do it all, and small customers just can't do it themselves, so they really need to partner with a company like Oracle that has thousands of engineers that can keep up with this stuff. >> It's true what you say. Even large customers, their CISOs will tell you that lack of talent, lack of skill sets, they just don't have enough people, and so even the big guys can't keep up. Okay, I want you to pitch me as though I'm a developer, which I'm not, but we've got a lot of developers in our community; we'll be at KubeCon next month in Valencia. Sell me on why a developer should lean into ADB on Exadata Cloud@Customer. >> Yeah, it's very straightforward. Oracle has the most advanced database in the industry, and that's widely recognized by database analysts and experts in the field. Traditionally, it's been hard for a developer to use it, because it's been hard to manage: hard to set up, install, configure, patch, back up, all that kind of stuff. Autonomous database does it all for you. So as a developer, you can just go into our console and click on creating a database. We ask you four questions: how big, how many CPUs, how much storage, and your password, and within minutes you have a database. At that point you can go crazy and just develop. You don't have to worry about managing the database, patching the database, maintaining the security, backing it up, all that stuff. You can instantly scale it: if you want to grow it, you just click a button and grow it to pretty much any size you want. And you get all the mission-critical capabilities. It works for tiny databases, but it is stock-exchange quality in terms of performance, availability, and security; it's a rock-solid database that's super trivial to use. So what used to be a very complex thing is now completely trivial for a developer. They get the best of both worlds: they get everything on the database side, and it's trivial for them to use.
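Those four questions map almost one-to-one onto the API call behind that console flow. A hedged sketch with the OCI Python SDK, where every value is a placeholder and the field list should be checked against the SDK documentation:

```python
# A hedged sketch: the "four questions" as an API call, creating an
# Autonomous Database with the OCI Python SDK. All values are placeholders.
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

details = oci.database.models.CreateAutonomousDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",
    db_name="devadb",
    cpu_core_count=2,                # how many CPUs
    data_storage_size_in_tbs=1,      # how big (storage)
    admin_password="ChangeMe#1234",  # your password (placeholder)
)
response = db_client.create_autonomous_database(details)
print(response.data.lifecycle_state)  # e.g. PROVISIONING
```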
>> Wow, if you're doing all that stuff for them, what are they going to do on their weekends? Code? >> (chuckles) They should be developing their application and adding value to their company; that's what they should focus on. And they can be looking at all sorts of new technologies, like JSON in the database, machine learning in the database, graph in the database. So they can build very sophisticated applications, because they don't have to worry about the database anymore. >> All right, let's talk about the competition; it's always a topic I like to bring up with you. From a competitive perspective, how is this latest instantiation of Exadata Cloud@Customer X9M different from running an AWS database service, for instance, on Outposts, or, let's say I want to run SQL Server on Azure Stack, or whatever Microsoft's calling it these days? Give us the competitive angle here. >> Yeah, there kind of is no real competition. Both Amazon and Microsoft have an at-customer solution, but they're very primitive. I mean, just to give you an example, Amazon doesn't run any of their premier database offerings at customer. So whether it's Aurora or Redshift, it doesn't run. It's not that it runs badly or runs with limitations; it just does not run. They can't run Oracle RDS on-premise, and it's the same thing with Microsoft: they can't run Azure SQL, which is their premier database, on their at-customer platform. So that kind of tells you how limited that platform is, when even their own premier offerings don't run on it. In contrast, we're running Exadata with our premier autonomous database. It's our premier platform, in use today by most of the biggest banks, telecoms, retailers, et cetera, in the world, plus thousands of smaller customers. So it's super mission-critical, super proven, with our premier cloud database, which is autonomous database. It couldn't be more black and white; this is a case where there really is no competition in the cloud at customer space on the database side. >> Okay, but let me follow up on that, Juan, if I may. So, okay, it took you guys a while to get to the cloud; it's taken them a while to figure out on-prem. I mean, aren't they going to eventually sort of get there? What gives you confidence that you'll be able to keep ahead? >> Well, there are two things, right? One is, we've been doing this for a long time. That's what Oracle initially started as, on-prem, and our Exadata platform has been available for over a decade, and we have a ton of experience with it. We run the biggest banks in the world already; it's not some hope for the future, this is what runs today. And our focus has always been a combination of cloud and on-prem. Their heart's not really in the on-prem stuff. Amazon's really a public-cloud-only vendor, and you can see it from the results; they can say whatever they want, but you can see the results. Their Outposts platform has been available for several years now, and it still doesn't even run their own products. So you can kind of see how hard they're trying and how much they really care about this market. >> All right, boil it down. If you just had a few things you'd tell someone about why they should run ADB on Exadata Cloud@Customer, what would you say? >> It's pretty simple, which is: it's the world's most sophisticated database made completely simple, that's it.
So you get a stock-exchange-level database, you can start really small and grow, and it's completely trivial to run, because Oracle has automated everything. Within our autonomous database we use machine learning and a lot of automation to automate everything around the database. So it's kind of the best of both worlds: the best possible database, starting as small as you want, and it's the simplest database in the world. >> So I probably should have asked you this while I was pushing the competitive question, but this may be my last question, I promise. It's the age-old debate, and it rages on: you've got specialized databases, kind of a right-tool-for-the-right-job approach, which is clearly where Amazon is headed, versus what Oracle refers to as the converged database. Oracle says its approach is more complete and "simpler." Take us through your thinking on this and the latest positioning, so the audience can understand it a bit better. >> Yeah, so business apps, data-driven apps, aren't what they used to be. They used to be kind of green screens where you just entered data. Now everyone wants a very sophisticated app: they want to have location, they want to have maps, they want to have graph in there, they want machine learning built into the app. They want JSON, they want text, they want text search. All these capabilities are what a modern app has to support. And so what Oracle's done is provide a single solution with everything you need to build a modern app, all integrated together. It's all transactional. You have analytics built into the same thing, you have reporting built into the same thing. So it has everything you need to build a modern app. In contrast, what most of our competitors do is give you these little solutions: okay, here you do machine learning, over here you do analytics, over there you do JSON, over here you do spatial, over there you do graph. And then it's left to the developer to put an app together from all these pieces. So it's like getting the pieces of a car and having to assemble it yourself, and then maintain it for the rest of your life, which is the even harder part. One part upgrades, you've got to test that; another piece upgrades or changes, you've got to test that. You have to deal with all the security problems of all these different systems, you have to convert the data, you have to move the data back and forth; it's extraordinarily complicated. With our converged database, the data sits in one place and all the algorithms come to the data. It's very simple; it is dramatically simpler. And then autonomous database is what makes managing it trivial: you don't really have to manage anything anymore, because Oracle's automated the whole thing. >> So, Juan, we've got a pretty good cadence going here. I really appreciate you coming on and giving us these little video exclusives, and you can tell by, again, that cadence how frequently you guys are making new announcements. So that's great; congrats on yet another announcement, and thanks for coming back on the program, appreciate it. >> Yeah, of course. We invest heavily in data management, that's our core, and we will continue to do that. I mean, we're investing billions of dollars a year, and we intend to stay the leader in this market. >> Great stuff. And thank you for watching theCUBE, your leader in enterprise tech coverage. This is Dave Vellante, and we'll see you next time.

Published Date : Mar 16 2022

Video Exclusive: Oracle Lures MongoDB Devs With New API for ADB


 

(upbeat music) >> Oracle continues to pursue a multi-mode converged database strategy. The premise of this all-in-one approach is to make life easier for practitioners and developers, and the most recent example is the Oracle Database API for MongoDB, which was announced today. Now, Oracle is not the first to come out with a MongoDB-compatible API, but Oracle hopes to use its autonomous database as a differentiator and further build a moat around OCI, Oracle Cloud Infrastructure. And with us to talk about Oracle's MongoDB-compatible API is Gerald Venzl, who's a distinguished product manager at Oracle. Gerald was a guest, along with Maria Colgan, on theCUBE a while back, and we talked about Oracle's converged database and the kind of Swiss army knife strategy, as I called it, of databases. This is dramatically different. It's an approach that we see at the opposite end of the spectrum, for instance, from AWS, who, for example, goes after the world of developers with a different database for every use case. So, kind of picking up from there, Gerald, I wonder if you could talk about how this new MongoDB API adds to your converged model and the whole strategy there. Where does it fit? >> Yeah, thank you very much, Dave, and, by the way, thanks for having me on theCUBE again; a pleasure to be here. So, essentially the MongoDB compatibility that we've built with this API is a continuation of the converged database story, as you said before, which is essentially bringing the many features of the many single-purpose databases that people often like and use together into one technology, so that everybody can benefit from them. As such, this is just a continuation of the many other APIs and standards that we support. We have, of course, supported SQL for a long time, because we are a relational database from the get-go, and also other standards like GraphQL, SPARQL, et cetera. The MongoDB API is now essentially just the next step forward, to give developers this API that they've gotten to love and use. >> I wonder if you could talk about it from the developer angle. What do they get out of it? Obviously you're appealing to the Mongo developers out there, but you've got this Mongo-compatible API and you're touting the autonomous database on OCI. Why wouldn't they just use MongoDB Atlas on whatever cloud, Azure or AWS or Google Cloud Platform? >> That's a very good question. We believe that the majority of developers want to just worry about their application, writing the application, and not so much about the database backend that they're using. And especially in the cloud, the reason developers choose these services is so that they don't have to manage them. Now, autonomous database brings many top-notch, advanced capabilities to database cloud services. We firmly believe that autonomous database is essentially the next generation of cloud services, with all the self-driving features built in, and MongoDB developers writing applications against the MongoDB API should not have to hold out on these capabilities either. No developer likes to tune the database. No developer likes to take downtime when they have to rescale their database to accommodate a bigger workload. And this is really where we see the benefit here. So for the developer, ideally nothing will change: you have a MongoDB-compatible API, so they can keep on using their tools and build the applications the way that they do, but they benefit from the best cloud database service out there and no longer have to worry about any of these things, areas where even MongoDB Atlas still has a lot of shortcomings today, as we find.
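In practice this means pointing a standard MongoDB driver at an Autonomous Database endpoint. A minimal sketch with pymongo follows; the connection string and its options are illustrative placeholders, since the real URI comes from the Autonomous Database console, not from this interview:

```python
# A hedged sketch: an unchanged MongoDB driver (pymongo) talking to an
# Autonomous Database that exposes the MongoDB-compatible API. The URI below
# is a placeholder; copy the real one from your database's console.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://admin:password@adb.example.oraclecloud.com:27017/admin"
    "?authMechanism=PLAIN&ssl=true"  # illustrative options only
)
orders = client["admin"]["orders"]

# The same driver calls a Mongo developer already writes today.
orders.insert_one({"customer": 42, "amount": 99.90, "status": "shipped"})
print(orders.find_one({"customer": 42}))
```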
They can build their applications the way that they do, but they benefit from the best cloud database service out there, not having to worry about any of these packaging things anymore, where even MongoDB Atlas still has a lot of shortcomings today, as we find. >> Of course, this is always a moving target. The technology business, that's why we love it: everybody's moving fast and investing and shaking and jiving. But I want to ask you about, well, by the way, that point about hiding the underlying complexity, that's really the big takeaway there, and that's huge for developers. But I was talking before about Amazon's approach: right tool for the right job. You've got DocumentDB, you've got Microsoft with Cosmos DB; they compete with Mongo, and they've been doing so for some time. How is Oracle's API for Mongo different from those offerings, and how are you going to attract their users to your JSON offering? >> So, first of all, we have to separate slightly DocumentDB in AWS and Cosmos DB in Azure; they have slightly different approaches. DocumentDB is essentially a document store owned by and built by AWS, nothing different to MongoDB; it's a head-to-head comparison: use my document store versus the other document store. So you don't get any of the benefits of a converged database. If you ever want a different data model, run analytics over the data, et cetera, you still have to use the many other services that AWS provides; you cannot do it all in one database. Now, Cosmos DB is more interesting, because they claim to be a multi-model database. And I say 'claim' because what we understand as a multi-model database is different from what they understand as a multi-model database, which is also one of the reasons we started differentiating with the term 'converged database.' What we mean is: regardless of what data format you want to store in the database, you should be able to leverage all the functionality of the database over that data format, with no trade-offs. Cosmos DB, when you look at it, essentially gives you modes of operation: when you connect as the application or the user, you have to decide at connection time how this database should be treated. Should it be a document store? Should it be a graph store? Should it be a relational store? Once you make that choice, you are locked into it for as long as you hold that connection. So if you say 'I want a document store,' all you get is a document store. There's no way for you to cross-analyze with the relational data sitting in the same service; there's no way for you to break these boundaries. If you ever want to add some graph data and graph analytics, you essentially have to disconnect and treat it as a graph store. So you get multiple data models in it, but you still get a one-trick pony: the one you chose the moment you connected. And that is where we see a huge differentiation with our converged database, because we essentially say: look, here is one database cloud service on Oracle Cloud that allows you to do any of this, if you wish to do so. You can start as a document store. If you want to write some SQL queries on top, you can do so. If you want to add some graph data, you can do so. And at no point do you have to rewrite your application or use different libraries and frameworks to connect, et cetera, et cetera. >> Got it, thank you for that. Do you have any data from when you talk to customers?
Like, I'm interested in the diversity of deployments. For instance, how many customers are using more than one data model? Do JSON users, for instance, need support for other data types, or are they happy to stay in their own little sandbox? Do you have any data on that? >> So what we see from the majority of our customers is that there is no such thing as one data model that fits everything. And here again we have to differentiate between the developer building a certain microservice, who may be happy to stay in the JSON world or the relational world, and the company that's trying to derive value from the data. The relational model has not gone away in the 40 years of its existence; it's still kicking strong, and it's really good at what it does. The JSON data model is really good at what it does. The graph model is really good at what it does. But all these models were built for different purposes. Try to do graph analytics on relational or JSON data: it's really tricky, and that's why you use a graph model to begin with. Try to shield yourself from how the data is organized and structured: that's really easy in the relational world, not so much once you get into a document store world. And what we see with our customers, as they accumulate more data and run many different applications across their enterprises, is that the question always comes back, as we have been predicting for about six, seven years now: hey, we have all this different data in different data formats; we want to bring it all together, analyze it together, get value out of the data together. We saw the whole trend of big data emerge, and disappear, to answer that question, and it didn't quite do the trick. And we are basically now back to where we were in the early 2000s, when XML databases faded away because everybody just allowed you to store XML in the database. >> Got it. So let's make this real for people. Maybe you could give us some examples. You've got this new API for Mongo, you have your multi-model database. Paint a picture of how customers are going to benefit in real-world use cases. How does it change the customer's world, before and after, if you will? >> Yeah, absolutely. So, the API: essentially we are going to use it, as said before, to make the lives of developers easier, but also, of course, to assist our customers with migrations from MongoDB over to Oracle Autonomous Database. One customer we have, for example, that would have benefited from this API a couple of years ago, two, three years ago, is one of the largest logistics companies on the planet. They track every package that is being sent in JSON documents: every tracked package is represented in a JSON document. And very early on they came in with the next question: hey, we track all these packages in JSON documents; it would be really nice to know which packages are stuck, or where we have to intervene. Can we analyze how many packages got stuck or didn't get delivered at the end of a day? They struggled with this question a lot; they found it really tricky to do back then, in that case in MongoDB. So they actually approached Oracle, they came over, they migrated, and they rewrote their applications to accommodate that.
And they are happy JSON users in Oracle Database now. But if we had had this API for them already, then they wouldn't have had to rewrite their applications, or, as we often see, they could have worried about rewriting the application later on. Usually in migration use cases you want to get the migration done, get the data over and be running, and then worry about everything else. So this is one case where they would have greatly benefited, shortening that migration time window, if we had already had the MongoDB API, this compatibility layer, back then. >> That's a good use case. I mean, it's one of the most prominent and painful, so anything you can do to help that is key. I remember the early days of big data: NoSQL, of course, was the big thing, and there was a lot of confusion. People thought it meant 'no SQL' or 'not only SQL,' which is the more widely accepted interpretation today, but really it's talking about data that's stored in a non-relational format. So some people, again, thought that SQL was going to fade away; some people probably still believe that. And we saw the rise of NoSQL and document databases. But if I understand it correctly, a premise of your MongoDB API is that you really see SQL as a main contributor over MongoDB's document collections, for analytics for example. Can you add some color here? What are you seeing in terms of a resurgence of SQL, or the momentum in SQL? Has it ever really waned? What's your take? >> Yeah, it's a very good point. I think there, as well, we see to some extent history repeating itself; this has all been tried before with object databases, XML databases, et cetera. But if we stay with the NoSQL databases, I think it speaks volumes that every NoSQL database that, as you rightfully said, started with 'no SQL,' and then, 'well, actually, we always meant not only SQL,' has introduced a SQL-like engine or interface. The latest to join this family is MongoDB: they have just recently introduced SQL compatibility for the aggregation pipelines, something where you can put in a SQL statement and it will essentially work with the aggregation pipeline. So they all acknowledge that SQL is powerful; for us this was always clear. SQL is a declarative language; some argue it's the only true 4GL language out there. You don't have to code how to get the data; you just ask the question, and the rest is done for you. And has SQL ever diminished, as you asked? If you look out there, SQL has always been in demand. Look at the various developer surveys, the top skills that are asked for: SQL has never gone away. Everybody loves and wants to use SQL. So no, we don't think it has ever gone away; it has maybe just been put in the shadow by some hypes. But again, we had the same discussion in the 2000s with XML databases, and the same discussions in the '90s with object databases, and we have all just, frankly, forgotten about it.
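To make that concrete: a collection created through the MongoDB API surfaces in Oracle as a table with a JSON document column, so a question like the logistics example above, which packages are stuck, can be asked in plain SQL over the same documents. A minimal sketch, assuming a hypothetical 'packages' collection whose document column is named DATA and whose documents carry trackingId, status, and lastScan fields; all of these names are illustrative, not from the interview:

    -- Packages still in transit with no scan in the last two days.
    -- Collection, column, and field names are assumptions.
    SELECT jt.tracking_id, jt.status
    FROM   packages p,
           JSON_TABLE(p.data, '$'
             COLUMNS (tracking_id VARCHAR2(64) PATH '$.trackingId',
                      status      VARCHAR2(32) PATH '$.status',
                      last_scan   TIMESTAMP    PATH '$.lastScan')) jt
    WHERE  jt.status = 'IN_TRANSIT'
    AND    jt.last_scan < SYSTIMESTAMP - INTERVAL '2' DAY;

The same collection remains readable and writable through the MongoDB drivers; the SQL view of it is additive, which is the converged-database point Venzl is making.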
>> I love when you guys come on and let me do my thing, where I can pretty much ask any question I want, because, I've got to say, when Oracle starts talking about another company, I know that company's doing well. So I see Mongo in the marketplace, and I love that you guys are calling it out and making some moves there. So here's the thing: you guys have a large install base, and that can be an advantage, but it can also be a weight on your shoulders. These specialized cloud databases don't have that legacy, so they can just move freely about, with less friction. Now, all the cloud database services are going to have more and more automation; I think that's pretty clear and inevitable. And most, if not all, of the database vendors are going to provide support for these kinds of converged data models, however they choose to do it. They might do it through the ecosystem, like what Snowflake's trying to do, or bring it in house themselves, like a watchmaker that brings in an in-house movement, if you will. But it's like death and taxes: you can't avoid it; it's got to happen; that's what customers want. So, with all that said, how do you see the capabilities you have today, with automation and converged capabilities, playing out? Do you think it gives you enough of an advantage? Obviously it's an advantage, but is it enough of an advantage over the specialized cloud database vendors, where there's clearly a lot of momentum today? >> I mean, honestly, yes, absolutely. With some of these databases we are 20 years ahead, and I'll give you concrete examples. Oracle has had ACID transaction support since forever. The NoSQL players all said: oh, we don't need ACID transactions, BASE transactions are fine, yada, yada, yada. Then MongoDB started introducing some transaction support. It comes with limits: a transaction cannot be longer than 60 seconds, cannot touch more than a thousand documents, et cetera. They still have some catching up to do there. I mean, it took us a while to get there, let's be honest; granted, we have been around for a long time. Same thing now with version five: they introduced a simple version of multi-version concurrency control, which comes along with ACID transactions. The interesting part here is that we introduced this in Oracle version five, which was somewhere in the '80s, before I even started using Oracle Database. So there's a lot of catching up to do. And then you look at the cloud services as well. There are a lot of things that we Oracle people have taken for granted and keep forgetting. For example, our elastic scale: you want to add one CPU, you add one CPU. Should you take downtime for that? Absolutely not; that would be ridiculous. You cannot take downtime in a 24/7 backend system that runs the world; take any of our customers. If you look at most of these cloud services, if you want to reshape, to scale your cloud service, that's fine, but it's just a VM under the covers: they shut everything down, give you a VM with more CPUs, and you boot it up again. Downtime, right there. So there are a lot of these things where we go: well, we solved this, frankly, decades ago, and these cloud vendors will run into them. And just to add one more point: one thing we see with all these migrations happening is exactly in that field. People essentially started building on MongoDB, or others of these NoSQL databases or cloud databases, and eventually, as these systems grow, as they ask more difficult questions and their use cases expand, they find shortcomings, whether it's the scalability, the security aspects, or the functionality that we have. And this is essentially what drives them back to Oracle.
And this is why we see this popularity now, of the pendulum swinging toward our direction again, where people happily come back over to us to get their workloads enterprise-grade, if you like. >> Well, it's true. I mean, I just reported on this recently, the momentum that you guys have in cloud, because you've got the best mission-critical database; you're all about MAA, Maximum Availability Architecture. I've got to tell you a quick story. I was at a Vertica conference one time, on stage with Curt Monash. I don't know if you know Curt, but he knows this space really well; he's probably forgotten more about databases than I'll ever know. And I was kind of busting his chops. He was talking about ACID transactions, and I said, 'Well, with NoSQL, who needs ACID transactions?', just to poke him. And he was like, 'Are you out of your mind?' And he said: look, everybody is going to head in this direction. It turned out to be true, so I've got to give him props for that. And so, my last question: if you had a message for, let's say, a skeptical developer out there that's using MongoDB and Atlas, what would you say to them? >> I would say: go try it for yourself. If you don't believe us, we have an Always Free cloud tier out there. You just go to oracle.com/cloud/free, sign up for an Always Free tier, spin up an Autonomous Database, and try it for yourself. See what's actually possible today. Don't just follow the trends on Hacker News and a case study here or there. Go try it for yourself and see what it's capable of. >> All right, Gerald. Hey, thanks for coming into my firing line today; I really appreciate your time. >> Thank you for having me again. >> Good luck with the announcement. You're very welcome. And thank you for watching this CUBE conversation. This is Dave Vellante; we'll see you next time. (gentle music)

Published Date : Feb 10 2022


Bob Thome, Tim Chien & Subban Raghunathan, Oracle


 

>> Earlier this week, Oracle announced the new X9M generation of Exadata platforms for its Cloud@Customer and legacy on-prem deployments, and the company made some enhancements to its Zero Data Loss Recovery Appliance, ZDLRA, something we've covered quite often since its announcement. We had a video exclusive with Juan Loaiza, the Executive Vice President of Mission-Critical Database Technologies at Oracle; we did that on the day of the announcement and got his take on it. And I asked Oracle: hey, can we get some subject-matter experts, some technical gurus, to dig deeper and get more details on the architecture? Because we want to better understand some of the performance claims that Oracle is making. And with me today is Subban Raghunathan, who's the Vice President of Product Management for the Exadata Database Machine; Bob Thome, Vice President of Product Management for Exadata Cloud@Customer; and Tim Chien, Senior Director of Product Management for ZDLRA. Folks, welcome to this power panel, and welcome to theCUBE. >> Thank you, Dave. >> Subban, can we start with you? Juan and I talked about the X9M that Oracle just launched a couple of days ago. Maybe you could give us a recap of what we need to know. I'm especially interested in the big numbers, once more, so we can understand the claims you're making around this announcement; then we can dig into that. >> Absolutely, very excited to do that. In a nutshell, we have the world's fastest database machine for both OLTP and analytics, and we made it even faster. Not just simply faster: for OLTP we made it 70% faster, and we took the OLTP IOPS all the way up to 27.6 million read IOPS, and mind you, this is measured at the SQL layer. For analytics we did pretty much the same thing: an 87% increase, and we broke through the one-terabyte-per-second barrier. Absolutely phenomenal stuff. Now, while all those numbers are fascinating by themselves, here's something even more fascinating in my mind: 80% of the product development work for Exadata X9M was done during COVID, which means all of us were remote. And what that meant was extreme levels of teamwork between the development teams, manufacturing teams, procurement teams, software teams, the works: everybody coming together as one to deliver this product. I think it's kudos to everybody who touched this product in one way or the other; I'm extremely proud of it. >> Thank you for making that point. And I'm laughing because you make the same boast every year about mission-critical OLTP performance: you had the world record, and now you're adding on top of that. But, okay, there are customers that still, you know, roll their own; they're trying to build their own Exadata. What they do is buy their own servers and storage and networking components, and when I talk to them, they'll say, look, they want to maintain their independence, they don't want to get locked into Oracle, or maybe they believe it's cheaper. Maybe they're focused on CapEx, the CFO has them in a headlock, or they might want a platform that can support horizontal apps, maybe not Oracle stuff. Or maybe they're just trying to preserve their jobs, I don't know. But why shouldn't these customers roll their own, and why can't they get similar results just using standard off-the-shelf technologies? >> Great question.
It's going to require a somewhat involved answer, but let's just look at the statistics to begin with. Oracle's Exadata was first productized and delivered to the market in 2008, and at that point in time we already had industry leadership across a number of metrics. Today we are at the 11th generation of Exadata, and we are far ahead of the competition, like 50X faster, 100X faster, right? I mean, we are talking orders of magnitude. How did we achieve this? I think the answer to your question lies in what we do at the engineering level to make these magical numbers come to the fore. First, it starts with the hardware. Oracle has its own hardware server design team, where we embed capabilities for performance, reliability, security, and scalability down at the hardware level, and the database, which is a user-level process, talks to the hardware directly. The only reason we can do this is that we own the source code for pretty much everything in between: starting with the database, going into the operating system and the hypervisor and, as I just mentioned, the hardware; and we also work on the firmware elements of this entire thing. The key to making Exadata the best Oracle Database machine lies in that engineering, where we take the operating system and make it fit tongue-and-groove with the hardware, and then do the same with the database. And because we have deep insight into the workloads running at any given point in time on the compute side of Exadata, we can do micromanagement at the software layers of how traffic flows through the entire system, and do things like prioritize OLTP transactions on a very specific queue on the RDMA over Converged Ethernet fabric, do Smart Scan, using the compute elements in the storage tier to offload SQL processing, and take the columnar formats of data and extend them into flash: a whole bunch of things that we have been doing over the last 12 years, because we have this deep engineering. You can try to cobble together a system that sort of looks like an Exadata, it's got a network, it's got storage tiering and compute, but you're not going to achieve anything close to what we are doing. The biggest deal in my mind, apart from the performance and the high availability, is the security, because we test the stack top to bottom. When you're trying to build your own best-of-breed kind of system, you're not going to be able to do that, because you depend on the server vendor to do something, HP to do something else, Dell to do something else, and a Brocade switch to do something. It's not possible. We can do this. We've done it, we've proven it, we've delivered it for over a decade. End of story, as far as I'm concerned.
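One way to see that storage offload at work: Exadata exposes Smart Scan activity through ordinary session statistics, so after running a full-scan query you can check how many bytes were eligible for predicate offload and how few came back over the fabric. A hedged sketch; the two statistic names below exist on Exadata systems, though this query shape is just one way to look at them:

    -- Check Smart Scan offload statistics for the current session
    -- (run a full-scan query first; values are in bytes).
    SELECT n.name, s.value AS bytes
    FROM   v$mystat s
    JOIN   v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name IN (
      'cell physical IO bytes eligible for predicate offload',
      'cell physical IO interconnect bytes returned by smart scan');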
>> I mean, you know, that's fine. I remember when Oracle purchased Sun, and I know a big part of that purchase was to get Java, but I remember saying at the time that it was a brilliant acquisition. I was looking at it from a financial standpoint: I think you paid seven and a half billion for it, and once Safra got back to sort of pre-acquisition margins, you got the Oracle uplift in terms of revenue multiples. So from that standpoint it was a no-brainer. But the other thing is, back in the Unix days, HP/Oracle was kind of the standard in terms of all the benchmarks and performance. But even then, and I'm sure you worked closely with HP to get the stuff to work together and to make sure it could recover according to your standards, you couldn't actually do that deep engineering you just described. Now, earlier, Subban, you stated that with X9M you get OLTP IOPS, reads, at 27 million IOPS, and you've got 19-microsecond latency. Pretty impressive stuff, impressive numbers, and you kind of just went there. But how are you measuring these numbers versus other performance claims from your competitors? Are you stacking the deck? Can you share with us there? >> Sure. So, Dave, as it stands, we are measuring it at the SQL layer. This is not some kind of IOmeter or micro-benchmark that's looking at just a flash subsystem or just a persistent memory subsystem. This is measured at the compute node, doing an entire set of transactions: how many times can you finish that, right? That's how it's being measured. Now, most people cannot measure it like that, because of the number of vendors involved in their particular solution: you've got servers from vendor A, storage from vendor B, the storage network from vendor C, and the operating system from vendor D. How do you tune all of these things on your own? You cannot, right? There are only certain bells and whistles and knobs available for you to tune. So that's how we are measuring: the 19 microseconds is at the SQL layer. What that means is that a real-world customer running a real-world workload is guaranteed to get that kind of latency. None of the other suppliers can make that claim. This is real-world capability. Now let's take a look at that 19 microseconds. We boast, and we say, hey, we are an order of magnitude, two orders of magnitude, faster than everybody else when it comes down to latency, and one might think this is all magic. While it is magical, the magic is really grounded in deep engineering and deep physics and science. The way we implement this is: first of all, we put the persistent memory tier in the storage, and that way it's shared across all of the database instances running on the compute tier. Then we have this ultra-fast, 100-gigabit Ethernet RDMA over Converged Ethernet fabric. With this, at the hardware level, between two network interface cards resident on that fabric, we create paths that enable high-priority, low-latency communication between any two endpoints on that fabric. And then, given that we implemented persistent memory in the storage tier, sitting on the memory bus of the processor there, we can perform a remote direct memory access operation from the compute tier to memory address spaces in the persistent memory of the storage tier, without the involvement of the operating system on either end: no context switches, no kernel processing latencies, and all of that. So it's hardware-to-hardware communication, with security built in, which is immutable; all of this is built into the hardware itself, so there's no software involved. You perform a read, the data comes back in 19 microseconds, boom. End of story.
>> Yeah, and that's key to my next topic, which is security, because if you're not getting the OS involved... you know, very often, if I can get access to the OS, I can get privileges, and I can really take advantage of that as a hacker. But before I go there: Oracle talks about how, what, 87% of the Fortune 100 companies run their mission-critical workloads on Exadata. So that's not only important to those companies; they're serving consumers like me, right? I'm going to my ATM, or I'm swiping my credit card. And Juan mentioned that you use a layered security model. I just sort of inferred that having this stuff in hardware, and not having to involve access to the OS, actually contributes to better security. Can you describe this in a bit more detail? >> So, yeah, what Juan was talking about was this layered security. Said differently, it is defense in depth, and that's been our mantra and philosophy for several years now. So what does that entail? As I mentioned earlier, we design our own servers. We do this for performance, and we also do it for security. We've got a number of features built into the hardware that make sure we've got immutable areas of firmware. Let me give you an example: if you take an Oracle x86 server, just a standard x86 server, not even expressed in the form of an Exadata system, even if you have super-user privileges sitting on top of an operating system, you cannot modify the BIOS as a user, or as a super-user; that has to be done through the system management network. So we put gates and protection modes, et cetera, right in the hardware itself. Now, of course, the security of that hardware goes all the way back to the fact that we own the design. We've got a global supply chain, but we make sure that our supply chain is protected and monitored, and we also protect the last mile of the supply chain: we can detect any tampering of firmware that occurred while the hardware was shipped from our factory to the customer's dock, so we know if something's been tampered with the moment it comes up at the customer site. So that's the hardware. Let's take a look at the operating system. Oracle Linux: we own the entire source code, and what ships on Exadata is the Unbreakable Enterprise Kernel. The kernel and the operating system itself have been reduced, in terms of eliminating all unnecessary packages from the operating system bundle, when we deliver it in the form of Exadata. Let's put some real numbers on that. A standard Oracle Linux, or any standard Linux distribution, has about 5,000-plus packages. These include things like print servers and web servers, a whole bunch of stuff that you're absolutely not going to use on Exadata. Why ship those? The moment you ship more stuff than you need, you are increasing the attack surface that attackers can get to. So on Exadata there are only 701 packages: compare 5,413 packages on a standard Linux with 701 on Exadata. So we reduced the attack surface. Another aspect of this: we do our own STIG SCAP benchmarking. If you take a standard Linux and run that SCAP benchmark, you'll get about a 30% pass score; on Exadata it's 90-plus percent.
>> So which means we are doing the heavy lifting of the security checks on the operating system before it even goes out of the factory. And then you layer on the Oracle Database: transparent data encryption, data redaction, being able to do authentication on a per-user basis, being able to log it, track it, and determine who accessed the system and when. So it's basically defend at every single layer. And then, of course, there's the customer's responsibility. It doesn't just stop with getting this highly secure environment; they have to do their own job of securing their network perimeter, securing who has physical access to the system, and everything else. So it's a joint responsibility. And as you mentioned, you as a consumer going to an ATM machine and withdrawing money: you withdraw 200, and you don't want to see 5,000 deducted from your account. And so all of this is made possible with Exadata and the amount of security focus that we have on the system.
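For reference, the transparent data encryption layer mentioned above looks roughly like this from the SQL side. A minimal sketch, assuming a software keystore has already been created and opened; the password, sizes, and tablespace name are made up, and on the Exadata cloud services this encryption is configured by default rather than by hand:

    -- Set a TDE master key, then create a tablespace whose data files
    -- are AES-256 encrypted on disk (illustrative names and sizes).
    ADMINISTER KEY MANAGEMENT SET KEY
      IDENTIFIED BY "WalletPwd#1" WITH BACKUP;
    CREATE TABLESPACE secure_ts
      DATAFILE SIZE 100M
      ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);

Any table placed in that tablespace is ciphertext on disk, which is what makes the 'operators can see the files but not the data' guarantee discussed later in this panel possible.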
>> And the bank doesn't want to see it the other way. So, I'm geeking out here on theCUBE, but I've got one more question for you. Juan talked about X9M as the best system for database consolidation; it was built to handle OLTP, analytics, et cetera. So I want to push you a little bit on this, because I can make the argument that this is kind of a Swiss army knife versus the best screwdriver or the best knife. How do you respond to that concern? And how do you respond to the concern that you're putting too many eggs in one basket? What do you tell people who fear that you're consolidating workloads to save money, but you're also widening the blast radius? Isn't that a problem? >> Very good question. So this is an interesting problem, and it is a balancing act, as you correctly pointed out. You want to have the economies of scale that you get when you consolidate more and more databases, but at the same time, when something happens, when hardware fails or there's an attack, you want to make sure that you have business continuity. So what we are doing on Exadata: first of all, as I mentioned, we design our own hardware and build reliability into the system. At the hardware layer, that means having redundancy: redundancy for fans and power supplies; we even have the ability to isolate faulty cores on the processor. And there's a tremendous amount of sweeping that goes on by the system management stack, looking for problem areas and trying to contain them as much as possible within the hardware itself. Then you take it up to the software layer: we use that reliability to build high availability. What that implies, and this is fundamental to the Exadata architecture, is the entire scale-out model. On a base system, you cannot go smaller than two database nodes and three storage cells. Why is that? Because you want high availability for your database instances: if something happens to one server, hardware, software, whatever, you've got another server ready to take on that load, and with Real Application Clusters you can then switch over between the two. Why three storage cells? We want to make sure that you have duplicate copies of data, because you at least want one additional copy of your data in case something happens to the disk that holds the only copy, right? And the reason we have three is that you can then stripe data across these three different servers and deliver high availability. Now take that up to the rack level. When you're really talking about the blast radius, you want to make sure that if something physically happens to a data center, you have infrastructure available for business continuity, which is why we have the Maximum Availability Architecture. Components like GoldenGate and Active Data Guard, and other ways by which we can keep two distant systems in sync, are extremely critical for us in delivering these high-availability paths. That makes the whole equation, how many eggs in one basket versus the containment of the blast radius, a lot easier to grapple with, because business continuity is paramount to us. I mean, Oracle the enterprise runs on Exadata; our high-value cloud customers run on Exadata. And I'm sure Bob's going to talk a lot more about the cloud piece of it. So I think we have all the tools in place to go after that optimization; how many eggs in one basket versus the blast radius is a question of working through the solution and the criticalities of that particular instance. >> Okay, great. Thank you for that detailed answer, Subban. We're going to give you a break: go take a breath, get a drink of water, and maybe we'll come back to you if we have time. Let's go to Bob. Bob Thome, Exadata Cloud@Customer X9M. Earlier this week, Juan said, kind of cocky, why would we even bother comparing Exadata Cloud@Customer against Outposts or Azure Stack? Can you elaborate on why that is? >> Sure. You know, first of all, I want to say I love AWS Outposts. You know why? It affirms everything that we've been doing for the past four and a half years with Cloud@Customer. It affirms that running cloud services in customers' data centers is a large and important market, large and important enough that AWS felt the need to provide these customers with an AWS option, even if it only supports a sliver of the functionality that they provide in the public cloud. And that's what they're doing: they're giving it a sliver, and they're not exactly leading with the best they could offer. So for that reason, that reason alone, there's really nothing to compare, and so we give them the benefit of the doubt and actually compare against their public cloud solutions. Another point: most customers looking to deploy on Oracle Cloud@Customer are looking for a performant, scalable, secure, and highly available platform on which to deploy what are often their most critical databases, and most often those are Oracle databases. Does Outposts run Exadata for an Oracle database? No. Does Outposts run a comparable database? Not really. Does Outposts run Amazon's top OLTP and analytics database services, the ones that are tops in their public cloud? No. We couldn't find anything that runs on Outposts that's worth comparing against Exadata Cloud@Customer, which is why the comparisons are against their public cloud products. And even with that, we're still looking at numbers like 50 times, a hundred times slower, right? So then there's the Azure Stack.
One of the key benefits that customers love about the cloud, and that I think is really underappreciated, is that it's a single-vendor solution. You have a problem with a cloud service, could be IaaS, PaaS, or SaaS, it doesn't matter: there's a single vendor responsible for fixing your issue. Azure Stack is missing big here, because they're a multi-vendor cloud solution, like AWS Outposts. Also, they don't exactly offer the same services on-prem that they offer in the cloud, and from what I hear it can be a management nightmare, requiring specialized administrators to keep that beast running. >> Okay, well, thanks for that. I'll grant you that. First of all, granted, Oracle was the first with that same vision: I always tell people, if they say 'well, we were first,' I'm like, well, actually, no, Oracle was first. Having said that, Bob, and I hear you, right now Outposts is a 1.0 version. It doesn't have all the bells and whistles, but neither did your cloud when you first launched your cloud. So let's let it bake for a while, and we'll come back in a couple of years and see how things compare, if you're up for it. >> Just remember that we're still in the oven too, right? >> Okay, all right, good. I love it; I love the chutzpah. Juan also talked about Deutsche Bank. I saw that Deutsche Bank announcement: how they're working with Oracle, modernizing their infrastructure around database, building other services around that, and kind of building their own version of a cloud for their customers. How does Exadata Cloud@Customer fit into that whole Deutsche Bank deal? Is this solution unique to Deutsche Bank? Do you see other organizations adopting Cloud@Customer for similar reasons and use cases? >> Yeah, I'll start with that. First, I want to say that I don't think Deutsche Bank is unique. They want what all customers want: to be able to run their most important workloads, the ones today running in their data center on Exadata and other high-end systems, in a cloud environment, where they can benefit from things like cloud economics, cloud operations, and cloud automation. But they can't move to public cloud. They need to maintain the service levels, the performance, the scalability, the security, and the availability that their business has come to depend on, and most clouds can't provide that. Actually, Oracle's public cloud can, because our public cloud does run Exadata, but even with that they can't do it, because as a bank they're subject to lots of rules and regulations: they cannot move their 40 petabytes of data to a point outside the control of their data center. They have thousands of interconnected databases and applications. It's like a rat's nest, right? And this is similar for many large customers; they have this problem: how do you move that to the cloud? You can move it piecemeal, I'm going to move these apps and not those apps, but you suddenly end up with some pieces up here and some pieces down there, and the thing just dies because of the long latency over a WAN connection. It just doesn't work. Or you can shut it down: let's shut it down on Friday and move everything all at once.
Unfortunately, when you're looking at an estate the size most customers have, you're not going to be able to do that; you're going to be down for a month, right? Who can tolerate that? So it's a big challenge, and Exadata Cloud@Customer lets them move to the cloud without losing control of their data, and without having to untangle those thousands of interconnected databases. So, you know, that's why these customers are choosing Exadata Cloud@Customer. More importantly, it sets them up for the future. With Exadata Cloud@Customer they can run not just in their data center, but also in public-cloud-adjacent sites, giving them a path to moving some work out of the data center and, ultimately, into the public cloud. As I said, they're not unique: other banks are watching, and some are acting. And it's not just banks. Just last week, Telefonica, the telco in Spain, announced their intent to migrate the bulk of their Oracle databases to Exadata Cloud@Customer. This will be the key cloud platform running in their data center, supporting both new services as well as mission-critical and operational systems. And one last important point: Exadata Cloud@Customer can also run Autonomous Database. Even if customers aren't ready to adopt it today, a lot of them are interested in it; they see it as a key piece of the puzzle moving forward, and customers know that they can easily start to migrate to Autonomous in the future as they're ready. That, of course, is going to drive additional efficiencies and additional cost savings. >> So, Bob, I've got a question for you, because Oracle's playing both sides, right? You've got a true public cloud now, and obviously you have a huge on-premise estate. When I talk to companies that don't own a cloud, whether it's Dell or HPE or Cisco, et cetera, they make the point, and I agree with them, by the way, that the world is hybrid: not everything's going into the cloud. However, I have a lot of respect for the folks at Amazon as well, and they believe, long term, and they're on record saying this, that ultimately all workloads are going to be running in the cloud. Now, I guess it depends on how you define the cloud; the cloud is expanding and all that other stuff. But my question to you, because again you're kind of on both sides here: are hybrid solutions like Cloud@Customer a stepping stone to the cloud, or is cloud in your data center sort of a continuous, permanent, essential play? >> That's a great question. As I recall, people debated this a few years back, when we first introduced Cloud@Customer. And at that point some people, I'm talking about even inside Oracle, saw this as a stopgap measure, to let people leverage cloud benefits until they were really ready for the public cloud. But I think over the past four and a half years the thinking has changed a little bit on this, and everyone kind of agrees that Cloud@Customer may be a stepping stone for some customers, but others see it as the end game. Not every workload can run in the public cloud, at least not given today's regulations and the issues that are faced by many of these regulated industries.
These industries move very, very slowly, and customers are content to, and in many cases required to, retain complete control of their data; they will be running with that data under their control, in the data center, for the foreseeable future. >> I've got another question, if I can take a little tangent, because the other thing I hear from the on-prem, don't-own-a-cloud folks is that it's actually cheaper to run on-prem, because they're getting better at automation, et cetera. And you get the exact opposite from the cloud guys; they roll their eyes: are you kidding me? It's way cheaper to run it in the cloud. Which is more cost-effective? Is it one of those 'it depends,' Bob? >> You know, the great thing about numbers is you can twist them to show anything that you want, right? With a spreadsheet I can sell you on anything. I think there are customers who look at it and say on-premises is cheaper, and there are customers who look at it and say the cloud is cheaper. There are a lot of ways that you may incur savings in the cloud, and a lot of it has to do with cloud economics: the ability to pay for what you're using, and only what you're using. On-prem, if you size something for your peak workload, and you probably put a little bit of a buffer on top, you're going to find that you're paying for the peak workload all the time. With the cloud, of course, we support scaling up and scaling down, and you pay for what you use. That's where the big savings is. There are also additional savings that come from the cloud vendors, like us, managing that infrastructure for you, so you no longer have to worry about it. We have a lot of automation for things that you probably used to spend hours and hours, or years, scripting yourselves, and we have UIs that make ad hoc tasks as simple as point-and-click, which eliminates errors. It's often difficult to put a cost on those things, and I think the more enlightened customers can put a cost on all of them. So the people that are saying it's cheaper to run on-prem either have a very stable workload that never changes, in an environment that never changes, or, more likely, they just really haven't thought through all the hidden costs out there. >> All right, you've got some new features; thank you for that. By the way, you've got some new features in Cloud@Customer. What are those? Do I have to upgrade to X9M to get them? >> All right. So we're always introducing new features for Cloud@Customer, but two significant things that we've rolled out recently are operator access control and elastic storage expansion.
As we discussed, many organizations are using Exadata Cloud@Customer: they're attracted to the cloud economics and the operational benefits, but they're required by regulations to retain control and visibility of their data, as well as of any infrastructure that sits inside their data center. With operator access control enabled, cloud operations staff members must request access to a customer system. The customer's IT team grants a designated person specific access to a specific component, for a specific period of time, with specific privileges. They can then view audit trails in real time, and if they see something they don't like ('hey, what's this guy doing? It looks like he's stealing my data'), boom: they can kill that operator's access, the session, the connections, everything, right away. And this gives everyone, especially customers that need to regulate remote access to their infrastructure, the confidence they need to use the Exadata Cloud@Customer service. The other thing that's new is elastic storage expansion. Customers can add additional storage servers to their system, either at initial deployment or after the fact, and this provides two important benefits. The first is that they can right-size their configuration: if they need only the minimum compute capacity, they don't need the maximum number of storage servers, and they don't have to subscribe to a fixed shape (we used to have fixed shapes) with hundreds of unnecessary database cores just to get the storage capacity. They can select a smaller system and then incrementally add storage. The second benefit is key for many customers: if you run out of storage, guess what, you can add more. And that's really important. Now, to the last part of that question: do you need a new Exadata Cloud@Customer X9M system to get these features? No, they're available for all Gen 2 Exadata Cloud@Customer systems. That's really one of the best things about cloud: the service you subscribe to today just keeps getting better and better. And unless there's some technical limitation, which is rare, most new features are available even for the oldest Cloud@Customer systems. >> Cool. My last question for you, Bob, is another one on security. Obviously, again, we talked to Subban about this; it's a big deal. How can customer data be secure if it's in the cloud, and somebody other than their own vetted employees is managing the underlying infrastructure? Is that a concern you hear a lot, and how do you handle it? >> You know, it's only natural, because a lot of these customers have big security teams, and it's their job to be concerned about that kind of stuff. Security, however, is one of the biggest but least appreciated benefits of cloud. Cloud vendors such as Oracle hire the best and brightest security experts to ensure that their clouds are secure, something that only the largest customers can afford to do. If you're a small shop, you're not going to be able to hire some of this expertise, so you're better off being in the cloud. Customers who are running in the Oracle cloud can also use Oracle's Data Safe tool, which we provide, which basically lets you inspect your databases and make
sure that everything is locked down and secure, and your data is secure. But your question was actually a little bit different: it was about potential internal threats to a company's data, given that the cloud vendor's employees, not the customer's, have access to the infrastructure that sits beneath the databases. And really, the first and most important thing we do to protect customers' data is that we encrypt the database by default. Subban listed a whole laundry list of things, but that's the one thing I want to point out: we encrypt your database. Yes, it sits on our infrastructure. Yes, our operations persons can actually see those data files sitting on the infrastructure, but guess what: they can't see the data. The data is encrypted; all they see is kind of a big encrypted blob, so they can't access the data themselves. And, as you'd expect, we have very tight controls over operations access to the infrastructure. They need to securely log in using mechanisms designed to prevent unauthorized access, and then all access is logged and suspicious activities are investigated. But that still may not be enough for some customers, especially the ones I mentioned earlier in the regulated industries, and that's why we offer operator access control. As I mentioned, that gives customers complete control over access to the infrastructure: the who, the when, what ops can do, and how long they can do it. Customers can monitor in real time, and if they see something they don't like, they stop it immediately. Lastly, I just want to mention Oracle's Database Vault feature. This prevents administrators from accessing data, protecting data from rogue operators, whether they be from Oracle or from the customer's own IT staff. This database option, Database Vault, is included when running a license-included service on Exadata Cloud@Customer, so you basically get it with the service.
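To sketch what that Database Vault protection looks like in practice: a realm fences off schemas or objects so that even highly privileged accounts cannot read them unless explicitly authorized. A hedged, illustrative example using the DBMS_MACADM package; the realm, schema, and table names are invented, and parameter details may vary by release:

    -- Create a realm around HR.EMPLOYEES and audit failed access attempts
    -- (all names here are illustrative).
    BEGIN
      DBMS_MACADM.CREATE_REALM(
        realm_name    => 'HR Data Realm',
        description   => 'Blocks privileged users from HR data',
        enabled       => DBMS_MACUTL.G_YES,
        audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);
      DBMS_MACADM.ADD_OBJECT_TO_REALM(
        realm_name   => 'HR Data Realm',
        object_owner => 'HR',
        object_name  => 'EMPLOYEES',
        object_type  => 'TABLE');
    END;
    /

With the realm in place, a DBA without a realm authorization gets an insufficient-privileges error on that table, while the application schema, once authorized to the realm, keeps working.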
>> Got it. All right, Bob, thank you so much; unbelievable. I mean, we've got a lot to unpack there, but we're going to give you a break now and go to Tim, Tim Chien. Zero Data Loss Recovery Appliance: we always love that name. We think the big guy named it, but nobody will tell us. We've been talking about security, and there's been a lot of news around ransomware attacks, in every industry, around the globe. Any knucklehead with a high school diploma can become a ransomware attacker: go on the dark web, get ransomware as a service, put a stick in, take a piece of the vig, and hopefully get arrested. When you think about the database, how do you deal with the ransomware challenge? >> Yeah, Dave, that's an extremely important and timely question. We are hearing this from our customers: we talk about HA and backup strategies, and ransomware has been coming up more and more. And the unfortunate thing is that these ransoms are actually paid, in the hope of regaining the ability to access the data. What that tells me is that today's recovery solutions and processes are not sufficient to get these systems back in a reliable and timely manner, so you have to pay the ransom to get even a hope of getting the data back. Now, for databases this can have a huge impact, because we're talking about transactional workloads. And so even a compromise of just a few minutes, a blip, can affect hundreds or even thousands of transactions. This can literally represent hundreds of lost orders, if you're a big manufacturing company, or millions of dollars' worth of financial transactions in a bank. And that's why protecting databases at a transaction level is especially critical for ransomware, and that's a huge contrast to traditional backup approaches. >> So how do you approach that? What do you do specifically for ransomware protection for the database? >> Yeah, so we have the Zero Data Loss Recovery Appliance, for which we announced the X9M generation. It is really the only solution in the market which offers that transaction-level protection, allowing all transactions to be recovered with zero RPO. Zero, again. And this is only possible because Oracle has very innovative and unique technology called real-time redo, which captures all the transactional changes from the databases on the appliance, where they are also stored. Moreover, the appliance validates all these backups and redo: you want to make sure that you can recover them after you've sent them, right? So it's not just a file-level integrity check on a file system; it's actual database-level validation, that the Oracle blocks and the redo that I mentioned can be restored and recovered as a usable database. Any kind of malicious attack or modification of that backup data, in transmission or once it's stored on the appliance, would be immediately detected and reported by that validation. So this allows administrators to take action, like removing that system from the network. It's a huge leap in terms of what customers can get today. The last thing I want to point out is what we call a cyber vault deployment. A lot of customers in the industry are creating what we call air-gapped environments: a separate location where their backup copies are stored, physically network-separated from the production systems, which prevents ransomware from infiltrating that last good copy of backups. So you can deploy Recovery Appliance in a cyber vault and have it synchronized at random times, when the network is available, to keep it in sync. That, combined with our transaction-level, zero-data-loss validation, is a nice package, and really a game-changer in protecting and recovering your databases from modern-day cyber threats. >> Okay, great. Thank you for clarifying that air gap piece, because there was some confusion about that. Every data protection and backup company that I know has a ransomware solution; it's like the hottest topic going. You've got newer players in recovery and backup, like Rubrik and Cohesity, that have raised a ton of dough. Dell has got solutions, HPE just acquired Zerto to deal with this problem, IBM has got stuff, Veeam seems to be doing pretty well, and Veritas has a range of recovery solutions. They're sort of all out there. What's your take on these players and their strategy, and how do you differentiate? >> Yeah, it's a pretty crowded market, like you said.
>> Okay, great. Thank you for clarifying that air gap piece, because there was some confusion about that. Every data protection and backup company that I know has a ransomware solution — it's like the hottest topic going. You've got newer players in recovery and backup like Rubrik and Cohesity, who raised a ton of dough; Dell has got solutions; HPE just acquired Zerto to deal with this problem, among other things; IBM has got stuff; Veeam seems to be doing pretty well; Veritas has a range of recovery solutions. They're sort of all out there. What's your take on these and their strategy, and how do you differentiate? >> Yeah, it's a pretty crowded market, like you said. I think the first thing you really have to keep in mind and understand is that these new and up-and-coming vendors started in what we call the copy data management, or CDM, space. They're not traditional backup and recovery products. The purpose CDM products are built for is to provide fast point-in-time copies for test/dev, non-production use — and that's a viable problem that needs a solution. So you create a one-time copy, and then you create snapshots after you apply incremental changes to that copy, and a snapshot can then be quickly restored and presented as if it were a fully populated file. And this is all done through block pointers in the underlying storage. So all of this kind of sounds really cool and modern, right? New and up-and-coming, and lots of people in the market doing this. Well, it's really not that modern, because storage snapshot technology has been around for years. What these new vendors have been doing is essentially repackaging the old technology for backup and recovery use cases, with a somewhat easier-to-use automation interface wrapped around it.
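As a rough illustration of the block-pointer mechanics Tim describes, here is a toy copy-on-write snapshot in Python — a sketch of the general CDM technique, not any vendor's implementation. Note that `restore` just swaps a pointer map back in: no data moves, which is exactly why it looks instant, and also why the "restored" copy keeps living on backup storage.

```python
# A toy copy-on-write snapshot built on block pointers.
class Volume:
    def __init__(self):
        self.blocks = {}   # block number -> data
        self.snaps = []    # each snapshot freezes the pointer map

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def snapshot(self):
        self.snaps.append(dict(self.blocks))  # copies pointers, not data
        return len(self.snaps) - 1

    def restore(self, snap_id):
        # "Instant restore": swap the pointer map back in. No data is
        # copied back to production storage.
        self.blocks = dict(self.snaps[snap_id])

vol = Volume()
vol.write(1, "A"); vol.write(2, "B")
snap = vol.snapshot()
vol.write(2, "B-corrupted")
vol.restore(snap)
print(vol.blocks)  # {1: 'A', 2: 'B'} — recovered in place, instantly
```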
>> Yeah. So you mentioned copy data management. Last year Actifio — they started that whole space, from what I recall, and at one point they were valued at more than a billion dollars. They were acquired by Google, and as I say, they kind of created that category. So fast forward a little bit — nine months, a year, whatever it's been — do you see that Google Actifio offer in customer engagements? Is that something that you run into? >> We really don't. It was really popular and well known some years ago, but we really don't hear about it anymore. After the acquisition, if you look at all the collateral and the marketing, they are really a CDM and backup solution exclusively for Google Cloud use cases, and they're not being positioned for on-premises or any other use cases outside of Google Cloud. That's 90-plus percent of the market that isn't addressable now by Actifio, so we really don't see them in any of our engagements at this time. >> I want to come back and push a little bit on some of the tech that you said is really not that modern. They certainly position it as modern, and a lot of the engineers building these new backup and recovery capabilities came from the hyperscalers. Whether it's copy data management or, quote-unquote, modern backup and recovery, it's kind of a data management, nice all-in-one solution that seems pretty compelling. How does Recovery Appliance specifically stack up? A lot of people think it's a niche product for really high-end use cases. Is that fair? How do you see it? >> Yeah, so I think it's important to understand, again, that the fundamental use of this technology is to create data copies for test/dev, and that's really different from operational backup and recovery, in which you must have the ability to do full and point-in-time recovery in any production outage or DR situation. And more importantly, after you recover and your applications are back in business, performance must continue to meet service levels as before. When you look at a CDM product, you restore a snapshot, and with that product the application is brought up on the restored snapshot. What happens? Your production application is now running on read-writeable snapshots on backup storage. Remember, they don't restore the data back to production-level storage; they're restoring it as a snapshot onto their storage. And so you have a huge difference in performance when running these applications on their instantly recovered, if you will, database. So to meet true operational requirements, you have to fully restore the files to production storage, period. And Recovery Appliance was first and foremost designed to accomplish this — it's an operational recovery solution. We accomplish that, like I mentioned, with this real-time transaction protection, and we have incremental-forever backup strategies, so you're taking just the changes every day, and you can create these virtual full backups that are quickly restored — fully restored, if you will — at 24 terabytes an hour. We validate and document that performance very clearly on our website, and of course we provide continuous recovery validation for all the backups stored on the system. So it's a very nice, complete solution. It scales to meet your demands — hundreds of thousands of databases. These CDM products might seem great, and they work well for a few databases, but then you put a real enterprise load of hundreds of databases on them, and we've seen a lot of times where it just buckles; it can't handle that kind of load at that scale. And this is important, because customers read the marketing and the collateral and think, hey, instant recovery — why wouldn't I want that? Well, it's not as nice as it looks; it always sounds better, right? And so we have to educate them about exactly what that means for the database, especially for backup and recovery use cases, which are not really handled well by those products.
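Tim's incremental-forever idea can be sketched in a few lines of Python. The block maps below are invented, and this is not the appliance's actual format — it just shows how a point-in-time "virtual full" is synthesized without ever re-copying unchanged blocks.

```python
# Toy incremental-forever backups with synthesized "virtual fulls".
base = {1: "A", 2: "B", 3: "C"}   # day-0 full backup: block -> data
incrementals = [
    {2: "B'"},                    # day 1: only the changed blocks
    {3: "C'", 4: "D"},            # day 2
]

def virtual_full(base, incrementals, day):
    """Synthesize a full backup as of `day` by replaying daily changes
    on top of the base — no unchanged block is ever copied twice."""
    full = dict(base)
    for delta in incrementals[:day]:
        full.update(delta)
    return full

print(virtual_full(base, incrementals, 2))  # {1:'A', 2:"B'", 3:"C'", 4:'D'}
```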
>> I know I'm way over — I had a lot of questions on this announcement, and I was going to let you go, Tim, but you just mentioned something that gave me one more question, if I may. You talked about supporting hundreds of thousands of databases, petabytes. Do you have real-world use cases that actually leverage the appliance in these types of environments? Where does it really shine? >> Yeah, let me give you two real quick ones. We have a company, Energy Transfer, the major natural gas and pipeline operator in the U.S., so they're a big part of our country's critical infrastructure services. We know ransomware and these kinds of threats are very much viable — we saw the Colonial Pipeline incident that happened, an attack on critical services. Energy Transfer was running lots of databases, and their legacy backup environment just couldn't keep up with their enterprise needs. They had backups taking well over a day and restores taking several hours, so they had problems and couldn't meet their SLAs. They moved to Recovery Appliance, and now they're seeing backups complete, with that incremental forever, in just 15 minutes. That's a 48-times improvement in backup time. And they're also seeing restores completing in about 30 minutes versus several hours, so it's a huge difference for them. They also get that nice recovery validation and monitoring by the system — they know the health of their enterprise at their fingertips. The second quick one is a global financial services customer. They have over 10,000 databases globally, and they really couldn't find a solution other than a throw-more-hardware kind of approach to fix their backups. Well, that didn't fix the failures and the issues. So they moved to Recovery Appliance, and they saw their failed backup rates go down dramatically. They saw four times better backup and restore performance, and they also have a very nice centralized way to monitor and manage the system — a real-time view, if you will, of data protection health for their entire environment. They can show this to executive management and auditing teams, which is great for compliance reporting. And so, with what they've done, they now have north of 50-plus recovery appliances deployed across their global enterprise. >> Love it. Thank you for that. Guys, great power panel. We have a lot of Oracle customers in our community, and the best way to help them is for me to ask you a bunch of questions and get the experts to answer. So I wonder if you could bring us home — maybe you could just give us the top takeaways that you want your customers, and our audience, to remember from this announcement. >> Sure. I want to actually pick up from where Tim left off and talk about a real customer use case. This is hot off the press: one of the largest banks in the United States decided that they needed to do a performance software update on 3,000 of their database instances, spanning 68 Exadata clusters — a massive undertaking. They finished the entire task in three hours. Three hours to update 3,000 databases across 68 Exadata clusters. Talk about availability — try doing this on any other infrastructure; no way anyone is going to be able to achieve it. So that's on the availability side. We are engineering in all the aspects of database management — performance, security, availability, being able to provide redundancy at every single level — it's all part of the design philosophy and how we are engineering this product. And as far as we are concerned, the goal is forever: we are just going to continue down this path of increasing performance and increasing the security of the infrastructure, as well as the Oracle database, and keep going. While these have been great results that we've delivered with Exadata X9M, the journey is on. And to our customers: the biggest advantage you're going to get from the kind of performance metrics we are driving with Exadata is consolidation. Consolidate more — move more database instances onto the Exadata platform, gain the benefits of that consolidation, reduce your operational expenses, reduce your capital expenses, reduce your management expenses. Bring all of those down, and your total cost of ownership is guaranteed to go down. Those are my key takeaways, Dave. >> Guys, you've been really generous with your time. Subin, Bob, Tim — I appreciate you taking my questions and your willingness to go toe-to-toe. Really, thanks for your time. >> You're welcome, David. Thank you. >> Thank you.
>> And thank you for watching this video exclusive from theCUBE. This is Dave Vellante, and we'll see you next time. Be well.

Published Date : Oct 4 2021


Juan Loaiza, Oracle | CUBE Conversation, September 2021


 

(bright music) >> Hello, everyone, and welcome to this CUBE video exclusive. This is Dave Vellante, and as I've said many times, what people sometimes forget is that Oracle's chairman is also its CTO, and he understands and appreciates the importance of engineering — it's the lifeblood of tech innovation — and Oracle continues to spend money on R&D. Over the past decade, the company has evolved its Exadata platform by investing in core infrastructure technology. For example, Oracle initially used InfiniBand, which in and of itself was a technical challenge to exploit for higher performance. That was an engineering innovation, and now it's moving to RoCE to try to deliver best-of-breed performance by today's standards. We've seen Oracle invest in machine intelligence for analytics, converge OLTP and mixed workloads, and drive automation functions like indexing into its Exadata platform. The point is, we've seen a consistent cadence of improvements with each generation of Exadata, and it's no secret that Oracle likes to brag about the results of its investments. At its heart, Oracle develops database software, and databases have to run fast and be rock solid. So Oracle loves to throw around impressive numbers, like 27 million 8K IOPS and analytics scans running at more than a terabyte per second. Look, Oracle's objective is to build the best database platform and convince its customers to run on Oracle instead of doing it themselves or in some other cloud. And because the company owns the full stack, Oracle has a high degree of control over how to optimize that stack for its database. So this is how Oracle intends to compete with Exadata, Exadata Cloud@Customer and other products like ZDLRA against AWS Outposts, Azure Arc and do-it-yourself solutions. And with me to talk about Oracle's latest innovation, its Exadata X9M announcement, is Juan Loaiza, who's the Executive Vice President of Mission Critical Database Technologies at Oracle. Juan, thanks for coming on theCUBE. Always good to see you, man. >> Thanks for having me, Dave. It's great to be here. >> All right, let's get right into it and start with the news. Can you give us a quick overview of the X9M announcement today? >> Yeah, glad to. So we've had Exadata in the market for a little over a dozen years, and every year, as you mentioned, we make it better and better. This year we're introducing our X9M family of products, and as usual, we're making it better across all the different dimensions: for OLTP, for analytics, lower costs, higher IOPS, higher throughput, more capacity — so it's better all around. And we're introducing a lot of new software features as well that make it easier to use, more manageable, more highly available, with more options for customers, more isolation, more workload consolidation. So it's our usual better and better every year. We're already way ahead of the competition in pretty much every metric you can name, but we're not sitting back. We have the pedal to the metal, and we're keeping it there. >> Okay, so as always, you announced some big numbers — you're referencing them, and I did in my upfront narrative. You've claimed double- to triple-digit performance improvements. Tell us, what's the secret sauce that allows you to achieve that magnitude of performance gain? >> Yeah, there's a lot of secret sauce in Exadata.
First of all, we have custom-designed hardware: we design the systems from the top down, so it's not a generic system. It's designed with the specific and sole focus of running the database, and so we have a lot of technologies in there. Persistent memory is a really big one that we've introduced that enables super low response times for OLTP. RoCE — remote RDMA over Converged Ethernet — with a hundred-gigabit network is a big thing; offload to the storage servers is a big thing; the columnar processing in the storage is a huge thing. So there's a lot of secret sauce, most of it software- and hardware-related, and the interesting thing about it is that it's very unique. We've been introducing more and more technologies and actually advancing our lead with very unique, very effective technologies like the ones I mentioned, and we're continuing that with our X9M generation. >> So that persistent memory allows you to do a write — an atomic write — directly to memory, and then what, you update asynchronously to the backend at some point? Can you double-click on that a little bit? >> Yeah, so we use persistent memory as kind of the first tier of storage. And the thing about persistent memory is it's persistent: unlike normal memory, it doesn't lose its contents when you lose power, so it's just as good as flash or traditional spinning disks in terms of storing data. And the integration that we do is what's called remote direct memory access — the hardware sends the new data directly into persistent memory in the storage with no software, getting rid of all the software layers in between, and that's what enables us to achieve this extremely low latency. Once it's in persistent memory, it's stored; it's as good as being in flash or on disk, and there's nothing else that we need to do. We do age things out of persistent memory to keep only hot data in there. That's one of the tricks that we do, because persistent memory is more expensive than flash or disk, so we tier it: we age data in as it becomes hot and age it out as it becomes cold, but once it's in persistent memory, it's as good as stored. It is stored. >> I love it. Flash is a slow tier now. (laughs) >> Right. I mean, persistent memory is about an order of magnitude faster than flash, and flash is more than an order of magnitude faster than disk drives, so it is a new technology that provides big benefits, particularly for latency on OLTP.
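The hot/cold aging Juan describes is essentially cache-style tiering. Here is a toy sketch in Python; the tiny capacity and the LRU policy are illustrative assumptions, not Exadata's actual algorithm.

```python
from collections import OrderedDict

# A toy two-tier store: a small, fast "pmem" tier in front of a larger,
# cheaper "flash" tier.
PMEM_CAPACITY = 3

pmem = OrderedDict()  # hot tier: block -> data, in LRU order
flash = {}            # capacity tier

def write(block, data):
    """New data lands in the fast tier; cold blocks age out to flash."""
    pmem[block] = data
    pmem.move_to_end(block)                # mark as hot
    while len(pmem) > PMEM_CAPACITY:
        cold_block, cold_data = pmem.popitem(last=False)
        flash[cold_block] = cold_data      # still durably stored, just slower

def read(block):
    if block in pmem:                      # the order-of-magnitude-faster path
        pmem.move_to_end(block)
        return pmem[block]
    data = flash[block]
    write(block, data)                     # promote re-warmed data back to pmem
    return data
```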
>> Great, thank you for that. Okay, we'll get out of the plumbing. Let's talk about what this announcement means to customers. How does all this performance — and you've got a lot of scale here — translate into tangible results, say, for a bank? >> Yeah, so there are a lot of ways. I mentioned performance is always a big thing with Exadata. We're increasing the performance significantly: for OLTP, 50 to 60% performance improvements; for analytics, 80% performance improvements; and in terms of cost effectiveness, a 30 to 60% improvement. So all of these things are big benefits. You know, one of the differences between a server product like Exadata and a consumer product is that performance also translates into cost. If I get a new smartphone that's faster, it doesn't actually reduce my costs; it just makes my experience a little better. But with a server product like Exadata, if it's 50% faster, I can translate that into serving 50% more users, 50% more workload, 50% more data — or I can buy a smaller system to run the same workload. So when we talk about performance, it also means lower costs. So big customers of ours — banks, telecoms, retailers, et cetera — can take that performance and turn it into better response times, or they can take that performance and turn it into lower costs, and everybody loves both of those things. So both of those are big benefits for our customers.
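A rough sketch of that performance-to-cost translation, with invented numbers — the point is simply that a per-node speedup can be spent either on headroom or on a smaller footprint:

```python
# Back-of-the-envelope: how a pure speedup becomes a cost reduction.
current_nodes = 8      # nodes needed today (assumed)
speedup = 1.5          # "50% faster" per node

# Option 1: same workload on faster nodes -> fewer nodes.
needed_nodes = current_nodes / speedup
print(needed_nodes)    # ~5.33, i.e. roughly a one-third smaller system

# Option 2: same system -> more capacity.
print(f"{speedup - 1:.0%} more users, workload, or data")  # 50%
```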
>> Got it, thank you. Now, in a move that was maybe a little bit controversial, you stated flat out that you're not going to bother to compare Exadata Cloud@Customer performance against AWS Outposts and Azure Stack; rather, you chose to compare to RDS, Redshift and Azure SQL. Why was that? >> Yeah, so our Exadata runs in the public cloud, we have Exadata that runs in Cloud@Customer, and we have Exadata that runs on-prem. And AWS and Azure have something a little more similar to Cloud@Customer, where they take their cloud solutions and put them in the customer data center. So when we came out with our new X9M Cloud@Customer, we looked at those technologies, and honestly, we couldn't even come up with a good comparison with their equivalent — for example, AWS Outposts — because those products really just don't run there. For example, the two database products that Amazon promotes are Aurora for OLTP and Redshift for analytics. Well, those two can't even run at all on the Outposts product. So it's kind of like beating up on a child — it doesn't make sense; they're out of our weight class, so we're not even going to compare against them. Instead we compared what we run, both in public cloud and Cloud@Customer, against their best products — the Redshifts and the Auroras in their public cloud — which are their most scalable, most available products. With their equivalent of Cloud@Customer, not only does it not perform, it doesn't run at all: their premier products don't run at all on those platforms. >> Okay, but RDS does, right? And I think Redshift and Azure SQL will run a version, so you compared against those. What were the results of the benchmarks when you made those comparisons? >> Yeah, so compared against their public cloud or Cloud@Customer, we generally get results that are something like 50 times lower latency and close to a hundred times higher analytic throughput, so it's orders of magnitude. We're not talking 50% — we're talking 50 times. So compared to those products, we're really in a different league. It's kind of like they're the middle-school little league and we're the professional team. It's not even the same league. >> All right, now you also chose to compare the X9M performance against on-premises storage systems. Why, and what were those results? >> Yeah, so with on-premises, traditionally customers bought conventional storage and that kind of thing, and those products have advanced quite a bit. And again, those aren't optimized — those aren't designed to run the database — but some customers have traditionally deployed them. There are less and less of those these days, but we are many times faster there, both on OLTP and analytic performance. With analytics, it can be up to 80 times faster — so again, dramatically better. But there are still a lot of on-premises systems, so we didn't want to ignore that fact and compare only to cloud products. >> So these are like-to-like in the sense that they're running the same level of database — you're not playing games in terms of the versioning, obviously, right? >> Actually, we're giving them a lot of the benefit. We're taking their published numbers, which aren't even running a database — they use low-level benchmarking tools to generate those numbers. So we're comparing our full end-to-end database-to-storage numbers against the low-level I/O tools they've published in their data sheets. So again, we're giving them the benefit of the doubt, and we're still orders of magnitude better. >> Okay, now another claim that caught our attention: you said that 87% of the Fortune 100 organizations run Exadata, and you're claiming many thousands of other organizations globally. Can you paint a picture of the ICP, the ideal customer profile, for Exadata? What does a typical customer look like, and why do they use Exadata, Juan? >> Yeah, so the ideal customer is pretty straightforward: customers that care about data. That's pretty much it. (Dave laughs) If you care about data — if you care about the performance of data, the availability of data, manageability, security — those are the customers that should be looking strongly at Exadata, and those are the customers that are adopting it. That's why, as you mentioned, 87% of the global Fortune 100 have already adopted Exadata. If you look at a lot of industries — for example, pretty much every major bank in the entire world is running Exadata, and they're running it for their mission-critical workloads: financial trading, regulatory compliance, user interfaces, the stuff that really matters. But in addition to the biggest companies, we also have thousands of smaller companies that run it for the same reason: their data matters to them, and it's frankly the best platform. That's why we get chosen by these very sophisticated customers over and over again, and why this product has grown to encompass most of the major corporations in the world, and governments too. >> Now, I know Deutsche Bank is a customer, and I guess now an engineering partner, from the announcement I saw earlier this summer. They're using Cloud@Customer, and they're collaborating on things like security, blockchain and machine intelligence, and my inference is that Deutsche Bank is looking to build new products and services powered by your platforms. What can you tell us about that? Can you share any insights? Are they going to be using X9M, for example? >> Yes, Deutsche Bank is a partnership that we announced a few months ago. It's a major partnership — Deutsche Bank is one of the biggest banks in the world. They traditionally are an on-premises customer, and what they've announced is that they're going to move almost their entire database estate to our Exadata Cloud@Customer platform. They want to go with a cloud platform, but they're big enough that they want to run it in their own data center for certain regulatory reasons.
And so the announcement we made with them is that they're moving the vast bulk of their data estate to this platform, including their core banking and regulatory applications — their most critical applications. Obviously, they've done a lot of testing, they've done a lot of trials, and they have the confidence to make this major transition to a cloud model with the Exadata Cloud@Customer solution. We're also working with them to enhance the product and to work in various other fields, like you mentioned — machine learning, blockchain, those kinds of projects as well. So it's a big deal when one of the biggest, most conservative, best-respected financial institutions in the world says, "We're going all in on this product." That's a big deal. >> Now, outside of banking: a number of years ago I stumbled upon an installation, or a series of installations, at Samsung, and found out about them as a customer. I believe it's now public, but they've got something like 300 Exadatas. So help us understand: is it common that customers are building these kinds of Exadata farms, or is this an outlier? >> Yeah, so we have many large customers that have dozens to hundreds of Exadatas, and it's pretty simple: they start with one or two, they see the benefits themselves, and then it grows. And Samsung is probably the biggest, most successful and most respected electronics company in the world. They are a giant company with a lot of different sub-units, and they do their own manufacturing, so manufacturing is one of their most critical applications — but they have lots of other things they run their Exadatas for. So we're very happy to have them as one of our major customers running Exadata. And by the way, Exadata is very big in electronics and in manufacturing; it's not just banking and that kind of thing. Manufacturing is incredibly critical — if you're a company like Samsung, that's your bread and butter. If your factory stops working, you have huge problems; you can't produce products. And you want to improve quality, improve tracking, improve customer service — all of that requires a huge amount of data. Customers like Samsung are generating terabytes and terabytes of data per day from their manufacturing systems; they track every single piece, everything that happens. So again: big deal, they care about data — they care deeply about data — and they're a huge Exadata customer. That's kind of the way it works. They've used it for many years, their use is growing and growing, and now they're moving to the cloud model as well. >> All right, so we talked about some big customers, and Juan, as you know, we've covered Exadata since its inception — we were there at the announcement, and we've always stressed the fit in our research with mission-critical workloads, which especially resonates with these big customers. My question is: how does Exadata resonate with the smaller customer base? >> Yeah, so we talk a lot about the biggest customers because, honestly, they have the most critical requirements, and at some level they have worldwide requirements — if one of the major financial institutions goes down, it's not just them that's affected; that reverberates through the entire world. But there are many other customers that use Exadata. Maybe their application doesn't stop the world, but it stops them, so it's very important to them.
And so one of the things we've introduced in our Cloud@Customer and public cloud Exadata platforms is the ability for Oracle to manage all the infrastructure, which enables smaller customers that don't have as much IT sophistication to adopt this very mission-critical technology. That's one of the big advancements. Now, we've always had smaller customers, but now we're getting more and more: universities, governments and smaller businesses are adopting Exadata, because the cloud model for adoption is dramatically simpler. Oracle does all the administration, all the low-level stuff; they don't have to get involved in it at all — they can just use the data. And on top of that comes our Autonomous Database, which makes it even easier for smaller customers to adopt. So Exadata, which some people think of as a very high-end platform, is — in this cloud model, and particularly with Autonomous Database — very accessible and very useful for any size of customer, really. >> Yeah, by all accounts, I wouldn't debate that Exadata has been a tremendous success. But you know, a lot of customers still prefer to roll their own, to do it themselves, and when I talk to them and ask, "Okay, why is that?", they feel it limits their reliance on a single vendor and gives them a better ability to build what I call a horizontal infrastructure that can support, say, non-Oracle workloads. So what do you tell those customers? Why should they run Oracle Database on Exadata instead of on a DIY infrastructure? >> Yeah, so that debate has gone on for a lot of years, and actually, what I see is less and less of that debate these days. Initially, many customers were used to building their own — that's kind of what they did, and they were pretty good at it. And when we talk about these major banks, those are the kinds of people that are really good at it: they have giant IT departments. If you look at a major bank in the world, they have tens of thousands of people in their IT departments; these are gigantic, multi-billion-dollar organizations, so they were pretty good at this kind of thing. What we have shown customers is that you can't build this yourself. There's so much software that we've written to integrate with the database that you just can't build it yourself; it's not possible. It's kind of like trying to build your own smartphone — you really can't do it, given the scale and complexity of the problem. And now, as the cloud model comes in, customers are realizing, hey, all this attention to building my own infrastructure is kind of last decade, last century. We need to move on to more of an as-a-service model so we can focus on our business. Let enterprises that are specialized in infrastructure, like Oracle — that are really, really good at it — take care of the low-level details, and let me focus on things that differentiate me as a business. It's not going to differentiate them to stand up their own storage for the database. That's not a differentiator, and they can't do it nearly as well as we can, and a lot of that is because we write a lot of specialized technology and software that they just can't build themselves. It's just like you can't build your own smartphone; it's really not possible. >> Now, another area that we've covered extensively — we were there at the unveiling as well — is ZDLRA, the Zero Data Loss Recovery Appliance.
We've always liked this product, especially for mission-critical workloads, where you can justify near-zero data loss. But we always saw it as somewhat of a niche market. First of all, is that fair? And what's new with ZDLRA? >> Yeah, ZDLRA has been in the market for a number of years. We have some of the biggest corporations in the world running on it, and one of the big benefits has been zero data loss. So again, if you care about data, you can't lose data — you can't restore to last night's backup if something happens. If you're a bank, you can't restore everybody's data to last night. Suppose you made a deposit during the day; it's like, "Hey, sorry, Mr. Customer, your deposit — well, we don't have any record of it anymore, because we had to restore to last night's backup." That doesn't work. It doesn't work for airlines, it doesn't work for manufacturing. That whole model is obsolete, so you need zero data loss, and that's why we introduced the Zero Data Loss Recovery Appliance, and it's been very successful in the market. In addition to zero data loss, it actually provides much faster restores, much more reliable restores, and it's more scalable, so it has a lot of advantages. With our X9M generation, we're introducing several new capabilities. First of all, it has higher capacity, so we can store more backups and keep data for longer. Another thing is we're actually dropping the price of the entry-level configuration of ZDLRA, which makes it more affordable and more usable for smaller businesses, so that's a big deal. And then the other thing that we're hearing a lot about — if you read the news at all, you hear a lot about ransomware. This is a major problem for the world: cyber criminals breaking into your network and holding the data ransom. So we've introduced what we call cyber vault capabilities in ZDLRA. They help address this ransomware issue that's rampant throughout the world, and there is now regulatory compliance around ransomware that financial institutions in particular have to conform to, so we're introducing new capabilities in that area as well, which is a big deal. In addition, we now have the ability to have multiple ZDLRAs in a large enterprise, and if something happens to one, we automatically fail backups over to another. We can replicate across them, which makes it much more resilient, with replication across different recovery appliances — so a lot of new improvements there as well. >> Now, is an air gap part of that solution for ransomware? >> No. If you're continuously streaming changes to it, you really can't have an air gap there, but you can protect the data, and there are a number of technologies to do that. For example, one of the things a cyber criminal wants to do is take control of your data and then get rid of your backups, so you can't restore them. A simple example of one thing we're doing is saying, hey, once we have the data, you can't delete it for a certain number of days. So you might say, "For 30 days, I don't care who you are, I don't care what privileges you have, I don't care anything — I'm holding onto that data for at least 30 days." So a cyber criminal can't come in and say, "I'm going to get into the system and delete that stuff or encrypt it," or something like that. That's a simple example of one of the things that the cyber vault does.
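A toy version of that retention lock can be written in a few lines of Python. This is a sketch of the cyber vault idea — refuse deletes until the retention window expires, no matter who asks — not the appliance's real policy engine; the 30-day window follows the example in the interview.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

class LockedBackup:
    def __init__(self, payload: bytes):
        self.payload = payload
        self.locked_until = datetime.now(timezone.utc) + RETENTION

    def delete(self, requester: str) -> None:
        # No privilege check can shorten the window — not even "admin".
        if datetime.now(timezone.utc) < self.locked_until:
            raise PermissionError(
                f"{requester}: backup is immutable until {self.locked_until}"
            )
        self.payload = b""

backup = LockedBackup(b"redo and blocks")
try:
    backup.delete("admin")
except PermissionError as e:
    print(e)  # refused for 30 days, regardless of privileges
```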
>> So, even as an administrator, I can't change that policy? >> That's right. That's one of the goals: it doesn't matter what privileges you have, you can't change that policy. >> Does that eliminate the need for an air gap, or would you recommend another layer of protection? What's your recommendation to customers? >> We always recommend multiple layers of protection. For example, with our ZDLRA, we offload tape backups directly from the appliance, and a great way to protect the data from any kind of threat is to put it on tape — guess what, once that tape is filed away, I don't care what kind of cyber criminal you are: if you're remote, you can't access that data. So we always promote multiple layers, multiple technologies, to protect the data, and tape is a great way to do that. We can also archive, in addition to tape, to the public cloud — to our object storage servers — and we can archive to what we call our ZFS appliance, which is a very low-cost storage appliance. So there are a number of secondary archive copies that we offload and implement for customers, and we make it very easy to do. So yes, you want multiple layers of protection. >> Got it, okay. Your tape is your ultimate air gap, ZDLRA is your low-RPO device, and you've got cloud kind of in the middle — maybe that's your cheap-and-deep solution — so you have some options. >> Juan: Yes. >> Okay, last question. Summarize the announcement: if you had to mention two or three takeaways from the X9M announcement for our audience today, what would you choose to share? >> I mean, it's pretty straightforward. It's the new generation. It's significantly faster for OLTP and for analytics, with significantly better consolidation, and it's more cost-effective — that's the big picture. There are also a lot of software enhancements to make it better: improved management, better usability, better disaster recovery. I talked about some of the cyber vault capabilities. So it's improved across all the dimensions, and not in small ways — in big ways. We're talking 50% improvements, 80% improvements. That's a big change. And we're also keeping the price the same, so when you get a 50 or 80% improvement, we're not increasing the price to match it; you're getting much better value as well. And that's pretty much what it is: the same product, even better. >> Well, I love this cadence that we're on. We love having you on these video exclusives. We have a lot of Oracle customers in our community, so we appreciate you giving us the inside scoop on these announcements. Always a pleasure having you on theCUBE. >> Thanks for having me. It's always fun to be with you, Dave. >> All right, and thank you for watching. This is Dave Vellante for theCUBE, and we'll see you next time. (bright music)

Published Date : Sep 28 2021


Video Exclusive: Oracle Announces New MySQL HeatWave Capabilities


 

(bright music) >> Surprising many people, including myself, Oracle last year began investing pretty heavily in the MySQL space, and those investments continue today. Let me give you a brief history. Last December, Oracle made its first HeatWave announcement, where it converged OLTP and OLAP together in a single MySQL database. Now, what wasn't surprising was the approach Oracle took: it leveraged hardware to improve performance and lower cost. You see, when Oracle acquired Sun more than a decade ago, rather than rely on loosely coupled partnerships with hardware vendors to speed up its databases, Oracle set out on a path to tightly integrate hardware and software innovations using its own in-house engineering. So with its first MySQL HeatWave announcement, Oracle leaned heavily on software developed on top of an in-memory database technology to create an embedded OLAP capability that eliminates the need to ETL data from a transaction system into a separate analytics database. In doing so, Oracle is taking a similar approach with MySQL today as it does for its mainstream Oracle database, and today it extends that. What I mean by that is it's converging capabilities in a single platform. The argument is that this simplifies and accelerates analytics, lowers the cost, and allows analytics to be run on fresher data. Now, as many of you know, this is a different strategy from how, for example, AWS approaches database, where it creates purpose-built database services targeted at specific workloads. These are philosophical design decisions made for a variety of reasons, but it's very clear which direction Oracle is headed in. Today, Oracle continues its HeatWave announcement cadence with a focus on increased automation. The company is continuing the trend of using clustering technology to scale out for both performance and capacity, and — again, that theme of marrying hardware with software — Oracle is also making announcements that focus on security. Hello everyone, and welcome to this video exclusive. This is Dave Vellante. To dig into these capabilities, Nipun Agarwal is here. He's VP of MySQL HeatWave and advanced development at Oracle. Nipun has been leading the MySQL and HeatWave development effort for nearly a decade, and he's got 180 patents to his name, about half of which are associated with HeatWave. Nipun, welcome back to the show. Great to have you. >> Thank you, Dave. >> So before we get into the new news, maybe you could give us all a quick overview of HeatWave again, and what problems you originally set out to solve with it? >> Sure. So HeatWave is an in-memory query accelerator for MySQL. Now, as most people are aware, MySQL was originally designed and optimized for transactional processing, so when customers had the need to run analytics, they would need to extract data from the MySQL database into another database and run analytics there. With MySQL HeatWave, customers get a single database which can be used both for transactional processing and for analytics. There's no need to move the data from one database to another, and all existing tools and applications which are compatible with MySQL continue to work as is. So: an in-memory query accelerator for MySQL, significantly faster than any version of the MySQL database, and also much faster than specialized databases for analytics.
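Concretely, the "single database" claim looks like this from an application's point of view. A minimal sketch in Python, assuming a HeatWave-enabled MySQL service; the host, credentials, and `orders` table are invented, and the table is assumed to already be loaded into the HeatWave cluster so that eligible queries can be offloaded transparently.

```python
import mysql.connector  # assumes: pip install mysql-connector-python

# One connection, one endpoint, for both workloads.
conn = mysql.connector.connect(
    host="heatwave-db.example.com", user="app", password="***", database="shop"
)
cur = conn.cursor()

# OLTP: a normal transactional write, handled by InnoDB.
cur.execute(
    "INSERT INTO orders (customer_id, amount) VALUES (%s, %s)", (42, 19.99)
)
conn.commit()

# Analytics: the same endpoint — no ETL into a separate analytics store.
cur.execute("SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id")
print(cur.fetchall())
```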
>> Yeah, we're going to talk about that. And so obviously, when you made the announcement last December, you had, I'm sure, a core group of early customers and beta customers, but then you opened it up to the world. So what was the reaction once you exposed it to customers? >> The reaction has been very positive, Dave. Initially we were thinking that there would be a lot of customers who are on-premises users of MySQL who would migrate to the service, and surely that was the case. But the part that was very interesting and surprising is that we see many customers migrating from other cloud vendors or other cloud services to MySQL HeatWave, and most notably, the biggest number of migrations we are seeing are from AWS Aurora and AWS RDS. >> Interesting, okay. I wonder if you got other feedback — you're obviously responding in a pretty fast cadence here, a seven-, eight-month cadence. What's the feedback that you get? Were there gaps that customers wanted you to close? >> Sure, yes. So as customers started moving to HeatWave, they found that HeatWave is much faster and much cheaper. And when it's so much faster, they told us that there are some classes of queries which just could not run earlier, which they now can with HeatWave. So it makes the applications richer, because they can write new classes of queries which they could not in the past. But in terms of the feedback or enhancement requests we got, I would say number one was automation: when customers move their database from on-premises to the cloud, they expect more automation. The second thing was that people wanted the ability to run analytics on larger sizes of data with MySQL HeatWave — they liked what they saw, and they wanted us to increase the data-size limit that can be processed by HeatWave. The third one was that they wanted more classes of queries to be accelerated by HeatWave. Initially, when we went out, HeatWave was designed to be an accelerator for analytic queries, but more and more customers started seeing benefit beyond just analytics, more towards mixed workloads, so that was a third request. And then finally, they wanted us to scale to larger cluster sizes. And that's what we have done over the last several months, incorporating this feedback we've gotten from customers.
Now we also collect statistics based on queries, for instance, what is the compilation time? What is the execution time? And we have augmented this with new machine learning models. And finally we have made a lot of innovations, a lot of inventions in the process where we collect data in a smart way. We process data in a smart way and the machine learning models we are talking about, also have a lot of innovation. And that's what gives us an edge over what other vendors may try to do. >> Yeah. I mean, I'm just, again, I'm looking at this meat, this pretty meaty preference, press release. Auto-provisioning, auto parallel load, auto data placement, auto encoding, auto error, auto recovery, auto scheduling, and you know, using a lot of, you know, computer science techniques that are well-known, first in first out, auto change propagation. So really focusing on, on driving that automation for customers. The other piece of it that struck me, and I said this in my intro is, you know, using clustering technology, clustering technology has been around for a long time as, as in-memory database, but applying it and integrating it. My sense is that's really about scale and performance and taking advantage of course, cloud being able to drive that scale instantaneously, but talk about scale a little bit in your philosophy there and why so much emphasis on scalability? >> Right. So what we want to do is to provide the fastest engine for running analytics. And that's why we do the processing in memory. Now, one of the issues with in process, in-memory processing is that the amount of data which you're processing has to reside in memory. So when we went out in the version one, given the footprint of the MySQL customers we spoke to, we thought 12 terabytes of processing at any given point in time, would be adequate. In the very first month, we got feedback that customers wanted us to process larger amounts of data with HeatWave, because they really like what they saw and they wanted us to increase. So if we have increased deployment from 12 terabytes to 32 terabytes and in order to do so, we now have a HeatWave cluster, which can be up to 64 nodes. That's one aspect on the query processing side. Now to answer the question as to why so much of an emphasis it's because this is something which is extremely difficult to do in query processing that as you scale the size of the cluster, the kind of algorithms, the kind of techniques you have to use so that you achieve a very high efficiency with a very large cluster. These are things which are easy to do, because what we want to make sure is that as customers have the need for like, like a processing larger amount of data, one of the big benefits customers get by using a cloud as opposed to on-premise is that they don't need to worry about provisioning gear ahead of time. So if they have more data with the cloud, they should be able to like process pool data easily. But when they process more data, they should expect the same kind of performance. So same kind of efficiency on a larger data size, similar to a smaller data size. And this is something traditionally other database vendors have struggled to provide. So this is a important problem. This is a tough engineering problem. And that's why a lot of emphasis on this to make sure that we provide our customers with very high efficiency of processing as they increase the size of the data. >> You're saying, traditionally, you'll get diminishing returns as you scale. 
So sort of as, as the volume grows, you're not able to take as much advantage or you're less efficient. And you're saying you've, you've largely solved that problem you're able to use. I mean, people always talk about scaling linearly and I'm always skeptical, but, but you're saying, especially in database, that's been a challenge, but you're, you're saying you've solved that problem largely. >> Right. What I would say is that we have a system which is very efficient, more efficient than like, you know, any of the database we are aware of. So as you said, perfect scaling is hard with you, right? I mean, that's a critical limit of scale factor one. That's very hard to achieve. We are now close to 90% efficiency for n2n queries. This is not for primitives. This is for n2n queries, both on industry benchmarks, as well as real world customer workloads. So this 90% efficiency we believe is very good and higher than what many of the vendors provide. >> Yeah. Right. So you're not, not just primitives the whole end to end cycle. I think 0.89, I think was the number that I, that I saw just to be technically correct there, but that's pretty, pretty good. Now let's talk about the benchmarks. It wouldn't be an Oracle announcement with some, some benchmarks. So you laid out today in your announcement, some, some pretty outstanding performance and price performance numbers, particularly you called out it's, it's. I feel like it's a badge of honor. If, if Oracle calls me out, I feel like I'm doing well. You called out Snowflake and Amazons. So maybe you could go over those benchmark results that we could peel the onion on that a little bit. >> Right. So the first thing to realize is that we want to have benchmarks, which are credible, right? So it's not the case that we have taken some specific unique workloads where HeatWave shines. That's not the case. What we did was we took a industry standard benchmark, which is like, you know, TPC-H. And furthermore, we had a third party, independent firm do this comparison. So let's first compare with Snowflake. On a 10 terabyte TPC-H benchmark HeatWave is seven times faster and one fifth the cost. So with this, it is 35 times better price performance compared to Snowflake, right? So seven times faster than Snowflake and one fifth of the cost. So HeatWave is 35 times better price performance compared to Snowflake. Not just that, Snowflake only does analytics, whereas MySQL HeatWave does both transactional processing and analytics. It's not a specialized database, MySQL HeatWave is a general purpose database, which can do both OLTP analytics whereas Snowflake can only do analytics. So to be 35 times more efficient than a database service, which is specialized only for one case, which is analytics, we think it's pretty good. So that's a comparison with Snowflake. >> So that's, that's you're using, I presume you got to be using list prices for that, obviously. >> That is correct. >> So there's discounts, let's put that into context of maybe 35 X better. You're not going to get that kind of discount. I wouldn't think. >> That is correct. >> Okay. What about Redshift? Aqua for Redshift has gained a lot of momentum in the marketplace. How do you compare against that? >> Right. So we did a comparison with Redshift, Aqua, same benchmark, 10 terabytes, TPC-H. And again, this was done by a third party. Here, HeatWave is six and a half times faster at half the cost. So HeatWave is 13 times better price performance compared to Redshift Aqua. And the same thing for Redshift. 
>> Okay. What about Redshift? Aqua for Redshift has gained a lot of momentum in the marketplace. How do you compare against that? >> Right. So we did a comparison with Redshift Aqua, same benchmark, 10 terabyte TPC-H, and again, this was done by a third party. Here, HeatWave is six and a half times faster at half the cost. So HeatWave is 13 times better price performance compared to Redshift Aqua. And the same thing holds for Redshift: it's a specialized database, only for analytics. So customers need two databases with Redshift, one for transaction processing and one for analytics, whereas with MySQL HeatWave it's a single database for both. And it is so much faster than Redshift. That, again, we feel is pretty remarkable. >> Now, you mentioned this earlier, but I presume you're not cheating here. You're not including the cost of the transaction processing data store, right? We're ignoring that for a minute, ignoring that you've got to move data, ETL. We're just talking about like for like. Is that correct? >> Right. This is an extremely fair, even generous, comparison. Not only are we not including the cost of the source OLTP database, the Redshift cost I'm talking about is the cost for one year, paid fully upfront. That is the best pricing a customer can get for a one-year subscription with Redshift. Whereas when I'm talking about HeatWave, this is the pay-as-you-go price. And the third aspect is, this is Redshift when it is completely, fully optimized. I don't think anyone else can get much better numbers on Redshift than we have. So: a fully optimized configuration of Redshift, looking at the one-year prepaid cost of Redshift, and not including the source database. >> Okay. And speaking of transaction processing databases, what about Aurora? You mentioned earlier that you're seeing a lot of migration from Aurora. Can you add some color to that? >> Right. And this was a very interesting observation for us. When we did the launch back in December, we had numbers on a four terabyte TPC-H with Aurora. On that benchmark, HeatWave is 1,400 times faster than Aurora at half the cost, which makes it 2,800 times better price performance compared to Aurora. So, a very good number. What we have found is that many customers who were running on Aurora started migrating to HeatWave, and these customers had a mix of transaction processing and analytics, and much smaller data sizes. Even those customers found a significant improvement in performance and a reduction in cost when they migrated to HeatWave. In the announcement today, many of the references are that class of customer. So for that, we decided to use another benchmark, called the CH-benchmark, on a much smaller data size. And even there, for mixed workloads, we find that HeatWave is 18 times faster and provides over a hundred times higher throughput than Aurora, at 42% of the cost. So in terms of price performance gain, it is much, much better than Aurora even for mixed workloads. And then consider pure OLTP. Assume you have an application which has only OLTP, which, by the way, is a very uncommon scenario. Even if that were the case, for pure OLTP only, MySQL HeatWave is at par with Aurora with respect to performance, but costs 42% of Aurora. So the point is that across the whole spectrum, pure OLTP, mixed workloads, or analytics, MySQL HeatWave is going to be a fraction of the cost of Aurora. And depending upon your query workload, your acceleration can be anywhere from 18 times to 1,400 times faster. >> That's interesting. I mean, you've been at this for the better part of a decade, and my sense is that HeatWave is all about OLAP. That's really where you've put the majority, if not all, of the innovation.
But you're saying that just coming into December's announcement, you were at par in a rare but hypothetical pure-OLTP workload. >> That is correct. >> Yeah. Well, you know, I've got to push you still on this, because a lot of times these benchmarks are a function of the skills of the individuals performing the tests, right? So if you publish these benchmarks, can I run them myself? What if a customer wants to replicate these tests and try to see if they can tune up, you know, Redshift better than you guys did? >> Sure. So I'll say a couple of things. One is, all the numbers I'm talking about, both for Redshift and Snowflake, were done by a third party firm. And for all the numbers we're talking about, TPC-H as well as the CH-benchmark, all the scripts are published on GitHub. So anyone is very welcome; in fact, we encourage customers to go and try it for themselves, and they will find that the numbers are absolutely as advertised. In fact, we had a couple of companies in the last several months who went to GitHub, downloaded our TPC-H scripts, and reported that the performance numbers they were seeing with HeatWave were actually better than we had published back in December. The reason was that since December we had new code running, so our numbers were actually better than advertised. So all the benchmarks are published, they are all available on GitHub. You can go to the HeatWave website on oracle.com and get the link, and we welcome anyone to come and try these numbers for themselves. >> All right. Good. Great. Thank you for that.
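For readers who do pull the published scripts, the measurement itself is nothing exotic; here is a minimal, hypothetical sketch of timing one query end to end against a MySQL endpoint with mysql-connector-python. The connection details are placeholders and the query is a stand-in, not Oracle's benchmark harness:

```python
import time
import mysql.connector  # pip install mysql-connector-python

# Placeholder connection details -- substitute your own endpoint.
conn = mysql.connector.connect(
    host="heatwave.example.com", user="bench", password="***", database="tpch"
)
cur = conn.cursor()

query = ("SELECT l_returnflag, SUM(l_extendedprice) "
         "FROM lineitem GROUP BY l_returnflag")

start = time.perf_counter()
cur.execute(query)
cur.fetchall()  # drain the result set so the timing is end to end
print(f"elapsed: {time.perf_counter() - start:.3f}s")

cur.close()
conn.close()
```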
>> Now, you mentioned earlier that you were somewhat surprised, not surprised, that you got customers migrating from on-prem databases, but you also saw migration from other clouds. How do you expect that trend to go with regard to this new announcement? Do you have any sense? >> Right. So one of the big changes from December to now is that we have focused quite a bit on mixed workloads. In the past, in December, when we first went out, HeatWave was designed primarily for analytics. What we have found is that there's a very large class of customers who have mixed workloads and who also have smaller data sizes. We have now introduced a lot of technology, including things like auto scheduling, and definite improvements in performance, which make MySQL HeatWave a superior solution compared to Aurora or other databases out there for these mixed workloads, both in terms of performance and price: better latency, better throughput, lower cost. So we expect this trend of migration to MySQL HeatWave to accelerate. We are seeing customers migrate from Azure, we are seeing customers migrate from GCP, and by far the number one migrations we are seeing are from AWS. So based on the new features and technologies we have announced today, this migration is going to accelerate. >> All right, last question. So I said earlier, it seems like you're applying what are generally well understood and proven technologies, like in-memory, like clustering, to solve these problems. And I think about the things that you're doing and I wonder: these things have been around for a while, so why has this type of approach not been introduced by others previously? >> Right. Well, the main thing is it takes time. We designed HeatWave from the ground up for the cloud, and as a part of that, we had to invent new algorithms for distributed query processing for the cloud. We put in the hooks for machine learning into the processing right from the ground up. So this has taken us close to a decade. It's been hundreds of person-years of investment, and dozens of patents have gone in. Another aspect is that it takes talent from different areas. We have people working in distributed query processing, and we have people with deep backgrounds in machine learning. And then, given that we are the custodians of the MySQL database, we have a very rich set of customers we can reach out to for feedback on what the pain points are. So it's the culmination of these things, the talent, the customer base, and the time. We spent close to a decade to make this work. That's what it takes: time, patience, and talent. >> A lot of software innovation, bringing together, as I said, that hardware and software strategy. Very interesting. Nipun, thanks so much. I appreciate your insights and your coming on this video exclusive. >> Thank you, Dave. Thank you for the opportunity. >> My pleasure. And thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll see you next time. (bright music)

Published Date : Aug 10 2021


Maria Colgan & Gerald Venzl, Oracle | June CUBEconversation


 

(upbeat music) Developers have become the new kingmakers in the world of digital and cloud. The rise of containers and microservices has accelerated the transition to cloud native applications. A lot of people will talk about application architecture and the related paradigms and the benefits they bring for the process of writing and delivering new apps. But a major challenge continues to be the how and the what when it comes to accessing, processing and getting insights from the massive amounts of data that we have to deal with in today's world. And with me are two experts from the data management world who will share with us how they think about the best techniques and practices, based on what they see at large organizations who are working with data and developing so-called data-driven apps. Please welcome Maria Colgan and Gerald Venzl, two distinguished product managers from Oracle. Folks, welcome, thanks so much for coming on. >> Thanks for having us Dave. >> Thank you very much for having us. >> Okay, Maria, let's start with you. So, we throw around this term data-driven, data-driven applications. What are we really talking about there? >> So data-driven applications are applications that work on a diverse set of data: anything from spatial to sensor data, document data, as well as your usual transaction processing data. And what they do is generate value from that data in very different ways to a traditional application. So for example, they may use machine learning to do product recommendations in the middle of a transaction. Or we could use graph to identify an influencer within a community so we can target them with a specific promotion. They could also use spatial data to help find the nearest stores to a particular customer. And because these apps are deployed on multiple platforms, everything from mobile devices to standard browsers, they need a data platform that's going to be secure, reliable and scalable. >> Well, so when you think about how the workloads are shifting, I mean, it's not anymore a world of just your ERP or your HCM or your CRM, you know, the traditional operational systems. You really are seeing an explosion of these new data-oriented apps. You're seeing modeling in the cloud, and you're going to see more and more inferencing, inferencing at the edge. But Maria, maybe you could talk a little bit about the benefits that customers are seeing from developing these types of applications. I mean, why should people care about data-driven apps? >> Oh, for sure, there are massive benefits to them. Probably the most obvious one for any business, regardless of the industry, is that they not only allow you to understand what your customers are up to, they allow you to anticipate those customers' needs. So that helps businesses maintain their competitive edge and retain their customers. But it also helps them make data-driven decisions in real time, based on actual data rather than on somebody's gut feeling, or on historical data. So for example, you can do real-time price adjustments on products based on demand, that kind of thing. So it really changes the way people do business today.
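As a toy illustration of the spatial piece, finding the nearest store to a customer, here is a self-contained sketch using a plain haversine distance in Python. The coordinates are made up, and a real data-driven app would push this into the database's spatial engine rather than application code:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

stores = {"Downtown": (37.7749, -122.4194), "Airport": (37.6213, -122.3790)}
customer = (37.7080, -122.4500)  # made-up customer location

nearest = min(stores, key=lambda name: haversine_km(*customer, *stores[name]))
print(nearest)  # -> Downtown
```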
>> So Gerald, you think about the narrative in the industry: everybody wants to be a platform player, all your customers are becoming software companies, they're becoming platform players. Everybody wants to be like, you know, name a company with a huge trillion-dollar market cap or whatever, and those are data-driven companies. And so it would seem to me that with data-driven applications, there's really no company that shouldn't be data-driven. Do you buy that? >> Yeah, absolutely. I mean, naturally the whole industry is data-driven, right? We all have information technologies for processing data and deriving information out of it. But when it comes to app development, I think there is a big push of, we have to do machine learning in our applications, we have to get insights from data. And when you actually take a step back, you see that there are of course many different kinds of applications out there as well; that's not to be forgotten, right? There are the usual front-end user interfaces where really all the application does is enter some piece of information that's stored somewhere, or perhaps a microservice that's not attached to a data tier at all but just receives or answers calls (indistinct). So I think it's not necessarily so important for every developer to go on the bandwagon that they have to be data-driven. But I think it's equally important for those developers that build the applications that drive the business, that make business critical decisions, as Maria mentioned before. Those guys should take a really close look into what data-driven apps mean and what the data tier can actually give them. Because what we also see happening a lot is that many of the things that are well known, out there, and ready to use are being reimplemented in the applications. And those applications essentially just end up spending more time writing code that's already there, and then have to maintain and debug that code as well, rather than just going to market faster. >> Gerald, can you talk to the prevailing approaches that developers take to build data-driven applications? What are the ones that you see? Let's dig into that a little bit more, and maybe differentiate the approaches? >> Yeah, absolutely. I think right now the industry is in two camps; it's sort of a religious war going on, as you'll often see with different architectures and so forth. So we have single purpose databases, or data management technologies, which are technologies that are, as the name suggests, built around a single purpose. A typical example would be your ordinary key-value store. All a key-value store does is allow you to store and retrieve a piece of data, whatever that may be, really, really fast, but it doesn't go beyond that. And then the other side of the house, the other camp, would be multimodal databases, multimodal data management technologies. Those are technologies that allow you to store different types of data, different formats of data, in the same system alongside each other. And when you look at the landscape out there, pretty much any relational database, any database really, has evolved into such a multimodal database. Whether that's MySQL, which allows you to store JSON alongside relational data, or even a MongoDB, which gives you native graph support (mumbles) alongside its JSON support.
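Venzl's MySQL example, JSON documents living alongside relational columns in the same engine, looks roughly like this in practice. A minimal sketch; the table, data, and connection details are hypothetical:

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder connection -- substitute your own server details.
conn = mysql.connector.connect(host="localhost", user="app",
                               password="***", database="shop")
cur = conn.cursor()

# A relational column and a JSON document side by side in one table.
cur.execute("""
    CREATE TABLE IF NOT EXISTS products (
        id    INT PRIMARY KEY,
        name  VARCHAR(100),
        attrs JSON
    )
""")
cur.execute(
    "REPLACE INTO products VALUES (%s, %s, %s)",
    (1, "espresso machine", '{"color": "red", "watts": 1200}'),
)

# One SQL statement spans both models: a relational filter plus JSON extraction.
cur.execute("SELECT name, attrs->>'$.watts' FROM products WHERE id = 1")
print(cur.fetchall())  # [('espresso machine', '1200')]

conn.commit()
cur.close()
conn.close()
```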
>> Well, it's clearly a trend in the industry. We've talked about this a lot on theCUBE. We know where Oracle stands on this. I mean, you just mentioned MySQL, but Oracle Database you've been extending: you've mentioned JSON, we've got blockchain now in there, you're infusing ML and AI into the database, graph database capabilities, on and on. We've compared that to Amazon, which takes kind of the right tool for the right job approach. So maybe you could talk about your point of view, the benefits for developers of using that converged database, if I can use that word, being able to store multiple data formats. Why do you feel that's a better approach? >> Yeah, I think on a high level it comes down to complexity. You are avoiding additional complexity, right? Not every use case you have necessarily warrants yet another data management technology, or a specially built technology for managing that data. Many use cases we see out there just want to store a piece of JSON, a document, in a database, then perhaps retrieve it again afterwards or write some simple queries over it. And you really don't have to get a new database technology or a NoSQL database into the mix, if you already have one, just to fulfill that exact use case. You could just as happily store that information in the database you already have. And what it really comes down to is the learning curve for developers, right? As you use the same technology to store other types of data, you don't have to learn a new technology, you don't have to familiarize yourself with and learn new drivers, you don't have to find new frameworks, and you don't have to know how to operate or best model your data for that new database. You can essentially reuse your knowledge of the technology, as well as the libraries and code you have already built in house, perhaps in another application, perhaps a framework you used against the same technology, because it is still the same technology. So it all comes down, again, to avoiding complexity rather than fragmenting across, you know, the many different technologies we have. If you look at the different data formats that are out there today, you would end up with many different databases just to store them, if you were to religiously follow the single purpose, best-built technology for every use case paradigm, right? And then you would just end up having to manage many different databases rather than actually focusing on your app and getting value to your business or to your user. >> Okay, so I get that, and I buy that, by the way, especially if you're a larger organization and you've got all these projects going on. But before we go back to Maria, Gerald, I want to push on that a little bit, because the counter to that argument comes in an analogy, and I'd love for you to knock this analogy off the blocks. The counter would be: okay, Oracle is the Swiss Army knife, it's got, you know, all in one. But sometimes I need that specialized long screwdriver, and I go into my toolbox and grab that; it's better than the screwdriver in my Swiss Army knife. So are you the Swiss Army knife of databases? Or are you the all-in-one that has the best-of-breed screwdriver for me? How do you think about that? >> Yeah, that's a fantastic question, right?
And I think, first of all, you have to separate between Oracle the company, which actually has multiple data management technologies and databases out there, as you said before, and Oracle Database. And Oracle Database is definitely a Swiss Army knife; it has picked up many capabilities over the last 40 years. We've seen object support come in; that's still in the Oracle Database today. We've seen XML come in; it's still in the Oracle Database; graph, spatial, et cetera. So you have many different ways of managing your data. And then, on top of that, going into converged: not only do we allow you to store the different data models in there, we also allow you to apply all the security policies and so forth on top of it, something Maria can talk more about with the mission around the converged database. I would also argue, though, that for some aspects we actually do have that screwdriver you talked about as well. Especially in the relational world, people get very quickly hung up on this idea that if you do rows and columns, well, that's what you put down on disk. And that was never true; the relational model is actually a logical model. What's actually put down on disk is blocks that align nicely with block storage, and it always has been. So that allows you to model and process the data somewhat differently. And one good example that we introduced a couple of years ago was when columnar databases were very strong, and the competition came along saying, yeah, we have in-memory column stores now, they're so much better. And we were like, well, orienting the data row-based or column-based really doesn't matter, in the sense that we store them as blocks on disk. So we introduced the In-Memory technology, which gives you an in-memory columnar representation of your data alongside your relational format. So there is an example where you go, well, actually, if you have this use case of columnar analytics all in memory, I would argue Oracle Database is also that screwdriver you want to go to. Because it not only gives you a columnar representation, but also, and many people then forget this, all the analytic power on top of SQL. It's one thing to store your data columnar; it's a completely different story to actually be able to run analytics on top of that, having all the built-in functionality you want to use with the data as you analyze it.
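The dual-format idea Venzl describes, rows on disk with a columnar copy in memory and one SQL surface over both, is exposed in Oracle Database as a table attribute. A hypothetical sketch via python-oracledb; it assumes the Database In-Memory option is available and `INMEMORY_SIZE` has been configured, and the table and credentials are placeholders:

```python
import oracledb  # pip install oracledb

# Placeholder credentials -- substitute your own.
conn = oracledb.connect(user="app", password="***", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Ask for an in-memory columnar copy of the table. The row format on disk
# is unchanged; the optimizer picks whichever representation serves a query.
cur.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH")

# Analytics then run against the columnar copy transparently -- same SQL.
cur.execute("SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id")
print(cur.fetchmany(5))
```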
>> You know, that's a great example, the columnar, 'cause I remember there was a lot of hype around it. Oh, it's the Oracle killer, you know, Vertica. Vertica is still around, but it never really hit escape velocity. Good product, good company, whatever. Netezza kind of got buried inside of IBM. ParAccel kind of became, you know, Redshift with that deal, so that kind of went away. Teradata bought a company, I forget which company it bought. So that hype kind of dissipated, and now it's like, oh yeah, columnar. It's kind of like in-memory: we've had in-memory databases ever since we've had databases. It's kind of a feature, not a sector. But anyway, Maria, let's come back to you. You've got a lot of customer experience, and you speak with a lot of companies in your time at Oracle. What else are you seeing in terms of the benefits to this approach that might not be so intuitive and obvious right away? >> I think one of the biggest benefits to having a multimodel, multiworkload, or as we call it, converged, database is the fact that you can get greater data synergy from it. In other words, you can utilize all these different techniques and data models to get better value out of that data. So things like being able to do real-time machine learning fraud detection inside a transaction, or being able to do a product recommendation by accessing three different data models. For example, if I'm trying to recommend a product for you, Dave, I might use graph analytics to figure out your community: not just your friends, but other people on our system who look and behave just like you. Once I know that community, I can go over and see what products they bought by looking up our product catalog, which may be stored as JSON. And then, on top of that, I can use key-value data to see which products inside that catalog those community members gave a five star rating to. That way I can really pinpoint the right product for you. And I can do all of that in one transaction inside the database, without having to transform the data into different models or, God forbid, access different systems to get all of that information. So it really simplifies how we can generate value from the data. And of course, the other thing our customers love is that deploying data-driven apps on a converged database is much simpler, because it is that standard data platform. You're not having to manage multiple independent single purpose databases, and you're not having to implement the security and high availability policies across a bunch of diverse platforms. All of that can be done much more simply with a converged database, because the DBA team is going to use one standard set of tools to manage, monitor and secure those systems.
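The shape of that recommendation flow, several data models consulted inside one database transaction, might look like the following. This is purely illustrative: the schema is invented, and the graph step is stubbed out as a pre-computed community list rather than shown as a real graph query:

```python
import oracledb  # pip install oracledb

conn = oracledb.connect(user="app", password="***", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Step 1 (stubbed): a graph query would identify the customer's community;
# assume it already produced this list of look-alike customer ids.
community = [101, 102, 103]

# Steps 2 and 3 in a single SQL statement, single transaction: read product
# names out of a JSON catalog and join against the community's 5-star ratings.
binds = ",".join(f":{i}" for i in range(1, len(community) + 1))
cur.execute(f"""
    SELECT DISTINCT JSON_VALUE(c.doc, '$.name') AS product
    FROM catalog c
    JOIN ratings r ON r.product_id = c.id
    WHERE r.stars = 5 AND r.customer_id IN ({binds})
""", community)
print(cur.fetchall())
```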
>> Thank you for that. And, you know, it's interesting, you talk about simplification, and you're in Juan's organization, so you've got a big focus on mission critical. One of the things that I think is often overlooked, well, we talk about it all the time, is recovery. And if things are simpler, recovery is faster and easier. It's kind of the hallmark of Oracle, like the gold standard for the toughest, most mission critical apps. But I wanted to get to the cloud, Maria, because everything is going to the cloud, right? Not all workloads are going to the cloud, but everybody is talking about the cloud. Everybody has a cloud-first mentality, and so, yes, it's a hybrid world. But the natural next question is: how do you think the cloud fits into this world of data-driven apps? >> I think, just like with any app you're developing, the cloud helps to accelerate the development, and of course the deployment, of these data-driven applications. 'Cause if you think about it, the developer is instantly able to provision a converged database that Oracle will automatically manage and look after for them. And what's great about doing it that way, if you use our autonomous database service, is that it comes in different flavors: autonomous transaction processing, data warehousing, or autonomous JSON. So the developer is going to get a database that's been optimized for their specific use case, whatever they are trying to solve. And it's also going to contain all of that great functionality and those capabilities we've been talking about. So what that really means for the developer is that, as the project evolves and the business needs inevitably change a little, there's no need to panic when one of those changes comes in, because the converged, autonomous database has all of those additional capabilities; you can simply utilize them to address the evolving needs of the project. 'Cause let's face it, none of us normally knows exactly what we need to build right at the very beginning. And on top of that, they also get a kind of built-in buddy in the cloud, especially in the autonomous database. That buddy comes in the form of built-in workload optimizations. With the autonomous database, we do things like automatic indexing, where we're using machine learning to be that buddy for the developer. What it does is monitor the workload, see what kind of queries are being run on the system, and then determine whether there are indexes that should be built to improve the performance of that application. And not only does it build those indexes, it verifies that they improve performance before publishing them to the application. So by the time the developer is finished with that app and it's ready to be deployed, it's also been optimized by the developer's buddy, the Oracle autonomous database. So it's a really nice helping hand for developers when they're building any app, especially data-driven apps.
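That "buddy" is administered through the documented `DBMS_AUTO_INDEX` PL/SQL package; a minimal, hypothetical sketch of switching it on and pulling its activity report (the connection details are placeholders):

```python
import oracledb  # pip install oracledb

conn = oracledb.connect(user="admin", password="***", dsn="adb.example.com/mydb_high")
cur = conn.cursor()

# Let the database create, validate, and publish indexes on its own.
cur.execute("BEGIN DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT'); END;")

# Review what the auto-indexing "buddy" has been doing.
cur.execute("SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual")
report = cur.fetchone()[0]
print(report.read() if hasattr(report, "read") else report)
```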
>> I like how you sort of gave us the truth here, which is that you don't always know where you're going when you're building an app. It's like build it and they will come: you start building it and you figure out where it's going to go. With Agile, that's kind of how it works. So I wonder, can you give some examples of customers, or maybe genericize them if you need to, data-driven apps in the cloud where customers were able to drive more efficiency, where the cloud buddy allowed them to do more with less? >> Oh, we have tons of these, but I'll try and keep it to just a couple. One that comes to mind straight away is retrace. These folks built a blockchain app in the Oracle Cloud that allows manufacturers to share the supply chain with the consumer. So the consumer can see exactly who made their product, using what raw materials, where they were sourced from, and how it was done. All of that is visible to the consumer. And in order to share that, they had to work on a very diverse set of data, everything from JSON documents to images, as well as your traditional transactions. They store all of that information inside the Oracle autonomous database, and they built their app and deployed it on the cloud very, very quickly. So that ability to work on multiple different data types in a single database really helped them build that product and get it to market in a very short amount of time. Another customer doing something really interesting is MindSense. These guys operate in the largest mines in Canada, Chile, and Peru. What they do is put x-ray devices on the massive mechanical shovels at the mine face, and those devices sense the contents of the buckets inside these mining machines. They look at that content to see how they can optimize the processing of the ore inside that bucket: they're looking to minimize the amount of power and water it will take to process it, and also, of course, to minimize the amount of waste that comes out of the process. So all of that sensor data is sent into an autonomous database, where it's processed by a whole host of different users, everyone from the mine engineers to the geoscientists to their own data scientists, who utilize that data to drive the business forward. And what I love about these guys is they're not happy with building just one app. MindSense actually use our built-in low-code development environment, APEX, which comes as part of the autonomous database, and they constantly produce applications for different aspects of their business using that technology. It really accelerates getting those new apps to the business: it takes them just a couple of days or weeks to produce an app, instead of months or years. >> Great, thank you for that, Maria. Gerald, I'm going to push you again. So, I said upfront and talked about microservices and the cloud and containers, and anybody in the developer space follows that very closely. But some of the things we've been talking about here, people might look at and say, well, they're kind of antithetical to microservices; this is Oracle's monolithic approach. When you think about the benefits of microservices, people want freedom of choice; technology choice is seen as a big advantage of microservices and containers. How do you address such an argument? >> Yeah, that's an excellent question, and I get it quite often. With the microservices architecture in general, as with architectures, Linux distributions, et cetera, before it, it's always a bit like there's an academic approach and there's a pragmatic approach. When you look at the original definitions of microservices that came out in the early 2010s, they actually never said that each microservice has to have a database. And they also never said that if a microservice has a database, you have to use a different technology for each microservice, just like they never said you have to write each microservice in a different programming language, right? So where I'm going with this is: yes, when you look at some vendors out there, some niche players, they push this message; they jump on this academic approach of each microservice using the best tool at hand, a different database for each purpose, et cetera, which often comes across as wanting to stay part of the conversation. Nothing stops a developer from using a multimodal database for a microservice and just using it as a document store, right? Or just using it as a relational database. And, you know, something really interesting actually happened yesterday. I don't know whether you followed it, Dave, or not, but Facebook had an outage yesterday, right? And Facebook is one of those companies that are seen as the Silicon Valley, know-how-to-do-microservices companies. And when you read through the outage: well, what happened? Some unfortunate logical error with a configuration change, as it were, that took a database cluster down.
So, you know, there you have it: maybe not every microservice is actually, in fact, talking to its own database or its own special purpose database. I think, rather than focusing on this argument of which technology to use, what's the right tool for the job, the industry should ask itself: what business problem are we actually trying to solve? And therefore, what's the right approach and the right technology for it? And so, just as I said before, multimodal databases do have strong benefits. They have many built-in functionalities that are already there, and they allow you to reduce this complexity of having to know many different technologies, right? And it's not only being able to store different data models, to treat a multimodal database as a JSON document store or a relational database; most databases have been multimodal for 20-plus years. It's also that, if you store that data together, you can perhaps derive additional value for somebody else, if not for your application. For example, if you were to use Oracle Database, you can actually write queries on top of all of that data. It doesn't really matter to our query engine whether the data is formatted as JSON or formatted in rows and columns; you can just query over it. And that's actually very powerful for the folks that have to get the reporting done at the end of the day or the end of the week, and for the data scientists who want to figure out which product performed really well, or whether we can tweak something here and there. When you look into that space, you still see a huge divergence between the people who put data in, kind of the OLTP side, and the people who try to derive new insights. And there's still a lot of ETL going around, and we have big data technologies, some of which came and went, and some that are still around, like Apache Spark, which is essentially a SQL engine on top of any of your data, kind of going back to the same concept. And so I would say, for developers, when we look at microservices: first of all, is the argument you are making because the vendor or the technology you want to use tells you this argument, or because you kind of want an argument to use a specific technology? Or is it really because it is the best technology to use for this given use case, for this given application that you have? And if so, there's of course nothing wrong with using a single purpose technology either, right?
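The "one query engine over both formats" point can be made concrete with SQL's JSON_TABLE, which projects documents into rows and columns so they join with ordinary tables in a single statement. A hypothetical sketch: the schema is invented, though JSON_TABLE itself is a documented feature of both Oracle Database and MySQL:

```python
import oracledb  # pip install oracledb

conn = oracledb.connect(user="app", password="***", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# orders.doc holds JSON documents; customers is a plain relational table.
# JSON_TABLE flattens each document so one statement spans both formats --
# no ETL step in between.
cur.execute("""
    SELECT c.region, SUM(jt.amount) AS revenue
    FROM orders o,
         JSON_TABLE(o.doc, '$'
             COLUMNS (cust_id NUMBER PATH '$.customerId',
                      amount  NUMBER PATH '$.amount')) jt,
         customers c
    WHERE c.id = jt.cust_id
    GROUP BY c.region
""")
print(cur.fetchall())
```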
So one thing that once struck me when I came to Oracle and then got to meet great people like Juan Luis and Andy Mendelsohn who had been here for a long, long time. I come to realization that relational databases are around for about 45 years now. And, you know, I was like, I'm too young to have been around then, right? So I was like, what else was around 45 years? It's like just the tech stack that we have today. It's like, how does this look like? Well, Linux only came out in 93. Well, databases pre-date Linux a lot rather than as I started digging I saw a lot of technologies come and go, right? And you mentioned before like the technologies that data management systems that we had that came and went like the columnar databases or XML databases, object databases. And even before relational databases before Cot gave us the relational model there were apparently these networks stores network databases which to some extent look very similar to adjacent documents. There wasn't a harder storing data and a hierarchy to format. And, you know when you then start actually reading the Cot paper and diving a little bit more into the relation model, that's I think one important crux in there that most of the industry keeps forgetting or it hasn't been around to even know. And that is that when Cot created the relational model, he actually focused not so much on the application putting the data in, but on future users and applications still being able to making sense out of the data, right? And that's kind of like I said before we had those network models, we had XML databases you have adjacent documents stores. And the one thing that they all have along with it is like the application that puts the data in decides the structure of the data. And that's all well and good if you had an application of the developer writing an application. It can become really tricky when 10 years later you still want to look at that data and the application that the developer is no longer around then you go like, what does this all mean? Where is the structure defined? What is this attribute? What does it mean? How does it correlate to others? And the one thing that people tend to forget is that it's actually the data that's here to stay not someone who does the applications where it is. Ideally, every company wants to store every single byte of data that they have because there might be future value in it. Economically may not make sense that's now much more feasible than just years ago. But if you could, why wouldn't you want to store all your data, right? And sometimes you actually have to store the data for seven years or whatever because the laws require you to. And so coming back then and you know, like 10 years from now and looking at the data and going like making sense of that data can actually become a lot more difficult and a lot more challenging than having to first figure out and how we store this data for general use. And that kind of was what the relational model was all about. We decompose the data structures into tables and columns with relationships amongst each other so therefore between each other. So that therefore if somebody wants to, you know typical example would be well you store some purchases from your web store, right? There's a customer attribute in it. There's some credit card payment information in it, just some product information on what the customer bought. 
Well, in the relational model, if you just want to figure out which products were sold on a given day or week, you just query the payment and product tables to get the answer; you don't need to touch the customer data at all. With the hierarchical model, you have to first sit down and understand the structure: what is the customer, where is the payment? Does the document start with the payment or does it start with the customer? Where do I find this information? And in the very early days, those databases even struggled to answer that without scanning all the documents to get the data out. So, coming back to your question a bit, and I apologize for going on here: relational databases have been around for 45 years, and I'd actually argue they're one of the most successful software technologies we have, when you look at the overall industry. Forty-five years in IT terms is like a star being born and going supernova; as you said before, many technologies came and went in that time, right? One more really interesting example, by the way, is Hadoop and HDFS. They gave us this additional promise in the 2010s, like 2012, 2013, the hype of Hadoop and (mumbles) and HDFS: just put everything into HDFS and worry about the data later, right? We can query it and MapReduce it and whatever. And we had customers actually coming to us saying: great, we have half a petabyte of data on an HDFS cluster and we have no clue what's stored in there. How do we figure this out? What are we going to do now? Now you had a big data cleansing problem. So I think that is why databases, and also data modeling, are something that will not go away anytime soon, and why database technologies are here to stay for quite a while. Many people don't think about what happens to their data five years from now. And many of the niche players, and frankly even Amazon, following this single purpose thing of just use the right tool for the job for your application, just pull the data in there the way you want it. And it's like, okay, so you use technologies all over the place, and then five years from now your data is fragmented everywhere, in different formats, with inconsistencies, and so on. And when you come back to these data-driven, business critical decision applications, that's the worst case scenario you can have, right? Because now you need an army of people to do data cleansing. It's no coincidence that data science has become very, very popular in recent years, as we've gone through this proliferation of different database, or data management, technologies, some of which are not even databases. But I'll leave it at that.
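Venzl's web-store example is easy to run end to end with Python's built-in sqlite3: the decomposed tables answer "which products sold on a given day" without ever touching the customer data or knowing how the original application thought about the purchase. Schema and rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The purchase "document" decomposed into related tables, per Codd.
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE products  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE payments  (id INTEGER PRIMARY KEY, customer_id INTEGER,
                            product_id INTEGER, paid_on TEXT);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO products  VALUES (10, 'keyboard'), (11, 'monitor');
    INSERT INTO payments  VALUES (100, 1, 10, '2021-06-01'),
                                 (101, 2, 11, '2021-06-01'),
                                 (102, 1, 11, '2021-06-02');
""")

# Products sold on a given day: payments joined to products only.
# The customers table never enters the picture.
cur.execute("""
    SELECT p.name, COUNT(*)
    FROM payments pay JOIN products p ON p.id = pay.product_id
    WHERE pay.paid_on = '2021-06-01'
    GROUP BY p.name
""")
print(cur.fetchall())  # [('keyboard', 1), ('monitor', 1)]
```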
>> It's an interesting talk track, because you're right. I mean, "no schema on write" was alluring, but it definitely created some problems. It also created an entire set of hyper-specialized roles, and the data cleansing component you referenced. Maybe technology will eventually solve that problem, but it hasn't, at least as of tonight. Okay, last question. Maria, maybe you could start off, and Gerald, if you want to chime in as well, that would be great. It's interesting to watch this industry; Oracle sort of won the top database mantle. I watched it, I saw it. Remember, it was Informix, and it was (indistinct) too, and of course Microsoft, you've got to give them credit with SQL Server, but Oracle won the database wars. And then everything got kind of quiet for a while; database was sort of boring. And then it exploded, you know, the NoSQL and key-value stores and the cloud databases, and this is really a hot area now. And when we looked at Oracle, we said, okay, Oracle, it's all about Oracle Database. But we've seen a kind of resurgence in MySQL, which everybody thought, once Oracle bought Sun, they were going to kill. Instead we see you investing in HeatWave, TimesTen; we talked about in-memory databases before. So where do those fit, Maria, in the grand scheme? >> So there are lots of places where you'd use those different things. 'Cause just like in any other industry, there are going to be new and boutique use cases that benefit from a more specialized, single purpose product. Good examples off the top of my head of the kinds of systems that would benefit are a stock exchange system or a telephone exchange system. Both of those are latency-critical transaction processing applications that need microsecond response times, and that's going to exceed what you might normally get or deploy with a converged database. Oracle's TimesTen database, our in-memory database, is perfect for those kinds of applications. But there's also a host of MySQL applications out there today, and you said it yourself, Dave: HeatWave is a great place to provision and deploy those kinds of applications, because it's going to run 100 times faster than AWS (mumbles). So there really is a place in the market, and in our customers' systems and the needs they have, for all of these different members of our database family here at Oracle. >> Yeah, well, the internet is basically running on the LAMP stack, so I don't see MySQL going away. All right, Gerald, we'll give you the final word. Bring us home. >> Oh, thank you very much. Yeah, I mean, as Maria said, I think it comes back to what we discussed before. There are obviously still needs for specialized technologies, different technologies than a relational database or a multimodal database. Oracle actually has many more databases than people may first think of, not only the three we've already mentioned, but also, for example, Oracle's NoSQL database. On a high level, Oracle is a data management company, right? We want to give our customers the best tools and the best technology to manage all of their data. And therefore there has to be a part of the business that also focuses on these highly specialized systems and technologies that address those use cases. And I think it makes perfect sense. When a customer comes to Oracle, they're not getting a take-this-one-product-and-if-you-don't-like-it-that's-your-problem deal; they actually have choice, right? And choice allows you to make a decision based on what's best for you, and not necessarily what's best for the vendor you're talking to. >> Well, guys, really appreciate your time today and your insights. Maria, Gerald, thanks so much for coming on theCUBE. >> Thank you very much for having us. >> And thanks for watching this Cube Conversation. This is Dave Vellante, and we'll see you next time. (upbeat music)

Published Date : Jun 24 2021


Maria Colgan & Gerald Venzl, Oracle | June CUBEconversation


 

(upbeat music) >> It'll be five, four, three and then silent two, one, and then you guys just follow my lead. We're just making some last minute adjustments. Like I said, we're down two hands today. So, you good Alex? Okay, are you guys ready? >> I'm ready. >> Ready. >> I got to get get one note here. >> So I noticed Maria you stopped anyway, so I have time. >> Just so they know Dave and the Boston Studio, are they both kind of concurrently be on film even when they're not speaking or will only the speaker be on film for like if Gerald's drawing while Maria is talking about-- >> Sorry but then I missed one part of my onboarding spiel. There should be, if you go into gallery there should be a label. There should be something labeled Boston live switch feed. If you pin that gallery view you'll see what our program currently being recorded is. So any time you don't see yourself on that feed is an excellent time to take a drink of water, scratch your nose, check your notes. Do whatever you got to do off screen. >> Can you give us a three shot, Alex? >> Yes, there it is. >> And then go to me, just give me a one-shot to Dave. So when I'm here you guys can take a drink or whatever >> That makes sense? >> Yeah. >> Excellent, I will get my recordings restarted and we'll open up when Dave's ready. >> All right, you guys ready? >> Ready. >> All right Steve, you go on mute. >> Okay, on me in 5, 4, 3. Developers have become the new king makers in the world of digital and cloud. The rise of containers and microservices has accelerated the transition to cloud native applications. A lot of people will talk about application architecture and the related paradigms and the benefits they bring for the process of writing and delivering new apps. But a major challenge continues to be, the how and the what when it comes to accessing, processing and getting insights from the massive amounts of data that we have to deal with in today's world. And with me are two experts from the data management world who will share with us how they think about the best techniques and practices based on what they see at large organizations who are working with data and developing so-called data-driven apps. Please welcome Maria Colgan and Gerald Venzl, two distinguish product managers from Oracle. Folks, welcome, thanks so much for coming on. >> Thanks for having us Dave. >> Thank you very much for having us. >> Okay, Maria let's start with you. So, we throw around this term data-driven, data-driven applications. What are we really talking about there? >> So data-driven applications are applications that work on a diverse set of data. So anything from spatial to sensor data, document data as well as your usual transaction processing data. And what they're going to do is they'll generate value from that data in very different ways to a traditional application. So for example, they may use machine learning, they are able to do product recommendations in the middle of a transaction. Or we could use graph to be able to identify an influencer within the community so we can target them with a specific promotion. It could also use spatial data to be able to help find the nearest stores to a particular customer. And because these apps are deployed on multiple platforms, everything from mobile devices as well as standard browsers, they need a data platform that's going to be both secure, reliable and scalable. 
>> Well, so when you think about how the workloads are shifting I mean, we're not talking about, you know it's not anymore a world of just your ERP or your HCM or your CRM, you know kind of the traditional operational systems. You really are seeing an explosion of these new data oriented apps. You're seeing, you know, modeling in the cloud, you are going to see more and more inferencing, inferencing at the edge. But Maria maybe you could talk a little bit about sort of the benefits that customers are seeing from developing these types of applications. I mean, why should people care about data-driven apps? >> Oh, for sure, there's massive benefits to them. I mean, probably the most obvious one for any business regardless of the industry, is that they not only allow you to understand what your customers are up to, but they allow you to be able to anticipate those customer's needs. So that helps businesses maintain that competitive edge and retain their customers. But it also helps them make data-driven decisions in real time based on actual data rather than on somebody's gut feeling or basing those decisions on historical data. So for example, you can do real-time price adjustments on products based on demand and so forth, that kind of thing. So it really changes the way people do business today. >> So Gerald, you think about the narrative in the industry everybody wants to be a platform player all your customers they are becoming software companies, they are becoming platform players. Everybody wants to be like, you know name a company that is huge trillion dollar market cap or whatever, and those are data-driven companies. And so it would seem to me that data-driven applications, there's nobody, no company really shouldn't be data-driven. Do you buy that? >> Yeah, absolutely. I mean, data-driven, and that naturally the whole industry is data-driven, right? It's like we all have information technologies about processing data and deriving information out of it. But when it comes to app development I think there is a big push to kind of like we have to do machine learning in our applications, we have to get insights from data. And when you actually look back a bit and take a step back, you see that there's of course many different kinds of applications out there as well that's not to be forgotten, right? So there is a usual front end user interfaces where really the application all it does is just entering some piece of information that's stored somewhere or perhaps a microservice that's not attached to a data to you at all but just receives or asks calls (indistinct). So I think it's not necessarily so important for every developer to kind of go on a bandwagon that they have to be data-driven. But I think it's equally important for those applications and those developers that build applications, that drive the business, that make business critical decisions as Maria mentioned before. Those guys should take really a close look into what data-driven apps means and what the data to you can actually give to them. Because what we see also happening a lot is that a lot of the things that are well known and out there just ready to use are being reimplemented in the applications. And for those applications, they essentially just ended up spending more time writing codes that will be already there and then have to maintain and debug the code as well rather than just going to market faster. >> Gerald can you talk to the prevailing approaches that developers take to build data-driven applications? 
What are the ones that you see? Let's dig into that a little bit more and maybe differentiate the different approaches and talk about that? >> Yeah, absolutely. I think right now the industry is like in two camps, it's like sort of a religious war going on, as you'll see often happening with different architectures and so forth. So we have single purpose databases or data management technologies, which are technologies that are, as the name suggests, built around a single purpose. So it's like, you know, a typical example would be your ordinary key-value store. And a key-value store, all it does is it allows you to store and retrieve a piece of data, whatever that may be, really, really fast, but it doesn't really go beyond that. And then the other side of the house, or the other camp, would be multimodal databases, multimodal data management technologies. Those are technologies that allow you to store different types of data, different formats of data, in the same technology, in the same system, alongside each other. And, you know, when you look at the landscape out there of what we have in technology, pretty much any relational database, or any database really, has evolved into such a multimodal database. Whether that's MySQL, that allows you to store JSON alongside relational, or even a MongoDB that gives you native graph support since (mumbles), alongside the JSON support. >> Well, it's clearly a trend in the industry. We've talked about this a lot in The Cube. We know where Oracle stands on this. I mean, you just mentioned MySQL, but I mean, Oracle Database, you've been extending it, you've mentioned JSON, we've got blockchain now in there, you're infusing, you know, ML and AI into the database, graph database capabilities, you know, on and on and on. We talked a lot about, we compared that to Amazon, which is kind of the right tool for the right job approach. So maybe you could talk about, you know, your point of view, the benefits for developers of using that converged database, if I can use that word, approach, being able to store multiple data formats? Why do you feel like that's a better approach? >> Yeah, I think on a high level it comes down to complexity. You are actually avoiding additional complexity, right? So not every use case that you have necessarily warrants having yet another data management technology or yet another specially built technology for managing that data, right? It's like, many use cases that we see out there happily want to just store a piece of JSON, a document, in a database and then perhaps retrieve it again afterwards, or write some simple queries over it. And you really don't have to get a new database technology or a NoSQL database into the mix if you already have one, just to fulfill that exact use case. You could just happily store that information as well in the database you already have. And what it really comes down to is the learning curve for developers, right? So it's like, as you use the same technology to store other types of data, you don't have to learn a new technology, you don't have to familiarize yourself with and learn new drivers. You don't have to find new frameworks and you don't have to know how to necessarily operate or best model your data for that database.
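A quick way to see the distinction Gerald draws: a bare key-value store supports lookup by key and nothing else, while the same document in a multimodal engine can also be queried by content. A hedged sketch, with SQLite standing in for the multimodal side and its JSON1 functions assumed:

```python
import sqlite3

# Single-purpose key-value store: fast get/put by key, nothing else.
kv = {}
kv["user:42"] = '{"name": "alice", "tier": "gold"}'
print(kv["user:42"])            # lookup by key is all you get

# Multimodal: the same document in a relational engine can also be
# queried by content (SQLite's JSON1 functions assumed available).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (k TEXT PRIMARY KEY, doc TEXT)")
db.execute("INSERT INTO users VALUES ('user:42', ?)", (kv["user:42"],))
gold = db.execute(
    "SELECT k FROM users WHERE json_extract(doc, '$.tier') = 'gold'"
).fetchall()
print(gold)  # query by value, not possible with the bare key-value store
```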
You can essentially just reuse your knowledge of the technology, as well as the libraries and code you have already built in house, perhaps in another application, perhaps, you know, a framework that you used against the same technology, because it is still the same technology. So it kind of all comes down again to avoiding complexity rather than fragmenting across, you know, the many different technologies we have. If you were to look at the different data formats that are out there today, it's like, you know, you would end up with many different databases just to store them if you were to fully, religiously follow the single purpose, best built technology for every use case paradigm, right? And then you would just end up having to manage many different databases rather than actually focusing on your app and getting value to your business or to your user. >> Okay, so I get that and I buy that by the way. I mean, especially if you're a larger organization and you've got all these projects going on, but before we go back to Maria, Gerald, I want to just, I want to push on that a little bit. Because the counter to that argument would be in the analogy. And I wonder if you, I'd love for you to, you know, knock this analogy off the blocks. The counter would be, okay, Oracle is the Swiss Army knife and it's got, you know, all in one. But sometimes I need that specialized long screwdriver and I go into my toolbox and I grab that. It's better than the screwdriver in my Swiss Army knife. Why, are you the Swiss Army knife of databases? Or are you the all-in-one that has that best of breed screwdriver for me? How do you think about that? >> Yeah, that's a fantastic question, right? And I think, first of all, you have to separate between Oracle the company, which actually has multiple data management technologies and databases out there, as you said before, right? And Oracle Database. And I think Oracle Database is definitely a Swiss Army knife, it has gained many capabilities over the last 40 years. You know, we've seen object support coming, that's still in the Oracle Database today. We have seen XML coming, it's still in the Oracle Database, graph, spatial, et cetera. And so you have many different ways of managing your data, and then on top of that, going into the converged idea, not only do we allow you to store the different data models in there, but we actually also allow you to apply all the security policies and so forth on top of it, something Maria can talk more about with the mission around converged database. I would also argue, though, that for some aspects we do actually have that screwdriver that you talked about as well. So especially in the relational world people get very quickly hung up on this idea that, oh, if you only do rows and columns, well, that's kind of what you put down on disk. And that was never true, the relational model is actually a logical model. What's actually being put down on disk is blocks that align themselves nicely with block storage, and always has been. So that allows you to actually model and process the data sort of differently. And one common example, or one good example that we have, that we introduced a couple of years ago, was when columnar databases were very strong and, you know, the competition came, it's like, yeah, we have In-Memory column stores now, they're so much better. And we were like, well, orienting the data row-based or column-based really doesn't matter in the sense that we store them as blocks on disks.
And so we introduced the In-Memory technology, which gives you an In-Memory columnar representation of your data alongside your relational one. So there is an example where you go like, well, actually, you know, if you have this use case of columnar analytics all In-Memory, I would argue Oracle Database is also that screwdriver you want to go down to, and it gives you that capability. Because it not only gives you the columnar representation, but also, which many people then forget, all the analytic power on top of SQL. It's one thing to store your data columnar, it's a completely different story to actually be able to run analytics on top of that, and to have all the built-in functionalities and stuff that you want to do with the data on top of it as you analyze it. >> You know, that's a great example, the columnar one, 'cause I remember there was like a lot of hype around it. Oh, it's the Oracle killer, you know, Vertica. Vertica is still around but, you know, it never really hit escape velocity. But you know, good product, good company, whatever. Netezza, it kind of got buried inside of IBM. ParAccel kind of became, you know, Redshift with that deal, so that kind of went away. Teradata bought a company, I forget which company it bought but. So that hype kind of dissipated and now it's like, oh yeah, columnar. It's kind of like In-Memory, we've had In-Memory databases ever since we've had databases, you know, it's kind of a feature, not a sector. But anyway, Maria, let's come back to you. You've got a lot of customer experience. And you speak with a lot of companies, you know, during your time at Oracle. What else are you seeing in terms of the benefits to this approach that might not be so intuitive and obvious right away? >> I think one of the biggest benefits to having a multimodel, multiworkload, or as we call it, converged database, is the fact that you can get greater data synergy from it. In other words, you can utilize all these different techniques and data models to get better value out of that data. So things like being able to do real-time machine learning, fraud detection inside a transaction, or being able to do a product recommendation by accessing three different data models. So for example, if I'm trying to recommend a product for you Dave, I might use graph analytics to be able to figure out your community. Not just your friends, but other people on our system who look and behave just like you. Once I know that community, then I can go over and see what products they bought by looking up our product catalog, which may be stored as JSON. And then on top of that I can then see, using the key-value, what products inside that catalog those community members gave a five star rating to. So that way I can really pinpoint the right product for you. And I can do all of that in one transaction inside the database, without having to transform that data into different models or, God forbid, access different systems to be able to get all of that information. So it really simplifies how we can generate that value from the data. And of course, the other thing our customers love is when it comes to deploying data-driven apps, when you do it on a converged database it's much simpler, because it is that standard data platform. So you're not having to manage multiple independent single purpose databases. You're not having to implement the security and the high availability policies, you know, across a bunch of different diverse platforms.
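Maria's three-model walk, graph community, then JSON catalog, then ratings, in one transaction, can be sketched end to end. Everything below, from the table names to the recursive-CTE community walk, is invented illustration rather than Oracle's implementation:

```python
import sqlite3

# Graph community -> JSON catalog -> ratings, all in one transaction.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE follows (who TEXT, whom TEXT);            -- graph edges
    CREATE TABLE catalog (sku TEXT, doc TEXT);             -- JSON documents
    CREATE TABLE ratings (who TEXT, sku TEXT, stars INT);  -- rating facts
    INSERT INTO follows VALUES ('dave', 'maria'), ('maria', 'gerald');
    INSERT INTO catalog VALUES ('A-1', '{"name": "Road Bike"}');
    INSERT INTO ratings VALUES ('gerald', 'A-1', 5);
""")

with db:  # one transaction across all three "models"
    # 1. Graph walk: Dave's extended community.
    community = [row[0] for row in db.execute("""
        WITH RECURSIVE c(p) AS (
            SELECT 'dave'
            UNION
            SELECT whom FROM follows JOIN c ON follows.who = c.p)
        SELECT p FROM c WHERE p != 'dave'""")]
    # 2 + 3. Five-star products in that community, named via the JSON doc.
    marks = ",".join("?" * len(community))
    picks = db.execute(f"""
        SELECT json_extract(catalog.doc, '$.name')
        FROM ratings JOIN catalog USING (sku)
        WHERE ratings.stars = 5 AND ratings.who IN ({marks})""",
        community).fetchall()
print(picks)  # [('Road Bike',)]
```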
All of that can be done much simpler with a converged database 'cause the DBA team of course, is going to just use that standard set of tools to manage, monitor and secure those systems. >> Thank you for that. And you know, it's interesting, you talk about simplification and you are in Juan's organization so you've big focus on mission critical. And so one of the things that I think is often overlooked well, we talk about all the time is recovery. And if things are simpler, recovery is faster and easier. And so it's kind of the hallmark of Oracle is like the gold standard of the toughest apps, the most mission critical apps. But I wanted to get to the cloud Maria. So because everything is going to the cloud, right? Not all workloads are going to the cloud but everybody is talking about the cloud. Everybody has cloud first mentality and so yes, it's a hybrid world. But the natural next question is how do you think the cloud fits into this world of data-driven apps? >> I think just like any app that you're developing, the cloud helps to accelerate that development. And of course the deployment of these data-driven applications. 'Cause if you think about it, the developer is instantly able to provision a converged database that Oracle will automatically manage and look after for them. But what's great about doing something like that if you use like our autonomous database service is that it comes in different flavors. So you can get autonomous transaction processing, data warehousing or autonomous JSON so that the developer is going to get a database that's been optimized for their specific use case, whatever they are trying to solve. And it's also going to contain all of that great functionality and capabilities that we've been talking about. So what that really means to the developer though is as the project evolves and inevitably the business needs change a little, there's no need to panic when one of those changes comes in because your converged database or your autonomous database has all of those additional capabilities. So you can simply utilize those to able to address those evolving changes in the project. 'Cause let's face it, none of us normally know exactly what we need to build right at the very beginning. And on top of that they also kind of get a built-in buddy in the cloud, especially in the autonomous database. And that buddy comes in the form of built-in workload optimizations. So with the autonomous database we do things like automatic indexing where we're using machine learning to be that buddy for the developer. So what it'll do is it'll monitor the workload and see what kind of queries are being run on that system. And then it will actually determine if there are indexes that should be built to help improve the performance of that application. And not only does it bill those indexes but it verifies that they help improve the performance before publishing it to the application. So by the time the developer is finished with that app and it's ready to be deployed, it's actually also been optimized by the developers buddy, the Oracle autonomous database. So, you know, it's a really nice helping hand for developers when they're building any app especially data-driven apps. >> I like how you sort of gave us, you know the truth here is you don't always know where you're going when you're building an app. It's like it goes from you are trying to build it and they will come to start building it and we'll figure out where it's going to go. With Agile that's kind of how it works. 
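The automatic indexing "buddy" Maria describes is, at its core, a monitor, candidate, verify, publish loop. A conceptual sketch of just that loop, using SQLite and invented data; Oracle's actual feature is far more sophisticated and fully automated:

```python
import sqlite3, time

# The monitor -> candidate -> verify -> publish loop, reduced to its bones.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER, val INTEGER)")
db.executemany("INSERT INTO t VALUES (?, ?)",
               ((i, i % 97) for i in range(200_000)))

QUERY = "SELECT COUNT(*) FROM t WHERE val = 42"

def timed(sql):
    start = time.perf_counter()
    db.execute(sql).fetchone()
    return time.perf_counter() - start

before = timed(QUERY)                               # monitor the workload
db.execute("CREATE INDEX candidate_idx ON t(val)")  # build a candidate
after = timed(QUERY)                                # verify it actually helps

if after < before:
    print(f"published: {before:.5f}s -> {after:.5f}s")
else:
    db.execute("DROP INDEX candidate_idx")          # discard the candidate
```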
But so I wonder, can you give some examples of maybe customers, or maybe genericize them if you need to. Data-driven apps in the cloud where customers were able to drive more efficiency, where the cloud buddy allowed the customers to do more with less? >> No, we have tons of these but I'll try and keep it to just a couple. One that comes to mind straight away is Retraced. These folks built a blockchain app in the Oracle Cloud that allows manufacturers to actually share the supply chain with the consumer. So the consumer can see exactly, who made their product? Using what raw materials? Where they were sourced from? How it was done? All of that is visible to the consumer. And in order to be able to share that, they had to work on a very diverse set of data. So they had everything from JSON documents to images, as well as your traditional transactions in there. And they store all of that information inside the Oracle autonomous database, they were able to build their app and deploy it on the cloud. And they were able to do all of that very, very quickly. So, you know, that ability to work on multiple different data types in a single database really helped them build that product and get it to market in a very short amount of time. Another customer that's doing something really, really interesting is MineSense. So these guys operate the largest mines in Canada, Chile, and Peru. But what they do is they put these x-ray devices on the massive mechanical shovels that are at the pit or at the mine face. And what that does is it senses the contents of the buckets inside these mining machines. And it's looking at that content to see how it can optimize the processing of the ore inside that bucket. So they're looking to minimize the amount of power and water that it's going to take to process that. And also of course, minimize the amount of waste that's going to come out of that project. So all of that sensor data is sent into an autonomous database where it's going to be processed by a whole host of different users. So everything from the mine engineers, to the geoscientists, to even their own data scientists utilize that data to drive their business forward. And what I love about these guys is they're not happy with building just one app. MineSense actually uses our built-in low-code development environment, APEX, that comes as part of the autonomous database, and they actually produce applications constantly for different aspects of their business using that technology. And they're actually able to accelerate those new apps to the business. It takes them now just a couple of days or weeks to produce an app, instead of months or years to build those new apps. >> Great, thank you for that Maria. Gerald, I'm going to push you again. So, I said upfront and talked about microservices and the cloud and containers and, you know, anybody in the developer space follows that very closely. But some of the things that we've been talking about here, people might look at that and say, well, they're kind of antithetical to microservices. This is Oracle's monolithic approach. But when you think about the benefits of microservices, people want freedom of choice, technology choice, seen as a big advantage of microservices and containers. How do you address such an argument? >> Yeah, that's an excellent question and I get that quite often. The microservices architecture in general, like, as I said before, other architectures, Linux distributions, et cetera.
It's kind of always a bit of like, there's an academic approach and there's a pragmatic approach. And when you look at the microservices, the original definitions that came out in the early 2010s, they actually never said that each microservice has to have a database. And they also never said that if a microservice has a database, you have to use a different technology for each microservice. Just like they never said, you have to write a microservice in a different programming language, right? So where I'm going with this is like, yes, you know, sometimes when you look at some vendors out there, some niche players, they push this message, or they jump on this academic approach, of like, each microservice has to use the best tool at hand, or use a different database for each purpose, et cetera. Which often comes across like, you know, we want to stay part of the conversation. Nothing stops a developer from, you know, using a multimodal database for the microservice and just using that as a document store, right? Or just using that as a relational database. And, you know, sometimes, I mean, there was actually something really interesting that happened yesterday, I don't know whether you followed that Dave or not. But Facebook had an outage yesterday, right? And Facebook is one of those companies that are seen as the Silicon Valley, you know, know how to do microservices companies. And when you read through the outage, well, what happened, right? Some unfortunate logical error with configuration, as it turns out, that took a database cluster down. So, you know, there you have it, where you go like, well, maybe not every microservice is actually in fact talking to its own database or its own special purpose database. I think, you know, what the industry should be focusing on, much more than this argument of which technology to use, what's the right tool for the job, is to ask themselves, what business problem are we actually trying to solve? And therefore, what's the right approach and the right technology for this? And so therefore, just as I said before, you know, multimodal databases do have strong benefits. They have many built-in functionalities that are already there and they allow you to reduce this complexity of having to know many different technologies, right? And it's not only being able to store different data models, either, you know, treating a multimodal database as a JSON document store or a relational database, and most databases have been multimodal for 20 plus years. But it's also actually being able to, perhaps, if you store that data together, derive additional value from it for somebody else, perhaps not for your application. Like, for example, if you were to use Oracle Database, you can actually write queries on top of all of that data. It doesn't really matter for our query engine whether the data is formatted in JSON or the data is formatted in rows and columns, you can just query over it. And that's actually very powerful for those guys that have to, you know, get the reporting done at the end of the day, the end of the week. And for those guys, the data scientists, that want to figure out, you know, which product performed really well, or can we tweak something here and there. When you look into that space you still see a huge divergence between the guys that put data in, kind of in the OLTP style, and the guys that try to derive new insights.
And there's still a lot of ETL going around and, you know, we have big data technologies, some of them came and went, and some that came in are still around, like Apache Spark, which is essentially a SQL engine on top of any of your data, kind of going back to the same concept. And so I will say that, you know, for developers, when we look at microservices, it's like, first of all, is the argument you were making because the vendor or the technology you want to use tells you this argument, or, you know, you kind of want to have an argument to use a specific technology? Or is it really more because it is the best technology to use for this given use case, for this given application that you have? And if so, there's of course also nothing wrong with using a single purpose technology either, right? >> Yeah, I mean, whenever I talk about Oracle I always come back to the most important applications, the mission critical. It's very difficult to architect databases with microservices and containers. You have to be really, really careful. And so again, it comes back to what we were talking about before with Maria, the complexity and the recovery. But Gerald I want to stay with you for a minute. So there's other data management technologies popping out there. I mean, I've seen some people saying, okay, just leave the data in an S3 bucket. We can query that, then we've got some magic sauce to do that. And so why are you optimistic about, you know, traditional database technology going forward? >> I would say because of the history of databases. So one thing that struck me when I came to Oracle and then got to meet great people like Juan Loaiza and Andy Mendelsohn, who have been here for a long, long time. I came to the realization that relational databases have been around for about 45 years now. And, you know, I was like, I'm too young to have been around then, right? So I was like, what else has been around 45 years? It's like, the tech stack that we have today, how does that look? Well, Linux only came out in '93. Well, databases pre-date Linux by a lot, and as I started digging I saw a lot of technologies come and go, right? And you mentioned before, like, the technologies, the data management systems that we had that came and went, like the columnar databases or XML databases, object databases. And even before relational databases, before Codd gave us the relational model, there were apparently these network stores, network databases, which to some extent look very similar to JSON documents. They were another way of storing data in a hierarchical format. And, you know, when you then start actually reading the Codd paper and diving a little bit more into the relational model, there's, I think, one important crux in there that most of the industry keeps forgetting, or hasn't been around to even know. And that is that when Codd created the relational model, he actually focused not so much on the application putting the data in, but on future users and applications still being able to make sense out of the data, right? And that's, kind of like I said before, we had those network models, we had XML databases, we have JSON document stores. And the one thing that they all have in common is that the application that puts the data in decides the structure of the data. And that's all well and good while you still have the application and the developer who wrote it around.
It can become really tricky when 10 years later you still want to look at that data, and the application and the developer are no longer around, and then you go like, what does this all mean? Where is the structure defined? What is this attribute? What does it mean? How does it correlate to others? And the one thing that people tend to forget is that it's actually the data that's here to stay, not so much the applications that sit on top of it. Ideally, every company wants to store every single byte of data that they have, because there might be future value in it. Economically it may not always make sense, but that's now much more feasible than just years ago. But if you could, why wouldn't you want to store all your data, right? And sometimes you actually have to store the data for seven years or whatever, because the laws require you to. And so coming back then, you know, like 10 years from now, and looking at the data, making sense of that data can actually become a lot more difficult and a lot more challenging than having first figured out how we store this data for general use. And that kind of was what the relational model was all about. We decompose the data structures into tables and columns, with relationships between each other. So that if somebody wants to, you know, a typical example would be, well, you store some purchases from your web store, right? There's a customer attribute in it. There's some credit card payment information in it, and some product information on what the customer bought. Well, in the relational model, if you just want to figure out which products were sold on a given day or week, you would just query the payments and products tables to get that sense out of it. You don't need to touch the customer and so forth. And with the hierarchical model, you have to first sit down and understand how the structure looks: what is the customer? Where is the payment? You know, does the document start with the payment or does it start with the customer? Where do I find this information? And then in the very early days those databases even struggled to avoid scanning all the documents to get the data out. So coming back to your question a bit, I apologize for going on here. But you know, it's like, relational databases have been around for 45 years. I actually argue it's one of the most successful software technologies that we have out there when you look at the overall industry, right? 45 years in IT terms is like a star being born to it going supernova. As I said before, many technologies came and went, right? And I just want to add another really interesting example, by the way, which is Hadoop and HDFS, right? They kind of gave us this additional promise, you know, in the 2010s, like 2012, 2013, the hype of Hadoop and so forth and (mumbles) and HDFS. And people were just like, just put everything into HDFS and worry about the data later, right? We can query it and MapReduce it and whatever. And we had customers actually coming to us, they were like, great, we have half a petabyte of data on an HDFS cluster and we have no clue what's stored in there. How do we figure this out? What are we going to do now? Now you have a big data cleansing problem. And so I think that is why databases, and also data modeling, are something that will not go away anytime soon. And I think databases and database technologies are here to stay for quite a while.
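Gerald's web-store example is easy to make concrete: decompose purchases into tables, then answer what sold on a given day without ever touching the customer data. A minimal sketch with an invented schema:

```python
import sqlite3

# Purchases decomposed the relational way; schema and data invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE products  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE payments  (customer_id INT, product_id INT,
                            paid REAL, day TEXT);
    INSERT INTO customers VALUES (1, 'alice');
    INSERT INTO products  VALUES (10, 'widget');
    INSERT INTO payments  VALUES (1, 10, 9.99, '2021-06-24');
""")

# What sold on a given day? Only payments and products are touched;
# nobody needs to know how the customer data is structured.
rows = db.execute("""
    SELECT products.name, COUNT(*) AS sold
    FROM payments JOIN products ON products.id = payments.product_id
    WHERE payments.day = '2021-06-24'
    GROUP BY products.name
""").fetchall()
print(rows)  # [('widget', 1)]
```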
Because with many of those, people don't think about what's happening to the data five years from now. And many of the niche players, and also frankly even Amazon, you know, following this single purpose thing, it's like, just use the right tool for the job for your application, right? Just put the data in there the way you want it. And it's like, okay, so you use technologies all over the place, and then five years from now you have your data fragmented everywhere, in different formats, and, you know, inconsistencies, and so on. And when you come back to these data-driven, business critical decision applications, that's usually the worst case scenario you can have, right? Because now you need an army of people to actually do data cleansing. And it's not a coincidence that data science has become very, very popular in recent years as we kind of went on with this proliferation of different database or data management technologies, some of which are not even databases. But I think I'll leave it at that. >> It's an interesting talk track because you're right. I mean, no schema on write was alluring, but it definitely created some problems. It also created, you know, you referenced the hyper specialized roles, an entire data cleansing component. I mean, maybe technology will eventually solve that problem but it hasn't, at least up to tonight. Okay, last question, Maria maybe you could start off, and Gerald if you want to chime in as well it'd be great. I mean, it's interesting to watch this industry, when Oracle sort of won the top database mantle. I mean, I watched it, I saw it. It was, remember, it was Informix and it was (indistinct) too, and of course Microsoft, you've got to give them credit with SQL Server, but Oracle won the database wars. And then everything got kind of quiet for a while, database was sort of boring. And then it exploded, you know, all the, you know, NoSQL and the key-value stores and the cloud databases, and this is really a hot area now. And when we looked at Oracle we said, okay, Oracle, it's all about Oracle Database, but we've seen the kind of resurgence in MySQL, which everybody thought, you know, once Oracle bought Sun they were going to kill MySQL. But now we see you investing in HeatWave, TimesTen, we talked about In-Memory databases before. So where do those fit in, Maria, in the grand scheme? How should we think about Oracle's database portfolio? >> So there's lots of places where you'd use those different things. 'Cause just like any other industry, there are going to be new and boutique use cases that are going to benefit from a more specialized product or single purpose product. So good examples off the top of my head, of the kind of systems that would benefit from that, would be things like a stock exchange system or a telephone exchange system. Both of those are latency critical transaction processing applications where they need microsecond response times. And that's going to exceed perhaps what you might normally get or deploy with a converged database. And so Oracle's TimesTen database, our In-Memory database, is perfect for those kinds of applications. But there's also a host of MySQL applications out there today and, you said it yourself there Dave, HeatWave is a great place to provision and deploy those kinds of applications, because it's going to run 100 times faster than AWS (mumbles).
So, you know, there really is a place in the market, and in our customers' systems and the needs they have, for all of these different members of our database family here at Oracle. >> Yeah, well, the internet is basically running on the LAMP stack, so I don't see MySQL going away. All right Gerald, we'll give you the final word, bring us home. >> Oh, thank you very much. Yeah, I mean, as Maria said, I think it comes back to what we discussed before. There are obviously still needs for special technologies, or different technologies than a relational database or multimodal database. Oracle actually has many more databases than people may first think of. Not only the three that we have already mentioned, but there's also, for example, Oracle's NoSQL database. And, you know, on a high level, Oracle is a data management company, right? And we want to give our customers the best tools and the best technology to manage all of their data. And therefore there has to be, or there should be, a part of the business that also focuses on these highly specialized systems and these highly specialized technologies that address those use cases. And I think it makes perfect sense. It's like, you know, when the customer comes to Oracle they're not just getting this, take this one product, you know, and if you don't like it that's your problem, but actually you have choice, right? And choice allows you to make a decision based on what's best for you, and not necessarily best for the vendor you're talking to. >> Well guys, really appreciate your time today and your insights. Maria, Gerald, thanks so much for coming on The Cube. >> Thank you very much for having us. >> And thanks for watching this Cube conversation, this is Dave Vellante and we'll see you next time. (upbeat music)

Published Date : Jun 24 2021


Christian Craft, Oracle | CUBE Conversation


 

(upbeat music) >> Hello everyone, and welcome to this Cube conversation. We're going to dig into some of the more specific and sometimes gory details of managing the nuances of database, database management systems. You know, it's a lot of fun to get into the daily buzz of cloud and database competition and get a little snarky on Twitter, but there are a lot of mundane issues that you have to address to really do proper database sizing, capacity planning, and, you know, whether or not database consolidation makes sense. These are not trivial issues. And decades ago they spawned an entire role around the database administrator. They had to do the dirty work of database management so that users and customers would be satisfied. And while automation and cloud are changing that role, at the end of the day, somebody actually has to make the databases work in the cloud and make sure that the business doesn't feel any impact from the transition along the way. So on that note, we have with us Oracle senior director of product management for mission critical databases. He works in Juan Loaiza's group, Chris Craft, and Steve Zivanic, whom we know well on theCube, says this guy is the Jedi master when it comes to consolidating databases in the cloud. Nobody knows more on the face of the planet Earth. So we're really excited Chris, to have you inside the Cube. Welcome. >> Thanks, thanks Dave. >> That's a very humble thanks. So when it comes to running databases in the cloud, can you explain the difference between sizing and capacity planning? Aren't they two sides of the same coin? >> Yeah, you know, they really are. It's like, you know, sizing is really part of capacity planning. I look at sizing as a one-time effort, whereas capacity planning is more your ongoing effort. You perform sizing initially when the application is deployed. And then, when you're changing platforms, like going from on-prem to the Cloud, you're going to go through a sizing exercise 'cause you're looking at going to a new platform. That's more of a one-time effort, and then ongoing, you're looking at your capacity management over time. So yeah, they are very related, so. >> Okay, thank you. So we're going to talk about database consolidation. A lot of people would say, look, the cloud makes consolidating databases maybe not irrelevant, but maybe not the best strategy, because I've got all these different purpose-built databases. Why consolidate databases if they're already going to be consolidated in the cloud in one location? >> Yeah. So we're really talking about, in the cloud, you're running virtual machines, but consolidation still applies on the virtual machines. So if you have a virtual machine that's dedicated to a database, that server, that virtual machine, is going to be underutilized over time. So what we're doing with consolidation is running multiple databases within a virtual machine, or what we call an Oracle virtual cluster. We do everything on clusters. So multiple machines, multiple databases within that, will drive up the utilization and improve your cost structure. So sizing, it's absolutely critical, even in the cloud. >> Okay. But, but wouldn't it, I might say to that, wouldn't it be better to have each database have a dedicated VM? I mean, from a performance perspective, doesn't trying to make the database do too much affect performance? >> Yeah.
So whenever, so we know historically that a database on a dedicated server, back in the day that was a physical server, today it's a virtual machine. When you do that, your utilization will be in the range of 15 to 20%. And that's, you know, very highly underutilized systems when you do that. So we don't need to isolate things onto dedicated virtual machines from a performance perspective. There are other ways that we can manage that: we have resource management built into Oracle and the Oracle database. And then on Exadata we have integrated IO resource management as well, so we can deal with that in different ways. >> Okay. So you're basically proposing that you're putting these databases onto a single VM and managing it accordingly. Are there additional details you can provide on that? >> So, you know, we don't put everything into, you know, literally one VM. You want to have some isolation built in there, but you take a more pragmatic approach. You know, like every single database in one VM, that's the wrong way to go. Each database in a dedicated VM is the other extreme, also the wrong way to go. So we're kind of right down the middle, being more pragmatic about it, and doing some level of consolidation to drive up utilization. >> I remember when I first started following tech I was studying up on, you know, kind of how disc drives work and so forth. And there was probably, like, I can't even remember what it was. It was probably like 10 megabytes under an actuator. And people were saying, oh my God, that's so much data. Your blast radius is so big. You've got to split that up. So it's the same concept, applied to availability. Some would say there's a problem because you're consolidating all this data and you've got this blast radius that increases. How do you address that? >> And so, you know, redundancy. So we have redundancy at all levels. So if you look at a single, so we're talking about Exadata here, in an Exadata machine we can lose up to 24 disc drives out of 36. With 36 disc drives we can lose 24 of those, so that'd be 12 per storage cell. You can lose two storage cells, that's 24 out of 36 drives, so we can lose those and keep on running. We also do clustering. So the database servers are clustered together for high availability. So we can suffer multiple simultaneous failures and keep on running, without performance impact either. So recovery, we handle that in different ways. Look at blast radius from the standpoint that you want some isolation for blast radius, but physical failure is just not something that we're concerned with. >> How do you deal with taking down a VM? Doesn't that normally mean there's going to be some kind of disruption? >> Oh, so you know, with Oracle database, you're talking about Real Application Clusters on Oracle database, on Exadata. We have very fast detection of failures, and then resolution of the failure. So we're looking at a small blip in performance, you know, we're looking at a few milliseconds to detect a failure, and then maybe around three seconds to actually effect the failover. So the applications are not getting disconnected, they continue operating in that kind of condition. So that's kind of unique to the Exadata platform. And so, you know, in our cloud we're running Exadata. We have this built in there. So we're resilient to that type of failure, so.
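A back-of-the-envelope check on those drive numbers, reading them as triple mirroring across three storage cells of a dozen drives each (my reading of the figures above, not an Exadata specification):

```python
# Triple mirroring across three storage cells of a dozen drives each:
# with one surviving copy per piece of data, two whole cells can fail.
CELLS, DRIVES_PER_CELL, COPIES = 3, 12, 3

total_drives = CELLS * DRIVES_PER_CELL                    # 36 drives
cells_we_can_lose = COPIES - 1                            # 2 of the 3 cells
drives_we_can_lose = cells_we_can_lose * DRIVES_PER_CELL  # 24 drives
print(total_drives, drives_we_can_lose)                   # 36 24
```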
>> And sorry, you mentioned Real Application Clusters. You're saying because you're running Real Application Clusters, that's how you're able to become more resilient? >> So yeah, Oracle database Real Application Clusters runs on top of clustered virtual machines on Exadata. We have integration that optimizes the failover times of that clustering. So it's not the same clustering, the optimizations are only built into Exadata. So we have much faster, much tighter integration, and much more scalability, because of that integration that we have. >> Can I run RAC in other clouds? Can I put that into Amazon's cloud? >> So Real Application Clusters requires two things. You require shared storage and a fast interconnect, a fast networking interconnect. And those things just don't exist in the other clouds. We have those built into Exadata in our cloud. And we also allow Real Application Clusters in our relational database, our Database Cloud Service offering, as well. But really the highest implementation of that is in Exadata. >> Well, of course I was tongue in cheek joking, but this is, this is why, you know, I was listening to Arvind Krishna the other day at IBM Think. And he was saying only 25% of mission critical applications have moved into the cloud. I didn't think it was that high. I mean, but what you're doing is basically building a mission critical, you know, cloud, or a cloud for mission critical databases. And that's, that's unique. I mean, I would expect other cloud vendors eventually, you know, are going to get there, but you're kind of starting with the hard stuff and working backwards. But that is what I've always interpreted as unique to Oracle, but how does that affect cost? Isn't that more expensive? >> Actually, no. We're taking services that start out at a very similar price point, and then we drive utilization up. So what we've seen from customers that are running in, like, Amazon, for example, we see databases on dedicated virtual machines that run anywhere from 15 to 20% utilization. So what we do is take that low utilization and triple it. So we run maybe 50% utilization. At that point we still have full redundancy, but we've now made the service one third of the cost. So we're starting at a very similar cost, and then we drive it to, you know, three times the utilization. These are not crazy numbers. This is, you know, 50%, which is fine, and we retain the redundancy at that level as well. >> Got it, well so. >> What we've seen is about a third the cost. >> Really? Okay. Well, so, but what about, like for instance on AWS, couldn't I run this in a multi availability zone, running RDS or some other cloud database? >> So you can run a Multi-AZ environment, like in Amazon, for example, you can run what we call a local standby. If you do that, then instead of being three times more expensive, you're now six times more expensive. Because that is another copy of the entire platform, the entire instance, the storage, everything, in the other availability zone. Instead of being three times more, it's now six. >> Because you're essentially replicating everything in a brute force mode, right? >> Yeah. It's a Data Guard standby, a local standby in another AZ, or what we call an availability domain in our cloud. >> So let's maybe geek out a little bit. So, let's talk more about availability.
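Chris's cost reasoning can be written down directly; the sketch below reproduces only the utilization arithmetic from the conversation, with invented absolute numbers:

```python
# Same workload, different target utilization per VM.
def vms_needed(total_demand: float, vm_capacity: float, target_util: float) -> float:
    return total_demand / (vm_capacity * target_util)

demand, capacity = 100.0, 10.0                     # arbitrary units
dedicated = vms_needed(demand, capacity, 0.15)     # one DB per VM, ~15% busy
consolidated = vms_needed(demand, capacity, 0.50)  # consolidated, ~50% busy

print(consolidated / dedicated)      # 0.3, roughly one third the spend
# A Multi-AZ standby doubles the footprint, so a ~3x gap becomes ~6x.
print(2 * dedicated / consolidated)  # ~6.7 with these toy numbers
```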
You know, for years, I mean, I remember going back to reading about this stuff with tandem computers, you know, coincident failures. How are you dealing with those in today's modern world? >> So what we call simultaneous failures is, so we, we deal with that with redundancy in the system. So we have redundancy at all layers in the storage. Like I said earlier, we can take across, you know, two storage cells and each storage cell has a dozen drives. So that's 24 disc drives. That's eight flashcard failures simultaneously. And we keep on running no data loss, no loss of service. That's at the storage layer. We have multiple, multiple redundant networking switches at that, at the networking layer, the internal network. Then we go up into the database server. We then have redundancy across the nodes of a cluster. You have multiple virtual machines that comprise a virtual cluster. So it's at each and every level, we have redundancy. And then we drive the redundancy into the application using what's called application continuity. So the application connections have knowledge of the failure, failure modes of the database. They can follow to the surviving node, and continue operating. >> And you do this with math, you're doing some kind of magic bit slicing, or how do you do that? >> That, so that is that particular thing, application continuity, so technology that's been built into Oracle database since, since 12c, and that it's been around for quite a long time. And it allows the application to follow the rack cluster, any kind of issues with the rack cluster. We can drain connections off. It's very well-proven technology in, you know, prior to to proactive maintenance, we can drain connections over and then it will also handle a failure of a connection as well. And the application following that, yes. >> I learned from my old mainframe days and hanging around with David Floyer. It's always ask, what happens when something goes wrong and it's all about recovery. And you guys have the gold standard there. I mean, we've talked about this a lot. So you got Exadata. That's what is behind your Exadata cloud service, X8M I think you call it, and you've got autonomous database. I'm not great with model numbers, but, but talk about the way you can handle simultaneous failures. I mean, are there like triple redundancies that you've built in? >> Yeah. So everything what we do in our cloud is everything is triple redundancy by default. So we, you can suffer, that way we can suffer two failures and continue operating. So the, the other thing, so recovery, if you look at transaction recovery, when a failure occurs a transaction will flip that session, will flip to the machine that keeps running. It'll reposition all in the work that's in flight, any kind of inflight transactions, any in flight queries that are going on, reposition and continue operating. >> So you've essentially created like the old three site data centers, but you're in a single platform because you're synchronous. But, that same concept in a package. >> It's, you know, it's a lot of times you show a picture of an Exadata. It looks like a single box, but in the box there's some redundancy built in the box. And in fact, in the cloud it's actually across an entire aisle. So it's, we kind of obscure that a little bit, from your provisioning, you know, our database nodes and our storage cells and in the cloud but it's actually across an entire aisle of a dataset. >> Okay, and of course, that's within a synchronous location. 
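Application Continuity does its work transparently inside the driver, but its shape is detect, reconnect, replay. A conceptual Python stand-in follows; all names are invented, and the real feature requires no application code like this:

```python
import time

# Detect a dead connection, reconnect to a surviving node, replay the
# in-flight work. Conceptual only; not Oracle's driver implementation.
def with_failover(connect, nodes, work, retries=3):
    last_error = None
    for attempt in range(retries):
        node = nodes[attempt % len(nodes)]
        try:
            conn = connect(node)
            return work(conn)            # replay the in-flight request
        except ConnectionError as err:   # node died mid-request
            last_error = err
            time.sleep(0.01)             # real failover target: milliseconds
    raise last_error

# Toy demo: the first node "fails", the second takes over.
def fake_connect(node):
    if node == "node-a":
        raise ConnectionError("node-a died")
    return f"session on {node}"

print(with_failover(fake_connect, ["node-a", "node-b"], work=lambda c: c))
```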
Let's talk about disaster recovery, and what you're doing in that area, around Oracle Cloud What are my options there? What's different from other cloud providers we were talking earlier about, AZs, how are you different and what are you doing there? >> Yeah, so we, we talked earlier about the Multi-AZ deployment, what we call it availability domain, AD, so a little different terminology. But we can deploy another, another copy of the database into another availability domain, if you like. It's not often that you lose an entire AZ or AD, it's more, we're protecting from regional failures. So across another region. And that's where we look at, we really look at that as that technology, as a standby, as a data, disaster recovery solution not for HA. HA, we build HA into the machine itself. >> So you're saying, we were talking earlier about AZ, you're saying that's for HA versus DR. Is that, is that what you're contending? >> Yeah, like, you know again, pick on Amazon for a second here. Amazon uses a standby database. What we would normally use for disaster recovery, they're using that for availability. And you're looking at a few minutes of time to flip over to another AZ, whereas within an Exadata frame, we can flip over in milliseconds. We keep continue running. There is no loss of conductivity. And then we use the standby in another region for disaster. That's a true disaster solution. >> As opposed to incurring that penalty of latency, or whatever, to spin up the other resource. >> Right, right. >> Okay, so that's clear how kind of you guys address that, that challenge. Last question, maybe you could give us your take, again folks, coming out of Oracle's mouth, but what's the bottom line cost Delta based on your experience between your service and competitive services? I love these conversations because you're not afraid to talk about the competition, so bring it on. >> I've seen, so we've just based on what we've seen with customers deploying databases in Amazon, versus what, you know we've replaced that within, in our cloud service. We're seeing from just a list price perspective. Now, you know, we discount, I know Amazon discounts, but the only thing I can really speak to is list price perspective. It's about a third the cost. So we're talking about a more powerful platform, runs faster. We get these incredible, we haven't even talked about performance here. Talk about availability, performance where we're getting IO rates, IO latencies in the 19 microsecond range. Now with Exadata, that's going to be 50 times faster than what you get with these traditional cloud vendors. So much, much faster, and a third the cost. >> So talk about discounts, I mean, I know Oracle discounts, Oracle from list price, Oracle provides significant discounts. I'm not as familiar with your cloud pricing but I mean, Amazon's discounts are really in the form of like reserved instances. Is your pricing similar in that regard or different? I mean, if I'm just paying on demand, I'm paying through the nose. I presume it's same with you. If I, but if I buy in bulk getting a discount, is that what you mean by discount? Or is it more similar to the way you've traditionally discounted, you know large customers, the more you spend, the more you you get kind of thing. >> It's a, there's a discount structure. So it's, we don't have the same kind of lock-in like with reserved instance structure, but yeah, it's, there are discounts and that's going to be very customer specific. >> Right. 
>> So, but I think the end result is we're starting at a three X differential on the price. >> But the reason I'm asking the question is that the stats you gave me are for list price, right? >> Yeah, yes, yeah. >> Okay, and sure, you're saying that at list price you're less expensive. And again, my contention would be, just by experience, that your discounts would traditionally be more aggressive in Oracle's traditional business. You know, I've done a lot of Oracle negotiation in my days. And if you're, you know, if you're a big customer you can get good deals. And again, I'm not as familiar with the cloud pricing, but still, that's good. If you're doing it on a list price basis, to me, that's a conservative statement, if that makes any sense. >> Right, that's where it starts. We know that's where it's starting out. So, you know, once you get into discounts, it's very customer specific. >> Right. >> We know the starting point is at a three X differential. Before you do anything Multi-AZ, which would be a six X differential, by the way, so. >> Yeah, okay. All right, Chris. Well, hey, I appreciate you taking us through this, good stuff, and best of luck, good work. You know, I always say Oracle invests, you guys spend a lot of money in R&D, and, you know, you were quiet for a while in the cloud and all of a sudden you came out like you invented it. So good job! >> All right. >> All right, thanks. Thanks for coming on. All right. >> Thanks. >> Thank you for watching everybody. This is Dave Vellante for Cube conversations. We'll see you next time. (upbeat music)

Published Date : May 14 2021


Nipun Agarwal, Oracle | CUBEconversation


 

(bright upbeat music) >> Hello everyone, and welcome to the special exclusive CUBE Conversation, where we continue our coverage of the trends of the database market. With me is Nipun Agarwal, who's the vice president of MySQL HeatWave and advanced development at Oracle. Nipun, welcome. >> Thank you Dave. >> I love to have technical people on the Cube to educate, debate, inform, and we've extensively covered this market. We were all over the Snowflake IPO, and at that time I remember, I challenged organizations, bring your best people, because I want to better understand what's happening in database. After Oracle kind of won the database wars 20 years ago, database kind of got boring. And then it got really exciting with the big data movement, and all the not-only-SQL stuff coming out, and Hadoop and blah, blah, blah. And now it's just exploding. You're seeing huge investments from many of your competitors, VCs are trying to get into the action. Meanwhile, as I've said many, many times, your chairman and head of technology, CTO, Larry Ellison, continues to invest to keep Oracle relevant. So it's really been fun to watch and I really appreciate you coming on. >> Sure thing. >> We have written extensively, we talked to a lot of Oracle customers. You've got the leading mission critical database in the world. Everybody from the Fortune 100, we evaluated what Gartner said about the operational databases. I think there's not a lot of question there. And we've written about that on Wikibon, about your converged databases, and the strategy there, and we're going to get into that. We've covered Autonomous Data Warehouse, Exadata Cloud@Customer, and then we just want to really try to get into your area, which has, kind of, caught our attention recently. And I'm talking about the MySQL Database Service with HeatWave. I love the name, I laugh. It was unveiled, I don't know, a few months ago. So Nipun, let's start the discussion today. Maybe you can update our viewers on what is HeatWave? What's the overall focus with Oracle? And how does it fit into the Cloud Database Service? >> Sure Dave. So HeatWave is an in-memory query accelerator for the MySQL Database Service, for speeding up analytic queries as well as long running complex OLTP queries. And this is all done in the context of a single database, which is the MySQL Database Service. Also, all existing MySQL applications or MySQL compatible tools and applications continue to work as is. So there is no change. And with HeatWave, Oracle is delivering the only MySQL service which provides customers with a single unified platform for both analytic as well as transaction processing workloads. >> Okay, so, we've seen open source databases in the cloud growing very rapidly. I mentioned Snowflake, I think Google's BigQuery gets some mention, we'll talk, we'll maybe talk more about Redshift later on, but what I'm wondering, well let's talk about now, how does the MySQL HeatWave service, how does that compare to MySQL-based services from other cloud vendors? I can get MySQL from others. In fact, I think we do. I think we run Wikibon on the LAMP stack. I think it's running on Amazon, but so how does your service compare? >> No other vendor, like, no other vendor offers this differentiated solution with an open source database, namely having a single database which is optimized both for transactional processing and analytics, right? So the example is like MySQL.
A lot of other cloud vendors provide a MySQL service, but MySQL has been optimized for transaction processing, so when customers need to run analytics they need to move the data out of MySQL into some other database for any analytics, right? So we are the only vendor which is now offering this unified solution for both transactional processing and analytics. That's the first point. Second thing is, most of the vendors out there have taken open source databases and they're basically hosting them in the cloud. Whereas HeatWave has been designed from the ground up for the cloud, and it is 100% compatible with MySQL applications. And the fact that we have designed it from the ground up for the cloud, and spent 100s of person years of research and engineering, means that we have a solution which is very, very scalable, it's very optimized in terms of performance, and it is very inexpensive in terms of the cost. >> Are you saying, well, wait, are you saying that you essentially rewrote MySQL to create HeatWave but at the same time maintained compatibility with existing applications? >> Right. So we enhanced MySQL significantly and we wrote a whole bunch of new code, which is brand new code optimized for the cloud, in such a manner that yes, it is 100% compatible with all existing MySQL applications. >> What does it mean? And if I'm to optimize for the cloud, I mean, I hear that and I say, okay, it's taking advantage of cloud-native. I hear kind of the buzzwords, cloud-first, cloud-native. What does it specifically mean from a technical standpoint? >> Right. So first, let's talk about performance. What we have done is that we have looked at two aspects. We have worked with shapes, for instance, like, the compute shapes which provide the best performance per dollar. So I'll give you a couple of examples. We have optimized for certain chips. So, HeatWave is an in-memory query accelerator, so the cost of the system is dominated by the cost of memory. So we are working with chips which provide the cheapest cost per terabyte of memory. Secondly, we are using commodity cloud services in such a manner that it's optimized for both performance as well as performance per dollar. So, an example is, we are not using any locally-attached SSDs. We use ObjectStore because it's very inexpensive. And then I guess at some point I will get into the details of the architecture. The system has been really, really designed for massive scalability. So as you add more compute, as you add more servers, the system continues to scale almost perfectly linearly. So this is what I mean in terms of being optimized for the cloud. >> All right, great. >> And furthermore, (indistinct). >> Thank you. No, carry on. >> Over the next few months, you will see a bunch of other announcements where we're adding a whole bunch of machine learning and data-driven automation, which we believe is critical for the cloud. So optimized for performance, optimized for the cloud, and machine learning-based automation, which we believe is critical for any good cloud-based service. >> All right, I want to come back and ask you more about the architecture, but you mentioned some of the others taking open source databases and shoving them into the cloud. Let's take the example of AWS. They have a series of specialized data stores for different workloads: Aurora is for OLTP, I actually think it's based on MySQL; Redshift, which is based on ParAccel. And so, and I've asked Amazon about this, and their response actually kind of made sense to me.
Look, we want the right tool for the right job, we want access to the primitives, because when the market changes we can change faster, as opposed to, if we start building bigger and bigger databases with more functionality, we're not as agile. So that kind of made sense to me. I know we, again, we use a lot, I think I said we run MySQL in Amazon, we're using DynamoDB, it works, that's cool. We're not huge. And we fully admit, and we've researched this, when you start to get big that starts to get maybe expensive. But what do you think about that approach and why is your approach better? >> Right, we believe that there are multiple drawbacks of having different databases or different services, one optimized for transactional processing and one for analytics, and having to ETL between these different services. First of all, it's expensive because you have to manage different databases. Secondly, it's complex. From an application standpoint, applications now need to understand the semantics of two different databases. It's inefficient because you have to transfer data, at some frequency, from one database to the other one. It's not secure because there are security aspects involved when you're transferring data, and also the identity of users in the two different databases is different. So the approach which has been taken by the Amazons and such, we believe, is more costly, complex, inefficient and not secure. Whereas with HeatWave, all the data resides in one database, which is MySQL, and it can run both transaction processing and analytics. So in addition to all the benefits I talked about, customers can also make their decisions in real time because there is no need to move the data. All the data resides in a single database. So as soon as you make any changes, those changes are visible to customers for queries right away, which is not the case when you have different siloed specialized databases. >> Okay, that, a lot of ways to skin a cat, and what you just said makes sense. By the way, we were saying before, companies have taken off-the-shelf or open source databases and shoved them in the cloud. I have to give Amazon some props. They actually have done engineering on Aurora and Redshift. And they've got the engineering capabilities to do that. But you can see, for example, in Redshift the way they handle separating compute from storage, it's maybe not as elegant as some of the other players, like a Snowflake, for example, but they get there, and maybe it's a little bit more brute force, but so I don't want to just make it sound like they're just hosting off the shelf in the cloud. But is it fair to say that there's like a crossover point? So in other words, if I'm smaller and I'm not, like, doing a bunch of big stuff, like us, I mean, it's fine. It's easy, I spin it up. It's cheaper than having to host my own servers. So presumably there's a sweet spot for that approach and a sweet spot for your approach. Is that fair or do you feel like you can cover a wider spectrum? >> We feel we can cover the entire spectrum, not wider, the entire spectrum. And we have benchmarks published which are actually available on GitHub for anyone to try. You will see that with this approach we have taken with the MySQL Database Service and HeatWave, we are faster, we are cheaper, without having to move the data. And the mileage, or the amount of improvement you will get, will surely vary.
So if you have less data, the amount of improvement you will get may be, like, say 100 times, right, or 500 times, for smaller data sizes. If you get to larger data sizes this improvement amplifies to 1000 times or 10,000 times. And similarly for the cost: if the data size is smaller, the cost advantage you will have is less, maybe MySQL HeatWave is one third the cost. If the data size is larger, the cost advantage amplifies. So to your point, MySQL Database Service with HeatWave is going to be better for all sizes, but the amount of mileage or the amount of benefit you will get increases as the size of the data increases. >> Okay, so you're saying you got better performance, better cost, better price performance. Let me just push back a little bit on this because, having been around for awhile, I often see these performance and price comparisons. And what often happens is a vendor will take the latest and greatest, the one they just announced, and they'll compare it to an N-1 or an N-2, running on old hardware. So, are you normalizing for that? Is that the game you're playing here? I mean, how can you give us confidence that these are, kind of, legitimate benchmarks in your GitHub repo? >> Absolutely. I'll give you a bunch of, like, information. But let me preface this by saying that all of our scripts are available in the open source in the GitHub repo for anyone to try, and we would welcome feedback. So we have taken, yes, the latest version of MySQL Database Service with HeatWave, we have optimized it, and we have run multiple benchmarks. For instance, TPC-H, TPC-DS, right? Because the amount of improvement a query will get depends upon the specific query, depends upon the predicates, it depends on the selectivity, so we just wanted to use standard benchmarks. So it's not the case that certain classes of queries benefit more. So, standard benchmarks. Similarly, for the other vendors or other services like Redshift, we have run benchmarks on the latest shapes of Redshift, the most optimized configuration which they recommend, running their scripts. So this is not something that, hey, we're just running out of the box. We have optimized Aurora, we have optimized (indistinct) to the best possible extent we can, based on their guidelines, based on their latest release, and that's what we're talking about in terms of the numbers. >> All right. Please continue. >> Now, for some other vendors, if we get to the benchmark section, we'll talk about, we are comparing with other services, let's say Snowflake. Well there, there are issues in terms of you can't legally run Snowflake numbers, right? So there, we have looked at some reports published by Gigaom, and we are taking the numbers published by the Gigaom report for Snowflake, Google BigQuery and Azure Synapse, right? So those, we have not run ourselves. But for AWS Redshift, as well as AWS Aurora, we have run the numbers, and I believe these are the best numbers anyone can get. >> I saw that Gigaom report and I got to say, Gigaom, sometimes I'm like, eh, but I got to say that, I forget the guy's name, he knew what he was talking about. He did a good job, I thought. I was curious as to the workload. I always say, well, what's the workload. And, but I thought that report was pretty detailed. And Snowflake did not look great in that report. Oftentimes, and they've been marketing the heck out of it. I forget who sponsored it. It is, it was sponsored content.
But, I did, I remember seeing that and thinking, hmm. So, I think maybe for Snowflake that sweet spot is not, maybe not that performance, maybe it's the simplicity, and I think that's where they're making their mark. And most of their databases are small and a lot of read-only stuff. And so they've found a market there. But I want to come back to the architecture and really sort of understand how you've been able to get this range of both performance and cost you talked about. I thought I heard that you're optimizing the chips, you're using ObjectStore. You've got an architecture that's not using SSD, it's using ObjectStore. So, is there caching there? I wonder if you could just give us some details of the architecture and tell us how you got to where you are. >> Right, so let me start off saying, like, what are the kind of numbers we are talking about, just to kind of be clear, like what the improvements are. So if you take the MySQL Database Service with HeatWave in Oracle Cloud and compare it with a MySQL service in any other cloud, and if you look at smaller data sizes, say data sizes which are about half a terabyte or so, HeatWave is 400 times faster, 400 times faster. And as you get to... >> Sorry. Sorry to interrupt. What are you measuring there? Faster in terms of what? >> Latency. So we take the 22 TPC-H queries, we run them on HeatWave, and we run the same queries on a MySQL service on any other cloud, half a terabyte, and the performance in terms of latency is 400 times faster in HeatWave. >> Thank you. Okay. >> If you go to larger data sizes, then the other data point we're looking at is something like 4 TB. There, we did two comparisons. One is with AWS Aurora, which is, as you said, they have taken MySQL, they have done a bunch of innovations over there, and they are offering it as a premier service. So on 4 TB TPC-H, MySQL Database Service with HeatWave is 1100 times faster than Aurora. It is three times faster than the fastest shape of Redshift. So Redshift comes in different flavors, we're talking about dense compute two, right? And again, looking at the most recommended configuration from Redshift. So 1100 times faster than Aurora, three times faster than Redshift, and at one third the cost. So this is where I just really want to point out that it is much faster and much cheaper. One third the cost. And then going back to the Gigaom report, there was a comparison done with Snowflake, Google BigQuery, Redshift, Azure Synapse. I wouldn't go into the numbers here, but HeatWave was faster on both TPC-H as well as TPC-DS across all these products, and cheaper compared to any of these products. So faster, cheaper on both the benchmarks across all these products. Now let's come to, like, what is the technology underneath? >> Great. >> So, basically there are three parts which you're going to see. One is, improved performance, very good scale, and lower cost. So the first thing is that HeatWave has been optimized for the cloud. And when I say that, we talked about this a bit earlier. One is we are using the cheapest shapes which are available. We're using the cheapest services which are available without having to compromise the performance, and then there is this machine learning-based automation. Now, underneath, in terms of the architecture of HeatWave, there are basically, I would say, four key things.
First is, HeatWave is an in-memory engine; the representation which we have in memory is a hybrid columnar representation which is optimized for vector processing. That's the first thing. And that's pretty table stakes these days for anyone who wants to do in-memory analytics, except that it's hybrid columnar which is optimized for vector processing. So that's the first thing. The second thing, which starts getting to be novel, is that HeatWave has a massively parallel architecture which is enabled by a massively partitioned architecture. So we take the data, we read the data from MySQL into the memory of HeatWave, and we massively partition this data. So as we're reading the data, we're partitioning the data based on the workload; the sizes of these partitions are such that each fits in the cache of the underlying processor, and then we're able to consume these partitions really, really fast. So that's the second bit, which is, like, massively parallel architecture enabled by a massively partitioned architecture. Then the third thing is that we have developed new state-of-the-art algorithms for distributed query processing. So for many of the workloads, we find that joins are the long pole in terms of the amount of time it takes. So we at Oracle have developed new algorithms for distributed join processing, and similarly for many other operators. And this is how we're able to consume this data, or process this data, which is in memory, really, really fast. And finally, we have designed for scalability, and we have designed algorithms such that there's a lot of overlap between compute and communication, which means that as you're sending data across various nodes, and there could be, like, dozens of nodes or 100s of nodes, they're able to overlap the computation time with the communication time, and this is what gives us massive scalability in the cloud. >> Yeah, so, some hard core database techniques that you've brought to HeatWave, that's impressive. Thank you for that description. Let me ask you, just a quick aside. So, MySQL is open source, HeatWave is what? Is it like, open core? Is it open source? >> No, so, HeatWave is something which has been designed and optimized for the cloud. So it can't be open source. So no, it's not open source. >> It is a service. >> It is a service. That's correct. >> So it's a managed service that I pay Oracle to host for me. Okay. Got it. >> That's right. >> Okay, I wonder if you could talk about some of the use cases that you're seeing for HeatWave, any patterns that you're seeing with customers? >> Sure, so we've had the service, we had the HeatWave service in limited availability for almost 15 months, and it's been about five months since we have gone GA. And there's a very interesting trend with our customers we're seeing. The first one is, we are seeing many migrations from AWS, specifically from Aurora. Similarly, we are seeing many migrations from Azure MySQL, and we're seeing migrations from Google. And the number one reason customers are coming is because of ease of use. Because they have their databases currently siloed, as you were talking about, some optimized for transactional processing, some for analytics. Here, what customers find is that in a single database, they're able to get very good performance, they don't need to move the data around, they don't need to manage multiple databases. So we are seeing many migrations from these services. And the number one reason is reduced complexity and ease of use.
And the second one is much better performance and reduced costs, right? So that's the first thing. We are very excited and delighted to see the number of migrations we're getting. The second thing which we're seeing is, initially, when we had the service announced, we were, like, targeting really towards analytics. But now what we're finding is, many of these customers, for instance, who have been running on Aurora, when they are moving to MySQL with HeatWave, they are finding that many of the OLTP queries as well are seeing significant acceleration with HeatWave. So now customers are moving their entire applications over to HeatWave. So that's the second trend we're seeing. The third thing, and I think I kind of missed mentioning this earlier, one of the very key and unique value propositions we provide with the MySQL Database Service with HeatWave, is that we provide a mechanism where if customers have their data stored on premise, they can still leverage the HeatWave service by enabling MySQL replication. So they can have their data on premise, they can replicate this data in the Oracle Cloud, and then they can run analytics. So this deployment, which we are calling the hybrid deployment, is turning out to be very, very popular, because there are some customers who, for various reasons, compliance or regulatory reasons, cannot move the entire data to the cloud or migrate the data to the cloud completely. So this provides them a very good setup where they can continue to run their existing database, and when it comes to getting the benefits of HeatWave for query acceleration, they can set up this replication. >> And I can run that on any available server capacity or is there an appliance to facilitate that? >> No, this is just standard MySQL replication. So if a customer is running MySQL on premise they can just turn on this replication. We have obviously enhanced it to support this inbound replication between on-premise and Oracle Cloud, which can be enabled as long as the source and destination are both MySQL. >> Okay, so I want to come back to this sort of idea of the architecture a little bit. I mean, it's hard for me to go toe to toe with the, I'm not an engineer, but I'm going to try anyway. So you've talked about OLTP queries. I thought, I always thought HeatWave was optimized for analytics. But so, I want to push on this notion, because people think of this, the converged database, and what you're talking about here with HeatWave is sort of the Swiss army knife, which is great 'cause you got a screwdriver and you got Phillips and a flathead and some scissors, maybe they're not as good. They're not as good necessarily as the purpose-built tool. But you're arguing that this is best of breed for OLTP and best of breed for analytics, both in terms of performance and cost. Am I getting that right or is this really a Swiss army knife where that flathead is really not as good as the big, long screwdriver that I have in my bag? >> Yes, so, you're getting it right, but I did want to make a clarification. HeatWave is definitely the accelerator for all your queries, all analytic queries and also the long running complex transaction processing queries. So yes, HeatWave is the uber query accelerator engine. However, when it comes to transaction processing in terms of your insert statements, delete statements, those are still all done and served by the MySQL database.
So all the transactions are still sent to the MySQL database and they're persisted there; it's the queries for which HeatWave is the accelerator. So what you said is correct. For all query acceleration, HeatWave is the engine. >> Makes sense. Okay, so if I'm a MySQL customer and I want to use HeatWave, what do I have to do? Do I have to make changes to my existing applications? You implied earlier that, no, it just sort of plugs right in. But can you clarify that? >> Yes, there are absolutely no changes which any MySQL or MySQL compatible application needs to make to take advantage of HeatWave. HeatWave is an in-memory accelerator and it's completely transparent to the application. So we have, like, dozens and dozens of applications which have migrated to HeatWave, and they are seeing the same thing, and similarly tools. So if you look at various tools which work for analytics, like Tableau, Looker, Oracle Analytics Cloud, all of them will work just seamlessly. And this is one of the reasons we had to do a lot of heavy lifting in the MySQL database itself. So the MySQL database engineering team has been very actively working on this. And one of the reasons is because we did the heavy lifting and we made enhancements to the MySQL optimizer and the MySQL storage layer to do the integration of HeatWave in such a seamless manner. So there is absolutely no change which an application needs to make in order to leverage or benefit from HeatWave. >> You said earlier, Nipun, that you're seeing migrations from, I think you said Aurora and Google BigQuery, you might've said Redshift as well. What kind of tooling do you have to facilitate migrations? >> Right, now, there are multiple ways in which customers may want to do this, right? So the first tooling which we have is, as I was talking about the replication or the inbound replication mechanism, customers can set up HeatWave in the Oracle Cloud, and they can send the data, they can set up replication between their instances in their cloud and HeatWave. Second thing is we have various kinds of tools to, like, facilitate the data migration, in terms of, like, fast ingestion. So there are a lot of such customers we are seeing who are kind of migrating, and we have a plethora of, like, tools and applications, in addition to, like, setting up this inbound replication, which is the most seamless way of getting customers started with HeatWave. >> So, I think you mentioned before, I have it in my notes, machine intelligence and machine learning. We've seen that with autonomous database, it's a big, big deal obviously. How does HeatWave take advantage of machine intelligence and machine learning? >> Yeah, and I'm probably going to be talking more about this in the future, but what we have already is that HeatWave uses machine learning to intelligently automate many operations. So we know that when there's a service being offered in the cloud, our customers expect automation. And there are a lot of vendors and a lot of services which do a good job in automation. One of the places where we're going to be very unique is that HeatWave uses machine learning to automate many of these operations. And I'll give you one such example, which is provisioning. Right now with HeatWave, when a customer wants to determine how many nodes are needed for running their workload, they don't need to make a guess. They invoke a provisioning advisor, and this advisor uses machine learning to sample a very small percentage of the data.
We're talking about, like, 0.1% sampling, and it's able to predict, with 95% accuracy, the amount of memory this data is going to take. And based on that, it's able to make a prediction of how many servers are needed. So just a simple operation, the first step of provisioning: this is something which is done manually on any other service, whereas with HeatWave, we have a machine learning-based advisor. So this is an example of what we're doing. And in the future, we'll be offering many such innovations as a part of the MySQL Database and the HeatWave service. >> Well, I've got to say I was a skeptic, but I really appreciate you answering my questions. And, a lot of people, when you made the acquisition and inherited MySQL, thought you were going to kill it because they thought it would be competitive to Oracle Database. I'm happy to see that you've invested and figured out a way to, hey, we can serve our community and continue to be the steward of MySQL. So Nipun, thanks very much for coming to the CUBE. Appreciate your time. >> Sure. Thank you so much for the time, Dave. I appreciate it. >> And thank you for watching everybody. This is Dave Vellante with another CUBE Conversation. We'll see you next time. (bright upbeat music)
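(The "no application changes" claim in this conversation is straightforward to sanity-check. Below is a minimal Python sketch: the same Connector/Python code runs against plain MySQL or a HeatWave-enabled DB System, with any offload decided inside the server. The host, credentials, and schema are hypothetical, and the exact EXPLAIN wording for offloaded plans, "Using secondary engine" in recent MySQL releases, may vary by version.)

    # Minimal sketch: unchanged client code, whether or not HeatWave is
    # attached; any offload happens server-side, inside MySQL.
    import mysql.connector  # standard MySQL Connector/Python

    conn = mysql.connector.connect(
        host="mydb.example.com",  # hypothetical DB System endpoint
        user="admin",
        password="...",           # placeholder
        database="sales",
    )
    cur = conn.cursor()

    # An analytic query; with a HeatWave cluster attached, the MySQL
    # optimizer can offload it transparently.
    cur.execute(
        "SELECT o_orderpriority, COUNT(*) "
        "FROM orders "
        "WHERE o_orderdate >= '2021-01-01' "
        "GROUP BY o_orderpriority"
    )
    for row in cur.fetchall():
        print(row)

    # EXPLAIN indicates whether the plan used the secondary engine; the
    # wording differs across versions, so treat this as a rough check.
    cur.execute("EXPLAIN SELECT COUNT(*) FROM orders")
    for row in cur.fetchall():
        print(row)

    cur.close()
    conn.close()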

Published Date : Apr 28 2021


Wim Coekaerts, Oracle | CUBEconversations


 

(bright upbeat music) >> Hello everyone, and welcome to this exclusive Cube Conversation. We have the pleasure today to welcome Wim Coekaerts, senior vice president of software development at Oracle. Wim, it's good to see you. How you been, sir? >> Good, it's been a while since we last talked, but I'm excited to be here, as always. >> It was during COVID though, and so I hope to see you face to face soon. But so Wim, since the Barron's article declared Oracle a Cloud giant, we've really been sort of paying attention and amping up our coverage of Oracle and asking a lot of questions like, is Oracle really a Cloud giant? And I'll say this, we've always stressed that Oracle invests in R&D, and of course there's a lot of D in that equation. And over the past year, we've seen, of course, the autonomous database ramping up, especially notable on Exadata Cloud@Customer, we've covered that extensively. We covered the autonomous data warehouse announcement, the blockchain piece, which of course got me excited 'cause I get to talk about crypto with Juan. Roving Edge, which for everybody who might not be familiar with that, it's an edge cloud service, dedicated regions that you guys announced, which is a managed cloud region. And so it's clear, you guys are serious about cloud. These are all cloud first services using second gen OCI. So, Oracle's making some moves, but the question is, what are customers doing? Are they buying this stuff? Are they leaning into these new deployment models for the databases? What can you tell us? >> You know, definitely. And I think, you know, the reason that we have so many different services is that not every customer is the same, right? One of the things that people don't necessarily realize, I guess, is in the early days of cloud, lots of startups went there because they had no local infrastructure. It was easy for them to get started in something completely new. Our customers are mostly enterprise customers that have huge data centers in many cases; they have lots of real estate locally. And when they think about cloud they're wondering how can we create an environment that doesn't cause us to have two ops teams and two ways of managing things. And so, they're trying to figure out exactly what it means to take their real estate and either move it wholesale to the cloud over a period of years, or they say, "Hey, some of these things need to be local, maybe even for regulatory purposes." Or just because they want to keep some data locally within their own data centers, but then they have to move other things remotely. And so, there's many different ways of solving the problem. And you can't just say, "Here's one cloud, this is where you go and that's it." So, we basically say, if you're on prem, we provide you with cloud services on-premises, like dedicated regions or Oracle Exadata Cloud@Customer and so forth, so that you get the benefits of what we built for cloud and spend a lot of time on, but you can run them in your own data center. Or people say, "No, no, no. I want to get rid of my data centers, I do it remotely." Okay, then you do it in Oracle cloud directly. Or you have a hybrid model where you say, "Some stays local, some is remote." The nice thing is you get the exact same API, the exact same way of managing things, no matter how you deploy it. And that's a big differentiator. >> So, is it fair to say that you guys have, I think of it as a purpose-built cloud, 'cause I talk to a lot of customers.
I mean, take an insurance app like Claims, and customers tell me, "I'm not putting that into the public cloud." But you're making a case that it actually might make sense in your cloud, because you can support those mission critical applications with the exact same experience, same API, same... I can get, you know, take RAC, for instance, I can't get, you know, Real Application Clusters in an Amazon cloud, but presumably I can get them in your cloud. So, is it fair to say you have a purpose-built cloud specifically for the most demanding applications? Is that the right way to look at it or not necessarily? >> Well, it's interesting. I think the thing to be careful of is, I guess, purpose-built cloud might for some people mean, "Oh, you can only do things if it's Oracle centric." Right, and so I think that fundamentally, Oracle cloud provides a generic cloud. You can run anything you want, any application, any deployment model that you have. Whether you're an Oracle customer or not, we provide you with a full cloud service, right? However, given that we know and have known, obviously for a long time, how our products run best, when we designed OCI gen two, when we designed the networking stack, the storage layer and all that stuff, we made sure that it would be capable of running our more complex environments, because our advantage is, Oracle customers have a place where they can run Oracle the best. Right, and so obviously the context of purpose-built fits that model, where yes, we've made some design choices that allow us to run RAC inside OCI and allow us to deploy Exadatas inside OCI, which you cannot do in other clouds. So yes, it's purpose built in that sense, but I would caution on the side that it sometimes might imply that it's unique to Oracle products, and I guess one way to look at it is, if you can run Oracle, you can run everything else, right? Because it's such a complex suite of products that if you can run that, then it'll support any other (mumbling). >> Right. Right, it's like New York City. You make it there, you can make it anywhere. If I can run the most demanding mission critical applications, well, then I can run a web app, for instance, okay. I got a question on tooling, 'cause there's a lot of tooling, like sometimes it makes my eyes bleed when I look at all this stuff, and doesn't... Square the circle for me: doesn't autonomous, an autonomous database like Autonomous Linux, for instance, doesn't it eliminate the need for all these management tools? >> You know, it does. It eliminates the need for the management at the lower level, right. So, with Autonomous Linux, what we offer and what we do is, we automatically patch the operating system for you and make sure it's secure from a security patching point of view. We eliminate the downtime, so when we do it, you don't have to restart applications. However, we don't necessarily know what the app is that's installed on top of it. You know, people can deploy their own applications, they can run third party applications, they can use it for development environments and so forth. So, there's sort of the core operating system layer, and on the database side, you know, we take care of database patching and upgrades and storage management and all that stuff. So the same thing: if you run your own application inside the database, we can manage the database portion, but we don't manage the application portion, just like on the operating system.
And so, there's still a management level that's required, no matter what, a level above that. And the other thing, and I think this is what a lot of the stuff we're doing is based on, is you still have tons of stuff on-premises that needs full management. You have applications that you migrate that are not running Autonomous Linux; it could be a Windows application that's running, or it could be something on a different Linux distribution, or you could still have some databases installed that you manage yourself, you don't want to use the autonomous one, or you're on a third-party. And so we want to make sure that we can address all of them with a single set of tools, right. >> Okay, so I wonder, can you give us just an overview, just briefly, of the products that comprise the cloud services, your management solution? What's in that portfolio? How should we think about it? >> Yeah, so it basically starts with Enterprise Manager on-premises, right? Which has been the tool that our Oracle database customers in particular have been using for many years, and is widely used by our customer base. And so you have those customers, most of their real estate is on-premises, and they can use Enterprise Manager locally. They have it running and they don't want to change. They can keep doing that, and we keep enhancing, as you know, with newer versions of Enterprise Manager getting better. So, then there's the transition to cloud, and so what we've been doing over the last several years is basically, well, one aspect is looking at the things people like in Enterprise Manager and making sure that we provide similar functionality in Oracle cloud. So, we have Performance Hub for looking at how the database performance is doing. We have APM for Application Performance Monitoring, we have Logging Analytics that looks at all the different log files and helps make sense of them for you. We have Database Management. So, a lot of the functionality that people like in Enterprise Manager around the database, we've built into Oracle cloud, and, you know, a number of other things that are coming, like Operations Insights, to look at how databases are performing and how we can potentially do consolidation and stuff. So we've basically looked at what people have been using on-premises, how we can replicate that in Oracle cloud, and then also, when you're in a cloud, how you can make use of all the base services that a cloud vendor provides, telemetry, logging and so forth. And so, it's a broad portfolio, and what it allows us to do with our customers is say, "Look, if you're predominantly on-prem, you want to stay there, keep using Enterprise Manager. If you're starting to move to Oracle cloud, you can first use EM, look at what's happening in the cloud, and then switch over, start using all the management products we have in the cloud and let go of the Enterprise Manager instance on-premise." So you can gradually shift, you can start using more and more. Maybe you start with analytics first, and then you start with insights, and then you switch to database management. So there's a whole suite of possibilities.
You're seeing a lot of startups come out, you saw Cisco made an acquisition of AppD, and that whole space is transforming. It seems that the future is all about that end-to-end visibility, simplifying the ability to remediate problems. And I'm thinking, okay, you just mentioned, you guys have a lot of these capabilities, you got Autonomous, is that sort of where you're headed with your capabilities? >> It definitely is, and in fact, one of the... So, you know, APM allows you to say, "Hey, here's my web browser and it's making a connection to the database, to a middle tier," and it's hard for operations people in companies to say, hey, the end user calls and says, "You know, my order entry system is slow. Is it the browser? Is it the middle tier that they connect to? Is it the database that's overloaded in the backend?" And so, APM helps you with tracing, you know, what happens from where to where, where the delays are. Now, once you know where the delay is, you need to drill down on it. And then you need to go look at log files. And that's where the logging piece comes in. And what happens very often is that these log files are very difficult to read. You have networking log files and you have database log files and you have reslog files, and you almost have to be an expert in all of these things. And so, then with Logging Analytics, we basically provide sort of an expert dashboard system on top of that, that allows us to say, "Hey! When you look at logging for the network stack, here are the most important errors that we could find." So you don't have to go and learn all the details of these things. And so, the real advantage is saying, "Hey, we have APM, we have Logging Analytics, we can tie the two together." Right, and so we can provide a solution that actually helps solve the problem, rather than, you need to use APM from one vendor, you need to use Logging Analytics from another vendor, and you know, that doesn't necessarily work very well. >> Yeah, and that's why you're seeing, with like the ELK Stack, it's cool, you're an open source guy, it's cool as open source, but it's complicated to set up, and all that that brings. So, that's kind of a cool approach that you guys are taking. You mentioned Enterprise Manager, you just made a recent announcement, a new release. What's new in that new release? >> So Enterprise Manager 13.5 just got released. And so EM keeps improving, right? We've made a lot of changes over the years, and one of the things we've done in recent years is do more frequent updates, sort of the cloud model: frequent updates that are not just bug fixes but also introduce new functionality, so people get more stuff more frequently rather than, you know, once a year. And that's certainly been very attractive because it shows that it's a lively, evolving product. And one of the main focus areas of course is cloud. And so a lot of work that happens in Enterprise Manager is hybrid cloud, which basically means I run Enterprise Manager and I have some stuff in Oracle cloud, I might have some other stuff in another cloud vendor's environment, and so we can actually see which databases are where and provide you with one consolidated view and one tool, right? And of course it supports Autonomous Database and Exadata cloud servers and so forth. So you can from EM see both your databases on-premises and also how they're doing in Oracle cloud as you potentially migrate things over. So that's one aspect. And then the other one is in terms of operations and automation.
One of the things that we started doing again with Enterprise Manager in the last few years is making sure that everything has a REST API. So we try to make the experience with Enterprise Manager be very similar to how people work with a cloud service. Most folks now writing automation tools are used to calling REST APIs. EM in the early days didn't have REST APIs; now we're making sure everything works that way. And one of the advantages is that we can do extensibility without having to rewrite the product, that we just add the API calls in the agent, and it makes it a lot easier to become part of a modern system. Another thing that we introduced last year, but that we're evolving with more dashboards and so forth, is the Grafana plugin. So even though Enterprise Manager provides lots of cool tools, a lot of cloud operations folks use a tool called Grafana. And so we provide a plugin that allows customers to have Grafana dashboards, but the data actually comes out of Enterprise Manager. So that allows us to integrate EM into a more cloudy world, in a cloud environment. I think the other important part is making sure that, again, Enterprise Manager has sort of a cloud feel to it. So when you do patching and upgrades, it's near zero downtime, which basically means that we do all the upgrades for you without having to bring EM down. Because even though it's a management tool, it's used for operations. So if there were downtime for patching Enterprise Manager for an hour, then for that hour, it's a blackout window for all the monitoring we do. And so we want to avoid that from happening. So now EM is upgrading even though all the events are still happening and being processed, and then we do a very short switch. So that helps our operations people to be more available.
Then on the target side, we see whether the target can actually run in that environment. Then we go and look at, you know, how do you want to migrate? Do you want to migrate everything from a sort of physical model, or do you want to migrate it from a logical model? Do you want to do it while your environment is still running, so that you start backing up the data to the target database while your existing production system is still running? Then we do a short switch afterwards. Or you say, "No, I want to bring my database down. I want to do the migration and then bring it back up." So there's different deployment models that we can let our customers pick. And then when the migration is done, we have a ton of health checks that can validate whether the target database will run basically the exact same way. And then you can say, "I want to migrate 10 databases or 50 databases," and it'll work. It's all automated out of the box. >> So you're saying, I mean, you've looked at the prevailing way you've done migrations; historically you'd have to freeze the code and then migrate, and it would take forever. It was a function of the number of lines of code you had. And then a lot of times, you know, people would say, "We're not going to freeze the code," and then they would almost go out of business trying to merge the two. You're saying in 2021, you can give customers the choice, you can migrate, you could change the, you know, refuel the plane while you're in midair? Is that essentially what you're saying? >> That's a good way of describing it, yeah. So your existing database is running, and we can do a logical backup and restore. So while transactions are happening we're still migrating it over, and then you can do a cutoff. It makes the transition a lot easier. But the other thing is that in the past, migrations would typically be two things. One is one database version to the next, more upgrades than migration. Then the second one is that old hardware or a different CPU architecture, moving to newer hardware and a new CPU architecture. Those were sort of the typical migrations that you had prior to cloud. And from a sysadmin point of view, or a DBA's, it was all something you could touch, you could physically touch the boxes. When you move to cloud, it's this nebulous thing somewhere in a data center that you have no access to. And that by itself creates a barrier to a lot of admins and DBAs from saying, "Oh, it'll be okay." There's a lot of concern. And so by baking in all these tests and the prerequisites and all the dashboards to say, you know, "This is what you use. These are the features you use. We know that they're available on the other side, so you can do the migration." It helps solve some of these problems and remove the barriers. >> Well, that was kind of the same vision when you guys came up with it, I don't know, quite a while ago now. And it took a while to get there, you know, you had gen one and then gen two, but that is, I think, unique to Oracle. I know maybe some others are trying to do that as well, but you were really the first to do that, and so... I want to switch topics to talk about security. It's a hot topic. You guys, you know, like many companies, are really focused on security. Does Enterprise Manager bring any of that over? I mean, the prevailing way to do security oftentimes is to write, you know, custom security policy scripts; they're fragile, they break. What can you tell us about security? >> Yeah.
So there's really two things, you know. One is, we obviously have our own best security practices, how we run a database inside Oracle for ourselves; we've learned about that over the years. And so we sort of baked that knowledge into Enterprise Manager. So we can say, "Hey, if you install this way, we do the install and the configuration based on our best practice." That's one thing. The other one is there's STIG, there's PCI, and there's ShipBob, those are the main ones. And so customers can do it their own way. They can download the documentation and do it manually. But what we've done, and we've done this for a long time, is basically bake those policies into Enterprise Manager. So you can say, "Here's my database, this needs to be PCI compliant, or it needs to be HIPAA compliant," and you push a button and then we validate the policies in those documents or in those prescribed files. And we make sure that the database is compliant with that. And so we take that manual work and all that stuff basically out of the picture; we say, "Push this button and we'll take care of it." >> Now, Wim, just a quick sidebar here, last time we talked, it was under a year ago. It was definitely during COVID and it's still during COVID. We talked about the state of the penguin. So I'm wondering, you know, what's the latest update for Linux, any Linux developments that we should be aware of? >> Linux, we're still working very hard on Autonomous Linux, and that's something where we can really differentiate and solve a problem. Of course, one of the things to mention is that Enterprise Manager can do HIPAA compliance on Oracle Linux as well. So the security practices are not just for the database, they can also go down to the operating system. Anyway, so on the Autonomous Linux side, you know, management in Oracle Cloud's OS management is evolving. We're spending a lot of time on integrating log capturing, and if something were to go wrong, that we can analyze a log file on the fly and send you a notification saying, "Hey, you know, there was this bug and here's the cause," and potentially push a fix for it to Autonomous Linux; we're putting a lot of effort into that. And then also sort of IT operations management, where we can look at the different applications that are running. So you're running a web server on a Linux environment or you're running some Java processes, we can see what's running. We can say, "Hey, here's the CPU utilization over the past week or the past year." And then how is this evolving? Say, if something suddenly spikes we can say, "Well, that's normal, because every Monday morning at 10 o'clock there's a spike, or this is abnormal." And then you can start drilling this down. And this comes back to, over time, integration with whether it's APM or Logging Analytics; we can tie the dots, right? We can connect them, we can say, "Push this thing, then click on that link." We give you the information. So it's that integration with the entire cloud platform that's really happening now. >> Integration, there's that theme again. I want to come back to migration, and I think you did a good job of explaining how you sort of make that non-disruptive, and you know, your customers, I think, you know, generally you're pushing, you know, that experience, which makes people more comfortable. But my question is, why do people want to migrate if it works and it's on prem? Are they doing it just because they want to get out of the data center business? Or is it a better experience in the cloud?
What can you tell us there? >> You know, it's a little bit of everything. You know, one is, of course, the idea that data center maintenance costs are very high. The other one is that when you run your own data center, you know, we obviously have this problem, but when you're a cloud vendor, you have these problems, but we're in this business. But if you buy a server, then in three years that server basically is depreciated, outdated by new versions, and you have to do migration stuff. And so one of the advantages with cloud is you push a button, you have a new version of the hardware, basically, right? So the refreshes happen on a regular basis. You don't have to go and recycle that yourself. Then the other part is the subscription model. It's a lot easier to pay for what you use, rather than you have a data center, whether it's used or not, you pay for it. So there's the cost advantages and predictability: what you need, you pay for, and you can say, "Oh, next year we need to get X more of them." And it's easier to scale that, right? We take care of dealing with capacity planning. You don't have to deal with capacity planning of hardware, we do that as the cloud vendor. So there's all these practical advantages you get from doing it remotely, and that's really what the appeal is. >> Right. So, as it relates to Enterprise Manager, did you guys have to, like, tear down the code and rebuild it? Was it an entire, like, redo? How did you achieve that? >> No, no, no. So, Enterprise Manager keeps evolving, and you know, we changed the underlying technologies here and there, piecemeal, not sort of a wholesale replacement. And so in 13.5, there's a lot of new stuff, but it's built on the existing EM core. And so we're just, you know, improving certain areas. One of the things is, stability is important for our customers, obviously. And so by picking things piecemeal, we replace one engine rather than the whole thing. It allows us to introduce change more slowly, right? And then it's well-tested as a unit, and then we go on to the next thing. And then the other one is, as I mentioned earlier, a lot of the automation and extensibility comes from REST APIs. And so instead of basically re-writing everything, we just provide a REST endpoint, and we make all the new features that we build automatically be REST enabled. So that makes it a lot easier for us to introduce new stuff. >> Got it. So if I want to poke around with this new version of Enterprise Manager, can I do that? Is there a place I can go, do I have to call a rep? How does that work? >> Yeah, so for information you can just go to oracle.com/enterprise manager. That's the website that has all the data. The other thing is, if you're already playing with Oracle Cloud or you use Oracle Cloud, we have Enterprise Manager images in the marketplace. So if you have never used EM, you can go to Oracle Cloud, push a button in the marketplace, and you get a full Enterprise Manager installation in a matter of minutes. And then you can just start using that as well. >> Awesome. Hey, I wanted to ask you about, you know, people forget that you guys are the stewards of MySQL, and we've been looking at the MySQL Database Cloud service with HeatWave. Did you name that? And so I wonder if you could talk about what you're doing with regard to managing HeatWave environments? >> So, HeatWave is the MySQL option that helps with analytics, right? And it really accelerates MySQL usage by 100 x, and in some cases more, and it's transparent to the customer.
So as a MySQL user, you connect with standard MySQL applications and APIs and SQL and everything. And the HeatWave part is all done within the MySQL server. The engine itself says, "Oh, this SQL query, we can offload to the backend HeatWave cluster," which then does in-memory operations and blazingly fast returns it to you. And so the nice thing is that it turns every single MySQL database into also a data warehouse without any change whatsoever in your application. So it's been widely popular and it's quite exciting. I didn't personally name it HeatWave, that was not my decision, but it sounds very cool. >> That's very cool. >> Yeah, it's a very cool name. >> We love MySQL, we started our company on the LAMP stack, so like many >> Oh? >> Yeah, yeah. >> Yeah, yeah. That's great. So, yeah. And so with HeatWave, or MySQL in general, we're basically doing the same thing as we have done for the Oracle Database. So we're going to add more functionality in our database management tools to also look at HeatWave. So whether it's doing things like performance hub or generic database management and monitoring tools, we'll expand that, you know, in the near future. >> That's great. Well, Wim, it's always a pleasure. Thank you so much for coming back in "The Cube" and letting me ask all my Columbo questions. It was really a pleasure having you. (mumbling) >> It's good to be here. Thank you so much. >> You're welcome. And thank you for watching, everybody, this is Dave Vellante. We'll see you next time. (bright music)
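To ground Wim's description of transparent query offload above, here is a minimal sketch of how a MySQL table is enabled for HeatWave; the table and column names are hypothetical, and this assumes a HeatWave cluster is already attached to the MySQL DB system.

    -- Mark an existing InnoDB table for the HeatWave (RAPID) secondary
    -- engine, then load it into the cluster's memory.
    ALTER TABLE orders SECONDARY_ENGINE = RAPID;
    ALTER TABLE orders SECONDARY_LOAD;

    -- Queries stay plain MySQL; the optimizer decides on offload.
    -- EXPLAIN reports "Using secondary engine RAPID" when the query
    -- will run on the HeatWave cluster.
    EXPLAIN SELECT customer_id, SUM(order_total)
    FROM orders
    GROUP BY customer_id;

The application-side point Wim makes is visible here: nothing in the SELECT itself references HeatWave, which is what lets an existing MySQL application pick up analytics acceleration without code changes.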

Published Date : Apr 27 2021


Juan Loaiza, Oracle | CUBE Conversation 2021


 

(upbeat music) >> The innovation around databases has exploded over the last few years. Not only do organizations continue to rely on database technology to manage their most mission critical business data, but new use cases have emerged that process and analyze unstructured data. They share data at scale, protect data, provide greater heterogeneity. New technologies are being injected into the database equation. Not just cloud, which has been a huge force in the space, but also AI to drive better insights and automation, blockchain to protect data and provide better auditability, new file formats to expand the utility of database technology, and more. Debates abound as to who's the best, number one, the fastest, the most cloudy, the least expensive, et cetera. But there is no debate when it comes to leadership in mission critical database technologies. That status goes to Oracle. And with me to talk about the developments of database technology in the market is Cube alum Juan Loaiza, who's executive vice president of Mission Critical Database Technology at Oracle. Juan, always great to see you, thanks for making some time. >> Thanks, great to see you Dave, always a pleasure to join you. >> Yeah, and I hope you have some time, because I've got a lot of questions for you. (chuckles) I want to start with- >> All right, I love questions. >> Good, I want to start, and we'll go deep if you're up for it. I want to start with the GoldenGate announcement. We're covering that recent announcement, the service on OCI. GoldenGate is part of the super high availability capabilities that Oracle is so well known for. What do we need to know about the new service and what it brings for your customers? >> Yeah, so first of all, GoldenGate is all about creating real time data throughout an enterprise. So it does replication, data integration, moving data into analytic workloads, streaming analytics of data, migrating of databases, and making databases highly available. All those are use cases for real-time data movement. And GoldenGate is really the leading product in the market, has been for many years. We have about 80% of the global Fortune 500 running GoldenGate today, in addition to thousands and thousands of smaller customers. So it is the premier data integration, replication, high availability, anything involving moving data in real time, GoldenGate is the premier platform. And so we've had that available as a product for many years. And what we've just recently done is we've released it as a cloud service, as a fully managed and automated cloud service. So that's kind of the big new thing that's happening right now. >> So is that what's unique about this, is it's now a service, or are there other attributes that are unique to Oracle? >> Yeah, so the service is kind of the most basic part of it. But the big thing about the service is it makes this product dramatically easier to use. So traditionally the data integration and replication products, although very powerful, are also very complex to use. And one of the big benefits of the service is we've made it dramatically simpler. So not just super experts can use it, but anyone can use it. And also as part of releasing it as a cloud service, we've done a number of unique things, including making it completely elastically scalable, pay per use, and dynamically scalable. So just-in-time, real-time scalability. So as your workload increases we automatically increase the throughput of GoldenGate.
So previously you had to figure all this stuff out ahead of time. It was very static. All these products have been very static. Now it's completely dynamic, a native cloud product, and that's very unique in the market. >> So, I mean, from an availability standpoint, I guess IBM sort of has this with Db2, but it doesn't offer the heterogeneity that GoldenGate has. But what about, like, AWS, Microsoft, Google, do they provide services like GoldenGate? >> There's really nothing like the GoldenGate service. When you're talking about people like Google and Azure, they really have do-it-yourself third-party products. So there'll be a third party data integration replication product, and it's kind of available in their marketplace, and customers have to do everything. So it's basically a put-it-together-yourself kit. And it's very complicated. I mean, these data integration products have always been complicated, and they're even more complicated in the cloud if you have to do everything yourself. Amazon has a product, but it's really focused on basic data migration to their cloud. It doesn't have the same capabilities as Oracle has. It doesn't have the elasticity, it doesn't have pay per use, so it's really not very cloudy at all. >> Well, so I mean the biggest customers have always glommed onto GoldenGate because they need that super ultra high availability. And they're capable of doing it themselves. So, tell us how this compares to DIY. >> Yeah, so you mentioned the big customers, and you're absolutely right. The big customers have been big users of GoldenGate. Smaller customers are users as well; however, it's been challenging because it's complicated. Data integration has been a complicated area of data management, maybe the most complicated. And so one of the things this does is that it expands the market. It makes it dramatically easier for smaller companies that don't have as many IT resources to use the product. Also, smaller companies obviously don't have as much data as the really large giants. So they don't have as much data throughput. So traditionally the price has been high for a small customer. But now, with pay per use in the cloud, it eliminates the two big blockers for smaller enterprises, which are the high fixed costs and the complexity of the products. Which, by the way, is helpful for everyone also. And big customers have also struggled with elasticity. So sometimes a huge batch job will kick in, the rate of change increases, and suddenly the replication product doesn't keep up. Because on-prem products aren't really very elastic. So it helps large customers as well. Everybody loves this, but the elastic, pay-per-use, on-demand nature of it is really helpful for everybody.
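As a concrete footnote to the DIY comparison: even with the managed service handling the pipeline itself, an Oracle source database still needs a small amount of one-time preparation before GoldenGate can capture changes from it. A hedged sketch, with a hypothetical admin user name:

    -- One-time source-side setup for GoldenGate change capture.
    -- Supplemental logging ensures the redo stream carries enough
    -- detail for replication; the init parameter authorizes capture.
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION = TRUE SCOPE = BOTH;

    -- Grant GoldenGate administration privileges to a dedicated
    -- user (the name GGADMIN here is just a convention).
    BEGIN
      DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');
    END;
    /

This is roughly the boundary Juan describes: the database-side prerequisites are simple and well documented, while the operational complexity (sizing, patching, and scaling the replication processes) is what the managed service absorbs.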
It's very easy to set it up for a single application or all your applications. >> How about for things like continuous replication or real-time analytics, is the service designed to support that? >> Yes, so that's the heritage of GoldenGate. GoldenGate has been around for decades, and we've worked with some of the most demanding customers in the world on exactly those things. So real time data all over the enterprise is really the goal that everyone wants. Real-time data from OLTP into analytics, from one system to another system, and for availability. That is the key benefit of GoldenGate. And that's the key technology that we've been working on for decades. And now we have it very easy to use in the cloud. >> Well, what would be the overheads associated with that? I mean, for instance, you need a second copy, you need the other database copies, and where does it make sense to incur that overhead? Obviously the super high availability apps that can exploit real time; fraud detection is the obvious one, but what else can you add there? >> Well, GoldenGate itself doesn't require any extra copies of anything. However, it does enable customers that want to create, for example, an analytics system, a data warehouse, to feed data from all their systems in real time into that data warehouse. And it also enables the real-time capabilities, enables high availability, and you can get high availability within the cloud with it, between on premises and the cloud, between clouds. Also, you can migrate data. Migrate databases without having to take them down. So all these capabilities are available now and they're very easy to use. >> Okay. Thanks for that clarification. What about autonomous? Is that on the roadmap, or what are you thinking? >> Yeah, GoldenGate is essentially an autonomous service. And it works with the Oracle Autonomous Database. So you can both use it as a source for data and as a sink for data, as a place you're writing data. So for example, you can have an autonomous OLTP database that's replicating to another autonomous OLTP database in real time. And both of them are replicating changes to the autonomous data warehouse. But it doesn't all have to be autonomous. You can have any mix of autonomous, not autonomous, on-prem, in cloud, in anybody's cloud. So that's the beauty of GoldenGate, it's extremely flexible. >> Well, you mentioned the elasticity a couple of times. I mean, why is that so important, that GoldenGate on OCI gives you that elastic billing, the auto-scaling? Talk to me in terms of what that does for the customer. >> Yeah, there's really two big benefits. One benefit is it's very difficult to predict workloads. So normally in an on-prem configuration, you have to say, okay, what is the max possible workload that's going to happen here? And then you have to buy the product, configure the product, get hardware, basically size everything for that. And then if you guess wrong, you're either spending too much because you oversized it, or you have a big real-time data problem. The data can't keep up in real time because you've undersized the configuration. So that's hard to do. So the beauty of elasticity and the dynamic elasticity, the pay per use, is you don't have to figure all this stuff out. So if you have more workload, we grow it automatically. If you have less workload, we shrink it automatically. And you don't have to guess ahead of time. You don't have to price ahead of time.
So you just use what you use, right? You don't pay for something that you're not using. So it's a very big change in the whole model of how you use these data replication, integration, and high availability technologies. >> Well, I think I'm correct to say GoldenGate primarily has been for big companies. You mentioned that small companies can now take advantage of this service. We talked about the granularity. And I could definitely see it, but can they afford it? I guess this is part one, and then the other part of the question is, I can see GoldenGate really satisfying your on-prem customers and them taking advantage of it, but do you think this will attract new customers beyond your core? So, two part question there. >> Yeah, absolutely. So small customers have been challenged by the complexity of data integration. And that's one of the great things about the cloud service: it's dramatically simpler. So Oracle manages everything. Oracle does the patching, the upgrades. Oracle does the monitoring. It takes care of the high availability of the product. So all that management complexity, all the configuration and setup, everything like that, that's all automated, that's owned by Oracle. So small customers were always challenged by the complexity of the product, along with everything else that they had to do. And then the other benefit, of course, is small customers were challenged by the large fixed price. So now with pay per use, they pay only for what they use. It's really easily usable by small customers also. So it really expands the market and makes it more broadly applicable. >> So kind of the same answer for beyond your existing customer base, beyond the on-prem, that's kind of... You answered >> Right. >> my two part question with one answer, so that was pretty efficient, (chuckles) pun intended. So the bottom line for me, squinting through this announcement, is you've got the heterogeneity piece with GoldenGate on OCI, and as such it's going to give you the capability to create what I'll call an architecturally coherent decentralized data mesh. Big on this data mesh these days, decentralized data. With the proviso that I'm going to be able to connect to OCI, which of course you can do with Azure, or I guess you could bring cloud at customer on prem. First of all, is this correct? And can we expect you over time to do this with AWS or other cloud providers? >> It can move data from Amazon or to Amazon. It can actually handle any data wherever it lives. So, yeah, it's very flexible, and it's really just the automation of all the management that we're running in our public cloud. But the data can be from anywhere to anywhere. >> Cool, all right, let's switch topics here a little bit. Just talk about some of the things that you've been working on, some of the innovation. I sat through your blockchain announcement, it was very cool. Of course I love anything blockchain and crypto, NFTs are exploding, and there's the Coinbase IPO. It's just really an exciting time out there. I think a lot of people don't really appreciate the innovation that's occurring. So you've been making a lot of big announcements over the last several months. You've been taking your R&D and bringing it into product. So that's great, we love to always see that, because that's where really the rubber meets the road. Just for the database side of the house, you announced 21c, the next generation of the self-driving data warehouse ADW, blockchain tables, and now you've got GoldenGate running on OCI.
Take us inside the development organization. What are the underlying drivers, other than your boss? >> When we talk about our autonomous database, it is the mission critical Oracle database, but it's dramatically easier to use. So Oracle does all the management, all the automation, but also we use machine learning to tune, and to make it highly available, and to make it highly secure. So that's been one of our biggest products we've been working on for many years. And recently we enhanced our autonomous data warehouse, taking it beyond being a data warehouse to a complete data analytics platform. So it includes things like ETL. So we built ETL into the autonomous data warehouse. We're building our GoldenGate replication into autonomous data warehousing. We built machine learning directly, natively into the database. So now, if someone wants to run some machine learning, they just run machine learning queries. They no longer have to stand up a separate system. So a big move that we've been making is taking it beyond just a database to a full analytic platform. And this goes beyond what anyone else in the industry is doing, because we have a lot more technology. So for example, the machine learning directly in the database, the ETL directly in the database, the data replication directly in the database. All these things are very unique to Oracle. And they dramatically simplify for customers how they manage data. In addition to that, we've also been working on our database product. We've enhanced it tremendously. So our big goal there is to provide what we call a converged database. So everything you need, all the data types, whether it's JSON, relational, spatial, graph, all the different kinds of data types, all the different kinds of workloads. Analytics, OLTP, things like blockchain, microservices, events, all built into the Oracle database, making it dramatically easier to both develop and deploy new applications. So those are some of our big, big goals. Make it simple, make it integrated. We'll take on the complexity. So developers and customers find it easy to develop and easy to use. And we've made huge strides in all these areas in the last couple of years. >> That's awesome. I wonder if we could land on blockchain again. I know it's kind of adjacent to crypto, and you're not about crypto, but you are about applying blockchain. Maybe you can help our audience understand what are some of the real use cases where blockchain tech can be used with Oracle database. >> Yeah, so that's a very interesting topic. As you mentioned, blockchain is very current; we see a lot of cryptocurrencies and distributed applications for blockchain. So in general, in the past, we've had two worlds. We've had the enterprise data management world and we've had the blockchain world. And these are very distinct, right? And on the blockchain side the applications have mostly centered around distributed multi-party applications, right? So where you have multiple parties that all want to reach consensus, and then that consensus is stored in a blockchain. So that's kind of been the focus of blockchain. And what we've done is very innovative. We're the first company to ever do this. We've taken the core architecture ideas, and really a lot of it has to do with the cryptography of blockchain, and we've engineered that natively into the mainstream Oracle database. So now in the mainstream Oracle database, we have blockchain technology built in.
And it's dramatically simpler to use. And the use cases, you asked about the use cases, that's what we've done. And it's taken us about five years to do this. Now it's been released into the market in our mainstream 19c Oracle database. So the use case is different from the conventional blockchain use case, which I mentioned was really multi-party consensus-based apps. We're trying to make blockchain useful for mainstream enterprise and government applications. So any kind of mainstream government application or enterprise application. And that idea of blockchain, the core concept of blockchain, is it addresses a different kind of security problem. So when you look at conventional security, it's really trying to keep people out. So we have things like firewalls, passwords, network encryption, data encryption. It's all about keeping bad people out of the data. And there's really two big problems that it doesn't address well. One problem is that there's always new security exploits being published. So you have hackers out there that are working overtime. Sometimes they're nation-states that are trying to attack data providers. And every week, every month there's a new security exploit that's discovered, and this happens all the time. So that's one big problem. So we're building up these elaborate walls of protection around our core data assets. And in the meantime, we have basically barbarians attacking on every side. (chuckles) And every once in a while, they get over the walls, and this is just what's happening. So that's one big problem. And the second big problem is illicit changes made by people with credentials. So sometimes you have an insider in your company, whether it's an administrator or a salesperson, a support person, that has valid credentials, but then uses those valid credentials in some illicit way. They go out and change somebody's data for their own gain. And even more common than that, because there aren't that many bad guys inside the company, though they exist, is stolen credentials. So what's happened in many cases is hackers or nation-states will steal, for example, administrative credentials, and then use those administrative credentials to come into a system and steal data. So that's the kind of problem that is not well addressed by security mechanisms. If you have privileges, the security mechanism says, yeah, you're fine. If somebody steals your privileges, again, you get passed through the gate. And so what we've done with blockchain is we've taken the cryptography elements of blockchain. We call it crypto-secure data management. And we've built those into the Oracle database. So think of it this way. If someone actually makes it over the walls that we built and into the core data, what we've done with that cryptographic technology of blockchain is we've made that data immutable. So you can't change it. So even if you make it over the gate, you can't get into the core data assets and change those assets. And that's now built into the Oracle database and is super easy to adopt. And I think it's going to really enhance and expand the community of people that can actually use that blockchain technology. >> I mean, that's awesome. I could talk all day about blockchain. And I mean, when you think about hackers, they're all about ROI, value over cost. And if you can increase the denominator, they're going to go somewhere else, right? Because the value will decline. And this is really the intersection of software engineering and cryptography.
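For readers who want to see what Juan's crypto-secure tables look like in practice, here is a minimal sketch of a blockchain table as released with 21c and backported to 19c; the table definition and retention settings are illustrative, not prescriptive.

    -- Rows in a blockchain table are cryptographically chained.
    -- UPDATE and DELETE are rejected, even for privileged users.
    CREATE BLOCKCHAIN TABLE bank_ledger (
      txn_id     NUMBER,
      account_no VARCHAR2(20),
      amount     NUMBER,
      txn_date   DATE
    )
    NO DROP UNTIL 31 DAYS IDLE
    NO DELETE LOCKED
    HASHING USING "SHA2_512" VERSION "v1";

    -- Inserts work exactly like a normal table.
    INSERT INTO bank_ledger VALUES (1, 'ACC-1001', 250.00, SYSDATE);

The design choice Juan describes is visible in the DDL: tampering is prevented by the hash chain and the locked retention clauses, not by a privilege check that stolen credentials could pass.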
And I guess even when you bring cryptocurrency into it, there's sort of the game theory. That's really kind of not what you're all about, but the first two pieces are really critical in terms of the next generation of raising that security hurdle. Love it. Now, go ahead. >> Yeah, it's a different approach. I was just going to say, it's a different approach. Because think about trying to keep people out with things like passwords and firewalls: you can have bugs in that software that allow people to exploit it and get in. When you're talking about cryptography, that's math, and it's very difficult. I mean, you really can't get past math. Once the data is cryptographically protected on a blockchain, a hacker can't really do anything with that. It's just, math is math. There's nothing you can do to break it, right? It's very different from trying to get through some algorithm that's really trying to keep you out. >> Awesome. As I said, I could talk forever on this topic. But let me go into some competitive dynamics. You recently announced Autonomous Data Warehouse. You've got self-service capabilities that are really trying to appeal to the line of business. I want to get your take on that announcement and specifically how you think it compares; name names. I'm going to name names, you don't have to. But Snowflake, obviously a lot of momentum in the marketplace. AWS with Redshift is doing very, very well. Obviously there are others. But those are two prominent ones that we've tracked, and our data shows they have momentum. How do you compare? >> Yeah, so there's a number of different ways to look at the comparison. So the simplest and most straightforward is there's a lot more functionality in Oracle data warehousing. Oracle has been doing this for decades. We have a lot of built-in functionality. For example, machine learning natively built into the database makes it super easy to use. We have mixed workloads, we have spatial capabilities. We have graph capabilities. We have JSON capabilities. We have microservice capabilities. So there's a lot more capabilities. So that's number one. Number two, our cloud service is dramatically more elastic. So with our cloud service, all you really do is you basically move the slider. You say, hey, I want more resources, I want less resources. In fact, we'll do that automatically, that's called auto-scaling. In contrast, when you look at people like Snowflake or Redshift, they want you to stand up a new cluster. Hey, you have some more workload on Monday, stand up another cluster, and then we'll have two sets of clusters, or maybe you want a third cluster, maybe you want a fourth cluster. So you end up with all these different systems, which is how they scale. They say, hey, I can have multiple sets of servers access the same data. With Oracle you don't have to even think about those things. We auto scale: you get more workload, we just give it more resources. You don't even have to think about that. And then the other thing is we're looking at the whole end-to-end data management problem. So starting with capturing the data, moving the data in real time, transforming the data, loading the data, running machine learning and analytics on the data. Putting all kinds of data in a single place so that you can do analytics on all of it together. And then having very rich capabilities for viewing the data, graphing the data, modeling the data, all those things. So it's all integrated. It makes it super easy to use.
So: much easier, much more functionality, and much more elastic than any of our competitors in the market. >> Interesting, thank you for those comments. I mean, it's a different world, right? I mean, you guys got all the market share, they got all the growth. Those things over time, you've been around, you've seen it, they come together and you fight it out, and may the best approach win. >> So we'll be watching. >> Yeah, also I forgot to mention the obvious thing, which is Oracle runs everywhere. So you can run Oracle on premises. You can run Oracle on the public cloud. You can run what we call cloud at customer. Our competitors really are just public cloud only. So customers don't get the choice of where they want to run their data warehouse. >> Now Juan, a while ago I sat down with David Floyer and Marc Staimer. We reviewed how Gartner looks at the marketplace, and it wasn't a surprise that when it came to operational workloads, Oracle stood out. I mean, that's kind of an understatement relative to the major competitors. Most of our viewers, I don't think, expected for instance Microsoft or AWS to be that far away from you. But at the same time, the database magic quadrant maybe didn't reflect that gap as widely, so there's some dissonance there; the detailed workload drill-downs were dramatic. And I wonder, what's your take on the results? I mean, obviously you're happy with them. You came out leading in virtually every category, or you were one and two, even in some of the non-mission-critical operational stuff. But what can you add to my narrative there? >> Yeah, so Gartner, first of all, we're talking about cloud databases. >> Right. >> Right, so these are not on-premises databases, these are pure cloud databases. And what they did is they did two things. One, the main thing, was a technical rating of the databases, of the cloud databases. And there are other vendors that have had databases in the cloud for longer than we have. But in the most recent Gartner analyst report, as you mentioned, Oracle came out on top for cloud database technology in almost every single operational use case, including things like Internet of Things, things like JSON data, variable data, analytics, as well as traditional OLTP and mixed workloads. So Oracle was rated the highest technology, which isn't a big surprise. We've been doing this for decades. Over 90% of the global Fortune 500 run Oracle. And there's a reason, because this is what we're good at. This is our core strength. Our availability, our security, our scalability, our functionality, both for OLTP and analytics. All the capabilities, built-in machine learning, graph analytics, everything. So even when we compare narrowly on things like Internet of Things or variable data against niche competitors where that's all they do, we came out dramatically ahead. But what surprised a lot of people is how far ahead of some of the other cloud vendors, like Amazon, like Azure, like Google, Oracle came out in the cloud database category. So a lot of people think, well, some of these other pure cloud vendors must be ahead of Oracle in cloud database. But actually not. I mean, if you look at the Gartner analyst report, it was very clear: Oracle, with our cloud database, was dramatically ahead of their cloud database technologies. >> So I'm pretty much out of time, but last question.
I've had some interesting discussions lately, and we've pointed out for years in our research that of course you're delivering the entire stack: the database, the infrastructure, the applications; you have the whole engineered systems strategy. And for the most part you're kind of unique in this regard. I mean, Dell just announced that it's spinning off VMware, and it could have gone the other direction and become a more integrated hardware and software player for the data center. But look, it's working for Dell, based on the reaction from the street post-announcement. Cisco, they've got a hardware and software model that's sort of integrated, but the company's value peaked back in the dot-com boom, and it's been very slow to bounce back. But my point is, for these companies the street doesn't value the integrated model. Oracle is kind of the exception. You know, it's trading at all-time highs. I know you're not going to comment on the stock price, and I guess SAP, until it missed and guided conservatively, was kind of on a good trajectory. But so I'm wondering, why do you think Oracle's strategy resonates with investors, but not so much for those companies? Is it because you have the applications piece? I mean, maybe that's kind of my premise for SAP, but what's your take? Why is it working for you? >> Well, okay. I think it's pretty simple, which is some of our competitors, for example, might have a software product and a hardware product. But mostly those are acquired; they're separate products that just happen to be in a portfolio. They are not a single company with a single vision and joint engineering going on. It's really, hey, I got the software over here, I got the hardware over there, but they don't really talk to each other, they don't really work together. They're not trying to develop something where the stack is not just integrated but engineered together. And that is really the key. Oracle focuses on data management top to bottom. So we have everything from our ERP and CRM applications talking to our database, talking to our engineered systems, running in our cloud. And it's all completely engineered together. So Oracle doesn't just acquire these things and kind of glue them together. We actually engineer them, and that's fundamentally the difference. You can buy two things and have them as two separate divisions in your company, but it doesn't really get you a whole lot. >> Juan, it's always a pleasure. I love these conversations and hope we can do more in the future. Really appreciate your time. Thanks for coming to The Cube. >> Pleasure, Dave, nice to talk to you. >> All right, keep it right there, everybody. This is Dave Vellante for theCUBE, we'll see you next time. (upbeat music)
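Juan's point earlier in the conversation, that machine learning in the database is just another query, is easiest to see in SQL. Below is a hedged sketch using the OML4SQL package DBMS_DATA_MINING; the schema, table, and column names are hypothetical, and the algorithm choice is only an example.

    -- Train a classification model entirely inside the database.
    DECLARE
      v_settings DBMS_DATA_MINING.SETTING_LIST;
    BEGIN
      v_settings('ALGO_NAME') := 'ALGO_RANDOM_FOREST';
      DBMS_DATA_MINING.CREATE_MODEL2(
        model_name          => 'CHURN_MODEL',
        mining_function     => 'CLASSIFICATION',
        data_query          => 'SELECT * FROM customers_train',
        set_list            => v_settings,
        case_id_column_name => 'CUSTOMER_ID',
        target_column_name  => 'CHURNED');
    END;
    /

    -- Score new rows with ordinary SQL; no separate ML system.
    SELECT customer_id,
           PREDICTION(churn_model USING *)             AS predicted_churn,
           PREDICTION_PROBABILITY(churn_model USING *) AS churn_probability
    FROM   customers_new;

The second statement is the substance of the claim: scoring is a SQL function over rows that never leave the database, which is what removes the "stand up a separate system" step.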

Published Date : Apr 21 2021


George Lumpkin & Neil Mendelson, Oracle | CUBE Conversation, April 2021


 

(bright upbeat music) >> Hi, well, this is Dave Vellante. We're digging deeper into the world of database. You know, there are a lot of ways to skin a cat, and different vendors take different approaches, and we're reaching out to the technologists to get their perspective on the major trends that they're seeing in the market, 'cause we want to understand the different ways in which you can solve problems. So look, if you have thoughts and the technical chops on this topic, I'd love to interview you. Just ping me @DVellante on Twitter; there are a lot of ways to get ahold of me. Anyway, we recently spoke with Andrew Mendelsohn, who is Oracle's EVP, and he's responsible for database server technologies. And we talked a lot about Oracle's ADW, Autonomous Data Warehouse. And we looked at the cloud database strategy that Oracle is taking, and the company's plans and how they're different maybe from other solutions in the marketplace, but I wanted to dig deeper. And so today we have two members of Mendelsohn's team on The Cube, and we're going to probe a little bit. George Lumpkin is the Vice President of Autonomous Data Warehouse. And Neil Mendelson is the VP of the Modern Data Warehouse business for Oracle. They're both 20-year veterans of Oracle. When I reached out to Steve Savannah, who's been a colleague of mine for many years, he's always telling me how great Oracle is relative to the competition. So I said, okay, come on The Cube and talk about this, give me your best people. And he said, whatever these two don't know about cloud data warehouse, it isn't worth knowing anyway. So with that said, gentlemen, welcome to The Cube. Thanks so much for coming on. >> Thank you. >> Hey, glad to be here. >> So George, let's start with you. And maybe we could recap for some of the viewers who might not be familiar with the interview that I did with Andy. In your words, what exactly is an Autonomous Data Warehouse? Is this cloud native? Is it an Oracle buzzword? What is it? >> Well, I mean, Autonomous Data Warehouse is Oracle's cloud data warehouse. It's a service that's built to allow business users to get more value from their data. That's what the cloud data warehouse market is. Autonomous Data Warehouse is absolutely cloud native. There's a huge misconception that people might have when they first hear about this service, because they think, this is an Oracle database, right? Oracle makes databases. This is the same old database I knew from 10 years ago. And that's absolutely not true. We built a cloud native service for data warehousing, built it with cloud features. You know, if your understanding of the cloud data warehouse market is based upon how you thought things looked 10 years ago, well, Snowflake wouldn't have even existed, right? You can't base your understanding of Oracle upon that. We have a modern service that's highly elastic, provides cloud capabilities like online patching, and it's fully autonomous. It's really built for business users, so they don't need to worry about administering their database. >> So I want to come back and actually ask you some questions about that, but let me follow up and talk about some of the evolution of ADW. Where did you start? I think it was 2018. Maybe where you came from, where you are today; maybe you can take us through the technological progression and the path you took to get here. >> So 2018 was when we released the service and made it generally available, but of course, you know, we started much earlier than that.
And this was started within my product management team and other organizations. So we really sat down with a blank sheet of paper and we said, what should the data warehouse in the cloud look like? You know, let's put aside everything that Oracle does for its on-prem customers and think about how the cloud should be different. And the first thing that we said was, well, you know, if Oracle writes the database software, and Oracle builds its own hardware, and Oracle has created its own cloud, why do we need customers to manage a database? And that's where the idea of autonomous database came from. That Oracle is managing the entire ecosystem. And therefore we built a database that we believe is far and away the simplest-to-use data warehouse in the market. And that's been our focus since we started in 2018. And that continues to be our focus: looking at more ways that we can make Autonomous Data Warehouse simpler and easier for business users to get more value out of their data. >> Awesome, one more question. And actually Neil, you might want to chime in on this as well. So just from a technical perspective, you know, forget the marketing claims and all the BS. How do you compare ADW to the so-called born in the cloud data warehouses? You mentioned Snowflake. You know, Redshift, is Redshift born in the cloud? Well, it was ParAccel, but Amazon's done some good work around Redshift. I think BigQuery is probably a better example, 'cause it, like Snowflake, started in the cloud. But how do you compare ADW to some of these other so-called born in the cloud data warehouses? >> I think part of this, you mentioned Redshift wasn't born in the cloud. It was, you know, a code base taken from a prior company, an on-premises company. So they adapted it to the cloud, right? And you know, as George said, we have done much the same, except our starting point was not, you know, another company's code base; our starting point was our own code base. But as George said, it's less about the starting point and it's more about where you envision the end point, right? Which is that, you know, whatever your starting point is, I think we have a fundamentally different view of the endpoint. Amazon talks about how they're literally built for, you know, a cloud built for developers, right? You know, builders, right? And you know, Oracle wasn't first in the infrastructure business; we entered through the applications business. And all of a sudden, you know, we began taking on hundreds of thousands and more customers that were SaaS customers. Underneath was the database and all the infrastructure. One of the things that we took away from that was that we couldn't possibly hire enough DBAs to manage all the infrastructure below our applications customers. So one of the things that influenced this is that, you know, customers expect SaaS applications to just take care of themselves, right? So we had to essentially modify the infrastructure to allow it to do so as well, right? And we're bringing that capability to those people who, you know, may or may not have an application, but their interest is, you know, more in this self-service agility type of aspect.
And you know, when I talked to the Snowflake people, they know Oracle, a lot of them came from Oracle. They understand I think how you can't just build Oracle overnight and build in the capabilities that Oracle has and the recovery. And you talk to customers and you know you are the gold standard of, you know especially mission critical databases, so I get that. But now you just sort of hit on it, is it takes a lot of people and skill to run the database. So that's the problem that you're saying you were attacking, is that, am I getting that right? >> Right, right, so the people that you talked about who originally built Snowflake came from Oracle, but they came from Oracle more than a decade ago. So their context is over a decade old, right? In the meantime, we've been busy, you know building a economies and many other capabilities, right? Their view of Oracle is that view that was back more than 10 years ago, right? They're still adding capability. So a really good example of this illustration is Oracle as you said, it's the most capable system that's out there and has been for many years. We've been focusing on how do we simplify that and how do we use machine learning embedded within the system itself? Because core to the concept of autonomous is that inside, is this machine learning system that's continually improving, right? That's the whole notion. Where in Snowflakes case, they're still adding functionality. Last year, they added masking which you know functionality they didn't have, but when they added the capability, they added it without, you know, the ability for a business user to actually take advantage of it. There's no capability for a business user to actually find the information that needs to be masked. And then after the information is found, you require a technical person to actually implement the mask. In Oracle's case, we've had masking and those capabilities for a long time, our focus was to be able to provide a simple tool that a business user can use that doesn't need technical or security experience. Find the data that needs to be masked PII data, and then hit a button and have it masked for you. So, you know, they're still, you know, without this notion of a strategy to move toward the system to heal itself and to manage itself, they're just going to continue. As they continue to add more capability, they will in turn add more complexity. What we're trying to do is take complexity out while others are adding it in, its an ironic twist. >> It is an ironic twist. It is interesting to look at it. And I don't want to make this about Snowflake. But I mean, Hey, I like what they're doing. I like them. I know the management, they're growing like crazy and you know and the customers tell me, hey, this is really simple. And it's simple by design. I mean, to your point over time it's going to get, you know, more and more complex. I was talking to Andy, I think it was Andy. He was saying, you know, they've got the different sizes you've got to shape some, you know, they call it t-shirt sizes. And I was like, okay, I got a small, I got a medium and a large, maybe that's okay. But you guys would say, we give more granular you know, a scaling, I guess is the point there, right? I mean George, I don't know if you can comment on that. It just a different strategy. You've got a company that was founded well, I guess, 2015 versus one that was founded in 1977. 
So you would think the latter has, you know, way more function than the former, but George, anything you'd add to this conversation? >> Yeah, I mean, I'm always amazed that there are these database systems that are perceived as cloud native and they do things like sell you database sizes by t-shirt sizes, as you described. I mean, if you look at Snowflake, it's small, medium, large, extra large, 2X large, but they're all factors of two. You're getting a database size of two, four, eight, sixteen, 32, et cetera. Or if you look at AWS Redshift, you're buying your database by the node. You say, how many nodes do you want? And in both those cases, is this cloud native? This is saying, we have some hardware underneath our database and we need you, Mr. Customer, to tell us how many servers you want. That's not the way the cloud should work, right? And I think this is one of the things that we did with Autonomous Data Warehouse. We said, no, that's not how it should work. We still run our database on hardware, we still have nodes and servers. But we should just ask the customer: how many CPUs would you like for your data warehouse? You want 16? Sounds good. You want 18? Yeah, we can give you 18. We're not, you know, we're not selling these to you in bundles of eight or bundles of six or powers of two. We'll sell you what you need. That's what cloud elasticity should be. Not this idea that, oh, we are a database that should be managed by IT; IT already knows about servers and nodes, therefore it's okay if we tell people your cloud data warehouse runs on nodes. Within Oracle, as Neil said, we wouldn't do that. The data warehouse should be used by the people who want to actually analyze their data; it should be used by the business users. >> Well, and so the other piece of cloud native that has become popular is this idea of separating compute from storage, and being able to scale those two independently of each other, which is pretty important, right? Because you don't want to have to pay for a chunk of compute if you don't need the storage, and vice versa. Maybe you could talk about that, how you solve that problem, to the extent that you solve that problem.
And we do so instantaneously without any downtime whatsoever, right? Because you know, again, you know, people think in terms of these systems have now become business critical. So if the business critical, you can't just shut down to expand. Imagine during the holiday season is your business is ramping up. And then all of a sudden you have to scale, right? And your system either shuts down, reboots itself, right? Or it slows down to the point that it's a crawl and all your customers get frustrated. We don't do that. You click a button, auto scale and we take care of it for you smoothing out those lumps, right? Without any technical assistance. And again, if you look at Redshift, you look at all these various systems, they require technical assistance to be able to figure out not only your initial data, but how you scale out over time. >> Interesting, okay. So all is said, you know, a lot of companies are using Azure, AWS Google for infrastructure, why would these customers not just use their database? Why would they switch to Oracle or ADW? >> Well, I think Neil will probably add something. I want to start by saying a huge number of our existing Autonomous Data Warehouse customers today are customers of AWS and Azure. They are pulling data from AWS and Azure and bringing it into an Oracle Autonomous Data Warehouse. And we built feature Joe, I focused on product managers. We feel featured for that. And so it's perfectly viable and it it's almost commonplace, that the very largest enterprises to be doing that. But then coming to the question of why would they want to do it? I don't know, Neil, you want to take that? >> Yeah, yeah, so one of the things that we've really see emerge here is you know, a data warehouse doesn't generate the transactions on itself, right? So the data has to come from somewhere, right? And you ask yourself, well, where does the data come from? Well, in a lot of cases, that data is coming from applications and increasingly SAS applications that the company has deployed. And those are, you know, HR applications, you know, CRM applications, you know ERP applications and many vertical applications. In Oracle's case, what we've done is we say, okay, well, we have the application, this transactional thing, we have the infrastructure from the economist data warehouse, why don't we just make it really, really easy? And if you're an Oracle applications customer, that's already running on the Oracle cloud, we will essentially provide you the ability to create a data warehouse from that information, right? With a clicker, with largely either with a product and service or quick start kit. You don't start from scratch, you start from where you are. And there are many cases that where you are has data, very much as George mentioned before telcos, banks, insurance companies, governments, all of the data that they want to analyze, a lot of that data guess where it's coming from, it's coming from Oracle applications. So it makes sense to be able to have both the data that's generated and the data that's being analyzed close to the same place. Because at the end of the day, the payoff pitch for any form of analysis is not coming up with an insight, oh, I realized X, Y, Z, but it's rather putting the insight directly into production. And that's where, when you have this stuff spread all over God's greener trying to go from insight into action can take months, if not years. 
The reason that a lot of customers are now turning to us is that they need to be much more agile and they need to be able to turn that insight into action immediately without it being a science project. >> Okay, thank you for that. So let's tick them off. Like what are the top things that customers can get from Oracle Autonomous Data Warehouse, that they couldn't get from say a Snowflake or Redshift or Big query or SQL server or something yet. I appreciate you guys' willingness to talk about the competition. Let's tick them off. What are the most important things that we should know about that they can't get elsewhere? >> So first, I mean, we already talked about a couple of what we think are really the major themes of Autonomous Data Warehouse. The services is autonomous. You don't need to worry about managing it, anyone can manage the data warehouse. The service is elastic. You can buy and pay for what you use. You know, those are just what we think of as being the general characteristics of Autonomous Data Warehouse. But you know, when you come to your question of, hey, what do we give that other vendors don't provide? And I think the one angle that Autonomous Data Warehouse does a really good job is and Neil was just discussing this, it focuses on the business problems, right? We have years and years of experience with not just database security, but data security, right? You know, every cloud vendor can say, oh we encrypt all your data, we have these compliance certifications, all of these things. And what they're saying is, we are securing your database, we are securing your database infrastructure. At Oracle of course has to do those as well. But where we go further, is we say, hey, no, no, no, no, no, we know what business users want. They want to secure their data. What kind of data am I storing? Do I have PII data? Could you detect whether there's PII data and tell me about it in case some user loaded something that I wasn't aware of? What kind of privileges did I give my users? Can you make sure that those privileges are right? And can you tell me if users were given privileges that they're not using maybe I need to take them away. These are the problems that Oracle's tackled in security over the last 20 years. It's really more about the business problem. Yeah, some other, oh, go ahead. >> Oh, I'm sorry, I got so many questions for you guys. We'll get back to that 'cause it sounds like there's a long list. (laughs) >> We have nowhere to go.(laughs) I want to pick up with George on something you said about elasticity. Is it true pay by the drink? Do you have a consumption pricing? I mean, can I dial it up and dial it down whenever I want? How does that work? >> Yes, I mean not to be too many technical details, but you say, I want 14 CPU's that's what your database runs at. You can change that default number anytime you want online, right? You can say, okay, I'm coming up on my quarter end, I'm going to raise my database 20 CPU. We just do it on the ply. We just adjust the size--- >> What about the other way? What about coming down? Can I go down to one? >> You go down, you can go down to one--- >> And you're not going to charge me for 14 if I go down to one? >> No, if you set it down to one, you get charged for one, right? >> Okay, that's good, that's good. >> In the background, you know we are also allowing levels of auto scaling. You say, if you say hey, I want to charged for 14 and Oracle, can you take care of all those scaling for me? 
Yeah, and some others... oh, go ahead. >> Oh, I'm sorry, I've got so many questions for you guys. We'll get back to that, 'cause it sounds like there's a long list. (laughs) >> We have nowhere to go. (laughs) >> I want to pick up with George on something you said about elasticity. Is it true pay-by-the-drink? Do you have consumption pricing? I mean, can I dial it up and dial it down whenever I want? How does that work? >> Yes. I mean, not to get into too many technical details, but you say, I want 14 CPUs, and that's what your database runs at. You can change that default number anytime you want, online, right? You can say, okay, I'm coming up on my quarter end, I'm going to raise my database to 20 CPUs. We just do it on the fly. We just adjust the size--- >> What about the other way? What about coming down? Can I go down to one? >> You can go down to one--- >> And you're not going to charge me for 14 if I go down to one? >> No, if you set it down to one, you get charged for one, right? >> Okay, that's good, that's good. >> In the background, you know, we also allow levels of auto scaling. You say, hey, I want to be charged for 14, and Oracle, can you take care of all that scaling for me? So if a bunch of people jump on at 5:00 PM to run some queries, 'cause the executive said, hey, I need a report by tomorrow morning, we'll take care of that for you. We'll let you go beyond 14 and only charge you for exactly what you use for those extra CPUs beyond 14.
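That click-a-button scaling also has a programmatic face. A sketch using the OCI Python SDK, with the database OCID as a placeholder and the model field names worth checking against your SDK version:

```python
# Hedged sketch: raise an Autonomous Database's baseline CPU count online
# and enable auto scaling, so peak-hour spikes can burst beyond the
# baseline while you are billed only for what is actually consumed.
import oci

config = oci.config.from_file()  # reads ~/.oci/config
db = oci.database.DatabaseClient(config)

adb_id = "ocid1.autonomousdatabase.oc1..example"  # hypothetical OCID

details = oci.database.models.UpdateAutonomousDatabaseDetails(
    cpu_core_count=14,            # the baseline you pay for
    is_auto_scaling_enabled=True, # let 5:00 PM spikes burst beyond it
)
db.update_autonomous_database(adb_id, details)
```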
>> Okay, thank you. Go ahead, Neil. >> And maybe, if I can add, you know, Andy talked about this when he was on that show with you last week, right? He talked about this concept of a converged database, but let me talk about it in the way that we see it, from a business point of view, right? Business users are looking to ask a variety of questions, right? And those questions need to be able to relate to both, you know, the customers themselves, and the relationships that the customer might have with others. You know, today we talk about, like, the social network and who the influencers are within that, and then where they actually conduct business, which is really, in every case, some form of, increasingly, a mobile device. So in that case, you want to be able to ask questions which are not only, you know, who should I focus on, but who are the key influencers within this community, right, that could influence others? And does that happen in a particular place and time? Meaning, you know, let's say pre-COVID, it might happen at a coffee shop or somewhere else. We can answer all of those questions and more inside of the autonomous system, without having to replicate the data out to one system that does graph, and another system that does spatial, and a third system that does this. A business user is like, wait a minute, come on, you're trying to tell me that I need a separate system, and to replicate the data, just to be able to understand location? The answer, in many cases, is yes, you have to have separate systems, which a business person says, well, that's absurd. Can't I just do this all in one system? You can with Oracle.
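Neil's one-system claim is concrete enough to sketch in SQL: a single statement that filters on a spatial column, pulls a value out of a JSON document, and returns relational columns. The CUSTOMERS table, its columns and the 5 km radius are invented, and SDO_WITHIN_DISTANCE assumes a spatial index exists on the geometry column.

```python
# Hedged illustration of a converged query: relational + JSON + spatial
# in one SQL statement. All schema objects here are hypothetical.
import oracledb

conn = oracledb.connect(user="analyst", password="...", dsn="adw_high")
cur = conn.cursor()

cur.execute("""
    SELECT c.cust_id,
           JSON_VALUE(c.profile, '$.influencerScore') AS influencer_score
    FROM   customers c
    WHERE  SDO_WITHIN_DISTANCE(
               c.home_location,   -- SDO_GEOMETRY column, spatially indexed
               SDO_GEOMETRY(2001, 4326,
                            SDO_POINT_TYPE(-122.4, 37.8, NULL), NULL, NULL),
               'distance=5 unit=KM') = 'TRUE'""")

for cust_id, score in cur:
    print(cust_id, score)
```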
>> So look, I'm not trying to be the snarky journalist or analyst here, but I want to keep pushing on this issue. So here we are, it's 2021, it's April, we're like a third of the way through the year, and so far nobody has come out and said, okay, we're going to deliver Autonomous Data Warehouse just like Oracle. So I asked myself, well, why is Oracle doing this? You guys answered, you know, to reduce the labor cost. But I asked myself, is this how they're solving the problem of keeping relevant a database that spans five decades? And you guys said, no, no, this is cloud native, born in the cloud, you know, started essentially with a new mindset. But is this a trend that others are going to follow? And if so, why haven't we seen this idea of self-driving databases elsewhere? Why is it right now unique to Oracle? What's really going on here? >> So I think there's a really interesting thing that's happening. It's not visible outside of Oracle, but it's very visible for those of us who work inside of the development organization. You know, if you look at Oracle, I can tell you, I think it's safe to presume Oracle has the largest database development organization on the planet, right? I mean, it's been the largest, or most used, database for the past two decades. And what's happened is, we pivoted to building a cloud platform. We're not just building a database, we're taking all of these resources that we have, with all this expertise of building database software, and we're saying, we now have to build the platform to run and manage the database software in the cloud, right? And it's a little bit like, I think, to make people relate to it a little better, there was a really good quote from Elon Musk a couple of years ago, talking about Tesla. Everyone looks at the car, right? Tesla, the car is really great. The hard part of this is building the factory, and that analogy holds for Oracle. What we're building is the cloud factory. What we have transitioned is, our database development organization is now building as robust a cloud as possible, so that when we increase the number of databases by 10x, we don't add 10x more cloud ops people to manage it. We are ramping up developers building features to automate the management of our cloud infrastructure. And with that automation we get better availability, fewer errors, more security, and we give the benefits to our cloud data warehouse customers with it. And I think this is something really important to realize, right? We build database software, we build an engineered system built for databases called Exadata, and we build a cloud platform. And these are really equal tiers in what we are building and developing today, in 2021, from the Oracle database development organization. >> Well, you mentioned Exadata. I want to shift gears here a little bit and talk about hybrid cloud. On-premises clouds are finally gaining some traction, and I've got to give props: Oracle's Cloud at Customer was really early to that game. I think it was the first, in my view anyway, true same-same vision. It took you guys a little while to get there, but it was the right vision. And the thing I always say about Oracle that people don't understand is, Oracle invests in R&D. Your chairman is also the CTO. You guys are serious about technical investment, so you know, that's where innovation comes from. And we heard during your recent earnings call some positive comments on this. So what's your take on delivering Autonomous Data Warehouse on-prem, and how do you compare with, say, Snowflake and AWS in that area? Snowflake, Frank Slootman, I've had him on record saying, we're not going to do that halfway house, forget it, we are always going to be in the cloud, we're never going to do an on-prem installation. AWS, we'll see. To date, I don't think you can get Redshift, for instance, on Outposts, but maybe that'll come. But how do you see that emerging? What's your difference there? Maybe Neil, you could talk about that. >> Yeah, so, you know, I think customers in a lot of regulated industries, right, still have concerns about the public cloud. And I think that when you hear statements like, you know, we're never going to do on-prem... well, Autonomous Cloud at Customer is not a classic on-prem solution. What it is, is a piece of our cloud delivered in your data center. It's still the cloud software. Oracle manages it, the system itself manages itself, and we take care of that responsibility so you don't have to. The difference is that we can make that available in a public cloud as well as in a private cloud, right? And there are so many use cases, you know, that you can imagine, from a regulatory point of view or just from a comfort point of view, where customers want the ability to decide for themselves where to place this stuff, as compared to only having one option, right? And you know, you look at a lot of what's happening in the emerging world, where there are a lot of places that may not have really, really high-speed internet connections to make a public cloud feasible. Well, in that case, whether you're talking about an oil rig or you're talking about something else, right, we can put that capability where it needs to be, close to the operation that you're talking about, irrespective of the deployment option. >> Well, let me just follow up on that, because I think it's interesting that, you know, Frank Slootman said that to me. Around AWS, I oftentimes say never say never, 'cause they'll surprise you, right? And I've learned that with Andy Jassy. But one of the things that seems difficult for on-prem would be to separate that compute from storage, because you have to actually physically move in resources. I think about Vertica Eon mode. It's not quite the same-same. So, I mean, in that regard maybe you're not the same-same, and maybe that dogma makes sense for some companies. For Oracle, obviously, you've got a huge on-prem estate. Thoughts on that? >> So, you know, typically what we'll do is provide additional hardware beyond what the customer might expect, and that allows them to use the capabilities of expansion, right? We also have the ability to allow the customer to expand from their Cloud at Customer into the public cloud as well, and we have a lot of those situations. So we can provide a level of elasticity even on-premises, by over-provisioning the systems while not charging the customer until they use it, based only on what they consume, right? Combined together with the ability for us to augment their usage in the public cloud as well, right? Where others, again, are constrained, right? Because they only have a single option. >> Right, well, you've got the capital resources to do that as well, which is not to be overlooked. Okay, I mean, I've blown our time here, but you guys are so awesome. (laughs) I appreciate the candor. So last question, and George, if you want to throw in a couple of those other tick boxes, you know, the differentiators, please feel free. But for both of you: if you could leave customers with the one key point, or the top key points, on how Oracle Autonomous Data Warehouse can really help them improve their business in the near term, what would they be? Maybe George, you could start, and then Neil, you bring us home. >> Yeah, I mean, I think that, as I said before, our starting point with Autonomous Data Warehouse is, how can we build a better customer experience in the cloud? And this continues throughout 2021. I think the big theme here is that business users should be able to get value directly from their data warehouses. We talked a few times about how a line-of-business user should be able to manage their own data, should be able to load their own data warehouse, should be able to start to work with their own data, should be able to run machine learning models, or build machine learning models, against that data, with all of that built in and delivered in Autonomous Data Warehouse. And we see our customer organizations, large and small, the light bulbs starting to go on about how easy the service is to use and how complete it is for helping business users get value from their data.
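The built-in machine learning George mentions can be exercised without leaving the database. One documented entry point is the one-call predictive-analytics routine; a hedged sketch follows, with hypothetical table and column names, and an argument order worth double-checking against your release's documentation.

```python
# Sketch: in-database ML via DBMS_PREDICTIVE_ANALYTICS.PREDICT, which
# builds a transient model and writes per-row predictions to a result
# table. CUSTOMERS/CUST_ID/CHURNED/CHURN_PREDICTIONS are hypothetical.
import oracledb

conn = oracledb.connect(user="analyst", password="...", dsn="adw_high")
cur = conn.cursor()

accuracy = cur.var(float)  # OUT parameter: predictive confidence
cur.callproc("DBMS_PREDICTIVE_ANALYTICS.PREDICT", [
    accuracy,
    "CUSTOMERS",           # source table
    "CUST_ID",             # case id column
    "CHURNED",             # target column to predict
    "CHURN_PREDICTIONS",   # result table created by the call
])
print("predictive confidence:", accuracy.getvalue())
```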
>> And just adding onto what George said, you know, the development organization has done a tremendous job of really simplifying this operation. We've also tried to do that on the business side. You know, when a customer has an on-prem situation and they're looking at moving to the cloud, whether lift-and-shift or modernize, they're looking at cost, they're looking at risk, and they're looking at time. So one of the things we look at is, how do we mitigate that? How do we mitigate the cost, the risk and the time? Well, this week I think we announced our new Cloud Lift program, and with the Cloud Lift program, Oracle will provide its cloud engineering resources around the world, and we will take the cost, the risk and the time out of the equation. Oracle will work directly with the customer, or the customer's partner of choice, maybe an Accenture or a Deloitte, and we will move them, right? At little or no cost, in most cases no cost whatsoever, right? We mitigate the risk because we're taking the risk on, and we've built a lot of automated tools to make that go very quickly, right? And securely. And then finally, we do it in a very, very short amount of time, as compared to what you would otherwise need to do, 'cause there is no Redshift on-premises, there is no Snowflake on-premises. You have to convert from what you already have to that, right? But beyond the technological barriers that George talked about, we're also trying to smooth the operation, so that a business itself can make a decision: not only do they not need the technical people to operate it, they won't need an entire consulting contract worth millions of dollars in order to actually make the move to the cloud. >> Well, guys, I really appreciate you coming on the program, and again, your candor, to speak openly about your approach and the competitors. It's great having you. Really, really, thank you for your time. >> Appreciate it. >> And thank you for watching, everybody. Look, if you guys want to come back, go toe to toe with these guys, say the word, you're always welcome to come on The Cube. One thing's for sure: Oracle is serious when it comes to database. Thank you for watching. This is Dave Vellante. We'll see you next time. (bright music)

Published Date : Apr 7 2021

SENTIMENT ANALYSIS :

ENTITIES

Entity                              Category        Confidence
Andy                                PERSON          0.99+
George                              PERSON          0.99+
Dave Vellante                       PERSON          0.99+
Andrew Mendelsohn                   PERSON          0.99+
Neil                                PERSON          0.99+
Neil Mendelson                      PERSON          0.99+
Dave                                PERSON          0.99+
George Lumpkin                      PERSON          0.99+
Oracle                              ORGANIZATION    0.99+
Deloitte                            ORGANIZATION    0.99+
Steve Savannah                      PERSON          0.99+
1977                                DATE            0.99+
AWS                                 ORGANIZATION    0.99+
Frank Slootman                      PERSON          0.99+
Amazon                              ORGANIZATION    0.99+
2015                                DATE            0.99+
Andy Jassy                          PERSON          0.99+
2018                                DATE            0.99+
April                               DATE            0.99+
100s                                QUANTITY        0.99+
5:00 PM                             DATE            0.99+
April 2021                          DATE            0.99+
tomorrow morning                    DATE            0.99+
Tesla                               ORGANIZATION    0.99+
10 CPU                              QUANTITY        0.99+
Last year                           DATE            0.99+
Oracle Autonomous Data Warehouse    ORGANIZATION    0.99+

Andy Mendelsohn, Oracle | CUBE Conversation, March 2021


 

>> The cloud has dramatically changed the way providers think about delivering database technologies. Not only has cloud-first become a mandate for many, if not most, but customers are demanding more capabilities from their technology vendors. Examples include a substantially similar experience for cloud and on-prem workloads, increased automation, and a never-ending quest for more secure platforms. Broadly, there are two prevailing models that have emerged. One is to provide highly specialized database products that focus on optimizing for a specific workload signature. The other end of the spectrum combines technologies in a converged platform to satisfy the needs of a much broader set of use cases. And with me to get a perspective on these and other issues is Andy Mendelsohn, executive vice president of Oracle, the world's leading database company. Andy leads database server technologies. Hello, Andy, thanks for coming on. >> Hey Dave, glad to be here. >> Okay, so we saw the recent announcements. This is kind of your baby, around next generation Autonomous Data Warehouse. Maybe you could take us through the path you took from the original cloud data warehouses to where we are today. >> Yeah, when we first brought Autonomous Database out, we were basically a second generation technology at that point. You know, we decided that what customers wanted was, at the push of a button, to provision the really powerful Oracle database technology that they've been using for years, and we did that with Autonomous Database. And beyond that, we provided a very unique capability around self-tuning, self-driving of the database, which is something the first generation vendors didn't provide. And this is really important, because customers today, you know, developers and data analysts, can, at the push of a button, build out their data warehouses, but they're not experts in tuning. And so what we thought was really important is that customers get great performance out of the box, and that's one of the really unique things about Autonomous Data Warehouse, Autonomous Database in particular. And then this latest generation that we just came out with also answers the questions we got from the data analysts and developers. They said, you know, it's really great that I can press a button and provision this very powerful data warehouse infrastructure, or database infrastructure, from Oracle. But if I'm an analyst, I want data, and it's still hard for me to go and get data from various data sources, transform them, clean them up, and get them to a place where I can start querying the data. I still need data engineers to help me do that. And so in the new release we said, okay, we want to give data analysts, data engineers, data scientists and developers a true self-service experience, where they can do their job completely without bringing in any engineers from their IT organization. And so that's what this new version is all about. >> Yeah, awesome. I mean, look, years ago you guys identified the IT labor problem, and you've been putting it into your R&D to solve that problem for customers, so we're really starting to see that hit now. Now, Gartner recently did some analysis. They ranked and rated some of the more popular cloud databases, and Oracle did very well, particularly in operational categories. I mean, on the operational side and the mission critical stuff, you smoked everybody. We had Marc Staimer and David Floyer on, and our big takeaways were that you're again dominating in mission critical workloads, that dominance continues, but your approach of converging functionality really differs from some others that we saw. I mean, obviously when you get high ratings from Gartner you're pretty stoked about that, but what do you think contributed to those rankings, and what are you finding specifically in customer interactions? >> Yeah, so Gartner does a lot of its analysis based on talking to customers, finding out how these products that sound great on paper actually work in practice. And I think that's one of the places where Oracle database technology really shines. It solves real-world problems. It's been doing it for a long time, and as we've moved that technology into the cloud, that continues. The differentiation we've built up over the years really stands out. You look at, say, Amazon's databases: they generally take some open source technology that isn't that new, it could be 30 years old, 25 years old, and they put it up on the cloud and they say, oh, it's cloud native, it's great. But in fact it's the same old technology that's a decade behind Oracle's database technology. So I think the Gartner analysis really showed that sort of thing quite clearly. >> Yeah, so let's talk about that a little bit, because one of the things I've learned over the last many years of following this business is there are a lot of ways to skin a cat. And the cloud database vendors, you mentioned AWS, you look at Snowflake, take kind of a right-tool-for-the-right-job approach. They're going to say that their specialty databases, their focus, are better than your converged approach, which they want you to think of as a Swiss Army knife. What's your take on that? >> Yeah, well, the converged approach is something we've been working on for a long time. The idea is pretty simple. Think about your smartphone. If you think back over 10 years ago, you used to have a camcorder and a camera and a messaging device, and also a dumb phone device. All those different devices got converged into what we now call the smartphone. Why did the smartphone win? It's just simply much more productive for you to carry one device around that is actually best of breed in all the different categories, instead of lots of separate devices. And that's what we're doing with the converged database. Over the years we've been able to build out technologies that are really good at transaction processing, at analytics for data warehousing. Now we're working on JSON technologies, graph technologies. The other vendors basically can't do this. I mean, it's much easier to build a specialty database that does one thing than to build out a converged database that does n things really well, and that's what we've been doing for years. And again, it's based on technology that we've invested in for quite a long time, and it's something that I think customers and developers and analysts find to be a much more productive way of doing their jobs. >> It's very unique, and not common at all, to see a technology that's been around as long as Oracle database sort of morph into a more modern platform. I mean, you mentioned AWS leverages open source a lot. Snowflake would say, okay, hey, we are born in the cloud, and they are. I think Google BigQuery would be another good example. But I want to get your take on this born-in-the-cloud notion. Those folks would say, well, we're superior to Oracle's, because, you know, they started decades ago, not necessarily with native cloud services. How have you been able to address that? I know cloud-first is kind of the buzzword, but how have you made that sort of transparent to users, or irrelevant to users, because you are cloud-first? Maybe you could talk about how you've been able to achieve that, and convince us that you actually really are cloud native now. >> You know, one of the things we like pointing out is that Oracle, very uniquely, has had this scale-out technology for running all kinds of workloads, not just the analytic workloads you see out in the cloud there; we can also scale out transaction processing workloads. That was another one of the reasons we do so well in, for example, the Gartner analysis for operational workloads. And that technology is really valuable as we went to cloud. It lets us do some really unique things, and the most obvious unique thing we have is something we like to call cloud native instant elasticity. And so with our technology, if you want to provision some amount of compute to run your workloads, you can provision exactly what you need. If you need 17 CPUs to get your job done, you do 17 CPUs when you provision your Autonomous Database. Our competitors who claim to be born in the cloud, like Snowflake and Amazon, still use this archaic way of provisioning servers based on shapes. You know, Snowflake says, which shape cluster do you want? Do you want 16, do you want 32, do you want 64? It goes up by a power of two, which means, if you compare that to what Oracle does, you have to provision up to twice as much CPU as you really need. So if you really need 17, they make you provision 32. If you really need 33, they make you provision 64.
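The shape arithmetic is easy to make concrete. A back-of-the-envelope check, pure Python, no Oracle APIs involved:

```python
# Over-provisioning from rounding a CPU requirement up to the next power
# of two, versus provisioning the exact count.
def next_power_of_two(n: int) -> int:
    p = 1
    while p < n:
        p *= 2
    return p

for need in (17, 33, 51):
    shape = next_power_of_two(need)
    waste = (shape - need) / need * 100
    print(f"need {need:>2} CPUs -> shape {shape:>2} "
          f"({waste:.0f}% over-provisioned)")

# need 17 CPUs -> shape 32 (88% over-provisioned)
# need 33 CPUs -> shape 64 (94% over-provisioned)
# need 51 CPUs -> shape 64 (25% over-provisioned)
```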
So this is not a cloud native experience at all, it's an archaic way of doing things. And we like to point out that, with our instant elasticity, we can go from 17 to 18 to 19, whatever you want. Plus we have something called auto scale, so you can set your baseline to be 17, let's say, but we will automatically, based on your workload, scale you up to three times that, so in this case to 51. And because of that true elasticity, we are really the only ones that can deliver a true pay-as-you-go, just-pay-for-what-you-need kind of capability, which is certainly what Amazon was talking about when they first called their cloud elastic. But it turns out, for database services, these guys still do this archaic thing with shapes. So that's a really good example of where we're quite a bit better than the other guys, and much more cloud native than the other guys. >> I want to follow up on that, just stay here for a second, because you're basically saying, we have better granularity than the so-called cloud native guys. Now, you mentioned Snowflake, right? You've got the shapes, you've got to choose which shape you want, and it sounds like Redshift is the same. And of course, I know the way in which Amazon separates compute from storage is largely a tiering exercise, so it's not as smooth as you might expect, but nonetheless, it's good. How is it that you were able to achieve this with a database that was born many decades ago? I mean, what is it, from a technical standpoint, an R&D standpoint, that you were able to do? Did you design that in the 1980s? How did you get here? >> Yeah, well, it's a combination of interesting technologies. So Autonomous Database has the Oracle database software, and that software is running on a very powerful infrastructure optimized for database, based on the Exadata technology that we've had on-prem for many years. We brought that to the cloud, and that technology is a scale-out infrastructure that supports thousands of CPUs. And then we use our multi-tenant technology, which is a way of sharing large infrastructures amongst separate clients, and we divide it up dynamically on the fly. So if there's thousands of CPUs and, you know, this guy wants 20 and this one wants 30, we divide it up and give them exactly what they need, and if they want to grow, we just take some extra CPUs that are in reserve and give them to them instantly. And so that's a very different way of doing things from a shape-based approach, where what Snowflake and Amazon do under the covers is give you a real physical server, or a cluster, and that's how they provision. If you want to grow, they give you another big physical cluster, which takes a long time to get the data populated and get it working. We just have that one infrastructure that we're sharing among lots of users, and we just give you a little extra capacity. It's done instantly. There's no need for data to be moved to populate the new clusters that Snowflake or Amazon are provisioning for you. >> So it's a very different way of doing things, and you're able to do that because of the tight integration, you mentioned Exadata, tight integration between the hardware and software. David Floyer calls it the iPhone of enterprise. Sometimes you get some grief for that, but it's not a bad metaphor. But is that really the sort of secret? >> Well, the big secret under the covers is, you know, Exadata technology, our Real Application Clusters scale-out technologies, our multi-tenant technologies. These are things we've been working on for a long time, and they are very mature, very powerful technologies, and they really provide very unique benefits in a cloud world, where people want things to happen instantly and they want to work well for any kind of workload. You know, that's why we talk about being converged. We can do mixed workloads, you can do transactions and analytics all on the same data. The other guys can't do that. They're really good at, like you said, a narrow workload: I can do analytics, or I can do graph, or I can do JSON, but they can't really do the combination, which is what real-world applications are like. They're not purely one thing versus another, right? >> Thank you for that. So one of the questions people want to know is, can Oracle attract new customers that aren't existing Oracle customers? So maybe you could talk about that. Why should somebody who's not an existing Oracle customer think about using Autonomous Database? >> That's a really good question. You know, Oracle, if you look at our customer base, has a lot of really large enterprises, the biggest banks and the biggest telcos. They run Oracle, they run their businesses on Oracle, and these guys are sort of the most conservative of the bunch out there, and they are moving to cloud at a somewhat slower rate than the smaller companies. And so if you look at who's using Autonomous Database now, it's actually the smaller companies, the same type of people that first decided Amazon was an interesting cloud 10 years ago. They're also using our technologies, and it's for the same reason. They don't have large IT organizations, they don't have large numbers of engineers to engineer their infrastructure, and that's why cloud is so attractive to them, and Autonomous Database on top of cloud is really attractive as well. Because information is the lifeblood of every organization, and if they can empower their analysts to get their job done without lots of help from IT organizations, they're going to do it. And that's really what's made Autonomous Database really interesting: the whole self-driving nature is very attractive to the smaller shops that don't have a lot of sophisticated IT expertise. >> All right, let's talk about developers. You guys are the stewards of the Java community, so obviously probably the biggest, most popular programming language out there. But when I think of developers, I think of guys in hoodies pounding away, and when I think of Oracle developers, I might think of maybe an app dev team inside of some of those large customers that you talked about. But why would developers and/or analysts be interested in using Oracle, as opposed to some of those more focused, narrow-use databases that we were talking about earlier? >> Yeah, so if you're a developer, you want to get your job done as fast as possible, and so having a database that gives you the most productive application development experience is important to you. We've been talking about converged database off and on: if I'm a developer and I have a given job to do, a converged database that lets me do a combination of analytics and transactions, and do a little JSON and a little graph, all in one, is a much more productive place to go. Because if I don't have something like that, then I'm stuck taking my application and breaking it up into pieces. You know, this piece I'm going to run on, say, Aurora on Amazon, and this piece I have to run on the graph database, and here's some JSON, I've got to run that on some document database. And then the data gets sort of fragmented between these databases, and I have to do all this data integration. With a converged database, I have a much simpler world, where I can just use one technology stack, I can get my job done, and then I'm future-proofed against change. You know, requirements change all the time. You build the initial version of the application and your users say, this is not what I want, I want something else. And it turns out that something else often is analytics, and you used something like a document store technology that has really poor analytic capabilities, and so you have to take that data and move it to another database. With our converged approach you don't have to do that. You're already in a place where everything works, where everything you could possibly need in the future is going to be there as well. So for developers, I think converged is the right way to go. Plus, for people who are what we call citizen developers, like the data analysts, who write a little code occasionally but are really after getting value out of the data, we have this really fabulous no-code, low-code tool called APEX. And APEX is, again, a very mature technology. It's been around for years, and it lets somebody who's just a data analyst, who knows a little SQL but doesn't want to write code, get their job done really fast. We've published some benchmarks on our website showing that, basically, you can get the job done 20 to 40 times faster using a no-code, low-code tool like APEX versus just writing lots of traditional code. >> I'm glad you brought up APEX. We recently interviewed one of your former colleagues, Amit Zavery, and all he would talk about is low-code, no-code. And then in the APEX announcement, you said something to the effect of, coding should be the exception, not the rule. Did you mean that? What do you mean by that? >> Yeah, so APEX is a tool that people use with our database technology for building what we call data-driven applications. If you've got a bunch of data and you want to get some value out of it, you want to build maybe dashboards or more sophisticated reports, APEX is an incredible tool for doing that. And it's modern: it builds applications that look great on your smartphone, and it automatically renders that same user interface on a bigger device, like a laptop or desktop device, as well. And it's one of these things where the people that use it just go bonkers with it. It's a viral technology. They get really excited about how productive they've been using it, and they tell all their friends. And I think we decided, I guess about a year ago, when we came up with this APEX service, that we really wanted to start going bigger on the marketing around it, because it's very unique, nobody else has anything quite like it, and it again just adds value to the whole developer productivity story around an Oracle database. So that's why we have the APEX service now, and we also have APEX available with every Oracle database on the cloud. >> I want to ask you about some of the features around 21c. There are a lot of them you announced earlier this year. Maybe you could tease out some of the top things that we should be paying attention to in 21c. >> Yeah, sure. So one way to look at 21c is that we're continuing down this path of a converged database, and one of the marquee features in 21c is something we call blockchain tables. So what is blockchain? Well, blockchain was the technology under the covers behind Bitcoin. It's a way of creating a tamper-proof data store, and that was used by the original Bitcoin algorithms. Well, developers actually like having tamper-proof data objects in databases too. And so what we decided to do was say, well, if I create a SQL table in an Oracle database, what if there's a new option that just says, I want that table implemented using blockchain technology, to make the table tamper-proof and fully audited, et cetera. And so we just did that, and so in 21c you can now get, basically, another feature of the converged database that says: give me a SQL table, I can do everything with it, I can query it, I can insert rows into it, but it's tamper-proof. I can't ever update it, I can't delete rows from it. Amazon did their usual thing. They took, again, some open source technology and they said, hey, we've got this great thing called Quantum Ledger Database, and it does blockchain tables. But if you want to do blockchain tables in any of their other databases, you're out of luck. They don't have it; you have to go move the data into this new thing. It's again showing the problem with their proprietary approach of having specialty databases, versus just having one converged database that does it all. So that's the blockchain table feature.
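The blockchain-table option Andy describes is ordinary DDL with extra retention clauses. A sketch driven from Python; the table and the retention values are illustrative, with the clause syntax following the 21c CREATE BLOCKCHAIN TABLE documentation.

```python
# Hedged sketch: create and use a 21c blockchain table. Rows insert and
# query like any SQL table, but UPDATE/DELETE are rejected by the engine.
import oracledb

conn = oracledb.connect(user="admin", password="...", dsn="db21c_high")
cur = conn.cursor()

cur.execute("""
    CREATE BLOCKCHAIN TABLE bank_ledger (
        account_no   NUMBER,
        deposit_date DATE,
        amount       NUMBER)
    NO DROP UNTIL 31 DAYS IDLE
    NO DELETE LOCKED
    HASHING USING "SHA2_512" VERSION "v1"
""")

cur.execute("INSERT INTO bank_ledger VALUES (1001, SYSDATE, 250)")
conn.commit()
# A subsequent DELETE or UPDATE against bank_ledger raises an error.
```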
We did a bunch of other things as well. The one I think is worth mentioning the most is support for persistent memory. A lot of people out there haven't noticed this very interesting technology that Intel shipped a couple of years ago, called Optane data center memory. It's basically a hybrid of flash memory, which is persistent, and standard DRAM, which is not persistent, meaning you can't store a database in DRAM. And so with this persistent memory, you can basically have a database stored persistently in memory all the time. It's a very innovative new technology from a database standpoint, and a very disruptive technology for the database market, because now you can have an in-memory database, basically, period, all the time, 24/7. And 21c is the first database out there that has native support for this new kind of persistent memory technology. We think it's really important, so we're actually making it available to our 19c customers as well. That's another technology I'd call out that we think is very unique. We're way ahead of the game there, and we're going to continue investing in that space moving forward as well. >> Yeah, so that layer in between DRAM and persistent flash, that's a great innovation, and game changing from a performance standpoint, and actually in the way you write applications. But I've got to ask you: I was on with Juan recently, Juan Loaiza, and listening to that introduction of blockchain, everybody wants to know, is Safra going to start putting Bitcoin on the Oracle balance sheet? Am I about to get that scoop? >> Yeah, that's a good question. Who knows? I can't comment on speculation. >> Ah, that would be interesting. Okay, last question, then we've got to go. Look, the narrative on Oracle is, you're expensive and you're mean, you know, it's hard to do business with you. Do you care? Are you doing things to maybe change that perception in the cloud? >> Yeah, I think we've made a very conscious decision that as we move to the cloud, we're offering a totally new business model on the cloud, a cloud-native model. You pay for what you use, you have everyday low prices, you don't have to negotiate with some salesman for months to get a good price. So yeah, we'd really like the message to get out there: those of you who think you know what Oracle's all about, and how it might be to work with Oracle from your on-premises days, you should really check out how Oracle is now, on the cloud. We have this Autonomous Database technology, really easy to use, really simple. Any analyst can get value out of the data without any help from other engineers. It's very unique. It's the same technology you're used to, but now it's delivered in a way that's much easier to consume and much lower cost. So yeah, you should definitely take a look at what we've got out there on the cloud, and it's all free to try out. We've got this free tier: you can provision free VMs, free databases, free APEX, whatever you want, and try it out and see what you think. >> Well, thanks for that. I was kidding about the mean part; I have a lot of friends at Oracle, some relatives as well. And thanks, Andy, for coming on theCube today. It's really great to talk to you. >> Yeah, it's my pleasure. >> And thanks for watching. This is Dave Vellante. We'll see you next time.

Published Date : Mar 29 2021

SENTIMENT ANALYSIS :

ENTITIES

Entity                      Category        Confidence
Andy Mendelsohn             PERSON          0.99+
amazon                      ORGANIZATION    0.99+
March 2021                  DATE            0.99+
20                          QUANTITY        0.99+
gartner                     ORGANIZATION    0.99+
oracle                      ORGANIZATION    0.99+
apex                        TITLE           0.99+
juan loyza                  PERSON          0.99+
first database              QUANTITY        0.99+
Oracle                      ORGANIZATION    0.99+
david floyer                PERSON          0.99+
two prevailing models       QUANTITY        0.99+
twice                       QUANTITY        0.98+
dave vellante               PERSON          0.98+
today                       DATE            0.98+
first generation            QUANTITY        0.98+
10 years ago                DATE            0.98+
thousands of cpus           QUANTITY        0.98+
decades ago                 DATE            0.98+
40 times                    QUANTITY        0.98+
51                          OTHER           0.97+
25 years old                QUANTITY        0.97+
30                          QUANTITY        0.96+
first                       QUANTITY        0.96+
andy mendelson              PERSON          0.96+
17                          OTHER           0.96+
1980s                       DATE            0.96+
second generation           QUANTITY        0.96+
33                          OTHER           0.96+
one                         QUANTITY        0.96+
30 years old                QUANTITY        0.96+
json                        ORGANIZATION    0.95+
earlier this year           DATE            0.95+
one device                  QUANTITY        0.94+
amit xavery                 PERSON          0.94+
mark stamer                 PERSON          0.92+
google                      ORGANIZATION    0.91+
years                       DATE            0.91+
32                          OTHER           0.9+
about a year ago            DATE            0.9+
oracle                      TITLE           0.9+
over 10 years ago           DATE            0.89+
safra                       ORGANIZATION    0.88+
16                          OTHER           0.88+
one thing                   QUANTITY        0.87+
many decades ago            DATE            0.87+
lot of people               QUANTITY        0.83+

The Value of Oracle’s Gen 2 Cloud Infrastructure + Oracle Consulting


 

>> From the Cube Studios in Palo Alto and Boston, it's the Cube, covering Empowering the Autonomous Enterprise, brought to you by Oracle Consulting. >> Everybody, this is Dave Vellante. We've been covering the transformation of Oracle Consulting and, really, its rebirth. And I'm here with Chris Fox, who's the group vice president for Enterprise Cloud Architects and chief technologist for the North America Tech Cloud at Oracle. Chris, thanks so much for coming on the Cube. >> Thanks, Dave. Great to be here. >> So I love this title. You know, years ago there was no such thing as a cloud architect. Certainly there were chief technologists, but so those are really your peeps, is that right? >> That's right, that's right. That's really my team, and that's all we do. So our focus is really helping our customers take this journey from when they were on-premise to really transforming with cloud. And when we think about cloud, really, for us it's a combination: it's our hybrid cloud, which happens to be on-premise, and then, of course, the true public cloud, like most people are familiar with. So it's a very exciting journey, and frankly we're seeing just a lot of success for our customers. You know what I think we're seeing at Oracle, though? Because we're so connected with SaaS, and then we're also connected with the traditional applications that have run the business for years, the legacy applications that have been, you know, servicing us for 20 years, and then the cloud native developers. So what my team and I are constantly focused on now is things like digital transformation, and really wiring up all three of these across. So if we think of a customer outcome, like, I want to have a package delivered to me from a retailer, that actual process flow could touch a brand new cloud native e-commerce site, it could touch, essentially, maybe a traditional application that used to be on-prem and that's now in the cloud, and then it might even use a new SaaS application, maybe for a human process, or delivery vehicle scheduling. So what my team does is actually connect all three. As I mentioned, with my team and all of our customers, we effectively service all three of those constituents. And if you think about process flows: we take a cloud native developer and we help them become efficient, we take the people used to running a traditional application and we help them become more efficient, and then we have the SaaS applications, which are now rolling out new features on a quarterly basis, a whole new delivery model. But the real key is connecting all three of these into your business process flow. That makes the customer's life much more efficient. >> So I want to get into this cloud conversation. You guys are using this term last mover advantage. I asked, is being last, you know, an advantage? But let me start there. >> People always say, you know, of course we want to get out of the data center, we're going zero data center. And we say, well, how are you going to handle that back office stuff, right? The stuff that's really big and, frankly, doesn't handle, you know, instances dying or things going away too easily. It needs predictable performance and scale, it absolutely needs security, and ultimately, you know, a lot of these applications have truly relied on an Oracle database. The Oracle database has its own specific characteristics that it needs to run really well.
So we actually looked at the cloud and we said, let's take the first generation clouds, they're doing great, but let's add the features that, a lot of times, the Oracle workload specifically needed in order to run very well and in a cost-effective manner. So that's what we mean when we say last mover advantage. We said, let's take the best of the clouds that are out there today, let's look at the workloads that, frankly, Oracle runs and has been running for years, what have customers needed, and then let's build those features right into this next version of the cloud that services the enterprise. So our goal, honestly, which is interesting, even in that first discussion we had about cloud native and legacy applications and also the new SaaS applications: we built a cloud that handles all three use cases, at scale, resiliently, in a very secure manner, and I don't know of any other cloud that's handling those three use cases all in the same tenancy process. >> My question is, why was it important for Oracle, and is it important for Oracle and its customers, to participate in IaaS and PaaS and SaaS? Why not just the last two layers of that? What does that mean from a strategic advantage standpoint? What does that do for you? >> Yeah, great question. So the number one reason why we needed to have all three was that we have so many customers who today are in a data center. They're running a lot of our workloads on-premise, and they absolutely are trying to find a better way to deliver lower cost services to their customers. And so we couldn't just say everyone needs to ditch the old and go to just the brand new alone: too hard, too expensive at times. So we said, let's give customers the ultimate amount of choice. So let's even go back to that developer conversation and SaaS. If we didn't have IaaS, we couldn't help customers achieve a zero data center strategy with their traditional applications, call it PeopleSoft or JD Edwards or E-Business Suite, or even... there are some massive applications running on the Oracle cloud right now that are custom applications built on the Oracle database. What they want is, they said, give me the lowest cost, predictable-performance IaaS, and I'll run my apps tier on this. Number two, give me a platform service for database, because frankly I don't really want to run your database with all the manual effort; I want someone to automate patching, scale up and down, and all these types of features, like you've given us. And then number three, I do want SaaS over time. So we spend a lot of time with our customers really saying, how do I take this traditional application, run it on IaaS and PaaS, and then, number two, let's modernize it at scale. Maybe I want to start peeling off functionality and running it in cloud native services right alongside, right? That's something, again, that we're doing at scale, while other people are having a hard time running these traditional workloads on-prem in the cloud. The second part is, they say, you know, I've got this legacy, traditional ERP that's been servicing us well, or maybe a supply chain system, and ultimately I want to get out of this. How do I get to SaaS? We say, okay, here's the way to do this. First, bring it into the cloud, running on IaaS and PaaS, and then selectively, I call it cloud slicing, take a piece of functionality and put it into SaaS. We're helping customers move to the cloud at scale.
We're helping them do it at their rate, with whatever level of change they want, and when they're ready for SaaS, we're ready for them. >> How does autonomous fit into this whole architecture? I'm waiting for that description. I mean, it's nuanced, but it's important. I'm sure you've had this conversation with a lot of cloud architects and chief technologists. They want to know this stuff, they want to know how it works. You know, we will talk about what the business impact is, but let's talk about autonomous and where that fits. >> So with the Autonomous Database, what we've done is really look at all the runtime operations of an Oracle database, so tuning, patching, securing, all these different features. And what we've done is take the best of the Oracle database, the best of something called Exadata, right, which we run in the cloud and which really helps a lot of our customers, and then wrap it with a set of automation and security tools to help it really manage itself, tune itself, patch itself, scale up and down independently between compute and storage. Why that's important, though, is that our goal is really to help people run the Oracle databases they've had for years, but with far less effort. And then, even beyond far less effort, hopefully, you know, with a machine assisting; the line we always talk about is, man plus machine is greater than man alone. So being assisted by artificial intelligence and machine learning to perform those database operations, we should provide a better service to our customers, with far less patching. Our hope and goal is that, for people who have been running Oracle databases, how can we help them do it with far less effort, and maybe spend more time on what the data can do for the organization, right? Improve the customer experience, et cetera, versus, maybe, like, how do I spin up a table?
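The independent compute/storage scaling Chris describes shows up as two separate knobs on the same update call in the OCI SDK. A hedged sketch; the OCID and the sizes are placeholders.

```python
# Sketch: resize compute and storage independently on an Autonomous
# Database. Field names per the OCI Python SDK; values are invented.
import oci

db = oci.database.DatabaseClient(oci.config.from_file())

db.update_autonomous_database(
    "ocid1.autonomousdatabase.oc1..example",  # hypothetical OCID
    oci.database.models.UpdateAutonomousDatabaseDetails(
        cpu_core_count=8,             # compute scales...
        data_storage_size_in_tbs=4,   # ...independently of storage
    ),
)
```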
>> So talk about the business impact. You go into customers, you talk to the cloud architects, the chief technologists, you pass that test. Now you've got to deliver the business impact. Where does Oracle Consulting fit with regard to that? And maybe you could talk about where you guys want to take this thing. >> Yeah, absolutely. I mean, the cloud is a great set of technologies, but where Oracle Consulting is really helping us deliver is... you know, one of the things I think has been fantastic working with the Oracle Consulting team is that cloud is new for a lot of customers who've been running these environments for a number of years. There's always some fear and a little bit of trepidation saying, how do I learn this new cloud for the workloads we're talking about, Dave, like tier zero, tier one, tier two, all the way up to dev and test and DR. Oracle Consulting does a couple of things in particular. Number one, they start with the end in mind, and number two, they really help implement these systems. And, you know, there are a lot of different assurances that we have that we're going to get it done on time and, better, be under budget, because ultimately, again, that's something that's really paramount for us. And then the third part of it: sometimes it's a run book, right? We actually don't want to just live in our customers' environments, we want to help them understand how to run this new system. So training and change management: a lot of times Oracle Consulting is helping with run books. We usually, well, after doing it the first time, we'll sit back and let the customer do it the next few times, and essentially help them through the process. And our goal at that point is to leave, only if the customer wants us to, but ultimately our goal is to implement it, get it to go live on time, and then help the customer learn this journey to the cloud. And without them, frankly, I think these systems are sometimes too complex and difficult to do on your own, maybe the first time, especially because, as I say, they're closing the books, they might be running your entire supply chain, they run your entire HR system. Whatever they might be, they're too important to leave to chance. So they really help us with helping a customer go live, and become very confident and skilled, so they can do it themselves. >> I love the conversation, but we have to leave it right there. Thanks so much for coming on the Cube and sharing your insights, great stuff. >> Absolutely, thanks for having me on. >> All right, you're welcome. And thank you for watching, everybody. This is Dave Vellante for the Cube. We're covering the Oracle North American Consulting transformation, and its rebirth, in this digital event. Keep it right there, we'll be right back.

Published Date : Jul 6 2020

SENTIMENT ANALYSIS :

ENTITIES

Entity                        Category         Confidence
Chris                         PERSON           0.99+
Dave Vellante                 PERSON           0.99+
David                         PERSON           0.99+
Chris Fox                     PERSON           0.99+
Oracle                        ORGANIZATION     0.99+
Dave Volante                  PERSON           0.99+
Boston                        LOCATION         0.99+
20 years                      QUANTITY         0.99+
Mike                          PERSON           0.99+
second part                   QUANTITY         0.99+
SAS                           ORGANIZATION     0.99+
Palo Alto                     LOCATION         0.99+
First                         QUANTITY         0.99+
Oracle Consulting             ORGANIZATION     0.99+
Centra                        ORGANIZATION     0.99+
Hana Way                      ORGANIZATION     0.99+
first time                    QUANTITY         0.99+
three use cases               QUANTITY         0.98+
North American Consulting     ORGANIZATION     0.98+
third part                    QUANTITY         0.98+
today                         DATE             0.97+
one                           QUANTITY         0.96+
Cube Studios                  ORGANIZATION     0.96+
three                         QUANTITY         0.96+
first generation              QUANTITY         0.95+
North America Tech Cloud      ORGANIZATION     0.94+
Frankie                       ORGANIZATION     0.92+
PeopleSoft                    ORGANIZATION     0.91+
JD Edwards                    ORGANIZATION     0.87+
Enterprise Cloud Architects   ORGANIZATION     0.87+
two layers                    QUANTITY         0.86+
years                         QUANTITY         0.86+
SAS                           TITLE            0.84+
years                         DATE             0.84+
2                             QUANTITY         0.83+
first discussion              QUANTITY         0.79+
tier one                      OTHER            0.79+
Cube                          ORGANIZATION     0.79+
Revisit                       TITLE            0.75+
Suite                         ORGANIZATION     0.71+
one reason                    QUANTITY         0.71+
zero data                     QUANTITY         0.7+
tier two                      OTHER            0.68+
Pass                          TITLE            0.67+
tier zero                     OTHER            0.66+
IAS                           TITLE            0.65+
two                           QUANTITY         0.64+
Archite                       PERSON           0.61+
Herman                        TITLE            0.61+
zero                          QUANTITY         0.52+
number                        QUANTITY         0.51+
Cube                          COMMERCIAL_ITEM  0.51+

The Value of Oracle + Oracle Consulting


 

>> Announcer: From the CUBE studios in Palo Alto and Boston, it's the CUBE, covering Empowering the Autonomous Enterprise, brought to you by Oracle Consulting. >> Everybody, welcome back to the CUBE, I'm Dave Vellante. We're covering the transformation of Oracle Consulting, specifically focused on what I consider a rebirth: from staff augmentation to a much more strategic partner for customers. And with me to explore that a little bit is Sherry Lautenbach. She's the Senior Vice President of Cloud Key Accounts at Oracle, and we're also joined by Pat Mungovan, who's a Group VP for the North American Cloud Strategy, also at Oracle. Folks, welcome to the CUBE, thanks for comin' on. >> Thanks Dave. >> Yeah, thanks for havin' us. >> You're welcome. So Sherry, you're out talkin' to customers a lot, and I'm curious as to what that conversation is like, specifically as it relates to consulting. Are you bringing Oracle Consulting now into the conversation? What's that conversation like? >> Absolutely. In fact, every conversation we have relating to our cloud strategy, Oracle Consulting is part and parcel to that. And they are not staff augmentation, they are actually the digital transformation arm of what we do around cloud. So it's been really interesting to see what they've been able to do in terms of changing the narrative of what we do at Oracle, from just a software company to really transforming to a cloud provider. >> Strategy is obviously a fundamental part of any customer interaction, but what are you seeing? What underscores customer strategies? What are the business drivers for them right now? What are the catalysts that are driving their technology spending decisions? >> I think a lot of it depends upon, especially in the times that we're in now, the industry that they're in. But most importantly, what we're seeing right now is durability. So we want to make sure that the customers, you know, our Oracle customers and others, have disaster recovery and business continuity options. At this stage, right now, it's less about expansion per se, unless they're in an industry that's uniquely positioned for that, and more about durability of the overall strategy. So when we look at that durability, we think about kind of two core missions. We think about sort of back office operations and continuity, and then we think about transformational revenue generation. And so when we partner with OCS, we want to make sure that we have both of those concepts in mind. >> You know, we have a lot of talk in our community about cloud-first, and I think Oracle has sort of thrown down the gauntlet of, look, we're leading now with cloud. You both have cloud in your title, but obviously being cloud-first is more than that. Sherry, I wonder if you could talk about your customers and your cloud journey, and share with us and kind of convince us that you are cloud-first. >> I joined Oracle about 11 months ago, having been in the industry for about 25 years, and I joined specifically because I believe in what Oracle is doing around this cloud journey. We are in our second generation of cloud capabilities, and that's purposeful. And we do that because we realize that where cloud started and where we are today are two totally different things, and so we have capabilities around security, reliability, and extensions with autonomous that other cloud providers just simply don't have.
And we built these from the ground up to ensure that we can run Oracle workloads, databases, and applications far better than any other cloud provider. So it's a super exciting time to be at Oracle, and it's absolutely fascinating what our customers are doing to adopt our technology. >> Pat, I want to ask you a sort of similar question, how fundamental is cloud to organization strategies, obviously everybody has a cloud strategy, but I'm specifically asking as it relates to mission critical workloads because, let's face it, that's been the hardest to move into the cloud. So when you're out talking to customers about their strategy, and obviously dovetailing it to Oracle's strategy, how do you align those? >> So first I think I would respond in the following way. When I think about our portfolio, I don't necessarily say cloud first, I say customer first, and I really want the customer to make a decision based upon a deployment model that makes sense for that particular customer, whether it's a regulated industry, or the public sector, or you know any sort of compliance considerations. So Oracle is one of the very few enterprise-class cloud providers that has obviously on-premise capabilities as well, and so in 99% of the cases that we see, with the exception of some of the sort of startup SMB type folks that are born in the cloud, we're dealing with the hybrid cloud model anyway. And so that's kind of the first order of priority, what's right for the customer, and let's make sure that we get the appropriate deployment model for that customer. In terms of enterprise, essentially the workloads that we have, whether it's cloud or on prem, are enterprise workloads, and those are kind of separated into two buckets. One would be core mission, sort of the revenue generation side, and one would be mission critical, sort of the back office side. So Oracle is historically tremendous at the back office side, you know, running finance, running operations, running the supply chain, you know, doing those things that are mission critical. On the core mission side, that's really where we're starting to focus now, which is getting out into the revenue generation, the mission of the entity, with things like high performance computing and making sure that we have an ability to support our customers on both sides of the spectrum. >> All right Sherry, why are customers wanting to put mission critical workloads in the cloud? Is it the same sort of cloud agility and cost, et cetera, et cetera? I mean, why not just leave it on prem and keep it protected and maybe spend a little bit more? What's the driver for moving mission critical workloads into the cloud? >> Well, I think it's dependent upon what the initiatives are in the company, right? If they're looking for cost reduction, are they looking for top line growth, are they looking for different capabilities around security that the cloud can provide? The great thing about what we do is we have optimized all of our workloads, both our database and our applications, into our cloud, so we're providing additional capabilities, but we're also saving a lot of money. So we say all the time that, you know, put us to the test, let us quantify what we would look like in the cloud with our workloads versus a competitor, and we will guarantee that we'll save you a lot of money. So I think that a lot of it has to do with one, it starts with essentially cost reduction, but then they start seeing additional business value driven out of it, and that comes back to Oracle Consulting. 
What Oracle Consulting provides in terms of the business value in the cloud is transformative for our customers. >> Talk about kind of how you lead in these customer conversations. >> Right, well normally our entry point is one, understanding what the business drivers are, right. It has to be a business led discussion. It really isn't a technology starting point, right? It really is around what business problems are you trying to solve and how can we help you solve them? And because we know your environments, we know what databases are deployed and where they're deployed, what Oracle applications you're leveraging to run your business, we can, I think, successfully position ourselves very, very competitively against other cloud providers. And I think that has been something that has resonated incredibly well with our customers, including, in fact, our largest customer. >> Yeah, so it seems like Oracle Consulting is an important ingredient as part of that strategy 'cause again, if it was, you know, five years ago, and it was just staff augmentation, that's really not a compelling conversation to have with customers. But if you can come in with a mindset of strategic partner, you're bringing in Deloitte, we've been talking to some of their professionals about the Elevate Program with Oracle, that's a nice lever that you can take advantage of. >> Absolutely, and in fact, we've seen that that is a huge opportunity for us because one, the partnership with Deloitte is incredibly strategic. But we also partner with other companies like Accenture and DXC and IBM candidly, and Oracle Consulting is incredibly flexible in terms of what kind of partnerships and alignment they have with our customers, and it's really based on what the customer preference is. >> It's not just about, you know, feature, function, speeds and feeds, maybe you could address that. And where does Oracle Consulting fit in that equation? >> We firmly believe that every customer is going to want to have a different option for what they do in the cloud and based on the provider. So we one, we've partnered with Microsoft, and we actually can interconnect our clouds together to provide that kind of flexibility to our customers, and Oracle Consulting is a key component of that. To engage our customers and talk about our Microsoft integration, our partnership, Oracle Consulting is the arm that does that work for us. So we are seeing them come about in a much different way, and in a way that's differentiated from other consulting staff augmentation firms. 
So we're a market leader in this space, you know, not only is the Oracle database, as you pointed out, the market leader, we're the market leader in ERP Cloud and a bunch of the SaaS services. But this autonomous segment of the market is crucial for us and crucial to our growth. >> Yeah, it really is an enabler, and what I've been saying is that it's almost compulsory for Oracle to participate and compete in the cloud because it gives you that automation and that scale. But you're talking about also setting up, you know, some future advantages of being able to take advantage of data; the combination of data, AI, and cloud is the new superpower within the industry. Sherry, I want to end on you. 11 months in at Oracle, let's say things work out great, you're here two, three, four years down the road, you look back, what does success look like? >> Success looks like every one of our customers moving to the Oracle cloud and seeing incredible business value from that, partnering with Oracle Consulting. That's what my success criteria is. >> Guys, well listen, thanks so much for coming on the CUBE where we've been tracking this transformation of Oracle Consulting. And one of the things that's very clear, is Oracle is obviously serious about cloud, but also serious about bringing in new talent and new skill sets to really not only transform Oracle but help transform your customers. So thank you for your time, I really appreciate it. >> Thanks so much. >> Yep you bet, thank you. >> All right and thank you everybody for watching. This is Dave Vellante for the CUBE. We'll see you next time.

Published Date : Jul 6 2020

SUMMARY :

Dave Vellante talks with Sherry Lautenbach, Senior Vice President of Cloud Key Accounts at Oracle, and Pat Mungovan, Group VP for the North American Cloud Strategy, about Oracle Consulting's shift from staff augmentation to the digital transformation arm of Oracle's cloud business, why durability and cost reduction are driving customers to move mission critical workloads to the cloud, partnerships with Deloitte, Accenture, DXC, IBM, and Microsoft, and the role of autonomous capabilities in Oracle's growth.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
Deloitte | ORGANIZATION | 0.99+
Pat Mungovan | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Sherry Lautenbach | PERSON | 0.99+
DXC | ORGANIZATION | 0.99+
Accenture | ORGANIZATION | 0.99+
Sherry | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
OCS | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
Boston | LOCATION | 0.99+
99% | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Oracle Consulting | ORGANIZATION | 0.99+
three | QUANTITY | 0.99+
11 months | QUANTITY | 0.99+
Pat | PERSON | 0.99+
both | QUANTITY | 0.99+
four years | QUANTITY | 0.99+
second generation | QUANTITY | 0.98+
five years ago | DATE | 0.98+
two core missions | QUANTITY | 0.97+
CUBE | ORGANIZATION | 0.97+
both sides | QUANTITY | 0.97+
Oracle Consulting | ORGANIZATION | 0.97+
about 25 years | QUANTITY | 0.97+
one | QUANTITY | 0.96+
first | QUANTITY | 0.95+
two buckets | QUANTITY | 0.95+
today | DATE | 0.91+
about 11 months ago | DATE | 0.89+
first order | QUANTITY | 0.84+
cloud | QUANTITY | 0.74+
Program | OTHER | 0.65+
North American | ORGANIZATION | 0.62+
SAS | ORGANIZATION | 0.58+

The Power of Partnership: ELEVATE by Oracle Consulting and Deloitte


 

>> Narrator: From the Cube studios in Palo Alto and Boston, it's the Cube, covering empowering the autonomous enterprise, brought to you by Oracle Consulting. >> Everybody, welcome back to this special digital presentation where we are tracking the transformation of Oracle Consulting. Aaron Millstone is back, he's the senior vice president of Oracle Consulting. He's joined by Jeff Davis, who's a principal at Deloitte. He's the chief Commercial Officer for Oracle at Deloitte. Gentlemen, good to see you, welcome. We see a lot of these deals. Sometimes we call them Barney deals, you know, I love you, you love me, there's a press release and that's it. So one of the things we look for is, is there teeth behind this? You guys have come up with what you call ELEVATE. What is ELEVATE? How did it get started? And I have some follow-up questions. >> Well, ELEVATE really got started when Aaron and I started to look at the assets that each of the firms possessed. On the Deloitte side, as Aaron suggested, we have deep capabilities and a broad range of technologies, some of them competing technologies with Oracle. At the same time, we didn't have a great deal of depth in Oracle's technical products, Oracle Cloud Infrastructure, and Oracle Autonomous. Our bench was not as big as Aaron's. And Aaron also had access to Oracle development at a level that we didn't have access to. So we really found ourselves in a situation where we could put those two capabilities together, and we could offer something to our clients and the broad range of Oracle customers in the field. They had access to all of Deloitte's capabilities, which include great project management, great change management, real skill around the strategic aspects of cloud migration. And Aaron had tools and had resources trained and developed around the latest Oracle technology, so they'd always be a step ahead of any SI. So together, we felt this was really a differentiation for the marketplace. >> One of the things we look for there is, is there any other integration? Are you doing co-engineering? In this case maybe not co-engineering, but are there tools that you're developing that you're taking to market, that you're actually leveraging? Aaron, can you talk about that a little bit and convince us that it's not just a sales play? >> Yeah, sure. And Jeff alluded to some of this earlier too, right. So we definitely each had our respective tooling, right, and Deloitte had invested in tools. One was called ATADATA, which we've seen used quite a few times now. We've invested in something we called Oracle Soar. You know, our tools are, as you'd imagine, heavily Oracle-focused; it's about moving Oracle technology to Oracle Cloud. ATADATA and some of the tools that Deloitte invested in are focused more comprehensively and holistically, looking at everything in a data center and everything that's across data centers, and they start to develop a set of facts around this stuff. But in both cases, we actually looked at these things and we said, 'You know what, if you combine these together, we get a very comprehensive view of what exactly it is that we're looking at with a customer.' So we can tell everything from the types of traffic we see in the network to the specific versions of stuff, we can start to identify whether there's risk associated with having things not patched or out of support, and get a very comprehensive view that's based on facts. 
And so you know, we took those tools and we've combined them together so that we can go into a customer and give a complete end-to-end view from both an Oracle and a Deloitte perspective, and quite frankly, it doesn't matter whether Deloitte leads or whether Oracle leads; we've developed these tools together, we're going to market together, and we've even got, you know, the templates you'd expect consultancies to have, right? So when you look at business cases, we've got joint business case templates that we've created together and that we're using actively with customers, and therefore we're refining them and improving them each time we do it. But you know, we're at a point now where our tools are combined, our templates are combined. And, you know, Jeff and I were on a call earlier, yesterday actually; we've even got a joint war room that's constantly engaging with different account teams, making sure that we structurally approach things in a consistent way so that we're driving business value and using the tools appropriately. >> Aaron, you and I have talked about, you know, data centers and building data centers and investing. It's just not a good use of capital today. There's so many other things that organizations can do. You guys have identified data center consolidation as, call it, you know, an initiative that you're seeing with customers. I wonder if we could talk about that a little bit. Is that kind of a starting point for conversations? >> Yeah, well, it's definitely a starting point, right. We refer to it as infrastructure-led transformation. And the appetite for that is certainly high. We're seeing an increased focus on, you know, what do customers need to do to take not just a workload here and there, but how do they get out of the data center business as a whole? So it's sort of a foregone conclusion, right? Like you just said, it's not really a question of should we invest in another data center, or should we invest in updating our data centers? The question has changed to: let's move to cloud, how do we get there? And let's move in a big way. And we're seeing that dialogue across all of our customers. And quite frankly, even for Oracle, it's been a learning curve for us, right? We started with an Oracle workload conversation, which is, you want to move this Oracle workload to Oracle's cloud, you want to move that Oracle workload over to the cloud. And really what we're finding is it's a wholesale transformation of everything in a data center to one or more clouds, right; again, often it's a multi-cloud strategy and that's okay. And we're having bigger conversations. The thing that has been really interesting as these conversations have evolved, and especially as we work with our partners at Deloitte, has been that, you know, we think that the combination of our cloud technology, the consulting services that Oracle Consulting and Deloitte can bring to bear, and then Oracle's ability to finance the whole deal, makes for some very compelling conversations with customers, because you can walk in to a CIO or a CFO and say: look, on day one you can actually have a lower spend than what you have today in your data center, and get a cloud transformation underway at the same time.
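As a rough sketch of the join-and-flag pattern Aaron describes above, the snippet below merges two hypothetical inventory feeds per host and flags software running out-of-support versions. Every hostname, field name, and the supported-version table here is invented for illustration; this models neither ATADATA nor Oracle Soar, only the general idea of building one fact base from multiple discovery views.

    # Hypothetical discovery feeds: a network view and a software view, keyed by host.
    network_view = {"db01": {"traffic_gbps": 4.2}, "app07": {"traffic_gbps": 0.3}}
    software_view = {
        "db01": {"product": "Oracle Database", "version": "11.2"},
        "app07": {"product": "WebLogic", "version": "14.1"},
    }
    # Assumed support policy: which versions of each product are still supported.
    supported = {"Oracle Database": {"19c", "21c"}, "WebLogic": {"14.1"}}

    def merge_and_flag(net, sw, policy):
        """Join both views per host and flag anything on an unsupported version."""
        report = {}
        for host in net.keys() | sw.keys():
            record = {**net.get(host, {}), **sw.get(host, {})}
            product, version = record.get("product"), record.get("version")
            record["out_of_support"] = product is not None and version not in policy.get(product, set())
            report[host] = record
        return report

    # Expected: db01 is flagged (11.2 is not in the supported set), app07 is not.
    for host, record in sorted(merge_and_flag(network_view, software_view, supported).items()):
        print(host, record)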
>> Let's talk a little bit more about that business case. Is that generally what you're seeing, where it starts is let's take some costs right out? And then Aaron, you and I talked about maybe investing that in the future, but is that really the starting point for the vast majority of customers: let's cut some costs right away and get a payback immediately? >> So I'd like to share our perspective, which is, you know, nobody spends money for the sake of spending money on technology; it's got to have meaningful business value. So the conversation really starts with renewal and a path to the cloud. But there's a natural opportunity for savings in consolidation that we take advantage of. We're not simply shifting from your hardware to the cloud. We're actually modernizing, which will result in significant savings. But it also gives the business something that they don't have today: a level of security and scalability, and the ability to run modern technology. Much faster, much better, and much more scalable. >> Jeff, can you give us a sense as to how far you're into this ELEVATE journey, maybe thinking about a couple of customers, either specifically or generically, kind of where you're at with them? How far along, maybe even some examples that you feel are representative? >> Sure, you know, the relationship has probably been about six, close to seven months of maturity. In that time, we've had an opportunity to work on several key clients at scale. We've worked together in collaboration on one of the nation's largest retailers in the grocery business. We've worked collaboratively in aerospace and defense, and also in the hospitality industry. In these cases, each one is at a different stage of maturity. One is done, one is in midstream, and one is at the early stages. And current economic conditions are driving a huge pipeline right now. I think our challenge right now is making sure that we identify those clients that can best take advantage of our services and our joint offering, to deal with that pipeline right now. What we're finding is that the savings are at least what we projected. In some cases, we're finding even more; what people say they have and what people say they do isn't necessarily what you find when you get in there. And in almost every case, we're finding that there's unused equipment, unused capacity that they currently have, redundancy, low utilization of their current assets. We can go a long way in streamlining that. Plus, I can't emphasize enough that these days security is a major concern. And we're adding a layer of security that they could never achieve themselves with software. >> How do you guys, and how do customers, want to approach the transaction? Is it a fixed fee? Is it T&M? Is it a situation where you participate in some of the savings or the gain? How does the pricing work? >> I'll start off by saying each deal is really custom built around what a customer really needs, what they're trying to get out of it. Right now, as an example, OpEx is very important. So we're engineering deals in a way that helps customers deal with their financial challenges, especially around OpEx. There are other structures that we can put in place. We have the backing of Oracle financing, so we can be very innovative on deals. They can be based on when value is attained, they can be milestone based. There's just, I think, a wide variety. 
I don't want to say unlimited, but a wide variety of different options that we can offer our clients in order to be able to deal with whatever financial challenge or opportunity they may be looking at. >> What does success look like? You know, you're just less than a year in; when you're two, three, four, let's say five years in and you look back, what does success look like, Aaron? >> So to me, success is going to look like we've gotten a number of these big transformation deals in play. It's in motion naturally between our organizations, not necessarily driven entirely by Jeff and me going out and driving organizational behavior right away. It's more in our DNA. But more importantly, I think we've gone beyond the conversation of let's move workloads; we've gone into conversations of let's really talk about how to reimagine your business on top of Oracle's cloud, and have an ongoing dialogue that looks at that transformation. Once we hit that point three, four or five years from now, right, that'll be a wild success. >> Jeff, final comment? >> Deloitte has been around for 175 years. This is our birthday this year, and in that time, what we've learned is there's no substitute for impact and value added to our clients. In our perspective, what success looks like is client success. Client success means improved scalability of their operations, securing their technology and their data at a substantially lower cost, so that they can focus on what their core business is, and focus less on technology. That's success to Deloitte. >> Great, guys, thanks so much. Great session; we're not only witnessing the rebirth of Oracle Consulting, but there's clearly a transformation going on. And it's cultural. Gentlemen, congratulations on your partnership. And thanks so much for coming on the Cube. >> Thank you so much. >> Thanks for having us.

Published Date : Jul 6 2020

SUMMARY :

Dave Vellante talks with Aaron Millstone of Oracle Consulting and Jeff Davis of Deloitte about ELEVATE, their joint offering combining Deloitte's project and change management strengths with Oracle Consulting's technical depth, the joint tooling built from ATADATA and Oracle Soar, infrastructure-led transformation and data center consolidation, flexible deal structures backed by Oracle financing, and what success looks like for the partnership.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jeff | PERSON | 0.99+
Jeff Davis | PERSON | 0.99+
Aaron | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
Deloitte | ORGANIZATION | 0.99+
Oracle Consulting | ORGANIZATION | 0.99+
Michael | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
Aaron millstone | PERSON | 0.99+
two | QUANTITY | 0.99+
five years | QUANTITY | 0.99+
yesterday | DATE | 0.99+
three | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
both cases | QUANTITY | 0.99+
seven months | QUANTITY | 0.99+
one | QUANTITY | 0.99+
less than a year | QUANTITY | 0.99+
each | QUANTITY | 0.99+
One | QUANTITY | 0.99+
four | QUANTITY | 0.99+
this year | DATE | 0.98+
both | QUANTITY | 0.98+
each deal | QUANTITY | 0.98+
today | DATE | 0.97+
two capabilities | QUANTITY | 0.95+
ATADATA | ORGANIZATION | 0.92+
175 years | QUANTITY | 0.88+
about six | QUANTITY | 0.85+
Aaron | ORGANIZATION | 0.83+
point | QUANTITY | 0.82+
each one | QUANTITY | 0.76+
day one | QUANTITY | 0.73+
Oracle Soar | ORGANIZATION | 0.72+
Barney | ORGANIZATION | 0.56+
Op | TITLE | 0.5+
Cloud | TITLE | 0.47+

Oracle Consulting Transformation


 

>> Announcer: From theCUBE studios in Palo Alto and Boston, it's theCUBE covering Empowering the Autonomous Enterprise. Brought to you by Oracle Consulting. >> Hello, everyone, and welcome to this CUBE special presentation where we're covering the rebirth of Oracle Consulting. So this is a digital event where we're going around and identifying subject matter experts in different locations. We're currently here in Chicago, and I'm here with Stephanie Trunzo who's the head of transformation and offerings at Oracle Consulting. Stephanie, great to see you again. >> Yeah, you too. >> So Oracle Consulting, you know, you guys have been quiet lately. Where've you been? >> Well we were quiet because I wasn't here yet. And now I am. >> Dave: Making noise. >> Yeah, exactly. Here to make some noise. So I love the way you said rebirth. I think it's really accurate. Oracle Consulting has been around for quite some time. But as you said, maybe not high on the radar. And one of the things that we're learning and one of the reasons I'm here in this transformation role is to help us transform ourselves to better match the transformation that our clients are going through. >> So was there an internal transformation, or is there an internal transformation taking place as well and then you sort of pointed it to the marketplace? Maybe you could describe that a little bit. >> Absolutely, yeah, so we are undergoing our own transformation at the same time that we're helping our clients undergo their transformation. And so for us what that looks like, it is things like a traditional services organization, which is kind of what Oracle Consulting had been in the past, was looking at the expertise that was necessary to drive clients' business forward but delivering it in what I would call a pretty traditional way. Time and materials based kinds of contracting, determining the skills that were necessary, and conversing with clients in feature function kinds of discussions. And our transformation is now about rebuilding the organization around offerings. And those offerings are things that we're doing to match the way that our clients are consuming, let's say, cloud technology. So if you might purchase a natural language processing service from a cloud platform, we want to also make sure that we're matching the humans to those technology services and enabling our clients to buy from us in a very similar way. >> You're also bringing in some new blood. I mean, obviously Oracle, large organization, lot of DNA there, but yourself, you came from IBM. You got people coming in from AWS. You got folks from Accenture and all over the place. Describe that and how that's affecting the culture of Oracle Consulting. >> There is an influx of talent that is necessary to change the way that you think. And I believe that one of the reasons I myself came to join Oracle Consulting was I was excited about this new adventure. So when you're working in a certain style, in a certain way, in a certain team for some amount of time, you can maybe forget to get introspective and forget to look at what's right in front of you and the changes you need to make. So bringing in new talent from outside is as much a part of our transformation as the way that we're shaping our offerings is. Bringing in those new ideas, bringing in people who have been there, done it in other experiences so that they can infuse our thinking with some of what's going on in the market around us. >> How would you summarize the mission of Oracle Consulting? 
>> The mission of Oracle Consulting is extremely simple. It's dead simple. It's: help our clients succeed on Oracle cloud technology, period. >> Of course, Oracle's known as a product company. You sell software products. That's how you generate most of your revenue. And you've got your cloud. You've got things like Cloud at Customer and Exadata that are really driving it. You've got the Oracle database, you know, certainly the huge application portfolio. How is Oracle Consulting aligning with the products? >> As a product company, our goal is still to help our clients achieve their goals, right? And so consulting is looking at our Oracle product set to make sure that we are always the deepest and the best at understanding it, so we can help leverage that technology to its fullest capacity for our clients. It's not just good enough to buy a tool. You have to know how to use it, right? And so our objective is to align with Oracle products, make sure we know what's going to be hot off the press, and that we're driving from our client experiences back into the product sets as well. So we're informing our product development of what's really happening out in the world with our clients' implementations. >> My last question, Stephanie, is how are you going to define success? When you look back a couple years from now, what will success look like? >> Success to me will look like being the go-to for any solution that is an Oracle-driven answer for our clients. That Oracle Consulting is driving consumption in a way that is extremely valuable to the client, because in the end cloud consumption, technology consumption, in and of itself, is not very interesting. It's when we're telling stories that are our clients' stories on stages, because we've helped them achieve new business outcomes. Things that weren't possible for them before. >> Well it's great to have you. Thank you so much for coming on, and it's good to have you at the helm, sort of bringing credibility to Oracle Consulting. And we'll be watching, so thank you. >> Awesome, thank you. >> And thank you for watching. We'll be right back with our next guest right after this short break. You're watching theCUBE. (upbeat music)

Published Date : Jul 6 2020

SUMMARY :

Dave Vellante talks with Stephanie Trunzo, head of transformation and offerings at Oracle Consulting, about rebuilding the organization around offerings that match how clients consume cloud services, bringing in new talent from outside, aligning closely with Oracle's product teams, and defining success as clients achieving new business outcomes on Oracle cloud technology.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Stephanie Trunzo | PERSON | 0.99+
Stephanie | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
Chicago | LOCATION | 0.99+
Oracle Consulting | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
Boston | LOCATION | 0.99+
Oracle Consulting | ORGANIZATION | 0.99+
Accenture | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
theCUBE | ORGANIZATION | 0.87+
CUBE | ORGANIZATION | 0.7+
couple years | DATE | 0.47+

Wim Coekaerts, Oracle | CUBE Conversation, May 2020


 

>> From theCUBE studios in Palo Alto and Boston, connecting with thought-leaders all around the world, this is a Cube Conversation. >> Hi everybody, this is Dave Vellante. Welcome to this Cube Conversation. We're really excited to have Wim Coekaerts in, he is the senior vice-president of software development at Oracle. Wim, it's great to have you on, and, you know, I often say I wish we were face-to-face, but if we were you'd have to cut off my tie, 'cause developers and ties just don't go together. >> No, I know, and this is my normal outfit, so this is me wherever I go. Hi again, good to see you. >> Yeah, great to see you. So, of course, you know a lot of people are confused about Oracle and open-source, they say "Oracle? Open-source? What is that all about?" But I think you're misunderstood. People don't, first of all, realize you as the leader of the software-development community inside of Oracle, I mean, you've been involved in Linux since the early 90s. But you guys have a lot of committers, you do a lot. I want to talk about that. What is up with Oracle, and open-source? >> Ah, well, it's a broad question. So, you know, a couple of things. One is, we have many different areas within the company that are dealing with open-source. So we have the cloud team doing a lot of stuff around cloud SDKs and support for different languages like Python and Go, and of course Java and so forth, so they do a lot around ensuring that the Oracle ecosystem is integrated in the open-source tools that customers use, or developers use, Terraform and so forth. And then you have the Java team. Java is open-source, and then the Graal project, GraalVM, which is a polyglot compiler that can run Java, and Python, and JavaScript and so forth together in one VM and do really cool optimizations; that's an open-source project, also on GitHub. There's of course MySQL, which is, along with Java, probably the two most popular and widely used open-source projects out there. There's VirtualBox, which is of course also a very popular project that's open-source. There's all the work we do around Linux. And I think one of the things is that, when you have so many different areas doing things that are for that area, then as a developer or as a customer, you typically just deal with that group. And what you see is, oh, you're talking to the Java developers, so you know what's going on around Java. The Java developers might not necessarily say, "Oh well we also do MySQL, and we do Linux and VirtualBox and so forth," and so you get a rather myopic, narrow view of the larger company. When you add all these things up, there would be one big slide that says "This is Oracle, these are all these open source projects," and there's multiple ways. One is, we have projects that we've open-sourced and all the code came from us and we made it publicly available; we're the main contributor and we get contributions back. There are other projects where we contribute to third-party projects, in terms of enhancing things, like I said with the cloud team, and then in general something like Linux, where we're part of an external project and we participate in the development of that project at large. And so there's these three different ways, and when you count up all the developers that we have that deal with open-source on a daily basis, in terms of contributions, in terms of bug fixes, testing, and so forth, it's thousands, literally, of full-time paid developers. 
And of course, all the projects are all either on GitHub or similar sites that are very popular. So yeah, I think the misunderstanding is probably a lack of knowledge of the breadth of what we do. And, you know, our primary goal is to provide services and products to customers, and so the open-source part is sort of embedded in a development methodology. But that's not something we sell or market separately; we just work with customers and products and services, and so in some cases it's not well-understood. >> Yeah. Well, we're talking of course, we're talking about the state of the penguin, I think it's important for people to understand, Oracle got into the Linux game in the 90s, maybe the latter part of the 90s, and Oracle, of course, wants to make Linux-- wants to make Oracle, its applications and database, run better on Linux, but as you're pointing out, your Linux distro, full support, end-to-end, thousands of people in your open-source community, and the contributions that you make to Linux, many if not most, they go upstream, everybody can benefit from those, but of course you want an Oracle distro that is going to make Oracle stuff run better, that's always kind of been the Oracle way. >> Well, so, yes, two things though. One is, so everything we do is upstream. So we have no Linux patches that are not contributed upstream; there's no proprietary code in Oracle Linux at all, it's all completely open, publicly available: the source code, the change log, all the commits, it's fully open and public, which sometimes is not well-understood, but it's completely open. And everything we do in terms of feature development or functionality or bug fixes goes upstream to the Linux kernel mailing list. It's actually the only way to be able to manage a Linux distribution and be a Linux vendor, to live in that ecosystem. Otherwise, the cost of maintaining your own fork, so to speak, is very high and it doesn't really solve the problem. Now, the functionality we work on obviously is focused on making Oracle products run better, making Oracle Cloud run better, and so forth. However, again, what's important to understand is that an Oracle database is a program running on an operating system. It does IO, it does networking, it deals with memory management, lots of processing. So, for the most part, the things that we work on to improve that help everyone out, right? It helps every other database run better, it helps every other language run better. So none of these changes are specific to Oracle, they're just things that we found doing performance benchmarks and testing and so forth, where we say, "Hey, if Linux did the following, it would make boot-up faster." Now boot-up has nothing to do with the database. But our customers run on 1-terabyte, 4-terabyte, 8-terabyte systems, and so booting up, and Linux starting up and cleaning up memory, takes a long time. So we want to reduce that from an availability point of view. So here we're now talking about just general enterprise value. So there's this broad set of things we work on that definitely help us, but they're actually completely generic and help everyone out. >> Yeah, that's great. So I wanted to kind of get that out of the way and help our audience understand that. So let's get into it a little bit; What are you seeing, what's going on in IT, pick your observation space and your vision of what you see happening out there. >> Well, you know, it's very interesting, it's sort of, there's two... 
there's sort of two worlds, right, there's the cloud world and the move to cloud, and there's the on-premises world, where people run their systems on their own. And one of the things that we've learned is, when you talk about machine-learning, obviously, it's something that's very popular these days, along with automation. And so in order to rely on machine-learning well, and have algorithms that are very effective, you need lots of data. And so being a cloud vendor, and having Linux in our cloud on tens of thousands, or hundreds of thousands of servers, or more, allows us to have a view of how an operating system works across an incredibly large scale. So we get lots of data. And so for us to figure out which algorithms work well in terms of how can we do network optimizations, how can we discover anomalies on the storage side, and deal with it and so forth, we can do that at scale. And what's interesting is, how do we then bring that on-prem? Well, if we can get the data and the learning done, the training done, in our cloud directly, then when we provide that service also for people running Oracle Linux on premises, then that will work. The alternative is to have point solutions where you provide something to a customer, and it needs to learn something from small amounts of data. That doesn't work so well. So I think having both worlds, on-prem and cloud directly, allows us to kind of benefit from that. And I think that's important, because lots of customers are interested in going to cloud. Many of the enterprises have not yet. You know, they're starting, but there's still a huge on-premises space that's important. And so by being able to get them familiar with how these things work at scale, autonomy is again important, right, Autonomous Database is incredibly popular and so forth, that allows us to then say, "Here, try these things out here, it's a service. We can show you the benefits right away," and then as that improves we bring that, to a certain extent, on-premises as well. And then they can have it in both places. And that, I think, is something, again, that's relatively unique but also very important, is that we want to provide services and products that act similarly on-premises as well as in cloud, because at some point when people move we want to make that transition seamless. And what you have today for the most part is one world that's on-prem, and then the cloud world is completely different. And that is a big barrier to moving, and so we want to reduce that; we can run the same operating system locally as well as in cloud, you get the same functionality, and then that helps transition people over much easier. 
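As a toy illustration of the pattern Wim describes, learning a baseline from telemetry and flagging deviations, here is a minimal z-score detector. The latency readings and the threshold are made up; Oracle's actual models aren't described in the conversation, so this stands in only for the general idea that more data gives a better baseline.

    from statistics import mean, stdev

    def zscore_anomalies(samples, threshold=2.5):
        """Return (index, value) pairs lying more than `threshold`
        sample standard deviations away from the mean."""
        mu, sigma = mean(samples), stdev(samples)
        return [(i, x) for i, x in enumerate(samples)
                if sigma > 0 and abs(x - mu) / sigma > threshold]

    # Hypothetical per-minute disk latency readings (ms) from one host.
    latency_ms = [1.1, 0.9, 1.0, 1.2, 0.8, 1.1, 9.7, 1.0, 1.1]
    print(zscore_anomalies(latency_ms))  # -> [(6, 9.7)]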
So one is that we obviously continue to provide products to customers: you can download Oracle Linux, you can download the database and whatnot, you can install it on your own, you can do the traditional way of working. Then in the cloud world, what typically happens is, "Oh, I use a database service. I'm not installing anything; I push a button and I get an IP address and a SQL connection to the database extremely quickly." And we take care of everything underneath that database. Now, in order to do that, you need a whole infrastructure in place: you need logging agents, you need a back-end that captures all that stuff, you need monitoring tools, you need all the automation scripts for bringing the service up and monitoring it. And so, that takes a lot of time to do right, and we learn a lot by doing this. And so the cloud-first part of these services means that we get to experience this ourselves, with direct access to everything. Now taking that service with all of the additional features like autonomy, and bringing that to an on-premises world, we have to make sure we can package that so that all these pieces around it go along with it. And that takes a little bit more time, so we can't do everything at the same time. And so what we've done with Autonomous Database is we created everything in Oracle Cloud, we have the whole system running really well, and then we've been able to sort of package that and shrink it into something that can be installed on-premises, but then connected into Oracle Cloud again. And so that way we can get all the telemetry and the metrics, and that allows us to scale. Because part of providing a cloud service that runs on-prem in the customer environment is that we need to be able to remotely manage it, similar to how that runs in our own cloud. Right, otherwise it doesn't scale. And so that takes a little bit of time, but we've done all that work, and now with Cloud at Customer Database that's really in place. 
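The "push a button and get a connection" consumption model Wim sketches can be pictured with the python-oracledb driver. This is only a hedged sketch: the user, password, and DSN alias below are placeholders, and a real Autonomous Database service supplies its own credentials and connect string.

    import oracledb  # python-oracledb, the successor to cx_Oracle

    # All connection details here are hypothetical placeholders.
    conn = oracledb.connect(
        user="demo_user",
        password="demo_password",
        dsn="mydb_high",  # a service alias as found in a downloaded tnsnames.ora
    )

    with conn.cursor() as cur:
        cur.execute("select sysdate from dual")  # trivial round-trip to the service
        print(cur.fetchone()[0])

    conn.close()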
We can't have these arguments and such. And that was the driver, initially, for doing Oracle Linux. It was to ensure there was a Linux distribution really backed by us, that we could fix, that we could fully support. That was completely the original intent. And so the early customer base was database customers. Database and middleware. Mostly database. But that has then evolved quickly, and so what happened was, people say "Look, I have a thousand servers, a hundred run Oracle, so we'll run Oracle Linux on those hundred, and we'll run something else on those other nine-hundred." Now after a year or so, they realize that our support is really good; We fix all these issues, and so then they're like "Why are we having two Linux distributions? This thing works really well, it runs any application, it's fully compatible, so we'll do a thousand with Oracle Linux." And so the early days, the first few years, was definitely Oracle Database as the core driver, and then it sort of expanded to the rest of the estate. And over the years, we've added lots of features and functionality, like Ksplice, and so forth. We have an attractive pricing model for running on servers, and so now lots of our customers have a very small Oracle percentage running and many other things running. So it's really become a all-or-nothing play in the Linux space, and we're well-known now, so it's actually very good. >> You just mentioned Ksplice. We've been talking about cloud, and on-prem, and hybrid. Let's talk about security, because security really is a differentiator, particularly if you're going to start to put stuff in the cloud. Talk about Ksplice specifically, but generally security and your policy there. >> So, "Security first" is sort of what you hear us say and do, in everything we do. The database obviously security, on the Linux site security matters. Ksplice as a technology is there to do critical bug-fixing and make sure that we can apply security vulnerability fixes without affecting the customer, and not have downtime. And if you look at most of the cases or many of the cases where you have security vulnerabilities and exploits, it tends to be because systems were not patched. Why were they not patched? Well not that our customer doesn't understand that it's important, but it's a whole train of events that needs to happen. You have to, you get notified that there's a security issue in your operating system or application. Then, well, an application typically means it's a multi-layered setup. So if you have to bring your database server down, then you first have to coordinate with the application users to bring the app server down, cause that talks to the database. So to patch one system, you basically have to bring down the whole application stack. You have to negotiate with the DBAs, you have to negotiate with the app admins, you have to negotiate with the user. It takes weeks to do that and find time. Well during that time, you're vulnerable. So the only way, really, to address security in a scalable and reducing that window of time is to do it without affecting the customer. And so Casewise is something that, it's a company we acquired in 2009, and have since evolved in terms of capabilities, and so it allows us to patch the Linux terminal without downtime. We lock the kernel for 8 microseconds. It's literally no downtime. You don't have to bring down applications, the user doesn't see it, there's no hang, there's no delay. 
And so by doing that, you can run a Linux operating system, or gLinux, and you can be fully patched on a system that hasn't rebooted for 3 years. You don't even know it. And so by doing that type of stuff, it makes customers more secure, and it avoids them-- It saves them a lot of money in terms of dealing with project management and so forth, but it really keeps them secure. And so we do that for the Linux kernel, we do that for some of the libraries on top that are critical like OpenSSL and 2 LVC, and, you know one example-- I can give you two examples. So one example is, Heartbleed was this bug in OpenSSL a number of years ago. And so everyone had to patch their SSH server. And that meant, basically, systems around the world had to reboot. Like a whole IT reboot across the world. With Ksplice today, if Heartbleed were to happen tomorrow, we would be able to patch this online for all the Oracle Linux customers without any downtime. No reboots, no restarting of applications, everything keeps running. The amount of money saved would be massive, and also, of course, the headache. Another example is, and this was in Oracle Cloud, when some of these CPU bugs that happened a few years ago that were rather damaging on the cloud side, where you could basically see memory potentially of other CPUs running, the cloud is incredibly critical. We were basically able to basically patch our entire cloud in four hours. And the customer didn't know, right, a hundred and twenty million patches, or something, that we applied within four hours, all online, without any downtime. And so that technology has been really helpful, both for us to run our cloud, but the exact same patches and same fixes go to customers on-premises as well. But this comes back to the whole, what we do in cloud we also do for customer. And I think that's a unique thing that we have at Oracle which is quite fascinating. The operating system we run for our customers, the operating system that's the host part of VMs, is the exact same binary and source code that we make available, just to be clear, the exact same binaries are the ones that you run as a customer on-premises. So if you run Oracle Linux with KVM, you run VMs, you're actually running the exact same stuff as we run underneath our customer's stuff. Nobody else does that, everyone else has a black box. So I think that helps a little bit with transparency as well. >> Yeah, and that homogeneity just creates an environment, you're talking about that sort of security mindset, it's critical, you're not just bolting it on, it's part of the culture. But you started your career, and then of course you were a Linux person when you came to Oracle, but then I think you spent some time in database, back in the day when there were serious database wars going on, before Oracle became the king of database. So now you've got, obviously, this great portfolio, and a lot of really sharp software developers; What should we expect going forward, from Oracle? What should we look for? >> You know, I was talking to some, I was welcoming some interns to the company, for their summer internship yesterday, and one of the things I mentioned to them was that -- so cloud obviously gives us a lot of opportunities, but there's a number of things. One is, we have such a breadth of applications and software and hardware together. 
>> Yeah, and that homogeneity just creates an environment, you're talking about that sort of security mindset, it's critical, you're not just bolting it on, it's part of the culture. But you started your career, and then of course you were a Linux person when you came to Oracle, but then I think you spent some time in database, back in the day when there were serious database wars going on, before Oracle became the king of database. So now you've got, obviously, this great portfolio, and a lot of really sharp software developers. What should we expect going forward, from Oracle? What should we look for? >> You know, I was talking to some, I was welcoming some interns to the company, for their summer internship yesterday, and one of the things I mentioned to them was that -- so cloud obviously gives us a lot of opportunities, but there's a number of things. One is, we have such a breadth of applications and software and hardware together. We have the servers, we have the storage, we have the operating systems, we have the database layer and so forth, and we have the cloud side, and one of the great opportunities, and I think we've shown a lot of this happening with the ability to create something like Autonomous Database, is to combine all these things. Right, we have such a broad portfolio of really cool technology that by itself is okay, but if you combine the things it really becomes awesome. You cannot create autonomous database without having machine learning. You cannot create those two and make them really safe without also controlling the firmware on the hardware and so forth. So by being able to combine all these layers, and by having a really great relationship across the teams within the company, that opens up a lot of opportunities to do stuff really quickly. And having the scale for that. I think that has been, for the last few years, a really great thing, but I can see that being one of the advantages that we have going forward. We have Oracle Fusion Applications, which is incredibly popular and has great growth, and then we have that running on Oracle Cloud, and that talks to Oracle Autonomous Database, so we bring all these pieces together. And no other SaaS vendor can do that, because they don't have these other pieces. They have one area, we have all of them. And so that's the exciting part for me, it's not so much about making my own world better, and having Linux be better, and Ksplice and so forth, which is important, but that becoming part of the bigger picture. And that's the exciting part. >> Well, Oracle's always invested in R&D, we've made that point many, many times. Whether it's database, you know Fusion was a painful but worthy effort, the whole public cloud piece, obviously many acquisitions, but the investments that you've made in open-source as well, Wim, you're a great spokesperson, and a great representative of the open-source community generally, and of Oracle specifically, so thanks very much for coming on theCUBE and sharing with us the state of the penguin, and best of luck. >> You're welcome. Thank you, thanks for having me. >> Alright, and thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll see you next time. (cheerful music).

Published Date : May 26 2020

SUMMARY :

Dave Vellante talks with Wim Coekaerts, senior vice president of software development at Oracle, about the breadth of Oracle's open-source work across Java, GraalVM, MySQL, VirtualBox, and Linux, the upstream-first development model behind Oracle Linux, how running Linux at cloud scale feeds the machine learning behind autonomous services that also reach on-premises customers, and how Ksplice patches the kernel with zero downtime.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
May 2020 | DATE | 0.99+
Oracle | ORGANIZATION | 0.99+
2009 | DATE | 0.99+
2006 | DATE | 0.99+
3 years | QUANTITY | 0.99+
two examples | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
Wim Coekaerts | PERSON | 0.99+
1-terabyte | QUANTITY | 0.99+
one example | QUANTITY | 0.99+
8 microseconds | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
two | QUANTITY | 0.99+
8-terabyte | QUANTITY | 0.99+
Java | TITLE | 0.99+
Javascript | TITLE | 0.99+
4-terabyte | QUANTITY | 0.99+
tens of thousands | QUANTITY | 0.99+
Python | TITLE | 0.99+
Linux | TITLE | 0.99+
San Francisco Moscone Center | LOCATION | 0.99+
October 25th, 2006 | DATE | 0.99+
MySQL | TITLE | 0.99+
thousands | QUANTITY | 0.99+
four hours | QUANTITY | 0.99+
OpenSSL | TITLE | 0.99+
first | QUANTITY | 0.99+
yesterday | DATE | 0.99+
One | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Heartbleed | TITLE | 0.98+
two things | QUANTITY | 0.98+
hundreds of thousands | QUANTITY | 0.98+
tomorrow | DATE | 0.98+
nine-hundred | QUANTITY | 0.98+
both | QUANTITY | 0.98+
today | DATE | 0.98+
Wim | PERSON | 0.98+
gLinux | TITLE | 0.98+
GitHub | ORGANIZATION | 0.98+
fourteen years ago | DATE | 0.98+