Digging into HeatWave ML Performance


 

(upbeat music) >> Hello everyone. This is Dave Vellante. We're diving into the deep end with AMD and Oracle on the topic of MySQL HeatWave performance, and we want to explore the important issues around machine learning. As applications become more data intensive and machine intelligence continues to evolve, workloads are seeing a major shift where data and AI are being infused into applications. Having a database that simplifies the convergence of transactional and analytics data, without the need to context switch and move data out of and into different data stores, and that eliminates the need to perform extensive ETL operations, is becoming an industry trend that customers are demanding. At the same time, workloads are becoming more automated and intelligent. To explore these issues further, we're happy to have back in theCUBE Nipun Agarwal, who's the Senior Vice President of MySQL HeatWave, and Kumaran Siva, who's the Corporate Vice President of Strategic Business Development at AMD. Gents, hello again. Welcome back. >> Hello. Hi Dave. >> Thank you, Dave. >> Okay. Nipun, obviously machine learning has become a must-have for analytics offerings, and it's integrated into MySQL HeatWave. Why did you take this approach and not the specialized-database approach that many competitors take, the right tool for the right job? >> Right. So, there are a lot of customers of MySQL who need to run machine learning on the data which is stored in the MySQL database. In the past, customers would need to extract the data out of MySQL and take it to a specialized service for running machine learning. Now, there are multiple reasons we decided to incorporate machine learning inside the database. One, customers don't need to move the data. And if they don't need to move the data, it is more secure, because it's protected by the same access control mechanisms as the rest of the data. There is no need for customers to manage multiple services. But in addition to that, when we run the machine learning inside the database, customers are able to leverage the same service and the same hardware which has been provisioned for OLTP and analytics, and use the machine learning capabilities at no additional charge. So from a customer's perspective, they get the benefit that it is a single database, they don't need to manage multiple services, and it is offered at no additional charge. And then there is another aspect, which is based on the IP and the work we have done: it is also significantly faster than what customers would get by having a separate service. >> Just to follow up on that, how are you seeing customers use HeatWave's machine learning capabilities today? How is that evolving? >> Right. So one of the things which customers very often want to do is to train their models based on the data. Now, data in a transactional database changes quite rapidly, so we have introduced support for auto machine learning as part of HeatWave ML, and what it does is fully automate the process of training. This is something which is very important to database users, very important to MySQL users: they don't really want to hire data scientists or specialists for doing training. So that's the first part, that training in HeatWave ML is fully automated. It doesn't require the user to provide any specific parameters, just the source data and the task for which they want to train. The second aspect is that the training is really fast.
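To make the automated flow Nipun describes concrete, here is a rough, illustrative sketch of driving HeatWave ML from a Python client. It assumes the mysql-connector-python package and uses the HeatWave ML stored procedures (sys.ML_TRAIN, sys.ML_MODEL_LOAD, sys.ML_PREDICT_TABLE, sys.ML_EXPLAIN_TABLE) as published by Oracle; exact signatures and option names vary by service version, and the endpoint, schema, and table names here are hypothetical.

```python
# Illustrative sketch: automated training, inference, and explanations with
# HeatWave ML, driven from Python. Table/column names are hypothetical and
# procedure signatures may differ by MySQL HeatWave version.
import mysql.connector

conn = mysql.connector.connect(
    host="heatwave-db.example.com",   # hypothetical DB system endpoint
    user="ml_user", password="...", database="ml_data")
cur = conn.cursor()

# 1. Train: only the source table, target column, and task are supplied;
#    algorithm selection, feature selection, and tuning are automated.
cur.execute("""
    CALL sys.ML_TRAIN('ml_data.loan_train', 'approved',
                      JSON_OBJECT('task', 'classification'), @model)
""")

# 2. Load the trained model into HeatWave memory, then run batch inference.
cur.execute("CALL sys.ML_MODEL_LOAD(@model, NULL)")
cur.execute("""
    CALL sys.ML_PREDICT_TABLE('ml_data.loan_apply', @model,
                              'ml_data.loan_predictions')
""")

# 3. Explanations: per-row feature attributions for the same input table.
cur.execute("""
    CALL sys.ML_EXPLAIN_TABLE('ml_data.loan_apply', @model,
                              'ml_data.loan_explanations')
""")

cur.execute("SELECT * FROM ml_data.loan_predictions LIMIT 5")
for row in cur.fetchall():
    print(row)
conn.close()
```

The point of the sketch is the shape of the workflow: the only inputs to training are the source table, the target column, and the task; predictions and explanations are written back to tables inside the same database, so nothing leaves the service.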
Because the training is really fast, the benefit is that customers can retrain quite often. They can make sure that the model is up to date with any changes which have been made to their transactional database, and as a result of the models being up to date, the accuracy of the predictions is high. So that's the first aspect, which is training. The second aspect is inference, which customers run once they have the models trained. And the third thing, which has perhaps been the most sought-after request from MySQL customers, is the ability to provide explanations. HeatWave ML provides explanations for any model which has been generated or trained by HeatWave ML. So these are the three capabilities: training, inference, and explanations. And this whole process is completely automated; it doesn't require a specialist or a data scientist. >> Yeah, that's nice. I mean, training is obviously very popular today. I've said inference, I think, is going to explode in the coming decade. And then of course explainable AI is a very important issue. Kumaran, what are the relevant capabilities of the AMD chips that are used in OCI to support HeatWave ML? Are they different from, say, the specs for HeatWave in general? >> So, actually they aren't. And this is one of the key features of this architecture, or this implementation, that is really exciting. With HeatWave ML, you're using the same CPU. And by the way, it's not a GPU, it's a CPU, for all three of the functions that Nipun just talked about: inference, training, and explanation, all done on the CPU. You know, bigger picture, with the capabilities we bring here, we're really providing a balance between the CPU cores, memory, and the networking. And what that allows you to do here is feed the CPU cores appropriately. And within the cores, we have the AVX instruction extensions: with the Zen 2 and Zen 3 cores we had AVX2, and then with the Zen 4 core coming out we're going to have AVX-512. So with that balance of being able to bring in the data, utilize the high memory bandwidth, and then use the computation to its maximum, we're able to provide enough AI processing to get the job done, and we're able to fit into that larger pipeline that we build out here with HeatWave. >> Got it. Nipun, you know, you and I, every time we have a conversation we've got to talk benchmarks. So you've done machine learning benchmarks with HeatWave. You might even be the first in the industry to publish, you know, transparent, open ML benchmarks on GitHub. I wouldn't know for sure, but I've not seen that as common. Can you describe the benchmarks and the data sets that you used here? >> Sure. So what we did was we took a bunch of open data sets for two categories of tasks: classification and regression. We took about a dozen data sets for classification and about six for regression. To give an example, the kinds of data sets we used for classification were the airlines, sensors, and bank data sets, right? So these are open data sets. And what we did was, on these data sets, we did a comparison of what it would take to train using HeatWave ML, and the other service we compared with is Redshift ML. So, there were two observations. One is that with HeatWave ML, the user does not need to provide any tuning parameters, right? HeatWave ML, using AutoML, fully generates a trained model; it figures out the right algorithms,
the right features, and the right hyperparameters, right? So there is no need for any manual intervention; not so the case with Redshift ML. The second thing is the performance, right? So consider the performance of HeatWave ML in aggregate on these 12 data sets for classification and the six data sets for regression. On average, it is 25 times faster than Redshift ML, and note that Redshift ML in turn involves SageMaker, right? So on average, HeatWave ML provides 25 times better performance for training, and the other point to note is that there is no need for any human intervention; it's fully automated. But in the case of Redshift ML, many of these data sets did not even complete in the set duration. If you look at price performance, one of the things I again want to highlight is that because AMD does pretty well on all kinds of workloads, we are able to use the same cluster for analytics, for OLTP, or for machine learning. So there is no additional cost for customers to run HeatWave ML if they have provisioned HeatWave. But assuming a user is provisioning a HeatWave cluster only to run HeatWave ML, right, even in that case the price-performance advantage of HeatWave ML over Redshift ML is 97 times, right? So, 25 times faster at 1% of the cost compared to Redshift ML. And all these scripts and all this information are available on GitHub for customers to try, to modify, and to see what advantages they would get on their own workloads. >> Every time I hear these numbers, I shake my head. I mean, they're just so overwhelming. And so we'll see how the competition responds when, and if, they respond. So, thank you for sharing those results. Kumaran, can you elaborate on how the specs that you talked about earlier contribute to HeatWave ML's, you know, benchmark results? I'm particularly interested in scalability. Typically things degrade as you push the system harder. What are you seeing? >> No, I think it's good. Look, those numbers just blow my head too. That's crazy good performance. So look, from an AMD perspective, we have really built an architecture; if you think about the chiplet architecture to begin with, it is fundamentally, you know, kind of scaling by design, right? And one of the things that we've done here is been able to work with the HeatWave team and the HeatWave ML team, and then been able to, within the CPU package itself, scale up to make very efficient use of all of the cores, and then of course work with them on how you go between nodes. So you can have these very large systems that can run ML very, very efficiently. So it's really, you know, building on the building blocks of the chiplet architecture and how scaling happens there. >> Yeah. So you're saying it's near-linear scaling, essentially? >> So, let Nipun comment on that. >> Yeah. >> So, how about as cluster sizes grow, Nipun? >> Right. >> What happens there? >> So one of the design points for HeatWave is a scale-out architecture, right? So as you said, as we add more data or increase the size of the data, or we add more nodes to the cluster, we want the performance to scale. And we show that we have a near-linear scale factor, or near-linear scalability, for SQL workloads, and in the case of HeatWave ML as well.
As users add more nodes to the cluster, so the size of the cluster grows, the performance of HeatWave ML improves. So I was giving you this example that HeatWave ML is 25 times faster compared to Redshift ML; well, that was on a cluster size of two. If you increase the cluster size of HeatWave ML to a larger number, I think the number is 16, the performance advantage over Redshift ML increases from 25 times faster to 45 times faster. So what that means is that on a cluster size of 16 nodes, HeatWave ML is 45 times faster for training these same dozen data sets. So this shows that HeatWave ML scales very well as computation is added. >> So you're saying adding nodes offsets any management complexity that you would think of as getting in the way. Is that right? >> Right. So one part is the management complexity, which is why, with features like elasticity, customers can scale up or scale down very easily. The second aspect is, okay, what gives us this advantage of scalability? Or how are we able to scale? Now, the techniques which we use for HeatWave ML scalability are a bit different from what we use for SQL processing. In the case of HeatWave ML, there are really a few trade-offs which we have to be careful about. One is the accuracy, because we want to provide better performance for machine learning without compromising on accuracy. So accuracy would require more synchronization if you have multiple threads, but if you have too much synchronization, that can slow down the degree of parallelism that we get, right? So we have to strike a fine balance. What we do in HeatWave ML is that there are different phases of training, like algorithm selection, feature selection, and hyperparameter tuning, and each of these phases is analyzed. For instance, one of the techniques we use is that if you're trying to figure out the optimal hyperparameters, we start with the search space, and then each of the VMs gets a part of the search space, and then we synchronize only when needed, right? So these are some of the techniques which we have developed over the years, and there are actually research publications filed on this. And this is what we do to achieve good scalability. What that means for the customer is that if they have some amount of training time and they want to make it better, they can just provision a larger cluster and they will get better performance.
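To illustrate the idea Nipun describes, splitting the hyperparameter search space across nodes and synchronizing only when needed, here is a generic sketch. It is not HeatWave ML's implementation; it simply uses Python's multiprocessing as a stand-in for the VMs in a cluster, with a toy scoring function.

```python
# Generic illustration of scale-out hyperparameter search: each worker gets a
# slice of the search space, evaluates it independently, and the only
# synchronization point is collecting the per-worker best at the end.
# This is a stand-in for the idea, not HeatWave ML's implementation.
from itertools import product
from multiprocessing import Pool

def evaluate(params):
    # Placeholder scoring function; in a real system this would train a
    # candidate model on a sample and return its validation score.
    depth, lr = params
    return -((depth - 6) ** 2) - (lr - 0.1) ** 2

def best_in_slice(search_slice):
    # Each "node" works through its own slice with no cross-node chatter.
    return max(search_slice, key=evaluate)

if __name__ == "__main__":
    space = list(product(range(2, 12), [0.01, 0.05, 0.1, 0.2, 0.5]))
    n_nodes = 4
    slices = [space[i::n_nodes] for i in range(n_nodes)]   # partition the space

    with Pool(n_nodes) as pool:
        local_bests = pool.map(best_in_slice, slices)      # embarrassingly parallel

    global_best = max(local_bests, key=evaluate)            # single sync point
    print("best hyperparameters:", global_best)
```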
>> Got it. Thank you. Kumaran, when I think of machine learning, machine intelligence, AI, I think GPU, but you're not using GPUs. So how are you able to get this type of performance, or price performance, without using GPUs? >> Yeah, definitely. That's a good point. Think about what is going on here, and consider the whole pipeline that Nipun has just described in terms of how you get, you know, your training and your algorithms, and using the MySQL pieces of it to get to the point where the AI can be effective. In that process, what happens is you have a lot of memory transactions; a lot of memory bandwidth comes into play. And then bringing all that data together, feeding the actual compute that does the AI calculations, that in itself could be the bottleneck, right? You can have multiple bottlenecks along the way. And I think what you see in the AMD architecture, for EPYC, for this use case, is the balance. The fact that you are able to do the pre-processing, the AI, and then the post-processing all kind of seamlessly together has huge value. And that goes back to what Nipun was saying about using the same infrastructure: it gets you the better TCO, but it also gets you better performance. And that's because you're bringing the data to the computation, so the computation in this case is not strictly the bottleneck. It's really about how you pull together what you need and do the AI computation, and that's probably the more common case. So, you know, you're going to start to see this, I think, especially for inference applications. But in this case we're doing inference, explanation, and training, all using the CPU in the same OCI infrastructure. >> Interesting. Now Nipun, is the secret sauce for HeatWave ML performance different than what we've discussed before, you and I, with HeatWave generally? Is there some additive that you're putting into the engine? >> Right, yes. The secret sauce is indeed different, right? Just as I was saying that for SQL processing the reason we get very good performance and price performance is because we have come up with new algorithms which help SQL processing scale out, similarly for HeatWave ML we have come up with new IP, new algorithms. One example is that we use meta-learned proxy models, right? That's the technique we use for automating the training process. So think of these meta-learned proxy models as, you know, using machine learning for machine learning training. And this is IP which we developed, and again, we have published the results and the techniques. Having these kinds of techniques is what gives us better performance. Similarly, another thing which we use is adaptive sampling: you can have a large data set, but we intelligently sample to figure out how we can train on a small subset without compromising on accuracy. So yes, there are many techniques that we have developed specifically for machine learning, which is what gives us the better performance, better price performance, and also better scalability. >> What about MySQL Autopilot? Is there anything that differs from HeatWave ML that is relevant? >> Okay, interesting you should ask. So think of MySQL Autopilot as an application using machine learning. MySQL Autopilot uses machine learning to automate various aspects of the database service. For instance, if you want to figure out the right partitioning scheme to partition the data in memory, we use machine learning techniques to figure out the best column, based on the user's workload, to partition the data in memory. Or given a workload, if you want to figure out the right cluster size to provision, that's something we use MySQL Autopilot for. And I want to highlight that we aren't aware of any other database service which provides this level of machine-learning-based automation, which customers get with MySQL Autopilot.
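As a rough illustration of the kind of decision MySQL Autopilot automates for data placement, the sketch below scores candidate partition keys by how evenly they would spread rows across nodes, using a sampled workload. The heuristic, column names, and data are hypothetical simplifications, not Oracle's algorithm.

```python
# Toy illustration of automated data placement: given sampled rows and the
# columns a workload joins or groups on, score candidate partition keys by
# how evenly hash placement would spread rows across nodes. A simplified
# stand-in for the idea, not MySQL Autopilot's actual algorithm.
from collections import Counter

def skew(values, n_nodes):
    """Fraction of rows landing on the most loaded node under hash placement."""
    buckets = Counter(hash(v) % n_nodes for v in values)
    return max(buckets.values()) / len(values)

def pick_partition_key(sample_rows, workload_columns, n_nodes=4):
    # Consider only columns the workload actually uses, then prefer least skew.
    scores = {}
    for col in workload_columns:
        values = [row[col] for row in sample_rows]
        scores[col] = skew(values, n_nodes)
    return min(scores, key=scores.get), scores

if __name__ == "__main__":
    sample = [{"customer_id": i,
               "country": "BR" if i % 10 else "JP",
               "order_id": i * 7}
              for i in range(10_000)]
    key, scores = pick_partition_key(sample, ["customer_id", "country"])
    print("suggested partition key:", key, scores)
```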
>> Hmm. Interesting. Okay, last question for both of you. What are you guys working on next? What can customers expect from this collaboration, specifically in this space? Maybe Nipun, you can start, and then Kumaran can bring us home. >> Sure. So there are two things we are working on. One is, based on the feedback we have gotten from customers, we are going to keep making the machine learning capabilities richer in HeatWave ML. That's one dimension. And the second thing, which Kumaran was alluding to earlier, is that we are looking at the next generation of processors coming from AMD, and we will be seeing how we can get more benefit from these processors, whether it's the size of the L3 cache, the memory bandwidth, the network bandwidth, and such, and make sure that we leverage all the greatness which the new generation of processors will offer. >> It's like an engineering playground. Kumaran, let's give you the final word. >> No, that's great. Look, with the Zen 4 CPU cores, we're also bringing in AVX-512 instruction capability. Now, our implementation is a little different than it was in Rome and Milan, in that we use a double-pump implementation. What that means is, you know, we take two cycles to do these instructions. But the key thing there is we don't lower the speed of the CPU, so there are no noisy-neighbor effects, and it's something that OCI and HeatWave have taken full advantage of. So as we go out in time and we see the Zen 4 core, we see up to 96 cores, and that's going to work really well. We're collaborating closely with OCI and with the HeatWave team here to make sure that we can take advantage of that. And we're also going to upgrade the memory subsystem to get to 12 channels of DDR5. So there should be a fairly significant boost in absolute performance, but just as importantly, in TCO value for the end customers who are going to adopt this great service. >> I love the relentless innovation, guys. Thanks so much for your time. We're going to have to leave it there. Appreciate it. >> Thank you, David. >> Thank you, David. >> Okay, thank you for watching this special presentation on theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Sep 14 2022


AMD Oracle Partnership Elevates MySQL HeatWave


 

(upbeat music) >> For those of you who've been following the cloud database space, you know that MySQL HeatWave has been on a technology tear over the last 24 months, with Oracle claiming record-breaking benchmarks relative to other database platforms. So far, those benchmarks remain industry leading, as competitors have chosen not to respond, perhaps because they don't feel the need to, or maybe they don't feel that doing so would serve their interest. Regardless, the HeatWave team at Oracle has been very aggressive about its performance claims, making lots of noise, challenging the competition to respond, and publishing their scripts to GitHub. So far there are no takers, but customers seem to be picking up on these moves by Oracle, and it's likely the performance numbers resonate with them. Now, the other area we want to explore, which we haven't thus far, is the engine behind HeatWave, and that is AMD. AMD's EPYC processors have been the powerhouse on OCI, running MySQL HeatWave since day one. And today we're going to explore how these two technology companies are working together to deliver these performance gains and some compelling TCO metrics. In fact, a recent Wikibon analysis from senior analyst Marc Staimer made some TCO comparisons for OLAP workloads relative to AWS, Snowflake, GCP, and Azure databases; you can find that research on wikibon.com. And with that, let me introduce today's guests: Nipun Agarwal, senior vice president of MySQL HeatWave, and Kumaran Siva, who's the corporate vice president for strategic business development at AMD. Welcome to theCUBE, gentlemen. >> Welcome. Thank you. >> Thank you, Dave. >> Hey Nipun, you and I have talked a lot about this. You've been on theCUBE a number of times talking about MySQL HeatWave. But for viewers who may not have seen those episodes, maybe you could give us an overview of HeatWave and how it's different from competitive cloud database offerings. >> Sure. So MySQL HeatWave is a fully managed MySQL database service offering from Oracle. It's a single database which can be used to run transactional processing, analytics, and machine learning workloads. In the past, MySQL has been designed and optimized for transaction processing, so customers of MySQL, when they had to run analytics or machine learning, would need to extract the data out of MySQL into some other database or service. MySQL HeatWave offers a single database for running all kinds of workloads, so customers don't need to extract data into some other database. In addition to having a single database, MySQL HeatWave is also very performant compared to other databases, and it is very price competitive. So the advantages are: a single database, very performant, and very good price performance. >> Yes. And you've published some pretty impressive price-performance numbers against competitors. Maybe you could describe those benchmarks and highlight some of the results, please. >> Sure. So one thing to note is that the performance of any database is going to vary, and the performance advantage is going to vary, based on the size of the data and the specific workload, so the mileage varies; that's the first thing to know. So what we have done is publish multiple benchmarks. We have benchmarks on TPC-H and TPC-DS, and we have benchmarks on different data sizes, because based on the customer's workload the mileage is going to vary, so we want to give customers a broad range of comparisons so that they can decide for themselves.
So in a specific case, where we are running a 30 terabyte TPC-H workload, HeatWave is about 18 times better price performance compared to Redshift. 18 times better compared to Redshift, about 33 times better price performance compared to Snowflake, and 42 times better price performance compared to Google BigQuery. So this is on 30 terabyte TPC-H. Now, if the data size is different, or the workload is different, the characteristics may vary slightly, but this is just to give a flavor of the kind of performance advantage MySQL HeatWave offers.
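As a side note on how ratios like these are typically derived: price performance is usually the product of cost per unit time and elapsed time, and the advantage is the ratio of the two systems' totals. The figures in the sketch below are purely hypothetical placeholders to show the arithmetic, not the benchmark's actual numbers.

```python
# Hypothetical illustration of a price-performance ratio calculation.
# None of these figures are the actual benchmark numbers; they only show
# the arithmetic behind statements like "N times better price performance".
def price_performance_ratio(cost_per_hour_a, hours_a, cost_per_hour_b, hours_b):
    """How many times better system B's price performance is than system A's."""
    total_a = cost_per_hour_a * hours_a
    total_b = cost_per_hour_b * hours_b
    return total_a / total_b

# Placeholder inputs: system A is the comparison service, system B is HeatWave.
advantage = price_performance_ratio(
    cost_per_hour_a=32.0, hours_a=2.0,    # hypothetical
    cost_per_hour_b=16.0, hours_b=0.25)   # hypothetical
print(f"price-performance advantage: {advantage:.1f}x")  # -> 16.0x
```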
>> And then my last question before we bring in Kumaran. We've talked about the secret sauce being the tight integration between hardware and software, but would you add anything to that? What is that secret sauce in HeatWave that enables you to achieve these performance results, and what does it mean for customers? >> So there are three parts to this. One is that HeatWave has been designed with a scale-out architecture in mind, so we have invented and implemented new algorithms for scale-out query processing for analytics. The second aspect is that HeatWave has been really optimized for commodity cloud, and that's where AMD comes in. So for instance, many of the partitioning schemes we have for processing in HeatWave, we optimize them for the L3 cache of the AMD processor. The thing which is very important to our customers is not just the sheer performance but the price performance, and that's where we have had a very good partnership with AMD, because not only does AMD help us provide very good performance, but very good price performance, right? And in all these numbers which I was showing, a big part is because we are running on AMD, which provides very good price performance. So that's the second aspect. And the third aspect is MySQL Autopilot, which provides machine-learning-based automation. So it's really these three things: a combination of new algorithms designed for scale-out query processing, optimization for commodity cloud hardware, specifically AMD processors, and third, MySQL Autopilot, which gives us this performance advantage. >> Great, thank you. So that's a good segue for AMD and Kumaran. So Kumaran, what is AMD bringing to the table? What are, for instance, the relevant specs of the chips that are used in Oracle Cloud Infrastructure, and what makes them unique? >> Yeah, thanks Dave. That's a good question. So, OCI is a great customer of ours. They use what we call the top-of-stack devices, meaning that they have the highest core count and they also have very, very fast cores. These are currently Zen 3 cores; I think the HeatWave product is right now deployed on Zen 2, but it will shortly be on the Zen 3 core as well. But we provide, in the case of OCI, 64 cores, so those are the largest devices that we build. What actually happens is, because of this large number of CPUs in a single package, and therefore the increased density of the node, you end up with this fantastic TCO equation, and the cost per performance for deployed services like HeatWave ends up being extraordinarily competitive. And that's a big part of the contribution that we're bringing in here. >> So Zen 3 is the AMD microarchitecture which you introduced, I think in 2017, and it's the basis for EPYC, which is sort of the enterprise grade with which you really attacked the enterprise. Maybe you could elaborate a little bit, double click on how your chips contribute specifically to HeatWave's price-performance results. >> Yeah, absolutely. So in the case of HeatWave, as Nipun alluded to, we have very large L3 caches, right? In our very, very top-end parts, like the Milan-X devices, we can go all the way up to 768 megabytes of L3 cache, and that gives you just enormous performance gains. That's part of what we're seeing with HeatWave today, and not that they're currently on it; they're on the second-generation, Rome-based product, the 7002-series product line, running with the 64 cores, but as time goes on they'll be adopting the next generation, Milan, as well. And the other part of it too is how our chiplet architecture has evolved. From the first generation, Naples, way back in 2017, we went from having multiple memory domains and a sort of NUMA architecture at the time; today we've really optimized that architecture. We use a common I/O die that has all of the memory channels attached to it. And what that means is that these scale-out applications like HeatWave are able to scale very efficiently as they go from a small domain of CPUs to, for example, the entire chip, all 64 cores. That scaling has been a key focus for AMD, and being able to design and build architectures that can take advantage of that, and then have applications like HeatWave that scale so well on it, has been a key aim of ours.
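To give a feel for the kind of cache-oriented tuning Nipun and Kumaran are describing, here is a generic sketch of sizing in-memory partitions so that each core's working set stays within its share of L3 cache. The cache size, core count, and row width are placeholders, and this is not HeatWave's actual code.

```python
# Generic sketch of cache-aware partition sizing: pick a per-core chunk of rows
# small enough that the columns being processed fit within a share of L3 cache.
# Figures are placeholders, not the actual HeatWave or EPYC configuration.
def rows_per_partition(l3_bytes, cores_sharing_l3, bytes_per_row, safety=0.5):
    """Rows per work unit so each core's slice stays cache-resident."""
    budget = (l3_bytes / cores_sharing_l3) * safety
    return max(1, int(budget // bytes_per_row))

def partition(n_rows, rows_per_part):
    """Yield (start, end) row ranges for scan operators to work on."""
    for start in range(0, n_rows, rows_per_part):
        yield start, min(start + rows_per_part, n_rows)

if __name__ == "__main__":
    # Hypothetical: 32 MiB of L3 shared by 8 cores, 64 bytes of hot columns per row.
    per_part = rows_per_partition(32 * 2**20, 8, 64)
    print("rows per partition:", per_part)
    print("first ranges:", list(partition(1_000_000, per_part))[:3])
```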
>> And Gen 3, moving up the Italian countryside. Nipun, you've taken the somewhat unusual step of posting the benchmark parameters, making them public on GitHub. Now, HeatWave is relatively new, so people felt that when Oracle gained ownership of MySQL it would let it wilt on the vine in favor of Oracle Database, so you lost some ground, and now you're getting very aggressive with HeatWave. What's the reason for publishing those benchmark parameters on GitHub? >> So, the main reason for us to publish price-performance numbers for HeatWave is to communicate to our customers a sense of the benefits they're going to get when they use HeatWave. But we want to be very transparent, because as I said, the performance advantages for customers may vary based on the data size and the specific workloads. So one of the reasons for us to publish all these scripts on GitHub is transparency. We want customers to take a look at the scripts, know what we have done, and be confident that we stand by the numbers which we are publishing, and they're very welcome to try these numbers themselves. In fact, we have had customers who have downloaded the scripts from GitHub and run them on our service to validate them. The second aspect is that in some cases there may be deviations between what we are publishing and what the customer would like to run in their production deployments, so this provides an easy way for customers to take the scripts, modify them in ways which suit their real-world scenario, and run them to see what the performance advantages are. So that's the main reason: first, transparency, so customers can see what we are doing and how we did the comparison, and second, if they want to modify the scripts to suit their needs and then see what the performance of HeatWave is, they're very welcome to do so. >> So have customers done that? Have they taken the benchmarks? I mean, if I were a competitor, honestly, I wouldn't get into that food fight because of the impressive performance, unless I had to. I mean, have customers picked up on that, Nipun? >> Absolutely. In fact, we have had many customers who have benchmarked the performance of MySQL HeatWave against other services. The fact that the scripts are available gives them a very good starting point, and they've also tweaked those queries in some cases to see what the delta would be. And in some cases, customers got back to us saying, hey, the performance advantage of HeatWave is actually slightly higher than what was published, and what is the reason? And the reason was that when the customers were trying it, they were trying it on the latest version of the service, and our benchmark results were posted, let's say, two months back. So the service had improved in those two to three months, and customers actually saw better performance. So yes, absolutely, we have seen customers download the scripts, try them, also modify them to some extent, and then do the comparison of HeatWave with other services. >> Interesting. Maybe a question for both of you: how is the competition responding to this? They haven't said, "Hey, we're going to come up with our own benchmarks," which is very common; you oftentimes see that. Although, for instance, Snowflake hasn't responded to Databricks, so that's not their game. But if customers are actually putting a lot of faith in the benchmarks and actually using them for buying decisions, then it's inevitable. How have you seen the competition respond to the MySQL HeatWave and AMD combo? >> So maybe I can take the first crack at that from the database service standpoint. When customers have more choice, it is invariably an advantage for the customer, because then the competition is going to react, right? So the way we have seen the reaction is that we do believe the other database services are going to take a closer eye to their price performance, right? Because if you're offering such good price performance, the vendors are already looking at it, and there are instances where they have offered, let's say, discounts to customers to at least close the gap to some extent. And the second thing would be in terms of capability. One of the things which I should have mentioned even earlier on is that not only does MySQL HeatWave on AMD provide very good price performance on, say, a small cluster, but it does so all the way up to a cluster size of 64 nodes, which has about 1,000 cores. So the point is that HeatWave performs very well both on a small system as well as at huge scale-out, and this is, again, one of those things which is a differentiation compared to other services. So we expect that other database services will also have to improve their offerings to provide the same good scale factor, which customers are now starting to expect with MySQL HeatWave. >> Kumaran, anything you'd add to that? I mean, you guys are an arms dealer, you love all your OEMs, but at the same time you've got chip competitors, silicon competitors. How do you see the competitive-- >> I'd say the broader answer and the big picture for AMD is that we're very maniacally focused on our customers, right? And OCI and Oracle are huge and important customers for us, and this particular use case is extremely interesting, both in that it takes advantage of our architecture very well and in that it pulls out some of the value that AMD brings. I think from a big-picture standpoint, our aim is to execute, to bring out generations of CPUs and, kind of, you know, do what we say and, sorry, say what we do and do what we say.
And from that point of view, we're hitting the schedules that we say, and we're able to bring out the latest technology and bring it in a TCO value proposition that generationally keeps OCI and HeatWave ahead. That's the crux of our partnership here. >> Yeah, the execution's been obvious for the last several years. Kumaran, staying with you, how would you characterize the collaboration between the AMD engineers and the HeatWave engineering team? How do you guys work together? >> No, I'd say we're in a very, very deep collaboration. There are a few aspects where we've actually been working together very closely on the code, to optimize for the large L3 cache that AMD has and to be able to take advantage of that, and then also to take advantage of the scaling. So, you know, our architecture is chiplet based, so we have the CPU cores on what we call CCDs, and for the inter-CCD communication there are opportunities to optimize at the application level, and that's something we've been engaged with. In the broader engagement, we are going back now for multiple generations with OCI, and there's a lot of input that now kind of resonates in the product line itself. So we value this very close collaboration with HeatWave and OCI. >> Yeah, and the cadence, Nipun, you and I have talked about this quite a bit; the cadence has been quite rapid. It's like this constant cycle: every couple of months I turn around and there's something new on HeatWave. But another question for both of you: what new things do you think organizations, customers, are going to be able to do with MySQL HeatWave? If you could look out over the next 12 to 18 months, is there anything you can share at this time about future collaborations? >> Right, look, 12 to 18 months is a long time. There's going to be a lot of innovation, a lot of new capabilities coming out in MySQL HeatWave. But even based on what we are currently offering, the trend we are seeing is that customers are bringing more classes of workloads. We started off with OLTP for MySQL, then it went to analytics, then we increased it to mixed workloads, and now we offer machine learning as well. So one trend is that more and more classes of workloads are coming to MySQL HeatWave. And the second is the scale: the kind of data volumes people are using HeatWave for, to process these mixed workloads, analytics, machine learning, and OLTP, is increasing. Now, along the way we are making it simpler to use and more cost effective to use. So for instance, last time when we talked, we had introduced real-time elasticity, and that's a very, very popular feature, because customers want the ability to scale out or scale down very efficiently. That's something we provided. We provided support for compression. So all of these capabilities are making it more efficient for customers to run a larger part of their workloads on MySQL HeatWave, and we will continue to make it richer in the next 12 to 18 months. >> Thank you. Kumaran, anything you'd add to that? We'll give you the last word as we've got to wrap it. >> No, absolutely. So, you know, in the next 12 to 18 months we will have our Zen 4 CPUs out, so these could potentially go into the next generation of the OCI infrastructure. This would be with the Genoa and then Bergamo CPUs, taking us to 96 and 128 cores, with 12 channels of DDR5.
This capability, you know, when applied to an application like HeatWave, you can see that it'll potentially open up another order of magnitude of use cases, right? And we're excited to see what customers can do with that. It certainly will make this service, and the cloud in general, that cloud migration, I think, even more attractive. So we're pretty excited to see how things evolve in this period of time. >> Yeah, the innovations are coming together. Guys, thanks so much, we've got to leave it there. Really appreciate your time. >> Thank you. >> All right, and thank you for watching this special Cube conversation. This is Dave Vellante, and we'll see you next time. (soft calm music)

Published Date : Sep 14 2022


Video exclusive: Oracle adds more wood to the MySQL HeatWave fire


 

(upbeat music) >> When Oracle acquired Sun in 2009, it paid $5.6 billion net of Sun's cash and debt. Now, I argued at the time that Oracle got one of the best deals in the history of enterprise tech, and I got a lot of grief for saying that, because Sun had a declining business, it was losing money, and its revenue was under serious pressure as it tried to hang on for dear life. But Safra Catz understood that Oracle could pare Sun's lower-profit and lagging businesses, like its low-end x86 product lines, and even if Sun's revenue was cut in half, because Oracle has such a high revenue multiple as a software company, it could almost instantly generate $25 to $30 billion in shareholder value on paper. In addition, it was a catalyst for Oracle to initiate its highly differentiated engineered systems business, and was actually the precursor to Oracle's cloud. Oracle saw that it could capture high-margin dollars that used to go to partners like HP, its original Exadata partner, and get paid for the full stack across infrastructure, middleware, database, and application software when it eventually got really serious about cloud. Now, there was also a major technology angle to this story. Remember Sun's tagline, "the network is the computer"? Well, they should have just called it cloud. Through the Sun acquisition, Oracle also got a couple of key technologies: Java, the number one programming language in the world, and MySQL, a key ingredient of the LAMP stack, that's Linux, Apache, MySQL, and PHP, Perl, or Python, on which the internet is basically built, and which is used by many cloud services like Facebook, Twitter, WordPress, Flickr, and Amazon Aurora, and many other examples, including, by the way, MariaDB, which is a fork of MySQL created by MySQL's creator, basically in protest of Oracle's acquisition; the drama is Oscar worthy. It gets even better. In 2020, Oracle began introducing a new version of MySQL called MySQL HeatWave, and since late 2020 it's been in sort of a super cycle, rolling out three new releases in less than a year and a half in an attempt to expand its TAM and compete in new markets. Now, we covered the release of MySQL Autopilot, which uses machine learning to automate management functions, and we also covered the benchmarketing that Oracle produced against Snowflake, AWS, Azure, and Google. And Oracle's at it again with HeatWave, adding machine learning into its database capabilities, along with previously available integrations of OLAP and OLTP. This, of course, is in line with Oracle's converged database philosophy, which, as we've reported, is different from other cloud database providers, most notably Amazon, which takes the right-tool-for-the-right-job approach and chooses database specialization over a one-size-fits-all strategy. Now, we've asked Oracle to come on theCUBE and explain these moves, and I'm pleased to welcome back Nipun Agarwal, who's the senior vice president for MySQL Database and HeatWave at Oracle. And today, in this video exclusive, we'll discuss machine learning, other new capabilities around elasticity and compression, and then any benchmark data that Nipun wants to share. Nipun's been a leading advocate of the HeatWave program; he's led engineering on that team for over 10 years, and he has over 185 patents in database technologies. Welcome back to the show, Nipun. Great to see you again. Thanks for coming on. >> Thank you, Dave. Very happy to be back.
Yeah, now for those who may not have kept up with the news, maybe to kick things off you could give us an overview of what MySQL HeatWave actually is, so that we're all on the same page. >> Sure, Dave. MySQL HeatWave is a fully managed MySQL database service from Oracle, and it has a built-in query accelerator called HeatWave, and that's the part which is unique. So with MySQL HeatWave, customers of MySQL get a single database which they can use for transactional processing, for analytics, and for mixed workloads, because traditionally MySQL has been designed and optimized for transaction processing. So in the past, when customers had to run analytics with a MySQL-based service, they would need to move the data out of MySQL into some other database for running analytics, so they would end up with two different databases, and it would take some time to move the data out of MySQL into this other system. With MySQL HeatWave, we have solved this problem, and customers now have a single MySQL database for all their applications, and they can get good analytics performance without any changes to their MySQL application. >> Now, it's no secret that a lot of times, you know, queries are not written most efficiently, and critics of MySQL HeatWave will claim that this product is very memory and cluster intensive, that it has a heavy footprint that adds to cost. How do you answer that, Nipun? >> Right. So for offering any database service in the cloud there are two dimensions, performance and cost, and we have been very cognizant of both of them. It is indeed the case that HeatWave is an in-memory query accelerator, which is why we get very good performance, but it is also the case that we have optimized HeatWave for commodity cloud services. So for instance, we use the least expensive compute, and we use the least expensive storage. So what I would suggest is, for customers who would like to know what the price-performance advantage of HeatWave is: compared to any database we have benchmarked against, Redshift, Snowflake, Google BigQuery, Azure Synapse, HeatWave is significantly faster and significantly lower priced on a multitude of workloads. So not only is it an in-memory database and optimized for that, but we have also optimized it for commodity cloud services, which makes it much lower priced than the competition. >> Well, at the end of the day, it's customers that sort of decide what the truth is. So to date, what's been the customer reaction? Are they moving from other clouds, from on-prem environments? Both? Why? You know, what are you seeing? >> Right, so we are definitely seeing a whole bunch of migrations of customers who are running MySQL on premises to the cloud, to MySQL HeatWave. That's definitely happening. What is also very interesting is that a very large percentage of customers, more than half the customers who are coming to MySQL HeatWave, are migrating from other clouds. We have a lot of migrations coming from AWS Aurora, migrations from Redshift, migrations from RDS MySQL, Teradata, SAP HANA, right? So we are seeing migrations from a whole bunch of other databases and other cloud services to MySQL HeatWave. And the main reasons we are told why customers are migrating from other databases to MySQL HeatWave are lower cost, better performance, and no change to their application, because many of these services, like AWS Aurora, are compatible with MySQL.
So when customers try MySQL HeatWave, not only do they get better performance at a lower cost, but they find that they can migrate their application without any changes, and that's a big incentive for them. >> Great, thank you, Nipun. So can you give us some names? Are there some real-world examples of these customers that have migrated to MySQL HeatWave that you can share? >> Oh, absolutely, I'll give you a few names. Stutor.com, an educational SaaS provider based out of Brazil: they were using Google BigQuery, and when they migrated to MySQL HeatWave, they found a 300x, right, 300 times improvement in performance, and it lowered their cost by 85 (audio cut out). Another example is Neovera. They offer cybersecurity solutions, and they were running their application on an on-premises version of MySQL. When they migrated to MySQL HeatWave, their application improved in performance by 300 times and their cost was reduced by 80%, right? So by going from on-premises to MySQL HeatWave, they reduced their cost by 80% and improved performance by 300 times. We are Glass, another customer based out of Brazil: they were running on AWS EC2, and when they migrated, within hours they found that there was a significant improvement, like, you know, over 5x improvement in database performance, and they were able to accommodate a very large virtual event which had more than a million visitors. Another example, Genius Senority: they are a game designer in Japan, and when they moved to MySQL HeatWave, they found a 90-times improvement in performance. And there are many, many more, a lot of migrations, again, from Aurora, Redshift, and many other databases as well. And consistently what we hear is (audio cut out) getting much better performance at a much lower cost without any change to their application. >> Great, thank you. You know, when I ask that question, a lot of times I get, "Well, I can't name the customer name," but I've got to give Oracle credit: a lot of times you guys have them at your fingertips. So you're not the only one, but it's somewhat rare in this industry. So, okay, you got some good feedback from those customers that did migrate to MySQL HeatWave. What else did they tell you that they wanted? Did they, you know, kind of share a wish list and some of the white space that you guys should be working on? What did they tell you? >> Right. So as customers are moving more data into MySQL HeatWave, as they're consolidating more data into MySQL HeatWave, they want to run other kinds of processing with this data. A very popular one is (audio cut out). So we have had multiple customers who told us that they wanted to run machine learning with data which is stored in MySQL HeatWave, and for that they have to extract the data out of MySQL (audio cut out). So that was the first feedback we got. The second thing is that MySQL HeatWave is a highly scalable system. What that means is that as you add more nodes to a HeatWave cluster, the performance of the system improves almost linearly. But currently customers need to perform some manual steps to add nodes to a cluster or to reduce the cluster size, so that was other feedback we got: people wanted this to be automated. The third thing is that we have shown in previous results that HeatWave is significantly faster and significantly lower priced compared to competitive services, so we got feedback from customers asking whether we can trade off some performance to get even lower cost, and that's what we have looked at.
And then finally, we have some results on various data sizes with TPC-H. Customers wanted to see if we could offer some more data points as to how HeatWave performs on other kinds of workloads. And that's what we've been working on for the last several months. >> Okay, Nipun, we're going to get into some of that, but so, how did you go about addressing these requirements? >> Right. So the first thing is we are announcing support for in-database machine learning, meaning that customers who have their data inside MySQL HeatWave can now run training, inference, and prediction all inside the database, without the data or the model ever having to leave the database. So that's how we addressed the first one. The second thing is we are offering support for real-time elasticity, meaning that customers can scale up or scale down to any number of nodes. This requires no manual intervention on the part of the user, and for the entire duration of the resize operation, the system is fully available. Third, in terms of cost, we have doubled the amount of data that can be processed per node. If you look at a HeatWave cluster, the size of the cluster determines the cost. So by doubling the amount of data that can be processed per node, we have effectively reduced the cluster size required for running a given workload by half, which means it reduces the cost to the customer by half. And finally, we have also run the TPC-DS workload on HeatWave and compared it with other vendors, so now customers have another data point in terms of the performance and cost comparison of HeatWave with other services. >> All right, and I promise I'm going to ask you about the benchmarks, but I want to come back and drill into these a bit. How is HeatWave ML different from competitive offerings? Take, for instance, Redshift ML, for example. >> Sure, okay, so this is a good comparison. Let's start with, let's say, Redshift ML. There are some systems, like, you know, Snowflake, which don't offer any processing of machine learning inside the database at all, and they expect customers to write a whole bunch of code, in say Python or Java, to do machine learning. Redshift ML does have integration with SQL; that's a good start. However, when customers of Redshift need to run machine learning and they invoke Redshift ML, it makes a call to another service, SageMaker, right, so the data needs to be exported to a different service, the model is generated there, and the model also lives outside Redshift. With HeatWave ML, the data always resides inside the MySQL database service. We are able to generate models, train the models, run inference, and run explanations, all inside the MySQL HeatWave service. So the data and the models never have to leave the database, which means that both the data and the models can be secured by the same access control mechanisms as the rest of the data. So that's the first part: there is no need for any ETL. The second aspect is the automation. Training is a very important part of machine learning, right; it impacts the quality of the predictions and such. Traditionally, customers would employ data scientists to guide the training process so that it's done right, and even in the case of Redshift ML, the users are expected to provide a lot of parameters to the training process. So the second thing which we have worked on with HeatWave ML is that it is fully automated. There is absolutely no user intervention required for training.
Third is in terms of performance. One of the things we are very, very sensitive to is performance, because performance determines the eventual cost to the customer. So again, in some benchmarks which we have published, and these are all available on GitHub, we are showing how HeatWave ML is 25 times faster than Redshift ML, and here's the kicker: at 1% of the cost. So, four benefits: the data always remains secure inside the database service, it's fully automated, it's much faster, and it's much lower cost than the competition. >> All right, thank you, Nipun. Now, there's a lot of talk these days about explainability in AI. You know, the system can very accurately tell you that it's a cat, or, for you Silicon Valley fans, it's a hot dog or not a hot dog, but it can't tell you how the system got there. So what is explainability, and why should people care about it? >> Right. So when we were talking to customers about what they would like from a machine-learning-based solution, one of the pieces of feedback we got is that enterprises are a little slow or averse to adopting machine learning, because it seems to be, you know, like magic, right? And enterprises have the obligation to be able to explain, or to provide an answer to their customers as to why the database made a certain choice. With a rule-based solution it's simple: it's a rule-based thing, and you know what the logic was. So the reason explanations are important is that customers want to know why the system made a certain prediction. One of the important characteristics of HeatWave ML is that any model which is generated by HeatWave ML can be explained, and we can do global or model-level explanations, and we can also do local explanations. So when the system makes a specific prediction using HeatWave ML, the user can find out why the system made that prediction. For instance, if someone is being denied a loan, the user can figure out what were the attributes, what were the features, which led to that decision. So this ensures, like, you know, fairness, and many times there is also a need for regulatory compliance, where users have a right to know. So we feel that explanations are very important for enterprise workloads, and that's why every model which is generated by HeatWave ML can be explained. >> Now, I've got to give Snowflake some props, you know, this whole idea of separating compute from storage, but also bringing the database to the cloud and driving elasticity. So that's been a key enabler and has solved a lot of problems, in particular the snake-swallowing-the-basketball problem, as I often say. But what about elasticity, and elasticity in real time? How is your version, and there are a lot of companies chasing this, how is your approach to an elastic cloud database service different from what others are promoting these days? >> Right, so a couple of characteristics. One is that we have now fully automated the process of elasticity, meaning that if a user wants to scale up or scale down, the only thing they need to specify is the eventual size of the cluster, and the system completely takes care of it transparently. But then there are a few characteristics which are very unique. So for instance, we can scale up or scale down to any number of nodes, whereas in the case of Snowflake, the number of nodes someone can scale up or scale down to are powers of two. So if a user needs 70 CPUs, well, their choice is either 64 or 128. By providing this flexibility with MySQL HeatWave, customers get a custom fit: they can get a cluster which is optimized for their specific workload. So that's the first thing, flexibility to scale up or down to any number of nodes. The second thing is that after the operation is completed, the system is fully balanced, meaning the data across the various nodes is fully balanced. That is not the case with many solutions. So for instance, in the case of Redshift, after the resize operation is done, the user is expected to manually balance the data, which can be very cumbersome. And the third aspect is that while the resize operation is going on, the HeatWave cluster is completely available for queries, for DML, for loading more data. That is, again, not the case with Redshift. With Redshift, suppose the operation takes 10 to 15 minutes: during that window of time the system is not available for writes, and for a big part of that chunk of time the system is not even available for queries, which is very limiting. So the advantages we have are that it is fully flexible, the system ends up in a balanced state, and the system is completely available for the entire duration of the operation.
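To put the powers-of-two point in concrete terms, here is a small sketch of the over-provisioning you pay for when node counts must be rounded up to a power of two versus matching the workload exactly. The node counts are hypothetical.

```python
# Hypothetical illustration: if a workload needs 70 nodes' worth of capacity,
# a powers-of-two service must round up to 128 nodes, while a service that
# supports arbitrary cluster sizes can provision exactly 70.
import math

def next_power_of_two(n):
    return 1 << math.ceil(math.log2(n))

needed = 70                       # nodes of capacity the workload requires
rounded = next_power_of_two(needed)
waste = (rounded - needed) / needed
print(f"powers-of-two cluster: {rounded} nodes "
      f"({waste:.0%} more capacity than needed)")
print(f"exact-fit cluster:     {needed} nodes (0% overshoot)")
```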
So by providing this flexibility with MySQL HeatWave, customers get a custom fit. So they can get a cluster which is optimized for their specific workload. So that's the first thing, flexibility of scaling up or down to any number of nodes. The second thing is that after the operation is completed, the system is fully balanced, meaning the data across the various nodes is fully balanced. That is not the case with many solutions. So for instance, in the case of Redshift, after the resize operation is done, the user is expected to manually balance the data, which can be very cumbersome. And the third aspect is that while the resize operation is going on, the HeatWave cluster is completely available for queries, for DMLs, for loading more data. That is, again, not the case with Redshift. With Redshift, suppose the operation takes 10 to 15 minutes; during that window of time, the system is not available for writes, and for a big part of that chunk of time, the system is not even available for queries, which is very limiting. So the advantages we have are: it's fully flexible, the system is in a balanced state, and the system is completely available for the entire duration of the operation. >> Yeah, I guess you got that hypergranularity, which, you know, sometimes they say, "Well, t-shirt sizes are good enough," but then I think of myself, some t-shirts fit me better than others, so. Okay, I saw on the announcement that you have this lower price point for customers. How did you actually achieve this? Could you give us some details around that please? >> Sure, so there are two things we are announcing with this service which lower the cost for the customers. The first thing is that we have doubled the amount of data that can be processed by a HeatWave node. So if we have doubled the amount of data which can be processed by a node, the cluster size which is required by customers reduces to half, and that's why the cost drops to half. The way we have managed to do this is by two things. One is support for Bloom filters, which reduces the amount of intermediate memory. And second is we compress the base data. So these are the two techniques we have used to process more data per node. The second way by which we are lowering the cost for the customers is by supporting pause and resume of HeatWave. And many times you find that customers of HeatWave and other services want to run some queries or some workloads for some duration of time, but then they don't need the cluster for a few hours. Now with the support for pause and resume, customers can pause the cluster and the HeatWave cluster instantaneously stops. And when they resume, not only do we fetch the data at a very, like, you know, quick pace from the object store, but we also preserve all the statistics which are used by Autopilot. So both the data and the metadata are fetched extremely fast from the object store. So with these two capabilities we feel that it'll drive down the cost to our customers even more. >> Got it, thank you. Okay, I promised I was going to get to the benchmarks. Let's have it. How do you compare with others, but specifically cloud databases? I mean, and how do we know these benchmarks are real? My friends at EMC, back in the day, they were brilliant at doing benchmarks. They would produce these beautiful PowerPoint charts, but it was kind of opaque, but what do you say to that? >> Right, so there are multiple things I would say.
The first thing is that this time we have published two benchmarks, one is for machine learning and the other is for SQL analytics. All the benchmarks, including the scripts which we have used, are available on GitHub. So we have full transparency, and we invite and encourage customers or other service providers to download the scripts, to download the benchmarks, and see if they get any different results, right. So what we are seeing, we have published for other people to try and validate. That's the first part. Now for machine learning, there hasn't been a precedent for enterprise benchmarks, so we took open data sets and we have published benchmarks for those, right? So both for classification, as well as for regression, we have measured the training times, and that's where we find that HeatWave ML is 25 times faster than Redshift ML at one percent of the cost. So fully transparent, available. For SQL analytics, in the past we have shown comparisons with TPC-H. So we would show TPC-H across various databases, across various data sizes. This time we decided to use TPC-DS. The advantage of TPC-DS over TPC-H is that it has a larger number of queries, the queries are more complex, the schema is more complex, and there is a lot more data skew. So it represents a different class of workloads, which is very interesting. So these are queries derived from the TPC-DS benchmark. So the numbers we have published this time are for 10 terabyte TPC-DS, and we are comparing with all four major services: Redshift, Snowflake, Google BigQuery, Azure Synapse. And in all the cases, HeatWave is significantly faster and significantly lower priced. Now one of the things I want to point out is that when we are doing the cost comparison with other vendors, we are being overly fair. For instance, the cost of HeatWave includes the cost of both the MySQL node as well as the HeatWave node, and with this setup, customers can run transaction processing, analytics, as well as machine learning. So the price captures all of it. Whereas with the other vendors, the comparison is only for the analytic queries, right? So if customers wanted to run OLTP, you would need to add the cost of that database. Or if customers wanted to run machine learning, you would need to add the cost of that service. Furthermore, in the case of HeatWave, we are quoting the pay-as-you-go price, whereas for other vendors like, you know, Redshift, and, like, you know, where applicable, we are quoting the one-year, fully paid upfront cost. So it's, like, you know, a very fair comparison. So in terms of the numbers, though, price performance for TPC-DS: we are about 4.8 times better price performance compared to Redshift. We are 14.4 times better price performance compared to Snowflake, 13 times better than Google BigQuery, and 15 times better than Synapse. So across the board, we are significantly faster and significantly lower priced. And as I said, all of these scripts are available on GitHub for people to try for themselves. >> Okay, all right, I get it. So I think what you're saying is, you could have said this is what it's going to cost for you to do both analytics and transaction processing on a competitive platform versus what it takes to do that on Oracle MySQL HeatWave, but you're not doing that. You're saying, let's take them head on in their sweet spot of analytics, or OLTP, separately, and you're saying you still beat them.
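For readers who want to see how a "times better price performance" figure typically falls out of a "faster at a fraction of the price" claim, here is a small sketch. The rates and runtimes below are placeholders, not the published benchmark inputs; the point is only that a speed advantage and a price advantage compound multiplicatively when price performance is measured as the cost to complete the workload.

```python
# Price performance is commonly computed as cost-to-complete the workload:
# hourly price x elapsed time. Being N times faster at 1/M of the hourly
# price therefore compounds to roughly N x M better price performance.
def cost_to_run(hourly_price: float, runtime_hours: float) -> float:
    return hourly_price * runtime_hours

def price_performance_advantage(price_a: float, runtime_a: float,
                                price_b: float, runtime_b: float) -> float:
    """How many times cheaper it is to complete the workload on system A."""
    return cost_to_run(price_b, runtime_b) / cost_to_run(price_a, runtime_a)

# Illustrative only: system A is 4x faster and priced at 1/3 of system B.
print(price_performance_advantage(price_a=1.0, runtime_a=1.0,
                                  price_b=3.0, runtime_b=4.0))  # -> 12.0
```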
Okay, so you got this one database service in your cloud that supports transactions and analytics and machine learning. How much do you estimate you're saving companies with this integrated approach versus the alternative of kind of what I called up front, the right tool for the right job, and admittedly having to use ETL tools? How can you quantify that? >> Right, so, okay. The numbers I called out, right, at the end of the day, in a cloud service, price performance is the metric which gives a sense as to how much the customers are going to save. So for instance, for, like, a TPC-DS workload, if we are 14 times better price performance than Snowflake, it means that our cost is going to be 1/14th of what customers would pay for Snowflake. Now, in addition, there are other costs, in terms of migrating the data, having to manage two different databases, having to pay for another service for, like, you know, machine learning; that's all extra, and that depends upon what tools customers are using or what other services they're using for transaction processing or for machine learning. But these numbers themselves, right, like, they're very, very compelling. If we are 1/5th the cost of Redshift, right, or 1/14th of Snowflake, these numbers, like, themselves are very, very compelling. And that's the reason we are seeing so many of these migrations from these databases to MySQL HeatWave. >> Okay, great, thank you. Our last question: in the Q3 earnings call for fiscal '22, Larry Ellison said that "MySQL HeatWave is coming soon on AWS," and that caught a lot of people's attention. That's not like Oracle. I mean, people might say maybe that's an indication that you're not having success moving customers to OCI, so you got to go to other clouds, which by the way I applaud, but any comments on that? >> Yep, this is very much like Oracle. So if you look at one of the big reasons for the success of the Oracle database, and why the Oracle database is the most popular database, it is because the Oracle database runs on all platforms, and that has been the case from day one. So very akin to that, the idea is that there's a lot of value in MySQL HeatWave, and we want to make sure that we can offer the same value to the customers of MySQL running on any cloud, whether it's OCI, whether it's AWS, or any other cloud. So this shows how confident we are in our offering, and we believe that in other clouds as well, customers will find significant advantage by having a single database, which is much faster and much lower priced than the alternatives they currently have. So this shows how confident we are about our products and services. >> Well, that's great, I mean, obviously for you, you're in the MySQL group. You love that, right? The more places you can run, the better it is for you, of course, and your customers. Okay, Nipun, we got to leave it there. As always it's great to have you on theCUBE, really appreciate your time. Thanks for coming on and sharing the new innovations. Congratulations on all the progress you're making here. You're doing a great job. >> Thank you, Dave, and thank you for the opportunity. >> All right, and thank you for watching this CUBE conversation with Dave Vellante for theCUBE, your leader in enterprise tech coverage. We'll see you next time. (upbeat music)

Published Date : Mar 29 2022

Video Exclusive: Oracle Announces New MySQL HeatWave Capabilities


 

(bright music) >> Surprising many people, including myself, Oracle last year began investing pretty heavily in the MySQL space. Now those investments continue today. Let me give you a brief history. Last December, Oracle made its first HeatWave announcement, where it converged OLTP and OLAP together in a single MySQL database. Now, what wasn't surprising was the approach Oracle took. They leveraged hardware to improve the performance and lower the cost. You see, when Oracle acquired Sun more than a decade ago, rather than rely on loosely coupled partnerships with hardware vendors to speed up its databases, Oracle set out on a path to tightly integrate hardware and software innovations using its own in-house engineering. So with its first MySQL HeatWave announcement, Oracle leaned heavily on developing software on top of an in-memory database technology to create an embedded OLAP capability that eliminates the need to ETL data from a transaction system into a separate analytics database. Now in doing so, Oracle is taking a similar approach with MySQL as it does for its mainstream Oracle database, and today it extends that. And what I mean by that is it's converging capabilities in a single platform. So the argument is this simplifies and accelerates analytics, lowers the cost, and allows things like analytics to be run on data that is more fresh. Now, as many of you know, this is a different strategy than how, for example, AWS approaches database, where it creates purpose-built database services targeted at specific workloads. These are philosophical design decisions made for a variety of reasons, but it's very clear which direction Oracle is headed in. Today, Oracle continues its HeatWave announcement cadence with a focus on increased automation as well. The company is continuing the trend of using clustering technology to scale out for both performance and capacity. And again, with that theme of marrying hardware with software, Oracle is also making announcements that focus on security. Hello everyone and welcome to this video exclusive. This is Dave Vellante. We're going to dig into these capabilities with Nipun Agarwal. He's VP of MySQL HeatWave and advanced development at Oracle. Nipun has been leading the MySQL and HeatWave development effort for nearly a decade. He's got 180 patents to his name, about half of which are associated with HeatWave. Nipun, welcome back to the show. Great to have you. >> Thank you, Dave. >> So before we get into the new news, if we could, maybe you could give us all a quick overview of HeatWave again, and what problems you originally set out to solve with it? >> Sure. So HeatWave is an in-memory query accelerator for MySQL. Now, as most people are aware, MySQL was originally designed and optimized for transactional processing. So when customers had the need to run analytics, they would need to extract data from the MySQL database into another database and run analytics. With MySQL HeatWave, customers get a single database, which can be used both for transactional processing and for analytics. There's no need to move the data from one database to another database, and all existing tools and applications which are compatible with MySQL continue to work as is. So it's an in-memory query accelerator for MySQL, and it is significantly faster than any version of the MySQL database. It's also much faster than specialized databases for analytics. >> Yeah, we're going to talk about that.
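As a concrete illustration of the "no ETL, same tools keep working" point, the sketch below loads an existing MySQL table into the HeatWave cluster and then runs an ordinary analytic query against it. The ALTER TABLE ... SECONDARY_ENGINE = RAPID and SECONDARY_LOAD statements are the documented way to place a table into HeatWave; the host, credentials, and the orders table and its columns are hypothetical.

```python
# Sketch: offloading analytics to HeatWave without moving data out of MySQL.
# The table stays in the primary (InnoDB) engine; SECONDARY_ENGINE = RAPID
# plus SECONDARY_LOAD put a copy into the in-memory HeatWave cluster, after
# which eligible queries are offloaded transparently.
import mysql.connector

conn = mysql.connector.connect(
    host="heatwave-host.example.com",  # hypothetical endpoint
    user="admin", password="secret", database="sales",
)
cur = conn.cursor()

cur.execute("ALTER TABLE orders SECONDARY_ENGINE = RAPID")
cur.execute("ALTER TABLE orders SECONDARY_LOAD")

# The same SQL an application already issues; no change to the query and
# no export to a separate analytics database.
cur.execute("""
    SELECT o_orderpriority, COUNT(*) AS n
    FROM orders
    WHERE o_orderdate >= '2021-01-01'
    GROUP BY o_orderpriority
    ORDER BY n DESC
""")
for priority, n in cur.fetchall():
    print(priority, n)
```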
And so obviously when you made the announcement last December, you had, I'm sure, a core group of, of early customers and beta customers, but then you opened it up to the world. So what was the reaction once you exposed that to customers? >> The reaction has been very positive, Dave. So initially we were thinking that there were going to be a lot of customers who are on-premise users of MySQL who are going to migrate to the service. And sure enough, that was the case. But the part which was very interesting and surprising is that we see many customers who are migrating from other cloud vendors or migrating from other cloud services to MySQL HeatWave. And most notably, the biggest number of migrations we are seeing are from AWS Aurora and AWS RDS. >> Interesting. Okay. I wonder if you've got other feedback. You're obviously responding in a pretty, pretty fast cadence here, you know, a seven, eight month cadence. What was the feedback that you got? Were there gaps that customers wanted you to close? >> Sure. Yes. So as customers started moving to HeatWave, they found that HeatWave is much faster, much cheaper. And because it's so much faster, they told us that there are some classes of queries which just could not run earlier, which they can run now with HeatWave. So it makes the applications richer, because they can write new classes of queries which they could not in the past. But in terms of the feedback or enhancement requests we got, I would say, if I were to categorize them, number one was automation. When customers move their database from on-premise to the cloud, they expect more automation. So that was the number one thing. The second thing was people wanted the ability to run analytics on larger sizes of data with MySQL HeatWave, because they liked what they saw and they wanted us to increase the data size limit which can be processed by HeatWave. Third one was they wanted more classes of queries to be accelerated with HeatWave. Initially, when we went out, HeatWave was designed to be an accelerator for analytic queries, but more and more customers started seeing the benefit beyond just analytics, more towards mixed workloads. So that was a third request. And then finally they wanted us to scale to a larger cluster size. And that's what we have done over the last several months, incorporating this feedback which we've gotten from customers. >> So you're addressing those, those, those gaps. And thank you for sharing that with us. I got the press release here. I wonder if we could kind of go through these. Let's start with AutoPilot, you know, what's, what's that all about? What's different about AutoPilot? >> That's right. So MySQL AutoPilot provides machine learning based automation. So the first difference is that, not only is it automating things where, as a cloud provider, as a service provider, we feel there are a lot of opportunities for us to automate, but the big difference about the approach we've taken with MySQL AutoPilot is that it's all driven based on the data and the queries. It's machine learning based automation. That's the first aspect. The second thing is this is all done natively in the server, right? So we are enhancing the MySQL engine. We're enhancing the HeatWave engine, and that's where all the logic and all the processing resides. In order to do this, we have had to collect new kinds of data. So for instance, in the past, people would collect statistics which are based on just the data.
Now we also collect statistics based on queries, for instance, what is the compilation time? What is the execution time? And we have augmented this with new machine learning models. And finally, we have made a lot of innovations, a lot of inventions, in the process where we collect data in a smart way. We process data in a smart way, and the machine learning models we are talking about also have a lot of innovation. And that's what gives us an edge over what other vendors may try to do. >> Yeah. I mean, I'm just, again, I'm looking at this pretty meaty press release. Auto-provisioning, auto parallel load, auto data placement, auto encoding, auto error recovery, auto scheduling, and, you know, using a lot of, you know, computer science techniques that are well-known, first in first out, auto change propagation. So really focusing on, on driving that automation for customers. The other piece of it that struck me, and I said this in my intro is, you know, using clustering technology; clustering technology has been around for a long time, as has in-memory database technology, but applying it and integrating it. My sense is that's really about scale and performance and taking advantage, of course, of cloud being able to drive that scale instantaneously. But talk about scale a little bit, and your philosophy there, and why so much emphasis on scalability? >> Right. So what we want to do is to provide the fastest engine for running analytics. And that's why we do the processing in memory. Now, one of the issues with in-memory processing is that the amount of data which you're processing has to reside in memory. So when we went out with version one, given the footprint of the MySQL customers we spoke to, we thought 12 terabytes of processing at any given point in time would be adequate. In the very first month, we got feedback that customers wanted us to process larger amounts of data with HeatWave, because they really liked what they saw and they wanted us to increase it. So we have increased the limit from 12 terabytes to 32 terabytes, and in order to do so, we now have a HeatWave cluster which can be up to 64 nodes. That's one aspect on the query processing side. Now, to answer the question as to why so much of an emphasis: it's because this is something which is extremely difficult to do in query processing. As you scale the size of the cluster, the kind of algorithms, the kind of techniques you have to use so that you achieve a very high efficiency with a very large cluster, these are things which are not easy to do. What we want to make sure is that as customers have the need for, like, processing a larger amount of data, one of the big benefits customers get by using a cloud as opposed to on-premise is that they don't need to worry about provisioning gear ahead of time. So if they have more data, with the cloud they should be able to, like, process more data easily. But when they process more data, they should expect the same kind of performance. So the same kind of efficiency on a larger data size, similar to a smaller data size. And this is something traditionally other database vendors have struggled to provide.
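The description above is of Autopilot as being driven by statistics collected on both the data and the queries (compilation time, execution time, and so on). The toy sketch below is not Oracle's algorithm or interface; it only illustrates the general idea of feeding per-query execution statistics into a simple capacity model that then recommends a cluster size.

```python
# Conceptual illustration only: collect per-query statistics and fit a
# trivial capacity estimate to recommend a node count for a new workload.
# This is NOT MySQL Autopilot's actual model or interface.
from dataclasses import dataclass
from statistics import mean

@dataclass
class QueryStats:
    gb_scanned: float
    compile_ms: float   # collected to mirror the statistics mentioned above;
                        # unused by this crude estimator
    exec_ms: float
    nodes: int

history = [
    QueryStats(120, 35, 9_000, 2),
    QueryStats(480, 40, 18_500, 4),
    QueryStats(960, 42, 19_200, 8),
]

def throughput_per_node(s: QueryStats) -> float:
    """GB processed per node-second, a crude capacity estimate."""
    return s.gb_scanned / (s.nodes * s.exec_ms / 1000)

def recommend_nodes(gb_to_scan: float, target_seconds: float) -> int:
    """Pick a cluster size that should meet the latency target."""
    capacity = mean(throughput_per_node(s) for s in history)
    return max(1, round(gb_to_scan / (capacity * target_seconds)))

print(recommend_nodes(gb_to_scan=2_000, target_seconds=20))
```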
So sort of as, as the volume grows, you're not able to take as much advantage, or you're less efficient. And you're saying you've, you've largely solved that problem. I mean, people always talk about scaling linearly and I'm always skeptical, but, but you're saying, especially in database, that's been a challenge, but you're, you're saying you've solved that problem largely. >> Right. What I would say is that we have a system which is very efficient, more efficient than, like, you know, any of the databases we are aware of. So as you said, perfect scaling is hard to achieve, right? I mean, that's the theoretical limit, a scale factor of one. That's very hard to achieve. We are now close to 90% efficiency for end-to-end queries. This is not for primitives. This is for end-to-end queries, both on industry benchmarks as well as real world customer workloads. So this 90% efficiency, we believe, is very good and higher than what many of the vendors provide. >> Yeah. Right. So you're not, not just primitives, the whole end-to-end cycle. I think 0.89 was the number that I, that I saw, just to be technically correct there, but that's pretty, pretty good. Now let's talk about the benchmarks. It wouldn't be an Oracle announcement without some, some benchmarks. So you laid out today in your announcement some, some pretty outstanding performance and price performance numbers. I feel like it's a badge of honor; if, if Oracle calls me out, I feel like I'm doing well. You called out Snowflake and Amazon. So maybe you could go over those benchmark results so we can peel the onion on that a little bit. >> Right. So the first thing to realize is that we want to have benchmarks which are credible, right? So it's not the case that we have taken some specific, unique workloads where HeatWave shines. That's not the case. What we did was we took an industry standard benchmark, which is, like, you know, TPC-H. And furthermore, we had a third party, independent firm do this comparison. So let's first compare with Snowflake. On a 10 terabyte TPC-H benchmark, HeatWave is seven times faster at one fifth the cost. So with this, it is 35 times better price performance compared to Snowflake, right? So seven times faster than Snowflake and one fifth of the cost. So HeatWave is 35 times better price performance compared to Snowflake. Not just that, Snowflake only does analytics, whereas MySQL HeatWave does both transactional processing and analytics. It's not a specialized database; MySQL HeatWave is a general purpose database which can do both OLTP and analytics, whereas Snowflake can only do analytics. So to be 35 times more efficient than a database service which is specialized only for one case, which is analytics, we think is pretty good. So that's a comparison with Snowflake.
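The roughly 90% (or 0.89) figure discussed earlier in this exchange is a scaling-efficiency number. One common way to compute such a figure is observed speedup divided by the ideal, linear speedup implied by the increase in node count; the runtimes below are made up purely to show the arithmetic, since the conversation does not spell out the exact methodology used.

```python
# Scaling efficiency: how much of the ideal (linear) speedup is realized
# when the cluster grows. 1.0 would be perfect scaling; ~0.89-0.90 is the
# range quoted in the conversation. Runtimes here are illustrative only.
def scaling_efficiency(t_small: float, n_small: int,
                       t_large: float, n_large: int) -> float:
    speedup = t_small / t_large   # observed speedup
    ideal = n_large / n_small     # speedup if scaling were perfectly linear
    return speedup / ideal

# Example: a query set takes 1,000 s on 8 nodes and 140 s on 64 nodes.
print(round(scaling_efficiency(1000, 8, 140, 64), 2))  # -> 0.89
```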
It's a specialized database only for analytics. So customers need to have two databases, one for transaction processing, one for analytics, with Redshift. Whereas with MySQL HeatWave, it's a single database for both. And it is so much faster than Redshift. That again, we feel is a pretty remarkable. >> Now, you mentioned earlier, but you're not, you're obviously I presume not, you're not cheating here. You're not including the cost of the transaction processing data store. Right? We're, we're, we're ignoring that for a minute. Ignoring that you got to, you know, move data, ETL, we're just talking about like the like, is that correct? >> Right. This is extremely fair and extremely generous comparison. Not only are we not including the cost of the source OLTP database, the cost in the case of the Redshift I'm talking about is the cost for one year paid full upfront. So this is a best pricing. A customer can get for one year subscription with Redshift. Whereas when I'm talking about HeatWave, this is the pay as you go price. And the third aspect is, this is Redshift when it is completely fully optimized. I don't think anyone else can get much better numbers on Redshift than we have. Right? So fully optimized configuration of Redshift looking at the one year pre-pay cost of Redshift and not including the source database. >> Okay. And then speaking of transaction processing database, what about Aurora? You mentioned earlier that that you're seeing a lot of migration from Aurora. Can you add some color to that? >> Right. And this is a very interesting question in a, it was a very interesting observation for us when we did the launch back in December, we had numbers on four terabytes, TPC-H with Aurora. So if you look at the same benchmark, four terabytes TPC-H HeatWave is 1,400 times faster than Aurora at half the cost, which makes it 2,800 times better price performance compared to Aurora. So very good number. What we have found is that many customers who are running on Aurora started migrating to HeatWave, and these customers had a mix of transaction processing and analytics, and the data sizes are much smaller. Even those customers found that there was a significant improvement in performance and reduction in costs when they migrated to HeatWave. In the announcement today, many of the references are those class of customers. So for that, we decided to choose another benchmark, which is called CH-benchmark on a much smaller data size. And even for that, even for mixed workloads, we find that HeatWave is 18 times faster, provides over a hundred times higher throughput than Aurora at 42% of the cost. So in terms of price performance gain, it is much, much better than Aurora, even for mixed workloads. And then if you consider a pure OLTP assume you have an application, which has only OLTP, which by the way is like, you know, a very uncommon scenario, but even if that were be the case, in that case for pure OLTP only, MySQL HeatWave is at par with Aurora, with respect to performance, but MySQL HeatWave costs 42% of Aurora. So the point is that in the whole spectrum, pure OLTP, mixed workloads or analytics, MySQL HeatWave is going to be fraction of the cost of a Aurora. And depending upon your query workload, your acceleration can be anywhere from 14,000 times to 18 times faster. >> That's interesting. I mean, you've been at this for the better part of a decade, because my sense is that HeatWave is all about OLAP. And that's really where you've put the majority, if not all of the innovation. 
But you're saying, just coming into December's announcement, you were at par with Aurora in a rare, but, but hypothetical, pure OLTP workload. >> That is correct. >> Yeah. >> Well, you know, I got to push you still on this, because a lot of times these benchmarks are a function of the skills of the individuals performing these tests, right? So can I, if I want to run them myself, you know, if you publish these benchmarks, what if a customer wants to replicate these tests and try to see if they can tune up, you know, Redshift better than you guys did? >> Sure. So I'll say a couple of things. One is, all the numbers which I'm talking about, both for Redshift and Snowflake, were done by a third party firm, and for all the benchmarks we are talking about, TPC-H as well as CH-benchmark, all the scripts are published on GitHub. So anyone is very welcome. In fact, we encourage customers to go and try it for themselves, and they will find that the numbers are absolutely as advertised. In fact, we had a couple of companies, like, in the last several months, who went to GitHub, downloaded our TPC-H scripts, and reported that the performance numbers they were seeing with HeatWave were actually better than we had published back in December. And the reason was that since December we had new code which was running. So our numbers were actually better than advertised. So all the benchmarks are published. They are all available on GitHub. You can go to the HeatWave website on oracle.com and get the link for it. And we welcome anyone to come and try these numbers for themselves. >> All right. Good. Great. Thank you for that. Now you mentioned earlier that you were somewhat surprised, not surprised that you got customers migrating from on-prem databases, but you also saw migration from other clouds. How do you expect that trend to go with regard to this new announcement? Do you have any sense as to how that's going to go? >> Right. So one of the big changes from December to now is that we have now focused quite a bit on mixed workloads. So in the past, in December, when we first went out, HeatWave was designed primarily for analytics. Now, what we have found is that there's a very large class of customers who have mixed workloads and who also have smaller data sizes. We have now introduced a lot of technology, including things like auto scheduling and a definite improvement in performance, where MySQL HeatWave is a far superior solution compared to Aurora or other databases out there, both in terms of performance as well as price for these mixed workloads: better latency, better throughput, lower costs. So we expect this trend of migration to MySQL HeatWave to accelerate. So we are seeing customers migrate from Azure. We are seeing customers migrate from GCP, and by far the most migrations we are seeing are from AWS. So I think, based on the new features and technologies we have announced today, this migration is going to accelerate. >> All right, last question. So I said earlier, it's, it's, it seems like you're applying what are generally well understood and proven technologies, like in-memory, like clustering, to solve these problems. And I think about, you know, the, the things that you're doing, and I wonder, you know, I mean, these things have been around for a while, so why has this type of approach not been introduced by others previously? >> Right. Well, so the main thing is it takes time, right? We designed HeatWave from the ground up for the cloud.
And as a part of that, we had to invent new algorithms for distributed query processing for the cloud. We put in the hooks for machine learning processing right from the ground up. So this has taken us close to a decade. It's been hundreds of person-years of investment, dozens of patents which have gone in. Another aspect is it takes talent from different areas. So we have, like, you know, people working in distributed query processing, we have people who have a lot of, like, background in machine learning. And then, given that we are, like, the custodians of the MySQL database, we have a very rich set of customers we can reach out to, to get feedback from them as to what the pain points are. So it's a culmination of these things: we have the talent, the customer base, and the time; we spent close to a decade to make this thing work. So that's what it takes. It takes time, patience, and talent. >> A lot of software innovation, bringing together, as I said, that hardware and software strategy. Very interesting. Nipun, thanks so much. I appreciate your, your insights and coming on this video exclusive. >> Thank you, Dave. Thank you for the opportunity. >> My pleasure. And thank you for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (bright music)

Published Date : Aug 10 2021
