UNLIST TILL 4/2 - Optimizing Query Performance and Resource Pool Tuning
>> Jeff: Hello, everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is titled "Optimizing Query Performance and Resource Pool Tuning." I'm Jeff Ealing, I lead Vertica marketing. I'll be your host for this breakout session. Joining me today are Rakesh Bankula and Abhi Thakur, Vertica product technology engineers and key members of the Vertica customer success team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of your slides. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now let's get started. Over to you, Rakesh. >> Rakesh: Thank you, Jeff. Hello, everyone. My name is Rakesh Bankula. Along with me, we have Bir Abhimanu Thakur. We both are going to cover this session on "Optimizing Query Performance and Resource Pool Tuning." In this session, we are going to discuss query optimization, how to review query plans and how to get the best query plans with proper projection design. Then we discuss resource allocations and how to find resource contention. And we will continue the discussion with important use cases. In general, to successfully complete any activity or any project, the main thing it requires is a plan.
Plan for that activity: what to do first, what to do next, what are the things you can do in parallel? The next thing you need is the best people to work on that project as per the plan. So, the first thing is a plan and the next is the people, or resources. If you overload the same set of people or resources by involving them in multiple projects or activities, or if any person or resource is sick, it is going to impact the overall completion of that project. The same analogy we can apply to query performance too. For a query to perform well, it needs two main things. One is the best query plan and the other is the best resources to execute the plan. Of course, in some cases resource contention, whether from the system side or within the database, may slow down the query even when we have the best query plan and best resource allocations. We are going to discuss each of these three items a little more in depth. Let us start with the query plan. The user submits the query to the database and the Vertica optimizer generates the query plan. In generating query plans, the optimizer uses the statistics information available on the tables. So, statistics play a very important role in generating good query plans. As a best practice, always maintain up-to-date statistics. If you want to see how the query plan looks, add the EXPLAIN keyword in front of your query and run that query. It displays the query plan on the screen. The other option is the DC_EXPLAIN_PLANS table. It saves all the explain plans of the queries run on the database. So, once you have a query plan, check it to make sure the plan is good. The first thing I would look for is NO STATISTICS or PREDICATE OUT OF RANGE. If you see any of these, it means a table involved in the query does not have up-to-date statistics. It is now time to update the statistics. The next things to check in explain plans are broadcasts and re-segments around the Join operators, and GLOBAL RESEGMENT GROUPS around the Group By operators.
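As a minimal sketch of the statistics and plan checks just described (the table name `store_sales` is hypothetical, and `path_line` is the plan-text column of `dc_explain_plans`; column names can vary slightly by Vertica version):

```sql
-- Keep statistics fresh so the optimizer costs plans correctly.
SELECT ANALYZE_STATISTICS('public.store_sales');

-- Prefix the query with EXPLAIN to print its plan without running it,
-- then scan the output for NO STATISTICS or PREDICATE OUT OF RANGE.
EXPLAIN
SELECT ss_store_sk, COUNT(*)
FROM public.store_sales
GROUP BY ss_store_sk;

-- Review plans of queries that already ran via the data collector table.
SELECT path_line
FROM dc_explain_plans
WHERE path_line ILIKE '%NO STATISTICS%';
```

The same `dc_explain_plans` scan works for spotting `RESEGMENT` and `BROADCAST` lines across historical workload, not just the query in front of you.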
These indicate that during the runtime of the query, data flows between the nodes over the network, which slows down the query execution. As far as possible, prevent such operations. How to prevent this, we will discuss in the projection design topic. Regarding the join order, check which tables are used on the inner side and the outer side, and how many rows each side is processing. For the inner side, picking the table having the smaller number of rows is good because, as the join inner is built in memory, the smaller the number of rows, the faster it is to build the hash table, and it also helps in consuming less memory. Then check if the plan is picking a query-specific projection or the default projections. If the optimizer is ignoring a query-specific projection and picking the default super projection, we will show you how to use query-specific hints to force the plan to pick query-specific projections, which helps in improving the performance. Okay, here is one example query plan of a query trying to find the number of products sold from a store in a given state. This query has joins between the store table and the product table, and a group by operation to find the count. So, first look for NO STATISTICS, particularly around the storage access path. This plan is not reporting any NO STATISTICS. This means statistics are up to date and the plan is good so far. Then check what projections are used. This is also around the storage access path. For the join order check, we have a Hash Join in Path ID 4, having its inner in Path ID 6 processing 60,000 rows and the outer in Path ID 7 processing 20 million rows. The inner side processing fewer records is good. This helps in building the hash table quicker and using less memory. Check for any broadcasts or re-segments: the joins in Path ID 4 and also Path ID 3 both have inner broadcasts, where inners having 60,000 records are broadcast to all nodes in the cluster. This could impact the query performance negatively. These are some of the main things which we normally check in explain plans.
Till now, we have seen how to get good query plans: we need to maintain up-to-date statistics, and we also discussed how to review query plans. Projection design is the next important thing in getting good query plans, particularly in preventing broadcasts and re-segments. Broadcasts and re-segments happen during a join operation when the existing segmentation clause of the projections involved in the join does not match the join columns in the query. These operations cause data flow over the network and negatively impact query performance, particularly when they transfer millions or billions of rows. These operations also cause the query to acquire more memory, particularly in network send and receive operations. One can avoid these broadcasts and re-segments with proper projection segmentation. Say a join is involved between two fact tables T1, T2 on column I; then segment the projections on these T1, T2 tables on column I. These are also called identically segmented projections. In other cases, where a join is involved between a fact table and a dimension table, replicating, that is creating an unsegmented projection on the dimension table, will help avoid broadcasts and re-segments during the join operation. During a group by operation, GLOBAL RESEGMENT GROUPS causes data flow over the network. This can also slow down query performance. To avoid these global resegment groups, create the segmentation clause of the projection to match the group by columns in the query. In the previous slides, we have seen the importance of the projection segmentation clause in preventing broadcasts and re-segments during join operations. The order by clause of projection design plays an important role in picking the join method. We have two important join methods, Merge Join and Hash Join. Merge Join is faster and consumes less memory than Hash Join. The query plan uses Merge Join when both projections involved in the join operation are segmented and ordered on the join keys.
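The identically-segmented and replicated-dimension patterns described here can be sketched as projection DDL (table and column names `t1`, `t2`, `i`, `dim_table`, `dim_key` follow the talk's example and are illustrative):

```sql
-- Both fact-table projections segmented on the join column i,
-- so the join stays local to each node with no resegment/broadcast.
CREATE PROJECTION t1_seg_i AS
SELECT * FROM t1
ORDER BY i
SEGMENTED BY HASH(i) ALL NODES;

CREATE PROJECTION t2_seg_i AS
SELECT * FROM t2
ORDER BY i
SEGMENTED BY HASH(i) ALL NODES;

-- For a small dimension table, keep a full copy on every node instead,
-- so no broadcast is needed at join time.
CREATE PROJECTION dim_unseg AS
SELECT * FROM dim_table
ORDER BY dim_key
UNSEGMENTED ALL NODES;

-- Populate the new projections before relying on them.
SELECT START_REFRESH();
```

After the refresh completes, re-run EXPLAIN on the join: the inner broadcast or resegment lines should disappear from the plan.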
In all other cases, the Hash Join method will be used. In the case of the group by operation too, we have two methods: Group By Pipelined and Group By Hash. Group By Pipelined is faster and consumes less memory compared to Group By Hash. The requirement for Group By Pipelined is that the projection must be segmented and ordered by the grouping columns. In all other cases, the Group By Hash method will be used. So far, we have seen the importance of stats and projection design in getting good query plans. As statistics are based on estimates over a sample of data, it is possible in very rare cases that the default query plan may not be as good as you expected, even after maintaining up-to-date stats and good projection design. To work around this, Vertica provides some query hints to force the optimizer to generate even better query plans. Here are some example join hints which help in picking the join method and how to distribute the data, that is broadcast or re-segment on the inner or outer side, and also which group by method to pick. The table-level hints help force picking a query-specific projection, or skipping any particular projection, in a given query. All these hints are available in the Vertica documentation. Here are a few general hints useful in controlling how to load data, WITH clause materialization, et cetera. We are going to discuss some examples on how to use these query hints. Here is an example on how to force a query plan to pick Hash Join. The hint used here is JTYPE, which takes the arguments H for Hash Join and M for Merge Join. Where to place this hint: just after the JOIN keyword in the query, as shown in the example here. Another important join hint is JFMT, the join format type hint. This hint is useful in the case when the join columns are long varchars. By default, Vertica allocates memory based on the column data type definition, not by looking at the actual data length in those columns.
Say, for example, a join column is defined as varchar(1000), varchar(5000) or more, but the actual length of the data in this column is, say, less than 50 characters. Vertica is going to use more memory to process such columns in the join and also slow down the join processing. The JFMT hint is useful in this particular case. The JFMT(V) parameter makes Vertica use the actual length of the join column. As shown in the example, using the JFMT(V) hint helps in reducing the memory requirement for this query, and it executes faster too. The DISTRIB hint helps in forcing how the inner or outer side of the join operator is distributed, using broadcast or re-segment. DISTRIB takes two parameters: the first is the outer side and the second is the inner side. As shown in the example, DISTRIB(A,R) after the JOIN keyword in the query forces a re-segment of the inner side of the join, while leaving the outer side to the optimizer to choose the distribution method. The GBYTYPE hint helps in forcing the query plan to pick Group By Hash or Group By Pipelined. As shown in the example, GBYTYPE(HASH), used just after the GROUP BY clause in the query, forces this query to pick Group By Hash. So far, we discussed the first part of query performance, which is query plans. Now we are moving on to discuss the next part of query performance, which is resource allocation. The Resource Manager allocates resources to queries based on the settings on resource pools. The main resources which resource pools control are memory, CPU and query concurrency. The important resource pool parameters which we have to tune according to the workload are memory size, planned concurrency, max concurrency and execution parallelism. The query budget plays an important role in query performance. Based on the query budget, the query planner allocates worker threads to process the query request. If the budget is very low, the query gets fewer threads, and if that query needs to process huge data, then the query takes longer to execute because of fewer threads, that is, less parallelism.
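Putting the hints just discussed together in one sketch (tables `t1`/`t2` and their columns are illustrative; hint placement follows the talk: join hints right after JOIN, the group-by hint right after GROUP BY):

```sql
-- JTYPE: force a hash join (H); M would force a merge join.
SELECT *
FROM t1 JOIN /*+JTYPE(H)*/ t2 ON t1.i = t2.i;

-- JFMT(V): size join-key memory by actual (variable) data length
-- rather than the declared varchar width -- useful for wide varchars.
SELECT *
FROM t1 JOIN /*+JFMT(V)*/ t2 ON t1.long_key = t2.long_key;

-- DISTRIB(outer, inner): A = any (optimizer's choice) for the outer,
-- R = resegment the inner instead of broadcasting it.
SELECT *
FROM t1 JOIN /*+DISTRIB(A,R)*/ t2 ON t1.i = t2.i;

-- GBYTYPE: force the hash-based group-by method.
SELECT i, COUNT(*)
FROM t1
GROUP BY /*+GBYTYPE(HASH)*/ i;
```

Each hint constrains only the operator it annotates; the rest of the plan is still chosen by the optimizer.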
In the other case, if the budget is very high and the query executed on the pool is a simple one, it results in a waste of resources; that is, the query which acquires the resources holds them till it completes execution, and those resources are not available to other queries. Every resource pool has its own query budget. This query budget is calculated based on the memory size and planned concurrency settings on that pool. The RESOURCE_POOL_STATUS table has a column called QUERY_BUDGET_KB, which shows the budget value of a given resource pool. The general recommendation for the query budget is to be in the range of 1 GB to 10 GB. We can do a few checks to validate whether the existing resource pool settings are good or not. The first thing we can check is whether queries are getting resource allocations quickly, or waiting in the resource queues longer. You can check this in the RESOURCE_QUEUES table on a live system multiple times, particularly during your peak workload hours. If a large number of queries are waiting in resource queues, it indicates the existing resource pool settings are not matching your workload requirements. Maybe the memory allocated is not enough, or the max concurrency settings are not proper. If queries are not spending much time in resource queues, it indicates resources are allocated to meet your peak workload, but you are still not sure whether you have over- or under-allocated the resources. For this, check the budget in the RESOURCE_POOL_STATUS table to find any pool having a budget way larger than 8 GB or much smaller than 1 GB. Both over-allocation and under-allocation of budget are bad for query performance. Also check in the DC_RESOURCE_ACQUISITIONS table to find any transaction acquiring additional memory during query execution. This indicates the originally given budget is not sufficient for the transaction. Having too many resource pools is also not good. How do you create resource pools, or tune existing resource pools? Resource pool settings should match the present workload.
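The three checks above can be sketched as queries against the monitoring tables named in this section (exact column names in `dc_resource_acquisitions` may vary slightly by Vertica version):

```sql
-- Per-pool budget: flag pools far outside the ~1-10 GB guidance.
SELECT pool_name,
       query_budget_kb / 1024.0 / 1024.0 AS budget_gb
FROM resource_pool_status
ORDER BY budget_gb DESC;

-- Run repeatedly during peak hours: transactions queued for resources.
SELECT pool_name,
       COUNT(*) AS queued_queries,
       MAX(queue_entry_timestamp) AS newest_entry
FROM resource_queues
GROUP BY pool_name;

-- Transactions that grabbed memory beyond their initial grant,
-- i.e. the budget was not sufficient for them.
SELECT node_name, transaction_id, memory_kb
FROM dc_resource_acquisitions
ORDER BY memory_kb DESC
LIMIT 20;
```

An empty result from the `resource_queues` check during peak load is the healthy case; then the budget check tells you whether that health comes at the price of over-allocation.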
You can categorize the workload into well-known workload and ad hoc workload. In the case of well-known workload, you will be running the same queries regularly, like daily reports having the same set of queries processing a similar size of data, or daily ETL jobs, et cetera. In this case, queries are fixed. Depending on the complexity of the queries, you can further divide them into low, medium and high resource-required pools. Then try setting the budget to 1 GB, 4 GB, 8 GB on these pools by allocating the memory and setting the planned concurrency as per your requirement. Then run the queries and measure the execution time. Try a couple of iterations by increasing and then decreasing the budget to find the best settings for your resource pools. For the category of ad hoc workload, there is no control over the number of users going to run queries concurrently, or the complexity of queries users are going to submit. For this category, we cannot estimate, in advance, the optimum query budget. So for this category of workload, we have to use cascading resource pool settings, where a query starts on one pool and then, based on the runtime cap set, query resources move to a secondary pool. This helps prevent smaller queries from waiting a long time for resources while a big query is consuming all resources and running for a long time. Some important resource pool monitoring tables: on a live system, you can query the RESOURCE_QUEUES table to find any transaction waiting for resources. You will also find on which resource pool the transaction is waiting, how long it is waiting, and how many queries are waiting on the pool. RESOURCE_POOL_STATUS gives info on how many queries are in execution on each resource pool, how much memory is in use, and additional info. For resource consumption of a transaction which has already completed, you can query DC_RESOURCE_ACQUISITIONS to find how much memory a given transaction used per node.
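The cascading setup for ad hoc workload described above can be sketched like this (pool names, sizes and the 30-second cap are illustrative, not from the talk):

```sql
-- Long-running pool that cascaded queries land in.
CREATE RESOURCE POOL adhoc_long
  MEMORYSIZE '40G'
  PLANNEDCONCURRENCY 4;

-- Entry pool: small budget, short runtime cap. Queries exceeding
-- the cap move to adhoc_long instead of being killed, so quick
-- queries never wait behind long ones here.
CREATE RESOURCE POOL adhoc_short
  MEMORYSIZE '8G'
  PLANNEDCONCURRENCY 8
  RUNTIMECAP '30 seconds'
  CASCADE TO adhoc_long;

-- Point the ad hoc users at the entry pool.
ALTER USER report_user RESOURCE POOL adhoc_short;
```

With this shape, the entry pool's budget (8G / 8 = about 1 GB per query) stays in the recommended range, while genuinely heavy queries get the larger pool's resources after cascading.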
The DC_RESOURCE_POOL_MOVE table shows info on which transactions moved from the primary to the secondary pool in the case of cascading resource pools. DC_RESOURCE_REJECTIONS gives info on which node, and for which resource, a given transaction failed or was rejected. The QUERY_CONSUMPTION table gives info on how much CPU, disk and network resources a given transaction utilized. Till now, we discussed query plans and how to allocate resources for better query performance. It is possible for queries to perform slower when there is any resource contention. This contention can be within the database or from the system side. Here are some important system tables and queries which help in finding resource contention. The DC_QUERY_EXECUTIONS table gives information at the transaction level on how much time it took for each execution step: like how much time it took for planning, resource allocation, actual execution, etc. If the time taken is more in planning, which is mostly due to catalog contention, you can query the DC_LOCK_RELEASES table as shown here to see how long transactions are waiting to acquire the global catalog lock (GCLX), and how long transactions are holding GCLX. Normally, GCLX acquire and release should be done within a couple of milliseconds. If transactions are waiting a few seconds to acquire GCLX, or holding GCLX longer, it indicates some catalog contention, which may be due to too many concurrent queries, or due to long-running queries, or system services holding catalog mutexes and causing other transactions to queue up. A query is given here on these particular system tables which will help you further narrow down the contention. You can query the SESSIONS table to find any long-running user queries. You can query the SYSTEM_SERVICES table to find any service, like analyze row counts, moveout, or mergeout operations, running for a long time. The DC_SLOW_EVENTS table gives info on what slow events are happening.
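A sketch of the GCLX check described here, using `dc_lock_releases` (the `start_time`/`grant_time`/`time` arithmetic is the pattern the talk refers to; verify the column set on your Vertica version):

```sql
-- Wait-to-acquire and hold duration for the global catalog lock.
-- Values well above a few milliseconds suggest catalog contention.
SELECT node_name,
       transaction_id,
       mode,
       grant_time - start_time AS wait_to_acquire,
       "time" - grant_time     AS hold_duration
FROM dc_lock_releases
WHERE object_name = 'Global Catalog'
ORDER BY wait_to_acquire DESC
LIMIT 20;
```

If the top rows show multi-second waits, cross-check `sessions` and `system_services` at those timestamps to see which query or service was holding the lock.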
You can also query the SYSTEM_RESOURCE_USAGE table to find any particular system resource, like CPU, memory, disk I/O or network throughput, saturating on any node. It is possible that one slow node in the cluster could impact the overall performance of queries negatively. To identify any slow node in the cluster, we use a "select 1" query with the KV hint. This KV1 query just executes on the initiator node. On a good node, the KV1 query returns within 50 milliseconds. As shown here, you can use a script to run this select KV1 query on all nodes in the cluster. You can repeat this test multiple times, say five to 10 times, then review the time taken by this query on all nodes in all iterations. If there is any one node taking more than a few seconds, compared to other nodes taking just milliseconds, then something is wrong with that node. To find what is going on with the node which took more time for the KV1 query, run perf top. Perf top gives info on the hottest functions in which the system is spending most of its time. These functions can be kernel functions or Vertica functions, as shown here. Based on where the system is spending most of its time, we will get some clue on what is going on with that node. Abhi will continue with the remaining part of the session. Over to you, Abhi. >> Bir: Hey, thanks, Rakesh. My name is Abhimanu Thakur, and today I will cover some performance cases which we addressed recently in our customer clusters, applying the best practices just shown by Rakesh. Now, to fix a performance problem, it is always easier if we know where the problem is. And to understand that, like Rakesh just explained, the life of a query has different phases. The phases are pre-execution, which is the planning; execution; and post-execution, which is releasing all the acquired resources. This is something very similar to a plane taking a flight path, where it prepares itself, gets onto the runway, takes off and lands back onto the runway.
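A sketch of the slow-node checks just described. The KV hint syntax shown is our reading of the "KV1 query" the talk mentions, and the `system_resource_usage` column names should be verified against your Vertica version:

```sql
-- Run against each node in turn (e.g. vsql -h <node>): the KV hint
-- makes the statement execute only on the initiator node, so a healthy
-- node answers in tens of milliseconds.
SELECT /*+KV*/ 1;

-- Then compare recent resource saturation across nodes: one node
-- standing out on CPU, memory or network points at the slow node.
SELECT node_name,
       average_cpu_usage_percent,
       average_memory_usage_percent,
       net_rx_kbytes_per_second,
       net_tx_kbytes_per_second
FROM system_resource_usage
ORDER BY end_time DESC
LIMIT 10;
```

Repeating the first query five to ten times per node, as the talk suggests, filters out one-off scheduling noise before you reach for perf top.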
So, let's prepare our flight to take off. This is a use case from a dashboard application where the dashboard fails to refresh once in a while, and there is a batch of queries which are sent by the dashboard to the Vertica database. Let's see how we can find where the failure is, or where the slowness is. To review the dashboard application, as these are very short queries, we need to look at the historical executions, and from the historical executions we basically try to find where exactly the time is spent, whether it is in the planning phase, the execution phase or in post-execution, and whether they are pretty consistent all the time, which means the plan has not changed between executions. This will also help us determine what the memory used is and whether the memory budget is ideal. As just shown by Rakesh, the budget plays a very important role. So DC_QUERY_EXECUTIONS is the one-stop place to go and find your timings, whether the time is in planning, in plan execution or in an abandoned plan. So, looking at the queries which we received and the times from the scrutinize, we find the average execution time is pretty consistent, and there is some extra time spent in the planning phase, which is a sign of resource contention. This is a very simple matrix which you can follow to find if you have issues: system resource contention, catalog contention and resource pool contention, all of which are contributed to mostly by concurrency. Let's see if we can drill down further to find the issue in these dashboard application queries. So, to get the concurrency, we pull out the number of queries issued, what the max concurrency achieved is, what the number of threads is, what the overall percentage of query duration is, and all this data is available in the VAdvisor report. As soon as you provide a scrutinize, we generate the VAdvisor report, which helps us get a complete insight into this data.
So, based on this, we definitely see there is very high concurrency, and most of the queries finish in less than a second, which is good. There are queries which go beyond 10 seconds and over a minute, but still, the cluster definitely had concurrency. What is more interesting to find from this graph is... I'm sorry if this is not very readable, but the topmost line, what you see, is the selects, and the bottom two or three lines are the creates, drops and alters. So definitely this cluster is having a lot of DDLs and DMLs being issued, and what they contribute is: a large volume of DDLs and DMLs causes catalog contention. So, we need to make sure that the batch we are sending is not causing too much catalog contention in the cluster, which delays the complete planning phase, as the system resources are busy. At the same time, what we also noticed is analyze statistics running every hour, which is very aggressive, I would say. It should be scheduled on a need-only basis: if a table has not changed drastically, do not schedule analyze statistics for that table. A couple more settings, as shared by Rakesh, definitely play an important role in the moveout and mergeout operations. So now, let's look at the budget of the query. The budget of the resource pool is currently at about 2 GB, and that is the 75th-percentile memory. Queries are definitely executing at that same budget, which is good and bad, because these are dashboard queries; they don't need such a large amount of memory. The max memory, as shown here from the captured data, is about 20 GB, which is pretty high. So what we did is, we found that there are some queries run by a different user who is running in the same dashboard pool, which should not be happening, as the dashboard pool is something like a premium pool, or kind of a private runway to run your own private jet. And why I made that statement is, as you see, resource pools are like runways.
You have different resource pools, different runways, to cater to different types of planes, different types of flights. So, if you manage your resource pools properly, your flights can take off and land easily. So, from this we determined that the budget is something which could be tuned well. Now let's look... As we saw in the previous numbers, there were some resource waits, and like I said, resource pools are like your runways. So if you have everything ready and your plane is waiting just to get onto the runway to take off, you would definitely not want to be in that situation. So in this case, what we found is that there are quite a large number of queries which have waited in the pool, and they waited almost a second, which can be avoided by modifying the amount of resources allocated to the resource pool. So in this case, we increased the resource pool to provide more memory, which is 80 GB, and reduced the budget from 2 GB to 1 GB, also making sure that the planned concurrency is increased to match the memory budget, and we also moved out the user who was running in the dashboard query pool. Another thing we found in the resource pool is the execution parallelism, and how this affects things and what changes with the number. Execution parallelism is something which allocates to the plan the number of threads, network buffers and all the data around it, before the query even executes. In this case, this pool had AUTO, which defaults to the core count. And dashboard queries, not being too high on resources, need to just get what they want. So we reduced the execution parallelism to eight, and this drastically brought down the number of threads which were needed, without changing the execution time. So, this is all about how we could tune before the query takes off. Now, let's see what path we followed. This is the exact path we followed.
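The pool changes described in this case can be sketched as one ALTER statement (the pool name `dashboard_pool` and user name are illustrative; the sizes are the ones quoted in the talk):

```sql
-- 80 GB pool with planned concurrency 80 gives a ~1 GB budget per
-- query (budget = MEMORYSIZE / PLANNEDCONCURRENCY), down from 2 GB.
ALTER RESOURCE POOL dashboard_pool
  MEMORYSIZE '80G'
  PLANNEDCONCURRENCY 80
  EXECUTIONPARALLELISM 8;  -- fixed at 8 instead of AUTO (core count)

-- Move the unrelated user out of the premium dashboard pool.
ALTER USER batch_user RESOURCE POOL general;
```

Lowering EXECUTIONPARALLELISM only trims the per-query thread and buffer setup; as the talk notes, for small dashboard queries it cut thread usage drastically without changing execution time.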
Hope this diagram helps; these are the things which we took care of. So, tune your resource pool; adjust your execution parallelism based on the type of queries the resource pool is catering to; match your memory sizes; and don't be too aggressive on your resource budget. And see if you can replace your staging tables with temporary tables, as they help a lot in reducing the DDLs and DMLs and reducing the catalog contention; and in the places where you cannot replace them, use truncate table. Reduce your analyze statistics frequency, and if possible, follow the best practices for the moveout and mergeout operations. So moving on, let's let our query take a flight and see what best practices can be applied here. This is another, I would say, very classic example: a query which has been running fine suddenly starts to fail. And the most common error seen here is "inner join did not fit in memory." What does this mean? It basically means the inner table is trying to build a large hash table, and it needs a lot of memory to fit. There are only two reasons why it could fail: one, your statistics are outdated, or two, your resource pool is not letting you grab all the memory needed. So in this particular case, the resource pool is not allowing all the memory it needs. As you see, the query acquired 180 GB of memory, and it failed. In most cases, you should be able to figure out the issue by looking at the explain plan of the query, as shared by Rakesh earlier. But in this case, if you see, the explain plan looks awesome. There's no other operator, like an inner broadcast or outer re-segment or something like that; it's just a hash join. So, looking further, we look into the projections. The inner is an unsegmented projection; the outer is segmented. Excellent. This is what is needed. So in this case, what we would recommend is to dig further into what the cost is. The cost to scan these rows seems to be pretty high.
There's the DC_QUERY_EXECUTIONS table and the execution engine profiles in Vertica, which help you drill down to the smallest amount of time and memory, and the number of rows used, by individual operators per path. So, while looking into the execution engine profile details for this query, we found the time is spent on the join operator, and it's the join inner hash table build time which is taking a huge amount of time. It's basically just waiting for the lower operators, scan and storage union, to pass the data. So, how can we avoid this? Clearly, we can avoid it by creating a segmented projection instead of an unsegmented projection on such a large table with one billion rows. Following the practice to create the projection... So this is the projection which was created, and it was segmented on the column which is part of the select clause over here. Now, the plan still looks nice and clean, and the execution of this query now takes 22 minutes 15 seconds, and the most important thing you see is the memory: it executes in just 15 GB of memory. So basically, what was done is: the unsegmented projection, which acquires a lot of memory per node, is replaced, so the query is not taking that much memory and executes faster, as the data has been divided across the nodes, with each node executing only a small share of the data. But the customer was still not happy, as 22 minutes is still high. Let's see if we can tune it further to make the cost go down and the execution time go down. So, looking at the explain plan again, like I said, most of the time you can look at the plan and say, "What's going on?" In this case, there is an inner re-segment. So, how can we avoid the inner re-segment? We can avoid most of the re-segments just by creating projections which are identically segmented, which means your inner and outer both have the same segmentation clause.
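The fix described here can be sketched as follows. Table and column names (`big_inner`, `sales_id`) are illustrative, and the `execution_engine_profiles` counter names should be checked against your version:

```sql
-- Replace the unsegmented projection on the billion-row inner table
-- with one segmented on the join/select column, so each node builds
-- a hash table over only its own share of the rows.
CREATE PROJECTION big_inner_seg AS
SELECT * FROM big_inner
ORDER BY sales_id
SEGMENTED BY HASH(sales_id) ALL NODES;

SELECT START_REFRESH();  -- populate the new projection

-- Confirm where the time and memory went, per operator.
SELECT operator_name, counter_name, SUM(counter_value) AS total
FROM execution_engine_profiles
WHERE counter_name IN ('execution time (us)', 'memory allocated (bytes)')
GROUP BY operator_name, counter_name
ORDER BY total DESC;
```

In the talk's case this is what took the query from failing at 180 GB down to finishing in about 15 GB, since the per-node hash table shrank by roughly the node count.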
The same was done over here. As you see, the projection is now segmented on sales ID and also ordered by sales ID, which helps the query execution drop from 22 minutes to eight minutes, and now the memory acquired just equals the pool budget, which is 8 GB. And if you see, the most important thing is that the hash join is converted into a merge join, being ordered by the segmentation clause and also the join clause. So, what this gives us is a new global data distribution, and by changing the projection design, we have improved the query performance. But there are times when you cannot change the projection design and there's nothing much which can be done. In all those cases, as even in the first case of the inner join failure, Vertica replans with the join spill to disk operator. You could let the system degrade by acquiring 180 GB for however many minutes the query runs, or you could simply use this hint to run the query with join spill in the very first go, letting the system have all the resources it needs. So, use hints wherever possible, and spill to disk is definitely your option where there are no other options for you to change your projection design. Now, there are times when you find that you have gone through your query plan, you have gone through every other thing, and there's not much you see anywhere, but you look at the query and you feel, "Now, I think I can rewrite this query." And what makes you decide that is, you look at the query and you see that the same table has been accessed several times in your query plan: how can I rewrite this query to access my table just once? And in this particular use case, a very simple one, a table is scanned three times for several different filters and then unioned. In Vertica, union is kind of a costly operator, I would say, because union does not know the amount of data which will be coming from the underlying query.
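The merge-join condition described here, both sides identically segmented and ordered on the join key, can be sketched like this (table names `fact_a`/`fact_b` are illustrative; `sales_id` follows the talk's example):

```sql
-- Identical segmentation keeps the join local (no inner resegment);
-- identical ORDER BY on the join key lets the optimizer pick a merge
-- join, which streams pre-sorted rows instead of building a hash table.
CREATE PROJECTION fact_a_seg AS
SELECT * FROM fact_a
ORDER BY sales_id
SEGMENTED BY HASH(sales_id) ALL NODES;

CREATE PROJECTION fact_b_seg AS
SELECT * FROM fact_b
ORDER BY sales_id
SEGMENTED BY HASH(sales_id) ALL NODES;

-- After refreshing, EXPLAIN on this join should show MERGE JOIN
-- with no resegment on either side.
SELECT a.sales_id, b.amount
FROM fact_a a JOIN fact_b b ON a.sales_id = b.sales_id;
```

This is the combination that, in the talk's case, cut the runtime from 22 minutes to eight and brought memory down to the 8 GB pool budget.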
So we allocate a lot of resources to keep the union running. Now, we could simply replace all these unions with a simple OR clause. The simple OR clause changes the complete plan of the query, and the cost drops down drastically. And now the optimizer almost knows the exact number of rows it has to process. So, look at your query plans and see if you could make the execution engine, the profile or the optimizer do a better job just by doing some small rewrites. If there are some tables frequently accessed, you could even use a WITH clause, which will do an early materialization and get you better performance, or the union replacement which I just shared; and replace your left joins with right joins, and use the hints shared earlier for changing your hash table types. This is the exact path we have followed in this presentation. Hope this presentation was helpful in addressing, or at least finding, some performance issues in your queries or in your clusters. So, thank you for listening to our presentation. Now we are ready for Q&A.