
What Are Column Group Statistics?

May 2, 2019

In this blog post I am going to talk about how the DB2 optimizer can underestimate or overestimate join ratios, or the cardinalities resulting from join operations, in a few situations. This topic is very broad and needs multiple other blog entries to cover in detail. Here, at a very high level, I will cover an interesting problem that I ran into recently and how I managed to fix it with Column Group Statistics (CGS).

At a high level, even marginal cardinality misestimates can lead to incorrect and/or expensive access paths.

How Does This Problem Manifest In The Access Path?

There are many places where you may find symptoms of the issue. Some of them are listed below:

  • Cardinalities coming out of operators are significantly different from the actual rows that arrive from those operators. The image below shows how to check the cardinality for an operator in a db2exfmt output; your format may differ depending on which options you pass to db2exfmt. (A minimal command sequence for producing a db2exfmt output is sketched after this list.)

  • The access path shows nested loop joins where you would expect a hash join, or vice versa, due to cardinality under/overestimation.
  • Use of a wrong index to satisfy a join or non-join predicate.
  • You are joining multiple tables, usually more than three, and you don’t have supporting indexes on the join predicates. This is a common problem in most RDBMSs, not just DB2.
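For reference, this is roughly the sequence I use to produce a db2exfmt output for a query. It is only a sketch: the database name SAMPLE and the file names are placeholders, and it assumes the explain tables already exist in your database.

db2 connect to SAMPLE
db2 set current explain mode explain
db2 -tvf my_query.sql
db2 set current explain mode no
db2exfmt -d SAMPLE -1 -o my_query_plan.txt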

Identifying The Problem:

If you suspect the cardinality estimates are off, you should be able to prove it in one of the following ways.

  • Run parts of the SQL, with the joins and local predicates that correspond to parts of the access plan, and compare the actual row counts with the cardinalities shown in the db2exfmt output.
  • A second and better way is to use DB2’s db2caem utility, although it is important to note that it creates a temporary activity event monitor behind the scenes to capture the actual row counts at each step of the explain. (A sample invocation is sketched at the end of this section.)
  • You see that the rows estimated to be returned at the end of the explain plan are significantly different from the rows that the SQL is actually returning. An example of how the top of the explain plan looks is shown below.

[Screenshot: top portion of the db2exfmt explain output showing the estimated rows returned, total cost, and I/O cost]

As per the above part of the explain plan, an estimated 3,819 rows are being returned, the total cost of the plan is 267,020, and it requires an estimated 44,521 I/Os to satisfy the SQL request. However, in my specific case this is far from reality: the SQL is fetching close to 5 million (5,000,000) rows.

Note: The explain output should be read bottom up, left to right.
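For completeness, a db2caem invocation along these lines captures the actual row counts for a single statement; the database name SAMPLE is a placeholder and the statement text is passed with -st. Keep in mind that db2caem executes the statement to capture the actuals, so on a long-running query expect it to take roughly as long as the query itself.

db2caem -d SAMPLE -st "SELECT COUNT(*) FROM TABLE1 INNER JOIN TABLE2 ON UPPER(TABLE1.USER_ID) = UPPER(TABLE2.USER_NAME) AND TABLE1.RANDOM_WHSECD = TABLE2.RANDOM_WHSECD"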

This Specific Case:

So I know that in my case 3.8 K rows are nowhere close to the 5 million rows I am anticipating. Below are the things that I tried before I had any luck solving this problem.

  • Checked if all the statistics are current.
  • Re-ran the statistics thinking that it would magically solve the problem 🙂
  • Created statistical views on the parts of the SQL that I thought were complex for DB2 to optimize, but still no luck.
  • Used OPTGUIDELINES to force the optimizer to take a certain path, i.e. use a certain index and a certain type of join. It worked, but I did not want this to be a permanent solution. (A sketch of such a guideline follows this list.)
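For illustration only, an inline optimization guideline looks roughly like the following. The index name IDX_T3_WHSE_CARTON is a made-up placeholder, not an object from this system, and depending on your configuration you may need optimization profiles/guidelines enabled for the embedded comment to take effect.

SELECT TABLE1.CNTR_NBR, TABLE3.TASK_HDR_ID
FROM TABLE1
INNER JOIN TABLE3
ON
TABLE3.RANDOM_WHSECD = TABLE1.RANDOM_WHSECD AND
TABLE3.CARTON_NBR = TABLE1.CNTR_NBR
/* <OPTGUIDELINES>
<IXSCAN TABLE='TABLE3' INDEX='IDX_T3_WHSE_CARTON'/>
</OPTGUIDELINES> */;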

My SQL’s FROM clause and JOIN/local predicates look like the following.

FROM
TABLE1
INNER JOIN TABLE2
ON
UPPER(TABLE1.USER_ID) = UPPER(TABLE2.USER_NAME) AND
TABLE1.RANDOM_WHSECD = TABLE2.RANDOM_WHSECD AND
TABLE2.IS_PHYSICALLY_DELETED = 0

INNER JOIN TABLE3
ON
TABLE3.RANDOM_WHSECD = TABLE1.RANDOM_WHSECD AND
TABLE3.CARTON_NBR = TABLE1.CNTR_NBR AND
TABLE3.IS_PHYSICALLY_DELETED = 0

INNER JOIN TABLE4
ON
TABLE4.TASK_HDR_ID = TABLE3.TASK_HDR_ID AND
TABLE4.RANDOM_WHSECD = TABLE3.RANDOM_WHSECD AND
TABLE4.IS_PHYSICALLY_DELETED = 0

INNER JOIN TABLE5
ON
TABLE5.TC_LPN_ID = TABLE1.CNTR_NBR AND
TABLE5.RANDOM_WHSECD = TABLE1.RANDOM_WHSECD AND
TABLE5.PICKER_USERID = TABLE1.USER_ID AND
TABLE5.IS_PHYSICALLY_DELETED = 0

As I mentioned above, multiple tables are involved in the joins, and both low-cardinality and high-cardinality columns appear in the join predicates. However, not all of the columns participating in the joins are indexed, by consensus among the DBAs, since such an index might only benefit this one specific query and not others. Too many indexes on your table(s), especially when the indexed columns are highly volatile (i.e. frequently updated), can slow down your write performance and spike CPU utilization.

Fixing The Problem:

How would you fix the problem the right way, by helping the optimizer with better statistics? RUNSTATS does a fantastic job gathering distribution, frequency and quantile statistics; however, it does not accurately capture join ratios, i.e. how many records there are in TABLE2 whose ID is equal to a given ID from TABLE1, and so on.

The DB2 optimizer can handle this problem if you join the two tables on columns covered by an index on both tables, because DB2 collects cardinality statistics for up to the first 4 key columns of an index. But what if you are using more than 4 columns of that index as part of your local or join predicates? The answer is that your cardinality estimates could get out of whack.

See the screen shot below from SYSCAT.INDEXES, where you can see that DB2 collects cardinality metrics for the first 4 columns of the index and also the cardinality for all the key columns combined (FULLKEYCARD). We can deduce that the usefulness of an index for searching diminishes beyond the first 4 columns, as the optimizer will not have the statistics it needs to make sound decisions. It also makes the case for not creating extremely wide indexes for searching, as they won’t help you much with search performance and could potentially increase write latency.

[Screenshot: SYSCAT.INDEXES showing FIRSTKEYCARD through FIRST4KEYCARD and FULLKEYCARD]
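If you want to check these numbers for your own indexes, a query along these lines against SYSCAT.INDEXES shows the key cardinalities DB2 keeps per index; TABLE1 here stands in for whichever table you are investigating.

SELECT SUBSTR(INDNAME, 1, 30) AS INDNAME,
       SUBSTR(COLNAMES, 1, 60) AS COLNAMES,
       FIRSTKEYCARD, FIRST2KEYCARD, FIRST3KEYCARD, FIRST4KEYCARD, FULLKEYCARD
FROM SYSCAT.INDEXES
WHERE TABNAME = 'TABLE1'
WITH UR;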

All these years I had missed out on a cool trick to give the optimizer the information it needs on a group of columns. When I learned that DB2 Column Group Statistics (CGS) can be used to give the optimizer combined statistics on non-indexed columns, and also to collect statistics on groups of more than 4 columns, I was fascinated to try it out. The difference it made in this case was astounding.

I collected the column group statistics using the commands below. Column groups must be enclosed in parentheses, and you can specify multiple column groups for a given table.

RUNSTATS ON TABLE TABLE1 ON ALL COLUMNS AND COLUMNS
(
(RANDOM_WHSECD,CNTR_NBR),
(CNTR_NBR,RANDOM_WHSECD,USER_ID,IS_PHYSICALLY_DELETED),
(USER_ID,RANDOM_WHSECD)
)
WITH DISTRIBUTION AND DETAILED INDEXES ALL ALLOW WRITE ACCESS;


RUNSTATS ON TABLE TABLE2 ON ALL COLUMNS AND COLUMNS
(
(USER_NAME, RANDOM_WHSECD, IS_PHYSICALLY_DELETED),
(USER_NAME, RANDOM_WHSECD)
)
WITH DISTRIBUTION AND DETAILED INDEXES ALL ALLOW WRITE ACCESS;


RUNSTATS ON TABLE TABLE3 ON ALL COLUMNS AND COLUMNS
(
(RANDOM_WHSECD,CARTON_NBR),
(RANDOM_WHSECD,CARTON_NBR,IS_PHYSICALLY_DELETED),
(RANDOM_WHSECD,TASK_HDR_ID),
(RANDOM_WHSECD,TASK_HDR_ID,IS_PHYSICALLY_DELETED),
(TC_ORDER_ID,RANDOM_WHSECD)
)
WITH DISTRIBUTION AND DETAILED INDEXES ALL ALLOW WRITE ACCESS;


RUNSTATS ON TABLE TABLE4 ON ALL COLUMNS AND COLUMNS
(
(TASK_HDR_ID,RANDOM_WHSECD),
(TASK_HDR_ID,RANDOM_WHSECD,IS_PHYSICALLY_DELETED)
)
WITH DISTRIBUTION AND DETAILED INDEXES ALL ALLOW WRITE ACCESS;


RUNSTATS ON TABLE TABLE5 ON ALL COLUMNS AND COLUMNS
(
(LPN_ID, RANDOM_WHSECD),
(ORDER_ID, RANDOM_WHSECD),
(TC_LPN_ID,RANDOM_WHSECD,PICKER_USERID,IS_PHYSICALLY_DELETED)
)
WITH DISTRIBUTION AND DETAILED INDEXES ALL ALLOW WRITE ACCESS;


Now I reran the explain, and you can see how the cardinality estimates changed. I am only showing the numbers at the top of the tree, but as you can imagine, the cardinalities for all the operators improved significantly and moved closer to reality.

3.7 million rows is a lot closer to 5 million than 3.8 K rows. With these cardinalities my query run time dropped from about 20 minutes to less than 1 minute. In other words, column groups may help you collect some advanced statistics to improve the cardinality estimates in the following scenarios.

  • When a number of tables are involved in the joins and not every join column is part of an index.
  • When more than 4 columns are involved in the join from each table.
  • Or when you have columns that are dependent on each other, i.e. their individual statistics do not make sense in the real world. An example would be (Country, State, County): a county with the same name can exist in multiple countries and states. As a result we should collect statistics on all 3 of those columns together so that Db2 can better estimate the cardinalities, rather than letting it consider the individual stats for each column and multiply them to get the effective cardinality (see the sketch after this list).
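As a quick sketch of that last scenario, collecting a column group on a hypothetical ADDRESS table with COUNTRY, STATE and COUNTY columns would follow the same RUNSTATS pattern used above.

RUNSTATS ON TABLE ADDRESS ON ALL COLUMNS AND COLUMNS
(
(COUNTRY, STATE, COUNTY)
)
WITH DISTRIBUTION AND DETAILED INDEXES ALL ALLOW WRITE ACCESS;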

Another interesting point to note is that the COST and IO of the SQL actually increased when I collected the column group statistics. This is further proof that cost is not the only indicator of how efficiently the SQL will execute.


SQL To Capture Existing Column Groups:

The SQL below will provide you with all the tables and their column groups within a database. If you have not used the LISTAGG function before, you should check it out; it will save you from having to write a lot of recursive SQL.

SELECT COLS.COLGROUPID, TABSCHEMA, TABNAME, SUBSTR(COLNAMES, 1, 50) AS COLNAMES,
       COLGROUPCARD, NUMFREQ_VALUES, NUMQUANTILES
FROM (SELECT SUBSTR(CGC.TABSCHEMA, 1, 10) AS TABSCHEMA,
             SUBSTR(CGC.TABNAME, 1, 20) AS TABNAME,
             CGC.COLGROUPID,
             LISTAGG(CAST(CASE WHEN CGC.ORDINAL = 1 THEN ' ' || TRIM(COLNAME)
                               WHEN CGC.ORDINAL > 1 THEN ', ' || TRIM(COLNAME)
                          END AS VARCHAR(100)), '')
               WITHIN GROUP (ORDER BY CGC.ORDINAL) AS COLNAMES
      FROM SYSCAT.COLGROUPS CG, SYSCAT.COLGROUPCOLS CGC
      WHERE CG.COLGROUPID = CGC.COLGROUPID
      GROUP BY CGC.TABSCHEMA, CGC.TABNAME, CGC.COLGROUPID
      ORDER BY CGC.TABSCHEMA, CGC.TABNAME, CGC.COLGROUPID) AS COLS,
     SYSCAT.COLGROUPS CG
WHERE COLS.COLGROUPID = CG.COLGROUPID
WITH UR;

The output from the above SQL statement looks like the sample below.

I have tried using this on some of my other SQL statements that were not performing optimally due to poor cardinality estimates, but it did not have the same kind of impact. Still, after what I learned from this little experiment, I will definitely add CGS to my repertoire.

Please share your experiences with CGS and the other scenarios in which it can be used.

PS: If you run RUNSTATS on a table that has column groups without specifying the column groups, the column group statistics will be lost. So you need to create a process or script to easily add back or remove the column groups.
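As a starting point for such a script, and building on the catalog query above, something along these lines could generate one RUNSTATS command per table that re-specifies its existing column groups. This is only a minimal sketch: it assumes the generated command fits in VARCHAR(4000) and it does not handle delimited (mixed-case) identifiers.

-- Inner query: one row per column group with its comma-separated column list
-- Outer query: one generated RUNSTATS command per table, listing all of its column groups
SELECT CAST('RUNSTATS ON TABLE ' || TRIM(TABSCHEMA) || '.' || TRIM(TABNAME) ||
            ' ON ALL COLUMNS AND COLUMNS (' ||
            LISTAGG('(' || COLNAMES || ')', ', ') WITHIN GROUP (ORDER BY COLGROUPID) ||
            ') WITH DISTRIBUTION AND DETAILED INDEXES ALL ALLOW WRITE ACCESS;'
       AS VARCHAR(4000)) AS RUNSTATS_CMD
FROM (SELECT CGC.TABSCHEMA, CGC.TABNAME, CGC.COLGROUPID,
             LISTAGG(TRIM(CGC.COLNAME), ',') WITHIN GROUP (ORDER BY CGC.ORDINAL) AS COLNAMES
      FROM SYSCAT.COLGROUPCOLS CGC
      GROUP BY CGC.TABSCHEMA, CGC.TABNAME, CGC.COLGROUPID) AS CG_COLS
GROUP BY TABSCHEMA, TABNAME
WITH UR;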

 
