“Orphaned” LOB Segments

October 31, 2014

Recently, after doing a reorganization in an Oracle 11.2.0.2.0 DB, I came upon a LOB segment that did not link to a LOB.
That is, there was an entry in USER_SEGMENTS and USER_OBJECTS, but no corresponding entry in USER_TABLES, USER_LOBS, or USER_INDEXES.
It turned out that the segment belonged to a dropped object sitting in the recycle bin (USER_RECYCLEBIN).  It appears that the DROP command did not “remove” all references to the dropped object.
The moral of the story is that if something is missing, look for it in the trash (the recycle bin, that is). Some things never change…
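
For future reference, a quick way to spot such segments is to compare USER_SEGMENTS against USER_LOBS and then look the stragglers up in the recycle bin. This is only a sketch of the checks I ran, not a definitive script:

-- LOB segments that have no matching entry in USER_LOBS
select s.segment_name, s.segment_type, s.bytes
from   user_segments s
where  s.segment_type = 'LOBSEGMENT'
and    not exists (select 1
                   from   user_lobs l
                   where  l.segment_name = s.segment_name);

-- check whether the "orphan" is really a dropped object waiting in the recycle bin
select object_name, original_name, type, droptime
from   user_recyclebin
where  object_name = '&segment_name';

If the segment does show up in USER_RECYCLEBIN, purging the recycle bin (or FLASHBACK TABLE ... TO BEFORE DROP) makes the apparent orphan go away.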


A Patch for JUST_STATS Package

August 11, 2014

An alert user recently notified me about a problem with the JUST_STATS package. It appears that it does not work properly with partitions. So, click here to download the first patch.
Please note that you are free to review and modify the code of the package.


When Oracle would Choose an Adaptive Execution Plan (part 1)

July 2, 2014

Understanding when a useful feature, such as Adaptive Execution Plans, would fire is of crucial importance for the stability of any DB system.

There are a few documents explaining how this feature works, including some that dig deep into the details:

http://kerryosborne.oracle-guy.com/2013/11/12c-adaptive-optimization-part-1/

http://www.dbi-services.com/index.php/blog/entry/oracle-12c-adaptive-plan-inflexion-point

http://scn.sap.com/community/oracle/blog/2013/09/24/oracle-db-optimizer-part-vii–looking-under-the-hood-of-adaptive-query-optimization-adaptive-plans-oracle-12c

However, I was not able to find a comprehensive technical document about when this feature fires.
My previous post included some general thoughts about the issue. The simple explanations there, while plausible in general, do not fully match the messy reality.

In this post I will try to identify when a SQL plan switches from non-Adaptive (NL/HJ) to Adaptive and back. Once I have the “switching” point, I’ll review the 10053 trace just before and just after the switch.
Tables T1 and T2 were created with this script. T2 has 1 million records and T1 has one.
In a loop, I insert a single record into T1 and run this query:

select    t2.id,
          t1.str,
          t2.other
from      t1,
          t2
where     t1.id = t2.id
and       t1.num = 5
and       <UNIQUE NUMBER> = <UNIQUE NUMBER>   (ensures that there is no plan reuse)

Initially the SQL used Nested Loops, but after inserting 5 or 6 records it switched to an Adaptive Execution Plan. We have a “switch” point!!!
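
For reference, the driver of the experiment was along these lines. This is only a sketch: T1/T2 and their columns come from the script linked above, and the loop bound is illustrative:

declare
    c sys_refcursor;
begin
    for i in 1 .. 20 loop
        -- grow the small table by one row per iteration
        insert into t1 (id, str, num) values (i, 'x' || i, 5);
        commit;

        -- the "i = i" literal makes every statement text unique,
        -- so each iteration is hard parsed and re-optimized
        open c for
            'select t2.id, t1.str, t2.other
               from t1, t2
              where t1.id = t2.id
                and t1.num = 5
                and ' || i || ' = ' || i;
        close c;
    end loop;
end;
/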

 

The 10053 trace for the Non-Adaptive (NL) plan looks like this:

--------------------------------------------------------------------------------
Searching for inflection point (join #1) between 1.00 and 139810.13
AP: Computing costs for inflection point at min value 1.00
...
DP: Costing Nested Loops Join for inflection point at card 1.00
 NL Join : Cost: 5.00  Resp: 5.00  Degree: 1
...
DP: Costing Hash Join for inflection point at card 1.00
...
Hash join: Resc: 135782.55  Resp: 135782.55  [multiMatchCost=0.00]
...
DP: Costing Nested Loops Join for inflection point at card 139810.13
...
 NL Join : Cost: 279679.55  Resp: 279679.55  Degree: 1
...
DP: Costing Hash Join for inflection point at card 139810.13
...
Hash join: Resc: 290527.15  Resp: 290527.15  [multiMatchCost=0.00]
DP: Found point of inflection for NLJ vs. HJ: card = -1.00
--------------------------------------------------------------------------------

 

 

The 10053 trace for the Adaptive plan looks like this:

--------------------------------------------------------------------------------
Searching for inflection point (join #1) between 1.00 and 155344.59
+++++
DP: Costing Nested Loops Join for inflection point at card 1.00
 NL Join : Cost: 5.00  Resp: 5.00  Degree: 1
...
DP: Costing Hash Join for inflection point at card 1.00
 Hash join: Resc: 135782.55  Resp: 135782.55  [multiMatchCost=0.00]
+++++
DP: Costing Nested Loops Join for inflection point at card 155344.59
...
 NL Join : Cost: 310755.84  Resp: 310755.84  Degree: 1
...
DP: Costing Hash Join for inflection point at card 155344.59
...
 Hash join: Resc: 290536.21  Resp: 290536.21  [multiMatchCost=0.00]
+++++
DP: Costing Nested Loops Join for inflection point at card 77672.80
 NL Join : Cost: 155380.42  Resp: 155380.42  Degree: 1
DP: Costing Hash Join for inflection point at card 77672.80
 Hash join: Resc: 290392.89  Resp: 290392.89  [multiMatchCost=0.00]
+++++
DP: Costing Nested Loops Join for inflection point at card 116508.69
 NL Join : Cost: 233068.13  Resp: 233068.13  Degree: 1
DP: Costing Hash Join for inflection point at card 116508.69
 Hash join: Resc: 290464.05  Resp: 290464.05  [multiMatchCost=0.00]
+++++
DP: Costing Nested Loops Join for inflection point at card 135926.64
 NL Join : Cost: 271911.98  Resp: 271911.98  Degree: 1
DP: Costing Hash Join for inflection point at card 135926.64
 Hash join: Resc: 290500.13  Resp: 290500.13  [multiMatchCost=0.00]
+++++
(skipped iterations)
DP: Found point of inflection for NLJ vs. HJ: card = 145228.51
--------------------------------------------------------------------------------
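
For anyone who wants to reproduce this, the traces above were obtained with the standard 10053 event. A minimal sketch (any hard parse of the test query while the event is set will do):

alter session set events '10053 trace name context forever, level 1';

explain plan for
select t2.id, t1.str, t2.other
from   t1, t2
where  t1.id = t2.id
and    t1.num = 5;

alter session set events '10053 trace name context off';

-- location of the trace file for the current session
select value
from   v$diag_info
where  name = 'Default Trace File';

Searching that file for “inflection point” yields the sections quoted above.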

The relationship between cardinality and cost for the non-adaptive plan (NL) is shown here:

[Figure: NonAdaptivePlan - cardinality vs. cost for the non-adaptive plan]

The respective graph for the adaptive plan is here:

[Figure: AdaptivePlan - cardinality vs. cost for the adaptive plan]

In this situation, Oracle went with an adaptive plan because it was able to find an inflection point.

One important factor that determines whether an inflection point is found is the range in which the inflection point is searched. In the non-adaptive trace above, the NL cost (279679) is still below the HJ cost (290527) even at the upper bound of the range (cardinality 139810), so the two cost lines never cross between 1 and 139810 and the search returns card = -1.00. Had the range been wider, the CBO would probably have found an inflection point.

That means that in some cases the decision to use adaptive plans depends on what cardinality range it would use when searching for the inflection point.

It should also be noted that there are situations where Oracle would decide not to use adaptive plans without going through the motions of looking for an inflection point.

All in all, lots of additional research is needed to answer those questions…


An Oracle Distributed Query CBO Bug Finally Fixed (After 7 Years)

April 9, 2014

Optimizing distributed queries in Oracle is inherently more difficult. The CBO not only has to account for the additional resources associated with distributed processing, such as networking, but also has to get reliable table/column statistics for the remote objects.

It is well documented that Oracle has (had) trouble passing histogram information for distributed queries ( http://jonathanlewis.wordpress.com/2013/08/19/distributed-queries-3/ ).

In addition, Oracle was not able to pass selectivity information for “IS NULL/NOT NULL” filters via a DB link, even though the number of records with NULL values is already recorded in the NUM_NULLS column of DBA_TAB_COLUMNS…
As a result of this bug, every query with an IS NULL predicate against a remote table ended up with a cardinality estimate of 1, even if there were many NULL records in the table.
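
For context, the statistic the CBO should be relying on is perfectly visible over the link. A quick sanity check (a sketch, using the table, column, and link names from the example below):

-- how many rows, and how many NULLs, the remote column really has
select t.num_rows, c.num_nulls
from   all_tables@db_link_loop      t,
       all_tab_columns@db_link_loop c
where  t.owner       = c.owner
and    t.table_name  = c.table_name
and    c.table_name  = 'TAB1'
and    c.column_name = 'NUM_NULLABLE';

A correct estimate for the “num_nullable is null” filter would be roughly NUM_NULLS, not the Rows = 1 shown in the plan below.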

PLAN_TABLE_OUTPUT
SQL_ID  djpaw3d54d5uq, child number 0
-------------------------------------
select 
       * 
from 
       tab1@db_link_loop a , dual  
where 
       a.num_nullable is null

Plan hash value: 3027949496

-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     8 (100)|          |        |      |
|   1 |  NESTED LOOPS      |      |     1 |    11 |     8   (0)| 00:00:01 |        |      |
|   2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |        |      |
|   3 |   REMOTE           | TAB1 |     1 |     9 |     6   (0)| 00:00:01 | DB_LI~ | R->S |
-------------------------------------------------------------------------------------------

Remote SQL Information (identified by operation id):
----------------------------------------------------

   3 - SELECT ID,NUM,NUM_NULLABLE FROM TAB1 A 
WHERE NUM_NULLABLE IS  NULL (accessing 'DB_LINK_LOOP')

The behavior was due to MOS Bug 5702977 (Wrong cardinality estimation for “is NULL” predicate on a remote table).

Fortunately, the bug is fixed in 12c and 11.2.0.4. A patch is available for 11.2.0.3 on certain platforms.


When Oracle would Choose an Adaptive Execution Plan – General Thoughts

March 31, 2014

Adaptive Execution Plans is one of the most exciting new features in Oracle 12c.
This post is not about how this feature works or its benefits, but rather about when Oracle would choose to use it.

In general, the Oracle CBO would use Adaptive Execution Plans if it is not sure which standard join (NL or HJ) is better:

  • If, at SQL parse time, the Oracle CBO estimates that one of the sets to join is “significantly” smaller than the other, where “significantly” is defined internally by the CBO, and there are appropriate indexes, then Oracle would opt for Nested Loops. The CBO has probably figured out that the cost of NL is so much better than the cost of HJ that it is not worth the effort of using an adaptive execution plan.
  • If one of the sets is only “slightly” smaller than the other, where “slightly” is defined internally by the CBO, then the performance of the two standard join types would be similar, so Oracle would typically decide to go with an Adaptive Plan and postpone the decision until run time. The CBO has probably seen that the cost of NL is “close” to the cost of HJ, so it is worth the effort of using an adaptive execution plan.
  • Finally, when the two sets have “similar” sizes, where “similar” is defined internally by the CBO, then Oracle would go with a Hash Join. The CBO has probably figured out that the cost of HJ is so much better than the cost of NL that it is not worth the effort of using an adaptive execution plan.

The figure below illustrates that behavior:

[Figure: adaptive_exec_plans - join method choice as a function of the relative sizes of the two sets]
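
A simple way to see which way the CBO actually went for a given statement is to look at the cursor after the first execution. A sketch (IS_RESOLVED_ADAPTIVE_PLAN is NULL for non-adaptive cursors; substitute the SQL_ID of the statement of interest):

-- was the plan adaptive, and has the final join method been resolved yet?
select sql_id, child_number, is_resolved_adaptive_plan
from   v$sql
where  sql_id = '&sql_id';

-- show the full plan, including the inactive (switched-off) rows of the adaptive part
select *
from   table(dbms_xplan.display_cursor('&sql_id', null, format => '+adaptive'));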


RMOUG 2014

February 11, 2014

I was very excited to present at RMOUG 2014 – my first time at that conference.

Unfortunately, I got sick and I had to cancel.

The name of the presentation was:
Working with Confidence: How Sure Is the Oracle CBO About Its Cardinality Estimates, and Why Does It Matter?

Here are the Powerpoint and the White paper.


More on Dynamic Sampling Level 11 (AUTO) in Oracle 12c

January 29, 2014

In a previous post I showed an example of how the new AUTO dynamic sampling level (11) consumed significant resources for a very simple SQL statement.

Here, I’ll try to find out why.
First surprise!
The 10053 trace does not capture information about dynamic sampling (DS) level 11. It works quite fine for the other levels (0-10) of dynamic sampling, though.
Here is what the dynamic sampling section of a 10053 trace file looks like for levels 0 to 10:

————————————————————————————

SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for TAB3[TAB3]
SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

*** 2014-01-29 11:41:02.678
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 3).

*** 2014-01-29 11:41:02.678
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0), COUNT(DISTINCT C3) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("TAB3") FULL("TAB3") NO_PARALLEL_INDEX("TAB3") */ 1 AS C1, CASE WHEN DECODE("TAB3"."ID",0,1)=567 THEN 1 ELSE 0 END AS C2, DECODE("TAB3"."ID",0,1) AS C3 FROM "JORDAN"."TAB3" SAMPLE BLOCK (0.829542 , 1) SEED (1) "TAB3") SAMPLESUB

*** 2014-01-29 11:41:03.597
** Executed dynamic sampling query:
level : 3
sample pct. : 0.829542
actual sample size : 9783
filtered sample card. : 0
orig. card. : 999999
block cnt. table stat. : 3737
block cnt. for sampling: 3737
max. sample block cnt. : 32
sample block cnt. : 31
unique cnt. C3 : 0
min. sel. est. : 0.01000000
** Using single table dynamic sel. est. : 0.00007085
Table: TAB3 Alias: TAB3
Card: Original: 999999.000000 Rounded: 71 Computed: 70.85 Non Adjusted: 70.85

————————————————————————————

The section includes the query issued by dynamic sampling, along with lots of valuable information.

The same section for a query with dynamic sampling level 11 looks strikingly different:
————————————————————————————

SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for TAB3[TAB3]
SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
Table: TAB3 Alias: TAB3
Card: Original: 999999.000000 >> Single Tab Card adjusted from:9999.990000 to:1.000000
Rounded: 1 Computed: 1.00 Non Adjusted: 9999.99

————————————————————————————
Even though we can see that the cardinality was adjusted from 9999.99 to 1 as a result of the dynamic sampling, there are no details about the DS queries that did the actual sampling. I was not able to see DS summary information either.

Since the 10053 trace file did not give me the information I needed, I decided to look elsewhere – in the V$ tables.
After I flushed the shared pool and ran the query from the previous post, I issued the following SQL to get the DS SQL related to the statement:

select * from v$sql where sql_text like '%TAB3%'

Second surprise! The query returned a few records related to DS.
——————————————————————–

SELECT /* DS_SVC */ /*+ cursor_sharing_exact dynamic_sampling(0) 
...
FROM "TAB3" SAMPLE BLOCK(21.4075, 8) SEED(1) "TAB3" WHERE ...


SELECT /* DS_SVC */ /*+ cursor_sharing_exact dynamic_sampling(0) 
...
FROM "TAB3" SAMPLE BLOCK(42.8151, 8) SEED(2) "TAB3" WHERE ...

SELECT /* DS_SVC */ /*+ cursor_sharing_exact dynamic_sampling(0) 
...
FROM "TAB3" SAMPLE BLOCK(85.6302, 8) SEED(3) "TAB3" WHERE ....

———————————————————————

They are similar, with the exception of the SAMPLE BLOCK and SEED arguments. It appears that the optimizer first tried to get sampling data using a small sample size, but failed. Then it tried again with a larger sample size, and failed again. Finally, on the third attempt, with ~85% sampling, it succeeded.

Generally speaking, the new policy to retry sampling until the desired result is achieved is good. The algorithm needs to be tweaked a bit to account for situations like the one I described in a previous post though.
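
To get a rough idea of how much work those recursive sampling queries did, something like this can be run right after the parse. A sketch (DS_SVC is the comment marker visible in the statements above):

-- aggregate the cost of the recursive dynamic sampling (DS_SVC) queries
select count(*)                        as ds_queries,
       sum(executions)                 as execs,
       round(sum(elapsed_time)/1e6, 2) as elapsed_sec,
       sum(buffer_gets)                as buffer_gets
from   v$sql
where  sql_text like 'SELECT /* DS_SVC */%';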

