
Oracle AUTOTRACE Fundamentals

Let's discuss the Oracle AUTOTRACE utility and how it helps us in our day-to-day SQL work.
 

How does the AUTOTRACE utility help us?

 
AUTOTRACE is a very useful utility that comes in handy for viewing the query plan as well as detailed execution statistics.
 

What can it offer us for better query planning?

 
Depending on the mode in which we run it, the AUTOTRACE utility can provide:

  • a. the query explain plan
  • b. the returned rows
  • c. execution statistics

 

Who can run the AUTOTRACE utility?

 
To use this feature of Oracle, a user must be granted the PLUSTRACE database role. A PLAN_TABLE must also exist in the user's schema.
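
To quickly check whether your session already meets both prerequisites, you can query the data dictionary (a minimal sketch; SESSION_ROLES and USER_TABLES are standard Oracle dictionary views):

SQL> SELECT role FROM session_roles WHERE role = 'PLUSTRACE';
SQL> SELECT table_name FROM user_tables WHERE table_name = 'PLAN_TABLE';

If either query returns no rows, follow the steps below.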
 

Steps to Create the PLUSTRACE Role

 
To create the PLUSTRACE role, we need to execute the script available at the following path:

ORACLE_HOME/sqlplus/admin/plustrce.sql

For example, C:\oraclexe\app\oracle\app\oracle\product\11.2.0\server\sqlplus\admin\plustrce.sql is the location of the plustrce.sql file on my Oracle XE installation.
 

What AUTOTRACE options are available?

 
a) SET AUTOTRACE ON|OFF

ON – the generated report includes the optimizer execution path as well as the SQL statement's execution statistics.
OFF – the default; turns the report off.

b) SET AUTOTRACE ON EXPLAIN

The AUTOTRACE report shows only the explain plan (optimizer execution path); no execution statistics are included.

c) SET AUTOTRACE ON STATISTICS

The AUTOTRACE report includes only the SQL statement execution statistics; no execution plan is shown.

d) SET AUTOTRACE TRACEONLY

It is just like SET AUTOTRACE ON, but it suppresses the printing of the user's query output, if any. It is usually the best option to use, since the result set does not flood the screen.

The report is generated after each successful SQL DML statement (SELECT, DELETE, UPDATE, or INSERT). It is useful for monitoring and tuning the performance of these statements.
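
As a quick illustration (a sketch only, using the ORAUSER.EMP sample table that appears later in this post), the options can be switched at any time within a SQL*Plus session:

SQL> SET AUTOTRACE ON EXPLAIN
SQL> SELECT COUNT(*) FROM ORAUSER.EMP;
... query result is printed, followed by the execution plan only ...

SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT COUNT(*) FROM ORAUSER.EMP;
... execution plan and statistics are printed, the result set is suppressed ...

SQL> SET AUTOTRACE OFF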
 

Let's do some practical work.

 
Let's walk through creating a PLAN_TABLE and granting the PLUSTRACE role to a normal user.

1) Initiate a SQL*Plus session to create the PLAN_TABLE in the orauser schema (a local schema I created on my machine).

2) Create the PLUSTRACE Role.

3) Grant the PLUSTRACE Role

Steps 1 and 2 are shown below:

C:\oraclexe\app\oracle\app\oracle\product\11.2.0\server>sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Tue Jul 3 21:02:59 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter user-name: orauser
Enter password:

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production

SQL> @RDBMS/ADMIN/UTLXPLAN.SQL

Table created.

SQL> connect / as sysdba
Connected.
SQL> @SQLPLUS/ADMIN/PLUSTRCE.SQL
SQL>
SQL> drop role plustrace;

Role dropped.

SQL> create role plustrace;

Role created.

SQL>
SQL> grant select on v_$sesstat to plustrace;

Grant succeeded.

SQL> grant select on v_$statname to plustrace;

Grant succeeded.

SQL> grant select on v_$mystat to plustrace;

Grant succeeded.

SQL> grant plustrace to dba with admin option;

Grant succeeded.

SQL>
SQL> set echo off
SQL>

 
3) Grant the PLUSTRACE Role
 
Run the following commands from your SQL*Plus session to grant the PLUSTRACE role to the “orauser” user:

SQL> connect / as sysdba
Connected.
SQL> grant PLUSTRACE to orauser;

Grant succeeded.

SQL>

 

How do we analyze the Query Plan and Statistics?

 
As mentioned earlier, it is usually better to go with the TRACEONLY option so that the screen is not filled with the result set.

For clarity, let us run the query below and generate the execution plan and statistics.

SQL> SET AUTOTRACE TRACEONLY;
SQL> SELECT E.EMPNO, E.ENAME,E.JOB,E.SAL,D.DEPTNO,D.DNAME,D.LOC
  2  FROM ORAUSER.EMP E , ORAUSER.DEPT D
  3  WHERE E.DEPTNO = D.DEPTNO
  4  ORDER BY D.DEPTNO;
14 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2288526332
--------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |    14 |   630 |     6  (17)| 00:00:01 |
|   1 |  MERGE JOIN                  |             |    14 |   630 |     6  (17)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| DEPT        |     4 |    80 |     2   (0)| 00:00:01 |
|   3 |    INDEX FULL SCAN           | SYS_C006997 |     4 |       |     1   (0)| 00:00:01 |
|*  4 |   SORT JOIN                  |             |    14 |   350 |     4  (25)| 00:00:01 |
|   5 |    TABLE ACCESS FULL         | EMP         |    14 |   350 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("E"."DEPTNO"="D"."DEPTNO")
       filter("E"."DEPTNO"="D"."DEPTNO")

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         10  consistent gets
          0  physical reads
          0  redo size
       1398  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         14  rows processed

SQL>

 

How do we analyze the Execution Plan?

 
If we observe the Execution Plan above, we note the following:
 

1) Two tables are involved: EMP and DEPT.

2) Step Id 4 is marked with a *, which denotes a predicate.
The predicate details given by the execution plan are access("E"."DEPTNO"="D"."DEPTNO") and filter("E"."DEPTNO"="D"."DEPTNO"),
which means that, in our case, we are joining the two tables on DEPTNO, which is the primary key of the DEPT table.

Based on this, we can conclude that DEPT acts as the OUTER table whereas EMP acts as the INNER table.

3) Let's go step by step, starting from step 5.

Since EMP is the INNER table and the predicate uses EMP.DEPTNO (which is a foreign key), EMP is full scanned and then sorted (the SORT JOIN in step 4)
so that it can be matched against the corresponding primary key values from the OUTER table DEPT.

4) The OUTER table DEPT is accessed through an INDEX FULL SCAN of its primary key index SYS_C006997, followed by TABLE ACCESS BY INDEX ROWID (steps 3 and 2).

5) The DEPT result set is then merged with the sorted rows from the full-scanned INNER table EMP, returning a total of 14 rows.
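
Conceptually (a rough sketch only, mirroring the loop picture used for the nested loop join further below), the MERGE JOIN works like this:

SORT the EMP rows by DEPTNO                  (steps 5 and 4)
READ the DEPT rows in DEPTNO order           (steps 3 and 2, via the primary key index)
WALK both sorted row sources together,
    emitting a joined row whenever the DEPTNO values match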
 

Let's examine another execution plan.

SELECT E.EMPNO, E.ENAME,E.JOB,E.SAL,D.DEPTNO,D.DNAME,D.LOC 
FROM ORAUSER.EMP E , ORAUSER.DEPT D
WHERE E.DEPTNO = D.DEPTNO
AND E.DEPTNO < 20
ORDER BY D.DEPTNO;

Execution Plan
----------------------------------------------------------
Plan hash value: 2486522731
--------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |  1440 | 75750 |   152  (1)| 00:00:02 |
|   1 |  NESTED LOOPS                |              |       |       |           |          |
|   2 |   NESTED LOOPS               |              |  1440 | 75750 |   152  (1)| 00:00:02 |
|   3 |    TABLE ACCESS FULL         | DEPT         |     4 |       |     2  (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | EMP_DEPT_IDX |    14 |     0 |    54  (0)| 00:00:01 |
|*  5 |   TABLE ACCESS BY INDEX ROWID| EMP          |   976 | 16745 |    72  (0)| 00:00:02 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("E"."DEPTNO" < 20)
   4 - access("E"."DEPTNO"="D"."DEPTNO")
       filter("D"."DEPTNO" < 20)

Looking at the plan above: at step 3 Oracle does a full table scan of the DEPT table; for each DEPT row, step 4 performs an INDEX RANGE SCAN on EMP_DEPT_IDX to locate the matching EMP entries; and step 5 then fetches those EMP rows by INDEX ROWID. In other words, DEPT is full scanned, EMP is accessed through an index range scan followed by table access by index rowid, and the two row sources are finally combined with a nested loop join.

At step 1, NESTED LOOPS is used to join these two data sets (EMP and DEPT) and return the result. In a NESTED LOOP join, for every row in the first table (the OUTER table DEPT), Oracle accesses the corresponding rows in the second table (the INNER table EMP).

To get a better picture, you can imagine it roughly like this:

FOR each record in DEPT (full table scan) LOOP
    FOR each matching entry in EMP_DEPT_IDX LOOP
        fetch the EMP row by its index ROWID
    END LOOP
END LOOP

 
In the same execution plan, AUTOTRACE gives a few more pieces of information:
 

  • Id – step number
  • Operation – kind of operation performed
  • Name – name of the object (table or index) the operation acts on
  • Rows – estimated number of rows returned by the operation
  • Bytes – estimated number of bytes read/written by the operation
  • Cost (%CPU) – optimizer cost of the operation (with the percentage attributable to CPU)
  • Time – estimated elapsed time for the operation

The query optimizer selects the best execution plan based on the final cost and bytes before firing the SQL statement against the database.
 

How do the statistics help us?

 

The statistics are recorded by the server when your statement executes and indicate the system resources required to execute your statement. The results include the following statistics:
 

  • recursive calls : Number of recursive calls generated at both the user and system level. Oracle maintains tables used for internal processing. When Oracle needs to make a change to these tables, it generates an internal SQL statement, which in turn generates a recursive call.
  • db block gets : Number of times a CURRENT block was requested.
  • consistent gets : Number of times a consistent read was requested for a block.
  • physical reads : Total number of data blocks read from disk. This number equals the value of "physical reads direct" plus all reads into the buffer cache.
  • redo size : Total amount of redo generated, in bytes.
  • bytes sent via SQL*Net to client : Total number of bytes sent to the client from the foreground processes.
  • bytes received via SQL*Net from client : Total number of bytes received from the client over Oracle Net.
  • SQL*Net roundtrips to/from client : Total number of Oracle Net messages sent to and received from the client.
  • sorts (memory) : Number of sort operations that were performed completely in memory and did not require any disk writes.
  • sorts (disk) : Number of sort operations that required at least one disk write.
  • rows processed : Number of rows processed during the operation.

Note: The client referred to in the statistics is SQL*Plus. Oracle Net refers to the generic process communication between SQL*Plus and the server, regardless of whether Oracle Net is installed.
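
Because the PLUSTRACE role grants SELECT on V$MYSTAT and V$STATNAME (as the role-creation script above shows), you can also look at the same counters directly for your own session. A minimal sketch:

SQL> SELECT sn.name, ms.value
  2  FROM   v$mystat ms JOIN v$statname sn ON sn.statistic# = ms.statistic#
  3  WHERE  sn.name IN ('consistent gets', 'physical reads', 'redo size');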
 

When can we decide if a query needs to be optimized?

 
Monitor disk reads and buffer gets by executing the following statement in SQL*Plus:

SQL> SET AUTOTRACE TRACEONLY STATISTICS 

Typical results returned are shown as follows:

Statistics
----------------------------------------------------------
         70  recursive calls
          0  db block gets
        591  consistent gets
        404  physical reads
          0  redo size
        315  bytes sent via SQL*Net to client
        850  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
          0  rows processed

If ‘consistent gets’ or ‘physical reads’ is high relative to the amount of data returned, then this is a sign that the query is expensive and needs to be reviewed for optimization.

For example, if you are expecting less than 1,000 rows back and ‘consistent gets’ is 1,000,000 and ‘physical reads’ is 10,000, then this query needs to be further optimized.
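
In that case, a simple next step (a sketch of the usual workflow, not a fixed recipe; suspect_query.sql stands for whatever statement you are investigating) is to switch to plan-only output, re-run the statement, and look for large full table scans or expensive steps in the Cost column:

SQL> SET AUTOTRACE TRACEONLY EXPLAIN
SQL> @suspect_query.sql
SQL> SET AUTOTRACE OFF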

Conclusion: Most of the time, it comes down to total cost versus bytes (consistent gets / physical reads). But there are many other parameters, such as data maintenance, data design, storage, and hardware, which need to be considered for heavy DML/DDL statements.

Written by Ramesh Metta

