Oracle SQL Tuning Quick Guide: Execution Plans, Statistics, and Tracing [19c]


This article is a rewritten, more practical, and reproducible version of the original “Oracle SQL Tuning Basics.” While preserving the original intent (actionable SQL tuning essentials), it places extra emphasis on the shortest path, reproducible scripts, and troubleshooting.


Introduction (Summary)

Oracle SQL tuning proceeds on three pillars: understanding execution plans, ensuring proper statistics, and trace analysis. First measure the current state (elapsed time/logical I/O/waits), then read the execution plan to find improvement points, and finally validate with tracing—this is the proven path.


Conclusion & Fast Path (Featured Snippet Oriented)

  1. Identify the target SQL: pick from top-N high-load SQL (AWR/SQL Monitor, etc.; a sample query follows this list).
  2. Capture the execution plan: check the current plan with DBMS_XPLAN (access path, join method, selectivity).
  3. Fix statistics: refresh schema/table/index stats with DBMS_STATS.
  4. Apply design-level fixes: select only needed columns, proper indexes, join order/method, remove unnecessary functions.
  5. Use hints last: only when design/statistics cannot fix it—keep them minimal.
  6. Validate with tracing: use DBMS_MONITOR + TKPROF to quantify CPU/logical I/O/waits.
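
To get started with step 1 when AWR/SQL Monitor are unavailable (both require extra licensing), a minimal sketch is to rank statements in the shared pool by elapsed time. The columns are standard V$SQL columns; the top-10 cutoff is arbitrary.

-- Top-10 SQL by total elapsed time currently in the shared pool
SELECT *
  FROM (SELECT sql_id,
               executions,
               ROUND(elapsed_time / 1e6, 2) AS elapsed_sec,
               buffer_gets,
               SUBSTR(sql_text, 1, 60)      AS sql_text
          FROM v$sql
         ORDER BY elapsed_time DESC)
 WHERE ROWNUM <= 10;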

Background and Basics

What is SQL tuning?

An iterative process to improve SQL performance to the target level. Upfront design optimization and after-the-fact tuning of existing SQL are different activities.

Metrics (with quick tips)

Metric | Purpose | How to obtain
Elapsed time | Perceived performance | SQL*Plus / SQL Developer timing
Logical I/O (buffer gets) | Memory access volume | TKPROF “query/current”
Physical I/O (disk reads) | Storage load | TKPROF, AWR
CPU time | How CPU-intensive the statement is | TKPROF, AWR
Wait events | Root causes (I/O, locks, network, etc.) | AWR/ASH, V$SESSION views
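
If you want these numbers straight from SQL*Plus without a trace, one possible sketch is to enable timing and read your own session statistics (statistic names as listed in V$STATNAME):

-- Elapsed time per statement
SET TIMING ON

-- Logical/physical I/O and CPU for the current session
SELECT sn.name, ms.value
  FROM v$mystat ms
  JOIN v$statname sn ON sn.statistic# = ms.statistic#
 WHERE sn.name IN ('session logical reads', 'physical reads', 'CPU used by this session');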

Procedure & Implementation (Prereqs stated)

Prereqs (19c / Linux / non-CDB)

  • Connection users: SYS (for some administrative steps), test user: TUNEUSER (creation steps provided)
  • Sample objects: self-made EMP / DEPT equivalents
  • Character set/time NLS: leave defaults
  • Tablespace quota (Plan A): grant quota on USERS tablespace (included below)
  • Deferred segment creation is the default, so CREATE TABLE succeeds even without a quota; the first INSERT then fails with ORA-01950
  • For impactful changes (stats refresh / index creation / hints), use a test environment and prepare rollback steps

1) Create the test user (Plan A: grant quota on USERS tablespace)

Create a dedicated test user TUNEUSER and grant write quota on the USERS tablespace.

-- As SYS
CREATE USER TUNEUSER IDENTIFIED BY "StrongPwd1"
  DEFAULT TABLESPACE USERS
  TEMPORARY TABLESPACE TEMP;

GRANT CONNECT, RESOURCE TO TUNEUSER;

-- Plan A: quota on USERS (e.g., 100MB)
ALTER USER TUNEUSER QUOTA 100M ON USERS;

-- (Optional) verification
SELECT username, default_tablespace FROM dba_users WHERE username = 'TUNEUSER';
SELECT tablespace_name, bytes, max_bytes
  FROM dba_ts_quotas
 WHERE username = 'TUNEUSER';

Intent: set the default tablespace to USERS and grant a 100MB quota so segment allocation at INSERT succeeds.

2) Create sample tables and load data

Minimal data to verify differences in execution plans and index effects.

-- From here on, as TUNEUSER

-- Tables
CREATE TABLE EMP (
  EMPNO    NUMBER PRIMARY KEY,
  ENAME    VARCHAR2(50),
  JOB      VARCHAR2(30),
  MGR      NUMBER,
  HIREDATE DATE,
  SAL      NUMBER,
  COMM     NUMBER,
  DEPTNO   NUMBER
);

CREATE TABLE DEPT (
  DEPTNO NUMBER PRIMARY KEY,
  DNAME  VARCHAR2(50),
  LOC    VARCHAR2(50)
);

-- Seed data
INSERT INTO DEPT VALUES (10,'ACCOUNTING','NEW YORK');
INSERT INTO DEPT VALUES (20,'RESEARCH','DALLAS');
INSERT INTO DEPT VALUES (30,'SALES','CHICAGO');

INSERT INTO EMP VALUES (7369,'SMITH','CLERK',7902,DATE '1980-12-17',800,NULL,20);
INSERT INTO EMP VALUES (7499,'ALLEN','SALESMAN',7698,DATE '1981-02-20',1600,300,30);
INSERT INTO EMP VALUES (7521,'WARD','SALESMAN',7698,DATE '1981-02-22',1250,500,30);
COMMIT;

Intent: a minimal dataset to easily reproduce full scan vs. index usage differences.
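
With only a handful of rows the optimizer may prefer a full scan regardless of indexing, so as an optional, hypothetical extension you can inflate EMP with a row generator before comparing plans. Row count and value distribution below are arbitrary; DEPTNO = 30 is kept at roughly 1% of rows so the predicate stays selective.

-- Optional bulk load so full scan vs. index range scan differences become measurable
INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
SELECT 10000 + LEVEL,
       'EMP' || LEVEL,
       'CLERK',
       7902,
       DATE '1990-01-01' + MOD(LEVEL, 3650),
       1000 + MOD(LEVEL, 4000),
       NULL,
       CASE WHEN MOD(LEVEL, 100) = 0 THEN 30 ELSE 10 END
  FROM dual
CONNECT BY LEVEL <= 100000;
COMMIT;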

3) Capture execution plans (EXPLAIN PLAN and actual)

Use both the “paper blueprint (EXPLAIN PLAN)” and the “actual run record (DISPLAY_CURSOR).”

-- Estimated plan (optimizer estimate)
EXPLAIN PLAN FOR
SELECT * FROM EMP WHERE DEPTNO = 30;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Actual plan: run the statement, then DISPLAY_CURSOR(NULL, NULL) shows this session's last cursor
-- (A-Rows / buffer stats are collected only with STATISTICS_LEVEL=ALL or the GATHER_PLAN_STATISTICS hint)
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM EMP WHERE DEPTNO = 30;

SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'BASIC +ROWS +BYTES +COST +IOSTATS +PEEKED_BINDS')
);

Note: DISPLAY_CURSOR shows the executed plan (actual rows/IO stats) and is valuable to substantiate improvements.

4) Apply indexes and verify

Create an index on a selective predicate, compare plan and I/O deltas.

-- Improvement: B*Tree index on DEPTNO
CREATE INDEX IDX_EMP_DEPTNO ON EMP(DEPTNO);

-- Re-run → compare actual plans (GATHER_PLAN_STATISTICS only collects row-source stats; it does not change the plan)
SELECT /*+ GATHER_PLAN_STATISTICS */ ENAME FROM EMP WHERE DEPTNO = 30;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'BASIC +ROWS +IOSTATS'));

Intent: shift from TABLE ACCESS FULL to INDEX RANGE SCAN and reduce logical I/O.

5) Refresh statistics (up to date)

Stale stats can trigger “bad” plans. Build a solid base for accurate costing first.

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname=>'TUNEUSER', tabname=>'EMP',  cascade=>TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(ownname=>'TUNEUSER', tabname=>'DEPT', cascade=>TRUE);
END;
/

Note: CASCADE=>TRUE refreshes related index stats. For frequent changes, rely on the auto stats job; manually refresh targets after bulk loads, etc.
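
To confirm the refresh actually landed, a quick dictionary check (a sketch using standard USER_TABLES / USER_INDEXES columns):

SELECT table_name, num_rows, last_analyzed
  FROM user_tables
 WHERE table_name IN ('EMP', 'DEPT');

SELECT index_name, num_rows, distinct_keys, last_analyzed
  FROM user_indexes
 WHERE table_name = 'EMP';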

6) Revisit join methods and subqueries

Small×large: NESTED LOOPS; large×large: HASH JOIN. Use WITH to reuse shared subresults and cut re-evaluation.

-- Join basics: drive from the more selective side
SELECT /*+ LEADING(d) USE_NL(e) */
       e.ename, d.dname
FROM   dept d
JOIN   emp  e ON e.deptno = d.deptno
WHERE  d.loc = 'CHICAGO';

-- Subquery optimization: share an intermediate result via WITH
WITH dept_chi AS (
  SELECT /*+ MATERIALIZE */ deptno FROM dept WHERE loc = 'CHICAGO'
)
SELECT ename FROM emp WHERE deptno IN (SELECT deptno FROM dept_chi);

Intent: make join order/method explicit with hints (solve with design by default). WITH avoids duplicate evaluation and reduces I/O/CPU.

7) Trace and validate with TKPROF

ALTER SESSION SET SQL_TRACE=TRUE is classic, but in 19c, DBMS_MONITOR is recommended.

-- Recommended: enable session trace (required privileges assumed)
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(waits=>TRUE, binds=>TRUE);
END;
/

-- Execute the workload → format the trace with TKPROF (on the database server)
-- $ tkprof <session_trace_file>.trc out.prf aggregate=yes sys=no sort=exeela,fchela
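
Two companion steps, sketched under the assumption that you can read the server's diag directory: find this session's trace file (the name to pass to TKPROF), and switch tracing off once the workload has finished.

-- Trace file of the current session (run in the traced session)
SELECT value
  FROM v$diag_info
 WHERE name = 'Default Trace File';

-- Stop tracing
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE;
END;
/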

How to read (TKPROF)

  • CPU/elapsed: CPU-bound vs. wait-bound
  • query/current: logical I/O volume
  • disk: physical I/O (storage load)
  • rows: estimate vs. actual gap (hints at stats/cardinality issues)

Example (Mini Case Study)

Before: unnecessary columns + full scan

SELECT * FROM EMP WHERE DEPTNO = 30;
-- Plan: TABLE ACCESS FULL EMP

After: project only needed columns and leverage the index

SELECT ENAME, JOB FROM EMP WHERE DEPTNO = 30;
-- Plan: INDEX RANGE SCAN IDX_EMP_DEPTNO → TABLE ACCESS BY INDEX ROWID EMP

What to watch: logical I/O (buffer gets) drops; CPU/elapsed improves. Verify from both the plan and TKPROF.


Text Diagram: Tuning Thought Process

[Symptom] Slow SQL
   │
   ├─(1) Measure: elapsed/IO/waits
   │      └─ AWR/ASH, TKPROF, DISPLAY_CURSOR
   │
   ├─(2) Read the plan: access/join/selectivity
   │      └─ TABLE FULL? → index/predicate rewrite
   │      └─ HASH JOIN? → memory/order/indexes
   │
   ├─(3) Solve by design: column projection, index design, SQL rewrite
   │
   ├─(4) Fix statistics: DBMS_STATS
   │
   └─(5) Validate: DBMS_MONITOR + TKPROF for hard numbers

Troubleshooting (Typical ORA)

Error | Likely cause | What to check | Read-only-first actions
ORA-00942 table/view does not exist | Schema/privilege issue | ALL_TABLES, executing user | Owner-qualify as TUNEUSER.EMP
ORA-01555 snapshot too old | Long run + small UNDO | Elapsed time, UNDO sizing | Range split, commit granularity, enlarge UNDO
ORA-00054 resource busy | Exclusive lock conflict | V$LOCK, V$SESSION | Shift window, retry, queue control
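
For the ORA-00054 row, a read-only way to spot the blocker (a sketch using standard V$SESSION columns):

SELECT sid, serial#, username, sql_id, event, blocking_session
  FROM v$session
 WHERE blocking_session IS NOT NULL;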

Operations, Monitoring, and Security Notes

  • Let the auto stats job work: manual overrides only for exceptions.
  • Avoid hint overuse: fixing the plan rigidly can backfire later; prioritize design/statistics.
  • In production, minimize scope/time/privileges for tracing. Protect trace outputs (PII in bind values).
  • Rollback: drop test indexes with DROP INDEX; for statistics, use DBMS_STATS.RESTORE_* (with stats history enabled).
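
For the rollback mentioned in the last bullet, a hedged sketch (statistics history is retained for 31 days by default):

-- Restore EMP statistics to their state one hour ago
BEGIN
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'TUNEUSER',
    tabname         => 'EMP',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' HOUR);
END;
/

-- Remove the test index when the experiment is over
DROP INDEX IDX_EMP_DEPTNO;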

FAQ

Q1. Which to trust, EXPLAIN PLAN or DISPLAY_CURSOR?

A. Prefer DISPLAY_CURSOR (actual executed plan). EXPLAIN PLAN is an estimate for reference.

Q2. How often should I refresh statistics?

A. Rely on the auto stats job by default. After large data loads, manually refresh only the affected objects.

Q3. Is it wrong to try hints first?

A. Temporary fixes are fine, but without addressing root causes (schema/index/SQL/statistics), plans can regress later.

Q4. Which tracing method should I use, SQL_TRACE or DBMS_MONITOR?

A. On 19c, DBMS_MONITOR is recommended. It offers better control over waits/binds collection and target scope.


Takeaways

  • Proceed measure → read → fix → validate (iterate).
  • For execution plans, use DISPLAY_CURSOR to see the truth; keep statistics accurate.
  • Design and SQL shape are primary; hints are the last resort.
  • Validate quantitatively with DBMS_MONITOR + TKPROF to prevent regressions.

This article targets Oracle Database 19c (other versions may differ in UI or defaults).


[Reference]
Oracle Database Database Performance Tuning Guide, 19c
