Problem Description:


You have many indexes on your tables. You are wondering if you are making the
best use of indexing. You need to know how to pick the best columns to index.
You need to know how to tune your indexes.
Problem Explanation:

You may be experiencing performance degradation due to a large number of indexes.

Related Documents:

Note 67522.1 Why is my index not used?
Note 30405.1 How B-tree indexes are maintained

Search Words:

indexes, utlsidx.sql, utloidxs.sql, utldidxs.sql, tuning indexes

Solution Description:

You can use the Oracle provided scripts in version 7 to decide which
columns to index and to tune your indexes.

Oracle provides a utility to analyze indexes. This is a tuning tool.
This utility (UTLSIDXS) consists of three scripts:

utlsidxs.sql, utloidxs.sql, and utldidxs.sql

The scripts require SQL*Plus.

The scripts provide the following functionality.

UTLSIDXS.SQL – Runs the “UTLOIDXS.SQL” script against multiple tables and columns.

UTLOIDXS.SQL – Analyzes a single table/column pair. It requires the table name
              and the column name as input. Its main use is to provide
              selectivity information on key values.

Use these scripts to find out how selective columns are. Use them to:

1. Identify prospective columns for new indexes.
2. Determine how selective a current index is.
3. Determine whether a current index is useful or not.
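As a quick manual check of the same idea, you can estimate a column's selectivity yourself with a simple query (shown here against the TKP_EXAMPLE table and OWNER column used in the example later in this note):

```sql
-- Rough selectivity estimate for a candidate index column.
-- A result near 1 means the column is nearly unique (good candidate);
-- a result near 0 means few distinct keys (poor candidate).
SELECT COUNT(DISTINCT owner) / COUNT(*) AS selectivity
FROM   tkp_example;
```

For the example data in this note (4 distinct keys in 1,366 rows), this would return roughly 0.003, which is consistent with the "low selectivity" conclusion drawn from the UTLDIDXS report below.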

UTLDIDXS.SQL – Run after “UTLOIDXS.SQL”. It takes the same arguments as
“UTLSIDX.SQL” and can take % as a wild-card value. “UTLDIDXS.SQL”
generates a report based on the statistics. The report is shown
only on screen. For hard-copy output, you must modify the script
to add SPOOL and SPOOL OFF statements.
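Rather than editing the script itself, a simpler approach is to wrap the call in SPOOL commands from your SQL*Plus session (the spool file name here is arbitrary):

```sql
SPOOL utldidxs_report.lst
@utldidxs tkp_example owner
SPOOL OFF
```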

Before you use these scripts, the tables and columns must exist. The scripts are
generally used on single-column indexes. To run the scripts on a concatenated
index, create a test table consisting of a single column that simulates the
concatenated index as a single column. For example, if your index is on
“po_num”, “po_date”, create a test table that concatenates these into a single
column:
CREATE TABLE test_index AS SELECT po_num||po_date po_ind FROM purchase_order;

Once you have created the test table, you run the script the same as if you
were analyzing a normal single-column index candidate. This is essentially a
three-step process:

1. Choose the candidate columns from the table.
2. Log on to SQL*PLUS, and run “UTLSIDXS.SQL” for entire applications and
“UTLOIDXS.SQL” for single tables, giving “UTLOIDXS.SQL” the table and
column name.
3. Use “UTLDIDXS.SQL” to generate a report on the index candidate(s).

Example (on NT):

SQL> @c:\orant\rdbms80\admin\utloidxs tkp_example owner


SQL> @c:\orant\rdbms80\admin\utldidxs tkp_example owner


TKP_EXAMPLE OWNER Rows – Null                 0.00
TKP_EXAMPLE OWNER Rows – Total            1,366.00
TKP_EXAMPLE OWNER Rows per key – avg        341.50
TKP_EXAMPLE OWNER Rows per key – dev        251.68
TKP_EXAMPLE OWNER Rows per key – max        658.00
TKP_EXAMPLE OWNER Total Distinct Keys         4.00
TKP_EXAMPLE OWNER db_gets_per_key_hit       240.55
TKP_EXAMPLE OWNER db_gets_per_key_miss      480.61

                  BADNESS_FACTOR  KEYS_COUNT  ROW_PERC  KEY_PERC
TKP_EXAMPLE OWNER            658           1     48.17     25.00
TKP_EXAMPLE OWNER            335           1     24.52     25.00

As you can see from examining the UTLDIDXS reports, OWNER would not make a good
index. It has low selectivity, and its “db_gets_per_key_miss” is roughly twice
its “db_gets_per_key_hit”, which means a lookup that finds no matching row costs
about twice as many gets as one that does.

This is further evidenced by its high “badness” and low “keys_count” (a relative
measure of how well the column will act as an index). Badness ratings are
summarized by the number of keys that exhibit each amount of badness.

A good index has a near-unity hit/miss ratio.

If an indexed column contains nulls, then care must be taken when analyzing the
performance of that index:

1. A query for a null value will do a table-scan, and
will never use an index.

2. A query for a non-null value only has to search
an index that contains no null entries. Thus, no
performance is lost between a table with 100 not-null
entries and a table with 100 not-null and 1,000 null entries,
as long as the queries are for not-null values.
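To illustrate point 1, a sketch (the “orders” table and “status” column here are hypothetical, assuming a single-column B-tree index on “status”):

```sql
-- NULL keys are not stored in a single-column B-tree index,
-- so this predicate cannot use the index and forces a full table scan:
SELECT * FROM orders WHERE status IS NULL;

-- A not-null equality predicate can use the index:
SELECT * FROM orders WHERE status = 'OPEN';
```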

Rows – Null and Rows – Total include null values; the remaining statistics exclude them.

Rows per key – avg : average number of rows per key, this is
a general measure of the selectivity of the column.

Rows per key – dev : standard deviation

Rows per key – max : maximum number of rows per key

The following two statistics attempt to determine how expensive it is to access
keys in this table.

A ‘miss’ has to scan all the rows for a key value, while a 1-row hit, on
average, only has to scan half of them. This is why, in the example report,
db_gets_per_key_miss (480.61) is about twice db_gets_per_key_hit (240.55).

These gets are the gets needed to read data from the table. Gets needed to access
branch and leaf nodes of the index are not counted here.

The ‘badness_factor’ column tells how many times a key was repeated.
The ‘keys_with_badness’ column tells how many keys were repeated
‘badness_factor’ times.

Higher badness factors are detrimental to the selectivity of the column.
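The badness distribution that the scripts summarize can be approximated by hand with a GROUP BY (again using the TKP_EXAMPLE table and OWNER column from the example above):

```sql
-- For each repeat count (badness_factor), count how many keys
-- repeat that many times (keys_count).
SELECT cnt      AS badness_factor,
       COUNT(*) AS keys_count
FROM  (SELECT owner, COUNT(*) AS cnt
       FROM   tkp_example
       GROUP  BY owner)
GROUP  BY cnt
ORDER  BY cnt DESC;
```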

This table takes on different meanings depending upon which access method is used:
1. Access distributed equally across rows.
In this case, the user should look at the ‘ROW_PERC’ column to
determine what percentage of the queries will have high badness factors.

2. Access distributed equally across key values.
In this case, the user should look at the ‘KEY_PERC’ column to
determine what percentage of the queries will have high badness factors.

Solution Explanation:

The UTLDIDXS reports will help you determine which columns to index. This will
aid in the overall performance of your database.



About Alexandre Pires

ORACLE OCS GoldenGate Specialist, OCE RAC 10g R2, OCP 12c, 11g, 10g, 9i, and 8i - More than 25 years of experience in IT. I have taken part in projects at G&P assigned to TOK STOK, EDINFOR assigned to TV CIDADE "NET", 3CON assigned to PÃO DE AÇUCAR, DISCOVER assigned to VIVO, BANCO IBI and TIVIT, SPC BRASIL, and UOLDIVEO assigned to CARREFOUR, and I currently work at ORACLE ACS serving the following projects: VIVO, CLARO, TIM, CIELO, CAIXA SEGUROS, MAPFRE, PORTO SEGURO, SULAMERICA, BRADESCO SEGUROS, BANCO BRADESCO, BASA, SANTANDER, CNJ, TSE, ELETROPAULO, EDP, SKY, NATURA, ODEBRESHT, NISSEI, SICREDI, CELEPAR, TAM, TIVIT, IBM, SMILES, CELEPAR, SERPRO, OKI, BANCO PAN, etc.

This post was published in INDEX TUNING, PERFORMANCE, PLSQL TUNING.
