Thursday, December 7, 2017

Oracle 12.2 new features

PL/Scope Reports on Static SQL Statements and Call Sites for Dynamic SQL:
The new view, DBA_STATEMENTS, reports on occurrences of static SQL in PL/SQL units, listing, for example, the statement text, the type (SELECT, INSERT, UPDATE, or DELETE), and the SQL_ID. Dynamic SQL call sites (EXECUTE IMMEDIATE, OPEN cursor FOR dynamic_text) are also listed in this view family. The DBA_IDENTIFIERS view family now reports on identifiers used in static SQL and notes their type (table, column, materialized view, sequence, and so on).

The purpose of PL/SQL is to issue SQL statements. Therefore, it is useful that PL/Scope now knows about the uses of SQL in PL/SQL source code. For example, if performance investigation reports the SQL_ID of a slow statement, its call sites in the PL/SQL programs can be found immediately. When PL/SQL source is under quality control for SQL injection risks, where dynamic SQL is used, the sites to look at can also be found immediately.
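A minimal sketch of that workflow, assuming a placeholder unit name (my_pkg) and a placeholder SQL_ID; PL/Scope only collects statement data for units compiled with PLSCOPE_SETTINGS enabled:

```sql
-- Enable statement collection, then recompile the units of interest
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL, STATEMENTS:ALL';
ALTER PACKAGE my_pkg COMPILE;   -- my_pkg is a placeholder unit name

-- Find every call site of a given SQL_ID inside PL/SQL source
SELECT owner, object_name, line, text
  FROM dba_statements
 WHERE sql_id = '7hu4t9bnbbmt1';  -- replace with the SQL_ID under investigation
```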


Enhancing CAST Function With Error Handling:
The existing CAST function is enhanced to return a user-specified value in the case of a conversion error instead of raising an error.
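For example, a value that cannot be converted now returns the supplied default instead of raising ORA-01722:

```sql
-- 'not a number' cannot be converted, so the default -1 is returned
SELECT CAST('not a number' AS NUMBER DEFAULT -1 ON CONVERSION ERROR) AS n
  FROM dual;
```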


New SQL and PL/SQL Function VALIDATE_CONVERSION:
The new function, VALIDATE_CONVERSION, determines whether a given input value can be converted to the requested data type.
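The function returns 1 when the conversion would succeed and 0 when it would fail, which makes it handy for pre-screening dirty data:

```sql
-- '29-02-2017' is not a valid date (2017 is not a leap year), so the second column is 0
SELECT VALIDATE_CONVERSION('1000' AS NUMBER)                   AS ok_number,
       VALIDATE_CONVERSION('29-02-2017' AS DATE, 'DD-MM-YYYY') AS bad_date
  FROM dual;
```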


Binding PL/SQL-Only Data Types to SQL Statements Using DBMS_SQL:
Oracle Database 12c Release 1 (12.1) introduced the ability to bind values of PL/SQL-only data types, most notably PLS_INTEGER and tables of records, to SQL statements. However, there were some restrictions, which are lifted in this release. PL/SQL-only data types can now be bound when using the DBMS_SQL API and when invoking a C external procedure.

This improvement brings the DBMS_SQL API into parity with native dynamic SQL.


Improving the PL/SQL Debugger (very important):
In prior releases, it was necessary to change the application to include calls to start and stop debugging. With this improvement, one database session can start and stop debugging in a different session.

Also, when stopped at a breakpoint, the debugging user can issue ad hoc SQL commands and can therefore run PL/SQL code, invoking stored PL/SQL subprograms in an anonymous block.

With these improvements, if a problem occurs in a long running test or in a production environment, it is possible to investigate the problem from another session. In addition to inspecting the state of in-scope variables, it is now possible to examine the database state as the session being debugged sees it during an uncommitted transaction.


Materialized Views: Real-Time Materialized Views:
Materialized views can be used for query rewrite even if they are not fully synchronized with the base tables and are considered stale. Using materialized view logs for delta computation together with the stale materialized view, the database can compute the query and return correct results in real time.

Because such materialized views can now be used for query rewrite at any time, with accurate results computed in real time, queries get optimized, fast processing for best performance. This alleviates the stringent requirement of always having fresh materialized views for best performance.
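A sketch of a real-time materialized view, using a placeholder sales table; fast refresh (and therefore a materialized view log on the base table) is a prerequisite:

```sql
-- Assumes: CREATE MATERIALIZED VIEW LOG ON sales WITH ROWID (prod_id, amount) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW sales_mv
  REFRESH FAST ON DEMAND
  ENABLE ON QUERY COMPUTATION   -- makes this a real-time materialized view
AS
SELECT prod_id,
       COUNT(*)      cnt,
       SUM(amount)   total_amount,
       COUNT(amount) cnt_amount  -- required for fast refresh of the SUM
  FROM sales
 GROUP BY prod_id;
```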


Materialized Views: Statement-Level Refresh:
In addition to ON COMMIT and ON DEMAND refresh, materialized join views can be refreshed when a DML operation takes place, without the need to commit the transaction. This is predominantly relevant for star schema deployments.

The new ON STATEMENT refresh capability provides more flexibility to the application developers to take advantage of the materialized view rewrite, especially for complex transactions involving multiple DML statements. It offers built-in refresh capabilities that can replace customer-written trigger-based solutions, simplifying an application while offering higher performance.
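A minimal sketch of the new refresh mode, assuming placeholder sales and products tables; fast-refreshable join materialized views generally need the base-table rowids in the select list:

```sql
-- The MV stays in sync with every individual DML statement, before the commit
CREATE MATERIALIZED VIEW sales_fact_mv
  REFRESH FAST ON STATEMENT
AS
SELECT s.rowid AS s_rid, p.rowid AS p_rid, s.amount, p.prod_name
  FROM sales s, products p
 WHERE s.prod_id = p.prod_id;
```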


Enhancing LISTAGG Functionality:
LISTAGG aggregates the values of a column by concatenating them into a single string. New functionality is added for managing situations where the length of the concatenated string is too long.

Developers can now control the process for managing overflowing LISTAGG aggregates. This increases the productivity and flexibility of this aggregation function.
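The new ON OVERFLOW clause truncates the result instead of raising ORA-01489; here against the classic emp table:

```sql
-- Appends '...' plus the count of omitted values when the 4000-byte limit is hit
SELECT deptno,
       LISTAGG(ename, ',' ON OVERFLOW TRUNCATE '...' WITH COUNT)
         WITHIN GROUP (ORDER BY ename) AS employees
  FROM emp
 GROUP BY deptno;
```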

White Lists (ACCESSIBLE BY) Enhancements:
In this release, you can define a white list on individual subprograms in a package. The ACCESSIBLE BY clause specifies a list of PL/SQL units that are considered safe to invoke the subprogram, and blocks all others.

Lowering the granularity increases the usefulness of the ACCESSIBLE BY clause for a package.
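A sketch of subprogram-level ACCESSIBLE BY, using placeholder package names (pay_pkg, payroll_driver):

```sql
CREATE OR REPLACE PACKAGE pay_pkg AS
  -- callable by any user with EXECUTE on the package
  PROCEDURE run_payroll;

  -- only code inside the payroll_driver package may call this subprogram
  PROCEDURE raise_salary (p_empno NUMBER, p_pct NUMBER)
    ACCESSIBLE BY (PACKAGE payroll_driver);
END pay_pkg;
/
```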


Long Identifiers:
The maximum length of identifiers is increased to 128 bytes for most identifiers, up from 30 bytes in previous releases.

Providing longer identifiers gives customers greater flexibility in defining their naming schemes, such as longer and more expressive table names. Having longer identifiers also enables object name migration between databases with different character sets, such as Thai to Unicode.


Static PL/SQL Expressions Now Allowed Where Previously Literals Were Required:
Some examples of places where, in earlier releases, a literal was required are:

The length constraint in a VARCHAR2 declaration.
The precision and scale in a NUMBER declaration.
Now you can use expressions, but the values must allow computation at compile time.

You can now write PL/SQL programs so that the intention is self-evident without comments. You can also change PL/SQL programs to reflect changed requirements by making changes at far fewer sites. The canonical example is the VARCHAR2 that holds the text of a simple SQL identifier. This needs to be 128 bytes plus 2 additional bytes; 128 bytes for the name, and 2 bytes to enable double quoting.
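That canonical example can now be written with a named constant instead of a magic number, because the expression is computable at compile time:

```sql
DECLARE
  max_name_len  CONSTANT PLS_INTEGER := 128;
  -- 12.2 allows a compile-time expression here; earlier releases required a literal
  v_quoted_name VARCHAR2(max_name_len + 2);   -- name plus two double-quote characters
BEGIN
  v_quoted_name := '"' || 'EMPLOYEES' || '"';
END;
/
```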


Rolling Back Redefinition
There is a new ROLLBACK parameter for the FINISH_REDEF_TABLE procedure that tracks DML on a newly redefined table so that changes can be easily synchronized with the original table using the SYNC_INTERIM_TABLE procedure.

There is also a new ROLLBACK procedure for the DBMS_REDEFINITION package that initiates the swapping of the redefined table with the original table, therefore effectively rolling back the redefinition changes and returning the table to its original state.

If the results of a redefinition are not acceptable (for example, a performance slow down accessing a newly redefined table), then the redefinition changes can be easily rolled back, therefore saving the DBA time in performing another redefinition to undo the table changes.
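A sketch of the rollback flow with placeholder schema and table names; the enable_rollback parameter and the ROLLBACK procedure names follow the 12.2 DBMS_REDEFINITION documentation and should be verified against your release:

```sql
-- Finish the redefinition but keep tracking DML so it can be undone later
BEGIN
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
    uname           => 'SCOTT',
    orig_table      => 'EMP',
    int_table       => 'INT_EMP',
    enable_rollback => TRUE);
END;
/

-- If the redefined table misbehaves, swap the original back in
BEGIN
  DBMS_REDEFINITION.ROLLBACK(
    uname      => 'SCOTT',
    orig_table => 'EMP',
    int_table  => 'INT_EMP');
END;
/
```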


Redefinition Progress Monitoring
The new V$ONLINE_REDEF view displays runtime information related to the current redefinition procedure being executed based on a redefinition session identifier.

This new view enables DBAs to monitor the progress of their redefinition sessions and take corrective action if needed.


Optimizing Batch Update During Redefinition
Redefinition requires ARCHIVELOG mode to be enabled. However, a new DBMS_REDEFINITION.EXECUTE_UPDATE procedure enables the execution of UPDATE statements in direct insert mode, eliminating redo logging and the associated overhead during that time period. Since redo is not logged during that period, the user must be aware that the redefinition and UPDATE changes cannot be recovered using media recovery. A database or tablespace backup must be taken after the redefinition completes to preserve recoverability.

Customers who run predictable batch updates (for example, in a data warehouse) in direct insert mode to avoid redo overhead and the resulting impact on transaction throughput can now run batch updates in the same manner during a redefinition session, reducing the overall time to complete the redefinition.


Materialized Views: Fast Dependent Materialized View Refresh During Redefinition:
The new SYNC_INTERIM_TABLE procedure for the DBMS_REDEFINITION package enables incremental refresh of dependent materialized views on the interim table. The SYNC_INTERIM_TABLE procedure refreshes the materialized view as part of its execution.

Previously, all dependent materialized views on the source table had to be fully refreshed after redefinition completed, increasing the time before the table was fully usable. Refreshing the materialized views during the redefinition process eliminates the time needed for a complete refresh at the end.


Online Conversion of a Nonpartitioned Table to a Partitioned Table:
Nonpartitioned tables can be converted to partitioned tables online. Indexes are maintained as part of this operation and can be partitioned as well. The conversion has no impact on the ongoing DML operations.

The online conversion of a nonpartitioned table to a partitioned table enables any application to adopt partitioning without application downtime. Customers can adopt partitioning for any system and evolve tables as needed to benefit from the partitioning for large tables.
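A sketch of the online conversion, using a placeholder orders table:

```sql
-- Convert a nonpartitioned table online; indexes are maintained during the operation
ALTER TABLE orders
  MODIFY PARTITION BY RANGE (order_date)
  ( PARTITION p2016 VALUES LESS THAN (DATE '2017-01-01'),
    PARTITION p2017 VALUES LESS THAN (DATE '2018-01-01') )
  ONLINE
  UPDATE INDEXES;
```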


Online SPLIT Partition and Subpartition:
The partition maintenance operations SPLIT PARTITION and SPLIT SUBPARTITION can now be executed as online operations for heap-organized tables, allowing concurrent DML operations while the partition maintenance operation is ongoing.

Allowing any partition maintenance operation to become truly online enables the customers to schedule and execute all of these operations as needed, without having to plan around periods of query-only windows. This functionality both increases application availability and simplifies application development.
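A sketch of an online split, continuing the placeholder orders table:

```sql
-- Split one range partition into two without blocking concurrent DML
ALTER TABLE orders
  SPLIT PARTITION p2017 AT (DATE '2017-07-01')
  INTO (PARTITION p2017_h1, PARTITION p2017_h2)
  ONLINE;
```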


Online Table Move:
Nonpartitioned tables can be moved as an online operation without blocking any concurrent DML operations. A table move operation now also supports automatic index maintenance as part of the move.

Data maintenance on nonpartitioned tables no longer requires a maintenance window, since it does not impact DML or query operations.
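A one-line sketch, again with placeholder names; index maintenance happens automatically as part of the online move:

```sql
-- Relocate the table without blocking concurrent DML; indexes remain usable
ALTER TABLE orders MOVE ONLINE TABLESPACE users;
```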


Partitioning: Table Creation for Partition Exchange:
A new DDL command allows the creation of a table that exactly matches the shape of a partitioned table and is therefore eligible for a partition or subpartition exchange of a partitioned table. Note that indexes are not created as part of this command.

Creating a table that qualifies for a partition or subpartition exchange can be a tedious task for older tables that have had various structural changes and reorganizations. With this new DDL, the task becomes very simple and straightforward to implement. It also adds some implicit business context to such an operation, as compared to a CREATE TABLE AS SELECT command.
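The new DDL, sketched with placeholder names; note again that indexes on the exchange table are not created by this command:

```sql
-- Create an exchange-ready table matching the exact column structure of the partitioned table
CREATE TABLE orders_exch FOR EXCHANGE WITH TABLE orders;

-- Load orders_exch, then swap it in
ALTER TABLE orders
  EXCHANGE PARTITION p2017 WITH TABLE orders_exch;
```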


Partitioning: Filtered Partition Maintenance Operations:
Partition maintenance operations can now be combined with data filtering. For example, a partition can be compressed and moved to a different tablespace, but only the data satisfying the specific filter criteria is actually moved.

Partition maintenance operations with data filtering combine two of the most common data maintenance operations. This combination not only makes the partition maintenance operation more flexible and powerful, but also makes it more performant and less resource intensive compared to the two separate data management operations.
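A sketch of a filtered move (placeholder table, tablespace, and filter), combining relocation, compression, and data filtering in one operation:

```sql
-- Move and compress the partition; rows failing the filter are dropped during the move
ALTER TABLE orders
  MOVE PARTITION p2016
  TABLESPACE archive_ts COMPRESS
  INCLUDING ROWS WHERE order_total > 0;
```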

Optimizer Statistics Advisor:
The database provides a built-in mechanism to track and analyze how statistics are collected. Based on the tracked information and the current best practices, the database reports problems with the current statistics collection and suggests changes to the statistics collection.

Enabling the database to collect historical information about the statistics collection and to suggest changes enables a customer to fine tune the statistics collection and to adopt best practices for optimizer statistics collection.
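A sketch of driving the advisor from DBMS_STATS; the function names below follow the 12.2 DBMS_STATS documentation (the task name is arbitrary):

```sql
DECLARE
  t_name VARCHAR2(128);
  rep    CLOB;
BEGIN
  t_name := DBMS_STATS.CREATE_ADVISOR_TASK('stats_adv_1');
  t_name := DBMS_STATS.EXECUTE_ADVISOR_TASK('stats_adv_1');
  rep    := DBMS_STATS.REPORT_ADVISOR_TASK('stats_adv_1');
  DBMS_OUTPUT.PUT_LINE(rep);
END;
/
```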


Enhancing SQL Plan Management:
SQL Plan Management (SPM) leverages a larger pool of plan information (including Automatic Workload Repository (AWR)) as a source for SQL plan baselines. SPM is also enhanced to provide a more selective plan capturing and easier verification for customers.

Customers can leverage SQL Plan Management for their specific application requirements.
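For example, plans recorded in AWR can now be loaded as baselines; the snapshot range below is a placeholder:

```sql
-- Load plans captured between AWR snapshots 100 and 110 as SQL plan baselines
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_AWR(begin_snap => 100, end_snap => 110);
  DBMS_OUTPUT.PUT_LINE(n || ' plans loaded');
END;
/
```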


Scan Rate and In-Memory Columnar Statistics:
DBMS_STATS now supports external table scan rates and In-Memory Column Store (IM column store) statistics.

If the database uses the In-Memory Column Store, you can set im_imcu_count to the number of In-Memory Compression Units (IMCU) in the table or partition, and set im_block_count to the number of blocks in the table or partition. For an external table, scan rate specifies the rate at which data is scanned in MB/second.


Band Join Enhancements:
A band join is a special type of nonequijoin in which key values in one data set must fall within the specified range (“band”) of the second data set. When the database detects a band join, the database evaluates the costs of band joins more efficiently, avoiding unnecessary scans of rows that fall outside the defined bands.

In most cases, optimized performance is comparable to an equijoin.
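A typical band join, against the classic emp table, where the optimizer can now avoid scanning rows outside the band:

```sql
-- Pairs of employees whose salaries are within 100 of each other
SELECT e1.ename, e2.ename, e1.sal, e2.sal
  FROM emp e1, emp e2
 WHERE e1.sal BETWEEN e2.sal - 100 AND e2.sal + 100
   AND e1.empno <> e2.empno;
```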



Cursor-Duration Temporary Tables Cached in Memory:
Complex queries often process the same SQL fragment (query block) multiple times to answer a business question. The results of these queries can be stored internally to avoid the multiple processing of the same query fragment, implemented as cursor-duration temporary tables. With this new functionality, these temporary tables can reside completely in cache without the need to write them to disk.

Caching intermediate, partial results in memory increases the performance of complex, mission-critical operations and optimizes I/O resource consumption.


Text: Partition-Specific Near Real-Time Indexes:
The option to use near real-time indexes (the STAGE_ITAB setting) can now be specified at a partition level.

Partitions that are frequently updated can have near real-time indexing switched on, while less frequently updated partitions can have it turned off.


Partitioning: Auto-List Partitioning:
The database automatically creates a separate (new) partition for every distinct partition key value of the table.

Auto-list partitioning removes the management burden from DBAs of manually maintaining list-partitioned tables that have a large number of distinct key values requiring individual partitions. It also automatically copes with unplanned partition key values without the need for a DEFAULT partition.
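A sketch with a placeholder table; the AUTOMATIC keyword is what turns ordinary list partitioning into auto-list:

```sql
-- A new partition is created automatically for each new region value on insert
CREATE TABLE sales_by_region
( sale_id NUMBER,
  region  VARCHAR2(30),
  amount  NUMBER )
PARTITION BY LIST (region) AUTOMATIC
( PARTITION p_us VALUES ('US') );
```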



Fine-Grained Cursor Invalidation:
Cursor invalidation is more fine-grained, reducing the number of cursors that need to be invalidated and recompiled. The database also defers cursor recompilation, where possible, to avoid spikes in hard parses.

With more fine-grained cursor invalidation and deferred recompilation, applications become more available and do not incur sudden increases in CPU utilization for cursor recompilation.


Partitioning: Read-Only Partitions:
Partitions and sub-partitions can be individually set to a read-only state. This then disables DML operations on these read-only partitions and sub-partitions. This is an extension to the existing read-only table functionality.

Read-only partitions and subpartitions enable fine-grained control over DML activity. This enhances the data management capabilities of partitioned tables.
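A minimal sketch, continuing the placeholder orders table:

```sql
-- DML against p2016 now fails with an error; other partitions remain writable
ALTER TABLE orders MODIFY PARTITION p2016 READ ONLY;

-- Revert when needed
ALTER TABLE orders MODIFY PARTITION p2016 READ WRITE;
```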



Partitioning: Multi-Column List Partitioning:
List partitioning functionality is expanded to enable multiple partition key columns.

Using multiple columns to define the partitioning criteria for list partitioned tables enables new classes of applications to benefit from partitioning.
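A sketch of a two-column list-partitioned table (placeholder names); each partition lists one or more value tuples:

```sql
CREATE TABLE sales_by_region_channel
( sale_id NUMBER,
  state   VARCHAR2(2),
  channel VARCHAR2(10) )
PARTITION BY LIST (state, channel)
( PARTITION p_ca_direct   VALUES (('CA','DIRECT')),
  PARTITION p_ca_indirect VALUES (('CA','INDIRECT')),
  PARTITION p_rest        VALUES (DEFAULT) );
```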



Tracking Index Usage:
A new view, V$INDEX_USAGE_INFO, provides information about the use of indexes in the database. The information in the view can be used to determine whether indexes are used and the frequency of usage, and this information can be used to evaluate indexing strategies in the database.

Fine-grained information about which indexes are used and how frequently allows you to eliminate infrequently used indexes. This results in better database performance and more effective system utilization.
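The persisted view DBA_INDEX_USAGE aggregates the raw V$INDEX_USAGE_INFO data per index; a quick way to spot rarely used indexes:

```sql
-- Indexes with the lowest recorded access counts are rebuild/drop candidates to review
SELECT owner, name, total_access_count, total_exec_count, last_used
  FROM dba_index_usage
 ORDER BY total_access_count;
```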


Expression Tracking:
SQL statements commonly include expressions such as "+" or "-". More complicated examples include PL/SQL functions or SQL functions such as LTRIM and TO_NUMBER. Complicated expressions involving multiple columns or functions make it harder for the optimizer to accurately estimate selectivity, resulting in suboptimal plans. A repository now maintains usage information about expressions identified during compilation and captured during execution, and processing more information about expressions and their usage helps establish better query execution plans.

Customers benefit from improved query performance for statements that include expressions.
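The tracked expressions can be inspected through a dictionary view; the column names below follow the 12.2 reference for DBA_EXPRESSION_STATISTICS and should be verified on your release (the schema filter is a placeholder):

```sql
SELECT table_name, expression_text, evaluation_count
  FROM dba_expression_statistics
 WHERE owner = 'SCOTT';
```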

Thursday, October 26, 2017

Oracle Management Packs used by the DB

http://kerryosborne.oracle-guy.com/2008/10/oracle-management-packs/

select version, name, detected_usages, currently_used, first_usage_date, last_usage_date from DBA_FEATURE_USAGE_STATISTICS where detected_usages > 0 order by 1, 2;

select * from DBA_HIGH_WATER_MARK_STATISTICS;

select * from V$PARAMETER WHERE NAME LIKE '%control_management_pack_access%';

Temp tablespace usage by sort

select distinct
c.username "user",
c.osuser ,
c.sid,
c.serial#,
b.spid "unix_pid",
c.machine,
c.program "program",
a.blocks * e.block_size/1024/1024 mb_temp_used  ,
a.tablespace,
d.sql_text
from
v$sort_usage a,
v$process b,
v$session c,
v$sqlarea d,
dba_tablespaces e
where c.saddr=a.session_addr
and b.addr=c.paddr
and c.sql_address=d.address(+)
and a.tablespace = e.tablespace_name;

Query to find out temp tablespace usage

SELECT
   A.tablespace_name tablespace,
   D.mb_total,
   SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
   D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM
   v$sort_segment A,
(
SELECT
   B.name,
   C.block_size,
   SUM (C.bytes) / 1024 / 1024 mb_total
FROM
   v$tablespace B,
   v$tempfile C
WHERE
   B.ts#= C.ts#
GROUP BY
   B.name,
   C.block_size
) D
WHERE
   A.tablespace_name = D.name
GROUP BY
   A.tablespace_name,
   D.mb_total;

Query to find out missing FK indexes

select table_name, constraint_name,
        cname1 || nvl2(cname2,','||cname2,null) ||
        nvl2(cname3,','||cname3,null) || nvl2(cname4,','||cname4,null) ||
        nvl2(cname5,','||cname5,null) || nvl2(cname6,','||cname6,null) ||
        nvl2(cname7,','||cname7,null) || nvl2(cname8,','||cname8,null)
               altercolumns
    from ( select b.table_name,
                   b.constraint_name,
                   max(decode( position, 1, column_name, null )) cname1,
                   max(decode( position, 2, column_name, null )) cname2,
                   max(decode( position, 3, column_name, null )) cname3,
                   max(decode( position, 4, column_name, null )) cname4,
                  max(decode( position, 5, column_name, null )) cname5,
                   max(decode( position, 6, column_name, null )) cname6,
                   max(decode( position, 7, column_name, null )) cname7,
                   max(decode( position, 8, column_name, null )) cname8,
                   count(*) col_cnt
              from (select substr(table_name,1,30) table_name,
                           substr(constraint_name,1,30) constraint_name,
                           substr(column_name,1,30) column_name,
                          position
                      from user_cons_columns ) a,
                   user_constraints b
            where a.constraint_name = b.constraint_name
              and b.constraint_type = 'R'
              and b.table_name='DFUVIEW'
            group by b.table_name, b.constraint_name
         ) cons
   where col_cnt > ALL
           ( select count(*)
               from user_ind_columns i
              where i.table_name = cons.table_name
                and i.column_name in (cname1, cname2, cname3, cname4,
                                      cname5, cname6, cname7, cname8 )
                and i.column_position <= cons.col_cnt
              group by i.index_name
            );

Table or index Fragmentation



TABLE FRAGMENTATION:

--Query to find out fragmentation (the *16 multiplier assumes a 16KB block size; adjust for your db_block_size)
select owner, table_name, round((blocks*16),2)||'kb' "Fragmented size", round((num_rows*avg_row_len/1024),2)||'kb' "Actual size", round((blocks*16),2)-round((num_rows*avg_row_len/1024),2)||'kb' "Wasted size",
((round((blocks*16),2)-round((num_rows*avg_row_len/1024),2))/round((blocks*16),2))*100 -10 "reclaimable space %" from dba_tables where table_name ='DEPDMDSTATIC' AND OWNER LIKE 'SCPOMGR';

declare
   l_fs1_bytes number;
   l_fs2_bytes number;
   l_fs3_bytes number;
   l_fs4_bytes number;
   l_fs1_blocks number;
   l_fs2_blocks number;
   l_fs3_blocks number;
   l_fs4_blocks number;
   l_full_bytes number;
   l_full_blocks number;
   l_unformatted_bytes number;
   l_unformatted_blocks number;
  begin
   dbms_space.space_usage(
      segment_owner      => user,
      segment_name       => 'PLANORDER',
      segment_type       => 'TABLE',
      fs1_bytes          => l_fs1_bytes,
      fs1_blocks         => l_fs1_blocks,
      fs2_bytes          => l_fs2_bytes,
      fs2_blocks         => l_fs2_blocks,
      fs3_bytes          => l_fs3_bytes,
      fs3_blocks         => l_fs3_blocks,
      fs4_bytes          => l_fs4_bytes,
      fs4_blocks         => l_fs4_blocks,
      full_bytes         => l_full_bytes,
      full_blocks        => l_full_blocks,
      unformatted_blocks => l_unformatted_blocks,
      unformatted_bytes  => l_unformatted_bytes
    --  partition_name     => 'P_-106'
   );
   dbms_output.put_line(' FS1 Blocks = '||l_fs1_blocks||' Bytes = '||l_fs1_bytes);
   dbms_output.put_line(' FS2 Blocks = '||l_fs2_blocks||' Bytes = '||l_fs2_bytes);
   dbms_output.put_line(' FS3 Blocks = '||l_fs3_blocks||' Bytes = '||l_fs3_bytes);
   dbms_output.put_line(' FS4 Blocks = '||l_fs4_blocks||' Bytes = '||l_fs4_bytes);
   dbms_output.put_line('Full Blocks = '||l_full_blocks||' Bytes = '||l_full_bytes);
end;
/


INDEX FRAGMENTATION:

We need to analyze index fragmentation post migration.
Use the script below to do so. After the PL/SQL block completes, check del_lf_rows; indexes with high values are the candidates for a rebuild.

CREATE TABLE index_frag
(
   exec_date     DATE,
   index_name    VARCHAR2 (30),
   blocks        NUMBER,
   lf_blks       NUMBER,
   del_lf_rows   NUMBER
);



DECLARE
BEGIN
   FOR c_rec IN (SELECT index_name
                   FROM user_indexes
                  WHERE table_name NOT IN (SELECT TABLE_NAME
                                             FROM user_tables
                                            WHERE IOT_TYPE IS NOT NULL))
   LOOP
      BEGIN
         EXECUTE IMMEDIATE
            'ANALYZE INDEX ' || c_rec.index_name || ' VALIDATE STRUCTURE';

         INSERT INTO index_frag
            SELECT SYSDATE,
                   c_rec.index_name,
                   blocks,
                   lf_blks,
                   del_lf_rows
              FROM index_stats;
          COMMIT;
      EXCEPTION
         WHEN OTHERS
         THEN
            DBMS_OUTPUT.put_line ('Issue with Index : ' || c_rec.index_name);
            CONTINUE;
      END;
   END LOOP;
END;
/

set echo off
set termout off
set verify off
set trimspool on
set feedback off
set heading off
set lines 300
set pages 0
set serverout on
spool analyze_User1_indexes.tmp

select 'exec DBMS_STATS.UNLOCK_TABLE_STATS ('''|| user ||''','''|| table_name ||''');' from user_tables order by table_name asc;

begin
for x in ( select index_name from user_indexes where index_type = 'NORMAL')
loop
dbms_output.put_line('ANALYZE INDEX ' || x.index_name || ' COMPUTE STATISTICS;');
dbms_output.put_line('ANALYZE INDEX ' || x.index_name || ' VALIDATE STRUCTURE;');
dbms_output.put_line('select name, height, lf_rows, del_lf_rows, round((del_lf_rows/lf_rows)*100,2) as ratio from index_stats where (lf_rows > 100 and del_lf_rows > 0) and (height > 3 or ((del_lf_rows/lf_rows)*100) > 20);');
end loop;
end;
/

select 'exec DBMS_STATS.LOCK_TABLE_STATS ('''|| user ||''','''|| table_name ||''');' from user_tables order by table_name asc;

spool off
column name format a40
spool $FPATH/analyze_User1_index_report.txt
PROMPT NAME | HEIGHT | LF_ROWS | DEL_LF_ROWS | RATIO (del_lf_rows/lf_rows) %
@@$FPATH/analyze_User1_indexes.tmp
spool off

create or replace procedure show_space
( p_segname   in varchar2,
  p_owner     in varchar2 default user,
  p_type      in varchar2 default 'TABLE',
  p_partition in varchar2 default NULL )
authid current_user
as
    l_free_blks          number;
    l_total_blocks       number;
    l_total_bytes        number;
    l_unused_blocks      number;
    l_unused_bytes       number;
    l_LastUsedExtFileId  number;
    l_LastUsedExtBlockId number;
    l_LAST_USED_BLOCK    number;

    procedure p( p_label in varchar2, p_num in number )
    is
    begin
        dbms_output.put_line( rpad(p_label,40,'.') || p_num );
    end;
begin
    -- free_blocks applies only to segments in MANUAL segment space managed tablespaces
    for x in ( select tablespace_name
                 from dba_tablespaces
                where tablespace_name = ( select tablespace_name
                                            from dba_segments
                                           where segment_type = p_type
                                             and segment_name = p_segname
                                             and owner        = p_owner )
                  and segment_space_management <> 'AUTO' )
    loop
        dbms_space.free_blocks
        ( segment_owner     => p_owner,
          segment_name      => p_segname,
          segment_type      => p_type,
          partition_name    => p_partition,
          freelist_group_id => 0,
          free_blks         => l_free_blks );
    end loop;

    dbms_space.unused_space
    ( segment_owner             => p_owner,
      segment_name              => p_segname,
      segment_type              => p_type,
      partition_name            => p_partition,
      total_blocks              => l_total_blocks,
      total_bytes               => l_total_bytes,
      unused_blocks             => l_unused_blocks,
      unused_bytes              => l_unused_bytes,
      LAST_USED_EXTENT_FILE_ID  => l_LastUsedExtFileId,
      LAST_USED_EXTENT_BLOCK_ID => l_LastUsedExtBlockId,
      LAST_USED_BLOCK           => l_LAST_USED_BLOCK );

    p( 'Free Blocks', l_free_blks );
    p( 'Total Blocks', l_total_blocks );
    p( 'Total Bytes', l_total_bytes );
    p( 'Total MBytes', trunc(l_total_bytes/1024/1024) );
    p( 'Unused Blocks', l_unused_blocks );
    p( 'Unused Bytes', l_unused_bytes );
    p( 'Last Used Ext FileId', l_LastUsedExtFileId );
    p( 'Last Used Ext BlockId', l_LastUsedExtBlockId );
    p( 'Last Used Block', l_LAST_USED_BLOCK );
end;
/



--
-- File name: indexes_2b_shrunk.sql
--
-- Purpose: List of candidate indexes to be shrunk (rebuild online)
--
-- Author: Carlos Sierra
--
-- Version: 2017/07/12
--
-- Usage: Execute on PDB
--
-- Example: @indexes_2b_shrunk.sql
--
-- Notes: Execute connected into a PDB.
-- Consider then:
-- ALTER INDEX [schema.]index REBUILD ONLINE;
--
---------------------------------------------------------------------------------------
 
-- select only those indexes with an estimated space saving percent greater than 25%
VAR savings_percent NUMBER;
EXEC :savings_percent := 25;
-- select only indexes with current size (as per cbo stats) greater then 1MB
VAR minimum_size_mb NUMBER;
EXEC :minimum_size_mb := 1;
 
SET SERVEROUT ON ECHO OFF FEED OFF VER OFF TAB OFF LINES 300;
 
COL report_date NEW_V report_date;
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD"T"HH24:MI:SS') report_date FROM DUAL;
SPO /tmp/indexes_2b_shrunk_&&report_date..txt;
 
DECLARE
l_used_bytes NUMBER;
l_alloc_bytes NUMBER;
l_percent NUMBER;
BEGIN
DBMS_OUTPUT.PUT_LINE('PDB: '||SYS_CONTEXT('USERENV', 'CON_NAME'));
DBMS_OUTPUT.PUT_LINE('---');
DBMS_OUTPUT.PUT_LINE(
RPAD('OWNER.INDEX_NAME', 35)||' '||
LPAD('SAVING %', 10)||' '||
LPAD('CURRENT SIZE', 20)||' '||
LPAD('ESTIMATED SIZE', 20));
DBMS_OUTPUT.PUT_LINE(
RPAD('-', 35, '-')||' '||
LPAD('-', 10, '-')||' '||
LPAD('-', 20, '-')||' '||
LPAD('-', 20, '-'));
FOR i IN (SELECT x.owner, x.index_name, SUM(s.leaf_blocks) * TO_NUMBER(p.value) index_size,
REPLACE(DBMS_METADATA.GET_DDL('INDEX',x.index_name,x.owner),CHR(10),CHR(32)) ddl
FROM dba_ind_statistics s, dba_indexes x, dba_users u, v$parameter p
WHERE u.oracle_maintained = 'N'
AND x.owner = u.username
AND x.tablespace_name NOT IN ('SYSTEM','SYSAUX')
AND x.index_type LIKE '%NORMAL%'
AND x.table_type = 'TABLE'
AND x.status = 'VALID'
AND x.temporary = 'N'
AND x.dropped = 'NO'
AND x.visibility = 'VISIBLE'
AND x.segment_created = 'YES'
AND x.orphaned_entries = 'NO'
AND p.name = 'db_block_size'
AND s.owner = x.owner
AND s.index_name = x.index_name
GROUP BY
x.owner, x.index_name, p.value
HAVING
SUM(s.leaf_blocks) * TO_NUMBER(p.value) > :minimum_size_mb * POWER(2,20)
ORDER BY
index_size DESC)
LOOP
DBMS_SPACE.CREATE_INDEX_COST(i.ddl,l_used_bytes,l_alloc_bytes);
IF i.index_size * (100 - :savings_percent) / 100 > l_alloc_bytes THEN
l_percent := 100 * (i.index_size - l_alloc_bytes) / i.index_size;
DBMS_OUTPUT.PUT_LINE(
RPAD(i.owner||'.'||i.index_name, 35)||' '||
LPAD(TO_CHAR(ROUND(l_percent, 1), '990.0')||' % ', 10)||' '||
LPAD(TO_CHAR(ROUND(i.index_size / POWER(2,20), 1), '999,999,990.0')||' MB', 20)||' '||
LPAD(TO_CHAR(ROUND(l_alloc_bytes / POWER(2,20), 1), '999,999,990.0')||' MB', 20));
END IF;
END LOOP;
END;
/
 
SPO OFF;