Query to return the current Oracle patch level:
select patch_id, patch_uid, version, status, description from dba_registry_sqlpatch where bundle_series = 'PSU';
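If the BUNDLE_SERIES filter returns no rows (the series name varies across versions and patch types), a broader query against the same view also works - a sketch:
select patch_id, action, status, action_time, description
from dba_registry_sqlpatch
order by action_time;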
To find the latest Oracle patches available, see:
https://www.oracle.com/technetwork/topics/security/alerts-086861.html
Monday, December 26, 2016
Friday, December 9, 2016
unusable index and truncate table
Marking an index UNUSABLE drops the index segment, but truncating the table makes the index usable again.
ALTER INDEX X2F2TEST_UNUSABLE UNUSABLE
select status,VISIBILITY,SEGMENT_CREATED,index_name from user_indexes a where index_name='X2F2TEST_UNUSABLE'
will return:
UNUSABLE VISIBLE NO X2F2TEST_UNUSABLE
select * from dba_segments where segment_name='X2F2TEST_UNUSABLE'
will return no rows, which shows that the segment was dropped.
When an index is unusable, whether Oracle raises an error during DML depends on the SKIP_USABLE_INDEXES initialization parameter, SKIP_UNUSABLE_INDEXES. If it is TRUE (the default), the unusable index is simply ignored and not maintained during DML; if it is FALSE (never use FALSE), DML throws an error that an unusable index exists, as does a SELECT whose plan uses that index.
Truncating a table makes an unusable index valid again. So if we truncate the table after marking the index unusable, the index status changes back to VALID.
truncate table X2F2TEST_UNUSABLE;
select status,VISIBILITY,SEGMENT_CREATED,index_name from user_indexes a where index_name='X2F2TEST_UNUSABLE'
will return:
VALID VISIBLE YES X2F2TEST_UNUSABLE
To make an unusable index usable, we have to rebuild the index.
ALTER INDEX scpomgr.X2F2TEST_UNUSABLE rebuild parallel 5
ALTER INDEX scpomgr.X2F2TEST_UNUSABLE noparallel
Unusable index: when you make an index unusable it is ignored by the optimizer and is not maintained by DML operations. It must be rebuilt before it can be used again.
Invisible index: when you make an index invisible it is ignored by the optimizer; you can use this for testing. You can make an index invisible when you create it, or afterwards with an ALTER INDEX command. However, if you set the OPTIMIZER_USE_INVISIBLE_INDEXES parameter to TRUE, invisible indexes can be used by the optimizer. This parameter can be set at session or system level.
NOTE: making an index invisible invalidates all SQL statements in the shared pool whose execution plans use that index.
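A minimal sketch of the invisible-index behaviour described above (table and index names are illustrative):
create index t_demo_idx on t_demo (col1) invisible;
alter index t_demo_idx visible;
alter index t_demo_idx invisible;
--let the optimizer consider invisible indexes for this session only:
alter session set optimizer_use_invisible_indexes = true;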
Tuesday, September 20, 2016
Good-to-know things in Oracle
http://www.joblagao.com/voices/joblagao-university/20-jda-technical-interview-questions
1. Enable trace for another session
begin
sys.DBMS_SYSTEM.set_sql_trace_in_session(264, 37690, true );
end;
/
The first parameter is the SID and the second is the session serial# (not the thread#).
Here in promo, one session was blocking many others, so I wanted to trace what that session was doing.
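To find the SID and serial# of the session to trace, something like this works (a sketch - adjust the filter to your case):
select sid, serial#, username, status, sql_id
from v$session
where username = 'SCOTT';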
2. The query below fails, and the reason is that the outer query is enclosed in parentheses - a top-level WITH clause cannot be wrapped in parentheses.
(with t as (select * from emp)
select * from t);
3. SQLCODE and SQLERRM: why can't I insert SQLCODE and SQLERRM values directly into a table? Why can I insert them only after assigning them to separate variables?
Ans:
The reason is that SQLCODE and SQLERRM are functions that only return valid values within exception blocks of PL/SQL code. When you issue a SQL statement it is passed to the SQL engine; there those functions would be out of scope, would not provide the correct values, and the SQL engine would not be able to execute them. This is well documented in the Oracle docs.
You may write a user-defined function that wraps these functions to implement the functionality.
SQLERRM() is a function defined only in the PL/SQL engine - kind of the opposite of DECODE(), which is defined only in the SQL engine.
Nothing prevents you from defining a user PL/SQL function (which can be used in SQL) and use that to wrap SQLERRM(), e.g.
create or replace function oraMessage( oraError number ) return varchar2 is
begin
return( SQLERRM(oraError) );
end;
/
This can be quite handy if you have a message log table (for application code) that has recorded ORA errors (via SQLCODE) - and you want to display the associated error message.
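Usage, assuming the oraMessage function above has been created:
select oraMessage(-1476) from dual;
--ORA-01476: divisor is equal to zero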
4. I have a column where a number is stored as 8.756412556312453E37 and I want to display it as
87564125563124530000000000000000000000
Ans: Use the TO_CHAR function.
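For example - the scientific notation is only how the client displays the NUMBER, and TO_CHAR without a format mask should expand it:
select to_char(8.756412556312453E37) from dual;
--87564125563124530000000000000000000000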
5. Simple example of BULK COLLECT and FORALL (for interviews)
DECLARE
CURSOR c_item
IS
SELECT item, item_descr FROM stg_item;
TYPE item_tab_type IS TABLE OF c_item%ROWTYPE;
item_tab item_tab_type;
BEGIN
OPEN c_item;
LOOP
FETCH c_item BULK COLLECT INTO item_tab LIMIT 200000;
EXIT WHEN item_tab.COUNT = 0;
FORALL i IN 1 .. item_tab.COUNT
INSERT INTO item ( item, item_descr)
SELECT /*+ parallel */
item_tab ( i).item, item_tab ( i).item_descr FROM DUAL;
COMMIT;
END LOOP;
CLOSE c_item;
END;
/
Note: changing table ITEM to nologging and then doing a simple "insert /*+ append */ into item select * from stg_item" seems like a better bet.
6. UTL_FILE or standard SPOOL - which one to use and when?
Ans: UTL_FILE is a server-side I/O operation on the server file system. UTL_FILE can be integrated into application code, but as it writes on the server side, we need access to read the file on the server.
SPOOL is a SQL*Plus client I/O operation on a client file system.
Another option is to achieve the same by storing the result in a CLOB/BFILE etc. in the database and exporting it, or reading it via any interface.
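A minimal UTL_FILE sketch, assuming a directory object named DATA_DIR already exists and you have write privileges on it:
DECLARE
fh UTL_FILE.FILE_TYPE;
BEGIN
fh := UTL_FILE.FOPEN ('DATA_DIR', 'out.txt', 'w'); --open for writing on the server
UTL_FILE.put_line (fh, 'written on the server file system');
UTL_FILE.fclose (fh);
END;
/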
7. I have two tables, NSK_LOC_TYPE and NSK_LOC: NSK_LOC_TYPE holds a location and its type, and NSK_LOC holds the other details of that location.
I need to populate NSK_LOC based on NSK_LOC_TYPE.
If a location LOC1 is of type 3, I need to insert two records into NSK_LOC: LOC1 and SHIP-LOC1.
If a location LOC2 is of type 6, I need to insert only one record into NSK_LOC: LOC2.
How can we achieve this with a single insert and without using UNION?
Ans: INSERT FIRST
WHEN LOC_TYPE = 3
THEN
INTO NSK_LOC
VALUES (LOC, LOC)
INTO NSK_LOC
VALUES (LOC, 'SHIP-' || LOC)
WHEN LOC_TYPE = 6
THEN
INTO NSK_LOC
VALUES (LOC, LOC)
SELECT LOC, LOC_TYPE FROM NSK_LOC_TYPE;
8. Selecting one record from each reciprocal pair
I have a data like this:
select 'A' c1, 'B' c2, 100 as c3 from dual
union all
select 'B' c1, 'A' c2, 100 as c3 from dual
union all
select 'D' c1, 'C' c2, 100 as c3 from dual
union all
select 'C' c1, 'D' c2, 100 as c3 from dual
We see that the data has a reciprocal pattern on C1 and C2, like A --> B, B --> A or D --> C, C --> D. When we have data like this I only need to get 2 records, basically the 1st and the 3rd one. So the output should be:
select 'A' c1, 'B' c2, 100 as c3 from dual
union all
select 'D' c1, 'C' c2, 100 as c3 from dual
Ans: with data as (
select 'A' c1, 'B' c2, 100 as c3 from dual
union all
select 'B' c1, 'A' c2, 100 as c3 from dual
union all
select 'D' c1, 'C' c2, 100 as c3 from dual
union all
select 'C' c1, 'D' c2, 100 as c3 from dual
)
select c1, c2, c3
from (
select d.*, row_number() over (partition by least(c1, c2), greatest(c1, c2) order by c1) rn
from data d
)
where rn = 1;
9. Bitmap Indexes and Deadlocks
Bitmap indexes are not appropriate for tables that have lots of single-row DML operations (inserts), and especially concurrent single-row DML operations. Deadlocks are the result of concurrent inserts, as the following example shows. Open two windows, one for Session 1 and one for Session 2:
Setup (either session):
create table bitmap_index_demo ( value varchar2(20) );
insert into bitmap_index_demo
select decode(mod(rownum,2),0,'M','F') from all_objects;
create bitmap index bitmap_index_demo_idx on bitmap_index_demo(value);
Session 1:
insert into bitmap_index_demo values ('M');
1 row created.
Session 2:
insert into bitmap_index_demo values ('F');
1 row created.
Session 1:
insert into bitmap_index_demo values ('F');
...... waiting ......
Session 2:
insert into bitmap_index_demo values ('M');
...... waiting ......
Session 1 then fails with:
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
10. Gathering stats implicitly commits any pending transaction in the session.
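A quick way to observe this (table name illustrative):
insert into t1 values (1); --open transaction
exec dbms_stats.gather_table_stats(user, 'T1') --implicitly commits it
rollback; --rolls back nothing; the row stays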
Monday, September 12, 2016
Oracle 12.1.0.2: Export failed with ORA-20002: Version of statistics table "MYSTATSTAB" is too old. Please try upgrading it with dbms_stats.upgrade_stat_table
I am trying to follow these steps in 12.1.0.2 and it fails when I run step 3 (export stats). This works well in 11g.
BEGIN
dbms_stats.Create_stat_table(user, 'MY_STATS_TAB');
END;
exec dbms_stats.upgrade_stat_table(user, 'MY_STATS_TAB')
BEGIN
DBMS_STATS.Export_table_stats (USER,
stattab => 'MY_STATS_TAB',
tabname => 'MD_TABLE_INFO',
statid => '11g_stats');
END;
ORA-20002: Version of statistics table "wwfuser"."MY_STATS_TAB" is too old.
Please try upgrading it with dbms_stats.upgrade_stat_table
ORA-06512: at "SYS.DBMS_STATS", line 18000
ORA-06512: at line 2
On further research we found that this issue occurs when the database NLS_LENGTH_SEMANTICS is set to CHAR (our databases default to CHAR instead of BYTE). This is Oracle bug 18459892, which is fixed in 12.2. After changing NLS_LENGTH_SEMANTICS it worked fine.
exec dbms_stats.drop_stat_table(ownname => 'WWFUSER', stattab => 'MY_STATS_TAB');
ALTER SESSION SET NLS_LENGTH_SEMANTICS = BYTE
exec dbms_stats.create_stat_table(ownname => 'WWFUSER', stattab => 'MY_STATS_TAB');
EXEC dbms_stats.export_table_stats(ownname => 'WWFUSER',tabname => 'MD_TABLE_INFO',stattab => 'MY_STATS_TAB',statid => 'Q21212');
Another alternative is to create the stats table in the SYS schema, or with BYTE semantics.
As SYS:
EXEC dbms_stats.create_stat_table(ownname => 'SYS', stattab => 'STATTAB', tblspace => 'WWFDATA');
grant all on STATTAB to wwfmgr;
Then as the WWF user:
EXEC dbms_stats.export_table_stats(ownname => 'WWFUSER', tabname => 'MD_TABLE_INFO', stattab => 'STATTAB', statid => 'Q21212', statown => 'SYS');
Hope it helps.
Monday, June 6, 2016
Sort records in ascending order without using ORDER BY
Hint: UNION removes duplicates, which Oracle typically implements with a sort, so the result usually comes back in ascending order (an implementation side effect, not guaranteed behavior).
create table t27 as
select rownum rn from dual connect by rownum<100;
select * from t27;
delete from t27 where rn<10;
insert into t27
select rownum rn from dual connect by rownum<10;
select * from t27;
with q1 as (select rn from t27 union select rn from t27)
select t1.rn from t27 t1, q1 where T1.RN=q1.rn;
Monday, May 30, 2016
ORA-08103: Object No Longer Exists
This error generally occurs when an object's segment is no longer present but is still referenced by a variable/cursor, etc.
Ex: With ON COMMIT DELETE ROWS, GTT data is truncated at commit, and any reference to that data will then fail.
--Example with GTT for ORA - 08103:Object No Longer Exists
CREATE GLOBAL TEMPORARY TABLE gtt_tab (id NUMBER) ON COMMIT DELETE ROWS;
INSERT INTO gtt_tab
VALUES (2);
SELECT * FROM gtt_tab;
DECLARE
cur_var SYS_REFCURSOR;
id_var gtt_tab.id%TYPE;
BEGIN
OPEN cur_var FOR SELECT id FROM gtt_tab;
COMMIT;--Before fetch table got truncated due to commit
LOOP
FETCH cur_var INTO id_var;
EXIT WHEN cur_var%NOTFOUND;
DBMS_OUTPUT.put_line ( id_var);
END LOOP;
END;
/
Ex: The same happens with a normal table whose segment is truncated while a cursor is still open on it.
--Example with Normal table for ORA - 08103:Object No Longer Exists
CREATE TABLE heap_tab (id NUMBER);
INSERT INTO heap_tab
VALUES (5);
SELECT * FROM heap_tab;
DECLARE
cur_var SYS_REFCURSOR;
id_var heap_tab.id%TYPE;
BEGIN
OPEN cur_var FOR SELECT id FROM heap_tab;
execute immediate 'truncate table heap_tab';--Before fetch truncated tables
LOOP
FETCH cur_var INTO id_var;
EXIT WHEN cur_var%NOTFOUND;
DBMS_OUTPUT.put_line ( id_var);
END LOOP;
END;
/
Friday, May 27, 2016
Direct path inserts and Parallel DML operations and exceptions
It does a direct-path load to disk, bypassing the buffer cache. It generates logging for dictionary changes, but only minimal logging for the DML insert operation itself. So it is faster for bulk load operations but may be slower for small insertions.
A direct path load always loads above the high water mark, since it is formatting and writing blocks directly to disk - it cannot reuse any existing space.
We cannot query the table or perform any DML on it in the same session after direct-pathing into it until we commit.
drop table t_append;
create table t_append as select * from user_objects;
create table t_append_test as select * from user_objects where 1=0;
insert /*+ APPEND */ into t_append_test
select * from t_append;
select * from t_append_test;
[Error] Execution (15: 1): ORA-12838: cannot read/modify an object after modifying it in parallel
As no UNDO is generated for a direct-path load, other sessions do not read the new data from UNDO; they read from the datafiles below the old high water mark. Oracle will not allow uncommitted changes above the old high water mark to be read by other users, and to keep things consistent it also does not allow the same session to query the table while the changes are uncommitted.
And because the direct-path transaction holds an exclusive lock on the table, no other session can modify the table until the commit, so the load is effectively a serial operation on that segment.
From oracle documentation:
- During direct-path INSERT operations, Oracle appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache. Free space in the existing data is not reused, and referential integrity constraints are ignored. These procedures combined can enhance performance.
When you are inserting in parallel DML mode, direct-path INSERT is the default. In order to run in parallel DML mode, the following requirements must be met:
- You must have Oracle Enterprise Edition installed.
- You must enable parallel DML in your session. To do this, run the following statement:
ALTER SESSION { ENABLE | FORCE } PARALLEL DML;
- You must specify the parallel attribute for the target table, either at create time or subsequently, or you must specify the PARALLEL hint for each insert operation.
To disable direct-path INSERT, specify the NOAPPEND hint in each INSERT statement. Doing so overrides parallel DML mode.
As all parallel DML operations are direct-path inserts by default, any subsequent DML or select on the table from the same session will fail with the error mentioned above until the transaction is committed.
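A minimal parallel-DML sketch using the tables from the example above (the degree of 4 is arbitrary):
alter session enable parallel dml;
insert /*+ parallel(t_append_test 4) */ into t_append_test
select * from t_append;
--the same session can query the table again only after:
commit;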
Note: if you do not care about logging for recovery, you may set the table to NOLOGGING (a segment attribute, not a hint) along with the APPEND hint; redo generation for the table data is then skipped and insert performance may improve further.
From Tom:
append does a direct path load (if it can, it is not a promise, you are requesting and we may or may not do it for you - silently)
if you direct path load, the transaction that did the direct path load CANNOT query that segment - but other transactions can, they just cannot see the newly loaded data.
if you direct path load, you never use any existing free space, it always writes above the high water mark.
if you direct path load, we bypass UNDO on the table - only the table - modifications
if you direct path load, you'll maintain indexes - we build mini indexes on the newly loaded data and merge them into the 'real' indexes in bulk. A direct path load of large amounts of data will maintain indexes very efficiently.
if you direct path load you can bypass redo on the TABLE in archivelog mode, if the database is set up to allow nologging and you have the segment set to nologging
direct path loading bypasses the buffer cache, you write directly to the datafiles.
direct path loading is only appropriate for the first load of a segment or an increment load of lots of data - or an increment load into a table that never has any deletes (so there is no free space to consider)
transactional systems - you probably won't use it.
warehouses - a tool you'll use a lot
Note: insert /*+ append */ will ignore the append hint and use conventional path loading when the table has referential integrity or a trigger.
drop table t_append;
create table t_append as select * from user_objects;
create table t_append_test as select * from user_objects where 1=0;
insert /*+ APPEND */ into t_append_test
select * from t_append;
select * from t_append_test;
The last select fails with the ORA-12838 error mentioned above.
Now define an FK on the child table and then load data using the append hint.
alter table t_append add constraint t_append_pk primary key(object_name,OBJECT_TYPE);
alter table t_append_test add constraint t_append_test_fk foreign key(object_name,OBJECT_TYPE) references t_append(object_name,OBJECT_TYPE);
insert /*+ APPEND */ into t_append_test
select * from t_append;
select * from t_append_test;
This time it succeeds, which means Oracle ignored the direct-path insert because the table has an FK defined.
The same is true for triggers: if the table has triggers defined, Oracle ignores the direct-path insert and goes with a conventional-path load.
sqlldr can and does ignore triggers/referential integrity and even uniqueness during a direct path load, insert /*+ append */ does not - you are not using direct path when they (constraints) exist.
parallel is always a direct path, if you go parallel, you will be appending.
Tuesday, May 17, 2016
Performance issue on transient tables
Gathering stats on PROCESS (transient) tables and maintaining them is a somewhat troublesome process, since the stats must be gathered when each table is populated at its highest volume, and that point differs from job to job. The column statistics on key columns then need to be deleted, since those too vary with each job.
Instead, deleting the stats and locking them works consistently: Oracle then uses dynamic sampling on the fly and arrives at the right plan.
The steps below are a one-time activity; no further maintenance is needed.
TRUNCATE TABLE PROCESSTAB;
EXEC dbms_stats.delete_table_stats('SCPOAPPG','PROCESSTAB');
EXEC dbms_stats.lock_table_stats('SCPOAPPG','PROCESSTAB');
Sunday, May 15, 2016
All oracle bug fixes from DB itself.
The query below shows all optimizer fixes present in the database, up to the current release.
SELECT optimizer_feature_enable, bugno, description
FROM v$system_fix_control
ORDER BY REGEXP_REPLACE ( REGEXP_REPLACE ( optimizer_feature_enable || '.', '(\d+\.)', '0000000\1'), '0+(........)\.', '\1') desc, bugno
/
Friday, May 13, 2016
Estimate the remaining time of drop column operation
If the table is huge it may take a long time to drop a column, and dropping multiple columns together can multiply the time by the number of columns in the drop operation.
There is an interesting article by Jonathan Lewis on this topic that is well worth reading:
http://allthingsoracle.com/dropping-columns/
The following are my personal observations; it is interesting to see the different behaviors.
1. Drop column directly.
ALTER TABLE t_big drop column SKIP_CORRUPT;
ALTER TABLE t_big drop (SKIP_CORRUPT,MAX_EXTENTS);
If we drop the column directly, the table is locked exclusively while the drop executes, and the entire drop runs as a single massive transaction. That can increase redo drastically, and if there is not enough space the operation may fail.
If we use this option, we can monitor the time remaining via v$session_longops, or alternatively via the TOAD session browser/long ops:
12 28 % 13-05-2016 01:04:38 AM 175 Rowid Range Scan: SCPOMGR.T_BIG: 34000 out of 123044 Blocks done 67
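The long-ops entry above comes from a query along these lines (a sketch):
select sid, opname, target, sofar, totalwork,
round(100 * sofar / totalwork) pct_done, time_remaining
from v$session_longops
where time_remaining > 0;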
2. By using a checkpoint
If we do not want redo to grow drastically and want to improve the performance of the drop operation, we can use a checkpoint with a chosen commit frequency:
alter table t_big drop column MIN_EXTENTS checkpoint 5000;
The drawback of checkpoint is that if the operation fails for some reason or is terminated by the user (I did that simply with CTRL+C while the drop was running in sqlplus), it leaves the table in a partially dropped, unusable state where we cannot perform any operation on it.
If we try to perform any operation on the table it will throw the error below:
[Error] Execution (1: 15): ORA-12986: columns in partially dropped state. Submit ALTER TABLE DROP COLUMNS CONTINUE
To overcome this we need to execute the command suggested in the error.
However, with this option I observed that we cannot monitor the time remaining via v$session_longops (or the TOAD session browser/long ops): when checkpoint is used, the long-operations information is simply not populated. The same is true for ALTER TABLE t_big DROP COLUMNS CONTINUE - I assume Oracle uses a checkpoint for that operation as well.
3. Set the column unused and drop it later (alternatively, in 12c we can mark the column invisible if the purpose is only to hide it from the application).
alter table t_big drop unused columns checkpoint 5000;
Interestingly, even when a column is marked unused it still physically exists, and actually dropping it locks the table just like a normal drop operation. As before, whether we can monitor the time remaining depends on checkpoint usage, and if a checkpoint is used and the operation fails for any reason, it likewise leaves the table in an unusable state.
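For completeness, the set-unused step itself looks like this (a sketch, reusing the column from the earlier examples):
alter table t_big set unused (min_extents);
--later, in a maintenance window:
alter table t_big drop unused columns checkpoint 5000;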
Footnote: be careful when using checkpoint, or be ready for the consequences.
Wednesday, May 11, 2016
Features that are interesting and that I was unaware of until now.
1. The maximum size of the VARCHAR2, NVARCHAR2, and RAW data types has been increased from 4,000 to 32,767 bytes (this requires the MAX_STRING_SIZE initialization parameter to be set to EXTENDED). Increasing the allotted size for these data types allows users to store more information in character data types before switching to large objects (LOBs). This is especially useful for brief textual data and for the ability to build indexes on these columns.
2. DEFAULT Values for Columns on Explicit NULL Insertion
The DEFAULT definition of a column can be extended to have the DEFAULT applied on explicit NULL insertion. The DEFAULT clause has a new ON NULL clause, which instructs the database to assign the specified default column value when an INSERT statement attempts to assign a value that evaluates to NULL.
Ex:
create table default_on_null (id number, name varchar2(4000) default on null 'Anand' ,name2 varchar2(4000) default 'Yadav');
insert into default_on_null values(1,null,null);
select * from default_on_null;
1 Anand
It returns the default value of the NAME column in place of NULL, while NAME2 (with a plain DEFAULT) stays NULL because NULL was inserted explicitly. This guarantees there will never be a NULL value in the NAME column.
3. Identity Columns
Oracle Database 12c Release 1 implements ANSI-compliant IDENTITY columns: auto-increment is implemented by enhancing DEFAULT or DEFAULT ON NULL semantics for use with SEQUENCE.NEXTVAL and SYS_GUID, with support for built-in functions and implicit return of default values. Migration from database systems that use identity columns is thereby simplified.
Ex: No more sequences need to be defined - use the identity clause, which is almost identical to a sequence
CREATE TABLE t7 (c1 NUMBER GENERATED ALWAYS AS IDENTITY (start with 1 increment by 1 cache 50 nocycle) ,
c2 VARCHAR2(10));
INSERT INTO t7(c2) VALUES ('abc');
INSERT INTO t7 (c1, c2) VALUES (null, 'xyz');--It will fail: with ALWAYS you cannot insert or update this column's value
If you want to be able to insert user-defined values as well, write it like this (BY DEFAULT allows user-supplied values, unlike the implicit ALWAYS):
CREATE TABLE t8 (c1 NUMBER GENERATED BY DEFAULT AS IDENTITY (start with 1 increment by 1 cache 50 nocycle) ,
c2 VARCHAR2(10));
SELECT c1, c2 FROM t8;
UPDATE t8 SET c1 = 3 WHERE c1 = 1;
Ex: Use of always so that user don't have to insert/update any value
CREATE TABLE t6 (c1 NUMBER GENERATED ALWAYS AS IDENTITY ,
c2 VARCHAR2(10));
INSERT INTO t6(c2) VALUES ('abc');
INSERT INTO t6 (c1, c2) VALUES (null, 'xyz');--It will fail
SELECT c1, c2 FROM t6;
UPDATE T6 SET C1=3 WHERE C1=1--It will fail
Ex: Default sequence will be used but user can insert/update values
CREATE TABLE t5 (c1 NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
c2 VARCHAR2(10));
INSERT INTO t5(c2) VALUES ('abc');
INSERT INTO t5 (c1, c2) VALUES (null, 'xyz');
SELECT c1, c2 FROM t5;
4. Top N queries simplified in 12C
Ex: First 10 rows or first 10 percent rows
SELECT T.*
FROM T
ORDER BY OWNER,OBJECT_NAME
FETCH FIRST 10 [PERCENT] ROWS ONLY;
Ex: Rows from the 6th to the 15th
SELECT T.*
FROM T
ORDER BY OWNER,OBJECT_NAME
OFFSET 5 ROWS
FETCH NEXT 10 ROWS ONLY;
Internally Oracle uses analytic functions to get these records, but the user doesn't need to know that. You may check the predicates in the explain plan to verify how the query was actually executed.
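The pre-12c equivalent - roughly what the optimizer rewrites FETCH FIRST into (a sketch):
SELECT *
FROM (SELECT t.*, ROW_NUMBER() OVER (ORDER BY owner, object_name) rn
FROM t)
WHERE rn <= 10;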
Sunday, May 8, 2016
Best way to learn Oracle
Real performance issues:
http://www.runningoracle.com/index.php?cPath=1&osCsid=t8gc7eoa5igsgmb93goik617g2
12 Things You Will Love about Oracle Database 12c R2
https://go.oracle.com/LP=61561?src1=:em:ne:ie:pt:SEV100664921STG3&elq_mid=93659&sh=26132613230226232605293436080903&cmid=OUNV170405P00034C0004
http://blog.enmotech.com/2018/01/11/12-things-you-need-to-know-about-oracle-database-12c-r2/
Video tutorials:
https://www.youtube.com/watch?v=2QggbUdNsfI&feature=youtu.be&list=PLKCk3OyNwIzvwEXdaubc6PQXwnQOAE9h2
Follow biggies:
Jonathan:
https://community.oracle.com/reputation.jspa?username=Jonathan+Lewis&viewID=history
https://jonathanlewis.wordpress.com
https://community.oracle.com/reputation.jspa?username=rp0428&viewID=history
https://community.oracle.com/reputation.jspa?username=Frank+Kulash&viewID=history
https://community.oracle.com/reputation.jspa?username=BluShadow&viewID=history
https://community.oracle.com/reputation.jspa?username=sb92075&viewID=history
https://community.oracle.com/reputation.jspa?username=Solomon+Yakobson&viewID=history
https://community.oracle.com/reputation.jspa?username=John+Stegeman&viewID=history
DBA Gods' blog links can be found here:
http://dbakevlar.com/links/
http://allthingsoracle.com/topics/performance-tuning/
Blogs:
http://arup.blogspot.in/2010/12/100-things-you-probably-didnt-know.html
http://www.proligence.com/downloads.html
Arup Nanda:database administration, from performance tuning to security and disaster recovery
http://www.oracle.com/technetwork/issue-archive/nanda-column-index-2098438.html
Steven Feuerstein: For pl-sql
http://www.oracle.com/technetwork/issue-archive/index-087690.html
Melanie Caffrey: SQL/PL_SQL
http://www.oracle.com/technetwork/issue-archive/index-1429427.html
Tom Kyte: Anything
http://www.oracle.com/technetwork/issue-archive/index-093676.html
Must read articles from Tom:
http://www.oracle.com/technetwork/issue-archive/2013/13-sep/o53asktom-1999186.html
http://www.oracle.com/technetwork/issue-archive/2013/13-nov/o63asktom-2034271.html
http://www.oracle.com/technetwork/issue-archive/2014/14-jan/o14asktom-2079690.html
http://www.oracle.com/technetwork/issue-archive/2014/14-mar/o24asktom-2147206.html
http://www.oracle.com/technetwork/issue-archive/2014/14-may/o34asktom-2162246.html
Oracle performance tuning:
https://antognini.ch/publications/
https://people.sap.com/stefan.koehler
Adaptive query optimization related blogs:
https://blogs.sap.com/2013/09/24/oracle-db-optimizer-part-vii-looking-under-the-hood-of-adaptive-query-optimization-adaptive-plans-oracle-12c/
http://www.ludovicocaldara.net/dba/cpu-usage-12c-dbms_feature_awr/
http://www.ludovicocaldara.net/dba/sql-plan-directives-problem/
https://blog.dbi-services.com/nothing-in-feature-usage-statistics-since-12c/
Other articles written by prominent writers
http://www.oracle.com/technetwork/oramag/magazine/tech-articles/index.html
https://richardfoote.wordpress.com/
http://newbiedba.blogspot.in/
http://gerardnico.com/wiki/database/oracle/dbwn
http://oracle-randolf.blogspot.in
https://jonathanlewis.wordpress.com/ or http://allthingsoracle.com
http://blog.tanelpoder.com/2007/
https://blogs.oracle.com/sql/12-things-developers-will-love-about-oracle-database-12c-release-2#approx-queries
https://coskan.wordpress.com/blogroll/
https://oracleriddleblog.wordpress.com/
https://carlos-sierra.net/
https://blogs.oracle.com/optimizer/
http://blog.tanelpoder.com/2010/06/23/the-full-power-of-oracles-diagnostic-events-part-2-oradebug-doc-and-11g-improvements/
https://blog.tanelpoder.com/files/
http://db-oriented.com/en/
Oracle ACE Laurent Schneider
http://laurentschneider.com/
http://laurentschneider.blogspot.in/
Misc articles by Tim Hall:
https://oracle-base.com/articles/misc/articles-misc#development
Data Modeling:
http://databaseanswers.org/data_models/index.htm
Read new release features and implement them. It will make you a champion.
http://docs.oracle.com/database/121/NEWFT/chapter12101.htm#NEWFT167
http://www.oracle-developer.net/display.php?id=506
Read analytical functions and aggregate functions
http://psoug.org/reference/analytic_functions.html
Tips and tricks by Erik Wramner:
https://erikwramner.wordpress.com/category/database/page/3/
https://dioncho.wordpress.com/2010/12/10/interpreting-parallel-merge-statement/
How to read bind variable values:
http://tech.e2sn.com/oracle/troubleshooting/how-to-read-errorstack-output (if you don't have the tuning pack)
http://tech.e2sn.com/oracle/troubleshooting/oracle-s-real-time-sql-monitoring-feature-v-sql_monitor
(if you have the tuning pack)
JDA related blog:
https://vjdba.wordpress.com
People with my name :)
https://aprakash.wordpress.com/
https://www.linkedin.com/in/anand-yadav-96a79b
Data mining:
http://www.deep-data-mining.com/2014/03/flush-shared-pool-and-buffer-cache.html
Oracle interview questions:
http://www.orafaq.com/wiki/Interview_Questions
Simple oracle learning:
http://www.studytonight.com/dbms/database-normalization.php
http://its-all-about-oracle.blogspot.in/search/label/Partitioning
Removing black magic in optimizer:
https://blogs.oracle.com/optimizer/
http://optimizermagic.blogspot.in/
Bug fixes in oracle :
http://www.eygle.com/Notes/11203_buglist.htm
Basic interview questions:
https://jagguonline.wordpress.com/sql-queries/
Tracing in oracle
Enable session tracing:
EXEC DBMS_MONITOR.session_trace_enable;
Check the default trace file name (v$diag_info exposes various diagnostic information):
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
Execute the query that you want to trace.
Ex: WITH query1 AS (
SELECT /*+ MATERIALIZE */ * FROM t1
)
SELECT * FROM query1;
--To check the materialize hint in the trace file.
Disable tracing:
EXEC DBMS_MONITOR.session_trace_disable;
If you don't have permissions to execute this procedure, you may connect as sys and grant the required permission:
D:\Oracle\12.1.0.2>sqlplus / as sysdba
SQL> grant execute on DBMS_MONITOR to SCOTT;
Grant succeeded.
Other related tracing levels etc.:
http://www.orafaq.com/node/2891
Tuesday, March 15, 2016
Import job is slow
Sometimes an import becomes very slow; to improve performance it can help to purge the AWR content of the SYSAUX tablespace.
Below are simple SQL scripts that can be executed to achieve this.
connect / as sysdba
SQL> @?/rdbms/admin/catnoawr.sql
SQL> @?/rdbms/admin/catawrtb.sql
Also check the max size and autoextend settings of the SYSAUX datafiles - they might be full and need extending. The default datafile settings need to change if the method above does not work.
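To check the SYSAUX datafiles (a sketch):
select file_name, bytes/1024/1024 mb,
autoextensible, maxbytes/1024/1024 max_mb
from dba_data_files
where tablespace_name = 'SYSAUX';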
This also helps when AWR performance is slow: the AWR metadata tables can safely be removed and recreated this way.
There is a good link that will give more information for some of the basic impdp operations.
http://blog.oracle48.nl/killing-and-resuming-datapump-expdp-and-impdp-jobs/comment-page-1/
alternatively we can run impdp -help to find out all the options.
If export is slow, follow the link below:
http://dbaharrison.blogspot.in/2014/07/datapump-export-is-slow-because-i-did.html
Wednesday, March 9, 2016
ORA-08102: index key not found, obj# 255898, file 15, block 758743 (2)
We encountered this issue a couple of times during database migration and in client environments. It can be due to block corruption of the index (or possibly the table as well).
I noticed that rebuilding the index online, or dropping and re-creating it, may not help in some cases.
We performed the following operations and they did not work:
1. Drop FCST_SELECT and HIST_SELECT indexes
2. Gather stats on HIST and FCST tables
3. Re-create FCST_SELECT and HIST_SELECT indexes
Below is the underlying query that was producing the error:
DELETE /* HistPurgeDAODeleteHint */
/* HistPurgeDAODeleteHint */
FROM TAB1 h
WHERE (h.dmdunit, h.dmdgroup, h.loc, h.histstream) IN
(SELECT /* HistPurgeDAOSelectHint */
/* HistPurgeDAOSelectHint */
dm.todmdunit,
dm.todmdgroup,
dm.todfuloc,
m.histstream
FROM XXXX dm, YYYYY pr, ZZZZ m
WHERE pr.map = :1
AND pr.action_number = :2
AND pr.action_group = :3
AND pr.action_group_set_id = :4
AND pr.map = dm.map
AND pr.dmdunit = dm.todmdunit
AND pr.dmdgroup = dm.todmdgroup
AND pr.loc = dm.todfuloc
AND pr.model = dm.tomodel
AND dm.tomodel = m.model
AND dm.histtype = 1)
AND h.TYPE <> 3;
Note that FCST_SELECT is a function-based index, but the function always yields a deterministic value:
CREATE INDEX HIST_SELECT ON TAB1
(DMDGROUP, DMDUNIT, LOC, HISTSTREAM, STARTDATE,
"STARTDATE"+"DUR"/1440)
To resolve this issue we tried the approach below and it worked (a SQL sketch follows the steps):
1. Create a backup table with the same structure and data.
2. Drop the existing table.
3. Rename the backup table to the existing table name.
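A sketch of that workaround (table name illustrative; indexes, constraints, grants and triggers must be recreated separately):
create table tab1_bkp as select * from tab1;
drop table tab1;
rename tab1_bkp to tab1;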
As it was a dev environment the above approach was fine, but I'm still looking for the actual root cause of this issue.
Note: if we removed the function from the index, everything worked fine.
We also noticed that in the dev environment the index was enabled for parallel, and that another PK index overlaps most of this index's columns.
Oracle Advanced Compression Option for LOB
The good news is that from Oracle 12c, SECUREFILE is enabled by default for LOB objects. It is not enabled by default for Oracle internal storage such as the system tablespaces.
Here is a link that will give more information.
http://blog.dbi-services.com/oracle-12c-securefiles-enhancements/
Sample code to convert a column from a BASICFILE LOB to a SECUREFILE LOB, or to change any other storage attribute:
DECLARE
v_needschange NUMBER := 0;
BEGIN
SELECT COUNT (1)
INTO v_needschange
FROM user_lobs
WHERE table_name = 'TAB1'
AND column_name = 'COL1'
AND NOT ( cache = 'YES'
AND securefile = 'YES'
AND deduplication = 'NO'
AND compression = 'NO');
IF v_needschange > 0
THEN
DBMS_OUTPUT.put_line (
'Column COL1 of table TAB1 is not defined as either secure file or keep duplicates, or no compress or cache as yes');
EXECUTE IMMEDIATE
'ALTER TABLE TAB1 ADD (COL11 BLOB) LOB(COL11) STORE AS SECUREFILE (KEEP_DUPLICATES CACHE)';
EXECUTE IMMEDIATE
'UPDATE TAB1 SET COL11 = COL1';
EXECUTE IMMEDIATE 'ALTER TABLE TAB1 DROP (COL1)';
EXECUTE IMMEDIATE
'ALTER TABLE TAB1 RENAME COLUMN COL11 TO COL1';
END IF;
END;
/