Monday, May 30, 2016

ORA-08103: Object No Longer Exists

This error generally occurs when an object's segment no longer exists but it is still referenced by a variable, cursor, etc.

Ex: With ON COMMIT DELETE ROWS, GTT data is truncated at commit, and any subsequent reference to that data will fail.

-- Example with a GTT for ORA-08103: Object No Longer Exists

CREATE GLOBAL TEMPORARY TABLE gtt_tab (id NUMBER) ON COMMIT DELETE ROWS;

INSERT INTO gtt_tab
     VALUES (2);

SELECT * FROM gtt_tab;

DECLARE
    cur_var   SYS_REFCURSOR;
    id_var    gtt_tab.id%TYPE;
BEGIN
    OPEN cur_var FOR SELECT id FROM gtt_tab;

    COMMIT; -- The commit truncates the GTT before the fetch

    LOOP
        FETCH cur_var INTO id_var;

        EXIT WHEN cur_var%NOTFOUND;
        DBMS_OUTPUT.put_line ( id_var);
    END LOOP;
END;
/

Ex: If the table is truncated (here via TRUNCATE) before the fetch, any reference to that data will fail.

-- Example with a normal (heap) table for ORA-08103: Object No Longer Exists

CREATE TABLE heap_tab (id NUMBER);


INSERT INTO heap_tab
     VALUES (5);

SELECT * FROM heap_tab;

DECLARE
    cur_var   SYS_REFCURSOR;
    id_var    heap_tab.id%TYPE;
BEGIN
    OPEN cur_var FOR SELECT id FROM heap_tab;

    EXECUTE IMMEDIATE 'truncate table heap_tab'; -- Table truncated before the fetch

    LOOP
        FETCH cur_var INTO id_var;

        EXIT WHEN cur_var%NOTFOUND;
        DBMS_OUTPUT.put_line ( id_var);
    END LOOP;
END;
/

Friday, May 27, 2016

Direct path inserts and Parallel DML operations and exceptions


A direct path load writes data blocks directly to disk, bypassing the buffer cache. It generates redo for the dictionary changes, and it bypasses undo for the inserted table data. So for bulk load operations it is faster, but it may be slower for small inserts.

A direct path load always loads above the high water mark, since it is formatting and writing blocks directly to disk - it cannot reuse any existing space.

You cannot query a table or perform further DML on it after direct-pathing into it until you commit.


drop table t_append;

create table t_append as select * from user_objects;

create table t_append_test as select * from user_objects where 1=0;

insert /*+ APPEND */ into t_append_test
select * from t_append;

select * from t_append_test;

[Error] Execution (15: 1): ORA-12838: cannot read/modify an object after modifying it in parallel

As no UNDO is generated for the table data in a direct-path load, other sessions cannot read uncommitted data from UNDO; they read it from the data files below the old high water mark. Oracle will not allow uncommitted changes above the old high water mark to be read by other users. To keep this consistent, Oracle does not allow the same session to query the table until the changes are committed.

Because the direct-path insert locks the table exclusively, the load is effectively a serial operation: no other session can modify the table until the transaction commits (other sessions can still query it, but they will not see the newly loaded data).

From oracle documentation:
  • During direct-path INSERT operations, Oracle appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache. Free space in the existing data is not reused, and referential integrity constraints are ignored. These procedures combined can enhance performance.
When you are inserting in parallel DML mode, direct-path INSERT is the default. In order to run in parallel DML mode, the following requirements must be met:
  • You must have Oracle Enterprise Edition installed.
  • You must enable parallel DML in your session. To do this, run the following statement:
    ALTER SESSION { ENABLE | FORCE } PARALLEL DML;
    
    
  • You must specify the parallel attribute for the target table, either at create time or subsequently, or you must specify the PARALLEL hint for each insert operation.
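Putting the requirements above together, a minimal session sketch might look like the following (the table names reuse the t_append example from this post; the degree of parallelism 4 is just an illustration):

```sql
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ PARALLEL(t_append_test 4) */ INTO t_append_test
SELECT * FROM t_append;

COMMIT;  -- required before the same session can query t_append_test again
```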

To disable direct-path INSERT, specify the NOAPPEND hint in each INSERT statement. Doing so overrides parallel DML mode.

As all parallel DML operations are by default direct-path inserts, any subsequent DML/SELECT statement on the table from the same session will fail with the error mentioned above.

Note: If you do not care about logging for recovery, you may disable it by setting the table to NOLOGGING along with the APPEND hint; redo generation for the insert is then skipped, which may further improve the performance of the load.
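A sketch of that combination, reusing the t_append tables from above (and assuming the database/tablespace actually permits nologging operations):

```sql
ALTER TABLE t_append_test NOLOGGING;

INSERT /*+ APPEND */ INTO t_append_test
SELECT * FROM t_append;

COMMIT;
-- Caveat: after a NOLOGGING direct-path load the new data cannot be recovered
-- from archived redo, so take a fresh backup of the affected datafiles.
```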

From Tom:
append does a direct path load (if it can, it is not a promise, you are requesting and we may or may not do it for you - silently)

if you direct path load, the transaction that did the direct path load CANNOT query that segment - but other transactions can, they just cannot see the newly loaded data.

if you direct path load, you never use any existing free space, it always writes above the high water mark.

if you direct path load, we bypass UNDO on the table - only the table - modifications

if you direct path load, you'll maintain indexes - we build mini indexes on the newly loaded data and merge them into the 'real' indexes in bulk. A direct path load of large amounts of data will maintain indexes very efficiently.

if you direct path load you can bypass redo on the TABLE in archivelog mode, if the database is set up to allow nologging and you have the segment set to nologging

direct path loading bypasses the buffer cache, you write directly to the datafiles.

direct path loading is only appropriate for the first load of a segment or an increment load of lots of data - or an increment load into a table that never has any deletes (so there is no free space to consider)

transactional systems - you probably won't use it.
warehouses - a tool you'll use a lot

Note: insert /*+ append */ falls back to conventional-path loading (the hint is silently ignored) when the table has referential integrity constraints or a trigger.


drop table t_append;

create table t_append as select * from user_objects;

create table t_append_test as select * from user_objects where 1=0;

insert /*+ APPEND */ into t_append_test
select * from t_append;

select * from t_append_test;

The last SELECT fails with the error mentioned above.

Now define an FK on the child table and then load data using the APPEND hint.

alter table t_append add constraint t_append_pk primary key(object_name,OBJECT_TYPE);

alter table t_append_test add constraint t_append_test_fk foreign key(object_name,OBJECT_TYPE) references t_append(object_name,OBJECT_TYPE);

insert /*+ APPEND */ into t_append_test
select * from t_append;

select * from t_append_test;

It succeeds. This means Oracle ignored the direct-path insert because the table has an FK defined.

The same goes for triggers: if the table has triggers defined, Oracle ignores the direct-path insert and uses the conventional path.

sqlldr can and does ignore triggers/referential integrity and even uniqueness during a direct path load, insert /*+ append */ does not - you are not using direct path when they (constraints) exist.

parallel is always a direct path, if you go parallel, you will be appending. 

Tuesday, May 17, 2016

Performance issue on transient tables

Gathering stats on transient (process) tables and maintaining them is a somewhat troublesome process.
The stats would have to be gathered when the table is populated with its highest volume, and that volume differs between jobs. The column statistics of key columns would then need to be deleted, since those vary with each job.
Instead, deleting the stats and locking them works consistently: Oracle then uses dynamic sampling on the fly and arrives at the right plan.
The steps below are a one-time activity; no further maintenance is needed.
TRUNCATE TABLE PROCESSTAB;
EXEC dbms_stats.delete_table_stats('SCPOAPPG','PROCESSTAB');

EXEC dbms_stats.lock_table_stats('SCPOAPPG','PROCESSTAB');

Sunday, May 15, 2016

All oracle bug fixes from DB itself.


The query below shows all bug fixes available in the database, ordered by the release that introduced them:

SELECT   optimizer_feature_enable, bugno, description
    FROM v$system_fix_control
ORDER BY REGEXP_REPLACE ( REGEXP_REPLACE ( optimizer_feature_enable || '.', '(\d+\.)', '0000000\1'), '0+(........)\.', '\1') desc, bugno
/

Friday, May 13, 2016

Estimate the remaining time of drop column operation

If the table is huge, it may take a long time to drop a column, and if multiple columns are dropped together the time may multiply by the number of columns in the drop operation.

There is an interesting article by Jonathan Lewis on this topic that is well worth reading.

http://allthingsoracle.com/dropping-columns/

The following are my personal observations; it is interesting to see the different behaviors.

1. Drop column directly.

ALTER TABLE t_big drop column SKIP_CORRUPT;

ALTER TABLE t_big drop (SKIP_CORRUPT,MAX_EXTENTS);

If we drop a column directly, the table is locked exclusively while the drop executes, and the entire drop runs as a single massive transaction. That may increase redo drastically, and if there is not enough space the operation may fail.


If we use this option, we can monitor the remaining time using v$session_longops, or alternatively via TOAD's session browser / long ops.

Sample long-ops entry (13-05-2016 01:04:38 AM): Rowid Range Scan: SCPOMGR.T_BIG: 34000 out of 123044 Blocks done (28%)

2. By using checkpoint

If we do not want redo to grow drastically and want to improve the performance of the drop operation, we can use a checkpoint, applied at the given interval of rows (similar to periodic commits).

alter table t_big drop column MIN_EXTENTS checkpoint 5000;

But the drawback with checkpoint is that if the operation fails for some reason or is terminated by the user (I did that simply with CTRL+C while the drop column operation was running in SQL*Plus), it leaves the table in an unusable state where we cannot perform any operation on it.

If we try to perform any operation on the table, it will throw the below error.
[Error] Execution (1: 15): ORA-12986: columns in partially dropped state. Submit ALTER TABLE DROP COLUMNS CONTINUE

To overcome this, we need to execute the command suggested in the error.

But if we use this option, I observed that we cannot monitor the remaining time using v$session_longops or TOAD's session browser / long ops.

If a checkpoint is used, long-operations information is not available in v$session_longops. The same applies to the statement below.

ALTER TABLE t_big DROP COLUMNS CONTINUE;

I assume that Oracle uses a checkpoint for this operation as well.

3. Set the column unused and drop it later. (Alternatively, in 12c we can mark a column INVISIBLE if the purpose is only to hide it from the application.)

ALTER TABLE t_big SET UNUSED COLUMN NEXT_EXTENT ;

alter table T_BIG drop unused columns checkpoint 5000;

Interestingly, even when a column is marked as unused, the column data still exists, and actually dropping it locks the table just like a normal drop operation.

In this case too, whether we can monitor the time depends on checkpoint usage. And again, if a checkpoint is used and the operation fails for any reason, it leaves the table in an unusable state.


Footnote: Be careful when using checkpoint, or be ready for the consequences.







Wednesday, May 11, 2016

Features that are interesting and that I was unaware of till now.

1. The maximum size of the VARCHAR2, NVARCHAR2, and RAW data types has been increased from 4,000 to 32,767 bytes.

Increasing the allotted size for these data types lets users store more information in character columns before switching to large objects (LOBs). This is especially useful for short textual data and for the ability to build indexes on such columns.
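Note that this feature is not on by default; as I understand it, the MAX_STRING_SIZE initialization parameter must be set to EXTENDED (a one-way change, done with the database in UPGRADE mode). A quick sketch (the table name is illustrative):

```sql
-- Requires MAX_STRING_SIZE = EXTENDED; with the default (STANDARD),
-- the CREATE below fails with ORA-00910: specified length too long for its datatype.
CREATE TABLE big_text (notes VARCHAR2(32767));

INSERT INTO big_text VALUES (RPAD('x', 10000, 'x'));

SELECT LENGTH(notes) FROM big_text;
```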

2. DEFAULT Values for Columns on Explicit NULL Insertion
The DEFAULT definition of a column can be extended to have the DEFAULT being applied for explicit NULL insertion.
The DEFAULT clause has a new ON NULL clause, which instructs the database to assign a specified default column value when an INSERT statement attempts to assign a value that evaluates to NULL.
Ex: 
create table default_on_null (id number, name varchar2(4000) default on null 'Anand' ,name2 varchar2(4000) default 'Yadav');
insert into default_on_null values(1,null,null);
select * from default_on_null;
1  Anand  (null)
The DEFAULT ON NULL column name gets 'Anand' in place of the explicit NULL, so that column can never be NULL. Note that name2, with a plain DEFAULT, stays NULL: an ordinary default applies only when the column is omitted from the insert, not when NULL is inserted explicitly.


3. Identity Columns
Oracle Database 12c Release 1 implements ANSI-compliant IDENTITY columns. Migration from database systems that use identity columns is simplified and can take advantage of this new functionality.
This feature implements auto increment by enhancing DEFAULT or DEFAULT ON NULL semantics for use by SEQUENCE.NEXTVAL and SYS_GUID, supports built-in functions and implicit return of default values.

Ex: No more sequences need to be defined; use the identity clause, which is almost identical to a sequence.

CREATE TABLE t7 (c1 NUMBER GENERATED  ALWAYS AS IDENTITY (start with 1 increment by 1 cache 50 nocycle) ,
                   c2 VARCHAR2(10));
                 
INSERT INTO t7(c2) VALUES ('abc');

INSERT INTO t7 (c1, c2) VALUES (null, 'xyz'); -- It will fail: ALWAYS is specified, so you cannot insert or update this column yourself

If you want to insert user-defined values as well, define it with GENERATED BY DEFAULT instead of GENERATED ALWAYS (plain GENERATED AS IDENTITY behaves like ALWAYS):

CREATE TABLE t8 (c1 NUMBER GENERATED BY DEFAULT AS IDENTITY (start with 1 increment by 1 cache 50 nocycle),
                   c2 VARCHAR2(10));



SELECT c1, c2 FROM t8;

UPDATE t8 SET c1 = 3 WHERE c1 = 1; -- Allowed, since the identity is GENERATED BY DEFAULT

Ex: Use of ALWAYS so that the user never has to insert/update a value
CREATE TABLE t6 (c1 NUMBER GENERATED  ALWAYS AS IDENTITY ,
                   c2 VARCHAR2(10));
                 
INSERT INTO t6(c2) VALUES ('abc');

INSERT INTO t6 (c1, c2) VALUES (null, 'xyz'); -- It will fail

SELECT c1, c2 FROM t6;

UPDATE t6 SET c1 = 3 WHERE c1 = 1; -- It will fail

Ex: Default sequence will be used but user can insert/update values

CREATE TABLE t5 (c1 NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
                   c2 VARCHAR2(10));
                 
INSERT INTO t5(c2) VALUES ('abc');

INSERT INTO t5 (c1, c2) VALUES (null, 'xyz');

SELECT c1, c2 FROM t5;


4. Top N queries simplified in 12C

Ex: First 10 rows or first 10 percent rows
SELECT t.*
    FROM t
ORDER BY owner, object_name
FETCH FIRST 10 ROWS ONLY;   -- or: FETCH FIRST 10 PERCENT ROWS ONLY

Ex: Rows 6 through 15
SELECT T.*
       FROM T
   ORDER BY OWNER,OBJECT_NAME
     OFFSET 5 ROWS
FETCH NEXT 10 ROWS ONLY;

Internally Oracle uses analytic functions to fetch these records, but the user doesn't need to know that. You may check the predicates in the explain plan to verify how the query actually executed.
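For comparison, the pre-12c equivalent of the OFFSET/FETCH query above would spell out the analytic function explicitly (a sketch, using the same table and columns as the example):

```sql
-- Rows 6 through 15, pre-12c style with ROW_NUMBER()
SELECT *
  FROM (SELECT t.*,
               ROW_NUMBER() OVER (ORDER BY owner, object_name) AS rn
          FROM t)
 WHERE rn BETWEEN 6 AND 15
 ORDER BY rn;
```

This is essentially the rewrite the optimizer performs for OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY, as the predicate section of the explain plan shows.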

Sunday, May 8, 2016

Best way to learn oracle

Real performance issues:

http://www.runningoracle.com/index.php?cPath=1&osCsid=t8gc7eoa5igsgmb93goik617g2


12 Things You Will Love about Oracle Database 12c R2
https://go.oracle.com/LP=61561?src1=:em:ne:ie:pt:SEV100664921STG3&elq_mid=93659&sh=26132613230226232605293436080903&cmid=OUNV170405P00034C0004

http://blog.enmotech.com/2018/01/11/12-things-you-need-to-know-about-oracle-database-12c-r2/

Video tutorials:

https://www.youtube.com/watch?v=2QggbUdNsfI&feature=youtu.be&list=PLKCk3OyNwIzvwEXdaubc6PQXwnQOAE9h2


Follow biggies:
Jonathan:
https://community.oracle.com/reputation.jspa?username=Jonathan+Lewis&viewID=history
https://jonathanlewis.wordpress.com

https://community.oracle.com/reputation.jspa?username=rp0428&viewID=history

https://community.oracle.com/reputation.jspa?username=Frank+Kulash&viewID=history

https://community.oracle.com/reputation.jspa?username=BluShadow&viewID=history

https://community.oracle.com/reputation.jspa?username=sb92075&viewID=history

https://community.oracle.com/reputation.jspa?username=Solomon+Yakobson&viewID=history

https://community.oracle.com/reputation.jspa?username=John+Stegeman&viewID=history

DBA Gods blog links can be found here;
http://dbakevlar.com/links/
http://allthingsoracle.com/topics/performance-tuning/

Blogs:
http://arup.blogspot.in/2010/12/100-things-you-probably-didnt-know.html
http://www.proligence.com/downloads.html
Arup Nanda:database administration, from performance tuning to security and disaster recovery
http://www.oracle.com/technetwork/issue-archive/nanda-column-index-2098438.html

Steven Feuerstein: For pl-sql
http://www.oracle.com/technetwork/issue-archive/index-087690.html

Melanie Caffrey: SQL/PL_SQL
http://www.oracle.com/technetwork/issue-archive/index-1429427.html

Tom Kyte: Anything
http://www.oracle.com/technetwork/issue-archive/index-093676.html

Must read articles from Tom:
http://www.oracle.com/technetwork/issue-archive/2013/13-sep/o53asktom-1999186.html
http://www.oracle.com/technetwork/issue-archive/2013/13-nov/o63asktom-2034271.html
http://www.oracle.com/technetwork/issue-archive/2014/14-jan/o14asktom-2079690.html
http://www.oracle.com/technetwork/issue-archive/2014/14-mar/o24asktom-2147206.html
http://www.oracle.com/technetwork/issue-archive/2014/14-may/o34asktom-2162246.html


Oracle performance tuning:
https://antognini.ch/publications/
https://people.sap.com/stefan.koehler

Adaptive query optimization related blogs:
https://blogs.sap.com/2013/09/24/oracle-db-optimizer-part-vii-looking-under-the-hood-of-adaptive-query-optimization-adaptive-plans-oracle-12c/
http://www.ludovicocaldara.net/dba/cpu-usage-12c-dbms_feature_awr/
http://www.ludovicocaldara.net/dba/sql-plan-directives-problem/
https://blog.dbi-services.com/nothing-in-feature-usage-statistics-since-12c/

Other articles written by prominent writers
http://www.oracle.com/technetwork/oramag/magazine/tech-articles/index.html

https://richardfoote.wordpress.com/
http://newbiedba.blogspot.in/

http://gerardnico.com/wiki/database/oracle/dbwn

http://oracle-randolf.blogspot.in

https://jonathanlewis.wordpress.com/ or http://allthingsoracle.com

http://blog.tanelpoder.com/2007/

https://blogs.oracle.com/sql/12-things-developers-will-love-about-oracle-database-12c-release-2#approx-queries


https://coskan.wordpress.com/blogroll/

https://oracleriddleblog.wordpress.com/

https://carlos-sierra.net/

https://blogs.oracle.com/optimizer/

http://blog.tanelpoder.com/2010/06/23/the-full-power-of-oracles-diagnostic-events-part-2-oradebug-doc-and-11g-improvements/
https://blog.tanelpoder.com/files/

http://db-oriented.com/en/

Oracle ACE laurent schneider
http://laurentschneider.com/
http://laurentschneider.blogspot.in/

Misc articles by tim hall:
https://oracle-base.com/articles/misc/articles-misc#development

Data Modeling:

http://databaseanswers.org/data_models/index.htm

Read new release features and implement them. It will make you a champion.
http://docs.oracle.com/database/121/NEWFT/chapter12101.htm#NEWFT167
http://www.oracle-developer.net/display.php?id=506
Read analytical functions and aggregate functions

http://psoug.org/reference/analytic_functions.html

Tips and tricks by Erik Wramner:
https://erikwramner.wordpress.com/category/database/page/3/

https://dioncho.wordpress.com/2010/12/10/interpreting-parallel-merge-statement/

How to read bind variable values:
http://tech.e2sn.com/oracle/troubleshooting/how-to-read-errorstack-output (if you don't have the tuning pack)
http://tech.e2sn.com/oracle/troubleshooting/oracle-s-real-time-sql-monitoring-feature-v-sql_monitor (if you have the tuning pack)
JDA related blog:
https://vjdba.wordpress.com

People with my name :)

https://aprakash.wordpress.com/

https://www.linkedin.com/in/anand-yadav-96a79b

Data mining:
http://www.deep-data-mining.com/2014/03/flush-shared-pool-and-buffer-cache.html

Oracle interview questions:

http://www.orafaq.com/wiki/Interview_Questions

Simple oracle learning:
http://www.studytonight.com/dbms/database-normalization.php
http://its-all-about-oracle.blogspot.in/search/label/Partitioning

Removing black magic in optimizer:
https://blogs.oracle.com/optimizer/
http://optimizermagic.blogspot.in/

Bug fixes in oracle :
http://www.eygle.com/Notes/11203_buglist.htm

Basic interview questions:
https://jagguonline.wordpress.com/sql-queries/

Tracing in oracle

Enable session tracing:

EXEC DBMS_MONITOR.session_trace_enable;


Check default trace file name:

SELECT value
FROM   v$diag_info
WHERE  name = 'Default Trace File';


In v$diag_info we can check various pieces of diagnostic information.

Execute required query that you want to trace.

Ex: WITH query1 AS (
  SELECT /*+ MATERIALIZE */ * FROM t1
)
SELECT * FROM query1;
-- To check the MATERIALIZE hint in the trace file.


Disable tracing:

EXEC DBMS_MONITOR.session_trace_disable;

If you don't have permissions to execute this procedure, you may connect as SYS and grant the required permissions.

D:\Oracle\12.1.0.2>sqlplus / as sysdba

SQL> grant execute on DBMS_MONITOR to SCOTT;

Grant succeeded.

SQL>


Other related tracing levels etc.

http://www.orafaq.com/node/2891