Tuesday, July 26, 2022

Non-unique index used by primary key

You might be wondering why Oracle even allows a primary key to be enforced by a non-unique index, and whether that is ever beneficial.


First of all, if we create the primary key and let Oracle create the supporting index, Oracle always creates a unique index on the PK columns. But if we create the index first and then reference it from the primary key constraint (all primary key columns must be part of the index, though the reverse is not required), the index does not have to be unique.


In fact, for a deferred primary key constraint the columns can hold duplicate values until we commit, and that is only possible when the constraint is backed by a non-unique index.
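For illustration, here is a minimal sketch (table and index names are made up) of a non-unique index backing a deferrable primary key:

create table t (a number, b varchar2(10));

-- non-unique index created first; it covers the PK column (and one more)
create index t_ix on t (a, b);

-- a deferrable PK must be enforced by a non-unique index,
-- so Oracle reuses T_IX instead of creating a unique one
alter table t add constraint t_pk primary key (a)
  deferrable initially deferred
  using index t_ix;

insert into t values (1, 'x');
insert into t values (1, 'y');   -- allowed for now; the check is deferred
commit;                          -- fails here: the deferred check finds the duplicate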


One common performance technique is to define a non-unique index on columns (A, B) and a primary key constraint on column A alone.

Now a select statement of the form

select B from table where A = ?

can be satisfied from the index alone, without touching the table.

This is useful, for example, when B is a status column; a sketch is shown below.
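A quick sketch of that covering-index variant (table and column names are hypothetical):

create table orders (a number, b varchar2(1), c varchar2(200));
create index orders_ix on orders (a, b);
alter table orders add constraint orders_pk primary key (a) using index orders_ix;

-- this query can be answered from ORDERS_IX alone, with no table access:
select b from orders where a = 42;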

Also, if you use a direct path insert on a table where a non-unique index enforces uniqueness, direct path loading will not be allowed.

The execution plan will contain a note:


- Direct Load disabled because unique constraints are enforced by non-unique index   

So there are legitimate uses for a non-unique index backing a PK.


Note: If both a PK and a non-unique index are defined on the same set of columns in the same order, it is usually a design mistake rather than intentional.

Monday, May 16, 2022

Oracle Database upgrade notes from 18c to 19c

Differences between 18c and 19c

1. New database initialization parameters, MIN_AUTH_SERVERS and MAX_AUTH_SERVERS, have been added to configure Database Resident Connection Pooling (DRCP).

MIN_AUTH_SERVERS and MAX_AUTH_SERVERS allow the number of processes used to handle session authentication for DRCP to be configured for optimal usage.
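For example (the values are purely illustrative, not recommendations), the authentication server pool could be sized like this:

alter system set min_auth_servers = 2  scope=both;
alter system set max_auth_servers = 25 scope=both;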

2. The LISTAGG aggregate function now supports duplicate elimination by using the new DISTINCT keyword.

The LISTAGG aggregate function orders the rows for each group in a query according to the ORDER BY expression and then concatenates the values into a single string. The new DISTINCT keyword removes duplicate values from the specified expression before the concatenation, so there is no longer any need for complex query processing to find the distinct values before applying LISTAGG.
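A quick example against the classic SCOTT.EMP table (assuming it exists):

-- without DISTINCT, duplicates are kept: ANALYST, ANALYST, CLERK, CLERK, ...
select listagg(job, ', ') within group (order by job) as jobs from emp;

-- 19c: DISTINCT removes duplicates before concatenation
select listagg(distinct job, ', ') within group (order by job) as jobs from emp;
-- ANALYST, CLERK, MANAGER, PRESIDENT, SALESMAN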


3. Immutable Tables

Immutable tables are insert-only tables in which existing data cannot be modified. Deleting rows is either prohibited or restricted based on the insertion time of the rows.

Immutable tables protect data against unauthorized modification by insiders. This includes database administrators or compromised users who have access to insider credentials. Immutable tables also prevent accidental data modification that may be caused by human error.


The DBMS_IMMUTABLE_TABLE package is used for maintenance of immutable tables.

The DELETE_EXPIRED_ROWS procedure removes any rows that are beyond the retention period. They can't be removed using a normal DELETE statement.
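A rough sketch of the syntax (table name and retention values are arbitrary examples, and the exact DELETE_EXPIRED_ROWS signature may differ by release, so check the documentation):

create immutable table audit_trail (
  id       number,
  action   varchar2(100),
  created  date
)
no drop until 90 days idle
no delete until 365 days after insert;

-- rows past the retention period can only be removed via the package
declare
  l_rows number;
begin
  -- assumed parameters: schema, table, before_timestamp, rows_deleted (OUT)
  dbms_immutable_table.delete_expired_rows(user, 'AUDIT_TRAIL', null, l_rows);
  dbms_output.put_line('Rows deleted: ' || l_rows);
end;
/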


4. Performance - SQL Quarantine: SQL statements that are terminated by Oracle Database Resource Manager due to their excessive consumption of CPU and I/O resources can be automatically quarantined. The execution plans associated with the terminated SQL statements are quarantined to prevent them from being executed again. This feature protects an Oracle database from performance degradation by preventing execution of SQL statements that excessively consume CPU and I/O resources.
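Quarantine configurations can also be created manually with DBMS_SQLQ; a hedged sketch (the SQL_ID and the CPU threshold are placeholders):

declare
  l_quarantine varchar2(128);
begin
  -- create a quarantine configuration for the plans of the given SQL_ID
  l_quarantine := dbms_sqlq.create_quarantine_by_sql_id(sql_id => '&bad_sql_id');

  -- quarantine the statement once it burns more than 20 seconds of CPU
  dbms_sqlq.alter_quarantine(
    quarantine_name => l_quarantine,
    parameter_name  => 'CPU_TIME',
    parameter_value => '20');
end;
/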

5. Hybrid Partitioned Tables

Partitioned external tables were introduced in 12.2. Such tables use an external table driver (ORACLE_LOADER or ORACLE_DATAPUMP) to get the data from a file or even a cloud source (see also DBMS_CLOUD), and each partition can have a different file or even a different Oracle directory as its source.

Now we can have tables that have external partitions and normal table partitions. They are called hybrid partitioned tables and are a special case of the external partitioned tables.
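A hedged sketch (directory, file, and column names are invented) of a hybrid partitioned table with one external and one internal partition:

create table sales_hybrid (
  sale_date date,
  amount    number
)
external partition attributes (
  type oracle_loader
  default directory sales_dir
  access parameters (fields terminated by ',')
)
partition by range (sale_date) (
  partition p_2018 values less than (date '2019-01-01')
    external location ('sales_2018.csv'),         -- external partition
  partition p_2019 values less than (date '2020-01-01')   -- normal heap partition
);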

6. MAX_IDLE_BLOCKER_TIME

In addition to MAX_IDLE_TIME there is a new parameter that can restrict the duration of a database session: MAX_IDLE_BLOCKER_TIME. Both exist as initialization parameters and as resource plan directives.

Setting either parameter to 0 leaves sessions unrestricted; other values (for the initialization parameters) are in minutes.

MAX_IDLE_BLOCKER_TIME only limits idle sessions that are holding resources others are waiting for. This is usually the better option, because connection-pool sessions from application servers are idle whenever they are not in use, and those sessions should not be killed. Setting MAX_IDLE_TIME to 0 and MAX_IDLE_BLOCKER_TIME to 30, for example, achieves exactly that.
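That combination would look like this (values in minutes, purely as an example):

alter system set max_idle_time         = 0  scope=both;  -- never kill purely idle sessions
alter system set max_idle_blocker_time = 30 scope=both;  -- kill idle sessions that block others after 30 minutes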

7. DBMS_XPLAN: Compare Execution Plans in Oracle Database 19c and 21c (COMPARE_PLANS, COMPARE_CURSORS and COMPARE_EXPLAIN)
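For example, comparing two cursors from the shared pool (the SQL_IDs are placeholders; the object constructors below follow the 19c DBMS_XPLAN documentation as I understand it):

set long 1000000 pagesize 0
select dbms_xplan.compare_plans(
         reference_plan    => cursor_cache_object('9xyz1abc2def3', null),
         compare_plan_list => plan_object_list(cursor_cache_object('4ghi5jkl6mno7', null)),
         type              => 'TEXT') as report
from   dual;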



Script to test IO performance in oracle DB

 set timing on;

set serveroutput on size unlimited


set feedback on


column tm new_value file_time noprint


column tm2 new_value MARKET_ENV noprint


select to_char(sysdate, 'YYYYMMDD') tm from dual ;


select '&ENV' tm2 FROM DUAL;


spool C:\D\DatabaseUpgrade\&MARKET_ENV._IO_Testing_large_&file_time..log


PROMPT &MARKET_ENV


select sysdate AS RUN_DATE from dual;


create  type number_ntt as table of number;

/


create function row_generator ( rows_in in pls_integer ) return number_ntt pipelined is

begin

   for i in 1 .. rows_in loop

      pipe row (i);

   end loop;

   return;

end;

/


--drop table big_table;

create table big_table as select rownum id from dual where 1=0

/


alter table big_table nologging;


-- flush ALL

--alter system flush shared_pool;

--alter system flush buffer_cache;


--exec runstats_pkg.rs_start;

-- first execution

--insert /*+ append */ into big_table select * from table(row_generator(10000));

--commit;


--exec runstats_pkg.rs_middle;

-- second execution, to avoid the recursive calls and hard parse overhead

insert /*+ append */ into big_table select * from table(row_generator(100000000));

commit;

--exec runstats_pkg.rs_stop;


--SELECT * FROM big_table



CREATE INDEX BIG_TABLE_IDX ON BIG_TABLE

(id)

TABLESPACE SCPODATA;



SET TIMING ON ;


SELECT /*+ INDEX_FFS(BIG_TABLE BIG_TABLE_IDX)*/ COUNT(*) FROM BIG_TABLE;


SET TIMING OFF;


drop table big_table purge;


drop function row_generator;


drop type number_ntt;


spool off

Wednesday, June 17, 2020

Code to de-fragment Oracle tables and Indexes

--------------------------------------------------------------------------------
--
-- File name:   rebuild_table
-- Purpose:     Move/Shrink tables to reduce fragmentation. The procedure can be used with both ONLINE/OFFLINE mode.
--
-- Author:      Anand Yadav
--
-- Schema:      app schema
--
-- Usage:       @gen_rebuild_table.sql
--The procedure gen_rebuild_table takes two arguments and generates the output script accordingly. The first argument is the table name and the second argument is the type of operation.
--Permitted values for the operation type are "ONLINE" or "OFFLINE". If "OFFLINE", the table will be offline for users during the move operation.
--
-- Other:
--If the Oracle EE version is 12.2 or higher, the table can be moved online instead of being shrunk
--------------------------------------------------------------------------------

CREATE OR REPLACE PROCEDURE gen_rebuild_table (
   tab_name   IN VARCHAR2,
   ops_type   IN VARCHAR2 DEFAULT 'ONLINE')
AS
   OBJECT_SIZE      NUMBER := 0;
   DEFAULT_DEGREE   NUMBER := 16; --Considering 8 CPU's and enterprise edition
   timestart        NUMBER;
   version          VARCHAR2 (20) := NULL;
   releas           VARCHAR2 (1) := NULL;
   -- keyTable         user_tables%ROWTYPE;
   cnt              NUMBER (1) := 0;
   pct_val NUMBER := 0;
   TYPE keyTableType IS RECORD
   (
      iot_type       user_tables.iot_type%TYPE,
      row_movement   user_tables.row_movement%TYPE
   );

   keyTable         keyTableType;
BEGIN
   DBMS_OUTPUT.PUT_LINE ('SET TIMING ON;');
   SELECT CASE
             WHEN INSTR (banner, 'Enterprise') > 0 THEN 'Enterprise'
             ELSE 'Standard'
          END,
          CASE WHEN INSTR (banner, '12.2') > 0 THEN '1' ELSE '0' END
     INTO version, releas
     FROM v$version
    WHERE ROWNUM < 2;

   SELECT SUM (BYTES) / 1024 / 1024
     INTO OBJECT_SIZE
     FROM user_segments
    WHERE segment_name IN (SELECT TABLE_NAME
                             FROM USER_TABLES
                            WHERE TABLE_NAME = tab_name
                           UNION ALL
                           SELECT INDEX_NAME
                             FROM USER_INDEXES
                            WHERE TABLE_NAME = tab_name);

   SELECT pct_free INTO pct_val
     FROM user_tables where TABLE_NAME = tab_name;
   DBMS_OUTPUT.PUT_LINE (
         '--Total size of table '
      || tab_name
      || ' and associated indexes was: '
      || OBJECT_SIZE
      || ' MB');

   timestart := DBMS_UTILITY.get_time;

   SELECT iot_type,
          DECODE (row_movement,  'DISABLED', 'DISABLE',  'ENABLED', 'ENABLE')
             row_movement
     INTO keyTable
     FROM user_tables
    WHERE table_name = tab_name;
IF pct_val > 10 
THEN 
DBMS_OUTPUT.put_line (
            ' ALTER TABLE ' || tab_name || ' PCTFREE 10'||';');
END IF;

   IF UPPER (ops_type) = 'ONLINE'
   THEN
      IF version = 'Enterprise' AND releas = 1
      THEN
         DBMS_OUTPUT.put_line (
            ' ALTER TABLE ' || tab_name || ' MOVE ONLINE PARALLEL '||DEFAULT_DEGREE||';');
      ELSE
         BEGIN
            -- DBMS_OUTPUT.put_line ('New code to support online shrink');
            IF     keyTable.iot_type IS NULL
               AND keyTable.row_movement = 'DISABLE'
            THEN
               DBMS_OUTPUT.put_line (
                  'ALTER TABLE ' || tab_name || ' ENABLE ROW MOVEMENT;');
            END IF;

            DBMS_OUTPUT.put_line (
               'ALTER TABLE ' || tab_name || ' SHRINK SPACE COMPACT CASCADE;');

            DBMS_OUTPUT.put_line (
               'ALTER TABLE ' || tab_name || ' SHRINK SPACE CASCADE;');

            IF keyTable.iot_type IS NULL
            THEN
               DBMS_OUTPUT.put_line (
                     'ALTER TABLE '
                  || tab_name
                  || ' '
                  || keyTable.row_movement
                  || ' ROW MOVEMENT;');
            END IF;
         EXCEPTION
            WHEN OTHERS
            THEN
               DBMS_OUTPUT.PUT_LINE (
                  '--Failed during shrink: ' || USER || '.' || tab_name);

               --DBMS_OUTPUT.PUT_LINE (SQLERRM);

               BEGIN
                  IF keyTable.iot_type IS NULL
                  THEN
                     DBMS_OUTPUT.put_line (
                           'ALTER TABLE '
                        || tab_name
                        || ' '
                        || keyTable.row_movement
                        || ' ROW MOVEMENT;');
                  END IF;
               EXCEPTION
                  WHEN OTHERS
                  THEN
                     --DBMS_OUTPUT.PUT_LINE (SQLERRM);
                     RAISE;
               END;

               RAISE;
         END;
      END IF;
   ELSE
       IF version = 'Enterprise' THEN 
       DBMS_OUTPUT.put_line (
            ' ALTER TABLE ' || tab_name || ' MOVE PARALLEL '||DEFAULT_DEGREE||';');
       ELSE
        DBMS_OUTPUT.put_line (' ALTER TABLE ' || tab_name || ' MOVE;');
       END IF;
   END IF;
   
IF pct_val > 10 
THEN 
DBMS_OUTPUT.put_line (
            ' ALTER TABLE ' || tab_name || ' PCTFREE '||pct_val||';');
END IF;   
   
   IF UPPER (ops_type) = 'OFFLINE'
   THEN
      FOR ind_name
         IN (SELECT index_name,
                    DECODE (LOGGING, 'YES', 'LOGGING', 'NOLOGGING')
                       LOGGING_FLAG,
                    DECODE (DEGREE,
                            '1', 'NOPARALLEL',
                            'DEFAULT', 'PARALLEL',
                            'PARALLEL' || ' ' || DEGREE)
                       PARALLEL_FLAG
               FROM user_indexes
              WHERE table_name = tab_name)
      LOOP
         BEGIN
            timestart := DBMS_UTILITY.get_time;


            IF version = 'Enterprise'
            THEN
               DBMS_OUTPUT.put_line (
                     'ALTER INDEX '
                  || ind_name.index_name
                  || ' REBUILD PARALLEL '
                  || DEFAULT_DEGREE
                  || ' NOLOGGING;');

               DBMS_OUTPUT.put_line (
                     'ALTER INDEX '
                  || ind_name.index_name
                  || ' '
                  || ind_name.PARALLEL_FLAG
                  || ' '
                  || ind_name.LOGGING_FLAG
                  || ';');
            ELSE
               DBMS_OUTPUT.put_line (
                     'ALTER INDEX '
                  || ind_name.index_name
                  || ' REBUILD NOLOGGING;');

               DBMS_OUTPUT.put_line (
                     'ALTER INDEX '
                  || ind_name.index_name
                  || ' '
                  || ind_name.LOGGING_FLAG
                  || ';');
            END IF;
         EXCEPTION
            WHEN OTHERS
            THEN
               DBMS_OUTPUT.put_line (
                  '--Issue with Index : ' || ind_name.index_name);
               -- DBMS_OUTPUT.PUT_LINE (SQLERRM);
               CONTINUE;
         END;
      END LOOP;
   END IF;
--   DBMS_OUTPUT.PUT_LINE (
--         'begin dbms_stats.GATHER_TABLE_STATS('''
--      || USER
--      || ''','''
--      || tab_name
--      || ''','
--      || 'DEGREE=> '
--      || DEFAULT_DEGREE
--      || ',method_opt=> ''for all indexed columns size skewonly'',ESTIMATE_PERCENT=> DBMS_STATS.AUTO_SAMPLE_SIZE,  cascade=>TRUE, no_invalidate=>FALSE, force=>TRUE); end;');
--   DBMS_OUTPUT.PUT_LINE ('/');
   version := NULL;
   releas := NULL;
    DBMS_OUTPUT.PUT_LINE ('SET TIMING OFF;');
EXCEPTION
   WHEN OTHERS
   THEN
      version := NULL;
      releas := NULL;
      DBMS_OUTPUT.put_line ('--Issue with move/shrink table: ' || tab_name);
      DBMS_OUTPUT.PUT_LINE ('SET TIMING OFF;');
--DBMS_OUTPUT.PUT_LINE (SQLERRM);
END;
/

SHOW ERRORS;

SET SERVEROUTPUT ON SIZE UNLIMITED
SET FEEDBACK OFF;
SPOOL C:\D\Cloud\Utility\table_fragmentation\generated_rebuild_script.sql
EXEC gen_rebuild_table('&DUMMY_TABLE','OFFLINE');

--SET SERVEROUTPUT ON SIZE UNLIMITED
--SET FEEDBACK OFF;
--SPOOL C:\D\Cloud\Utility\table_fragmentation\generated_rebuild_script.sql
--begin
--for tab_name in (select table_name from user_tables where table_name in ('PROCESSSKU','DFU','SKU'))
--LOOP
--gen_rebuild_table(tab_name.table_name,'ONLINE');
--END LOOP;
--END;
--/
--
spool off;

DROP PROCEDURE gen_rebuild_table;

Saturday, May 25, 2019

How to check a previous execution plan based on old_hash_value from a Statspack report

You can check the SQL's execution plan for a given old hash value by running
@?/rdbms/admin/sprepsql

You may also want to compare the SQL Statistics section, such as buffer gets, disk reads and rows processed.

select * from stats$sql_summary where old_hash_value='4249085830';

You need to have gathered snapshots at level 6 or 7 to see all the plans for a statement; level 5 is not good enough.

http://www.dba-oracle.com/t_sprepsql_sql.htm

To fix the good plan hash value (PHV) or plan, you can use:


1) COE CUSTOM PROFILE 

Refer to Correcting Optimizer Cost Estimates to Encourage Good Execution Plans Using the COE XFR SQL Profile Script ( Doc ID 1955195.1 )

/*NO NEED OF ANY SQLT INSTALLATION. WE JUST NEED THAT ZIP FILE*/ 


OR 


2) How to Load SQL Plans into SQL Plan Management (SPM) from the Automatic Workload Repository (AWR) ( Doc ID 789888.1 ) 
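A hedged outline of Method 2 (per Doc ID 789888.1); the snapshot range, STS name and SQL_ID are placeholders:

-- 1. Create a SQL tuning set and load the statement's plan from AWR into it
exec dbms_sqltune.create_sqlset(sqlset_name => 'STS_GOOD_PLAN');

declare
  l_cur dbms_sqltune.sqlset_cursor;
begin
  open l_cur for
    select value(p)
    from   table(dbms_sqltune.select_workload_repository(
                   begin_snap   => 100,
                   end_snap     => 110,
                   basic_filter => 'sql_id = ''&sql_id''')) p;
  dbms_sqltune.load_sqlset(sqlset_name => 'STS_GOOD_PLAN', populate_cursor => l_cur);
end;
/

-- 2. Load the plan(s) from the STS into SQL Plan Management as baselines
declare
  l_plans pls_integer;
begin
  l_plans := dbms_spm.load_plans_from_sqlset(sqlset_name => 'STS_GOOD_PLAN');
  dbms_output.put_line('Plans loaded: ' || l_plans);
end;
/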


Adding hints to a SQL statement requires more than just a copy of the statement.

Please upload SQLTXPLAIN report for the SQL using Method 2 (XTRACT). 
SQLTXPLAIN reports provide a great deal of supporting information including schema definitions, statistics, parameters, an optimizer 10053 trace, and much more that are essential to the diagnostic process. 

Please do the following: 
(i) Please read Note SQLT (SQLTXPLAIN) - Tool that helps to diagnose a SQL statement performing poorly ( Doc ID 1614107.1 ) 
and Note FAQ: SQLT (SQLTXPLAIN) Frequently Asked Questions ( Doc ID 1454160.1 ) carefully. 
(ii) Download and unzip the sqlt.zip file provided in the Note.
Please download the latest version.
(iii) Install SQLTXPLAIN following the INSTRUCTIONS.TXT file.
(iv) Run SQLTXPLAIN (use the XTRACT method) for the SQL statements that are the focus, following INSTRUCTIONS.TXT.
(v) SQLTXPLAIN normally generates a *.zip file for upload. If it does not, please upload all files generated.

For SQL Hints, https://docs.oracle.com/database/121/SQLRF/sql_elements006.htm#SQLRF51101 

For Statistics, Support Note: How to Restore Oracle Optimizer Statistics ( Doc ID 452011.1 ) 
I could not find if this feature is available in SR, but you can try. 



Unfortunately SQL Plan Management (SPM) is not available in Standard Edition (SE) Oracle.
Additionally, there is no history of old execution plans available, because historic plans are stored in the Automatic Workload Repository (AWR), part of the Diagnostics Pack, which is also not available on SE Oracle.

Your options are:
1. Manual tuning of the SQL statement.
2. Using hints to force a specific execution plan.
3. Possibly restoring old statistics from before the plan changed (may not be available on SE Oracle; see the sketch below).
4. Changing the way statistics are gathered.
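For option 3, a minimal sketch based on Doc ID 452011.1 (owner, table name and the as-of timestamp are placeholders):

-- how far back statistics history is kept (default 31 days) and how far back it can be restored
select dbms_stats.get_stats_history_retention from dual;
select dbms_stats.get_stats_history_availability from dual;

-- restore the table statistics as they were 7 days ago
begin
  dbms_stats.restore_table_stats(
    ownname         => 'APP_OWNER',
    tabname         => 'ORDERS',
    as_of_timestamp => systimestamp - interval '7' day);
end;
/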

Thursday, May 16, 2019

Notes from Heroes of autonomous Oracle

Sandesh Rao
sql developer 18.3
Download Anaconda and check Jupyter notebooks

Find the next best window in which maintenance can be performed with minimal service impact, based on average active sessions: an AI application
Restart nodes one by one -- possibly by killing and restarting nodes


Can we create a data lake with all customers' data and generate useful reports from it that benefit the customers?

Why do we need divers to search below sea level instead of drones?

Write an algorithm to detect when a query changes its explain plan, and to assess the impact of that change?

Can we utilize Oracle 12.2 algorithms by feeding them metadata (such as AWR data) from 12.1 and earlier databases?

Remove clutter from database logs?


Can we enable all enterprise features at least on test machines, generate the best data from them, and use it for production? For example, the tuning advisor, segment advisor, index monitoring, etc.

Write SQL against the DBMS_DATA_MINING package to get ML data

Malay Kumar Khawas, Sr ETA
Autonomous transaction processing

https://myservices.us.oraclecloud.com/mycloud/signup?sourceType&sourceType=:eng:ip:ie::RC_PDMK190401P00011:IndiaRSBlloreATPHOLMeetupMay_04_19

Name:anandcloud1

LAB doc:
https://oracle.github.io/learning-library/workshops/autonomous-transaction-processing/?page=LabGuide200SecureConnectivityAndDataAccess.mds

https://oracle.github.io/learning-library/workshops/autonomous-transaction-processing/?page=README.md


https://console.eu-frankfurt-1.oraclecloud.com/a/db/adb/ocid1.autonomousdatabase.oc1.eu-frankfurt-1.abtheljtqhfph2igflv4paewxx6wsnwsrx5oy7ztmw7terl7mm66732ufb3a

anand.yadav@jda.com
password:Kuchbhi@12345

DBName:
DB201905041220
Kuchbhi@12345



https://console.eu-frankfurt-1.oraclecloud.com/object-storage/buckets/frcxommztxdd/ATPLab/customers.csv

https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/frcxommztxdd/ATPLab/customers.csv


#S0[hRpve6;QXRiBhQDc


set define off
begin
  DBMS_CLOUD.create_credential(
    credential_name => 'OBJ_STORE_CRED1',
    username => 'anand.yadav@jda.com',
    password => '#S0[hRpve6;QXRiBhQDc'
  );
end;
/
set define on


/* Specify the real base_URL for your OCI Object Storage in the define base_URL line below*/
/* change gse00014623 to your real tenancy name. The name is case-sensitive. */
/* change ATPDB to your real bucket name. The name is case-sensitive. */
/* change us-phoenix-1 to your real region name. The name is case-sensitive. */
/* you can find these values on the OCI Console .. Storage .. Object Storage screen */

define base_URL='https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/gse00014623/ATPDB'

define customer_dat_URL = '&base_URL/customers.csv';





begin
 dbms_cloud.copy_data(
    table_name =>'CUSTOMERS',
    credential_name =>'OBJ_STORE_CRED1',
    file_uri_list => 'https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/frcxommztxdd/ATPLab/customers.csv',
    format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true', 'dateformat' value 'YYYY-MM-DD HH24:MI:SS', 'blankasnull' value 'true')
 );
end;
/


DemoVCN
CIDR Block:

10.0.0.0/16
Compartment:

PD


ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQB7tMDR0CDLUdoe3xYgGNKMDHxmQJNYd/jeyPJTwcaQmEjQFULoSmgkVqlIbUwgz1DmDl1y1HslJ1sFxzXJRMdwsTBhhAriIHKjDB8vVF5QMXiR4qw14zHFwmRMp5WFK53xULjFGmZ2N41CMMh0GN27PMYdZsmH1jL2Eeq4Vl4IC4E1meOGOdxtEVtJJzcZdAsz1yuAyMnA54nC6Ygt2LtMvADId+hXWyqAv9iTCqgGC1bzZ75FbXIysAebdwfgjAl2wqZWmqlBqvCyUyYn4FDuHlTh5AsGMUjJXC7nUVjQ2jOYQ4WN8/FgQGu/yrM2ufGx9qN0D8EGFJNab43RpT43 rsa-key-20190504

130.61.72.31
user:opc

sudo curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -

sudo yum install nodejs
python --version
sudo yum install libaio
node --version
npm --version
sudo yum install git

scp -i C:\Tools\ATP/perkey.pbk C:\Tools\ATP\instantclient-basic-linux.x64-12.2.0.1.0 opc@130.61.72.31:/opt/oracle/instantclient_12_2


 git clone https://github.com/ravi041282/ATPnodeapp
 git clone https://github.com/ravi041282/ATPnodeapp


 [opc@atpmv wallet_RESTONHUBDB]$ export TNS_ADMIN=/home/opc/ATPnodeapp/wallet_RESTONHUBDB/


 Lab6:

 sudo yum install -y java-1.8.0-openjdk-devel
 cd ~

mkdir ATPJava

cd ATPJava

 sudo yum install git

 git clone https://github.com/ravi041282/ATPJava.git


 cd /home/opc/ATPJava/

mkdir lib

cd lib

wget https://github.com/sblack4/ojdbc8-full/raw/master/ojdbc8-full.tar.gz

tar xzfv ojdbc8-full.tar.gz


cd /home/opc/ATPJava/

mkdir wallet_DB

unzip /home/opc/ATPJava/Wallet_DB201905041220.zip -d /home/opc/ATPJava/wallet_DB/

cd /home/opc/ATPJava/ATPJava/src

Friday, March 8, 2019

Notes from Sangam18

What is new with Oracle 18C:

ATP:

Oracle Autonomous Transaction Processing database

A single database instance per server is the most efficient model; multiple instances on a single server is an inefficient way to build databases.

Row format storage is ideal for OLTP and column format storage is for analytics.

18c is essentially 12.2.0.2.

Polymorphic Table Functions in Oracle Database 18c:

The return type of a Polymorphic Table Function (PTF) can be determined by input parameters. This differs from conventional table functions, where the output table type is fixed at compile time.
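A minimal pass-through sketch (package and function names are invented; based on my understanding of the DBMS_TF interface), just to show the shape of a PTF:

create or replace package ptf_demo as
  function describe (tab in out dbms_tf.table_t) return dbms_tf.describe_t;
end;
/

create or replace package body ptf_demo as
  -- returning NULL keeps the input shape unchanged (a pass-through PTF)
  function describe (tab in out dbms_tf.table_t) return dbms_tf.describe_t is
  begin
    return null;
  end;
end;
/

create or replace function pass_through (tab table)
  return table pipelined row polymorphic using ptf_demo;
/

-- the output columns are whatever table you pass in (EMP assumed to exist):
select * from pass_through(emp);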

Qualified Expressions in PL/SQL in Oracle Database 18c

Qualified expressions provide an alternative way to define the value of complex objects, which in some cases can make the code look neater.
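For example, an associative array can now be initialised in its declaration with a qualified expression:

set serveroutput on
declare
  type t_tab is table of number index by pls_integer;
  -- 18c qualified expression: index => value pairs in a constructor-like syntax
  l_tab t_tab := t_tab(1 => 10, 2 => 20, 3 => 30);
begin
  dbms_output.put_line('Elements: ' || l_tab.count);
end;
/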

Scalable Sequences in Oracle Database 18c

Scalable sequences are designed to reduce problems with hot index blocks during large scale data loads into tables with sequence generated primary keys on single instance and RAC databases.
Scalable sequences have been available since the first release of Oracle 12c, but they were not documented and therefore not supported. They are included in the documentation for the first time in Oracle 18c, so they are now a supported feature.
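For example (the sequence name is arbitrary):

-- SCALE prefixes NEXTVAL with a value derived from the instance and session numbers,
-- so concurrent sessions insert into different parts of the primary key index
create sequence order_seq start with 1 increment by 1 scale;

select order_seq.nextval from dual;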

Introduction to In-Memory External Tables in 18c:


Approximate Top-N Query Processing (APPROX_RANK, APPROX_SUM, APPROX_COUNT) in Oracle Database 18c

The APPROX_RANK, APPROX_SUM and APPROX_COUNT functions were introduced in Oracle 18c to allow approximate top-N query processing. This extends the approximate query processing introduced in the previous two releases (12cR1 and 12cR2).

ALTER SYSTEM CANCEL SQL : Cancel a SQL Statement in a Session in Oracle Database 18c

The basic syntax of the ALTER SYSTEM CANCEL SQL statement is shown below.
ALTER SYSTEM CANCEL SQL 'SID, SERIAL[, @INST_ID][, SQL_ID]';
If the INST_ID is omitted, it is assumed you mean the current instance. If the SQL_ID is omitted, it is assumed you mean the SQL that is currently running in the specified session.
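A quick example (the session identifiers are placeholders):

-- find the session running the statement
select sid, serial#, sql_id from v$session where username = 'APP_USER';

-- cancel the SQL currently running in that session
alter system cancel sql '123, 45678';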

Multithreaded Model using THREADED_EXECUTION in Oracle Database 12c Release 1 (12.1)