"The art of performance tuning is there is no rules". You have to use hit and try.
For transactions that are still open during the AWR snapshot window, AWR takes help from the ASH samples to capture the details.
DB Time = CPU time + wait time
enq: TM – contention can also happen due to a table-level lock (possibly due to DDL).
Buffer busy waits: there can be a storage issue as well, where buffers are not getting flushed to disk.
Latch sleep breakdown:
To reduce rows per block: alter table t minimize records_per_block;
Load profile:
Instance efficiency percent:
If "Parse CPU to Parse Elapsd %" is low, then there can be a shared pool issue as well.
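The underlying counters can be read straight from V$SYSSTAT; a minimal sketch (values are cumulative since instance startup):
SELECT name, value
FROM v$sysstat
WHERE name IN ('parse time cpu', 'parse time elapsed');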
Why is my cursor not shared?
Check the details in V$SQL_SHARED_CURSOR.
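A minimal sketch (the SQL_ID comes in via a substitution variable; scan the Y/N columns of the returned rows for the mismatch reason):
SELECT *
FROM v$sql_shared_cursor
WHERE sql_id = '&sql_id';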
Gather system stats only when you change system components.
We can set optimizer_ignore_hints to TRUE to ignore any hints supplied as part of queries. This can be done as part of an upgrade activity to validate whether the hints are still required, especially hints that were added to work around missing functionality.
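A session-level sketch (note: optimizer_ignore_hints is a documented parameter only from 18c onward; earlier releases have only the hidden equivalent _optimizer_ignore_hints):
ALTER SESSION SET optimizer_ignore_hints = TRUE;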
Oracle disk and I/O tuning:
/var/log/messages: to check storage issues
Disk scrubbing is a background process that reads disks during idle periods to detect irremediable read errors in infrequently accessed sectors. Timely detection of such latent sector errors (LSEs) is important to reduce data loss.
In Exadata it is done automatically, and it keeps checking for disk corruption.
iSCSI storage: cheapest and slowest.
NFS storage:
We can even ask to store important data files on the outer sectors of a disk, which are faster.
Thursday, December 20, 2018
Parallel max server and SQL Developer
http://structureddata.org/2012/02/29/pitfalls-of-using-parallel-execution-with-sql-developer/
https://iiotzov.wordpress.com/tag/oracle-parallel-slaves-idle/

I was watching the WebEx recording; I think the inactive sessions had already been cleared. I think the root cause of the issue was indeed inactive sessions, but we did not segregate JDA application sessions from user-created sessions. The WW DBAs killed all the inactive sessions and conveyed that all of them were JDBC inactive sessions and that it was an application issue.
JDA inactive sessions must belong to node pool sessions, and those will remain the same over time. We can monitor now as well and confirm the same.
But there was one crucial piece of information in between that went unnoticed.


If you look at the screenshots above, you will see that a query initiated from SQL Developer created 256 parallel threads. You might also notice that the query does not have any parallel hint, so I'm assuming that the table UDT_SSTEMPLATEDATA must have default table-level parallelism enabled, and the user initiated the session without knowing that.
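This is easy to verify from the dictionary; a minimal sketch using the table from this case:
SELECT table_name, degree
FROM dba_tables
WHERE table_name = 'UDT_SSTEMPLATEDATA';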
I think this query was the root cause of the issue. "By default, SQL Developer has the array fetch size set to 50. This means that for any cursor SQL Developer opens for scrolling, it will fetch the first 50 rows, and when you scroll to the bottom of those rows in a grid, it will fetch the next 50 rows, and so on. This is good in the sense that the JDBC application can fetch an array of rows with a single JDBC database call; however, using this approach with Parallel Execution, the PX servers used for this cursor will not be released until the cursor is canceled or the last row is fetched. Currently the only way to force reading until the end of cursor in SQL Developer is to issue a Control+End in the data grid. As a result, any action that uses Parallel Execution and has not fetched all the rows or is not canceled/closed will squat on those Parallel Execution resources and prevent them from being used by other users. If enough users have open cursors backed by Parallel Execution, then it is possible that it could consume all of the Parallel Execution servers and will result in Parallel Execution requests being forced to Serial Execution because resources are not available, even if the system is completely idle."
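Whether PX servers are being squatted on can be checked from the PX views; a minimal sketch:
--PX servers currently allocated and their state
SELECT server_name, status FROM v$px_process;
--which sessions hold PX slaves, and at what degree
SELECT qcsid, sid, degree, req_degree FROM v$px_session;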
My gut feeling (as I don't have access to the environment and my views are completely based on the WebEx recording) is that killing those SQL Developer inactive sessions helped revive the situation.
What can be done to prevent this:
- Set the parallel_max_servers parameter high enough to avoid draining out parallel execution resources.
- Check for and disable any default table-level parallelism; it should be enabled very carefully (see the sketch after this list).
- Educate users not to use the /*+ PARALLEL */ hint frequently without knowing the impact, and to give a specific degree instead.
- For the problematic query, we validated the plan with a couple of different values of parallel degree, and the current plan looks good.
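A remediation sketch for the first two points (the table name is taken from this case; the parallel_max_servers value is only an example and must be sized per system):
ALTER TABLE udt_sstemplatedata NOPARALLEL;
ALTER SYSTEM SET parallel_max_servers = 128 SCOPE = BOTH;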
I hope this answers why the parallel hint was not respected before killing the inactive sessions and why it is working after that.
Monday, July 30, 2018
Oracle scheduler jobs and client timezone
Two weeks back I was working on an issue for the client "Chedraui". The implementation team reported that two of the IGP jobs had been running long for the last couple of weeks. Unfortunately these were weekly jobs, so if we missed them this time, we could debug only after another week.
It was also reported that the same job had run long earlier as well, and after disabling a trigger it went fine.
Recently they refreshed the environment from prod again.
When I started looking into the issue, I observed there were no active sessions in the database, and it was a custom IGP loading job, so there could not be anything on the app server.
I verified the job logs as well; every time the job took close to 2 hours, whereas earlier it used to take close to 10 minutes.
I also verified a Statspack report, but nothing in it was useful.
It was really difficult to tell what was going wrong. After going through the complete job code and all related packages and procedures, I started querying the job tables. As these were Oracle Scheduler jobs, I started querying the tables below.
select * from sys.scheduler$_job
The table above had an entry without the end date populated. That gave some hint that the job was in fact in place but waiting for something.
ex: 241680 BEGIN NWMGR.call_proc; END; 7/30/2018 6:29:29.000000 AM -07:00 7/30/2018 4:28:59.594874 AM -07:00 17372 3 1 0 9.0071992548803E15 Test job to check completion time NWMGR NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA' NLS_CURRENCY='$' NLS_ISO_CURRENCY='AMERICA' NLS_NUMERIC_CHARACTERS='.,' NLS_CALENDAR='GREGORIAN' NLS_DATE_FORMAT='DD-MON-RR' NLS_DATE_LANGUAGE='AMERICAN' NLS_SORT='BINARY' NLS_TIME_FORMAT='HH.MI.SSXFF AM' NLS_TIMESTAMP_FORMAT='DD-MON-RR HH.MI.SSXFF AM' NLS_TIME_TZ_FORMAT='HH.MI.SSXFF AM TZR' NLS_TIMESTAMP_TZ_FORMAT='DD-MON-RR HH.MI.SSXFF AM TZR' NLS_DUAL_CURRENCY='$' NLS_COMP='BINARY' NLS_LENGTH_SEMANTICS='CHAR' NLS_NCHAR_CONV_EXCP='FALSE' 0102000000000000 0 PRIMARY 0 1 7/30/2018 6:29:29.000000 AM -07:00 0 0 0 2147483648 0 0 0 0
Then I queried the view below.
SELECT * FROM USER_SCHEDULER_RUNNING_JOBS;
I think this view also had the running jobs' information, but I missed capturing the details that day, so I do not have the current output.
To check the historical details of these jobs, I started querying the job history views, based on the keywords provided in the code.
select * from USER_SCHEDULER_JOB_RUN_DETAILS where job_name LIKE '%LOAD_OO6%' ORDER BY ACTUAL_START_DATE DESC;
With the historical data in this view, things were clearer. My first observation was that each job was taking exactly 2 hours; this could not be just coincidence.
100900 7/5/2018 2:19:17.210435 PM -05:00 NWMGR LOAD_OO6 SUCCEEDED 0 7/5/2018 12:18:39.000000 PM -07:00 7/5/2018 12:18:39.408172 PM -07:00 +00 00:00:38.000000 1 533,36729 12685 +00 00:00:36.920000
99830 7/4/2018 11:26:46.054728 AM -05:00 NWMGR LOAD_OO6 SUCCEEDED 0 7/4/2018 9:26:25.000000 AM -07:00 7/4/2018 9:26:25.747494 AM -07:00 +00 00:00:20.000000 1 289,12857 19307 +00 00:00:20.090000
In the records above you can see that there is close to a 2-hour difference between the start time and the finish time, while the jobs themselves took 38 seconds and 20 seconds respectively.
The most important information you can collect from this is the time zone: the start date has -07:00 as its timezone and the log date has -05:00. So it was pretty clear by now that it was not the database taking time; the job was waiting before it started executing the code.
So my suspicion was that the client machine and the DB server were configured with different time zones.
But I did not have access to either server, and there was nobody to help find out the current time zone configuration on the client and DB servers.
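From inside the database, the Scheduler's own notion of time can still be checked without OS access; a minimal sketch:
SELECT DBMS_SCHEDULER.stime FROM dual;
SELECT value
FROM dba_scheduler_global_attribute
WHERE attribute_name = 'DEFAULT_TIMEZONE';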
So I started writing code to test the same.
Sample code:
CREATE TABLE job_test
(
   id      NUMBER,
   descr   VARCHAR2 (200)
);

CREATE OR REPLACE PROCEDURE test_proc
AS
BEGIN
   INSERT INTO job_test
        VALUES (1, 'Job completed at ' || SYSTIMESTAMP);
END;
/
show errors;

CREATE OR REPLACE PROCEDURE call_proc
AS
   v_stat   VARCHAR2 (200);
BEGIN
   v_stat := 'BEGIN TEST_PROC; COMMIT; END;';
   EXECUTE IMMEDIATE v_stat;
END;
/
show errors;

CREATE OR REPLACE PROCEDURE run_proc
AS
BEGIN
   DBMS_SCHEDULER.create_job (
      job_name          => 'test_job',
      job_type          => 'PLSQL_BLOCK',
      job_action        => 'BEGIN NWMGR.call_proc; END;',
      start_date        => SYSTIMESTAMP + (0.5 / 1440),
      repeat_interval   => NULL,
      end_date          => NULL,
      enabled           => TRUE,
      comments          => 'Test job to check completion time');
END;
/
show errors;

begin
   run_proc;
end;
/

alter session set time_zone = '-07:00';

begin
   run_proc;
end;
/
In the first test the job executed immediately and the record got inserted into the job_test table. But when I changed the time zone and re-submitted the job, I noticed that my job was waiting; after two hours the job completed.
Now I had proof to take it up with the implementation team. I advised them to change the timezone on the application server, and they realized it had been changed recently as part of a server maintenance activity.
After changing the timezone, the job started completing within a few seconds.
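To make job start times independent of the submitting session's time zone, the Scheduler can also be given an explicit default; a sketch (the region name here is only an example):
BEGIN
   DBMS_SCHEDULER.set_scheduler_attribute ('DEFAULT_TIMEZONE', 'America/Mexico_City');
END;
/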
Party time!!!!
resmgr:cpu quantum and oracle 12.1.0.2 standard edition
After upgrading the Oracle database to 12.1.*, we are facing a lot of issues, especially when the database edition is Standard Edition.
There are multiple reasons for that:
1. Oracle SE2 allows a maximum of 16 CPU threads at any time.
2. With earlier versions of Oracle, and with Enterprise Edition, we could disable the resource manager, but not with Oracle SE2. Our application sessions get stuck on resmgr:cpu quantum for long periods, and it creates a lot of performance issues.
To mitigate performance issues related to the resource manager, we keep the resource manager disabled. But with Oracle SE2, even if we set the resource manager plan to null and disable all default maintenance windows, we still see the same wait event.
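For reference, these are the usual steps to switch the plan off (shown for one maintenance window; repeat for each of SYS.MONDAY_WINDOW .. SYS.SUNDAY_WINDOW). On SE2 they did not make the wait event go away:
ALTER SYSTEM SET resource_manager_plan = '' SCOPE = BOTH;
BEGIN
   DBMS_SCHEDULER.set_attribute ('SYS.MONDAY_WINDOW', 'RESOURCE_PLAN', '');
END;
/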
This becomes even more critical when even a single application query gets stuck on the resource manager while, if you check in the background, CPU utilization on the DB server is very low.
Ex:
Cpu(s): 1.9%us, 0.6%sy, 0.0%ni, 97.0%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Above is the CPU utilization on the database server while the query was stuck for almost 50 minutes.
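Sessions caged this way can be spotted directly; a minimal sketch:
SELECT sid, event, seconds_in_wait
FROM v$session
WHERE event = 'resmgr:cpu quantum';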
On further analysis I found that if even a single core (a single thread) consumes ~100% CPU, that thread will get caged by the instance.
Ex:
top - 03:38:16 up 21 days, 2:36, 1 user, load average: 2.48, 4.06, 4.38
Tasks: 481 total, 2 running, 479 sleeping, 0 stopped, 0 zombie
Cpu(s): 9.6%us, 0.3%sy, 0.0%ni, 90.0%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 74252448k total, 52846708k used, 21405740k free, 1178888k buffers
Swap: 58720212k total, 24576k used, 58695636k free, 45195604k cached
The server is 90% idle, yet the query had been waiting on CPU for the last 285 seconds, and the query plan is perfect:
Plan hash value: 2943048428
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                     |       |       |   393 (100)|          |
|   1 |  SORT AGGREGATE                       |                     |     1 |   450 |            |          |
|   2 |   NESTED LOOPS                        |                     |       |       |            |          |
|   3 |    NESTED LOOPS                       |                     |     1 |   450 |   393   (1)| 00:00:01 |
|   4 |     VIEW                              |                     |     1 |   408 |   390   (1)| 00:00:01 |
|   5 |      HASH UNIQUE                      |                     |     1 |    75 |   390   (1)| 00:00:01 |
|*  6 |       FILTER                          |                     |       |       |            |          |
|   7 |        NESTED LOOPS                   |                     |       |       |            |          |
|   8 |         NESTED LOOPS                  |                     |     1 |    75 |   389   (0)| 00:00:01 |
|*  9 |          HASH JOIN                    |                     |     1 |    61 |   388   (0)| 00:00:01 |
|  10 |           TABLE ACCESS BY INDEX ROWID | LOC                 |   450 |  8550 |    16   (0)| 00:00:01 |
|* 11 |            INDEX RANGE SCAN           | LOC_STORESTATUS_IDX |   450 |       |     1   (0)| 00:00:01 |
|  12 |           TABLE ACCESS BY INDEX ROWID | SOURCING            |   374 | 15708 |   372   (0)| 00:00:01 |
|* 13 |            INDEX RANGE SCAN           | XIE3SOURCING        |   374 |       |     5   (0)| 00:00:01 |
|* 14 |          INDEX UNIQUE SCAN            | ITEM_PK             |     1 |       |     0   (0)|          |
(plan output truncated)
On further drill-down I identified that one of the cores was utilizing 100% CPU.
Tasks: 494 total, 3 running, 491 sleeping, 0 stopped, 0 zombie
Cpu0  :  2.3%us, 0.3%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1  : 99.7%us, 0.3%sy, 0.0%ni,  0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2  :  1.7%us, 0.0%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3  :  0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4  :  0.7%us, 0.0%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5  :  0.7%us, 0.3%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6  :  0.7%us, 0.3%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7  :  1.0%us, 0.3%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu8  :  0.7%us, 0.0%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu9  :  1.3%us, 0.0%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu10 :  0.3%us, 0.3%sy, 0.0%ni, 99.0%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu11 :100.0%us, 0.0%sy, 0.0%ni,  0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
So any thread that tries to consume 100% CPU will get caged by the resource manager.
So the only option to help overcome this issue is to run smaller queries that do not consume much CPU. Otherwise you will be doomed.
Note: In another environment, where we were getting the same issue and a process was taking 4-5 hours to complete, increasing the CPU cores from 12 to 16 and enabling huge pages helped the process complete in 40 minutes.
Useful links: https://blog.dbi-services.com/oracle-standard-edition-two/
Tuesday, July 17, 2018
Oracle Weblogic : Tuning JDBC Data Source parameters
In this post, we will cover some of the parameters and factors that affect JDBC performance in WebLogic Server. But first I would like to show how JDBC connections are made, and for that we need to understand two terms: data sources and connection pools.
Data sources:
– Are administered factory objects that provide JDBC connections.
– Are bound into the Java Naming and Directory Interface (JNDI) and configured using the Administration Console.
– Make the application code portable across databases.
– Creating a connection to the database is an expensive operation.
Connection pools:
– Remove the overhead of establishing connections.
– Improve server performance by sharing database connections among multiple users accessing the Web application.
A simple way to boost JDBC application performance and avoid wasting resources:
1. JNDI lookups are relatively expensive, so caching an object that requires a lookup in client code or application code avoids incurring the additional performance cost.
2. When client or application code has a connection, maximize the reuse of this connection rather than closing and reacquiring a new connection. Although acquiring and returning an existing connection is much less expensive than creating a new one, excessive acquisitions and returns to pools create contention in the connection pool and degrade application performance.
3. Do not hold connections any longer than is necessary to achieve the work needed. Getting a connection once, completing all necessary work, and returning it as soon as possible provides the best balance for overall performance.
Parameters that affect JDBC data source performance (these can be changed using the WebLogic console):
1. Connection Pool Capacity (most important):
o Connection creation is expensive.
o For applications that consistently involve heavy database traffic:
Determine the optimal Maximum Capacity of a data source experimentally.
Set the Initial Capacity and Maximum Capacity to the same value.
o For applications where peak database load is intermittent:
Use different values for the initial and maximum sizes.
Tune Capacity Increment and Shrink Frequency on the basis of load changes.
In my environment, I have kept the minimum value at 20 and the maximum at 100. The minimum can't be kept too high, as there will be extra overhead because of it.
2. Connection testing:
– WebLogic Server can test a connection from the connection pool before giving it to a client.
– The Test Connection On Reserve parameter enables automatic testing of database connections.
– Connections can also be tested periodically for validity by using the Test Frequency parameter.
– Both of these parameters can degrade performance, so avoid them in production instances.
3. Shrink Frequency:
– WebLogic Server periodically shrinks the connection pool to its initial capacity based on usage.
– The Shrink Frequency parameter is used to specify the number of seconds to wait before shrinking a connection pool.
– When set to zero (0), shrinking is disabled. This can be helpful in a production environment.
4. Configuring Row Prefetch:
– Row prefetching improves performance by fetching multiple rows from the server to the client in one server access.
– The optimal prefetch size depends on the particulars of the query.
– In general, increasing this number will increase performance, until a particular value is reached.
Note: This is applicable only for external clients, not for clients in the same Java Virtual Machine (JVM) as the WebLogic Server.
Very rarely will increased performance result from exceeding 100 rows.
The maximum value is 65536 and the minimum value is 2.
5. Statement caching:
The three types of statements in JDBC are:
– Statements (not cached)
– Prepared statements
– Callable statements
Prepared statements and callable statements are cacheable and improve overall performance through reuse.
– The statement cache type determines how the cache is refreshed:
LRU: replaces the least recently used statement with the new one.
FIXED: no replacement is done.
6. Statement Cache Size: The Statement Cache Size attribute determines the total number of prepared and callable statements to cache for each connection in each instance of the data source. By caching statements, you can increase your system performance. However, you must consider how your DBMS handles open prepared and callable statements. In many cases, the DBMS will maintain a cursor for each open statement; this applies to prepared and callable statements in the statement cache. If you cache too many statements, you may exceed the limit of open cursors on your database server. Setting the size of the statement cache to 0 turns off statement caching.
– Minimum value: 0
– Maximum value: 1024
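On the database side, the cursor pressure caused by statement caching can be watched with the standard v$ views; a minimal sketch:
SELECT s.sid, COUNT (*) AS open_cursor_entries
FROM v$open_cursor o JOIN v$session s ON s.sid = o.sid
GROUP BY s.sid
ORDER BY open_cursor_entries DESC;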
7. Connection Pinned to Thread:
– A data source can dedicate or "pin" a connection to the first server thread that requests it.
– This capability:
– May increase performance by eliminating potential contention for connections by threads.
– Is not supported with multi data sources or Oracle Real Application Clusters (RAC).
8. Inactive Connection Timeout:
The number of inactive seconds on a reserved connection before WebLogic Server reclaims the connection and releases it back into the connection pool.
You can use the Inactive Connection Timeout feature to reclaim leaked connections – connections that were not explicitly closed by the application. Note that this feature is not intended to be used in place of properly closing connections.
When set to 0, the feature is disabled.
Things to consider on the Oracle database instance side:
Number of processes
– It includes simultaneous users and Oracle background processes.
– The default setting is usually too low.
Shared pool size
– It contains data and control information for each instance.
– It is critical to performance.
– Even moderate use of stored procedures and triggers may require an increase.
Maximum open cursors
– It should be increased if you are using JDBC statement caching.
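These instance-side settings can be checked and adjusted from SQL*Plus; a minimal sketch (the value is only an example; note that changing processes requires SCOPE=SPFILE and a restart):
show parameter processes
show parameter shared_pool_size
show parameter open_cursors
ALTER SYSTEM SET open_cursors = 1000 SCOPE = BOTH;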
There are several other factors that affect JDBC performance, which I will try to cover later; the ones discussed above are the primary and important ones. Performance issues differ from environment to environment and may be due to other reasons or factors not mentioned above, but I hope understanding these parameters will certainly help.
Monday, March 12, 2018
SQL and PL/SQL practice
--Hierarchical queries
--Top down
select lpad(ename,length(ename)-1+level,'****') from emp connect by prior empno=mgr start with mgr is null;
--Bottom-up
SELECT LPAD (' ', LEVEL * 3 - 3)||ENAME ENAME
FROM EMP
CONNECT BY PRIOR mgr = empno
START WITH MGR=7788;
--Write a query to print all prime numbers less than or equal to 1000. Print your result on a single line, and use the ampersand (&) character as your separator (instead of a space).
with t as ( select level n
from dual
connect by level <= 1000)
select listagg(N,'&') within group(order by n) from (select n,
case
when exists
(select null
from t t2
where t2.n > 1 and t2.n < t.n and t.n / t2.n = trunc (t.n / t2.n)) then 'no prime'
else 'prime'
end is_prime
from t t where n<>1 ) where is_prime='prime';
P(R) represents a pattern drawn by Julia in R rows. The following pattern represents P(5):
*
* *
* * *
* * * *
* * * * *
Write a query to print the pattern P(20).
select lpad('* ',level*2,'* ') from dual connect by rownum< 21;
--P(R) represents a pattern drawn by Julia in R rows. The following pattern represents P(5):
* * * * *
* * * *
* * *
* *
*
Write a query to print the pattern P(20).
select * from (select lpad('* ',level*2,'* ') t from dual connect by rownum< 21) order by length(t) desc;
--Analytical queries
--Query to find the departments having the maximum number of employees. In other words, the customer who placed the maximum number of orders.
select deptno from emp group by deptno having count(*)=(select max(count(*)) from emp e group by e.deptno);
--order by analytical clause
select distinct deptno from (select deptno,dense_rank() over (order by cnt desc) rnk from(select deptno,count(*) over (partition by deptno) cnt from emp order by cnt desc)) where rnk=1
--Analytical functions that do not need ordering of the elements.
--sum
select a.*,sum(sal) over (partition by deptno) part_tot_sal from emp a;
--In the absence of a partition clause, analytical functions work on the entire set of rows.
--count
select a.*,count(ename) over (partition by deptno) part_tot_emp from emp a;
--avg
select a.*,trunc(avg(sal) over (partition by deptno)) part_avg_sal from emp a;
--min
select a.*,min(sal) over (partition by deptno) part_min_sal from emp a;
--max
select a.*,max(sal) over (partition by deptno) part_max_sal from emp a;
--Analytical functions that require an ORDER BY clause. Notice that you don't really need to give an input parameter to these functions.
--row_number()--It can be used for TOP-N queries (top 5, last 5, or records 90-100).
select a.*,row_number() over (order by sal desc NULLS LAST) part_max_sal from emp a;
--Numbering with partition
select a.*,row_number() over (partition by deptno order by sal desc NULLS LAST) part_max_sal from emp a;
--rank()--To define rank. In case of a tie of 2 records at position N, RANK declares 2 positions N, skips position N+1, and gives position N+2 to the next record, while DENSE_RANK declares 2 positions N but does not skip position N+1.
--dense_rank()
select a.*,row_number() over (order by sal desc NULLS LAST) row_order,rank() over (order by sal desc NULLS LAST) rank_order ,dense_rank() over (order by sal desc NULLS LAST) dense_rank_order from emp a;
--LEAD, LAG--Note that they need the column to which they are applied.
select a.*,row_number() over (order by sal desc NULLS LAST) row_order,rank() over (order by sal desc NULLS LAST) rank_order ,dense_rank() over (order by sal desc NULLS LAST) dense_rank_order,
lead(sal,1,0) over(partition by deptno order by sal desc nulls last) next_sal,lag(sal,1,0) over (partition by deptno order by sal desc nulls last) prev_sal from emp a;
--FIRST_VALUE and LAST_VALUE
select a.*,first_value(sal) over (partition by deptno order by sal desc nulls last) first_sal,last_value(sal) over (partition by deptno order by sal desc nulls last) last_sal from emp a;
-- How does each employee's salary compare with the average salary of the first-
-- year hires of their department?
--This uses KEEP (DENSE_RANK FIRST)
SELECT empno, deptno, TO_CHAR(hiredate,'YYYY') HIRE_YR, sal,
TRUNC(
AVG(sal) KEEP (DENSE_RANK FIRST
ORDER BY TO_CHAR(hiredate,'YYYY') )
OVER (PARTITION BY deptno)
) AVG_SAL_YR1_HIRE
FROM emp
WHERE deptno IN (20, 10)
ORDER BY deptno, empno, HIRE_YR;
--Find total employee count under manager down the whole hierarchy path
select ename, count (*) - 1 cnt
from (select connect_by_root ename ename
from emp
connect by prior empno=mgr)
group by ename having count(*) > 1
order by cnt desc;
--Bulk collection practice
CREATE TABLE bulk_collect_test AS
SELECT owner, object_name name, object_id
FROM all_objects;
create table bulk_collect_test1 as select * from bulk_collect_test where 1=2;
DECLARE
CURSOR cur
IS
SELECT * FROM bulk_collect_test;
TYPE t_cur IS TABLE OF cur%rowtype;
v_cur t_cur;
l_limit NUMBER:=100;
BEGIN
OPEN cur;
LOOP
FETCH cur BULK COLLECT INTO v_cur LIMIT l_limit;
FORALL i IN 1 .. v_cur.COUNT
INSERT INTO bulk_collect_test1
VALUES (v_cur (i).owner, v_cur (i).name, v_cur (i).object_id);
exit when v_cur.count<l_limit;
END LOOP;
close cur;
--Another way, using a direct select. In this way we cannot apply a limit.
select * bulk collect into v_cur from bulk_collect_test;
FORALL i IN 1 .. v_cur.COUNT
INSERT INTO bulk_collect_test1
VALUES v_cur (i);
commit;
END;
/
--SQL%BULK_ROWCOUNT example
--It gives granular information about the rows affected by each iteration of the FORALL statement.
--above code can be re-written by using SQL%BULK_ROWCOUNT
DECLARE
TYPE t_array_tab IS TABLE OF VARCHAR2 (30);
l_array t_array_tab
:= t_array_tab ('SCOTT',
'SYS',
'SYSTEM',
'DBSNMP',
'BANANA');
BEGIN
FORALL i IN l_array.FIRST .. l_array.LAST
DELETE FROM bulk_collect_test1
WHERE owner = l_array (i);
FOR i IN l_array.FIRST .. l_array.LAST
LOOP
DBMS_OUTPUT.put_line (
'Records deleted for owner: '
|| RPAD(l_array (i),15,' ')
|| ' is: '
|| SQL%BULK_ROWCOUNT (i));
END LOOP;
COMMIT;
END;
/
--exception handling in forall
create table id_tab(id number not null);
DECLARE
TYPE t_table IS TABLE OF id_tab%ROWTYPE;
v_tab t_table := t_table ();
l_error_count NUMBER;
--Create exception and bind it with expected error code so that we can handle a named exception
dml_error EXCEPTION;
PRAGMA EXCEPTION_INIT (dml_error, -24381);
BEGIN
FOR i IN 1 .. 100
LOOP
v_tab.EXTEND;
v_tab (v_tab.LAST).id := i;
END LOOP;
--Creating error records as table does not allow null values
v_tab (50).id := NULL;
v_tab (61).id := NULL;
BEGIN
FORALL i IN v_tab.FIRST .. v_tab.LAST SAVE EXCEPTIONS
INSERT INTO id_tab
VALUES v_tab (i);
EXCEPTION
WHEN dml_error
THEN
l_error_count := SQL%BULK_EXCEPTIONS.COUNT;
DBMS_OUTPUT.PUT_LINE ('The number of errors: ' || l_error_count);
FOR i IN 1 .. l_error_count
LOOP
DBMS_OUTPUT.PUT_LINE (
'Index'
|| i
|| ' Array index: '
|| SQL%BULK_EXCEPTIONS (i).ERROR_INDEX
|| ' Error: '
|| SQLERRM(-SQL%BULK_EXCEPTIONS (i).ERROR_CODE));
END LOOP;
END;
END;
/
---User defined exceptions
--Define exception and use it. Note that by default error code is 1 if no binding is used.
DECLARE
my_exception EXCEPTION;
-- PRAGMA EXCEPTION_INIT (my_exception, -20221);
BEGIN
RAISE my_exception;
EXCEPTION
WHEN my_exception
THEN
DBMS_OUTPUT.put_line (SQLCODE);
DBMS_OUTPUT.put_line (SQLERRM);
END;
/
--Bind exception with error code
DECLARE
my_exception EXCEPTION;
PRAGMA EXCEPTION_INIT (my_exception, -20221);
BEGIN
RAISE my_exception;
EXCEPTION
WHEN my_exception
THEN
DBMS_OUTPUT.put_line (SQLCODE);
DBMS_OUTPUT.put_line (SQLERRM);
END;
/
--give description of error by using RAISE_APPLICATION_ERROR
DECLARE
my_exception EXCEPTION;
PRAGMA EXCEPTION_INIT (my_exception, -20221);
BEGIN
RAISE_application_error(-20221, 'This is exception description');
EXCEPTION
WHEN my_exception
THEN
DBMS_OUTPUT.put_line (SQLCODE);
DBMS_OUTPUT.put_line (SQLERRM);
END;
/
--Ref cursor advanced options: return results as a ref cursor without an OUT parameter.
CREATE table t1 (
id NUMBER,
description VARCHAR2(30),
created_date DATE
);
INSERT INTO t1 VALUES (1, 'The value 1', SYSDATE-2);
INSERT INTO t1 VALUES (2, 'The value 2', SYSDATE-1);
INSERT INTO t1 VALUES (3, 'The value 3', SYSDATE);
COMMIT;
--Now we can create a stored procedure to return one or more resultsets containing this data.
CREATE OR REPLACE PROCEDURE get_my_results (p_id IN NUMBER DEFAULT NULL)
AS
l_cursor_1 SYS_REFCURSOR;
l_cursor_2 SYS_REFCURSOR;
BEGIN
IF p_id IS NOT NULL THEN
OPEN l_cursor_1 FOR
SELECT description, created_date
FROM t1
WHERE id = p_id;
DBMS_SQL.RETURN_RESULT(l_cursor_1);
END IF;
OPEN l_cursor_2 FOR
SELECT COUNT(*)
FROM t1;
DBMS_SQL.RETURN_RESULT(l_cursor_2);
END;
/
exec get_my_results();
--Storing collections into database
--Nested table
create type t_nemp as OBJECT(ID NUMBER, NAME VARCHAR2(30));
create type t_nemplist as table of t_nemp;
create table t_emp (id number ,emps t_nemplist) nested table emps store as emp_table;
insert into t_emp values (10,t_nemplist(t_nemp(10,'Anand'),t_nemp(11,'Ved')));
insert into t_emp values (20,t_nemplist(t_nemp(21,'Jitendra'),t_nemp(25,'Ansh')));
select * from t_emp;
--Associative arrays.
create type t_aaemp as OBJECT(ID NUMBER, NAME VARCHAR2(30));
--Notice that the line below gives errors and cannot be stored in the database. It proves that associative arrays cannot be stored in the database.
create type t_aaemplist as table of t_nemp index by pls_integer;
--varrays
create or replace type t_vemp as OBJECT(ID NUMBER, NAME VARCHAR2(30));
create or replace type t_vemplist as VARRAY(10) OF t_vemp;
create table t_vaemp (id number ,emps t_vemplist);
insert into t_vaemp values (10,t_vemplist(t_vemp(10,'Anand'),t_vemp(11,'Ved')));
insert into t_vaemp values (20,t_vemplist(t_vemp(21,'Jitendra'),t_vemp(25,'Ansh')));
select * from t_vaemp;
--Deadlocks
--Deadlock by bitmap indexes
create table lock1(id number);
create bitmap index lock1_idx on lock1(id);
--From session #1
insert into lock1 values(1);
--from session#2
insert into lock1 values(2);
insert into lock1 values(1);
--Back to session #1
insert into lock1 values(2);
--Bamm ...deadlock detected
--Deadlocks by foreign keys (parent-child hierarchy)
create table lock1(id number);
alter table lock1 add constraint lock1_uk unique(id);
create table lock2(id number);
alter table lock2 add constraint lock2_lock1_fk foreign key(id) references lock1(id);
--From session #1
insert into lock1 values(1);
--from session#2
insert into lock1 values(2);
insert into lock2 values(1);
--Back to session #1--any of below
insert into lock2 values(2);
insert into lock1 values(2);
--Bamm...Deadlock
--Deadlock in autonomous transaction
CREATE OR REPLACE PROCEDURE auto_deadlock_test (id IN NUMBER)
AS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO lock2
VALUES (id);
COMMIT;
END;
/
BEGIN
INSERT INTO lock1
VALUES (1);
auto_deadlock_test (1);
END;
/
--Bamm...Deadlock detected
--Mutating trigger error solution...real example
CREATE OR REPLACE PACKAGE state_pkg
AS
v_is_primary_relation md_table_relation.is_primary_relation%TYPE;
v_from_table_name md_table_relation.from_table_name%TYPE;
v_to_table_name md_table_relation.to_table_name%TYPE;
v_from_schema_name md_table_relation.from_schema_name%TYPE;
v_to_schema_name md_table_relation.to_schema_name%TYPE;
v_schema_name MD_COLUMN_INFO.ALT_DISPLAY_COLUMN_SCHEMA%TYPE;
v_table_name MD_COLUMN_INFO.ALT_DISPLAY_COLUMN_TABLE%TYPE;
v_column_name MD_COLUMN_INFO.ALT_DISPLAY_COLUMN_COLUMN%TYPE;
v_dimension_name MD_HIERARCHY.DIMENSION_NAME%type;
v_hierarchy_name MD_HIERARCHY.HIERARCHY_NAME%type;
v_oldispwbenabled MD_HIERARCHY.IS_PWB_ENABLED%type;
v_newispwbenabled MD_HIERARCHY.IS_PWB_ENABLED%type;
END;
/
CREATE OR REPLACE TRIGGER BIU_MD_TABLE_RELATION
FOR INSERT OR UPDATE OF is_primary_relation
ON md_table_relation
COMPOUND TRIGGER
v_is_primary_relation md_table_relation.is_primary_relation%TYPE;
v_from_table_name md_table_relation.from_table_name%TYPE;
v_to_table_name md_table_relation.to_table_name%TYPE;
v_from_schema_name md_table_relation.from_schema_name%TYPE;
v_to_schema_name md_table_relation.to_schema_name%TYPE;
v_schema_name MD_COLUMN_INFO.ALT_DISPLAY_COLUMN_SCHEMA%TYPE;
v_table_name MD_COLUMN_INFO.ALT_DISPLAY_COLUMN_TABLE%TYPE;
v_column_name MD_COLUMN_INFO.ALT_DISPLAY_COLUMN_COLUMN%TYPE;
v_dimension_name MD_HIERARCHY.DIMENSION_NAME%type;
v_hierarchy_name MD_HIERARCHY.HIERARCHY_NAME%type;
v_oldispwbenabled MD_HIERARCHY.IS_PWB_ENABLED%type;
v_newispwbenabled MD_HIERARCHY.IS_PWB_ENABLED%type;
ncount NUMBER;
BEFORE EACH ROW IS
BEGIN
IF :NEW.is_primary_relation = 1
THEN
BEGIN
state_pkg.v_from_table_name := :NEW.from_table_name;
state_pkg.v_to_table_name := :NEW.to_table_name;
state_pkg.v_from_schema_name := :NEW.from_schema_name;
state_pkg.v_to_schema_name := :NEW.to_schema_name;
END;
END IF;
END BEFORE EACH ROW;
AFTER STATEMENT IS
BEGIN
SELECT COUNT (*)
INTO ncount
FROM md_table_relation
WHERE from_table_name = state_pkg.v_from_table_name
AND to_table_name = state_pkg.v_to_table_name
AND from_schema_name = state_pkg.v_from_schema_name
AND to_schema_name = state_pkg.v_to_schema_name
AND is_primary_relation = 1;
IF ncount > 1
THEN
raise_application_error
(-20000,
'Primary relation already exists on the table set'
);
END IF;
END AFTER STATEMENT;
END;
/
select * from emp
select * from dept
select b.deptno,max(sal) from emp a , dept b where a.deptno=b.deptno group by b.deptno
select b.deptno,max(sal) from emp b group by deptno;
select * from (
select b.ENAME,job,max(sal) over(partition by deptno) dept_max,max(sal) over(partition by job) job_max, sal from emp b
)
where sal=job_max;
select * from (
select b.ENAME,deptno,job,dense_rank() over (partition by deptno order by sal desc) dept_rank,sal from emp b
)
where dept_rank<6;
SELECT ENAME, deptno, job, dept_rank, sal, hiredate, CASE WHEN sal < LEAD ( sal) OVER ( PARTITION BY deptno ORDER BY sal DESC) THEN 'No' ELSE 'Yes' END seniority_salARY
FROM (SELECT *
FROM (SELECT b.ENAME, deptno, hiredate, job, DENSE_RANK () OVER ( PARTITION BY deptno ORDER BY sal DESC) dept_rank, sal
FROM emp b))
WHERE dept_rank < 6 order by deptno, dept_rank ;
set serveroutput on;
declare
type rc is ref cursor;
l_rc rc;
l_dummy all_objects.object_name%type;
l_start number default dbms_utility.get_time;
begin
for i in 1 .. 1000
loop
open l_rc for
'select object_name
from all_objects
where object_id = ' || i;
fetch l_rc into l_dummy;
close l_rc;
-- dbms_output.put_line(l_dummy);
end loop;
dbms_output.put_line
(round((dbms_utility.get_time-l_start)/100, 2) ||
' Seconds...' );
end;
/
declare
type rc is ref cursor;
l_rc rc;
l_dummy all_objects.object_name%type;
l_start number default dbms_utility.get_time;
begin
for i in 1 .. 1000
loop
open l_rc for
'select object_name
from all_objects
where object_id = :x'
using i;
fetch l_rc into l_dummy;
close l_rc;
-- dbms_output.put_line(l_dummy);
end loop;
dbms_output.put_line
(round((dbms_utility.get_time-l_start)/100, 2) ||
' Seconds...' );
end;
/
drop table id12;
create table id12 (id number);
insert into id12 values(1);
insert into id12 values(2);
insert into id12 values(3);
select index_name from user_indexes where table_name='ID12';
CREATE OR REPLACE FUNCTION funct
RETURN number
IS
BEGIN
DBMS_OUTPUT.put_line ( 'Executed');
RETURN 10;
END;
/
set serveroutput on ;
select nvl(id,funct) from id12;
select coalesce(id,funct, 1,funct) from id12;
select coalesce('2',1) from dual
--
select nullif(1,1) from dual;
select * from id12
CREATE TABLE t1
AS
SELECT LEVEL AS c1, LEVEL AS c2, RPAD ( 'x', 100) AS c3
FROM DUAL
CONNECT BY LEVEL <= 10000;
CREATE TABLE t2
AS
SELECT LEVEL AS c1, LEVEL AS c2, RPAD ( 'x', 100) AS c3
FROM DUAL
CONNECT BY LEVEL <= 10000;
ALTER SESSION ENABLE PARALLEL DML;
EXPLAIN PLAN
FOR merge /*+ parallel */ into t1
using (select c1, c2 from t2) t2
on (t1.c1 = t2.c1)
when matched then
update set t1.c2 = t1.c2
when not matched then
insert(c1, c2) values(t2.c1, t2.c2);
--Direct-path insert scratchpad (assumption: my_objects is seeded from all_objects; the original CTAS had lost its SELECT)
create table my_objects as select * from all_objects;
insert /*+ APPEND */ into my_objects_temp
select * from my_objects;
create table my_objects_temp as
select /*+ APPEND */ * from my_objects
union all
select /*+ APPEND */ * from my_objects
union all
select /*+ APPEND */ * from my_objects
insert /*+ APPEND */ into my_objects
select * from MY_OBJECTS_GTT where rownum<100
INSERT INTO MY_OBJECTS_GTT
select * from my_objects
SELECT object_name, DBMS_ROWID.rowid_relative_fno ( ROWID) rfile#, DBMS_ROWID.rowid_block_number ( ROWID) block#,
DBMS_ROWID.rowid_row_number(ROWID) row#, rowid rd FROM my_objects;
select dbms_utility.make_data_block_address(1,48409) from dual;
select * from user_objects where object_name='MY_OBJECTS'
SELECT objd,file#, block#, DECODE (class#, 1, 'data block', 2, 'sort block', 3, 'save undo block', 4, 'segment header', 5, 'save undo header', 6, 'free list', 7, 'extent map', 8, '1st level bmb', 9, '2nd level bmb', 10, '3rd level bmb', 11, 'bitmap block', 12, 'bitmap index block', 13, 'file header block', 14, 'unused', 15, 'system undo header', 16, 'system undo block', 17, 'undo header', 18, 'undo block') class_type, status
FROM v$bh
WHERE objd = 85568
ORDER BY 1, 2, 3;
select hladdr
from x$bh
where dbarfil = 1
and dbablk = 48409;
select * from MY_OBJECTS_GTT
truncate table MY_OBJECTS_GTT
drop table my_objects_temp
truncate table my_objects_temp
select owner,object_name from MY_OBJECTS order by 2 desc
create table id20 (id number unique)
create table id21 (id number, constraint id21_id20_FK foreign key(id) references id20(id))
insert into id20 values(1)
create table big as select * from all_objects
drop table small
create table small as select * from all_objects where rownum<10
create index big_idx on big(object_name)
create index small_idx on small(object_name)
alter table big modify(object_name null)
alter table small modify(object_name null)
update small set object_name=null where rownum<2
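--Fetch across COMMIT: with a FOR UPDATE cursor the COMMIT releases the locks, so the second FETCH below raises ORA-01002 (fetch out of sequence).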
DECLARE
CURSOR c1
IS
SELECT *
FROM emp FOR UPDATE NOWAIT;
Z c1%ROWTYPE;
BEGIN
OPEN C1;
FETCH c1 INTO Z;
COMMIT;
FETCH c1 INTO Z;
END;
/
create table bitmap_users as select * from all_objects;
select * from bitmap_users
create index bitmap_users_bitidx on bitmap_users(object_id);
create bitmap index bitmap_users_bitidx on bitmap_users(object_id);
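--The bitmap CREATE above fails with ORA-01408 (such column list already indexed), since object_id already has a b-tree index.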
alter table bitmap_users add constraint bitmap_users_pk primary key(object_id);
SET TIMING ON;
SELECT COUNT ( *)
FROM big
WHERE object_name IN (SELECT object_name FROM small);
SELECT COUNT ( *)
FROM big s
WHERE NOT EXISTS
(SELECT 1
FROM small b
WHERE b.object_name = s.object_name)
select * from emp order by sal desc
select * from (select a.*, rownum rn from emp a order by sal desc) b where b.rn=1
select a.*, rownum rn from emp a order by sal desc
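--Note: in the two queries above, ROWNUM is assigned before the ORDER BY is applied, so rn=1 is not necessarily the top salary; the form below orders first and numbers afterwards.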
select sal from (
select rownum num, c.* from (
select sal from emp order by 1 desc) c)d
where num = 1
drop table id22
create table id22(id number primary key)
create table id23(id number, constraint id23 foreign key(id) references id22(id));
alter table id22 drop primary key
alter table id23 disable constraint ID23
select * from user_constraints where table_name='ID23'
select * from (select ename,deptno,sal , rank() over (partition by deptno order by sal DESC) rnk from emp) a
where a.rnk=2
select e.deptno ,e.ename, max_sal from emp e,(select deptno,max(sal) max_sal from emp group by deptno) b
where e.deptno=b.deptno and e.sal=b.max_sal
select 'a' from dual
minus
select 1 from dual;
alter table id22 drop primary key cascade
create table teams(name varchar2(30));
insert into teams values('A');
insert into teams values('B');
insert into teams values('C');
begin
for i in (select rownum rn, name from teams order by rn desc)
loop
dbms_output.put_line(i.name);
for r in (select name from(select rownum rn, name from teams order by rn desc) where rn<3 )
loop
null; --placeholder: PL/SQL does not allow an empty loop body
end loop;
end loop;
end;
/
select name from(select rownum rn, name from teams order by rn desc) where rn<3
select TO_CHAR(sysdate+2,'DAY') FROM dual;
select * from emp a, (select deptno,avg(sal) avg_sal from emp group by deptno) b
where a.deptno=b.deptno and a.sal>b.avg_sal;
Mutating trigger error example: note that the mutating trigger error occurs only with row-level (FOR EACH ROW) DML triggers. The mutating error occurs when we are performing some DML (insert, delete, update) operation and, from the same trigger, we try to select from (or run any DML against) the table that owns the trigger. This creates a consistency issue, and Oracle throws a mutating-table error (ORA-04091).
create table mutat_trigger(id number)
insert into mutat_trigger values (&id);
create or replace trigger mut_trigger
after insert on mutat_trigger for each row
declare
cnt number;
begin
select count(*)into cnt from mutat_trigger ;
dbms_output.put_line(cnt);
end;
/