"The art of performance tuning is that there are no fixed rules" — you have to rely on trial and error.
For transactions still in flight during an AWR snapshot interval, AWR relies on ASH samples to capture the details.
DB Time = CPU time + wait time
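That breakdown can be checked directly against the time model; a quick sanity-check query (column names as in v$sys_time_model):

```sql
-- DB Time vs. DB CPU from the time model; VALUE is in microseconds
select stat_name, round(value / 1e6) as seconds
from   v$sys_time_model
where  stat_name in ('DB time', 'DB CPU');
```

The gap between the two numbers is, roughly, the time spent in waits.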
enq: TM – contention can also occur because of a table-level lock (for example, one taken by DDL).
Buffer busy waits: these can also indicate a storage problem, where dirty buffers are not being flushed to disk fast enough.
Latch sleep breakdown:
To restrict rows per block: alter table t minimize records_per_block; (this caps future inserts at the current maximum number of rows in any block of the table).
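Because the command only caps future inserts at the table's current maximum rows per block, the usual recipe is to thin the table out first. A sketch, assuming a small hot table t that can be rebuilt:

```sql
-- Sketch: spread rows of a hot table across more blocks.
-- 1) Copy the data out and delete rows until each block holds only
--    the number of rows you want per block.
-- 2) Cap rows per block at the current maximum:
alter table t minimize records_per_block;
-- 3) Re-insert the saved rows; each block will now hold at most that many rows.
```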
Load profile:
Instance efficiency percent:
If the "Parse CPU to Parse Elapsed %" ratio is low, there may also be a shared pool issue.
Why is my cursor not shared?
Check v$sql_shared_cursor for the reason.
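For example (the sql_id below is a placeholder — substitute your own); every column showing 'Y' for a child cursor is a reason that cursor could not be shared:

```sql
select *
from   v$sql_shared_cursor
where  sql_id = '&sql_id';  -- columns with 'Y' explain the non-sharing
```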
Gather system statistics only when you change system components (CPU, I/O subsystem, etc.).
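Workload system statistics are gathered with dbms_stats; a sketch of the start/stop approach:

```sql
-- Gather workload system statistics around a representative workload window
exec dbms_stats.gather_system_stats('START');
-- ... let a representative workload run for a while ...
exec dbms_stats.gather_system_stats('STOP');
```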
We can set optimizer_ignore_hints to TRUE to make the optimizer ignore any hints embedded in queries. This can be done as part of an upgrade exercise to validate whether the hints are still required — especially hints that were added to work around missing optimizer functionality.
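A session-level sketch of that validation (the parameter is available from 18c onwards):

```sql
alter session set optimizer_ignore_hints = true;
-- re-run the hinted workload and compare plans and timings
alter session set optimizer_ignore_hints = false;
```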
Oracle disk and I/O tuning:
/var/log/messages: check here for OS-level storage errors.
Disk scrubbing is a background process that reads disks during idle periods to detect irremediable read errors in infrequently accessed sectors. Timely detection of such latent sector errors (LSEs) is important to reduce data loss.
In Exadata this is done automatically, continuously checking for disk corruption.
iSCSI storage: cheapest and slowest option.
NFS storage:
We can even place important data files on the outer tracks of a disk, which are faster.
Thursday, December 27, 2018
Thursday, December 20, 2018
Parallel max server and SQL Developer
http://structureddata.org/2012/02/29/pitfalls-of-using-parallel-execution-with-sql-developer/
https://iiotzov.wordpress.com/tag/oracle-parallel-slaves-idle/

I was watching the webex recording; I think the inactive sessions had already been cleared. I believe the root cause of the issue was indeed the inactive sessions, but we did not segregate JDA application sessions from user-created sessions. The WW DBAs killed all the inactive sessions and reported that they were all JDBC inactive sessions, making it an application issue.
JDA inactive sessions should belong to the connection pool, and their count will stay roughly the same over time. We can monitor now as well and confirm the same.
But one crucial piece of information in between went unnoticed.


If you look at the screenshots above, you will see that a query initiated from SQL Developer created 256 parallel threads. You might also notice that the query does not contain any parallel hint, so I am assuming that the table UDT_SSTEMPLATEDATA must have default table-level parallelism enabled, and the user started the session without knowing that.
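Default table-level parallelism can be confirmed from the data dictionary; a quick check (assuming access to dba_tables):

```sql
select table_name, degree
from   dba_tables
where  table_name = 'UDT_SSTEMPLATEDATA';
-- DEGREE = 'DEFAULT' or a value > 1 means unhinted queries can run in parallel
```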
I think this query was the root cause of the issue. "By default, SQL Developer has the array fetch size set to 50. This means that for any cursor SQL Developer opens for scrolling, it will fetch the first 50 rows, and when you scroll to the bottom of those rows in a grid, it will fetch the next 50 rows, and so on. This is good in the sense that the JDBC application can fetch an array of rows with a single JDBC database call; however, using this approach with Parallel Execution, the PX servers used for this cursor will not be released until the cursor is canceled or the last row is fetched. Currently the only way to force reading until the end of cursor in SQL Developer is to issue a Control+End in the data grid. As a result, any action that uses Parallel Execution and has not fetched all the rows or been canceled/closed will squat on those Parallel Execution resources and prevent them from being used by other users. If enough users have open cursors backed by Parallel Execution, it is possible to consume all of the Parallel Execution servers, and Parallel Execution requests will be forced to Serial Execution because resources are not available, even if the system is completely idle."
My gut feeling (I don't have access to the environment, so my views are based entirely on the webex recording) is that killing those inactive SQL Developer sessions helped revive the situation.
What can be done to prevent this:
- Set the parallel_max_servers parameter high enough that parallel execution resources are not drained.
- Check for and disable any default table-level parallelism; it should be enabled only with great care.
- Educate users not to use the /*+ PARALLEL */ hint casually without knowing its impact, and to give a specific degree when they do.
- For the problematic query, we validated the plan with a couple of different parallel degree values, and the current plan looks good.
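To see whether the PX server pool is being squatted on, and to switch off a table's default degree, a sketch using the PX views (table name as in this post):

```sql
-- PX server pool usage at a glance
select statistic, value
from   v$px_process_sysstat
where  statistic like 'Servers%';

-- Disable default table-level parallelism on the suspect table
alter table udt_sstemplatedata noparallel;
```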
I hope this answers why the parallel hint was not respected before the inactive sessions were killed and why it is working after that.