PL/Scope Reports on Static SQL Statements and Call Sites for Dynamic SQL:
The new view, DBA_STATEMENTS, reports on occurrences of static SQL in PL/SQL units, listing, for example, the statement text, the statement type (SELECT, INSERT, UPDATE, or DELETE), and the SQL_ID. Dynamic SQL call sites (EXECUTE IMMEDIATE, OPEN cursor FOR with a dynamic SQL string) are also listed in this view family. The DBA_IDENTIFIERS view family now reports on identifiers used in static SQL and notes their type (table, column, materialized view, sequence, and so on).
Because the purpose of PL/SQL is to issue SQL statements, it is useful that PL/Scope now knows about the uses of SQL in PL/SQL source code. For example, if a performance investigation reports the SQL_ID of a slow statement, its call sites in PL/SQL programs can be found immediately. Likewise, when PL/SQL source is under quality control for SQL injection risks, the sites where dynamic SQL is used can be found immediately.
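As a sketch of that SQL_ID lookup (the SQL_ID value is a placeholder, and PL/Scope must have been enabled when the units were compiled; the statement-collection setting shown follows the feature description):

    -- collect statement metadata when compiling PL/SQL units
    ALTER SESSION SET plscope_settings = 'identifiers:all, statements:all';

    -- find the PL/SQL call sites of a slow statement by its SQL_ID
    SELECT owner, object_name, object_type, line, text
    FROM   dba_statements
    WHERE  sql_id = '7hu4s70w2bvxb';   -- hypothetical SQL_ID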
Enhancing CAST Function With Error Handling:
The existing CAST function is enhanced to return a user-specified value in the case of a conversion error instead of raising an error.
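A minimal sketch of the enhanced syntax (the input value is illustrative):

    -- returns 0 instead of raising ORA-01722 for the unconvertible value
    SELECT CAST('not a number' AS NUMBER DEFAULT 0 ON CONVERSION ERROR)
    FROM   dual;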
New SQL and PL/SQL Function VALIDATE_CONVERSION:
The new function, VALIDATE_CONVERSION, determines whether a given input value can be converted to the requested data type.
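A quick sketch (values are illustrative); the function returns 1 when the conversion would succeed and 0 when it would fail:

    SELECT VALIDATE_CONVERSION('1234.56' AS NUMBER)                  AS ok,
           VALIDATE_CONVERSION('29-FEB-2017' AS DATE, 'DD-MON-YYYY') AS bad
    FROM   dual;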
Binding PL/SQL-Only Data Types to SQL Statements Using DBMS_SQL:
Oracle Database 12c Release 1 (12.1) introduced the ability to bind values of PL/SQL-only data types, most notably PLS_INTEGER-indexed tables of records, to SQL statements. However, there were some restrictions, which are lifted in this release: PL/SQL-only data types can now also be bound when using the DBMS_SQL API and when invoking a C external procedure.
This improvement brings the DBMS_SQL API into parity with native dynamic SQL.
Improving the PL/SQL Debugger:
In prior releases, it was necessary to change the application to include calls to start and stop debugging. With this improvement, one database session can start, and stop, the debugging of a different session.
Also, when stopped at a breakpoint, the debugging user can issue ad hoc SQL commands and can, therefore, run PL/SQL code that invokes stored PL/SQL subprograms from an anonymous block.
With these improvements, if a problem occurs in a long-running test or in a production environment, it can be investigated from another session. In addition to inspecting the state of in-scope variables, it is now possible to examine the database state as the session being debugged sees it during an uncommitted transaction.
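As a hedged sketch of cross-session debugging: assuming the DBMS_DEBUG_JDWP.CONNECT_TCP overload that accepts the target session's ID and serial number (the parameter names here are assumptions, so check the DBMS_DEBUG_JDWP documentation), a privileged session could attach a debugger to another session along these lines:

    BEGIN
      -- attach the debugger listening on devhost:4000 to session (SID 123, serial# 4567)
      DBMS_DEBUG_JDWP.CONNECT_TCP(
        host           => 'devhost',
        port           => '4000',
        session_id     => 123,     -- assumed parameter name
        session_serial => 4567);   -- assumed parameter name
    END;
    /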
Materialized Views: Real-Time Materialized Views:
Materialized views can be used for query rewrite even if they are not fully synchronized with the base tables and are considered stale. Using materialized view logs for delta computation together with the stale materialized view, the database can compute the query and return correct results in real time.
Because such materialized views can be used for query rewrite at any time, with the accurate result computed in real time, queries are processed quickly and consistently. This alleviates the stringent requirement of always having fresh materialized views for the best performance.
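A minimal sketch of a real-time materialized view (the tables and columns are illustrative, and the materialized view log clauses needed for fast refresh vary by query):

    CREATE MATERIALIZED VIEW LOG ON sales
      WITH ROWID, SEQUENCE (prod_id, amount_sold)
      INCLUDING NEW VALUES;

    CREATE MATERIALIZED VIEW sales_rt_mv
      REFRESH FAST ON DEMAND
      ENABLE ON QUERY COMPUTATION   -- makes the MV usable for rewrite even while stale
      AS SELECT prod_id, SUM(amount_sold) AS total_sold
         FROM   sales
         GROUP  BY prod_id;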
Materialized Views: Statement-Level Refresh:
In addition to ON COMMIT and ON DEMAND refresh, materialized join views can now be refreshed when a DML operation takes place, without waiting for the transaction to commit. This is predominantly relevant for star schema deployments.
The new ON STATEMENT refresh capability provides more flexibility to the application developers to take advantage of the materialized view rewrite, especially for complex transactions involving multiple DML statements. It offers built-in refresh capabilities that can replace customer-written trigger-based solutions, simplifying an application while offering higher performance.
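A sketch of the new refresh mode (the schema is illustrative, and ON STATEMENT materialized views carry additional restrictions not shown here):

    CREATE MATERIALIZED VIEW order_items_mv
      REFRESH FAST ON STATEMENT
      AS SELECT o.rowid AS o_rid, i.rowid AS i_rid,
                o.order_id, i.product_id, i.quantity
         FROM   orders o, order_items i
         WHERE  o.order_id = i.order_id;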
Enhancing LISTAGG Functionality:
LISTAGG aggregates the values of a column by concatenating them into a single string. New functionality is added for managing situations where the length of the concatenated string is too long.
Developers can now control the process for managing overflowing LISTAGG aggregates. This increases the productivity and flexibility of this aggregation function.
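For example, the new overflow clause might be used as follows (the table and column are illustrative):

    -- truncate the result with an ellipsis and append the count of omitted values
    SELECT LISTAGG(employee_name, ',' ON OVERFLOW TRUNCATE '...' WITH COUNT)
             WITHIN GROUP (ORDER BY employee_name) AS all_names
    FROM   employees;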
White Lists (ACCESSIBLE BY) Enhancements:
In this release, you can define a white list on individual subprograms in a package. The ACCESSIBLE BY clause specifies a list of PL/SQL units that are considered safe to invoke the subprogram, and blocks all others.
Lowering the granularity increases the usefulness of the ACCESSIBLE BY clause for a package.
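A small sketch (the unit names are illustrative): the second procedure below can be invoked only from the listed unit, while the rest of the package remains callable by anyone with the EXECUTE privilege:

    CREATE OR REPLACE PACKAGE acct_api IS
      PROCEDURE open_account;    -- callable by any user with EXECUTE on the package
      PROCEDURE write_audit      -- callable only from the listed unit
        ACCESSIBLE BY (PROCEDURE security_mgr);
    END acct_api;
    /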
Long Identifiers:
The maximum length of most identifiers is increased to 128 bytes, up from 30 bytes in previous releases.
Providing longer identifiers gives customers greater flexibility in defining their naming schemes, such as longer and more expressive table names. Having longer identifiers also enables object name migration between databases with different character sets, such as Thai to Unicode.
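For example, a descriptive name that would have exceeded the old 30-byte limit now compiles (the name is illustrative):

    CREATE TABLE customer_relationship_management_contacts ( id NUMBER );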
Static PL/SQL Expressions Now Allowed Where Previously Literals Were Required:
Some examples of places where, in earlier releases, a literal was required are:
The length of the constraint in a VARCHAR2 declaration.
The precision and scale in a NUMBER declaration.
Now you can use expressions, but the values must allow computation at compile time.
You can now write PL/SQL programs so that the intention is self-evident without comments. You can also change PL/SQL programs to reflect changed requirements by making changes at far fewer sites. The canonical example is the VARCHAR2 that holds the text of a simple SQL identifier. This needs to be 128 bytes plus 2 additional bytes; 128 bytes for the name, and 2 bytes to enable double quoting.
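A sketch of that canonical example using a static expression for the VARCHAR2 constraint (the names are illustrative):

    CREATE OR REPLACE PACKAGE name_util IS
      max_name_len CONSTANT PLS_INTEGER := 128;
      -- length computed at compile time: the name plus two enclosing double quotes
      SUBTYPE identifier_t IS VARCHAR2(max_name_len + 2);
    END name_util;
    /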
Rolling Back Redefinition:
There is a new ROLLBACK parameter for the FINISH_REDEF_TABLE procedure that tracks DML on a newly redefined table so that changes can be easily synchronized with the original table using the SYNC_INTERIM_TABLE procedure.
There is also a new ROLLBACK procedure for the DBMS_REDEFINITION package that initiates the swapping of the redefined table with the original table, therefore effectively rolling back the redefinition changes and returning the table to its original state.
If the results of a redefinition are not acceptable (for example, a performance slowdown when accessing the newly redefined table), then the redefinition changes can be easily rolled back, saving the DBA the time of performing another redefinition to undo the table changes.
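A hedged sketch of the flow (the schema, table names, and the enable_rollback parameter name are assumptions based on the description above):

    BEGIN
      -- finish the redefinition, keeping the original table synchronized
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname => 'SCOTT', orig_table => 'T', int_table => 'T_INT',
        enable_rollback => TRUE);   -- assumed parameter name

      -- later, if the redefined table misbehaves, sync and swap back
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'T', 'T_INT');
      DBMS_REDEFINITION.ROLLBACK('SCOTT', 'T', 'T_INT');
    END;
    /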
Redefinition Progress Monitoring:
The new V$ONLINE_REDEF view displays runtime information related to the current redefinition procedure being executed based on a redefinition session identifier.
This new view enables DBAs to monitor the progress of a redefinition session and to take corrective action if needed.
Optimizing Batch Update During Redefinition:
Redefinition requires ARCHIVELOG mode to be enabled. However, the new DBMS_REDEFINITION.EXECUTE_UPDATE procedure executes UPDATE statements in direct insert mode, eliminating redo logging and the associated overhead during that period. Because no redo is logged during that period, be aware that the redefinition and UPDATE changes cannot be recovered using media recovery; a database or tablespace backup must be taken after the redefinition completes to preserve recoverability.
Customers who run predictable batch updates (for example, in a data warehouse) in direct insert mode to avoid redo overhead and its impact on transaction throughput can now run those batch updates in the same manner during a redefinition session, reducing the overall time to complete the redefinition.
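A hedged sketch (the exact EXECUTE_UPDATE signature is an assumption, and the statement text is illustrative):

    BEGIN
      -- run a batch UPDATE in direct insert mode during an active redefinition;
      -- no redo is generated, so take a backup after the redefinition completes
      DBMS_REDEFINITION.EXECUTE_UPDATE(
        'UPDATE scott.t SET status = ''ARCHIVED''
         WHERE  order_date < DATE ''2016-01-01''');
    END;
    /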
Materialized Views: Fast Dependent Materialized View Refresh During Redefinition:
The new SYNC_INTERIM_TABLE procedure for the DBMS_REDEFINITION package enables incremental refresh of dependent materialized views on the interim table. The SYNC_INTERIM_TABLE procedure refreshes the materialized view as part of its execution.
Previously, all dependent materialized views on the source table had to be fully refreshed after the redefinition completed, delaying the point at which the table was fully usable. Refreshing the materialized views incrementally during the redefinition process eliminates the complete refresh at the end.
Online Conversion of a Nonpartitioned Table to a Partitioned Table:
Nonpartitioned tables can be converted to partitioned tables online. Indexes are maintained as part of this operation and can be partitioned as well. The conversion has no impact on the ongoing DML operations.
The online conversion of a nonpartitioned table to a partitioned table enables any application to adopt partitioning without application downtime. Customers can adopt partitioning for any system and evolve tables as needed to benefit from the partitioning for large tables.
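A minimal sketch of an online conversion (the table, column, and partitioning scheme are illustrative):

    ALTER TABLE sales MODIFY
      PARTITION BY RANGE (sale_date)
      ( PARTITION p_2016 VALUES LESS THAN (DATE '2017-01-01'),
        PARTITION p_max  VALUES LESS THAN (MAXVALUE) )
      ONLINE
      UPDATE INDEXES;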
Online SPLIT Partition and Subpartition:
The partition maintenance operations SPLIT PARTITION and SPLIT SUBPARTITION can now be executed as online operations for heap-organized tables, allowing DML operations to run concurrently with the ongoing partition maintenance operation.
Allowing any partition maintenance operation to become truly online enables customers to schedule and execute all of these operations as needed, without having to plan around query-only windows. This functionality both increases application availability and simplifies application development.
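For example (the names and split boundary are illustrative):

    ALTER TABLE orders
      SPLIT PARTITION p_old AT (DATE '2017-01-01')
      INTO (PARTITION p_2016, PARTITION p_2017)
      ONLINE;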
Online Table Move:
Nonpartitioned tables can be moved as an online operation without blocking any concurrent DML operations. A table move operation now also supports automatic index maintenance as part of the move.
Data maintenance on nonpartitioned tables no longer requires a maintenance window, since it does not impact any DML or query operations.
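A minimal sketch (the table and tablespace names are illustrative; indexes are maintained automatically as part of the online move):

    ALTER TABLE quotes MOVE ONLINE TABLESPACE hist_data;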
Partitioning: Table Creation for Partition Exchange:
A new DDL command allows the creation of a table that exactly matches the shape of a partitioned table and is therefore eligible for a partition or subpartition exchange of a partitioned table. Note that indexes are not created as part of this command.
Creating a table that qualifies for a partition or subpartition exchange can be a tedious task for older tables that have had various structural changes and reorganizations. With this new DDL, the task becomes very simple and straightforward to implement. It also adds some implicit business context to such an operation, as compared to a CREATE TABLE AS SELECT command.
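A sketch of the new command followed by the exchange it enables (the names are illustrative):

    -- create a table whose shape exactly matches the partitioned table
    CREATE TABLE sales_exchange FOR EXCHANGE WITH TABLE sales;

    -- the new table qualifies for a partition exchange (indexes must be added separately)
    ALTER TABLE sales EXCHANGE PARTITION p_2016 WITH TABLE sales_exchange;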
Partitioning: Filtered Partition Maintenance Operations:
Partition maintenance operations can now be combined with data filtering. For example, a partition can be compressed and moved to a different tablespace, but only the data satisfying the specific filter criteria is actually moved.
Partition maintenance operations with data filtering combine two of the most common data maintenance operations. This combination not only makes the partition maintenance operation more flexible and powerful, but also makes it more performant and less resource intensive compared to the two separate data management operations.
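A sketch of a filtered maintenance operation (the names and predicate are illustrative):

    -- move the partition, keeping only the rows that satisfy the filter
    ALTER TABLE orders
      MOVE PARTITION p_2016 TABLESPACE archive_ts
      INCLUDING ROWS WHERE order_state = 'CLOSED';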
Optimizer Statistics Advisor:
The database provides a built-in mechanism to track and analyze how statistics are collected. Based on the tracked information and the current best practices, the database reports problems with the current statistics collection and suggests changes to the statistics collection.
Enabling the database to collect historical information about statistics collection and to suggest changes lets customers fine-tune their statistics collection and adopt best practices for optimizer statistics.
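A hedged sketch of driving the advisor through DBMS_STATS (the task name is illustrative; these calls are functions, so their return values are captured):

    DECLARE
      tname VARCHAR2(128);
      rep   CLOB;
    BEGIN
      tname := DBMS_STATS.CREATE_ADVISOR_TASK('stats_adv_1');
      tname := DBMS_STATS.EXECUTE_ADVISOR_TASK('stats_adv_1');
      rep   := DBMS_STATS.REPORT_ADVISOR_TASK('stats_adv_1');
      -- inspect rep for findings and recommended changes
    END;
    /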
Enhancing SQL Plan Management:
SQL Plan Management (SPM) leverages a larger pool of plan information (including Automatic Workload Repository (AWR)) as a source for SQL plan baselines. SPM is also enhanced to provide a more selective plan capturing and easier verification for customers.
Customers can tailor SQL plan capture and verification to their specific application requirements.
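For example, plans mined from AWR might be loaded as baselines along these lines (a sketch; the snapshot range is illustrative):

    DECLARE
      n PLS_INTEGER;
    BEGIN
      -- load plans captured between two AWR snapshots as SQL plan baselines
      n := DBMS_SPM.LOAD_PLANS_FROM_AWR(begin_snap => 100, end_snap => 120);
    END;
    /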
Scan Rate and In-Memory Columnar Statistics:
DBMS_STATS now supports external table scan rates and In-Memory Column Store (IM column store) statistics.
If the database uses the In-Memory Column Store, you can set im_imcu_count to the number of In-Memory Compression Units (IMCU) in the table or partition, and set im_block_count to the number of blocks in the table or partition. For an external table, scan rate specifies the rate at which data is scanned in MB/second.
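A hedged sketch of setting these statistics manually via DBMS_STATS.SET_TABLE_STATS (the scanrate parameter name is an assumption; the IM parameter names follow the attributes above):

    BEGIN
      DBMS_STATS.SET_TABLE_STATS(
        ownname  => 'SCOTT',
        tabname  => 'SALES_EXT',         -- an external table
        scanrate => 200);                -- MB/second; assumed parameter name
      DBMS_STATS.SET_TABLE_STATS(
        ownname        => 'SCOTT',
        tabname        => 'SALES',
        im_imcu_count  => 64,            -- number of IMCUs in the segment
        im_block_count => 8192);         -- number of blocks in the segment
    END;
    /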
Band Join Enhancements:
A band join is a special type of nonequijoin in which key values in one data set must fall within the specified range (“band”) of the second data set. When the database detects a band join, the database evaluates the costs of band joins more efficiently, avoiding unnecessary scans of rows that fall outside the defined bands.
In most cases, optimized performance is comparable to an equijoin.
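For example, the following query joins each employee to others within a 100-unit salary band (the schema is illustrative):

    SELECT e1.last_name, e2.last_name
    FROM   employees e1, employees e2
    WHERE  e1.salary BETWEEN e2.salary - 100 AND e2.salary + 100;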
Cursor-Duration Temporary Tables Cached in Memory:
Complex queries often process the same SQL fragment (query block) multiple times to answer a business question. The results of such query blocks can be stored internally, as cursor-duration temporary tables, to avoid processing the same fragment repeatedly. With this new functionality, these temporary tables can reside completely in cache without the need to write them to disk.
Caching intermediate, partial results in memory increases the performance of complex, mission-critical operations and optimizes I/O resource consumption.
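As an illustration only, a query block of this shape is the kind of fragment the database can materialize as a cursor-duration temporary table (the MATERIALIZE hint merely forces the behavior for demonstration; the schema is illustrative):

    WITH region_totals AS (
      SELECT /*+ MATERIALIZE */ region, SUM(amount) AS total
      FROM   sales
      GROUP  BY region
    )
    SELECT a.region, a.total
    FROM   region_totals a, region_totals b   -- the fragment is reused, not recomputed
    WHERE  a.total > b.total * 0.5;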
Text: Partition-Specific Near Real-Time Indexes:
The option to use near real-time indexes (the STAGE_ITAB setting) can now be specified at a partition level.
Partitions that are frequently updated can have near real-time indexing switched on, while less frequently updated partitions can have it turned off.
Partitioning: Auto-List Partitioning:
The database automatically creates a separate (new) partition for every distinct partition key value of the table.
Auto-list partitioning removes the management burden of manually maintaining the partition list for tables with a large number of distinct key values that require individual partitions. It also automatically copes with unplanned partition key values without the need for a DEFAULT partition.
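A minimal sketch (the schema is illustrative): inserting a row with a new region value silently creates a partition for it:

    CREATE TABLE orders_by_region
    ( order_id NUMBER,
      region   VARCHAR2(20) )
    PARTITION BY LIST (region) AUTOMATIC
    ( PARTITION p_east VALUES ('EAST') );

    INSERT INTO orders_by_region VALUES (1, 'NORTHWEST');  -- auto-creates a partition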
Fine-Grained Cursor Invalidation:
Cursor invalidations are more fine-grained, reducing the need for cursor invalidation and recompilation. Where possible, cursor recompilation is also deferred, avoiding spikes in hard parses.
With more fine-grained cursor invalidation and deferred recompilation, applications become more available and do not incur sudden increases in CPU utilization for cursor recompilation.
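For example, DDL can now request deferred invalidation explicitly (the names are illustrative):

    -- dependent cursors are not invalidated immediately; recompilation is deferred
    CREATE INDEX orders_date_ix ON orders (order_date) DEFERRED INVALIDATION;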
Partitioning: Read-Only Partitions:
Partitions and subpartitions can be individually set to a read-only state, which disables DML operations on those read-only partitions and subpartitions. This is an extension of the existing read-only table functionality.
Read-only partitions and subpartitions enable fine-grained control over DML activity. This enhances the data management capabilities of partitioned tables.
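For example (the names are illustrative):

    ALTER TABLE orders MODIFY PARTITION p_2015 READ ONLY;
    -- DML against p_2015 now fails, while other partitions remain writable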
Partitioning: Multi-Column List Partitioning:
List partitioning functionality is expanded to enable multiple partition key columns.
Using multiple columns to define the partitioning criteria for list partitioned tables enables new classes of applications to benefit from partitioning.
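A minimal sketch of a multi-column partition key (the schema is illustrative):

    CREATE TABLE sales_by_region_channel
    ( sale_id NUMBER,
      region  VARCHAR2(10),
      channel VARCHAR2(10) )
    PARTITION BY LIST (region, channel)
    ( PARTITION p_east_direct VALUES (('EAST','DIRECT')),
      PARTITION p_other       VALUES (DEFAULT) );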
Tracking Index Usage:
A new view, V$INDEX_USAGE_INFO, provides information about the use of indexes in the database. It shows whether indexes are used and how frequently, which helps in evaluating indexing strategies in the database.
Fine-grained information about which indexes are used and how frequently allows you to eliminate infrequently used indexes. This results in better database performance and more effective system utilization.
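A hedged sketch of checking usage (the per-index detail is assumed here to live in the companion DBA_INDEX_USAGE view, and its column names are assumptions):

    -- tracking status and metadata
    SELECT * FROM v$index_usage_info;

    -- per-index counters (view and column names assumed)
    SELECT name, total_access_count, last_used
    FROM   dba_index_usage
    ORDER  BY total_access_count;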
Expression Tracking:
SQL statements commonly include expressions such as "+" or "-"; more complicated examples include PL/SQL functions or SQL functions such as LTRIM and TO_NUMBER. Complicated expressions involving multiple columns or functions make it difficult for the optimizer to accurately estimate selectivity, resulting in suboptimal plans. A repository now maintains usage information about expressions, identified during compilation and captured during execution, and processing more information about expressions and their usage helps the optimizer establish better query execution plans.
Customers benefit from improved query performance for statements that include expressions.
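A hedged sketch of inspecting the repository (the DBA_EXPRESSION_STATISTICS view name and its columns are assumptions based on the feature description):

    SELECT table_name, expression_text, evaluation_count
    FROM   dba_expression_statistics
    WHERE  owner = 'SCOTT'
    ORDER  BY evaluation_count DESC;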