Enterprise organizations are being challenged to meet BYOD (Bring Your Own Device) and cloud computing operational requirements, in addition to a substantial increase in application development, database processing, storage, and data retention regulations.
Oracle Database 11g leverages hardware capacity and the reduced cost of memory to address storage bottlenecks and application performance. Improved caching utilizes an expanded memory footprint to accelerate query processing. Shared execution plans allow cached results to be shared, in whole or in part, between sessions and SQL statements. The Server Result Cache enables query results to be cached in memory and reused during subsequent executions of the same query, bypassing standard query processing and returning results faster.
The Server Result Cache optimizes SQL execution by decreasing wait times for both physical and logical I/O, fetching results directly from memory. The memory dedicated to the result cache is located in the System Global Area and is managed by the Automatic Shared Memory Management infrastructure. The result cache is enabled by setting the maximum size of the cache with the RESULT_CACHE_MAX_SIZE parameter. With global and fine-grained controls, there are several options for integrating the result cache into the system architecture; usage is defined at the system, session, table, and statement levels.
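As a sketch of these controls, the result cache might be sized at the system level and then requested for an individual statement with the RESULT_CACHE hint; the parameter and hint names are as documented for Oracle Database 11g, while the size, table, and column values below are illustrative only:

```sql
-- size the server result cache in the SGA (value is illustrative)
alter system set result_cache_max_size = 64M scope = both;

-- request caching for a single query via the RESULT_CACHE hint;
-- subsequent executions can be satisfied from the cached result
select /*+ RESULT_CACHE */ department_id, count(*)
from employees
group by department_id;
```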
The OCI (Oracle Call Interface) Client Cache enables the caching of query results on client machines. It utilizes per-process memory on the OCI client to store query result sets. Because the cache resides in OCI client process memory, its contents can be shared across multiple sessions and threads. Result sets are fetched directly from the cache rather than having the server execute the query repeatedly. The performance improvement results from reduced server CPU utilization, fewer SQL calls, and fewer round-trips between the client and the database server. The Client Cache is optimized for read-only queries.
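A minimal sketch of enabling the client cache, using the server-side initialization parameters documented for 11g; both are static parameters, and the sizes shown are illustrative:

```sql
-- per-process client result cache size; 0 (the default) disables it
alter system set client_result_cache_size = 32M scope = spfile;

-- maximum staleness, in milliseconds, before cached results are refreshed
alter system set client_result_cache_lag = 3000 scope = spfile;
```

Because these are static parameters, the instance must be restarted for the settings to take effect.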
At the application server level, database connection pooling allows applications to use a reduced number of database sessions for servicing application end-users. Connection pooling is designed to improve application performance and scalability by limiting the overhead associated with database session creation and reducing database session memory utilization. The Database Resident Connection Pool is used to scale session connection pooling managed by the Oracle database.
With the Database Resident Connection Pool, clients connect to CMON (the Connection Monitor) instead of a dedicated server process. CMON manages the server-side connection pool functionality and its background processes. Whenever possible, clients accessing the database reuse previously allocated sessions. The client caches a persistent connection to CMON for use when the application requests database connections; when the application closes a connection, the dedicated server process is released back to CMON and returned to the pool.
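A hedged sketch of enabling the pool with the DBMS_CONNECTION_POOL package documented for 11g; the connect-string fragments in the comments show the (SERVER=POOLED) attribute clients use to request a pooled session, with host and service names illustrative:

```sql
-- start the default server-side connection pool
exec dbms_connection_pool.start_pool;

-- clients then request a pooled server in the connect descriptor, e.g.:
--   (DESCRIPTION=(ADDRESS=...)(CONNECT_DATA=(SERVICE_NAME=orcl)(SERVER=POOLED)))
-- or with the easy connect syntax:  host:port/service:POOLED

-- stop the pool when it is no longer required
exec dbms_connection_pool.stop_pool;
```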
Introduced in Oracle Database 9i, data compression was used primarily with data warehouses. Data management limitations in compressed tables made them unsuitable for OLTP (Online Transaction Processing) workloads. Oracle Database 11g addressed these limitations by allowing INSERT, UPDATE, and DELETE DML statements on compressed tables. Table compression provides both a reduction in storage capacity demand and an increase in application performance.
Compression is CPU intensive; there is latency associated with decompressing data before use, which can be an issue in OLTP environments. Oracle Database 11g offers a separately licensed Advanced Compression Option:
create table my_compressed_table
( col1 number(20), col2 varchar2(300), ... )
compress for all operations
The compress for all operations clause enables compression during DML activity. Unlike previous versions, compression occurs for all DML operations, not just direct path inserts. The impact on performance is minimal because rows are initially inserted uncompressed. Compression is performed at the block level, not the row level: once a threshold number of rows has been inserted or updated, the compression algorithm is applied to all uncompressed rows in the block. The compression threshold is defined within the RDBMS code.
Oracle Hybrid Columnar Compression on Exadata applies a different method of organizing data within the database: a combination of row and columnar methods is used for storing data. This hybrid approach achieves the compression benefits of columnar storage while avoiding the performance degradation of a pure columnar format, providing both compression and performance in conjunction with the facilities of the Oracle database. Although optimized for scan-level access, row data is self-contained within compression units, which allows efficient row-level access, with entire rows typically retrieved in a single I/O.
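As a sketch, hybrid columnar compression is requested with the COMPRESS FOR QUERY and COMPRESS FOR ARCHIVE clauses documented for Exadata storage; the table and column definitions below are illustrative only:

```sql
-- warehouse-style compression, optimized for scan performance
create table sales_hcc
( sale_id number, sale_date date, amount number(10,2) )
compress for query high;

-- maximum compression, for rarely accessed archival data
create table sales_archive
( sale_id number, sale_date date, amount number(10,2) )
compress for archive high;
```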
The Oracle Streams Performance Advisor consists of the DBMS_STREAMS_ADVISOR_ADM PL/SQL package and a collection of data dictionary views. The ANALYZE_CURRENT_PERFORMANCE procedure in the DBMS_STREAMS_ADVISOR_ADM package gathers information about the Oracle Streams topology and the performance of the Oracle Streams components within the topology.
The information gathered by the Oracle Streams Performance Advisor can then be examined by querying the Streams topology data dictionary views.
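A minimal sketch of running the advisor and then inspecting one of its views; DBA_STREAMS_TP_PATH_BOTTLENECK is one of the topology views documented for 11g, though which views to query depends on the analysis required, and the column list below should be checked against the documentation:

```sql
-- gather a snapshot of Streams topology and component performance
exec dbms_streams_advisor_adm.analyze_current_performance;

-- examine identified bottleneck components for each stream path
select path_id, component_id, component_name, component_type
from dba_streams_tp_path_bottleneck
where bottleneck_identified = 'YES';
```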
Oracle Database 10g and 11g provide tools for identifying and correcting performance bottlenecks.
Performance and scalability of the information infrastructure is both a proactive and a reactive process: gathering and analyzing data in order to serve customers, improve operations, and manage the bottom line. Oracle Database 11g software is designed to integrate with Oracle Corporation hardware, scale to thousands of processors, and handle massive amounts of structured relational and unstructured data.
Temporary tablespaces can be shrunk online. A tablespace can be reduced to a lower limit by specifying the KEEP clause; if KEEP is not specified, the tablespace is reduced to a level that satisfies its other storage attributes, freeing and returning to the operating system all allocated extents not currently in use. Because this is an online operation, sessions and queries running at the time are not affected. Either the entire temporary tablespace or only a specific temp file can be shrunk.
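A sketch of the 11g syntax, with illustrative tablespace name, file path, and size values:

```sql
-- shrink the whole temporary tablespace, retaining at least 100M
alter tablespace temp shrink space keep 100M;

-- shrink a single temp file as far as current usage allows
alter tablespace temp shrink tempfile '/u01/oradata/orcl/temp01.dbf';
```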
Tracing can be performed within a session by issuing the ALTER SESSION SET SQL_TRACE = TRUE command. This creates a standard trace in which SQL statement execution statistics and execution plans are recorded, but not the values of bind variables or the time spent waiting for various events. To generate a trace that includes this additional detail, the DBMS_SESSION package is used.
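A sketch of both approaches; SESSION_TRACE_ENABLE is the DBMS_SESSION procedure documented for 10g and 11g that adds wait event and bind variable information to the trace:

```sql
-- standard trace: execution statistics and plans only
alter session set sql_trace = true;

-- extended trace: also capture wait events and bind variable values
exec dbms_session.session_trace_enable(waits => true, binds => true);

-- stop tracing for the session
exec dbms_session.session_trace_disable;
```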
Under certain circumstances, Oracle can collect statistics dynamically. This is useful when a table has no statistics, or when the estimated elapsed time for the SQL is high enough that the overhead of sampling is likely to yield a net reduction in run time. Dynamic sampling is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter, which takes a value from 0 to 10. At 0, no sampling is done, while 10 reads all the blocks in any table that has missing or incomplete statistics or that has filter conditions matching two or more columns. The default setting for OPTIMIZER_DYNAMIC_SAMPLING is 2, which instructs the optimizer to sample a small number of blocks for unanalyzed tables only.
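A sketch of adjusting the sampling level at the session level, or for a single statement via the DYNAMIC_SAMPLING hint; the level, table, and filter values are illustrative:

```sql
-- raise the sampling level for the current session (illustrative value)
alter session set optimizer_dynamic_sampling = 4;

-- or request a specific level for one statement, keyed to the table alias
select /*+ DYNAMIC_SAMPLING(e 4) */ *
from employees e
where department_id = 10 and job_id = 'ST_CLERK';
```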
Oracle 9i, 10g, and 11g databases are registered trademarks of Oracle Corporation.