Nigel Noble's Oracle Blog

08/01/2015

“log file sync” and the MTTR Advisor

Filed under: 11gR2, Performance — Nigel Noble @ 6:12 pm

I recently investigated a performance problem on an Oracle 11.2 OLTP trading system and, although we still don’t fully understand the issue (or which versions of Oracle it affects), I thought I would share what we found and how we found it. We had a hardware failure on the database server; within 30 seconds the database had automatically been restarted on an idle, identical member of the cluster and the application continued on the new database host. A few days later I happened to notice a change in the LGWR trace file: the Log Writer trace was showing far more “kcrfw_update_adaptive_sync_mode” messages than normal.
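If you want to check for the same messages on your own system, the sketch below (assuming 11.2, where V$PROCESS exposes a TRACEFILE column) is one way to locate the LGWR trace file; a grep on the database host is then the obvious next step.

-- Locate the LGWR trace file (sketch; assumes 11.2 and V$PROCESS.TRACEFILE)
select p.tracefile
from   v$process p
where  p.program like '%(LGWR)%';

-- then, on the database host, something like:
--   grep -c kcrfw_update_adaptive_sync_mode <tracefile_from_above>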



10/01/2013

11.2.0.3 Strange statistic, large transactions, dirty buffers and “direct path read”

Filed under: 11gR2, Performance — Nigel Noble @ 6:27 pm

Summary

I recently investigated an IO performance “spike” on a large 11.2.0.3 transactional system and I thought I would cover some of the interesting issues we found. I am going to detail the observations made on our production and test systems rather than attempt to cover how other versions of Oracle behave. The investigation also uncovers a confusing database statistic which we are currently discussing with Oracle Development so they can decide if this is an Oracle coding bug or a documentation issue.

The initial IO issue

We run a simple home-grown database monitor which watches database wait events and sends an email alert if it detects either a single session waiting on a non-idle event for a long time, or the total number of database sessions concurrently waiting going above a defined threshold. The monitor generates a number of false alerts, but it also draws our attention to some of the more interesting events.
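The real monitor is site specific, but the kind of check it performs is roughly the sketch below; the restriction to user sessions and the idea of an alert threshold are my own illustrative choices rather than the production code.

-- Hypothetical sketch of the monitor's check: user sessions currently
-- waiting on non-idle wait events (alert if the count exceeds a threshold)
select count(*) concurrent_waiters
from   v$session
where  type = 'USER'
and    status = 'ACTIVE'
and    wait_class <> 'Idle';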

One day the monitor alerted me to two separate databases waiting on “log file sync” during exactly the same few seconds, affecting many hundreds of sessions. Both databases share the same storage array, so the obvious place to start was the storage statistics. We found a strange period lasting around 10 seconds when both databases showed a large increase in redo write service times, including a few seconds when no IO was written at all. The first database we looked at showed the increase in disk service times for a very similar workload. The second system showed a large increase in data file writes (500MB/sec) but no matching increase in redo log writes. It seems the first database was slowed down by the second database flushing 5GB of data.
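Away from the storage array, one way to see a spike only a few seconds long from the database side is ASH; the sketch below (which assumes a Diagnostics Pack licence and uses an arbitrary one-hour window) counts sessions waiting on “log file sync” per second.

-- Sketch: sessions waiting on 'log file sync' per second over the last hour
select to_char(sample_time, 'hh24:mi:ss') sample_second,
       count(*)                           sessions_waiting
from   v$active_session_history
where  event = 'log file sync'
and    sample_time > sysdate - 1/24
group by to_char(sample_time, 'hh24:mi:ss')
order by 1;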

Where did 5GB of data file writes come from, and what triggered it?

Looking at the database, we knew there were no corresponding redo writes and no obvious large SQL statements reading or writing. We confirmed the writes were real and had come from the database rather than from something outside it.
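Confirming the write volume from inside the database can be done in several ways; a rough sketch (again assuming AWR is licensed, and remembering that this statistic covers all database file writes, not just data files) is to difference “physical write total bytes” across snapshots.

-- Sketch: MB written by the database per AWR snapshot
-- (single-instance sketch; partition the LAG by instance for RAC)
select sn.begin_interval_time,
       round((st.value - lag(st.value) over (order by st.snap_id)) / 1024 / 1024) mb_written
from   dba_hist_sysstat  st
join   dba_hist_snapshot sn
  on   sn.snap_id         = st.snap_id
 and   sn.dbid            = st.dbid
 and   sn.instance_number = st.instance_number
where  st.stat_name = 'physical write total bytes'
order by st.snap_id;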


05/07/2010

10.2.0.5, KEEP pool / Serial Direct Read

Filed under: 10.2.0.5, 11gR1, Performance — Nigel Noble @ 11:36 am

Jonathan Lewis made reference to an 11g bug related to using a KEEP pool in his note Not KEEPing. Oracle 11g introduced a new feature called adaptive serial direct path reads, which allows large “full scan” disk reads to be performed using “direct path reads” rather than through the buffer cache. In many cases “direct IO” can give a significant increase in performance when reading data for the first time (from disk), however it can be significantly slower if subsequent queries could have been serviced from the buffer cache. The bug Jonathan references (Bug 8897574) causes problems if you assign any large object to a KEEP pool because, by default, 11g reads large objects using the new direct IO feature and so never places the object in the KEEP pool. The whole point of using the KEEP pool is to identify objects you do want to protect and keep in a cache.

The 10.2.0.5 patchset has also back-ported the same direct read feature that is new in 11g, although I don’t know if the rules are the same as in 11g. The site where I work makes significant use of KEEP pools and has also spent some time investigating serial direct IO vs. buffer cache IO.

I want to use this blog entry to explore a number of related issues, but also to demonstrate that the 11g bug Jonathan identified seems to also exist in the 10.2.0.5 patchset (and 11gR2). This blog item will cover:

  • Brief reference relating to the 11g “adaptive serial direct path read”
  • The 10.2.0.5 implementation and how to switch it on (a brief sketch follows this list)
  • 10.2.0.5 demonstration showing the relative difference for different types of IO vs read from cache
  • 10.2.0.5 demonstration which shows the KEEP pool bug also exists (but not by default)
  • Some real life comparison figures of disk reads via “direct path read” and via “buffer cache” to show why the “adaptive serial direct path read” feature is worth exploring in more detail.   
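
As a taster before the detail, here is a minimal sketch of the two pieces involved. The KEEP pool assignment is documented syntax, the table name is made up, and the hidden parameter “_serial_direct_read” is my assumption about how the 10.2.0.5 back-port is switched on, so treat it as something to test and discuss with Oracle Support rather than a recommendation.

-- Assign a (hypothetical) large table to the KEEP pool (documented syntax)
alter table big_history_table storage (buffer_pool keep);

-- Assumed switch for serial direct reads on 10.2.0.5 (hidden parameter;
-- change only after testing and with Oracle Support's agreement)
alter session set "_serial_direct_read" = true;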

 


17/05/2010

Monitoring Connection Pools

Filed under: Monitoring, Performance — Nigel Noble @ 4:19 pm

The company where I work runs a large web infrastructure with many different Java-based applications and servers. Most of these application servers connect to the database using a connection pool to manage database connections and reduce the cost of creating and destroying database sessions. Over the years we have spent a lot of time trying to get the right balance to keep session usage as smooth as possible. The usual trade-offs are:

  • Maximum connections in the pool set too low? – Can lead to requests queuing for a connection, or running out of connections altogether during peak spikes (leading to application failure).
  • Maximum connections set too high? – Can lead to “logon storms”. A sudden surge in concurrent user activity creates a huge demand for new sessions on the database server, and because the database is slowed by the surge you often get a “feedback loop” effect: slow database response means even more connection requests. Connection storms can also be triggered by small problems in the database or network (an unexpected slow network pause or database wait event), after which a sudden logon storm hits the database host and the growing number of connections slows the database even more (I have seen this a lot in the past).
  • Minimum connections in the pool set too low? – Can lead to the added cost of having to create new database sessions at exactly the critical time you need them.
  • Idle-out time set too low? – Once you have taken the hit of creating a new session in the database, the session reaches the idle time too soon and is destroyed… only to be needed again a few minutes later.
  • Idle-out time set too high? – Say you have suffered a minor logon storm; if the idle time is set too high you could have all these extra sessions hanging around for a very long time (there have been a number of good presentations by the Oracle Real World Performance group on the benefits of reducing connections on a database server).
  • Minimum and maximum set to the same value? – There is a lot to be said for running with a fixed number of sessions that supports both your average usage and your peaks (the trick is finding the correct number to support the peaks).

A very simple session monitoring script

Before I talk about the script itself, I thought I would give an example of why I wrote it in the first place and how it was used; then I will show the script and explain why we still use it today.

The problem

When I first joined my company, I could never get my head around the number of sessions on the database compared to the number of “ACTIVE” sessions seen in v$session. We seemed to have many more connections from the application servers than sessions doing work. Every once in a while the company would review the peak connection pool settings (per box) and adjust them. The kind of conversations we would have around the office were:

 “We are using the maximum 30 connections per box, let’s make it 35 per box for growth”

The next year we would say

“We are using all the 35 per box, let’s make it 40 per box…..  for growth”

All the time I kept thinking that maybe v$session status=”ACTIVE” was misleading, or that maybe there was a problem with the connection pools (we had a 30 minute idle-out time, yet we always ran at the maximum number of connections while very few were ever seen to be doing work).
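A quick sanity check of that ratio, long before the script below existed, could look something like this (a hypothetical sketch, not the script this post is actually about):

-- Pooled connections versus sessions actually working, per application host
select machine,
       count(*)                                           total_sessions,
       sum(case when status = 'ACTIVE' then 1 else 0 end) active_sessions
from   v$session
where  type = 'USER'
group by machine
order by machine;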

Back to the script

client_info.sql

 

set trimspool on
set pagesize 1000
set linesize 190
column since format a20
column logon format a20
column event format a25 truncate
column machine format a10 truncate
column status format a10 truncate
column username format a10 truncate
column n format 9999.99

break on machine on report  skip 1

compute sum of sess_count on machine
compute sum of sess_count on report

set time on
spool client.txt append

select 'CFROM' tag,
  to_char(sysdate,'hh24:mi:ss') when,
        machine,
        event,
        seconds_in_wait,
        sql_id,
        prev_sql_id,
        count(*) sess_count,
  to_char(sysdate - (SECONDS_IN_WAIT / (60 * 60 * 24)),'dd-mon-yy hh24:mi:ss') since,
--      next two lines useful if trying to predict a concurrent spike.   1200 being 20 minutes
--      Left over from site specific issue but could be useful.
--      next is last active time + 20 minutes
--      n is a count down to next predicted spike
-- to_char(sysdate - ( (seconds_in_wait ) / (60 * 60 * 24)) + (1200 / (60 * 60 * 24)),'dd-mon-yy hh24:mi:ss') NEXT ,
-- ((sysdate - (sysdate - ( (seconds_in_wait ) / (60 * 60 * 24)) + (1200 / (60 * 60 * 24)) ) ) * (60 * 60 * 24) ) / 60 n,
        username,
        status,
        state
from v$session
group by machine,
         event,
         seconds_in_wait,
         sql_id,
         prev_sql_id,
         username,
         status,
         state
order by machine,
         username,
         event,
         seconds_in_wait,
         sql_id,
         prev_sql_id,
         status,
         state
/

One day I wrote the above script and all became very clear. Something was sending a very fast (150 microsecond) SQL statement concurrently from each application server to every session in the connection pool. The request rate could easily have been serviced by a handful of sessions, but because the requests arrived at exactly the same time, every session was used.

   

note: This output has been faked; I was not able to find a real example of the original issue we had (it was a few years back).
note: All host names and user names have also been faked...... so I can keep my job!
Columns            Description
-----------------  --------------------------
TAG                Tag used so that the info can be grepped from the log file
WHEN               Current system time when the data was collected
MACHINE            Host where the session originates
EVENT              Current database wait event
SECONDS_IN_WAIT    Seconds in wait
SQL_ID             Current SQL statement
PREV_SQL_ID        Previous SQL statement last run on the session
SESS_COUNT         Total number of sessions within the group (sql_id, event, seconds in wait etc.)
SINCE              What time the session has been idle since (sysdate - seconds in wait)
USERNAME           User name of the sessions
STATUS             Session status (ACTIVE or INACTIVE)
STATE              Session wait state
TAG   WHEN     MACHINE    EVENT                     SECONDS_IN_WAIT SQL_ID        PREV_SQL_ID   SESS_COUNT SINCE                USERNAME   STATUS     STATE
----- -------- ---------- ------------------------- --------------- ------------- ------------- ---------- -------------------- ---------- ---------- -------------------
CFROM 13:55:17 abcabc01.i SQL*Net message from clie               0 2j7pff3tfuzzz 2j7pff3tfuzzz          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               qw4bv6jwup5ab          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               5tarshstnypzv          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               1               8zv7177vuc8dt          3 14-may-10 13:55:16   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               5               4616pfpak8akh          1 14-may-10 13:55:12   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              22               amsmuu1pp1w74          1 14-may-10 13:54:55   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             185               3q4bv6jx8wup5         30 14-may-10 13:52:12   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             433               f3tg1gz4zdadm          1 14-may-10 13:48:04   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             873               cmh3vh4pjs7q7          1 14-may-10 13:40:44   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40
CFROM 13:55:17 abcabc02.i SQL*Net message from clie               0 zzz0ahx447fpr zzz0ahx447fpr          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               3q4bv6jx8wup5          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               5tarshstnypzv          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               f3tg1gz4zdadm          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               6               cmh3vh4pjs7q7          1 14-may-10 13:55:11   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               8               8zv7177vuc8dt          1 14-may-10 13:55:09   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              11               cmh3vh4pjs7q7          1 14-may-10 13:55:06   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              12               f3tg1gz4zdadm          1 14-may-10 13:55:05   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              80               8zv7177vuc8dt          1 14-may-10 13:53:57   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              80               f3tg1gz4zdadm          1 14-may-10 13:53:57   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             138               3q4bv6jx8wup5         28 14-may-10 13:52:59   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             695               cmh3vh4pjs7q7          1 14-may-10 13:43:42   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie            1394               cmh3vh4pjs7q7          1 14-may-10 13:32:03   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40
CFROM 13:55:17 abcabc03.i SQL*Net message from clie               0 zzz0ahx447fpr zzz0ahx447fpr          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               5tarshstnypzv          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               cmh3vh4pjs7q7          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               2               5tarshstnypzv          1 14-may-10 13:55:15   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               7               8zv7177vuc8dt          2 14-may-10 13:55:10   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              14               3q4bv6jx8wup5          1 14-may-10 13:55:03   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              14               4616pfpak8akh          1 14-may-10 13:55:03   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              61               8zv7177vuc8dt          1 14-may-10 13:54:16   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             159               3q4bv6jx8wup5         31 14-may-10 13:52:38   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40
CFROM 13:55:17 abcabc04.i SQL*Net message from clie               0               3q4bv6jx8wup5         39 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               5tarshstnypzv          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40
CFROM 13:55:17 abcabc05.i SQL*Net message from clie               0 zzz0ahx447fpr zzz0ahx447fpr          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               33hsynd62ka4k          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               cmh3vh4pjs7q7          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               1 zzz0ahx447fpr zzz0ahx447fpr          2 14-may-10 13:55:16   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              10               cmh3vh4pjs7q7          1 14-may-10 13:55:07   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              31               8zv7177vuc8dt          1 14-may-10 13:54:46   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             157               amsmuu1pp1w74          1 14-may-10 13:52:40   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             318               3q4bv6jx8wup5         29 14-may-10 13:49:59   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             874               cmh3vh4pjs7q7          2 14-may-10 13:40:43   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie            1040               8zv7177vuc8dt          1 14-may-10 13:37:57   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40

Whenever I ran the script, I could see that we had spikes of sessions running the same SQL statement in the same second on any given host. Because the statement groups by “seconds_in_wait”, we might see that, say, 25 sessions had all been active 8 minutes ago with the same statement and had since done no work (waiting on “SQL*Net message from client”). When we looked across all the application hosts, each would spike at a 20 minute interval (although each host had its own time at which the spike occurred).

My company uses a number of different caching techniques in our middle-tier application. One of these was a concurrent “read-ahead” cache: a read from the cache could detect that its data was soon to expire and asynchronously request a reload of the caches in parallel. Our “read-ahead” code had been set to allow 50 threads to get data, but we only had 40 connections in the database connection pool (per box). We had a 30 minute idle-out time on the connections, but every 20 minutes exactly we would touch every connection and keep it alive. The solution was to reduce the number of threads on the “read-ahead”, without even changing the connection pool configuration. This fix automatically reduced the number of connections on the database by around 1000.

There have been so many examples where the script has helped that we now leave it collecting to a log file every 5 minutes on the more sensitive databases. It can help identify:

  • Sudden concurrent requests from one or more servers or applications; we get clues from the last SQL statement run, who ran it and when
  • Statements taking a long time on specific hosts (look for sessions staying “ACTIVE” for a long time on the same SQL)
  • Idle connections which never get returned to the connection pool correctly within the application hosts
  • User behaviour driving sudden spikes in concurrent requests from specific hosts
  • Sessions which get stuck on database waits (e.g. db links hit by networking issues, row locking etc.) and never return to the pool
  • What time an application host last sent any requests to the database

The script attempts to provide a fast summary of the connections on the database: how long they have been idle, what they last did and what they are currently waiting on. Although the script is very simple, I have found it a really good way to summarise what our connection-pooled hosts are doing.

11/05/2010

11.1.0.7 poor plsql array performance

Filed under: 11gR1, Performance — Nigel Noble @ 2:05 pm

The PL/SQL application at my site depends on very large PL/SQL arrays (several GB in size). During testing on 11.1.0.7 we found a severe performance issue when the application first loaded the arrays with data. The application used to load the array data in tens of seconds but was now taking 6 or 7 minutes to do the same work. Further investigation showed that this problem did not exist on any other version of Oracle (we tested 10.2.0.3, 10.2.0.4, 11.1.0.6 and 11.2.0.1).

A colleague wrote a test case to show the issue for Oracle support so they could identify a fix and we could request a patch:

Source of the script:
Package Header (has no body!)

CREATE OR REPLACE PACKAGE tst_pkg_array IS

   TYPE typ_rec_1 IS RECORD(
       attr1  NUMBER(12)
      ,attr2  NUMBER(12)
      ,attr3  NUMBER(12)
      ,attr4  NUMBER(12)
      ,attr5  NUMBER(12)
      ,attr6  VARCHAR2(100)
      ,attr7  VARCHAR2(100)
      ,attr8  VARCHAR2(100)
      ,attr9  VARCHAR2(100)
      ,attr10 VARCHAR2(100)
      ,attr11 DATE
      ,attr12 DATE
      ,attr13 DATE
      ,attr14 DATE
      ,attr15 DATE
      ,attr16 NUMBER(12)
      ,attr17 NUMBER(12)
      ,attr18 NUMBER(12)
      ,attr19 NUMBER(12)
      ,attr20 NUMBER(12)
      ,attr21 VARCHAR2(100)
      ,attr22 VARCHAR2(100)
      ,attr23 VARCHAR2(100)
      ,attr24 VARCHAR2(100)
      ,attr25 VARCHAR2(100));

   TYPE typ_tab_1 IS TABLE OF typ_rec_1 INDEX BY PLS_INTEGER;

   tab_test_simple typ_tab_1;

END tst_pkg_array;
/

Test script to generate timings

set timing on

spool run.log append

set time on

select * from v$version;

set serveroutput on size 100000

DECLARE

   --
   PROCEDURE pr_put_elements_into_array(i_num_elements IN NUMBER) IS

      l_rec_test tst_pkg_array.typ_rec_1;

      tab_test_local tst_pkg_array.typ_tab_1;

   BEGIN
      --
      -- start by clearing down
      tst_pkg_array.tab_test_simple.delete;
      tst_pkg_array.tab_test_simple := tab_test_local;
      --
      l_rec_test.attr1  := 10000000;
      l_rec_test.attr2  := 10000000;
      l_rec_test.attr3  := 10000000;
      l_rec_test.attr4  := 10000000;
      l_rec_test.attr5  := 10000000;
      l_rec_test.attr6  := 'ABCDEFGH';
      l_rec_test.attr7  := 'ABCDEFGH';
      l_rec_test.attr8  := 'ABCDEFGH';
      l_rec_test.attr9  := 'ABCDEFGH';
      l_rec_test.attr10 := 'ABCDEFGH';
      l_rec_test.attr11 := SYSDATE;
      l_rec_test.attr12 := SYSDATE;
      l_rec_test.attr13 := SYSDATE;
      l_rec_test.attr14 := SYSDATE;
      l_rec_test.attr15 := SYSDATE;
      l_rec_test.attr16 := 10000000;
      l_rec_test.attr17 := 10000000;
      l_rec_test.attr18 := 10000000;
      l_rec_test.attr19 := 10000000;
      l_rec_test.attr20 := 10000000;
      l_rec_test.attr21 := 'ABCDEFGH';
      l_rec_test.attr22 := 'ABCDEFGH';
      l_rec_test.attr23 := 'ABCDEFGH';
      l_rec_test.attr24 := 'ABCDEFGH';
      l_rec_test.attr25 := 'ABCDEFGH';
      --
      FOR n IN 1 .. i_num_elements LOOP
         tst_pkg_array.tab_test_simple(n) := l_rec_test;
--       Using the line below instead of the one above (ie using a local variable rather than a package state variable) resolves the issue on 11.1.0.7
--       To test that, comment out the line above, and "comment in" the line below
--         tab_test_local(n) := l_rec_test;
      END LOOP;

   END pr_put_elements_into_array;

   PROCEDURE pr_run_test(i_num_elements IN NUMBER) IS
      l_duration_secs NUMBER;
      l_start_time    NUMBER;

   BEGIN

      l_start_time := DBMS_UTILITY.get_time;
      pr_put_elements_into_array(i_num_elements => i_num_elements);
      l_duration_secs := (DBMS_UTILITY.get_time - l_start_time) / 100;
      dbms_output.put_line(i_num_elements || ' elements took ' || (l_duration_secs) || ' secs or ' ||
                           (l_duration_secs / (i_num_elements / 1000)) || ' secs per thousand');
   END pr_run_test;
BEGIN
   pr_run_test(i_num_elements => 1000);
   pr_run_test(i_num_elements => 10000);
   pr_run_test(i_num_elements => 100000);
   pr_run_test(i_num_elements => 200000);
   pr_run_test(i_num_elements => 300000);
END;
/

exit

Output on 11.1.0.7

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE    11.1.0.7.0      Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production

Elapsed: 00:00:00.40
1000 elements took .01 secs or .01 secs per thousand
10000 elements took .09 secs or .009 secs per thousand
100000 elements took 6.1 secs or .061 secs per thousand
200000 elements took 21.65 secs or .10825 secs per thousand
300000 elements took 46.55 secs or .1551666666666666666666666666666666666667
secs per thousand

PL/SQL procedure successfully completed.

Elapsed: 00:01:14.40

The above example shows the script took over a minute to complete, and you can see that the cost per thousand elements degrades sharply as more entries are added to the array (roughly .009 secs per thousand at 10,000 elements versus .155 secs per thousand at 300,000), so the total load time grows much faster than the array size.

Output on 10.2.0.4


BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

Elapsed: 00:00:00.02
1000 elements took .01 secs or .01 secs per thousand
10000 elements took .05 secs or .005 secs per thousand
100000 elements took .51 secs or .0051 secs per thousand
200000 elements took 1.02 secs or .0051 secs per thousand
300000 elements took 1.17 secs or .0039 secs per thousand

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.89

Patch details:

Patch 7671793: EXCESSIVE MEMORY USAGE WHEN USING KGHU
(the patch is available for Linux x86-64)

Output once patch was applied:

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE    11.1.0.7.0      Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production

Elapsed: 00:00:00.06
1000 elements took 0 secs or 0 secs per thousand
10000 elements took .05 secs or .005 secs per thousand
100000 elements took .55 secs or .0055 secs per thousand
200000 elements took .88 secs or .0044 secs per thousand
300000 elements took 1.25 secs or .004166666666666666666666666666666666666667
secs per thousand

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.84

You can now see that, with the patch applied, the above test takes less than 3 seconds to complete.

Although our application uses PL/SQL arrays in a very extreme way, it may be worth reviewing this patch if your application has an unexplained slowdown and is using large PL/SQL arrays AND 11.1.0.7.

I have tested this issue on 11.1.0.7.3 (the latest PSU, patch 9352179) and the problem still exists. It’s not clear if this is a generic bug or one specific to our platform (Linux x86-64). Annoyingly, I have found that the available patch (7671793) clashes with all the available 11.1.0.7 PSU patches, so a merge patch would be needed, or you could try a non-11.1.0.7 version of Oracle.
