Nigel Noble's Oracle Blog

12/07/2013

“enq: TX – contention” on SELECT with a large buffer cache and 2PC

Filed under: 11gR2 — Nigel Noble @ 12:08 pm

Note: The following is based on testing with 11.2.0.3 (I believe same issue exists within other Oracle versions).

I recently worked on an interesting problem relating to the “enq: TX – contention” wait event. There are a number of reasons for the wait but the most common reason I have come across at my site is best described by a forum posting I found by Jonathan Lewis... “select can wait for another transaction if that transaction is between the “prepare” and “commit” of a two-phase commit.” 

My site has a number of databases in the UK and Australia. We have often come across the problem when a SELECT statement tries to read a row which is involved in a distributed transaction to Australia. The problem is the round-trip latency to Australia: during the communication of the PREPARE and COMMIT phases you can have a 200ms – 300ms latency, so anyone trying to read the rows involved in the transaction can see a 300ms pause on “enq: TX – contention” waits. There are a number of tricks you can use to reduce these problems by finding ways to separate the rows the SELECT statement reads on the UK data from the rows involved in the Australian transaction. The very fact you visit the same block and evaluate the 2PC row can cause the wait (just reading a row to find it’s not actually the one you needed would do it as well). We use tricks like careful usage of indexes to ensure the reader can go directly to the UK data and never even evaluate the row involved in the Australian 2PC (partitioning can also help here too).
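As a rough diagnostic, you can look for readers stuck on the TX enqueue and, at the same time, check for global transactions sitting in the prepared state (a sketch against 11.2 dictionary views; a long-lived PREPARED state is exactly the 2PC window in which readers of those rows will queue):

```sql
-- Readers currently stuck on the TX enqueue
select sid, event, seconds_in_wait, blocking_session, sql_id
from   v$session
where  event = 'enq: TX - contention';

-- Global (2PC) transactions and their state
select formatid, globalid, state
from   v$global_transaction;
```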

A new variation of “enq: TX – contention”


28/03/2013

11.2.0.3 v$sqlstats.last_active_time stops changing and breaks AWR

Filed under: 11gR2, Monitoring — Nigel Noble @ 3:18 pm

My site uses a 3rd party SQL monitoring tool which collects data based on the Oracle view v$sqlstats. The tool collects data for all sql statements which have been executed since the previous collection, using the last_active_time column. A few months ago we noticed (after an upgrade to 11g) we would occasionally be missing some sql statements from the monitor graphs. We investigated the problem, reproduced a test case and raised a bug with Oracle, and Support has just released an 11.2 patch.

Before I explain and demonstrate the issue, I will explain what prompted me to post this blog item. Until now I always thought this just affected v$sqlstats (and my 3rd party monitoring tool) and not AWR. If you ever trace (10046) an AWR collection you will not find any references to v$sqlstats. So if the data is missing from the 3rd party tool, “I can always rely on AWR”…. or so I thought!

Well, I recently discovered that any time a sql statement drops off the 3rd party monitoring tool…… IT STOPS BEING MONITORED IN AWR’s DBA_HIST_SQLSTAT table as well. We have our AWR collection threshold set to collect as many sql statements as possible. This problem has nothing to do with the cost of the SQL statements and you could well find your most expensive sql statement just disappear from AWR for a period of time.
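One way to spot the gap (a sketch, assuming you know the sql_id of the statement that vanished) is to list the AWR snapshots in which DBA_HIST_SQLSTAT captured it and look for missing snap_ids:

```sql
-- Missing snap_ids in this output are the periods during which the
-- statement silently dropped out of AWR collection
select snap_id,
       executions_delta,
       elapsed_time_delta
from   dba_hist_sqlstat
where  sql_id = '&sql_id'
order  by snap_id;
```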

Let’s start at the very beginning….


10/01/2013

11.2.0.3 Strange statistic, large transactions, dirty buffers and “direct path read”

Filed under: 11gR2, Performance — Nigel Noble @ 6:27 pm

Summary

I recently investigated an IO performance “spike” on a large 11.2.0.3 transactional system and I thought I would cover some interesting issues found. I am going to take the approach of detailing the observations made from our production and test systems and avoid attempting to cover how other versions of Oracle behave. The investigation also uncovers a confusing database statistic which we are currently discussing with Oracle Development so they can decide if this is an Oracle coding bug or a documentation issue.

The initial IO issue

We run a simple home-grown database monitor which watches database wait events and sends an email alert if it detects either a single session waiting on a non-idle event for a long time, or the total number of database sessions concurrently waiting going above a defined threshold. The monitor can give a number of false alerts but can also draw our attention to some more interesting events.

One day the monitor alerted me to two separate databases waiting on “log file sync” waits during exactly the same few seconds, affecting many hundreds of sessions. Both databases share the same storage array, so the obvious place to start was the storage statistics. We found a strange period lasting around 10 seconds when both databases had a large increase in redo write service times, and a few seconds when no IO was written at all. The first database we looked at showed an increase in disk service times for a very similar workload. The second system showed a large increase in data file writes (500MB/sec) without any matching increase in redo log writes. It seems the first database was slowed down by the second database flushing 5GB of data.

Where did 5GB of data file writes come from, and what triggered it?

Looking at the database we knew there were no corresponding redo writes, there were no obvious large sql statements reading or writing. We confirmed these writes were real and not something outside the database.
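To confirm the writes really came from the database, a snapshot delta of the system write statistics is a quick check (a sketch; substitute your own AWR snap_ids):

```sql
-- Delta of write-related statistics between two AWR snapshots
select b.stat_name,
       e.value - b.value as delta
from   dba_hist_sysstat b
join   dba_hist_sysstat e
  on   e.stat_id = b.stat_id
 and   e.instance_number = b.instance_number
where  b.snap_id = &begin_snap
and    e.snap_id = &end_snap
and    b.stat_name in ('physical writes',
                       'physical writes direct',
                       'redo size');
```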


11/01/2011

10.2.0.5 dbms_sql leaks memory AND performance degrades

Filed under: Uncategorized — Nigel Noble @ 11:24 am

My site recently upgraded one of its databases to the 10.2.0.5 patchset and found a serious performance problem relating to code using dbms_sql. Once we had completed the upgrade, we noticed a number of data feeds to the upgraded database started to fall behind and could no longer keep up. When we stopped and restarted the feeds they appeared to speed up. After some investigation we found two problems, the first relating to a “log file sync” problem in RAC which is still under investigation (one for a future post when we have more detail) but the second issue caused the performance of the data feed to degrade steadily over time.  

We use a couple of products to dynamically feed data to our 10.2.0.5 database and it seemed the performance degraded  over time (many days). We did a 10046 session trace and found we lost a lot of time between the dbms_sql.parse of some dynamic SQL and the bind phase. We found reference in Metalink to Bug 10269717 (DBMS_SQL.PARSE leaks session heap memory in 10.2.0.5).  Although the bug discusses a memory leak, we found that the performance also degrades over time.

We applied the patch for 10269717 and the PGA memory leak was resolved but more importantly the performance remained constant. I just checked the 10.2.0.5 known issues document and this does now list the bug, but it still only references the leak and not the performance implications.
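The shape of the problem can be reproduced with a simple loop of parse calls on a test system (a sketch, not our real feed code; the SQL text is just a placeholder) while a second session watches the PGA statistic:

```sql
-- Test session: repeated dbms_sql parse/close cycles
declare
  c integer;
begin
  for i in 1 .. 10000 loop
    c := dbms_sql.open_cursor;
    dbms_sql.parse(c, 'select :b1 from dual', dbms_sql.native);
    dbms_sql.close_cursor(c);
  end loop;
end;
/

-- Second session: watch the test session's PGA between runs
select st.value
from   v$sesstat  st,
       v$statname sn
where  st.statistic# = sn.statistic#
and    sn.name = 'session pga memory'
and    st.sid  = &test_sid;
```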

25/08/2010

dbms_scheduler.drop_job “leaks” PGA memory

Filed under: Uncategorized — Nigel Noble @ 1:00 pm

I thought I would post a very short note about a recent PGA memory “leak” issue we found in one of our applications that appears to exist in Oracle versions 10gR2 through to 11gR2. I would not expect the problem to actually affect many sites so I am not going to  spend a huge amount of time showing the test case but thought I would make people aware of the potential issue.

My site introduced a “parallel scheduler” which allows us to break some of our business transactions into parallel jobs. It simply manages the running of some time-critical business tasks in parallel, takes full control of the business rules, co-ordinates that all the tasks are complete and verified, and handles the rules if parts fail to complete. The program runs itself as a continually running dbms_scheduler job, schedules the worker threads (within various rules) and then keeps careful track of the completion status of each part.

The Problem

One day our scheduler program failed with ORA-4030 (“out of process memory when trying to allocate %s bytes (%s,%s)”). We re-started the job, but further investigation showed that the process which had previously been running our scheduler (and was still attached), although now showing a low “session pga memory”, had at some point had a “session pga memory max” of 4GB. I set up a monitor to collect the “session pga memory” of the re-started job and left it collecting for a few days. When I plotted the PGA memory data we could clearly see that the PGA memory grew during busy periods, not at all at off-peak times, but importantly it never reduced. I sent the memory usage graph to a colleague and after a short while he sent me back a graph which looked 100% the same as mine…… except his graph had a totally different scale and was not memory. The graph he sent me was actually the total number of tasks our scheduler was asked to run in the same time period. So we knew we appeared to “leak” a fixed amount of PGA every time we ran a scheduled job.

Investigation

After some further investigation (dumping “session pga memory” after every call on a test system) we found the problem was not the submission of the job. After the job had completed, we formally make a call to dbms_scheduler.drop_job, and this call “leaks” approximately 21k on every call. NOTE: I’ve used the term “leak” a lot and strictly speaking this is not a leak. Oracle knows all about the memory, and when your plsql package completes all the PGA memory is returned. The problem is that Oracle does not free the memory during the execution of the main plsql procedure.

Bug

We have raised a new bug with Oracle: 9957867 – EXECUTING DBMS_SCHEDULER.DROP_JOB REPEATEDLY IN A PLSQL PROCEDURE LEAKS MEMORY, which is still under investigation by Oracle Support. I’ll post an update if I get any more information.

Work Around

This problem will really only cause problems if you schedule and then explicitly drop (dbms_scheduler.drop_job) a lot of jobs in the same plsql loop without completing the procedure. The dbms scheduler will by default create jobs with auto_drop=true, so again this is only a problem if you create jobs with auto_drop=false because you want to take full control. My site has a number of options to consider: change our code to use auto_drop=true, move the drop_job calls to another scheduled task which does restart from time to time, or finally restart our scheduler task from time to time.
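The first option is simply to let the scheduler drop the job itself when it completes (a sketch; the job and procedure names are made up):

```sql
begin
  dbms_scheduler.create_job(
    job_name   => 'WORKER_JOB_1',       -- hypothetical name
    job_type   => 'STORED_PROCEDURE',
    job_action => 'MY_PKG.DO_WORK',     -- hypothetical procedure
    auto_drop  => TRUE,                 -- dropped automatically on completion
    enabled    => TRUE);
end;
/
```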

23/07/2010

10gR2 – 11gR2, ASSM Space Management “Bug”

Filed under: 10gR2, 11gR1, 11gR2, assm — Nigel Noble @ 10:59 am

ASSM (Automatic Segment Space Management) has an issue when trying to re-use “deleted” space created by another session.  There is a very specific set of circumstances which must occur for this issue to show itself, but will result in tables (and I suspect indexes) growing significantly larger than they need to be. I am aware that the problem exists in versions 10.2.0.3 through to the current 11gR2 inclusive although I don’t know which Oracle release first introduced the problem.

The conditions required to cause the issue

My site has a number of daemon-style jobs running permanently on the database loading data into a message table. The daemon is managed by dbms_job, is re-started when the database is first started, and can run for many days, weeks (or in some cases months) before the job is ever stopped. We only need to keep the messages for a short time, so we have another daemon job whose role is to delete the messages from the table as soon as the expiry time is reached. In one example we only need to retain the data for a few minutes (after which time we no longer need it) and we also wanted to keep the table as small as possible so it remained cached in the buffer cache (helped by a KEEP pool). When we developed the code, we expected the message table to remain at a fairly constant 50 – 100MB in size. What we found was that the table continued to grow at a consistent rate to many gigabytes until we stopped the test. If we copied the table (eg using “create table <new table> as select * from <current table>”) the new table would again be 50MB, so most of the space in the table was empty “deleted” space. The INSERT statements were never re-using the space made free by the delete statement (run in another session).
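You can see how much of a segment is empty “deleted” space without copying the table, using dbms_space.space_usage, which works on ASSM segments (a sketch; the table name is made up and the anonymous block needs serveroutput on):

```sql
declare
  l_unf  number; l_unfb  number;
  l_fs1  number; l_fs1b  number;   -- 0-25% free
  l_fs2  number; l_fs2b  number;   -- 25-50% free
  l_fs3  number; l_fs3b  number;   -- 50-75% free
  l_fs4  number; l_fs4b  number;   -- 75-100% free
  l_full number; l_fullb number;
begin
  dbms_space.space_usage(
    segment_owner      => user,
    segment_name       => 'MESSAGE_TABLE',   -- hypothetical table
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unfb,
    fs1_blocks => l_fs1, fs1_bytes => l_fs1b,
    fs2_blocks => l_fs2, fs2_bytes => l_fs2b,
    fs3_blocks => l_fs3, fs3_bytes => l_fs3b,
    fs4_blocks => l_fs4, fs4_bytes => l_fs4b,
    full_blocks => l_full, full_bytes => l_fullb);
  dbms_output.put_line('full blocks: '||l_full||
                       ', mostly-empty (fs4) blocks: '||l_fs4);
end;
/
```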


05/07/2010

10.2.0.5, KEEP pool / Serial Direct Read

Filed under: 10.2.0.5, 11gR1, Performance — Nigel Noble @ 11:36 am

Jonathan Lewis made reference to an 11g bug related to using a KEEP POOL in his note Not KEEPing. Oracle 11g introduced a new feature called adaptive serial direct path reads, which allows large “full scan” disk reads to be done using “direct path reads” rather than through the buffer cache. In many cases “direct IO” can give a significant increase in performance when reading data for the first time (from disk), but it can be significantly slower if your subsequent queries could have been serviced from the buffer cache. The bug Jonathan references (Bug 8897574) causes problems if you assign any large object to a KEEP POOL because, by default, 11g would read large objects using the new direct IO feature and avoid ever placing the object in the KEEP POOL. The whole point of using the KEEP POOL is to identify objects you do want to protect and keep in a cache.

The 10.2.0.5 patchset has also back-ported the same direct read feature from 11g, although I don’t know if the rules are identical. The site where I work makes significant use of KEEP pools and has also spent some time investigating serial direct IO vs. buffer cache IO.

I want to use this blog entry to explore a number of related issues but also demonstrate that the 11g bug Jonathan identified seems to also exist in the 10.2.0.5 patchset (and 11gR2). This blog item will cover:

  • Brief reference relating to the 11g “adaptive serial direct path read”
  • The 10.2.0.5 implementation and how to switch it on
  • 10.2.0.5 demonstration showing the relative difference for different types of IO vs read from cache
  • 10.2.0.5 demonstration which shows the KEEP pool bug also exists (but not by default)
  • Some real life comparison figures of disk reads via “direct path read” and via “buffer cache” to show why the “adaptive serial direct path read” feature is worth exploring in more detail.   
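A quick way to tell whether a given scan went direct or through the buffer cache is to diff your own session’s read statistics around the query (a sketch):

```sql
-- Run before and after the full scan and compare the deltas:
-- a rise in 'physical reads direct' means the scan bypassed the cache
select sn.name, st.value
from   v$mystat   st,
       v$statname sn
where  st.statistic# = sn.statistic#
and    sn.name in ('physical reads',
                   'physical reads direct',
                   'physical reads cache');
```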

 


17/06/2010

My 10046 SQL Trace Formatter

Filed under: Monitoring, trace — Nigel Noble @ 10:52 pm

trace_by_hash.awk

A number of years ago, I wrote a SQL trace file formatter when I needed to process a large number of trace files for a complex project we were working on. The formatter is not intended to replace the really good tools that are out there, but I like reading the detail which appears in a raw trace file (with some additional help). I also wanted to structure the trace file so it could be processed by other scripts separately. This is by no means written to a commercial standard, but I thought people might find it useful, and it also provides an interesting insight into how to process trace files. I worked with James Morle in a previous company to build a benchmark based on Oracle trace files, using some software he had written in his book (Scaling Oracle 8i, the paper Brewing Benchmarks and the later software simora). The benchmark software contained an awk script to process trace files, extract the entry-point database calls and convert them to a tcl scripting language to then drive the benchmark. I took (with permission) the conversion script and modified it so it generated an Oracle trace file but with extra information.


02/06/2010

Online Index Rebuilds

Filed under: Uncategorized — Nigel Noble @ 12:24 pm

I’ve just seen a note on Jonathan Lewis’s blog regarding Online Index Rebuilds. It reminds me of some issues which existed in Oracle 9i and 10g but appear to have been resolved in 11gR1 and 11gR2. Oracle 9i introduced a patch to change the behaviour of online index rebuilds. The default behaviour in 9i and 10g is that an online index rebuild would get blocked behind a long active transaction which uses the index (which is still true in 11g), but critically it would then also block any new DML wanting to modify the index (leading to a hang of the application as well as the index build). They introduced a new database event, 10629 (in a 9i patch), which makes the online index rebuild keep trying to acquire its locks but keep backing off to allow other DML to continue. The event can be set with a level to either try forever (without blocking other new DML) or fail the online index rebuild after a specific number of retries.

Out of interest, you can sometimes look up the text for Oracle events using the “oerr” command.

$ oerr ora 10629
10629, 00000, "force online index build to backoff and retry DML lock upgrade"
// *Cause:
// *Action: set this event only under the supervision of Oracle development
// *Comment: Change the behaviour of an online index rebuild such that it
// will backoff and retry a failed DML lock upgrade.
// The event level is the number of retries the online index rebuild
// should wait. Level 1 means backoff and retry indefinitely. Any
//           other value less than 32 will be adjusted automatically to be 32.

 
There is more information on MetaLink (note 3566511.8). The comments of “This issue is fixed in” mean the new event feature is included in that release (you still need to set it).

I’ve just had a very quick test of the Event on 11gR1/R2 but it’s not clear if the Event still works. The very important thing (to me) is the 11g versions no longer cause other unrelated DML to become stuck behind a long running active transaction.
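For reference, setting the event looks like this (a sketch based on the oerr text above; the index name is made up, and events like this should only be set under the supervision of Oracle Support):

```sql
-- Level 1 = back off and retry indefinitely; any other value (minimum 32)
-- is the number of retries before the rebuild gives up
alter session set events '10629 trace name context forever, level 32';

alter index my_big_index rebuild online;   -- hypothetical index

alter session set events '10629 trace name context off';
```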

17/05/2010

Monitoring Connection Pools

Filed under: Monitoring, Performance — Nigel Noble @ 4:19 pm

The company where I work runs a large web infrastructure with many different Java-based applications and servers. Most of these application servers connect to the database using a connection pool to manage database connections and reduce the cost of starting/destroying database sessions. Over the years we have spent a lot of time trying to get the right balance to keep session usage as smooth as possible.

  • maximum connections in the pool set too low? – Can lead to requests queuing to get a connection, or running out altogether during peak spikes (leading to application failure).
  • maximum connections set too high? – Can lead to “logon storms”. A sudden surge in concurrent user activity can cause a huge demand for new sessions to be created on the database server. Because the database is slowed by the surge, you often get a “feedback loop” effect: slow database response means even more connection requests. “Connection storms” can also be caused by small problems which start in the database or network (an unexpected slow network pause or database wait event… then a sudden logon storm hits the database host as the increasing connections slow the database even more; I have seen this lots in the past).
  • minimum connections in the pool set too low? – Can lead to the added cost of always having to start new database sessions just at the critical time you need them.
  • idle out time set too low? – Once you have taken the hit of creating a new session in the database, the session reaches the idle time too soon and is destroyed… only to be needed again a few minutes later.
  • idle out time set too high? – Let’s say you have suffered a minor “logon storm”; if the idle time is set too high you could have all these extra sessions hanging around for a very long time (there have been a number of good presentations by the Oracle Real World Performance group talking about the benefits of reducing connections on a database server).
  • minimum and maximum the same value? – There is a lot to be said for running with a fixed number of sessions supporting your average usage and your peaks (the trick is finding the correct number to support peaks).

A very simple session monitoring script

Before I actually talk about the script itself, I thought I would give  an example of why I wrote it in the first place and how it was used, then I will show the script and why we still use it today.

The problem

When I first joined my company, I could never get my head around the number of sessions on the database compared to the number of “ACTIVE” sessions seen in v$session. We seemed to have many more connections from the application servers than sessions doing work. Every once in a while the company would review the peak connection pool settings (per box) and adjust them. The kind of conversations we would have around the office were:

 “We are using the maximum 30 connections per box, let’s make it 35 per box for growth”

The next year we would say

“We are using all the 35 per box, let’s make it 40 per box…..  for growth”

All the time I kept thinking maybe v$session status=”ACTIVE” was misleading, or maybe there was a problem with the connection pools (we had a 30 minute idle out time but we always ran at the maximum connections, yet very few were actually seen to be used).

Back to the script

client_info.sql

 

set trimspool on
set pagesize 1000
set linesize 190
column since format a20
column logon format a20
column event format a25 truncate
column machine format a10 truncate
column status format a10 truncate
column username format a10 truncate
column n format 9999.99

break on machine on report  skip 1

compute sum of sess_count on machine
compute sum of sess_count on report

set time on
spool client.txt append

select 'CFROM' tag,
  to_char(sysdate,'hh24:mi:ss') when,
        machine,
        event,
        seconds_in_wait,
        sql_id,
        prev_sql_id,
        count(*) sess_count,
  to_char(sysdate - (SECONDS_IN_WAIT / (60 * 60 * 24)),'dd-mon-yy hh24:mi:ss') since,
--      next two lines useful if trying to predict a concurrent spike.   1200 being 20 minutes
--      Left over from site specific issue but could be useful.
--      next is last active time + 20 minutes
--      n is a count down to next predicted spike
-- to_char(sysdate - ( (seconds_in_wait ) / (60 * 60 * 24)) + (1200 / (60 * 60 * 24)),'dd-mon-yy hh24:mi:ss') NEXT ,
-- ((sysdate - (sysdate - ( (seconds_in_wait ) / (60 * 60 * 24)) + (1200 / (60 * 60 * 24)) ) ) * (60 * 60 * 24) ) / 60 n,
        username,
        status,
        state
from v$session
group by machine,
         event,
         seconds_in_wait,
         sql_id,
         prev_sql_id,
         username,
         status,
         state
order by machine,
         username,
         event,
         seconds_in_wait,
         sql_id,
         prev_sql_id,
         status,
         state
/

One day I wrote the above script and all became very clear. Something sent a very fast (150 microsecond) sql statement concurrently from each application server to every session in the connection pool. The request rate could easily have been serviced by a handful of sessions, but because the requests all came at the exact same time, every session was used.

   

note: This output has been faked; I was not able to find a real example of the original issue we had (a few years back).
note: All host names and user names are also faked...... so I can keep my job!
Columns            Description
-----------------  --------------------------
TAG                Tag used so info can be grepped from the log file
WHEN               Current system time when data collected
MACHINE            Host where the session originates
EVENT              Current database wait event
SECONDS_IN_WAIT    Seconds in wait
SQL_ID             Current SQL statement
PREV_SQL_ID        Previous SQL statement last run on the session
SESS_COUNT         Total number of sessions within group (sql_id, event, seconds in wait etc)
SINCE              What time the session has been idle since (sysdate - seconds in wait)
USERNAME           User name of sessions
STATUS             Session status (ACTIVE or INACTIVE)
STATE              Session wait state

TAG   WHEN     MACHINE    EVENT                     SECONDS_IN_WAIT SQL_ID        PREV_SQL_ID   SESS_COUNT SINCE                USERNAME   STATUS     STATE
----- -------- ---------- ------------------------- --------------- ------------- ------------- ---------- -------------------- ---------- ---------- -------------------
CFROM 13:55:17 abcabc01.i SQL*Net message from clie               0 2j7pff3tfuzzz 2j7pff3tfuzzz          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               qw4bv6jwup5ab          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               5tarshstnypzv          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               1               8zv7177vuc8dt          3 14-may-10 13:55:16   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               5               4616pfpak8akh          1 14-may-10 13:55:12   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              22               amsmuu1pp1w74          1 14-may-10 13:54:55   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             185               3q4bv6jx8wup5         30 14-may-10 13:52:12   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             433               f3tg1gz4zdadm          1 14-may-10 13:48:04   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             873               cmh3vh4pjs7q7          1 14-may-10 13:40:44   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40
CFROM 13:55:17 abcabc02.i SQL*Net message from clie               0 zzz0ahx447fpr zzz0ahx447fpr          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               3q4bv6jx8wup5          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               5tarshstnypzv          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               f3tg1gz4zdadm          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               6               cmh3vh4pjs7q7          1 14-may-10 13:55:11   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               8               8zv7177vuc8dt          1 14-may-10 13:55:09   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              11               cmh3vh4pjs7q7          1 14-may-10 13:55:06   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              12               f3tg1gz4zdadm          1 14-may-10 13:55:05   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              80               8zv7177vuc8dt          1 14-may-10 13:53:57   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              80               f3tg1gz4zdadm          1 14-may-10 13:53:57   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             138               3q4bv6jx8wup5         28 14-may-10 13:52:59   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             695               cmh3vh4pjs7q7          1 14-may-10 13:43:42   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie            1394               cmh3vh4pjs7q7          1 14-may-10 13:32:03   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40
CFROM 13:55:17 abcabc03.i SQL*Net message from clie               0 zzz0ahx447fpr zzz0ahx447fpr          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               5tarshstnypzv          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               cmh3vh4pjs7q7          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               2               5tarshstnypzv          1 14-may-10 13:55:15   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               7               8zv7177vuc8dt          2 14-may-10 13:55:10   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              14               3q4bv6jx8wup5          1 14-may-10 13:55:03   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              14               4616pfpak8akh          1 14-may-10 13:55:03   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              61               8zv7177vuc8dt          1 14-may-10 13:54:16   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             159               3q4bv6jx8wup5         31 14-may-10 13:52:38   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40
CFROM 13:55:17 abcabc04.i SQL*Net message from clie               0               3q4bv6jx8wup5         39 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               5tarshstnypzv          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40
CFROM 13:55:17 abcabc05.i SQL*Net message from clie               0 zzz0ahx447fpr zzz0ahx447fpr          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               33hsynd62ka4k          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               0               cmh3vh4pjs7q7          1 14-may-10 13:55:17   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie               1 zzz0ahx447fpr zzz0ahx447fpr          2 14-may-10 13:55:16   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              10               cmh3vh4pjs7q7          1 14-may-10 13:55:07   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie              31               8zv7177vuc8dt          1 14-may-10 13:54:46   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             157               amsmuu1pp1w74          1 14-may-10 13:52:40   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             318               3q4bv6jx8wup5         29 14-may-10 13:49:59   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie             874               cmh3vh4pjs7q7          2 14-may-10 13:40:43   WEBUSERABC INACTIVE   WAITING
CFROM 13:55:17            SQL*Net message from clie            1040               8zv7177vuc8dt          1 14-may-10 13:37:57   WEBUSERABC INACTIVE   WAITING
               **********                                                                       ----------
               sum                                                                                      40

Whenever I ran the script, I could see that we had spikes of sessions running the same sql statement in the same second on any given host. Because the statement groups by "seconds_in_wait" we might see that say 25 sessions all were active 8 minutes ago with the same statement and have since done no work (waiting on "SQL*Net message from client"). When we looked at all the application hosts, each would spike at a 20 minute interval (although each host had its own time at which it occurred).

My company uses a number of different application caching techniques in our middle-tier applications; one of these was a concurrent "read-ahead" cache, which would allow a read from cache to detect that its data was soon to expire and asynchronously request a reload of the caches in parallel. Our "read-ahead" code had been set to allow 50 threads to get data, but we only had 40 connections in the database connection pool (on each box). We had a 30 minute keep-alive on the connections, but every 20 minutes exactly we would touch every connection and keep them all alive. The solution was to reduce the number of threads on the "read-ahead" and not even change the connection pool configuration. This fix automatically reduced the number of connections on the database by around 1000.

There have been so many examples where the script has helped that we leave it collecting every 5 minutes to a log file on the more sensitive databases. It can help identify cases such as:

  • Sudden concurrent requests by one or more servers or applications (we can get clues from the last sql statement run, who ran it and when)
  • Statements taking a long time on specific hosts (look for "ACTIVE" sessions on the same sql for a long time)
  • Idle connections which never get returned to the connection pool correctly within the application hosts
  • User behaviour driving sudden spikes in concurrent requests from specific hosts
  • Sessions which get stuck on database waits (eg db links due to networking issues, row locking etc) and never return to the pool
  • What time an application host last sent any requests to the database

The script attempts to provide a fast summary of the connections on the database: how long they have been idle, what they last did and what they are currently waiting on. Although the script is very simple, I have found it a really good way to summarise what our hosts (using connection pools) are doing.
