All posts by mrculp

Alert log errors

This script is located in a common library.

# Purpose:    Search the alert log of $ORACLE_SID for error
# messages.
# If any error messages are found, send notification
#               (see Notes below).
# Prereq's:    $ORACLE_SID environment variable must be set or
#              passed as a positional parameter. Also, $ORATAB
#              environment variable must be set.
# Notes:       If this script is invoked from a terminal session,
#              display the results to the screen (STDOUT); otherwise,
#              send a mail message to the specified mail accounts.
# Exit Status: 99 = Required input, $ORACLE_SID, not supplied.
#              90 = Invalid Oracle sid.
#              95 = init.ora file not found for input sid.
#              97 = Alert log file does not exist.
# Author: Michael Culp
# Date:   May 2002
#        R E V I S I O N    H I S T O R Y
# Who:
# Date:
# Comments:

function error_notification
{
   if [ "$TERMINAL" = "Y" ]; then
      echo "$ERROR_MSG"
   else
      env | grep SEND_MAIL_TO | while read LINE
      do
         MAIL_TO=$(echo "$LINE" | awk -F= '{ print $2 }')   # SEND_MAIL_TO*=address
         echo "$ERROR_MSG" | mail "$MAIL_TO"
      done
   fi
}

export SEND_MAIL_TO=''
export SEND_MAIL_TO2=''

SERVER=$(uname -n)
DATE=$(date +%c%t)

# check to see if command is being executed interactively
/usr/bin/tty -s
if [ $? -eq 0 ]; then
   TERMINAL="Y"
else
   TERMINAL="N"
fi

# Test for required inputs.
# Positional parameters override env variables.

if [ ${1:-0} = 0 ]; then
   if [ ${ORACLE_SID:-0} = 0 ]; then
      ERROR_MSG="ORACLE_SID must be supplied as input to $0"
      error_notification
      exit 99
   fi
else
   export ORACLE_SID=$1
fi

# Call the validate_sid function to verify ORACLE_SID is valid.
# The function also sets the ORACLE_HOME and ORACLE_BASE variables.

validate_sid
if [ "$VALID_SID" != "Y" ]; then
   ERROR_MSG="ORACLE_SID ($ORACLE_SID) is invalid. Script is $0"
   error_notification
   exit 90
fi


INIT_FILE="${ORACLE_HOME}/dbs/init${ORACLE_SID}.ora"   # conventional init.ora location
if [ ! -s "$INIT_FILE" ]; then
   ERROR_MSG="$INIT_FILE does not exist. Script is $0"
   error_notification
   exit 95
fi

LOG_PATH=$(awk -F= '/background_dump_dest/ { print $2 }' "$INIT_FILE" | tr -d " ")
ALERTLOG="${LOG_PATH}/alert_${ORACLE_SID}.log"
ERROR_MSG_FILE="/tmp/alert_errors_${ORACLE_SID}.$$"   # work file


# Create a list of valid Oracle error message prefixes by parsing
# the oerr script in $ORACLE_HOME/bin

/home/oracle/bin/awk_message_prefixes > /tmp/ora_prefixes

# Archiver error message prefix is not included in the oerr
# script. Add an entry for this into the "ora_prefixes" file.

echo '^ARC[0-9]: Error' >> /tmp/ora_prefixes

# Search the alert.log for any occurrences of an Oracle error
# message prefix.

grep -bsf /tmp/ora_prefixes "$ALERTLOG" > "$ERROR_MSG_FILE"
STATUS=$?
if [ $STATUS -eq 1 ]; then       # no matches found
   if [ "$TERMINAL" = "Y" ]; then
      echo "Amazingly enough, there are NO errors in $ALERTLOG"
   fi
elif [ $STATUS -gt 1 ]; then     # grep error: alert log missing
   ERROR_MSG="$ALERTLOG does not exist."
   error_notification
   exit 97
fi

# If there are any messages in the message file, send
# appropriate notifications.

if [ -s "$ERROR_MSG_FILE" ]; then
   echo "\nMessages from server $SERVER on $DATE" >> "$ERROR_MSG_FILE"
   if [ "$TERMINAL" = "Y" ]; then
      cat "$ERROR_MSG_FILE"
   else
      env | grep SEND_MAIL_TO | while read LINE
      do
         MAIL_TO=$(echo "$LINE" | awk -F= '{ print $2 }')
         mail "$MAIL_TO" < "$ERROR_MSG_FILE"
      done
   fi
fi

# Clean up work files.
if [ -f "$ERROR_MSG_FILE" ]; then
   rm -f "$ERROR_MSG_FILE"
fi
if [ -f /tmp/ora_prefixes ]; then
   rm -f /tmp/ora_prefixes
fi
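The heart of the script is `grep -f`, which reads its patterns from a file, so a single pass over the alert log tests every error prefix at once. A minimal standalone sketch of the technique (the file names and log lines below are made up for illustration):

```shell
# Pattern file: one error-message prefix per line (illustrative subset).
cat > /tmp/demo_prefixes <<'EOF'
ORA-
^ARC[0-9]: Error
EOF

# A fake alert log with one normal line and one error line.
cat > /tmp/demo_alert.log <<'EOF'
Completed: ALTER DATABASE OPEN
ORA-00600: internal error code
EOF

# -f reads patterns from a file; -s suppresses missing-file errors.
grep -sf /tmp/demo_prefixes /tmp/demo_alert.log > /tmp/demo_errors
cat /tmp/demo_errors   # only the ORA- line is captured
```

The real script adds -b to prefix each match with its byte offset in the alert log.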


Creating a TAR of existing Oracle home

Clone DB OracleBase article


When doing any type of upgrade or patch, I’m often asked about backing up the existing binaries and how to keep customer downtime to a minimum. One way this can be achieved is by simply making another Oracle home from a tar copy of the existing home. This gives you several options and a great deal of flexibility.

# tar -cvf /tmp/u01.tar /u01

The first parameter is the location of the tar file; the second parameter is the directory where the Oracle home is located.

# gzip /tmp/u01.tar

You can then gzip the archive to make it smaller and easier to move.

Once the archive has been moved to the destination server:

# gunzip /tmp/u01.tar.gz
# cd /
# tar -xvf /tmp/u01.tar
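The three steps can also be collapsed: tar’s -z flag gzips on the fly, and -C controls which directory the paths are relative to. A sketch using a throwaway directory in place of a real Oracle home (all paths here are illustrative):

```shell
# Stand-in for an Oracle home.
mkdir -p /tmp/demo_home/bin
echo "marker" > /tmp/demo_home/bin/testfile

# Create and gzip the archive in one step; -C makes paths relative to /tmp.
tar -czf /tmp/demo_home.tar.gz -C /tmp demo_home

# On the destination, extract in one step as well.
mkdir -p /tmp/demo_restore
tar -xzf /tmp/demo_home.tar.gz -C /tmp/demo_restore
ls /tmp/demo_restore/demo_home/bin   # testfile
```

Archiving with relative paths (-C) also lets you extract into any target directory rather than being forced back to /.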

Root Configuration Scripts

Run the root configuration scripts, generated as part of the original installation, on the destination server as the “root” user.


Modify Config Files

If you have not prepared the “.bash_profile” as part of the destination server setup, copy the file from the source to the destination server.

# scp /home/oracle/.bash_profile oracle@

Edit the “.bash_profile” file on the destination server, giving it the correct value for the ORACLE_HOSTNAME environment variable.

Amend any hostname or IP references in the “listener.ora” and “tnsnames.ora” files in the “$ORACLE_HOME/network/admin” directory.

Edit the “/etc/oratab” file, making sure all instances are referenced correctly. In my case I need to add the following entry.


FTS Script

I felt most folks looking for full-table scans were not getting the full picture, so I created my own script to pick up the operations I felt fell into this category.

sqlplus -s "/ as sysdba" <<EOF
spool fts_MRC.log

set linesize 200
set pagesize 66
col operation format a13
col object_name format a32
col object_owner format a10
col options format a15
col executions format 999,999,999

select a.sql_id, a.object_owner, a.object_name, rtrim(a.operation) operation,
       a.options, sum(b.executions) executions, c.bytes,
       (sum(b.executions)*c.bytes)/(1024*1024) fts_meg
from v\$sql_plan a, v\$sqlarea b, dba_segments c
where a.object_owner = c.owner and a.object_name = c.segment_name and
      a.address = b.address and
      a.operation IN ('TABLE ACCESS','INDEX','HASH JOIN') and
      a.object_owner not in ('SYS','SYSTEM','PERFSTAT','SYSMAN','WKSYS')
group by a.sql_id, a.object_owner, a.object_name, a.operation, a.options, c.bytes
order by 8, a.object_owner, a.object_name, operation, options, executions desc;

spool off
EOF


Show what is in cache script

This is a query to show what is in cache currently

select decode(pd.bp_id,1,'KEEP',
                       4,'2K SUBCACHE',
                       5,'4K SUBCACHE',
                       6,'8K SUBCACHE',
                       7,'16K SUBCACHE',
                       8,'32K SUBCACHE','UNKNOWN') subcache,
       bh.object_name, bh.blocks
  from x\$kcbwds ds,
       x\$kcbwbpd pd,
       (select /*+ use_hash(x) */ set_ds, o.name object_name,
               count(*) blocks
          from obj\$ o,
               x\$bh x
         where o.dataobj# = x.obj and
               x.state != 0 and
               o.owner# != 0
         group by set_ds, o.name) bh
 where ds.set_id >= pd.bp_lo_sid and
       ds.set_id <= pd.bp_hi_sid and
       pd.bp_size != 0 and
       ds.addr = bh.set_ds;

SQL Net Connections

Advanced features of Oracle Net include failover and load balancing. These features are actually interrelated in as much as you usually don’t have one without the other. While they are mostly used in a RAC environment, they can be set up in a single instance environment as well.

In the context of Oracle Net, failover refers to the mechanism of switching over to an alternate database or resource when the connection to the primary resource is terminated, or the resource is unavailable during a connection request. Connection failures may be broadly categorized as:

  • Those that occur while making the initial connection.
  • Those that occur after a session has been successfully established.

The first category of connection failure can be dealt with comparatively easily. If your attempt to connect to an instance fails, you can make the connection attempt again, but this time to a backup instance. As long as you have backup instances configured, you can continue trying to connect until a successful connection is established. This process is technically termed Connect Time Failover.
The second category of connection failure is the type that happens after a successful connection has already been established, and the connection subsequently gets terminated.

In such cases, the application normally has to handle all the details of reconnecting to a backup instance, re-establishing the session environment, and resubmitting any work lost, because of the break in the connection. The technical name for this type of failover mechanism is transparent application failover or TAF for short.

Ok, now let’s have a look at the mechanism of setting up these features. Let’s start with Connect Time Failover. Although Connect Time Failover is a RAC environment mechanism, you can still use it with non-RAC environments, where you have a mechanism like standby database in place. Connect Time Failover can be achieved by the simple expedient of configuring a Net Service Name through which clients may connect to the standby database, whenever they cannot access the primary database.

This can even be achieved with a single net service name provided you configure

a) Multiple listener addresses within a description
b) Multiple descriptions within a description list

The difference between the two is that while the former will use the same connect data for all the listener addresses, the latter may have separate connect data for each configured listener address. Below, I provide an example of both types of configuration:

Multiple Listener addresses within a description:

mculp =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss01-main) (PORT=1521))
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss02-main) (PORT=1521)))
    (CONNECT_DATA=
      (SERVICE_NAME= mculp)))

An important issue to be aware of at this point is that Connect Time Failover only works for you if you are using dynamic registration. This means that this feature won’t work if you have something like this configured in your listener.ora:

SID_LIST_LISTENER =
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME= mculp)
      (SID_NAME= mculp)))

Any reference to GLOBAL_DBNAME and you can forget about Connect Time Failover. By default, failover is enabled when you specify multiple addresses. However, you can disable failover by specifying (FAILOVER=false). When failover is disabled, Oracle Net will attempt to connect using the first address; if that attempt fails, no further attempts will be made and Oracle Net will generate an error.

Multiple descriptions within a description list:

test_mculp =
  (DESCRIPTION_LIST=
    (FAILOVER=true)
    (LOAD_BALANCE=false)
    (DESCRIPTION=
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss01-main) (PORT=1521))
      (CONNECT_DATA=
        (SERVICE_NAME= test_mculp)))
    (DESCRIPTION=
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss02-main) (PORT=1521))
      (CONNECT_DATA=
        (SERVICE_NAME= test_mculp))))

Notice that in the description list above, I have put (FAILOVER=true) and (LOAD_BALANCE=false). There is no real need for (FAILOVER=true), as it is the default behavior; however, (LOAD_BALANCE=false) does not represent default behavior, and I have to set it to false to stop client load balancing, which is enabled by default when using a description list. When client load balancing is enabled, Oracle Net will randomly choose descriptions from the description list to make a client connection.

Now, let’s look at how TAF works. Unlike connect time failover, which is invoked before the connection is made, TAF comes into play after the connection is made (and then, broken). If the connection is lost while the application is running, Oracle Net will transparently reconnect the application to another instance accessing the same database. The failover is done gracefully and uses failover aware APIs built into OCI.
TAF supports two types of failover: SESSION and SELECT. A SESSION failover connection is not overly ambitious. It just fails over to a backup instance; all work in progress at that point is irrevocably lost. SELECT failover is more intricate in as much as it enables some types of read-only application to fail over without losing the work in progress. If a SELECT statement was in progress at the time the connection was terminated, then as the connection is re-established to a backup instance, Oracle Net re-executes the SELECT statement and positions the cursor so that the client can seamlessly continue fetching rows. But that is about all that TAF has to offer. It has no mechanism to recover DML statements that were in progress when the failure occurred, and even with SELECT failover you lose global temporary tables, package states and session settings.
TAF supports two failover methods: BASIC and PRECONNECT. In BASIC method, you connect to the backup instance when the primary connection fails. In the PRECONNECT method, you connect to the backup instance at the same time you connect to the primary instance. This has the obvious benefit of having a backup connection available all of the time, thus reducing the time of ‘failover’. But the downside is that you have to pay the extra ‘cost’ in terms of resources spent, of having a backup connection open all the while.
TAF is configured by adding a FAILOVER_MODE parameter to the CONNECT_DATA parameter for a net service name. Since you cannot configure TAF using the Net Manager, you have to use either OEM or manually edit tnsnames.ora. If you are going to use TAF, then you have to have a backup instance in place. That means you have to configure two net service names—one to connect to the primary instance and the other to connect to the backup instance. The example below shows a TAF configuration where connections to TEST will failover to TEST_BKUP.


TEST =
  (DESCRIPTION=
    (ADDRESS= (PROTOCOL=TCP) (HOST=ss-main) (PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME= smain.ghosh)
      (FAILOVER_MODE=
        (TYPE=SELECT)
        (METHOD=BASIC)
        (BACKUP=TEST_BKUP))))

TEST_BKUP =
  (DESCRIPTION=
    (ADDRESS= (PROTOCOL=TCP) (HOST=ss-bkup) (PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME= sbkup.ghosh)))


The definition of TEST contains a FAILOVER_MODE entry that specifies the name of the net service name to which a connection is to be made, should the TEST connection happen to fail. In this particular example, I have ‘failed over’ to an entirely different database, not to another instance accessing the same database. I did this to show that TAF can work either way, but if you are failing over to a different database, then you should keep the two databases in sync by using a mechanism like standby database.
Normally, TAF makes only a single attempt to connect to the backup instance. However, if you specify the RETRIES and DELAY parameters, you can force TAF to make multiple connection attempts to the backup instance. The following example shows TAF configured for 10 retries, each at an interval of fifteen seconds.

TEST =
  (DESCRIPTION=
    (ADDRESS= (PROTOCOL=TCP) (HOST=ss-main) (PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME= smain.ghosh)
      (FAILOVER_MODE=
        (TYPE=SELECT)
        (METHOD=BASIC)
        (BACKUP=TEST_BKUP)
        (RETRIES= 10)
        (DELAY= 15))))

TEST_BKUP =
  (DESCRIPTION=
    (ADDRESS= (PROTOCOL=TCP) (HOST=ss-bkup) (PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME= sbkup.ghosh)))


RETRIES and DELAY parameters may be gainfully employed where you are using a standby database to failover to and such a database may take a few moments to be brought up.
If we want to have the entire failover mechanism configured, so that we can take advantage of both connect time failover, as well as transparent application failover, we can put something like the following in place:

PROD =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (FAILOVER=true)
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss01-main) (PORT=1521))
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss02-main) (PORT=1521)))
    (CONNECT_DATA=
      (SERVICE_NAME= prod)
      (FAILOVER_MODE=
        (TYPE=SELECT)(METHOD=BASIC)(BACKUP=PROD_BKUP))))

PROD_BKUP =
  (DESCRIPTION=
    (ADDRESS= (PROTOCOL=TCP) (HOST=ss02-main) (PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME= prod)))

Now, let’s analyze the scenario above. The failover setup is from ss01-main to ss02-main. But a problem occurs when the initial connection is made to ss02-main because of connect time failover; then we are already connected to the backup node. If that fails, we have had it; there is nowhere else to go. And what’s more, if you are thinking of using a description list with several descriptions, that won’t work either. TAF settings are picked up from the first CONNECT_DATA entry encountered, so other descriptions in the description list become useless.

Load balancing may be defined as distributing a job or piece of work over multiple resources. RAC is an ideal environment for distributing load amongst multiple instances accessing the same physical database. Other environments may also be suitably configured to invoke load balancing, and in the following few paragraphs I show how load balancing can be set up.
CLIENT LOAD BALANCING: You can configure load balancing either at the client end or at the server end. Client load balancing is configured at the net service name level, and it is done simply by providing multiple addresses in an address list, or multiple descriptions in a description list. When load balancing is enabled, Oracle Net chooses an address to connect to in random rather than sequential order. This has the effect of clients connecting through addresses picked at random, so that no one address is overloaded. Significantly, though, there is no guarantee that just because clients are being connected through different addresses picked at random, there is an even distribution of workload at the server end. To achieve that, you need to configure load balancing at the server end, which is discussed below.
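As a sketch, client load balancing in tnsnames.ora is just (LOAD_BALANCE=on) over multiple addresses (the host and service names here are illustrative):

```
test_lb =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=on)
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss01-main) (PORT=1521))
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss02-main) (PORT=1521)))
    (CONNECT_DATA=
      (SERVICE_NAME= test_mculp)))
```

With LOAD_BALANCE=on, Oracle Net picks one of the two addresses at random for each new connection attempt.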
CONNECTION LOAD BALANCING: This feature improves connection performance by allowing the listener to distribute new connections across dispatchers and instances. The listener is in a position to do so because, thanks to dynamic registration, the current load of instances and dispatchers is available to the listener. This allows the listener to balance the load across dispatchers and instances when routing client connection requests.
Connection load balancing may be done in:

  • Single instance shared server configuration
  • Multiple instance shared server configuration
  • Multiple instance dedicated server connection

Load balancing is done in the following order: In case of dedicated server configuration it is:

  • Least loaded node
  • Least loaded instance

In case of shared server, a listener selects a dispatcher in the following order:

  • Least loaded node
  • Least loaded instance
  • Least loaded dispatcher for that instance

Connection load balancing may also be combined with client load balancing to leverage load balancing activity. While the listener will load-balance connections to dispatchers/instances, client load balancing will distribute load of handling new connections over more than one listener. Add failover at the client end and you will be setting up a robust system, which will insulate the clients from potential connection failures, while at the same time keeping an eye on performance.
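Putting the pieces together, a single net service name can combine client load balancing, connect time failover, and TAF; a sketch (all names here are illustrative):

```
robust =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=on)
      (FAILOVER=on)
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss01-main) (PORT=1521))
      (ADDRESS= (PROTOCOL=TCP) (HOST=ss02-main) (PORT=1521)))
    (CONNECT_DATA=
      (SERVICE_NAME= test_mculp)
      (FAILOVER_MODE= (TYPE=SELECT) (METHOD=BASIC))))
```

New connections are spread over both addresses at random, a failed connection attempt falls through to the other address, and an established session that breaks is re-established via TAF.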

Shell Script Loop

files="/etc/passwd /etc/group /etc/hosts"
for f in $files; do
   if [ ! -f $f ]; then
      echo "$f file missing!"
   fi
done

for car in $(cat car.txt)
do
   echo "Current Car : $car"
done

for f in /tmp/*
do
   echo "Full file path in /tmp dir : $f"
done
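One more pattern worth having alongside the for loops above: a while read loop, which keeps each input line intact even when it contains spaces (the input file here is created just for the demo):

```shell
# Create a small input file for the demo.
printf 'Ford Mustang\nChevy Corvette\n' > /tmp/cars.txt

# Unlike for-over-$(cat ...), while read does not split lines on whitespace.
while read car
do
   echo "Current Car : $car"
done < /tmp/cars.txt
```

A for loop over $(cat /tmp/cars.txt) would print four lines here, one per word; the while read loop prints two, one per line.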


The buffer exterminate wait event occurs under Oracle’s “automatic memory management” (AMM) when the MMON process shrinks the data buffer cache to re-allocate that RAM to another SGA region. AMM resize operations can hurt overall database performance, especially in OLTP environments, and you may want to consider turning off AMM, which will relieve the buffer exterminate waits, and manually adjusting your SGA regions.

If you suspect AMM resize operations, look into v$sga_resize_ops and v$memory_resize_ops to see how often they occur and how they are affecting the performance of your database. If you see many of these events, especially during peak database times, turn off the feature and manually adjust the corresponding SGA and PGA sizes.
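Turning AMM off amounts to zeroing memory_target and sizing the SGA and PGA explicitly; a sketch (the 2G/1G figures are placeholders you would tune for your own system):

```sql
alter system set memory_target=0 scope=spfile;
alter system set sga_target=2G scope=spfile;
alter system set pga_aggregate_target=1G scope=spfile;
-- restart the instance for the spfile changes to take effect
```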

If you want to analyze Oracle’s use of memory and look at various memory resizing operations you can use the v$memory_resize_ops view. This view contains a list of the last 800 SGA resize requests handled by Oracle. Here is an example:

select parameter, initial_size, final_size, status
  from v$memory_resize_ops
 where initial_size >= 0
   and final_size >= 0
 order by start_time;

Please find below the various tables that are related to AMM and their descriptions to check the information in the database.

v$memory_dynamic_components – displays information on the current size of all automatically tuned and static memory components, with the last operation (for example, grow or shrink) that occurred on each.

v$sga_dynamic_components — displays the current sizes of all SGA components, and the last operation for each component.

v$sga_dynamic_free_memory — displays information about the amount of SGA memory available for future dynamic SGA resize operations.

v$memory_current_resize_ops — displays information about resize operations that are currently in progress. A resize operation is an enlargement or reduction of the SGA, the instance PGA, or a dynamic SGA component.

v$sga_current_resize_ops — displays information about dynamic SGA component resize operations that are currently in progress.

v$memory_resize_ops — displays information about the last 800 completed memory component resize operations, including automatic grow and shrink operations for sga_target and pga_aggregate_target.

v$sga_resize_ops — displays information about the last 800 completed SGA component resize operations.

Gathering Stats

-- Example: gather stats for one table (schema and table names are placeholders)
BEGIN
  DBMS_STATS.gather_table_stats(ownname          => 'SCOTT',
                                tabname          => 'EMP',
                                estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                                method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                                granularity      => 'AUTO',
                                cascade          => TRUE,
                                no_invalidate    => FALSE);
END;
/