RAC Log File Locations

If you are using Oracle RAC (no matter how many nodes you have), you need to know where the log files are located.

The Cluster Ready Services Daemon (crsd) Log Files

Log files for the CRSD process (crsd) can be found in the following directory:

    $CRS_HOME/log/hostname/crsd

Oracle Cluster Registry (OCR) Log Files

The Oracle Cluster Registry (OCR) records log information in the following location:

    $CRS_HOME/log/hostname/client

Cluster Synchronization Services (CSS) Log Files

You can find CSS information that the OCSSD generates in log files in the following location:

    $CRS_HOME/log/hostname/cssd

Event Manager (EVM) Log Files

Event Manager (EVM) information generated by evmd is recorded in log files in the following location:

    $CRS_HOME/log/hostname/evmd

RACG Log Files

The Oracle RAC high availability trace files are located in the following two locations:

    $CRS_HOME/log/hostname/racg

$ORACLE_HOME/log/hostname/racg

Core files are in the sub-directories of the log directories. Each RACG executable has a sub-directory assigned exclusively for that executable. The name of the RACG executable sub-directory is the same as the name of the executable.
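
To spot recent core files across those per-executable sub-directories, a quick find works (a minimal sketch; the CRS home path below is an assumption, substitute your own):

# Hypothetical paths; set these for your environment
CRS_HOME=/u01/app/crs
HOST=$(hostname -s)

# Show core files from the last 7 days under both RACG log trees
find $CRS_HOME/log/$HOST/racg $ORACLE_HOME/log/$HOST/racg \
     -name 'core*' -mtime -7 -ls 2>/dev/null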

The following table summarizes the log file locations:

Oracle Clusterware log files

Cluster Ready Services Daemon (crsd) Log Files:
$CRS_HOME/log/hostname/crsd

Cluster Synchronization Services (CSS):
$CRS_HOME/log/hostname/cssd

Event Manager (EVM) information generated by evmd:
$CRS_HOME/log/hostname/evmd

Oracle RAC RACG:
$CRS_HOME/log/hostname/racg
$ORACLE_HOME/log/hostname/racg

Oracle RAC 11g Release 2 log files

Clusterware alert log:
$GRID_HOME/log/<host>/alert<host>.log

Disk Monitor daemon:
$GRID_HOME/log/<host>/diskmon

OCRDUMP, OCRCHECK, OCRCONFIG, CRSCTL:
$GRID_HOME/log/<host>/client

Cluster Time Synchronization Service:
$GRID_HOME/log/<host>/ctssd

Grid Interprocess Communication daemon:
$GRID_HOME/log/<host>/gipcd

Oracle High Availability Services daemon:
$GRID_HOME/log/<host>/ohasd

Cluster Ready Services daemon:
$GRID_HOME/log/<host>/crsd

Grid Plug and Play daemon:
$GRID_HOME/log/<host>/gpnpd

Multicast Domain Name Service daemon:
$GRID_HOME/log/<host>/mdnsd

Event Manager daemon:
$GRID_HOME/log/<host>/evmd

RAC RACG (only used if pre-11.1 database is installed):
$GRID_HOME/log/<host>/racg

Cluster Synchronization Service daemon:
$GRID_HOME/log/<host>/cssd

Server Manager:
$GRID_HOME/log/<host>/srvm

HA Service Daemon Agent:
$GRID_HOME/log/<host>/agent/ohasd/oraagent_oracle11

HA Service Daemon CSS Agent:
$GRID_HOME/log/<host>/agent/ohasd/oracssdagent_root

HA Service Daemon ocssd Monitor Agent:
$GRID_HOME/log/<host>/agent/ohasd/oracssdmonitor_root

HA Service Daemon Oracle Root Agent:
$GRID_HOME/log/<host>/agent/ohasd/orarootagent_root

CRS Daemon Oracle Agent:
$GRID_HOME/log/<host>/agent/crsd/oraagent_oracle11

CRS Daemon Oracle Root Agent:
$GRID_HOME/log/<host>/agent/crsd/orarootagent_root

Grid Naming Service daemon:
$GRID_HOME/log/<host>/gnsd
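
With this layout, the clusterware alert log is usually the first stop when troubleshooting; for example (a sketch; the GRID_HOME value below is an assumption for illustration):

GRID_HOME=/u01/app/11.2.0/grid    # adjust to your installation
HOST=$(hostname -s)

# Follow the clusterware alert log on this node
tail -f $GRID_HOME/log/$HOST/alert$HOST.log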

Directories for RAC

Filesystems
/oracle – binaries for the database software, 50GB–100GB (enough for the current binaries plus an upgrade or a fresh download if necessary)
/oracle_crs – binaries for the grid infrastructure, 50GB–100GB (enough for the current binaries plus an upgrade or a fresh download if necessary)
/ora01 – at least 100GB for each server in the cluster, mounted on all servers

These are the GoldenGate-related filesystems:
/ggate
/ggate2
/ggtrail
/ggtrail02

These are ACFS-based mounts:
/oracle_homes
/acfs_test_mount
/acfs_test2_mount

ADR trace directories for ASM and the listeners live under the grid infrastructure mount:

/oracle_crs/crs/diag/asm/+asm/+ASM*/trace

/oracle_crs/crs/diag/tnslsnr/*/listener/trace
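
A quick way to verify that each expected filesystem is mounted on a node (a minimal sketch using the mount points above; mountpoint is available on most Linux distributions):

# Report any expected RAC mount that is missing on this node
for fs in /oracle /oracle_crs /ora01 /ggate /ggate2 /ggtrail /ggtrail02 \
          /oracle_homes /acfs_test_mount /acfs_test2_mount; do
    if mountpoint -q "$fs"; then
        df -h "$fs" | tail -1
    else
        echo "NOT MOUNTED: $fs"
    fi
done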

Oracle Tools for RAC / Cluster systems

ORAchk Health Checks For The Oracle Stack 1268927.1

ORAchk replaces the popular RACcheck tool, extending its coverage based on prioritization of the top issues reported by users, and proactively scans for known problems.
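
A typical on-demand run, assuming the kit has been unzipped to /opt/orachk (an illustrative path; see Doc ID 1268927.1 for the download and staging details), looks like this:

cd /opt/orachk        # staging directory is an assumption
./orachk -a           # -a performs all checks and writes an HTML report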

Oracle Exadata Best Practices – 757552.1
TFA Collector (Trace File Analyzer) – Tool for Enhanced Diagnostic Gathering – 1513912.1 (see the collection example after this list)
OSWatcher – 301137.1
Procwatcher – 459694.1
oratop – 1500864.1
SQLT – 215187.1 (the same note links to a tutorial)
RDA – 314442.1 (the note also covers DA, the Diagnostic Assistant GUI for RDA)
Service Tools Bundle
DCLI
eDB360
eAdam
Tanel Poder Scripts
SQL Developer
SQL Developer Data Modeler
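
TFA ships with a command-line front end, tfactl; a typical one-off collection looks like this (a sketch; typically run as root, and options vary by TFA version):

# Collect diagnostics from the cluster for the default time window
tfactl diagcollect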

Oracle RAC Instances
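
Run the script below from SQL*Plus as a privileged user; it reports the name, thread, host, and status of every instance in the cluster from gv$instance.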

SET LINESIZE  145
SET PAGESIZE  9999
SET VERIFY    off

COLUMN instance_name          FORMAT a13         HEAD 'Instance|Name / Number'
COLUMN thread#                FORMAT 99999999    HEAD 'Thread #'
COLUMN host_name              FORMAT a13         HEAD 'Host|Name'
COLUMN status                 FORMAT a6          HEAD 'Status'
COLUMN startup_time           FORMAT a20         HEAD 'Startup|Time'
COLUMN database_status        FORMAT a8          HEAD 'Database|Status'
COLUMN archiver               FORMAT a8          HEAD 'Archiver'
COLUMN logins                 FORMAT a10         HEAD 'Logins?'
COLUMN shutdown_pending       FORMAT a8          HEAD 'Shutdown|Pending?'
COLUMN active_state           FORMAT a6          HEAD 'Active|State'
COLUMN version                                   HEAD 'Version'

SELECT
instance_name || ' (' || instance_number || ')' instance_name
, thread#
, host_name
, status
, TO_CHAR(startup_time, 'DD-MON-YYYY HH:MI:SS') startup_time
, database_status
, archiver
, logins
, shutdown_pending
, active_state
, version
 FROM
   gv$instance
ORDER BY
   instance_number
/

SRVCTL Commands: Cluster Health

SCAN Information
echo "Status SCAN-------------------------------------"
echo "================================================"
srvctl status scan

Status SCAN-------------------------------------
================================================
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node host121
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node host120
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node host120

echo
echo "Status listener---------------------------------"
echo "================================================"
srvctl status listener

echo
echo "Status config SCAN------------------------------"
echo "================================================"
srvctl config scan

echo
echo "Status config SCAN_LISTENER---------------------"
echo "================================================"
srvctl config scan_listener

echo
echo "Services config --------------------------------"
echo "================================================"
srvctl status service -d TEST01
Service TEST01_SVC_01 is running on instance(s) TEST011,TEST012
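
The checks above can be wrapped into a single script (a sketch; run as the grid or oracle software owner, and note that the database name TEST01 above is just an example):

#!/bin/bash
# Quick RAC health snapshot via srvctl
for cmd in "status scan" "status scan_listener" "status listener" \
           "config scan" "config scan_listener"; do
    echo "=== srvctl $cmd ==="
    srvctl $cmd
    echo
done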

srvctl config database – lists the databases configured in the cluster

Once the databases are listed, you can look at the configuration of each one with:

srvctl config database -d <db_unique_name>

To shut down the database on one node only, stop that instance:

srvctl stop instance -d <db_unique_name> -i <instance_name>

Then log in as root and stop the clusterware stack on that node:

crsctl stop crs
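
When maintenance is done, the same node is brought back with crsctl (as root); crsctl check crs confirms the stack is healthy:

# As root, on the node that was stopped:
crsctl start crs      # restart the clusterware stack on this node
crsctl check crs      # verify CRS, CSS and EVM services are online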