#######################################################
# GG These are all GoldenGate functions
#######################################################

#######################################################
# GG functions
# All of the GG functions should go here
#

# gg_init - Set some variables for other functions

# gg_init_dbrd - GoldenGate init dashboard

# gg_unzip - Unzip the downloads for GG binaries; set
# variables in gg_init first

# gg_zipfile_chk - Check for the distribution directory; set variables
# in gg_init first

# gg_inst -

# cr_gg_env - Create the GoldenGate environment files
# this should take into account the existing stuff
# already established by .profile_env

# gg_chk_files -

# xag_unzip -
# xag_inst_dir -
# xag_install -
# chk_xag -
# file_chk_ggzip -
# gg_chk_envfile -
# dir_chk_gen -
# dir_chk_ggstr -
# dir_chk_ggexe -
# gg_chk_usr -
# qq_qa_chk -
# add_gg_vip -
# chk_vip -
# chk_gg_fs -
# ggcr_file01_cr -

# mkdir_dbfs -
# inst_fuse -
#######################################################



gg_init_dbrd()
{

echo
echo " Change variables in gg_common.ksh gg_init...."
echo " display with gg_init_dbrd...................."
echo

echo "....GoldenGate Version...: " $ggver
echo "....GoldenGate Filename..: " $ggflnm
echo "....GoldenGate Directory.: " $ggdir
echo "....GoldenGate Home......: " $GGS_HOME
echo "....XAG Version..........: " $xagver
echo "....XAG directory........: " $xagdir


}




gg_init()
######################################################
# These variables should be set prior to setup
######################################################
{
# GoldenGate version
ggver="12.3"

# OGG_HOME=$GGS_HOME

# export DBFS_MNT=/dbfs

# This is the name of the goldengate zip file
ggflnm="123010_fbo_ggs_Linux_x64_shiphome.zip"

# This is the name of the directory GoldenGate software is stored
ggdir="/ora01/software/gg123/"

# This is the XAG version
xagver="8.1b"

# This is the name of the directory where the XAG software is stored
xagdir="/ora01/software/xag81b/"

# This is the name of the xag zip file
xagflnm="xagpack81b.zip"

# Read the GG environment files
. ./ggoraenv.env

# Show the variable status
gg_init_dbrd

}

The Integrated Capture mode, also known as the Integrated Extract process, introduced in GoldenGate 12c (and backported to 11gR2), is one of the more interesting and useful features of this release. The capture process is the component responsible for extracting DML transactional data and DDL from the source database redo log files. This data is written to local trail files, which are eventually moved to the destination database to be applied there.

Here is the list of topics covered in this article.

• What is the GoldenGate Integrated Capture Mode?
• Integrated Extract vs Classic Capture
• On-Source Capture
• Downstream Capture
• Prerequisites
• Configuration
• Monitoring/Views

What is the GoldenGate Integrated Capture Mode?

Integrated Capture (IC) mode is a new form of the Extract process in which the process is moved closer to, in fact inside, the source database. The traditional Classic Extract process works on the redo logs outside the domain of the actual database. In Integrated Capture mode, a server logmining process is started inside the database, which extracts all the DML data and DDL statements and creates Logical Change Records (LCRs). These are then handed to the GoldenGate memory processes, which write the LCRs to the local trail files. This logmining server is not the LogMiner utility we are used to in the database, but a similar mechanism that has been tuned and enhanced specifically for use by the GoldenGate processes.

The purpose of moving this inside the database is to make use of the database's existing internal procedures, which makes it easier to support newer Oracle features sooner than was previously possible. Thanks to this change, Oracle is now able to provide the following:

• Full support of Basic, OLTP and EHCC compressed data.
• No need to fetch LOBs from tables.
• Full SecureFiles support for SecureFile LOBs.
• Full XML support.
• Automatically handles the addition of nodes and threads in a RAC environment.
• Senses node up/down events in RAC and handles them transparently in its processes.
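
The Configuration topic listed in the outline reduces to a few GGSCI commands once a suitably privileged database user exists. A minimal sketch, where the extract group exti, the ggadmin credentials, and the trail prefix lt are all hypothetical names, not from this article:

```
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin_pwd
GGSCI> REGISTER EXTRACT exti DATABASE
GGSCI> ADD EXTRACT exti, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT exti
```

REGISTER EXTRACT ... DATABASE is the step that starts the logmining server inside the database for this extract group.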
Integrated Capture vs Classic Capture

The Integrated Capture mode offers the following:

• Integrated with database features.
• Can mine redo on a secondary (downstream) database, including redo from earlier database versions.
• More efficient: it does not have to fetch data because of the datatype, etc.
• No longer necessary to set THREADS, ASMUSER, ASMBUFSIZE, DBLOGREADER or DECRYPTPASSWORD.
• No additional manual steps required for RAC; it is transparent with RAC.
Integrated Capture Modes

Integrated capture supports two types of deployment configurations:
• On-Source Capture
• Downstream Capture

On-Source Capture

When the integrated capture process is configured in on-source capture mode, the capture process runs on the actual source database server itself. Changes, as they happen on the source database, are captured locally, then routed, transformed and applied to the target database in near real time.

This may seem convenient, but consideration needs to be given to the additional workload this process places on the database server. However, if real-time replication is required, this is the best option.

Note: All features are supported in both on-source and downstream deployments.
Downstream Capture

In downstream mode, the capture process is configured to run on a remote database host. The redo logs from the source database are shipped to this remote server using Data Guard redo transport and are mined there by the capture process.

In this mode there is inherent latency, because a redo log on the source must switch before it can be shipped downstream. There will therefore be some delay in the replication of data to the target database, as extraction is delayed by the log switch. The main benefit of this setup, however, is that it offloads the resource usage from the source server.

To overcome the log-switch latency, Oracle provides near real-time capture using standby redo logs for extraction. In this configuration, redo from the source is continuously written into the standby redo logs of the downstream database, and the capture process captures the data directly from there.
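
The redo shipping described here is standard Data Guard redo transport configured on the source. A hedged sketch of the source-side parameters, where the TNS alias dwnstrm for the downstream mining database is an assumption:

```sql
-- Source database: ship redo to the downstream database (alias is hypothetical)
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=dwnstrm ASYNC NOREGISTER
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;
```

For the near real-time mode described above, standby redo logs sized to match the source online redo logs must also be created on the downstream database.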

When deciding between Integrated capture and Classic capture, it is important to keep in mind that both configurations will remain available in future releases. However, Oracle recommends the new Integrated capture mechanism: Oracle will not be adding new features to Classic capture, which will remain only for legacy support purposes.
Prerequisites

The database where integrated capture runs:
• Must be at least 11.2.0.3
• Database patch 1411356.1 must be installed.
• Works with Oracle 12c.

Setup Of GG Hub Article

If the GoldenGate hub server is a standalone environment, the first step is to install the most recent Oracle client on the hub server, choosing the Administrator installation type.

In a hub configuration, GoldenGate is installed only on the hub server; it is not installed on the database servers. The Oracle client is installed on the hub server, and we will be using the thick client.

A hub environment can also be installed in a fault-tolerant configuration (RAC, DBFS, and XAG with Data Guard).

GGS_HOME needs to be set; make this the actual GoldenGate installation directory:
export GGS_HOME=/oracle/product/12.2/gg_1

These aliases allow quick navigation:
alias gh='cd $GGS_HOME'
alias gd='cd /ggate'

export PATH=$GGS_HOME:$PATH
export LD_LIBRARY_PATH=$GGS_HOME:$LD_LIBRARY_PATH

GoldenGate Downloads

GoldenGate Lag Script

GoldenGate Script Library gg_common.ksh

XAG Download 8.1

Adding Existing Tables to GG Replication

Basic Replication

Basic Config 001

Basic Config 002

ATS Group Basic setup

 

Configure Oracle DBFS or ACFS on Exadata

Create the dbs_group file for dcli

As the user root, create a text file in the home directory of the root user called dbs_group. This file will contain the names of both the X5-2 compute nodes.

We will be using the dcli utility to run commands on all compute nodes in the Exadata rack, and this file will be used for that purpose when we run the 'dcli -g' command.
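
A minimal sketch of building the file, with the node names assumed from the example rack used throughout this article:

```shell
# Build the dbs_group file with one compute-node name per line.
# Node names below are the example X5-2 nodes from this article.
cat > "$HOME/dbs_group" <<EOF
exdb1db01
exdb1db02
EOF

# dcli expects exactly one hostname per line
cat "$HOME/dbs_group"
```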

[root@exdb1db01 ~]# dcli -g dbs_group -l root hostname
exdb1db01: exdb1db01.gavin.com.au
exdb1db02: exdb1db02.gavin.com.au

1.1 Add the oracle user to the fuse group
[root@exdb1db01 ~]# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle

1.2 Add the user_allow_other option to the fuse.conf file

[root@exdb1db01 ~]# dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
[root@exdb1db01 ~]# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf

Note – on the Exadata servers, the required fuse RPM packages are installed by default.

1.3 Create the mount points and give appropriate permissions
On both compute nodes we will create the mount points that will be used to mount the DBFS file system.
Since the objective is to have multiple mount points, each dedicated to a separate database or environment, we will create the mount points with the naming convention /dbfs/.
Change the ownership of the mount points to the oracle user:

dcli -g ~/dbs_group -l root mkdir /dbfs/dev2
dcli -g ~/dbs_group -l root chown oracle:oinstall /dbfs/dev2

1.4 Create tablespaces and users
As the user SYS, we will create two tablespaces which will be used to store the LOB objects associated with the DBFS file system.
We will create the dbfs_gg_dirtmp tablespace with the recommended NOLOGGING attribute, as it will be used to store the contents of the GoldenGate dirtmp directory.
Note: The size of the tablespace will depend on the volume of trail files expected to be generated as well as the required retention period for those trail files.
While the example shows the DBFS_DG ASM disk group being used to host the DBFS-related tablespaces, any ASM disk group with the required amount of free disk space can be used.
The DBFS_USER Oracle database user will be the owner of the DBFS-related database objects; we create the user and grant the appropriate privileges, especially the DBFS_ROLE database role.

create bigfile tablespace dbfs_gg_dirsrc
datafile '+DBFS_DG' size 32g autoextend on next 2g
LOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

create bigfile tablespace dbfs_gg_dirtmp
datafile '+DBFS_DG' size 10g autoextend on next 2g
NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

create user dbfs_user identified by Oracle#123
default tablespace dbfs_gg_dirsrc
temporary tablespace temp
quota unlimited on dbfs_gg_dirsrc
quota unlimited on dbfs_gg_dirtmp;

GRANT create session, create table, create view,create procedure, dbfs_role TO dbfs_user;

1.5 Create the DBFS file system
We will next connect as DBFS_USER and run the dbfs_create_filesystem.sql script to create the necessary DBFS-related database objects.

The dbfs_create_filesystem.sql script takes two parameters: the tablespace name and the DBFS file system name.

SQL> conn dbfs_user/
Connected.

SQL> @?/rdbms/admin/dbfs_create_filesystem dbfs_gg_dirsrc ogg_dev2

No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_OGG_DEV2', tbl_name =>
'T_OGG_DEV2', tbl_tbs => 'dbfs_gg_dirsrc', lob_tbs => 'dbfs_gg_dirsrc',
do_partition => false, partition_key => 1, do_compress => false, compression =>
'', do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_OGG_DEV2',
provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_OGG_DEV2',
store_mount=>'ogg_dev2'); end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/ogg_dev2', 16895); end;
No errors.

SQL>@?/rdbms/admin/dbfs_create_filesystem dbfs_gg_dirtmp ogg_dirtmp_dev2

No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_OGG_DIRTMP_DEV2',
tbl_name => 'T_OGG_DIRTMP_DEV2', tbl_tbs => 'dbfs_gg_dirtmp', lob_tbs =>
'dbfs_gg_dirtmp', do_partition => false, partition_key => 1, do_compress =>
false, compression => '', do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_OGG_DIRTMP_DEV2',
provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_OGG_DIRTMP_DEV2',
store_mount=>'ogg_dirtmp_dev2'); end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/ogg_dirtmp_dev2', 16895);
end;
No errors.

1.6 Verify the DBFS LOB segment attributes
SQL> SELECT table_name, segment_name, logging, cache
2 FROM dba_lobs WHERE tablespace_name like 'DBFS%';

TABLE_NAME SEGMENT_NAME LOGGING CACHE
------------------------------ ------------------------------ ------- ----------
T_OGG_DEV2 LOB_SFS$_FST_1 YES NO
T_OGG_DIRTMP_DEV2 LOB_SFS$_FST_11 NO NO

1.7 Edit and customize the Oracle-supplied mount-dbfs.sh script
Download the file mount-dbfs-20160215.zip from MOS note 1054431.1 (Configuring DBFS on Oracle Exadata Database Machine).
Copy the file to a temporary directory on one of the database compute nodes and, as the user root, extract it.
We will now have two files: mount-dbfs.conf and mount-dbfs.sh.
Copy mount-dbfs.sh to mount-dbfs_.sh:
[root@exdb1db01 ~]# cd /tmp
[root@exdb1db01 tmp]# cp mount-dbfs.sh mount-dbfs_dev2.sh
[root@exdb1db01 tmp]# cp mount-dbfs.conf mount-dbfs_dev2.conf

Edit the mount-dbfs_.sh script to reference the customized CONFIG file

[root@exdb1db01 tmp]# vi mount-dbfs_dev2.sh

### Ensure that when multiple mounts are used, there are separate copies
### of mount-dbfs.sh that reference separate CONFIG file pathnames
CONFIG=/etc/oracle/mount-dbfs_dev2.conf

1.8 Edit and customize the Oracle supplied mount-dbfs.conf script
Change the values for:
• DBNAME
• MOUNT_POINT
• DBFS_USER
• ORACLE_HOME
• GRID_HOME
• DBFS_PASSWD

### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
### If using PDB/CDB, this should be set to the CDB name
DBNAME=DEV2

### Mount point where DBFS should be mounted
MOUNT_POINT=/dbfs/dev2

### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user

### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u01/app/oracle/product/11.2.0/shieldnp_1

### GRID HOME directory path
GRID_HOME=/u01/app/12.1.0/grid_1

###########################################
### If using password-based authentication, set these
###########################################
### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=Oracle#123

1.9 Copy the modified files to $GRID_HOME/crs/script as well as /etc/oracle and grant appropriate privileges
dcli -g ~/dbs_group -l root -d /u01/app/12.1.0/grid_1/crs/script -f /tmp/mount-dbfs_dev2.sh
dcli -g ~/dbs_group -l root chown oracle:oinstall /u01/app/12.1.0/grid_1/crs/script/mount-dbfs_dev2.sh
dcli -g ~/dbs_group -l root chmod 750 /u01/app/12.1.0/grid_1/crs/script/mount-dbfs_dev2.sh
dcli -g ~/dbs_group -l root -d /etc/oracle -f /tmp/mount-dbfs_dev2.conf
dcli -g ~/dbs_group -l root chown oracle:oinstall /etc/oracle/mount-dbfs_dev2.conf
dcli -g ~/dbs_group -l root chmod 640 /etc/oracle/mount-dbfs_dev2.conf

1.10 Create the script for mounting the DBFS file system
We will create the add-dbfs-resource_.sh script. This script will be used to create the clusterware resource that mounts the DBFS file system.
Note that the add-dbfs-resource script references the customized mount-dbfs_.sh script we created earlier as its action script.

[root@exdb1db01 tmp]# cd /u01/app/12.1.0/grid_1/crs/script
[root@exdb1db01 script]# vi add-dbfs-resource_dev2.sh
##### start script add-dbfs-resource_dev2.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/12.1.0/grid_1/crs/script/mount-dbfs_dev2.sh
RESNAME=dbfs_mount_dev2
DBNAME=DEV2
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/oracle/product/11.2.0/shieldnp_1
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
/u01/app/12.1.0/grid_1/bin/crsctl add resource $RESNAME \
-type local_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource_dev2.sh

Change the ownership of the script to oracle

[root@exdb1db01 script]# chown oracle:oinstall add-dbfs-resource_dev2.sh

1.11 As the OS user oracle run the add-dbfs-resource script to create the resource
[root@exdb1db01 script]# su - oracle
[oracle@exdb1db01 ~]$ cd /u01/app/12.1.0/grid_1/crs/script
[oracle@exdb1db01 script]$ ./add-dbfs-resource_dev2.sh

1.12 As oracle start the resource using crsctl – this will mount the DBFS file system
[oracle@exdb1db01 ~]$ cd /u01/app/12.1.0/grid_1/bin
[oracle@exdb1db01 bin]$ ./crsctl start resource dbfs_mount_dev2
CRS-2672: Attempting to start 'dbfs_mount_dev2' on 'exdb1db01'
CRS-2672: Attempting to start 'ora.dev2.db' on 'exdb1db02'
CRS-2676: Start of 'dbfs_mount_dev2' on 'exdb1db01' succeeded
CRS-2676: Start of 'dbfs_mount_dev2' on 'exdb1db02' succeeded

1.13 Check the status of the resource
[oracle@exdb1db01 bin]$ ./crsctl stat res dbfs_mount_dev2
NAME=dbfs_mount_dev2
TYPE=local_resource
TARGET=ONLINE , ONLINE
STATE=ONLINE on exdb1db01, ONLINE on exdb1db02

[oracle@exdb1db01 bin]$ exit
logout

1.14 As root create the Application VIP
[root@exdb1db01 script]# cd /u01/app/12.1.0/grid_1/bin

[root@exdb1db01 bin]# ./appvipcfg create -network=1 -ip=10.100.24.28 -vipname=ogg_vip_dev2 -user=root

[root@exdb1db01 bin]# ./crsctl setperm resource ogg_vip_dev2 -u user:oracle:r-x
[root@exdb1db01 bin]# ./crsctl setperm resource ogg_vip_dev2 -u user:grid:r-x
[root@exdb1db01 bin]# ./crsctl start resource ogg_vip_dev2
CRS-2672: Attempting to start 'ogg_vip_dev2' on 'exdb1db02'
CRS-2676: Start of 'ogg_vip_dev2' on 'exdb1db02' succeeded

We can see that the VIP is running on exdb1db02; we can relocate it to exdb1db01.

[root@exdb1db01 bin]# ./crsctl relocate resource ogg_vip_dev2
CRS-2673: Attempting to stop 'ogg_vip_dev2' on 'exdb1db02'
CRS-2677: Stop of 'ogg_vip_dev2' on 'exdb1db02' succeeded
CRS-2672: Attempting to start 'ogg_vip_dev2' on 'exdb1db01'
CRS-2676: Start of 'ogg_vip_dev2' on 'exdb1db01' succeeded

Now check the status of the resource – we can see it running on exdb1db01

[root@exdb1db01 bin]# ./crsctl status resource ogg_vip_dev2
NAME=ogg_vip_dev2
TYPE=app.appvipx.type
TARGET=ONLINE
STATE=ONLINE on exdb1db01

1.15 Check if the DBFS file systems for each database environment are mounted and directories are present
[root@exdb1db01 bin]# df -k |grep dbfs
dbfs-dbfs_user@:/ 56559616 232 56559384 1% /dbfs_dev2

[root@exdb1db01 bin]# cd /dbfs_dev2/
[root@exdb1db01 dbfs_dev2]# ls -l
total 0
drwxrwxrwx 3 root root 0 Feb 25 11:56 ogg_dev2
drwxrwxrwx 3 root root 0 Feb 25 11:57 ogg_dirtmp_dev2


2 Configure Grid Infrastructure Agent

2.1 Create the directories on the DBFS file system

[oracle@exdb1db01 ogg_dev2pd]$ pwd
/dbfs/dev2pd/ogg_dev2pd

[oracle@exdb1db01 ]$ mkdir dirpcs
[oracle@exdb1db01 ]$ mkdir dirchk
[oracle@exdb1db01 ]$ mkdir dirdat
[oracle@exdb1db01 ]$ mkdir dirprm
[oracle@exdb1db01 ]$ mkdir dircrd
[oracle@exdb1db01 ]$ mkdir BR

[oracle@exdb1db01 dev2pd]$ cd ogg_dirtmp_dev2pd
[oracle@exdb1db01 ogg_dirtmp_dev2pd]$ pwd
/dbfs/dev2pd/ogg_dirtmp_dev2pd

[oracle@exdb1db01 ogg_dirtmp]$ mkdir dirtmp

2.2 On each compute node rename the existing directories in the GoldenGate software home

[oracle@exdb1db01 dev2]$ mkdir BR

[oracle@exdb1db01 dev2]$ mv dirchk dirchk.bkp
[oracle@exdb1db01 dev2]$ mv dirdat dirdat.bkp
[oracle@exdb1db01 dev2]$ mv dirpcs dirpcs.bkp
[oracle@exdb1db01 dev2]$ mv dirprm dirprm.bkp
[oracle@exdb1db01 dev2]$ mv dircrd dircrd.bkp
[oracle@exdb1db01 dev2]$ mv dirtmp dirtmp.bkp

2.3 Create the symbolic links
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirdat dirdat
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirchk dirchk
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirpcs dirpcs
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirprm dirprm
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/BR BR
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dircrd dircrd
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dirtmp_dev2pd/dirtmp dirtmp
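
Steps 2.2 and 2.3 can be automated with one loop. A hedged sketch: the paths in the example invocation are the values used in this article, and link_gg_dirs is a hypothetical helper name, not part of any Oracle tooling.

```shell
# link_gg_dirs: back up each local GoldenGate directory and replace it
# with a symlink to the shared copy on DBFS.
link_gg_dirs() {
  gg_home="$1"    # local GoldenGate software home
  dbfs_ogg="$2"   # shared DBFS directory holding the real directories
  cd "$gg_home" || return 1
  for d in dirchk dirdat dirpcs dirprm dircrd BR; do
    if [ -e "$d" ]; then
      mv "$d" "$d.bkp"              # keep a backup of the original
    fi
    ln -s "$dbfs_ogg/$d" "$d"       # re-point at the shared copy
  done
}

# Example invocation (dirtmp still needs its own link to the dirtmp file system):
# link_gg_dirs /u01/app/oracle/product/gg12.2/dev2 /dbfs/dev2pd/ogg_dev2pd
```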

2.4 Download Oracle Grid Infrastructure Agent

From the URL below download the file: xagpack_7b.zip

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html

2.5 Copy the downloaded xagpack_7b.zip file to Grid user $HOME and unzip

[grid@exdb1db01 ~]$ ls
xagpack_7b.zip

[grid@exdb1db01 ~]$ unzip xagpack_7b.zip
Archive: xagpack_7b.zip
creating: xag/
inflating: xag/xagsetup.bat
creating: xag/lib/
inflating: xag/lib/facility.lis
inflating: xag/agcommon.pm
inflating: xag/agjdeas.pm
creating: xag/bin/
inflating: xag/bin/oerr.pl
inflating: xag/xagsetup.sh



inflating: xag/mesg/xagus.be
inflating: xag/mesg/xagus.msg
inflating: xag/mesg/xagus.msb
inflating: xag/agmysqlmonas.pm
inflating: xag/readme.txt
inflating: xag/agwl.pm

2.6 Two directories will be created - xag and xagent

[grid@exdb1db01 xag]$ pwd
/home/grid/xag
[grid@exdb1db01 xag]$ cd ..
[grid@exdb1db01 ~]$ ls
xag xagent xagpack_7b.zip

2.7 Run the xagsetup.sh script (as the Grid Infrastructure owner)

Note – this will install the Grid Infrastructure Agent files in the xagent directory (on both compute nodes)

[grid@exdb1db01 xag]$ ./xagsetup.sh --install --directory /u01/app/grid/xagent --all_nodes
Installing Oracle Grid Infrastructure Agents on: exdb1db01
Installing Oracle Grid Infrastructure Agents on: exdb1db02

If we try to install the Grid Infrastructure Agents under $GRID_HOME, we will see an error as shown below:

[grid@exdb1db01 xag]$ ./xagsetup.sh --install --directory /u01/app/12.1.0/grid_1/xagent --all_nodes
Installation directory cannot be under Clusterware home.

2.8 As oracle, run the AGCTL command to create the GoldenGate resource

[root@exdb1db01 bin]# su - oracle
[oracle@exdb1db01 ~]$ cd /u01/app/grid/xagent/bin

[oracle@exdb1db01 bin]$ ./agctl add goldengate ogg_dev2 --gg_home /u01/app/oracle/product/gg12.2/dev2 --instance_type source --nodes exdb1db01,exdb1db02 --vip_name ogg_vip_dev2 --filesystems dbfs_mount_dev2pd --databases ora.dev2pd.db --oracle_home /u01/app/oracle/product/11.2.0/shieldnp_1

2.9 Start and stop GoldenGate using AGCTL

[oracle@exdb1db01 bin]$ ./agctl status goldengate ogg_dev2
Goldengate instance 'ogg_dev2' is not running

[oracle@exdb1db01 bin]$ ./agctl start goldengate ogg_dev2

[oracle@exdb1db01 bin]$ ./agctl status goldengate ogg_dev2
Goldengate instance 'ogg_dev2' is running on exdb1db01

If we check via GGSCI, we can see the manager process is now up and running on compute node exdb1db01

[oracle@exdb1db01 bin]$ cd -
/u01/app/oracle/product/gg12.2/dev2
[oracle@exdb1db01 dev2]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (exdb1db01.gavin.com.au) 1> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING

Note that manager is stopped on compute node exdb1db02

GGSCI (exdb1db02.gavin.com.au) 3> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER STOPPED

2.10 Relocate GoldenGate using AGCTL

[oracle@exdb1db01 bin]$ ./agctl relocate goldengate ogg_dev2

[oracle@exdb1db01 bin]$ ./agctl status goldengate ogg_dev2
Goldengate instance 'ogg_dev2' is running on exdb1db02

Now manager is running on exdb1db02

GGSCI (exdb1db02.gavin.com.au) 3> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING


#!/bin/ksh
############################################
# Name: ggs_lag.ksh #
# PURPOSE: TO MONITOR LAG OF GOLDEN GATE #
# NOTE: THIS SCRIPT CALLS ggs.ksh #
# THIS SCRIPT NOTIFIES IF LAG IS MORE THAN 30 MIN #
# ONLY FOR EXT AND PMP PROCESS GROUPS #
############################################
export GGATE=/opt/oracle/u01/app/oracle/ggs
alias gate='clear;cd $GGATE;./ggsci'
export PATH=/opt/oracle/u01/app/oracle/ggs:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/opt/oracle/u01/app/oracle/ggs
LOGDIR=/export/home/oracle/dba_scripts/ggs/logs
EMAILFile=$LOGDIR/ggs_email.log
BOX=$(uname -a | awk '{print $2}')

##########################################################################
# RUNNING SCRIPT TO GET GOLDEN GATE INFORMATION #
##########################################################################

/export/home/oracle/dba_scripts/ggs/ggs.ksh > $LOGDIR/ggs_1.log

#to check when script was running

echo "script ggsksh completed from ggs_lag at `date`" >> /tmp/ggs_check.log

##################################################################################
## FORMATTING INFORMATION: change cut -d":" -f 1,4 TO cut -d":" -f 1,2 ##
## to get information about lag instead of checkpoint ##
## this command grep only EXT_ and PMP_ if you need more pattern ##
## if you need more pattern to be grep'd please add to '(EXT_|PMP_|pattern)' ##
##################################################################################

cat $LOGDIR/ggs_1.log|egrep -i '(EXT_|PMP_)'|cut -d":" -f 1,2,4| tr ":" " "|tr -s '[:space:]'|cut -d" " -f1,2,3,4,5,6 > $LOGDIR/ggs_2.log

# uncomment below command if you want to get both lag and checkpoint information #

#cat $LOGDIR/ggs_1.log|egrep -i '(EXT_|PMP_|DART)'|cut -d":" -f 1,2,4| tr ":" " "|tr -s '[:space:]'|cut -d" " -f1,2,3,4,5,6 > $LOGDIR/ggs_2.log

# uncomment below command if you want lag information for running processes only #

#cat $LOGDIR/ggs_1.log|grep RUNNING|cut -d":" -f 1,2,4| tr ":" " "|tr -s '[:space:]'|cut -d" " -f1,2,3,4,5,6 > $LOGDIR/ggs_2.log

##########################################################################
## CHECKING FOR LAG MORE THAN 30 MIN FOR ABENDED PROCESSES ##
##########################################################################

awk '{if ( $4 > 00 || $5 >=30 ) {print $1 " " $3 " HAS LAG of " $4" hour " $5 " min -- at -- " d "\n"} else {print "NO LAG FOR " $3 " " d >> "/tmp/ggs_lag_fine.log" }}' d="$(date)" $LOGDIR/ggs_2.log > $LOGDIR/ggs_email.log

 

# uncomment below command if you want to get both lag and checkpoint information #

#awk '{if ($4 >=30 || $5>=30 ) {print $1 " " $3 " has lag of "$4" min with checkpoint of "$5" min -- at -- " d "\n"} else {print "NO LAG FOR " $3 " "d > "/tmp/ggs_lag_fine.log" }}' d="$(date)" $LOGDIR/ggs_2.log > $LOGDIR/ggs_email.log

##########################################################
## SENDING EMAIL IF ERRORS ARE IN LOGFILE ###
##########################################################

if [ -s $EMAILFile ]; then
  #echo "ERRORS FOUND"
  mailx -s "GG LAG FOUND ON: $BOX" your.email@gmail.com team.email@gmail.com < $EMAILFile
else
  #echo "ERRORS NOT FOUND"
  cat /dev/null > $EMAILFile
fi

################# SCRIPT END ######################

#!/bin/ksh
#########################################
#Name: ggs.ksh #
#THIS SCRIPT IS CALLED BY ggs_lag.ksh #
#########################################
#alias gate='clear;cd $GGATE;./ggsci'
echo "ggsksh started `date`" >> /tmp/ggs_check.log
export GGATE=/opt/oracle/u01/app/oracle/ggs
alias gate='clear;cd $GGATE;ggsci'
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/opt/oracle/u01/app/oracle/ggs
export PATH=/opt/oracle/u01/app/oracle/ggs:$PATH
#cd $GGATE
/opt/oracle/u01/app/oracle/ggs/ggsci <<EOF
info all
exit
EOF
echo "ggsksh completed at `date` " >> /tmp/ggs_check.log
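
To run ggs_lag.ksh on a schedule, a crontab entry along these lines can be used. The path comes from the script header above; the 30-minute interval is an assumption chosen to match the lag threshold:

```
# Hypothetical crontab entry: check GoldenGate lag every 30 minutes
*/30 * * * * /export/home/oracle/dba_scripts/ggs/ggs_lag.ksh > /dev/null 2>&1
```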


Here is a great resource: Oracle's A-Team provides top-level technical material on GoldenGate. This is a link to many of their published articles: A-Team Index of articles

Oracle GoldenGate Extract Recovery

OGG Security Best Practices

Oracle GoldenGate Best Practices: Configuring Downstream Integrated Extract

SQL Svr 2 SQL Svr GG

EC2 config GG hub

GoldenGate Cloud Service (GGCS) Replication from Amazon RDS Oracle Database to Oracle DBCS

Copyright 2024 IT Remote dot com