GoldenGate Setup for the Hub

Steps to set up a new environment on a node.

Assumptions.

  • The required disk has been made available to the Hub database system.
  • XAG has been installed (9.1 is the current version).
  • All required software has been installed (fuse, GoldenGate, CRS, XAG, etc.).
  • All required settings and logging are configured at the DB level.
  • All installs are in the same location on all servers.

 

These are all the manual steps required to implement resiliency for the GoldenGate hub. Mike Culp has written scripts to do this process. In general, using scripting for a process that covers many different servers is a best practice, as it ensures the installs are done in an identical fashion. This reduces the complexity of all maintenance.

 

Setup Overview

Each cluster on the Hub has its own database. This database is used to supply the file system(s) for the trail files and other OGG files required for recovery. As a best practice, the DBFS filesystem for the OGG files should only be mounted on one node at a time; otherwise duplicate processes could be started, which could cause corruption in the replication hub. The data disk can be mounted on all systems if desired. If there is only one data group in a replication instance, it is possible to put all files on that one mount and mount it to only one node at a time, but it is very difficult to undo that decision.

 

  1. Create the directories off the root filesystem (a short example follows this list).
  2. Grant the oracle user full control of the directory.
  3. Create the tablespace for DBFS – standard create tablespace command.  The following example uses a bigfile tablespace but that is NOT required.
    1. create bigfile tablespace dbfs_tblsp datafile '+DBFS_DG' size 32g autoextend on next 8g maxsize 300g NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE  SEGMENT SPACE MANAGEMENT AUTO ;
  4. Create a user for that tablespace.
    1. At a minimum, a database user must have the following privileges to create a file system: CONNECT, CREATE SESSION, RESOURCE, CREATE TABLE, CREATE PROCEDURE, and DBFS_ROLE.
      1. create user dbfs_user identified by dbfs_passwd default tablespace dbfs_tblsp quota unlimited on dbfs_tblsp;
      2. grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
  5. Create the file system, logging in as the user created above (a run example follows the validation output below).
    1. start dbfs_create_filesystem dbfs_tblsp dbfs_mnt
      1. This script takes two arguments:
        1. dbfs_tblsp: tablespace for the DBFS database objects
        2. dbfs_mnt: the filesystem name; this can be any string and will appear as a directory under the mount point
    2. Validate the file system is configured with NOCACHE and LOGGING:
      1. SQL>  SELECT logging, cache FROM dba_lobs WHERE tablespace_name='DBFS_TBLSP';

LOGGING CACHE
------- ----------
YES     NO
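
A minimal sketch of steps 1 and 2, assuming /mnt/dbfs_mnt as the mount point (matching the soft-link targets used later) and oracle:oinstall as the owning user and group; adjust both to your own standards:

mkdir -p /mnt/dbfs_mnt
chown oracle:oinstall /mnt/dbfs_mnt
chmod 775 /mnt/dbfs_mnt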
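
For step 5, the dbfs_create_filesystem script ships in $ORACLE_HOME/rdbms/admin; a typical run, connecting as the dbfs_user created above (adjust the connect string for your environment):

cd $ORACLE_HOME/rdbms/admin
sqlplus dbfs_user/dbfs_passwd
SQL> start dbfs_create_filesystem dbfs_tblsp dbfs_mnt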

  6. Modify/rename the mount-dbfs.conf and mount-dbfs.sh scripts for this instantiation. Each OGG instance will have its own “name” used to identify it within CRS; as such, a different set of conf and sh scripts will be needed for each OGG instance.
  7. Test the mount-dbfs.sh script (start, stop, status); this validates the setup. A test sequence is shown after this list.
    1. Start should mount the file system.
    2. Stop should unmount.
    3. Status should return the status.
  8. Create a service for the instantiation (a start/status check is shown after this list).
    1. srvctl add service -database db_name -service ggname_svc -preferred node1 -available node2
  9. Create the clusterware setup for the dbfs file system. Use a cluster_resource type so it can only be mounted on one node. The data dbfs file system can use local_resource.
    1. $GRID_HOME/bin/crsctl add resource dbfs_mnt \
         -type cluster_resource \
         -attr "ACTION_SCRIPT=/ora01/scripts/ggcommon/mount-dbfs.sh, \
                CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
                START_DEPENDENCIES='hard(ora.raca_domain.db)pullup(ora.raca_domain.db)', \
                STOP_DEPENDENCIES='hard(ora.raca_domain.db)',SCRIPT_TIMEOUT=300"
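
A test sequence for step 7, assuming the script is installed as /ora01/scripts/ggcommon/mount-dbfs.sh (the path used in the crsctl resource above) and the mount point is /mnt/dbfs_mnt:

/ora01/scripts/ggcommon/mount-dbfs.sh start
/ora01/scripts/ggcommon/mount-dbfs.sh status
df -h /mnt/dbfs_mnt
/ora01/scripts/ggcommon/mount-dbfs.sh stop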
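
For step 8, the service can be started and verified with srvctl, using the same db_name and ggname_svc placeholders as the add command:

srvctl start service -database db_name -service ggname_svc
srvctl status service -database db_name -service ggname_svc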

 

  10. Test with crsctl (start, stop, and relocate) to validate the resource works.

$GRID_HOME/bin/crsctl start resource dbfs_mnt
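
The status, relocate, and stop checks follow the same pattern; relocate moves the resource, and therefore the mount, to the other node:

$GRID_HOME/bin/crsctl stat res dbfs_mnt -t
$GRID_HOME/bin/crsctl relocate resource dbfs_mnt -f
$GRID_HOME/bin/crsctl stop resource dbfs_mnt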

  11. Create the GoldenGate soft links to the dbfs mounts.
    1. cd $GG_HOME
    2. mv dirprm dirprm.old
    3. mv dirchk dirchk.old
    4. mv dircrd dircrd.old
    5. mv dirrpt dirrpt.old
    6. ln -s /mnt/dbfs_mnt/dirprm dirprm
    7. ln -s /mnt/dbfs_mnt/dirchk dirchk
    8. ln -s /mnt/dbfs_mnt/dircrd dircrd
    9. ln -s /mnt/dbfs_mnt/dirrpt dirrpt
  12. Create the CRS definition for GoldenGate using the XAG interface.
    1. $XAG_HOME/bin/agctl add goldengate GG_SOURCE \
         --gg_home /u01/oracle/goldengate \
         --oracle_home /u01/app/oracle/product/12.2.0/dbhome_1 \
         --db_services ggname_svc --use_local_services --filesystems dbfs_mnt
  13. Create the manager parameter file (a sample mgr.prm follows this list).
  14. Start and stop the GoldenGate instance using agctl, and validate that the mgr process started (see the agctl examples after this list).
  15. Create the required data dbfs file systems.
  16. Create the crsctl definitions for the data file systems, if desired. The only difference is that cluster_resource is not used (local_resource is used instead), as the data can be mounted on all systems with no issues. The only reason for the crsctl definitions is to start the mounts automatically; there is no failover required. (A local_resource example follows this list.)
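
For step 13, a minimal sample mgr.prm; the port and retention values are only illustrative and should follow your own standards:

PORT 7809
AUTOSTART ER *
AUTORESTART ER *, RETRIES 5, WAITMINUTES 2
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3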
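
For step 14, the instance is controlled through agctl once the XAG resource exists; GG_SOURCE is the resource name registered in step 12. Validate the mgr process with ggsci (info mgr) on the node where the resource is running.

$XAG_HOME/bin/agctl start goldengate GG_SOURCE
$XAG_HOME/bin/agctl status goldengate GG_SOURCE
$XAG_HOME/bin/agctl stop goldengate GG_SOURCE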
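
For step 16, the command is the same as step 9 except for the resource type; a sketch, assuming a separate data mount script and a resource named dbfs_data (both hypothetical names):

$GRID_HOME/bin/crsctl add resource dbfs_data \
  -type local_resource \
  -attr "ACTION_SCRIPT=/ora01/scripts/ggcommon/mount-dbfs-data.sh, \
         CHECK_INTERVAL=30,RESTART_ATTEMPTS=10,SCRIPT_TIMEOUT=300"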

The system is now ready for replication; however, the same setup, with the exception of the XAG and CRS configuration, is required on all nodes/clusters that may run this instantiation of GoldenGate. The XAG and CRS setup is only done once per cluster.

 
