Category Archives: GoldenGate 12.2

GoldenGate Integrated Capture Mode

The GoldenGate Integrated Capture mode, also known as the Integrated Extract process in 12c (and backported to 11gR2), is one of the more interesting and useful features released with this version. This capture process is the component responsible for extracting DML transactional data and DDL statements from the source database redo log files. The data is written to local trail files, which are eventually moved to the destination database and applied there.

Here is the list of topics covered in this article.

• What is the GoldenGate Integrated Capture Mode?
• Integrated Extract Goldengate vs Classic Capture
• On-Source Capture
• Downstream Capture
• Prerequisites
• Configuration
• Monitoring/Views

What is the GoldenGate Integrated Capture Mode?

Integrated Capture (IC) mode is a new form of the Extract process in which the capture logic is moved closer to, in fact inside, the source database. The traditional Classic Extract process works on the redo logs outside the domain of the actual database. In integrated capture mode, a logmining server is started inside the database, which extracts all DML data and DDL statements and creates Logical Change Records (LCRs). These are then handed to the GoldenGate memory processes, which write the LCRs to the local trail files. This logmining server is not the LogMiner utility we are used to in the database, but a similar mechanism that has been tuned and enhanced for specific use by the GoldenGate processes.

The purpose of moving this inside the database is to make use of the already existing internal procedures of the database, which makes it easier to support newer Oracle features faster than was previously possible. Due to this change, Oracle is now able to provide the following:

• Full support for Basic, OLTP and EHCC compressed data.
• No need to fetch LOBs from tables.
• Full SecureFiles support for SecureFile LOBs.
• Full XML support.
• Automatically handles the addition of nodes and threads in a RAC environment.
• Detects node up/down events in RAC and handles them transparently in its processes.
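
As a minimal sketch of how this looks in practice: registering the Extract with the database is what starts the logmining server described above. The Extract name exti1, the ggadmin credential and the trail prefix are hypothetical examples, not from the article:

```
GGSCI> DBLOGIN USERID ggadmin, PASSWORD <password>
GGSCI> REGISTER EXTRACT exti1 DATABASE
GGSCI> ADD EXTRACT exti1, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT exti1
```

REGISTER EXTRACT creates the logmining server inside the database; UNREGISTER EXTRACT removes it again.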

Integrated Capture vs Classic Capture

The Integrated Capture mode offers the following:

• Integrated with database features.
• Allows mining of redo from earlier database versions on a secondary (downstream) database.
• More efficient: it does not have to fetch data because of the data type, etc.
• No longer necessary to set THREADS, ASMUSER, ASMBUFSIZE, DBLOGREADER or DECRYPTPASSWORD.
• No additional manual steps required for RAC; transparent with RAC.

Integrated Capture Modes

Integrated capture supports two types of deployment configurations. They are:
• On-Source Capture
• Downstream Capture

On-Source Capture

When the integrated capture process is configured in on-source mode, the capture process runs on the actual source database server itself. Changes are captured locally as they happen on the source database, then routed, transformed and applied on the target database in near real time.

This may seem convenient, but consideration needs to be given to the additional workload this process places on the database server. However, if real-time replication is required, this is the best option.
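
A minimal on-source Extract parameter file might look as follows. The names are hypothetical, and the INTEGRATEDPARAMS values are examples only that should be sized for the actual workload:

```
EXTRACT exti1
USERID ggadmin, PASSWORD <password>
-- Tune the logmining server: SGA in MB and number of parallel servers
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 200, PARALLELISM 2)
EXTTRAIL ./dirdat/lt
TABLE scott.*;
```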

Note: All features are supported in both On-Source and Downstream deployments.

Downstream Capture

In the downstream mode, the capture process is configured to run on a remote database host. All the redo logs from the source database are shipped to this remote server using Data Guard redo transport and then mined there by the capture process.

In this mode there is an inherent latency, because the redo log on the source must switch before the log can be shipped downstream. So there will be some delay in the replication of data to the target database, as extraction is delayed until the log switch. The main benefit of this setup, however, is that it offloads the capture resource usage from the source server.

To overcome the log switch latency in this mode, Oracle provides near real-time capture using standby redo logs for extraction. In this configuration the source continuously writes its redo into the standby redo logs of the downstream database, and the capture process captures the data directly from there.
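
A hedged sketch of the two pieces involved: redo transport on the source, and real-time mining in the Extract parameter file on the downstream host. The service name dwnstrm is a hypothetical example:

```
-- On the source database: ship online redo to the downstream mining database
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=dwnstrm ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
ALTER SYSTEM SET log_archive_dest_state_2=ENABLE;

-- In the downstream Extract parameter file: mine the standby redo logs
-- as they are written, instead of waiting for a log switch
TRANLOGOPTIONS INTEGRATEDPARAMS (DOWNSTREAM_REAL_TIME_MINE Y)
```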

When deciding between integrated capture and classic capture, it is important to keep in mind that both configurations will remain available in future releases. However, Oracle recommends the new integrated capture mechanism: no new features will be added to classic capture, which will remain only for legacy support purposes.

The database where integrated capture runs:
• Must be at least version 11.2.0.3.
• Must have the database patches listed in My Oracle Support note 1411356.1 installed.
• Works with Oracle 12c.
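
Once the version and patch prerequisites are met, the capture user needs the GoldenGate admin privileges. A minimal sketch, run as SYSDBA; the GGADMIN user name is an assumption, not from the article:

```
-- Grant the capture/apply privileges to the GoldenGate admin user
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');

-- On 11.2.0.4 and 12c, replication must also be enabled explicitly;
-- this parameter does not exist in 11.2.0.3
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;
```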

GoldenGate Setup of a Hub

Setup Of GG Hub Article

If the GG hub server is a standalone environment, the first step is to install the most recent Oracle client on the GoldenGate hub server, choosing the Administrator installation type.

In a hub configuration, GoldenGate is installed only on the hub server; it is not installed on the database servers. The Oracle client is installed on the hub server, and we will be using the thick client.

A hub environment can also be installed in a fault-tolerant configuration (RAC, DBFS, and XAG with Data Guard).
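
As a sketch of how such a hub could be registered with XAG (the resource name reuses testdb01_oggapp from the alias in the environment file below; all paths are hypothetical, and flag names vary by XAG release, so check `agctl add goldengate -h` for your version):

```
$XAG_HOME/bin/agctl add goldengate testdb01_oggapp \
  --gg_home /oracle/product/gg12.1 \
  --instance_type source \
  --nodes node1,node2 \
  --filesystems dbfs_mount \
  --databases ora.testdb01.db \
  --oracle_home /oracle/product/11.2.0/db_1
```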

GoldenGate sample environment file ggsora12_env

Here is a sample GoldenGate environment file; feel free to comment about changes or additions.

# This should be a .ggsora12_env file

# This variable could be used to identify a site
export GG_SITE=01

# This could be used to identify a location
export GG_LOC=CO

# if there is only one DB related to this GG home,
# set NEW_ORACLE_SID to avoid constant switch between a DB env and its GG env
export NEW_ORACLE_SID=MRC01D011; . ~/.std_profile

# otherwise, set NEW_ORACLE_SID to dummy if there are multiple replicated databases related to same GG home
#export NEW_ORACLE_SID=dummy; . ~/.std_profile

# uncomment next three lines if  ORACLE_SID is dummy
# export ORACLE_HOME=/oracle/product/11.2.0/db_1


export GGS_HOME=/oracle/product/gg12.1


# For OEM12c GG monitoring
export JAVA_HOME=/oracle/product/12.1.0/oem_1/agent/core/
export PATH=$JAVA_HOME/bin:$PATH
# export LD_LIBRARY_PATH=$JAVA_HOME/lib/amd64/server:$LD_LIBRARY_PATH

## For GI Agent (XAG)
export XAG_HOME=/oracle/product/xag71
export PATH=$XAG_HOME/bin:$PATH

PS1="\${PWD} \\
\${SNAME} [\${ORACLE_SID}] [GG12_site$GG_SITE"_"$GG_LOC]-> "
export PS1

alias ggsora12='. $HOME/.ggsora12_env;cd $GGS_HOME'
alias xag='. $HOME/.ggsora12_env;cd $XAG_HOME'
alias ggstatus='$XAG_HOME/bin/agctl status goldengate testdb01_oggapp'
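
As a quick sanity check of how the prompt tag in the file composes, using the sample site and location values from the file (this only illustrates the variable expansion):

```shell
# Recreate the prompt tag the same way .ggsora12_env builds it
export GG_SITE=01
export GG_LOC=CO
TAG="GG12_site${GG_SITE}_${GG_LOC}"
echo "$TAG"   # prints GG12_site01_CO
```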