Goal: specify a Linux mount point with sufficient space for the GoldenGate binaries and trail files.

While you can use the Database File System (DBFS) option, doing so creates additional database objects and unnecessary additional database I/O, as well as additional redo and RMAN activity.

Another option is to use Oracle ASM Clustered File System (ACFS) for this use case.

It is much faster to set up and is available on all nodes by default, which allows GoldenGate to fail over to other nodes.

In addition, ACFS does not require the database to be up, so the filesystem can also be used for other purposes.

If you are using this mount solely for GoldenGate, make sure you follow the best practices document, which is updated periodically: Oracle GoldenGate Best Practice: NFS Mount options for use with GoldenGate (Doc ID 1232303.1).

***  Refer to the following steps at your own risk and always test for your use case before using them in a production setting.

Configuration:

Verify that ACFS/ADVM modules are present in memory (on each node):

$ lsmod | grep oracle

If the modules are not present, the command will return something similar to:
oracleasm              53591  1

If the modules are present, the command will return something similar to:
oracleacfs 3308260 0
oracleadvm 508030 0
oracleoks 506741 2 oracleacfs,oracleadvm
oracleasm 53591 1
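The presence check above can be scripted. Here is a minimal sketch using standard tools only; the sample lsmod output is the illustrative listing from above, not taken from a live system, and on a real node you would pass "$(lsmod)" instead:

```shell
# Sketch: verify the ACFS driver stack is loaded, given `lsmod` output.
acfs_modules_loaded() {
  local lsmod_out=$1 m
  for m in oracleacfs oracleadvm oracleoks; do
    # require the module name at the start of a line
    echo "$lsmod_out" | grep -q "^${m}\b" || { echo "missing: $m"; return 1; }
  done
  echo "all ACFS modules loaded"
}

# Sample output for illustration (matches the listing above).
sample='oracleacfs 3308260 0
oracleadvm 508030 0
oracleoks 506741 2 oracleacfs,oracleadvm
oracleasm 53591 1'

acfs_modules_loaded "$sample"    # prints: all ACFS modules loaded
# On a live node: acfs_modules_loaded "$(lsmod)"
```

The function exits non-zero on the first missing module, which makes it usable in a pre-flight script before starting GoldenGate.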

If the modules are not present or you would like to ensure that the latest version is loaded, run the following before proceeding (as the root user):

$ . oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# $GRID_HOME/bin/acfsroot install

Reboot the node if the modules were already present and you are reloading them.

Start the ACFS modules on each node:

On each node and as the root user:

# $GRID_HOME/bin/acfsload start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed

Once installation is complete, and the mount is registered with clusterware, these modules will be loaded automatically.

If you like, you can double-check the driver state using the acfsdriverstate utility:
usage: acfsdriverstate [-orahome <ORACLE_HOME>] <installed | loaded | version | supported> [-s]

As oracle user, create an ASM volume for ACFS (run only on one node):

Source the grid environment.

$ . oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle

Create the volume using the volcreate command.
You can use an existing disk group or create a separate one to house ACFS.

$ asmcmd
ASMCMD> volcreate -G DATA -s 10G ACFSVOL1
ASMCMD> volinfo --all
Diskgroup Name: DATA
Volume Name: ACFSVOL1
Volume Device: /dev/asm/acfsvol1-370
State: ENABLED
Size (MB): 1024
Resize Unit (MB): 64
Redundancy: UNPROT
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:

As oracle user, create the filesystem on the volume which was just created:

$ /sbin/mkfs -t acfs /dev/asm/acfsvol1-370
mkfs.acfs: version = 12.1.0.2.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/acfsvol1-370
mkfs.acfs: volume size = 1073741824 ( 1.00 GB )
mkfs.acfs: Format complete.

As root, create an empty directory which will house the file system:

# mkdir -p /acfsmounts/acfsvol1
# chown root:oinstall /acfsmounts
# chmod 770 /acfsmounts
# chown -R oracle:oinstall /acfsmounts/acfsvol1
# chmod 775 /acfsmounts/acfsvol1

As root, set up the file system to be auto-mounted by clusterware:

In a RAC 11g environment, use acfsutil (srvctl may be supported but was not tested; the "-u" option allows the oracle user to administer the mount):
# . /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# /sbin/acfsutil registry -a /dev/asm/acfsvol1-370 /acfsmounts/acfsvol1 -t "ACFS General Purpose Mount" -u oracle
In a RAC 12c GI environment, register it with clusterware using the following commands (the "-u" option allows the oracle user to administer the mount):
# . /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# srvctl add volume -volume ACFSVOL1 -diskgroup DATA -device /dev/asm/acfsvol1-370
# srvctl add filesystem -device /dev/asm/acfsvol1-370 -path /acfsmounts/acfsvol1 -diskgroup DATA -user oracle -fstype ACFS -description "ACFS General Purpose Mount"
# srvctl modify filesystem -device /dev/asm/acfsvol1-370 -fsoptions "rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,noac,vers=3,timeo=600"

At this point the mount should be ready for read/write and will be automatically mounted by clusterware.

Administration of the ACFS mount:

If you need to resize the mount once created (since you granted control to the oracle user, this command can also be executed by the oracle user):

$ acfsutil size 25G /acfsmounts/acfsvol1
$ srvctl start filesystem -device /dev/asm/acfsvol1-370
$ srvctl stop filesystem -device /dev/asm/acfsvol1-370

This is a demonstration of classic capture on ASM using the newer ASM API (DBLOGREADER), which reads the redo logs on ASM using the GoldenGate user supplied by USERID in the Extract parameter file.

Source Side : 3 Node RAC / Grid Infrastructure 11.2.0.2
Target Side : Stand-alone 11.2.0.1

DBLOGREADER

(Oracle) Valid for Extract in classic capture mode. Causes Extract to use a newer ASM API that is available as of Oracle 10.2.0.5 and later 10g R2 versions, and Oracle 11.2.0.2 and later 11g R2 versions (but not in Oracle 11g R1 versions). This API uses the database server to access the redo and archive logs, instead of connecting directly to the Oracle ASM instance. The database must contain the libraries that contain the API modules and must be running. To use this feature, the Extract database user must have the SELECT ANY TRANSACTION privilege.

When used, DBLOGREADER enables Extract to use a read size of up to 4 MB. This is controlled with the DBLOGREADERBUFSIZE option. The maximum read size when using the default OCI buffer is 28672 bytes; this is controlled by the ASMBUFSIZE option. A larger buffer may improve the performance of Extract when the redo rate is high. When using DBLOGREADER, do not use the ASMUSER and ASMPASSWORD options of TRANLOGOPTIONS; the API uses the user and password specified with the USERID parameter.

DBLOGREADERBUFSIZE

(Oracle) Valid for Extract in classic capture mode. Controls the maximum size, in bytes, of a read operation into the internal buffer that holds the results of each read of the transaction log in ASM. Higher values increase extraction speed but cause Extract to consume more memory. Lower values reduce memory usage but increase I/O, because Extract must store data that exceeds the cache size to disk.

Use DBLOGREADERBUFSIZE together with the DBLOGREADER option if the source ASM instance is Oracle 10.2.0.5 or a later 10g R2 version, or Oracle 11.2.0.2 or a later 11g R2 version (but not an Oracle 11g R1 version). The newer ASM API in those versions provides better performance than the older one. If the Oracle version is not one of those versions, then ASMBUFSIZE must be used.

--SOURCE SIDE

Add supplemental logging at the database level.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Database altered.

--ON SOURCE: add a checkpoint table if this is bidirectional replication and you do not already have one.

GGSCI (PNETN1.localdomain.com) 19> DBLOGIN USERID GGADMIN, PASSWORD Summer2011
GGSCI (PNETN1.localdomain.com) 19> INFO CHECKPOINTTABLE

--if you do not have one, create it.

GGSCI (PNETN1.localdomain.com) 19> ADD CHECKPOINTTABLE GGADMIN.CKPT_TABLE

GGSCI (PNETN1.localdomain.com) 19> EDIT PARAMS ./GLOBALS
GGSCHEMA GGADMIN
CHECKPOINTTABLE GGADMIN.CKPT_TABLE

--Add the Extract as below; REGISTER the Extract using LOGRETENTION

--Use DBLOGIN to REGISTER the Extract

GGSCI (PNETN1.localdomain.com)> DBLOGIN USERID GGADMIN, PASSWORD Summer2011
GGSCI (PNETN1.localdomain.com)> REGISTER EXTRACT DW_EX LOGRETENTION

--Add the Extract

GGSCI (PNETN1.localdomain.com)> ADD EXTRACT DW_EX TRANLOG, BEGIN NOW, THREADS 3
GGSCI (PNETN1.localdomain.com)> ADD EXTTRAIL ./dirdat/EX, EXTRACT DW_EX

--Create the parameter file for the Extract
GGSCI (PNETN1.localdomain.com) 19> EDIT PARAMS DW_EX
EXTRACT DW_EX
--ORACLE ENVIRONMENT
SETENV (ORACLE_HOME = "/u01/app/oracle/11.2.0/db_1")
SETENV (ORACLE_SID = "EDWP1")
SETENV (NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252")
USERID GGADMIN, PASSWORD Summer2011

--TRANLOGOPTIONS ASMUSER sys@+ASM, ASMPASSWORD Summer69
--This is the ASM API that is available as of
--Oracle 10.2.0.5 and later 10g R2 versions AND
--Oracle 11.2.0.2 and later 11g R2 versions,
--BUT NOT in Oracle 11g R1 versions

TRANLOGOPTIONS DBLOGREADER, DBLOGREADERBUFSIZE 2597152, ASMBUFSIZE 28000
DYNAMICRESOLUTION
DISCARDFILE ./dirrpt/edwp.dsc, PURGE, MEGABYTES 100
EXTTRAIL ./dirdat/EX

--DDL REPLICATION
DDL INCLUDE MAPPED OBJNAME TEST.*

--DML replication at SCHEMA level.
TABLE TEST.*;
--end

--Add DATAPUMP

GGSCI (PNETN1.localdomain.com) 19> ADD EXTRACT DW_EP, EXTTRAILSOURCE ./dirdat/EX, BEGIN NOW
GGSCI (PNETN1.localdomain.com) 19> ADD RMTTRAIL ./dirdat/EP, EXTRACT DW_EP, MEGABYTES 100
GGSCI (PNETN1.localdomain.com) 19> EDIT PARAMS DW_EP
EXTRACT DW_EP
SETENV (ORACLE_HOME = "/u01/app/oracle/11.2.0/db_1")
SETENV (ORACLE_SID = "EDWP1")
SETENV (NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252")
USERID GGADMIN, PASSWORD Summer2011
PASSTHRU
RMTHOST 192.168.100.101, MGRPORT 7809
RMTTRAIL ./dirdat/EP
TABLE TEST.*;
--end

--START extract / Pump.

GGSCI (PNETN1.localdomain.com) 17> start DW_EX

Sending START request to MANAGER ...
EXTRACT DW_EX starting


GGSCI (PNETN1.localdomain.com) 18> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     DW_EP       00:00:00      00:20:02
EXTRACT     RUNNING     DW_EX       00:33:56      00:00:06


GGSCI (PNETN1.localdomain.com) 19> start DW_EP

Sending START request to MANAGER ...
EXTRACT DW_EP starting


GGSCI (PNETN1.localdomain.com) 20> INFO ALL

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     DW_EP       00:00:00      00:00:08
EXTRACT     RUNNING     DW_EX       00:00:02      00:00:07

--Output from ggserr.log

2012-07-31 12:34:15  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start DW_EX.
2012-07-31 12:34:15  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host PNETN1.localdomain.com (START EXTRACT DW_EX ).
2012-07-31 12:34:15  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT DW_EX starting.
2012-07-31 12:34:15  INFO    OGG-00992  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  EXTRACT DW_EX starting.
2012-07-31 12:34:15  INFO    OGG-03035  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Operating system character set identified as UTF-8. Locale: en_US, LC_ALL:.
2012-07-31 12:34:15  INFO    OGG-01635  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  BOUNDED RECOVERY: reset to initial or altered checkpoint.
2012-07-31 12:34:15  INFO    OGG-01815  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Virtual Memory Facilities for: BR
anon alloc: mmap(MAP_ANON)  anon free: munmap
file alloc: mmap(MAP_SHARED)  file free: munmap
target directories:
/u02/gghome/BR/DW_EX.
2012-07-31 12:34:15  INFO    OGG-01815  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Virtual Memory Facilities for: COM
anon alloc: mmap(MAP_ANON)  anon free: munmap
file alloc: mmap(MAP_SHARED)  file free: munmap
target directories:
/u02/gghome/dirtmp.
2012-07-31 12:34:17  INFO    OGG-00546  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Default thread stack size: 33554432.
2012-07-31 12:34:17  INFO    OGG-01515  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Positioning to begin time Jul 31, 2012 12:00:23 PM.
2012-07-31 12:34:18  INFO    OGG-01516  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Positioned to (Thread 1) Sequence 4, RBA 5810192, SCN 0.0, Jul 31, 2012 12:00:23 PM.
2012-07-31 12:34:18  INFO    OGG-01515  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Positioning to begin time Jul 31, 2012 12:00:23 PM.
2012-07-31 12:34:18  INFO    OGG-01516  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Positioned to (Thread 2) Sequence 2, RBA 5019152, SCN 0.0, Jul 31, 2012 12:00:23 PM.
2012-07-31 12:34:18  INFO    OGG-01515  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Positioning to begin time Jul 31, 2012 12:00:23 PM.
2012-07-31 12:34:19  INFO    OGG-01516  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Positioned to (Thread 3) Sequence 2, RBA 4712464, SCN 0.0, Jul 31, 2012 12:00:23 PM.
2012-07-31 12:34:19  INFO    OGG-01517  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Position of first record processed for Thread 2, Sequence 2, RBA 5019152, SCN 0.1098626, Jul 31, 2012 12:00:23 PM.
2012-07-31 12:34:19  INFO    OGG-01517  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Position of first record processed for Thread 3, Sequence 2, RBA 4712464, SCN 0.1098627, Jul 31, 2012 12:00:23 PM.
2012-07-31 12:34:19  INFO    OGG-00993  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  EXTRACT DW_EX started.
2012-07-31 12:34:19  INFO    OGG-01052  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  No recovery is required for target file ./dirdat/EX000000, at RBA 0 (file not opened).
2012-07-31 12:34:19  INFO    OGG-01478  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Output file ./dirdat/EX is using format RELEASE 11.2.
2012-07-31 12:34:19  INFO    OGG-01517  Oracle GoldenGate Capture for Oracle, dw_ex.prm:  Position of first record processed for Thread 1, Sequence 4, RBA 5810192, SCN 0.1098585, Jul 31, 2012 12:00:23 PM.
2012-07-31 12:34:32  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start DW_EP.
2012-07-31 12:34:32  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host PNETN1.localdomain.com (START EXTRACT DW_EP ).
2012-07-31 12:34:32  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT DW_EP starting.
2012-07-31 12:34:33  INFO    OGG-00992  Oracle GoldenGate Capture for Oracle, dw_ep.prm:  EXTRACT DW_EP starting.
2012-07-31 12:34:33  INFO    OGG-03035  Oracle GoldenGate Capture for Oracle, dw_ep.prm:  Operating system character set identified as UTF-8. Locale: en_US, LC_ALL:.
2012-07-31 12:34:33  INFO    OGG-01815  Oracle GoldenGate Capture for Oracle, dw_ep.prm:  Virtual Memory Facilities for: COM
anon alloc: mmap(MAP_ANON)  anon free: munmap
file alloc: mmap(MAP_SHARED)  file free: munmap
target directories:
/u02/gghome/dirtmp.
2012-07-31 12:34:33  WARNING OGG-01015  Oracle GoldenGate Capture for Oracle, dw_ep.prm:  Positioning with begin time: Jul 31, 2012 12:02:01 PM, waiting for data: at extseqno 0, extrba 0.
2012-07-31 12:34:33  INFO    OGG-00993  Oracle GoldenGate Capture for Oracle, dw_ep.prm:  EXTRACT DW_EP started.
2012-07-31 12:34:38  INFO    OGG-01226  Oracle GoldenGate Capture for Oracle, dw_ep.prm:  Socket buffer size set to 27985 (flush size 27985).
2012-07-31 12:34:38  INFO    OGG-01052  Oracle GoldenGate Capture for Oracle, dw_ep.prm:  No recovery is required for target file ./dirdat/EP000000, at RBA 0 (file not opened).
2012-07-31 12:34:38  INFO    OGG-01478  Oracle GoldenGate Capture for Oracle, dw_ep.prm:  Output file ./dirdat/EP is using format RELEASE 11.2.

--You will also see the following output in the target ggserr.log

2012-07-31 09:32:36  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from EXTRACT on host 192.168.100.126 (START SERVER CPU -1 PRI -1  TIMEOUT 300 PARAMS ).
2012-07-31 09:32:36  INFO    OGG-00974  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started collector process (Port 7840).
2012-07-31 09:32:36  INFO    OGG-01677  Oracle GoldenGate Collector:  Waiting for connection (started dynamically).
2012-07-31 09:32:36  INFO    OGG-01228  Oracle GoldenGate Collector:  Timeout in 300 seconds.
2012-07-31 09:32:41  INFO    OGG-01229  Oracle GoldenGate Collector:  Connected to 192.168.100.126:23270.
2012-07-31 09:32:41  WARNING OGG-01223  Oracle GoldenGate Collector:  did not recognize command (n).
2012-07-31 09:32:41  INFO    OGG-01669  Oracle GoldenGate Collector:  Opening ./dirdat/EP000000 (byte -1, current EOF 0).
2012-07-31 09:32:41  INFO    OGG-01670  Oracle GoldenGate Collector:  Closing ./dirdat/EP000000.
2012-07-31 09:32:41  INFO    OGG-01669  Oracle GoldenGate Collector:  Opening ./dirdat/EP000000 (byte -1, current EOF 0).

=============================================================
=============================================================

--TARGET SIDE

--Add the checkpoint table to GLOBALS if you do not already have one.

GGSCI (TEST.localdomain.com) 4> view PARAMS ./GLOBALS
CHECKPOINTTABLE GGSUSER.CKPT
GGSCHEMA GGSUSER

--Add the checkpoint table to the database.

GGSCI (TEST.localdomain.com) 6> DBLOGIN USERID GGSUSER, PASSWORD Summer2011
Successfully logged into database.

--Confirm whether a checkpoint table already exists.
GGSCI (TEST.localdomain.com) 8> INFO CHECKPOINTTABLE

No checkpoint table specified, using GLOBALS specification (GGSUSER.CKPT)...
Checkpoint table GGSUSER.CKPT created 2012-05-31 13:32:57.

--Add the Replicat on the target side.

GGSCI (TEST.localdomain.com) 8> ADD REPLICAT DW_ER, EXTTRAIL ./dirdat/EP, CHECKPOINTTABLE GGSUSER.CKPT

--Create the parameter file for the Replicat

GGSCI (TEST.localdomain.com) 8> EDIT PARAMS DW_ER
REPLICAT DW_ER
SETENV (ORACLE_HOME = "/u00/app/oracle/product/11.2.0/db_1")
SETENV (ORACLE_SID = "TEST")
--Assume the DDL definitions of the source.
ASSUMETARGETDEFS
USERID ggsuser, PASSWORD Summer2011
DISCARDFILE ./dirrpt/EDWP.dsc, APPEND, MEGABYTES 100
--DDL replication.
DDL INCLUDE ALL
--DML replication from the TEST schema to the TEST schema.
MAP TEST.*, TARGET TEST.*;
--end

--start Replicat

GGSCI (TEST.localdomain.com) 6> start DW_ER

Sending START request to MANAGER ...
REPLICAT DW_ER starting

--Some Output from Replicat ggserr.log

2012-07-31 09:33:08  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start DW_ER.
2012-07-31 09:33:08  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host 192.168.100.101 (START REPLICAT DW_ER ).
2012-07-31 09:33:08  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  REPLICAT DW_ER starting.
2012-07-31 09:33:09  INFO    OGG-00995  Oracle GoldenGate Delivery for Oracle, dw_er.prm:  REPLICAT DW_ER starting.
2012-07-31 09:33:09  INFO    OGG-00996  Oracle GoldenGate Delivery for Oracle, dw_er.prm:  REPLICAT DW_ER started.

Configure GoldenGate Extract to read from remote logs

Sometimes you may need to run GoldenGate on a different machine than the one that hosts the database. This is possible, but restrictions apply: the endian order of the two systems must be the same, and the bit-width must be the same. For example, it is not possible for GoldenGate running on a 32-bit system to read from a database that runs on a 64-bit platform. Assuming the environment satisfies these two conditions, we can use the LOGSOURCE option of TRANLOGOPTIONS to achieve this.
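These two prerequisites can be checked quickly on each host. The sketch below uses standard Linux tools only; run it on both the GoldenGate host and the database host and compare the output, which must match:

```shell
# Sketch: print this host's bit-width and byte order.
bits=$(getconf LONG_BIT)                        # 32 or 64
# Two bytes 0x01 0x00 read as one 16-bit word reveal the byte order.
word=$(printf '\1\0' | od -An -tx2 | tr -d ' \n')
case "$word" in
  0001) order=little ;;   # least-significant byte stored first
  0100) order=big ;;
  *)    order=unknown ;;
esac
echo "bit-width=${bits} byte-order=${order}"
```

If the two hosts print different values, the remote-log configuration described here cannot be used between them.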

Here we run GG on host goldengate1 (192.168.0.109), and the database from which we want to capture changes runs on host goldengate3 (192.168.0.111), so two different hosts. Both systems run 11.2.0.2 on RHEL 5.5. On goldengate3 the redo logs are under the mount point /home, which has been NFS-mounted on goldengate1 as /home_gg3.

This requires an NFS mount between the systems for the redo logs; it is not a GoldenGate process.

Filesystem           1K-blocks      Used Available Use% Mounted on

192.168.0.111:/home   12184800   7962496   3593376  69% /home_gg3

The Extract parameters are as follows:

EXTRACT ERMT01

USERID ggadmin@orcl3, PASSWORD ggadmin

EXTTRAIL ./dirdat/er

TRANLOGOPTIONS LOGSOURCE LINUX, PATHMAP /home/oracle/app/oracle/oradata/orcl /home_gg3/oracle/app/oracle/oradata/orcl, PATHMAP /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog /home_gg3/oracle/app/oracle/flash_recovery_area/ORCL/archivelog

TABLE HR.*;

(The TRANLOGOPTIONS entry above is a single line.)

So using PATHMAP we can make GG aware of the actual location of the redo logs and archive logs on the remote server and the mapped location on the system where GG is running (it is somewhat like the db_file_name_convert option for Data Guard).
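Conceptually, PATHMAP is a prefix substitution. The sketch below illustrates that idea with the same directory pair used above (the redo file name redo01.log is hypothetical, used only for illustration):

```shell
# Sketch of what PATHMAP does conceptually: rewrite a path that is valid
# on the database host into the NFS-mounted path on the GoldenGate host.
map_path() {
  local src_prefix=$1 dst_prefix=$2 path=$3
  case "$path" in
    "$src_prefix"*) printf '%s%s\n' "$dst_prefix" "${path#"$src_prefix"}" ;;
    *)              printf '%s\n' "$path" ;;   # no mapping applies
  esac
}

map_path /home/oracle/app/oracle/oradata/orcl \
         /home_gg3/oracle/app/oracle/oradata/orcl \
         /home/oracle/app/oracle/oradata/orcl/redo01.log
# -> /home_gg3/oracle/app/oracle/oradata/orcl/redo01.log
```

GoldenGate performs this translation internally; the helper above is only a model of the mapping, not part of any GoldenGate tooling.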

We fire some DML on the source database and then run the STATS command for the Extract:

GGSCI (goldengate1) 93> stats ermt01 totalsonly *

Sending STATS request to EXTRACT ERMT01 ...

Start of Statistics at 2012-05-26 05:17:05.

Output to ./dirdat/er:

Cumulative totals for specified table(s):

*** Total statistics since 2012-05-26 04:51:10 ***
        Total inserts                                1.00
        Total updates                                0.00
        Total deletes                                1.00
        Total discards                               0.00
        Total operations                             2.00
.
.
.

End of Statistics.

GGSCI (goldengate1) 94>

For more details have a look at the GG reference guide (Page 402).

 

Configuring DBFS for GoldenGate

How To Setup 11.2 DBFS FileSystems Using the dbfs_client API Method

869822.1   Installing the DBFS

1150157.1  List of Critical patches

 

In summary the broad steps involved are:

1) Install and configure FUSE (Filesystem in Userspace)
2) Create the DBFS user and DBFS tablespaces
3) Mount the DBFS filesystem
4) Create symbolic links for the GoldenGate software directories dirchk, dirpcs, dirdat, BR to point to directories on DBFS
5) Create the Application VIP
6) Download the mount-dbfs.sh script from MOS and edit it according to our environment
7) Create the DBFS Cluster Resource
8) Download and install the Oracle Grid Infrastructure Bundled Agent
9) Register GoldenGate with the bundled agents using the agctl utility

 

Install and Configure FUSE

Using the following command check if FUSE has been installed:

lsmod | grep fuse

FUSE can be installed in a couple of ways: either via the Yum repository or using the RPMs available on the OEL software media.

Using Yum:

yum install kernel-devel
yum install fuse fuse-libs

Via RPMs:

If installing from the media, then these are the RPMs which are required:

kernel-devel-2.6.32-358.el6.x86_64.rpm
fuse-2.8.3-4.el6.x86_64.rpm
fuse-devel-2.8.3-4.el6.x86_64.rpm
fuse-libs-2.8.3-4.el6.x86_64.rpm

A group named fuse must be created, and the OS user who will be mounting the DBFS filesystem needs to be added to the fuse group.

For example, if the OS user is 'oracle', we use the usermod command to modify the secondary group membership of the oracle user. It is important to ensure we do not exclude any groups the user is already a member of.

# /usr/sbin/groupadd fuse
# usermod -G dba,fuse oracle

One of the mount options we will use is called "allow_other", which enables users other than the user who mounted the DBFS file system to access it.

The /etc/fuse.conf file needs to contain the "user_allow_other" option, as shown below.

# cat /etc/fuse.conf
user_allow_other

# chmod 644 /etc/fuse.conf

Important: Ensure that the variable LD_LIBRARY_PATH is set and includes the path to $ORACLE_HOME/lib. Otherwise we will get an error when we try to mount the DBFS using the dbfs_client executable.

Create the DBFS tablespaces and mount the DBFS

If the source database used by the GoldenGate Extract is running on RAC or hosted on Exadata, then we will create ONE tablespace for DBFS.

If the target database where the Replicat will be applying changes is on RAC or Exadata, then we will create TWO tablespaces for DBFS, each tablespace having different logging and caching settings: typically one tablespace will be used for the GoldenGate trail files and the other for the GoldenGate checkpoint files.

If using Exadata, an ASM disk group called DBFS_DG will typically already be available for us to use; otherwise, on a non-Exadata platform, we will create a separate ASM disk group for holding DBFS files.

Note that since we will be storing GoldenGate trail files on DBFS, a best practice is to allocate enough disk space/tablespace space to retain at least a minimum of 12-24 hours of trail files. We need to keep that in mind when we create the ASM diskgroup or the DBFS tablespace.

CREATE bigfile TABLESPACE dbfs_ogg_big datafile '+DBFS_DG' SIZE
1000M autoextend ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL
AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
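To turn the 12-24 hour retention guideline into a concrete number, a back-of-the-envelope calculation like the following can help. All inputs are assumptions for illustration (measure your actual redo rate, for example from v$archived_log); trail volume is often only a fraction of redo volume, so sizing from the redo rate is conservative:

```shell
# Sketch: estimate the DBFS tablespace size needed to retain trail files.
redo_mb_per_hour=2048    # assumed average redo generation rate
retention_hours=24       # best-practice minimum is 12-24 hours
headroom_pct=20          # safety margin on top of the raw estimate

needed_mb=$(( redo_mb_per_hour * retention_hours * (100 + headroom_pct) / 100 ))
echo "size the DBFS tablespace to at least ${needed_mb} MB"
# -> 58982 MB for these example inputs
```

Feed the result into the SIZE/AUTOEXTEND clauses of the CREATE TABLESPACE statement above.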

Create the DBFS user

CREATE USER dbfs_user IDENTIFIED BY dbfs_pswd 
DEFAULT TABLESPACE dbfs_ogg_big
QUOTA UNLIMITED ON dbfs_ogg_big;

GRANT create session, 
      create table, 
      create view, 
      create procedure, 
      dbfs_role 
TO    dbfs_user; 


Create the DBFS Filesystem

To create the DBFS filesystem we connect as the DBFS_USER Oracle user account and either run the dbfs_create_filesystem.sql or dbfs_create_filesystem_advanced.sql script located under $ORACLE_HOME/rdbms/admin directory.

For example:

cd $ORACLE_HOME/rdbms/admin 

sqlplus dbfs_user/dbfs_pswd 


SQL> @dbfs_create_filesystem dbfs_ogg_big gg_source

OR

SQL> @dbfs_create_filesystem_advanced.sql dbfs_ogg_big  gg_source
      nocompress nodeduplicate noencrypt non-partition 

Where:
dbfs_ogg_big: tablespace for the DBFS database objects
gg_source: filesystem name; this can be any string and will appear as a directory under the mount point

If we were configuring DBFS on the GoldenGate target (Replicat) side, it is recommended to use the NOCACHE and LOGGING attributes for the tablespace which holds the trail files, because of the sequential reading and writing nature of the trail files.

For the checkpoint files, on the other hand, it is recommended to use the CACHE and LOGGING attributes instead.

The example shown below illustrates how we can modify the LOB attributes.

(assuming we have created two DBFS tablespaces)

SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%'; 

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             NO        YES



SQL> ALTER TABLE dbfs_user.T_DBFS_SM 
     MODIFY LOB (FILEDATA) (CACHE LOGGING); 


SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%';  

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             YES       YES


As the user root, now create the DBFS mount point on ALL nodes of the RAC cluster (or Exadata compute servers).

# cd /mnt 
# mkdir DBFS 
# chown oracle:oinstall DBFS/

Create a custom tnsnames.ora file in a separate location (on each node of the RAC cluster).

In our 2 node RAC cluster for example these are entries we will make for the ORCL RAC database.

Node A

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl1)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl1')
      )
  (CONNECT_DATA=(SID=orcl1))
)

Node B

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl2)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl2')
      )
  (CONNECT_DATA=(SID=orcl2))
)


 

We will need to provide the password for the DBFS_USER database user account when we mount the DBFS filesystem via the dbfs_mount command. We can either store the password in a text file or we can use Oracle Wallet to encrypt and store the password.

In this example we are not using the Oracle Wallet, so we need to create a file (on all nodes of the RAC cluster) which will contain the DBFS_USER password.

For example:
echo dbfs_pswd > passwd.txt

nohup $ORACLE_HOME/bin/dbfs_client  dbfs_user@orcl -o allow_other,direct_io /mnt/DBFS < ~/passwd.txt &

After the DBFS filesystem is mounted successfully, we can see it via the 'df' command as shown below. Note in this case we had created a tablespace of 5 GB for DBFS, and the space allocated and used reflects that.

$  df -h |grep dbfs

dbfs-dbfs_user@:/     4.9G   11M  4.9G   1% /mnt/dbfs

The command used to unmount the DBFS filesystem would be:

fusermount -u <mount_point>

Create links from Oracle Goldengate software directories to DBFS

Create the following directories on DBFS

$ mkdir /mnt/dbfs/gg_source/goldengate
$ cd /mnt/dbfs/gg_source/goldengate
$ mkdir dirchk
$ mkdir dirpcs 
$ mkdir dirprm
$ mkdir dirdat
$ mkdir BR

Make the symbolic links from Goldengate software directories to DBFS

cd /u03/app/oracle/goldengate
mv dirchk dirchk.old
mv dirdat dirdat.old
mv dirpcs dirpcs.old
mv dirprm dirprm.old
mv BR BR.old
ln -s /mnt/dbfs/gg_source/goldengate/dirchk dirchk
ln -s /mnt/dbfs/gg_source/goldengate/dirdat dirdat
ln -s /mnt/dbfs/gg_source/goldengate/dirprm dirprm
ln -s /mnt/dbfs/gg_source/goldengate/dirpcs dirpcs
ln -s /mnt/dbfs/gg_source/goldengate/BR BR

For example :

[oracle@rac2 goldengate]$ ls -l dirdat
lrwxrwxrwx 1 oracle oinstall 26 May 16 15:53 dirdat -> /mnt/dbfs/gg_source/goldengate/dirdat

Also copy the jagent.prm file, which ships out of the box in the dirprm directory:

[oracle@rac2 dirprm.old]$ pwd
/u03/app/oracle/goldengate/dirprm.old
[oracle@rac2 dirprm.old]$ cp jagent.prm /mnt/dbfs/gg_source/dirprm

Note: in the Extract parameter file(s) we need to include the BR parameter pointing to the DBFS-stored directory:

BR BRDIR /mnt/dbfs/gg_source/goldengate/BR

Create the Application VIP

Typically the GoldenGate source and target databases will not be located on the same Exadata machine, and even in a non-Exadata RAC environment the source and target databases are usually on different RAC clusters. In that case we have to use an Application VIP, a cluster resource managed by Oracle Clusterware; the VIP assigned to one node will be seamlessly transferred to another surviving node in the event of a RAC (or Exadata compute) node failure.

Run the appvipcfg command to create the Application VIP as shown in the example below.

$GRID_HOME/bin/appvipcfg create -network=1 -ip=192.168.56.90 -vipname=gg_vip_source -user=root

We have to assign an unused IP address to the Application VIP. We run the following command to identify the value we use for the network parameter as well as the subnet for the VIP.

$ crsctl stat res -p |grep -ie .network -ie subnet |grep -ie name -ie subnet

NAME=ora.net1.network
USR_ORA_SUBNET=192.168.56.0

As root, give the Oracle database software owner permission to start the VIP.

$GRID_HOME/bin/crsctl setperm resource gg_vip_source -u user:oracle:r-x 

As the Oracle database software owner, start the VIP:

$GRID_HOME/bin/crsctl start resource gg_vip_source

Verify the status of the Application VIP

$GRID_HOME/bin/crsctl status resource gg_vip_source

 

Download the mount-dbfs.sh script from MOS

Download the mount-dbfs.sh script from MOS note 1054431.1.

Copy it to a temporary location on one of the Linux RAC nodes and run the command as root:

# dos2unix /tmp/mount-dbfs.sh

Change the ownership of the file to the Oracle Grid Infrastructure owner and copy it to the $GRID_HOME/crs/script directory.

Next make changes to the environment variable settings section of the mount-dbfs.sh script as required. These are the changes I made to the script.

### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
DBNAME=orcl

### Mount point where DBFS should be mounted
MOUNT_POINT=/mnt/dbfs

### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user

### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1

### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=dbfs_user

### TNS_ADMIN is the directory containing tnsnames.ora and sqlnet.ora used by DBFS
TNS_ADMIN=/u02/app/oracle/admin

### TNS alias used for mounting with wallets
DBFS_LOCAL_TNSALIAS=orcl

Create the DBFS Cluster Resource

Before creating the cluster resource for DBFS, test the mount-dbfs.sh script

$ ./mount-dbfs.sh start
$ ./mount-dbfs.sh status
Checking status now
Check – ONLINE

$ ./mount-dbfs.sh stop

As the Grid Infrastructure owner, create a script called add-dbfs-resource.sh and store it in the $GRID_HOME/crs/script directory.

This script will create a cluster-managed resource called dbfs_mount by calling the action script mount-dbfs.sh which we downloaded and edited earlier.

Edit the following variables in the script as shown below:

ACTION_SCRIPT
RESNAME
DEPNAME (this can be the Oracle database or a database service)
ORACLE_HOME

#!/bin/bash
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DEPNAME=ora.orcl.db
ORACLE_HOME=/u01/app/12.1.0.2/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type cluster_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard($DEPNAME)pullup($DEPNAME)',\
STOP_DEPENDENCIES='hard($DEPNAME)',\
SCRIPT_TIMEOUT=300"

Execute the script – it should produce no output.

./add-dbfs-resource.sh

 

Download and Install the Oracle Grid Infrastructure Bundled Agent

Starting with Oracle 11.2.0.3 on 64-bit Linux, out-of-the-box Oracle Grid Infrastructure bundled agents were introduced with predefined clusterware resources for applications like Siebel and GoldenGate.

The bundled agent for GoldenGate provides integration between Oracle GoldenGate and dependent resources such as the database, the filesystem, and the network.

The AGCTL agent command-line utility can be used to start and stop GoldenGate as well as relocate GoldenGate resources between nodes in the cluster.

Download the latest version of the agent (6.1) from the URL below:

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html

The downloaded file will be xagpack_6.zip.

An xag/bin directory with the agctl executable already exists under $GRID_HOME.

We need to install the new bundled agent in a separate directory (its xagsetup.sh installer accepts either --nodes <node1,node2[,...]> or --all_nodes) and ensure that $PATH picks up the new agent's bin directory rather than the one under $GRID_HOME.
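A minimal sketch of the $PATH adjustment; the install directory below is the one used later in this article and should be adjusted to your environment:

```shell
# Assumed standalone bundled-agent install location (adjust to yours).
XAG_HOME=/home/oracle/xagent
# Put the new agent's bin directory ahead of $GRID_HOME/xag/bin so the
# newer agctl is found first.
export PATH="$XAG_HOME/bin:$PATH"
```

With this in place, `agctl` resolves to the standalone agent rather than the older copy shipped inside the Grid Infrastructure home.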

Register GoldenGate with the bundled agent using the agctl utility

Using the agctl utility, create the GoldenGate configuration.

Ensure that we run agctl from the downloaded bundled agent directory and not from the $GRID_HOME/xag/bin directory, or that the $PATH variable has been amended as described earlier.

/home/oracle/xagent/bin/agctl add goldengate gg_source --gg_home /u03/app/oracle/goldengate \
--instance_type source \
--nodes rac1,rac2 \
--vip_name gg_vip_source \
--filesystems dbfs_mount --databases ora.orcl.db \
--oracle_home /u02/app/oracle/product/12.1.0/dbhome_1 \
--monitor_extracts ext1,extdp1
 

Once GoldenGate is registered with the bundled agent, we should only use agctl to start and stop GoldenGate processes. The agctl command will start the Manager process, which in turn will start the other processes (Extract, Data Pump, Replicat) if we have configured them for automatic restart.

Let us look at some examples of using agctl.

Check the status – note the DBFS filesystem is currently also mounted on node rac2

$ pwd
/home/oracle/xagent/bin
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2


$ cd /mnt/dbfs/
$ ls -lrt
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

Stop the GoldenGate environment

$ ./agctl stop goldengate gg_source 
$ ./agctl status goldengate gg_source
Goldengate  instance ' gg_source ' is not running

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
EXTRACT     STOPPED     EXT1        00:00:03      00:01:19
EXTRACT     STOPPED     EXTDP1      00:00:00      00:01:18

Start the GoldenGate environment – note the resource has relocated from rac2 to rac1, and the GoldenGate processes on rac2 have been stopped and started on node rac1.

$ ./agctl start goldengate gg_source
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac1

GGSCI (rac2.localdomain) 2> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     STOPPED

GGSCI (rac1.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:09      00:00:06
EXTRACT     RUNNING     EXTDP1      00:00:00      00:05:22

We can also see that agctl has unmounted DBFS on rac2 and mounted it on rac1 automatically.

[oracle@rac1 goldengate]$ ls -l /mnt/dbfs
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

[oracle@rac2 goldengate]$ ls -l /mnt/dbfs
total 0

Let's test the whole thing!

Now that the GoldenGate resources are running on node rac1, let us see what happens when we shut that node down to simulate a node failure while GoldenGate is up and the Extract and Data Pump processes are running on the source.

AGCTL and Cluster Services relocate all the GoldenGate resources, the VIP, and DBFS to the other node seamlessly, and we see that the Extract and Data Pump processes have been automatically started on node rac2.

[oracle@rac1 goldengate]$ su -
Password:
[root@rac1 ~]# shutdown -h now

Broadcast message from oracle@rac1.localdomain
[root@rac1 ~]#  (/dev/pts/0) at 19:45 ...

The system is going down for halt NOW!

Connect to the surviving node rac2 and check:

[oracle@rac2 bin]$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:07      00:00:02
EXTRACT     RUNNING     EXTDP1      00:00:00      00:00:08

Check the cluster resource:

[oracle@rac2 bin]$ crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------

See individual product notes for links:

Note 888828.1
Note 965230.1 How to Find GoldenGate on edelivery.oracle.com
Note 970860.1 How To Apply Oracle GoldenGate Patches
Note 965394.1 Installing GoldenGate Code Into An Existing Subvolume
Note 966215.1 Implementing Logger Upgrades With Minimal Operational Intrusion
Note 1060867.1 How To Upgrade A Single Component In GoldenGate Version 10.0
Note 965683.1 How To Configure For Multiple Instances Of GoldenGate On One Himalaya System
Note 965360.1 Running Multiple GoldenGate Environments On NonStop
Note 968632.1 Does GoldenGate Support Installation Of Its Product On A Shared Disk Subsystem In A Clustered Environment?
Note 966181.1 Installing GoldenGate For Oracle RAC
Note 969651.1 ORA-12705: Invalid Or Unknown NLS Parameter Value Specified
Note 1060596.1 Unable To Determine The Application And Database Codepage Settings
Note 965278.1 How To Create a GLOBALS Parameter on Windows, MVS, or Unix
Note 965754.1 Moving A GoldenGate Installation Instead Of Downloading It On Tandem

 

GG Integrated Capture

Architecting GoldenGate

The binaries consume approximately 600 MB of disk space.

Each process consumes about 50 MB of memory, so a large number of processes can quickly consume a lot of memory.

A base configuration (Manager process, Extract, Data Pump, and Replicat) consumes about 200 MB.

Each process is assumed to use one CPU core.
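The sizing figures above can be sketched as a back-of-the-envelope calculation (the 50 MB per-process figure is from this article; process counts vary per environment):

```shell
# Rough memory estimate for a base GoldenGate configuration.
PROCS=4            # Manager + Extract + Data Pump + Replicat
PER_PROC_MB=50     # approximate memory per process, per the text above
BASE_MB=$((PROCS * PER_PROC_MB))
echo "Approximate base memory: ${BASE_MB} MB"   # prints: Approximate base memory: 200 MB
```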

GoldenGate requires a number of TCP/IP ports in order to operate, so it is important that your network firewall allows traffic on these ports. One port is used solely for communication between the Manager process and other GoldenGate processes; this is normally port 7809, but it can be changed. A range of other ports is used for local GoldenGate communications: either the default range starting at port 7840, or a predefined range of up to 256 other ports.
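As a sketch, both of these settings live in the Manager parameter file (dirprm/mgr.prm); the values below are the defaults described above, and the range chosen is illustrative:

```
PORT 7809
DYNAMICPORTLIST 7840-7870
```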

Oracle recommends at least 256 GB of space per Extract process for the dirtmp subdirectory.
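If dirtmp needs to be placed on a larger volume, the Extract CACHEMGR parameter can relocate the cache directory; the path below is illustrative, matching this article's DBFS layout:

```
CACHEMGR CACHEDIRECTORY /mnt/dbfs/gg_source/goldengate/dirtmp
```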

 

Administering GoldenGate Doc

Sample configuration for handling collisions

Customizing GoldenGate Processing

Using "EXITS" to customize GoldenGate processing

Architecture Plans

Maintain Live Standby w/GoldenGate


This is Oracle's main page for GoldenGate

GoldenGate Main Page Oracle

This page also contains information regarding Oracle Cloud Marketplace

GoldenGate downloads are available here

GoldenGate Downloads

GoldenGate Documentation is available here

GoldenGate Documentation

12.3.1.1.1 for Big Data

12.2.0.2.2 for Linux

GoldenGate 12.3 Announcement

GoldenGate 12.3 Documentation

12.3.1.1 Features

12.3 Download

GoldenGate 12c (12.1.2) Documentation

GoldenGate DBFS Installation

Oracle GoldenGate Blog

Oracle GoldenGate Best Practices: Heartbeat Table for Monitoring Lag times (Doc ID 1299679.1)

Integrated Capture

The mining database, from which the primary Extract captures log change records from the logmining server, can be either local or downstream from the source database.

These steps configure the primary Extract to capture transaction data in integrated mode from either location.

See Appendix B, "Configuring a Downstream Mining Database" and Appendix C, "Example Downstream Mining Configuration" for more information about capturing from a downstream mining database.


Basically, every download of GoldenGate is a 'full' install.

Extract the file into a directory, run the ggsci executable in that directory, type CREATE SUBDIRS at the command prompt, and you're done.

If you want to 'patch' GoldenGate, you could extract the 'patch' over the old version of GoldenGate. In general this works very well, but realize if you are using the Management Pack for GoldenGate (which works with Enterprise Manager 12c Cloud Control), you'll want to save a copy of your CONFIG.PROPERTIES file (it's in the cfg directory) before you do this. Typically that's going to be the only file that would get 'overwritten' during the patch install that you'd actually be concerned about. Everything else you'll WANT to be overwritten during the 'patch' install.
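The backup step described above can be sketched as follows. This demo runs in a scratch directory so it is safe to try anywhere; for a real installation, point GG_HOME at your actual GoldenGate home instead of creating a stand-in:

```shell
# Scratch stand-in for a real GoldenGate home (illustrative only).
GG_HOME=$(mktemp -d)/goldengate
mkdir -p "$GG_HOME/cfg"
: > "$GG_HOME/cfg/CONFIG.PROPERTIES"   # stand-in for the real file

# The actual safeguard: copy CONFIG.PROPERTIES aside before extracting
# the patch over the installation, so it can be restored afterwards.
cp "$GG_HOME/cfg/CONFIG.PROPERTIES" \
   "$GG_HOME/cfg/CONFIG.PROPERTIES.$(date +%Y%m%d)"
```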

Remember, it's not a bad idea to back everything up before you do this.

Between 25 and 55 MB of RAM is required for each GoldenGate Extract and Replicat process. Each GoldenGate instance can support up to 300 concurrent Extract and Replicat processes combined (newer releases have raised this limit to 5,000), provided enough system resources are available for the OS.

The best way to assess the total memory requirement is to run the GGSCI command to view the current report file and examine the PROCESS AVAIL VM FROM OS (min) value to determine whether you have sufficient swap memory for your platform.
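For example, once a process has been running, its report can be pulled up from GGSCI; EXT1 is the Extract group name used earlier in this article:

```
GGSCI> VIEW REPORT EXT1
```

Then search the report output for the PROCESS AVAIL VM FROM OS (min) entry.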

GoldenGate will typically use only 5% of a system's CPU resources. Modern operating systems can share available resources very efficiently; it is important, however, to size your requirements effectively, balancing the maximum possible number of concurrent processes against the number of CPUs. GoldenGate will use one CPU core per Extract or Replicat process.

For more details, refer to the GoldenGate installation and administration guides.

 
