Specify a Linux mount point with sufficient space for the GoldenGate binaries and trail files.
While you can use the Oracle Database File System (DBFS) option, it creates additional database objects and unnecessary database I/O, as well as additional redo and RMAN activity.
Another option is to use Oracle ASM Cluster File System (ACFS) for this use case.
It is much faster to set up and is available on all nodes by default, which allows GoldenGate to fail over to other nodes.
In addition, ACFS does not require the database to be up, so the file system can also be used for other purposes.
If you are using this mount solely for GoldenGate, make sure you follow the best-practices document, which is updated periodically (Oracle GoldenGate Best Practice: NFS Mount options for use with GoldenGate (Doc ID 1232303.1)).
*** Refer to the following steps at your own risk, and always test for your use case before using them in a production setting.
Requirements:
Root user access
Sufficient ASM Space
Separate ASM Diskgroup (Optional) unless you are using the cluster entirely for the purposes of GoldenGate
Latest Oracle Grid Infrastructure and Database Patchset
Configuration:
Verify that ACFS/ADVM modules are present in memory (on each node):
$ lsmod | grep oracle
If the modules are not present, the command will return something similar to:
oracleasm 53591 1
If the modules are present, the command will return something similar to:
oracleacfs 3308260 0
oracleadvm 508030 0
oracleoks 506741 2 oracleacfs,oracleadvm
oracleasm 53591 1
If the modules are not present, or you would like to ensure that the latest version is loaded, run the following before proceeding (as the root user):
$ . oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# $GRID_HOME/bin/acfsroot install
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed
Reboot the node if the modules were already present and you are reloading them.
Once installation is complete and the mount is registered with clusterware, these modules will be loaded automatically.
If you like, you can double-check the driver state using the acfsdriverstate utility:
usage: acfsdriverstate [-orahome <ORACLE_HOME>] <installed | loaded | version | supported> [-s]
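For example (a sample invocation; the exact message text may vary by release):
$ $GRID_HOME/bin/acfsdriverstate loaded
ACFS-9203: true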
As the oracle user, create an ASM volume for ACFS (run on one node only):
Source the grid environment.
$ . oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
Create the volume using the volcreate command.
You can use an existing disk group or create a separate one to house ACFS.
$ asmcmd
ASMCMD> volcreate -G DATA -s 1G ACFSVOL1
ASMCMD> volinfo --all
Diskgroup Name: DATA
Volume Name: ACFSVOL1
Volume Device: /dev/asm/acfsvol1-370
State: ENABLED
Size (MB): 1024
Resize Unit (MB): 64
Redundancy: UNPROT
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:
As the oracle user, create the file system on the volume just created:
$ /sbin/mkfs -t acfs /dev/asm/acfsvol1-370
mkfs.acfs: version = 12.1.0.2.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/acfsvol1-370
mkfs.acfs: volume size = 1073741824 ( 1.00 GB )
mkfs.acfs: Format complete.
As root, create an empty directory which will house the file system:
# mkdir -p /acfsmounts/acfsvol1
# chown root:oinstall /acfsmounts
# chmod 770 /acfsmounts
# chown -R oracle:oinstall /acfsmounts/acfsvol1
# chmod 775 /acfsmounts/acfsvol1
As root, set up the file system to be auto-mounted by clusterware.
In a RAC 11g environment, use acfsutil (srvctl may be supported but was not tested here; the -u option allows the oracle user to administer the mount):
# . /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# /sbin/acfsutil registry -a /dev/asm/acfsvol1-370 /acfsmounts/acfsvol1 -t "ACFS General Purpose Mount" -u oracle
In a RAC 12c Grid Infrastructure environment, register it with clusterware using the following commands (the -user option allows the oracle user to administer the mount):
# . /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# srvctl add volume -volume ACFSVOL1 -diskgroup DATA -device /dev/asm/acfsvol1-370
# srvctl add filesystem -device /dev/asm/acfsvol1-370 -path /acfsmounts/acfsvol1 -diskgroup DATA -user oracle -fstype ACFS -description "ACFS General Purpose Mount"
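Once registered, the file system can be started and verified with srvctl (a sketch using the device created above; the node names are examples):
# srvctl start filesystem -device /dev/asm/acfsvol1-370
# srvctl status filesystem -device /dev/asm/acfsvol1-370
ACFS file system /acfsmounts/acfsvol1 is mounted on nodes rac1,rac2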
This is a demonstration of classic capture on ASM using the newer ASM API (DBLOGREADER), which reads the redo logs on ASM using the GoldenGate user supplied by USERID in the Extract parameter file.
Source Side : 3 Node RAC / Grid Infrastructure 11.2.0.2
Target Side : Stand-alone 11.2.0.1
DBLOGREADER
(Oracle) Valid for Extract in classic capture mode. Causes Extract to use a newer ASM API that is available as of Oracle 10.2.0.5 and later 10g R2 versions, and Oracle 11.2.0.2 and later 11g R2 versions (but not in Oracle 11g R1 versions). This API uses the database server to access the redo and archive logs, instead of connecting directly to the Oracle ASM instance. The database must contain the libraries that contain the API modules and must be running. To use this feature, the Extract database user must have the SELECT ANY TRANSACTION privilege.
When used, DBLOGREADER enables Extract to use a read size of up to 4 MB, controlled with the DBLOGREADERBUFSIZE option. The maximum read size when using the default OCI buffer is 28672 bytes, controlled by the ASMBUFSIZE option. A larger buffer may improve the performance of Extract when the redo rate is high. When using DBLOGREADER, do not use the ASMUSER and ASMPASSWORD options of TRANLOGOPTIONS; the API uses the user and password specified with the USERID parameter.
DBLOGREADERBUFSIZE
(Oracle) Valid for Extract in classic capture mode. Controls the maximum size, in bytes, of a read operation into the internal buffer that holds the results of each read of the transaction log in ASM. Higher values increase extraction speed but cause Extract to consume more memory. Lower values reduce memory usage but increase I/O, because Extract must store data that exceeds the cache size to disk.
Use DBLOGREADERBUFSIZE together with the DBLOGREADER option if the source ASM instance is Oracle 10.2.0.5 or a later 10g R2 version, or Oracle 11.2.0.2 or a later 11g R2 version (but not an Oracle 11g R1 version). The newer ASM API in those versions provides better performance than the older one. If the Oracle version is not one of those versions, then ASMBUFSIZE must be used.
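Putting the two options together, a minimal classic-capture Extract parameter file might look like the following sketch (the group name, credentials, and buffer size are illustrative, not taken from this environment):
EXTRACT EXT1
USERID ggadmin, PASSWORD ggadmin
TRANLOGOPTIONS DBLOGREADER DBLOGREADERBUFSIZE 4194304
EXTTRAIL ./dirdat/et
TABLE HR.*;
Note that no ASMUSER/ASMPASSWORD is supplied; with DBLOGREADER, Extract connects through the USERID user.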
2012-07-31 09:32:36 INFO OGG-00963 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from EXTRACT on host 192.168.100.126 (START SERVER CPU -1 PRI -1 TIMEOUT 300 PARAMS ).
2012-07-31 09:32:36 INFO OGG-00974 Oracle GoldenGate Manager for Oracle, mgr.prm: Manager started collector process (Port 7840).
2012-07-31 09:32:36 INFO OGG-01677 Oracle GoldenGate Collector: Waiting for connection (started dynamically).
2012-07-31 09:32:36 INFO OGG-01228 Oracle GoldenGate Collector: Timeout in 300 seconds.
2012-07-31 09:32:41 INFO OGG-01229 Oracle GoldenGate Collector: Connected to 192.168.100.126:23270.
Configure GoldenGate Extract to read from remote logs
Sometimes you may need to run GoldenGate on a different machine than the one that hosts the database. This is possible, but restrictions apply: first, the endian order of the two systems must be the same, and second, the bit-width must be the same. For example, it is not possible for GoldenGate on a 32-bit system to read from a database that runs on a 64-bit platform. Assuming the environment satisfies these two conditions, we can use the LOGSOURCE option of TRANLOGOPTIONS to achieve this.
Here we run GoldenGate on host goldengate1 (192.168.0.109), and the database from which we want to capture changes runs on host goldengate3 (192.168.0.111), so two different hosts. Both systems run 11.2.0.2 on RHEL 5.5. On goldengate3 the redo logs are in the mount point /home, which has been NFS-mounted on goldengate1 as /home_gg3.
This requires an NFS mount of the redo logs between the systems; it is not a GoldenGate process:
Filesystem 1K-blocks Used Available Use% Mounted on
192.168.0.111:/home 12184800 7962496 3593376 69% /home_gg3
The Extract parameters are as follows:
EXTRACT ERMT01
USERID ggadmin@orcl3, PASSWORD ggadmin
EXTTRAIL ./dirdat/er
TRANLOGOPTIONS LOGSOURCE LINUX, PATHMAP /home/oracle/app/oracle/oradata/orcl /home_gg3/oracle/app/oracle/oradata/orcl, PATHMAP /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog /home_gg3/oracle/app/oracle/flash_recovery_area/ORCL/archivelog
TABLE HR.*;
(The TRANLOGOPTIONS entry is a single line.)
So, using PATHMAP, we can make GoldenGate aware of the actual location of the redo and archive logs on the remote server and the mapped location on the system where GoldenGate is running (somewhat like the db_file_name_convert parameter used with Data Guard).
We fire some DML statements on the source database and then run the stats command for the Extract:
GGSCI (goldengate1) 93> stats ermt01 totalsonly *
Sending STATS request to EXTRACT ERMT01 ...
Start of Statistics at 2012-05-26 05:17:05.
Output to ./dirdat/er:
Cumulative totals for specified table(s):
*** Total statistics since 2012-05-26 04:51:10 ***
Total inserts 1.00
Total updates 0.00
Total deletes 1.00
Total discards 0.00
Total operations 2.00
.
.
.
End of Statistics.
GGSCI (goldengate1) 94>
1) Install and configure FUSE (Filesystem in Userspace)
2) Create the DBFS user and DBFS tablespaces
3) Mount the DBFS filesystem
4) Create symbolic links for the GoldenGate software directories dirchk, dirpcs, dirdat, and BR to point to directories on DBFS
5) Create the Application VIP
6) Download the mount-dbfs.sh script from MOS and edit it according to our environment
7) Create the DBFS Cluster Resource
8) Download and install the Oracle Grid Infrastructure Bundled Agent
9) Register GoldenGate with the bundled agents using the agctl utility
Install and Configure FUSE
Check whether FUSE is already installed using the following command:
lsmod | grep fuse
FUSE can be installed in a couple of ways: either via the Yum repository or using the RPMs available on the OEL software media.
A group named fuse must be created, and the OS user who will be mounting the DBFS filesystem needs to be added to the fuse group.
For example, if the OS user is 'oracle', we use the usermod command to modify the secondary group membership of the oracle user; it is important not to drop any groups the user is already a member of (see the sketch below).
One of the mount options we will use is called "allow_other", which enables users other than the user who mounted the DBFS file system to access the file system.
For this to work, /etc/fuse.conf needs to contain the "user_allow_other" option, as shown below.
Important: Ensure that the LD_LIBRARY_PATH variable is set and includes the path to $ORACLE_HOME/lib; otherwise we will get an error when we try to mount DBFS with the dbfs_client executable.
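A minimal sketch of the FUSE preparation, assuming the DBFS mount owner is the oracle user (run as root on each node):
# yum install fuse fuse-libs     ## or install the RPMs from the OEL media
# groupadd fuse                  ## only if the group does not already exist
# usermod -a -G fuse oracle      ## -a appends, keeping existing secondary groups
# echo "user_allow_other" >> /etc/fuse.conf
# chmod 644 /etc/fuse.conf
And as the oracle user, before running dbfs_client:
$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH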
Create the DBFS tablespaces and mount the DBFS
If the source database used by the GoldenGate Extract is running on RAC or hosted on Exadata, we will create ONE tablespace for DBFS.
If the target database where Replicat will be applying changes is on RAC or Exadata, we will create TWO tablespaces for DBFS, each with different logging and caching settings: typically one tablespace is used for the GoldenGate trail files and the other for the GoldenGate checkpoint files.
If using Exadata, an ASM disk group called DBFS_DG will typically already be available for us to use; on a non-Exadata platform we will create a separate ASM disk group for holding DBFS files.
Note that since we will be storing GoldenGate trail files on DBFS, a best practice is to allocate enough disk and tablespace space to retain at least 12-24 hours of trail files. We need to keep that in mind when we create the ASM disk group or the DBFS tablespace.
CREATE bigfile TABLESPACE dbfs_ogg_big datafile '+DBFS_DG' SIZE
1000M autoextend ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL
AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
Create the DBFS user
CREATE USER dbfs_user IDENTIFIED BY dbfs_pswd
DEFAULT TABLESPACE dbfs_ogg_big
QUOTA UNLIMITED ON dbfs_ogg_big;
GRANT create session,
create table,
create view,
create procedure,
dbfs_role
TO dbfs_user;
Create the DBFS Filesystem
To create the DBFS filesystem, we connect as the DBFS_USER Oracle user account and run either the dbfs_create_filesystem.sql or the dbfs_create_filesystem_advanced.sql script located under the $ORACLE_HOME/rdbms/admin directory.
For example:
cd $ORACLE_HOME/rdbms/admin
sqlplus dbfs_user/dbfs_pswd
SQL> @dbfs_create_filesystem dbfs_ogg_big gg_source
OR
SQL> @dbfs_create_filesystem_advanced.sql dbfs_ogg_big gg_source nocompress nodeduplicate noencrypt non-partition
Where …
dbfs_ogg_big: tablespace for the DBFS database objects
gg_source: filesystem name, this can be any string and will appear as a directory under the mount point
If we were configuring DBFS on the GoldenGate target (Replicat) side, it is recommended to use the NOCACHE LOGGING attributes for the tablespace which holds the trail files, because of the sequential reading and writing nature of the trail files.
For the checkpoint files, on the other hand, it is recommended to use the CACHE and LOGGING attributes instead.
The example shown below illustrates how we can modify the LOB attributes.
(assuming we have created two DBFS tablespaces)
SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs
WHERE tablespace_name like 'DBFS%';
TABLE_NAME SEGMENT_NAME CACHE LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG LOB_SFS$_FST_1 NO YES
T_DBFS_SM LOB_SFS$_FST_11 NO YES
SQL> ALTER TABLE dbfs_user.T_DBFS_SM
MODIFY LOB (FILEDATA) (CACHE LOGGING);
SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs
WHERE tablespace_name like 'DBFS%';
TABLE_NAME SEGMENT_NAME CACHE LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG LOB_SFS$_FST_1 NO YES
T_DBFS_SM LOB_SFS$_FST_11 YES YES
As the user root, now create the DBFS mount point on ALL nodes of the RAC cluster (or Exadata compute servers).
# cd /mnt
# mkdir DBFS
# chown oracle:oinstall DBFS/
Create a custom tnsnames.ora file in a separate location (on each node of the RAC cluster).
In our 2 node RAC cluster for example these are entries we will make for the ORCL RAC database.
We will need to provide the password for the DBFS_USER database user account when we mount the DBFS filesystem. We can either store the password in a text file or use Oracle Wallet to encrypt and store it.
In this example we are not using Oracle Wallet, so we need to create a file (on all nodes of the RAC cluster) which will contain the DBFS_USER password.
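A sketch of the pieces involved (the host name, service name, and file names are examples, not from this environment). First, the custom tnsnames.ora entry in the separate TNS_ADMIN location:
orcl =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )
Then the password file, and a manual mount with dbfs_client, which reads the password from stdin:
$ echo dbfs_pswd > $HOME/dbfs_passwd.txt
$ chmod 600 $HOME/dbfs_passwd.txt
$ nohup $ORACLE_HOME/bin/dbfs_client dbfs_user@orcl -o allow_other,direct_io /mnt/dbfs < $HOME/dbfs_passwd.txt &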
After the DBFS filesystem is mounted successfully, we can see it via the df command. Note that in this case we had created a 5 GB tablespace for DBFS, and the space allocated and used reflects that.
Note: in the Extract parameter file(s) we need to include the BR parameter pointing to the DBFS-stored directory:
BR BRDIR /mnt/dbfs/gg_source/goldengate/BR
Create the Application VIP
Typically the GoldenGate source and target databases will not be located in the same Exadata machine, and even in a non-Exadata RAC environment the source and target databases are usually on different RAC clusters. In that case we have to use an Application VIP, a cluster resource managed by Oracle Clusterware; the VIP assigned to one node will be seamlessly transferred to another surviving node in the event of a RAC (or Exadata compute) node failure.
Run the appvipcfg command to create the Application VIP, as shown in the sketch below.
We have to assign an unused IP address to the Application VIP; run the following command to identify the value to use for the network parameter, as well as the subnet for the VIP.
$ crsctl stat res -p |grep -ie .network -ie subnet |grep -ie name -ie subnet
NAME=ora.net1.network
USR_ORA_SUBNET=192.168.56.0
As root, give the Oracle Database software owner permission to start the VIP, then check its status:
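A sketch of both steps (the IP address is a placeholder; substitute an unused address in the subnet identified above):
# $GRID_HOME/bin/appvipcfg create -network=1 -ip=192.168.56.90 -vipname=gg_vip_source -user=root
# $GRID_HOME/bin/crsctl setperm resource gg_vip_source -u user:oracle:r-x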
$GRID_HOME/bin/crsctl status resource gg_vip_source
Download the mount-dbfs.sh script from MOS
Download the mount-dbfs.sh script from MOS note 1054431.1.
Copy it to a temporary location on one of the Linux RAC nodes and run the command as root:
# dos2unix /tmp/mount-dbfs.sh
Change the ownership of the file to the Oracle Grid Infrastructure owner and also copy the file to the $GRID_HOME/crs/script directory location.
Next make changes to the environment variable settings section of the mount-dbfs.sh script as required. These are the changes I made to the script.
### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
DBNAME=orcl
### Mount point where DBFS should be mounted
MOUNT_POINT=/mnt/dbfs
### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user
### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1
### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=dbfs_pswd
### TNS_ADMIN is the directory containing tnsnames.ora and sqlnet.ora used by DBFS
TNS_ADMIN=/u02/app/oracle/admin
### TNS alias used for mounting with wallets
DBFS_LOCAL_TNSALIAS=orcl
Create the DBFS Cluster Resource
Before creating the cluster resource for DBFS, test the mount-dbfs.sh script
$ ./mount-dbfs.sh start
$ ./mount-dbfs.sh status
Checking status now
Check – ONLINE
$ ./mount-dbfs.sh stop
As the Grid Infrastructure owner, create a script called add-dbfs-resource.sh and store it in the $GRID_HOME/crs/script directory.
This script will create a cluster-managed resource called dbfs_mount by calling the action script mount-dbfs.sh which we put in place earlier.
Edit the following variables in the script as required (a sketch of the script follows this list):
ACTION_SCRIPT
RESNAME
DEPNAME (this can be the Oracle database or a database service)
ORACLE_HOME
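A sketch of add-dbfs-resource.sh, modeled on the MOS note (the paths and dependency name below are examples; adjust them to your environment):
#!/bin/bash
## add-dbfs-resource.sh - registers the dbfs_mount cluster resource
ACTION_SCRIPT=/u01/app/12.1.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DEPNAME=ora.orcl.db            ## database (or service) the mount depends on
ORACLE_HOME=/u01/app/12.1.0/grid
$ORACLE_HOME/bin/crsctl add resource $RESNAME -type cluster_resource \
  -attr "ACTION_SCRIPT=$ACTION_SCRIPT,CHECK_INTERVAL=30,RESTART_ATTEMPTS=10,START_DEPENDENCIES='hard($DEPNAME)pullup($DEPNAME)',STOP_DEPENDENCIES='hard($DEPNAME)',SCRIPT_TIMEOUT=300"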
Download and Install the Oracle Grid Infrastructure Bundled Agent
Starting with Oracle 11.2.0.3 on 64-bit Linux, out-of-the-box Oracle Grid Infrastructure bundled agents were introduced which had predefined clusterware resources for applications like Siebel and Goldengate.
The bundled agent for Goldengate provided integration between Oracle Goldengate and dependent resources like the database, filesystem and the network.
The AGCTL agent command line utility can be used to start and stop Goldengate as well as relocate Goldengate resources between nodes in the cluster.
Download the latest version of the agent (6.1) from the URL below:
An xag/bin directory with the agctl executable already exists in the $GRID_HOME directory.
We need to install the new bundled agent in a separate directory, deploying it to the cluster nodes with its xagsetup.sh installer ({--nodes <node1,node2[,...]> | --all_nodes}), and ensure that $PATH includes the new location ahead of $GRID_HOME/xag/bin.
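Deployment is a single command run as the Grid Infrastructure owner from the unzipped agent media (the target directory is an example):
$ ./xagsetup.sh --install --directory /home/oracle/xagent --all_nodes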
Register Goldengate with the bundled agents using agctl utility
Using the agctl utility, create the GoldenGate configuration, as sketched below.
Ensure that we run agctl from the downloaded bundled-agent directory and not from the $GRID_HOME/xag/bin directory, or that the $PATH variable has been amended as described earlier.
Once GoldenGate is registered with the bundled agent, we should only use agctl to start and stop GoldenGate processes. The agctl command will start the Manager process, which in turn will start the other processes, like Extract, data pump, and Replicat, if we have configured them for automatic restart.
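A sketch of the registration command (the paths, node names, and resource names are the ones assumed in this walkthrough; adjust to your environment):
$ ./agctl add goldengate gg_source \
    --gg_home /u02/app/oracle/goldengate \
    --instance_type source \
    --nodes rac1,rac2 \
    --vip_name gg_vip_source \
    --filesystems dbfs_mount \
    --databases ora.orcl.db \
    --oracle_home /u02/app/oracle/product/12.1.0/dbhome_1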
Let us look at some examples of using agctl.
Check the status. Note that the DBFS filesystem is currently also mounted on node rac2:
$ pwd
/home/oracle/xagent/bin
$ ./agctl status goldengate gg_source
Goldengate instance 'gg_source' is running on rac2
$ cd /mnt/dbfs/
$ ls -lrt
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source
Stop the Goldengate environment
$ ./agctl stop goldengate gg_source
$ ./agctl status goldengate gg_source
Goldengate instance 'gg_source' is not running
GGSCI (rac2.localdomain) 1> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER STOPPED
EXTRACT STOPPED EXT1 00:00:03 00:01:19
EXTRACT STOPPED EXTDP1 00:00:00 00:01:18
Start the GoldenGate environment. Note that the resource has relocated from node rac2 to node rac1, and the GoldenGate processes have been stopped on rac2 and started on rac1.
$ ./agctl start goldengate gg_source
$ ./agctl status goldengate gg_source
Goldengate instance 'gg_source' is running on rac1
GGSCI (rac2.localdomain) 2> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER STOPPED
GGSCI (rac1.localdomain) 1> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING EXT1 00:00:09 00:00:06
EXTRACT RUNNING EXTDP1 00:00:00 00:05:22
We can also see that agctl has unmounted DBFS on rac2 and mounted it on rac1 automatically.
[oracle@rac1 goldengate]$ ls -l /mnt/dbfs
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source
[oracle@rac2 goldengate]$ ls -l /mnt/dbfs
total 0
Let's test the whole thing!
Now that the GoldenGate resources are running on node rac1, let us see what happens when we reboot that node to simulate a node failure while GoldenGate is up and running and the Extract and data pump processes are running on the source.
AGCTL and Cluster Services will relocate all the GoldenGate resources, the VIP, and DBFS to the other node seamlessly, and we will see that the Extract and data pump processes are automatically started on node rac2.
[oracle@rac1 goldengate]$ su -
Password:
[root@rac1 ~]# shutdown -h now
Broadcast message from oracle@rac1.localdomain (/dev/pts/0) at 19:45 ...
The system is going down for halt NOW!
Connect to the surviving node rac2 and check:
[oracle@rac2 bin]$ ./agctl status goldengate gg_source
Goldengate instance 'gg_source' is running on rac2
GGSCI (rac2.localdomain) 1> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING EXT1 00:00:07 00:00:02
EXTRACT RUNNING EXTDP1 00:00:00 00:00:08
Check the cluster resource:
[oracle@rac2 bin]$ crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
1 ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
See the individual product notes for the relevant links.
Each process consumes about 50 MB of memory, so a large number of processes can quickly consume a significant amount of memory.
A base configuration (Manager, Extract, data pump, and Replicat processes) will consume about 200 MB.
Each process assumes one CPU core.
GoldenGate requires a number of TCP/IP ports in order to operate, and it is important that your network firewall passes traffic on these ports. One port is used solely for communication between the Manager process and other GoldenGate processes; this is normally port 7809, but it can be changed. A range of other ports is used for local GoldenGate communications: these can be the default range starting at port 7840, or a predefined range of up to 256 other ports.
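For example, both can be pinned down explicitly in the Manager parameter file, mgr.prm (the port range below is illustrative):
PORT 7809
DYNAMICPORTLIST 7840-7850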
Oracle recommends at least 256 GB of space per Extract process for the dirtmp subdirectory.
The mining database, from which the primary Extract captures log change records from the logmining server, can be either local or downstream from the source database.
These steps configure the primary Extract to capture transaction data in integrated mode from either location.
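As a sketch, registering a primary Extract for integrated capture against a local mining database looks like this in GGSCI (the group name and credentials are examples):
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
GGSCI> REGISTER EXTRACT EXT1 DATABASE
GGSCI> ADD EXTRACT EXT1, INTEGRATED TRANLOG, BEGIN NOW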
Basically, every download of GoldenGate is a 'full' install.
Extract the file into a directory, run the "ggsci" executable in that directory,
type "CREATE SUBDIRS" at the command prompt, and you're done.
If you want to 'patch' GoldenGate, you can extract the 'patch' over the old version of GoldenGate. In general this works very well, but realize that if you are using the Management Pack for GoldenGate (which works with Enterprise Manager 12c Cloud Control), you'll want to save a copy of your CONFIG.PROPERTIES file (it's in the cfg directory) before you do this. Typically that's the only file that would get 'overwritten' during the patch install that you'd actually be concerned about; everything else you'll WANT to be overwritten during the 'patch' install.
Remember, it's not a bad idea to back everything up before you do this.
Between 25 and 55 MB of RAM is required for each GoldenGate Extract and Replicat process. Each GoldenGate instance can support up to 300 concurrent Extract and Replicat processes combined (increased to 5,000 in newer releases), but be sure to leave enough system resources available for the OS.
The best way to assess the total memory requirement is to view the process's current report file in GGSCI and examine the PROCESS AVAIL VM FROM OS (min) value to determine whether you have sufficient swap memory for your platform.
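For example, to force a fresh report for a running Extract and then inspect it (the group name is an example):
GGSCI> SEND EXTRACT EXT1, REPORT
GGSCI> VIEW REPORT EXT1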
GoldenGate will typically use only about 5% of a system's CPU resources. Modern operating systems can share available resources very efficiently; it is important, however, to size your requirements effectively, balancing the maximum possible number of concurrent processes against the number of CPUs. GoldenGate will use one CPU core per Extract or Replicat process.
For more details, refer to the GoldenGate installation and administration guides.