The goal is to provision a Linux mount point with sufficient space for GoldenGate binaries and trail files.
While you can use the Oracle Database File System (DBFS) option, it creates additional database objects and unnecessary database I/O, as well as additional redo and RMAN activity.
Another option is to use Oracle ASM Clustered File System (ACFS) for this use case.
It is much faster to set up and is available on all nodes by default, which allows GoldenGate to fail over to other nodes.
In addition, ACFS does not require the database to be up so the filesystem can also be used for other purposes.
If you are using this mount solely for GoldenGate, make sure you follow the best practices document, which is updated periodically: Oracle GoldenGate Best Practice: NFS Mount options for use with GoldenGate (Doc ID 1232303.1).
*** Refer to the following steps at your own risk and always test for your use case prior to using in a production setting.
Requirements:
- Root user access
- Sufficient ASM Space
- Separate ASM Diskgroup (Optional) unless you are using the cluster entirely for the purposes of GoldenGate
- Latest Oracle Grid Infrastructure and Database Patchset
Configuration:
Verify that ACFS/ADVM modules are present in memory (on each node):
$ lsmod | grep oracle
If the modules are not present, the command will return something similar to:
oracleasm 53591 1
If the modules are present, the command will return something similar to:
oracleacfs 3308260 0
oracleadvm 508030 0
oracleoks 506741 2 oracleacfs,oracleadvm
oracleasm 53591 1
If the modules are not present or you would like to ensure that the latest version is loaded, run the following before proceeding (as the root user):
$ . oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# $GRID_HOME/bin/acfsroot install
Reboot the node if the modules were already present and you are reloading them.
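Since the check has to happen on every node, a small loop can save some typing; this is only a sketch, and the node names racnode1/racnode2 are placeholders for your own hostnames:

```shell
# Placeholder node names -- substitute your cluster's hostnames
for node in racnode1 racnode2; do
  echo "== $node =="
  ssh root@"$node" 'lsmod | grep oracle'
done
```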
Start the ACFS modules on each node:
On each node and as the root user:
# $GRID_HOME/bin/acfsload start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed
Once installation is complete, and the mount is registered with clusterware, these modules will be loaded automatically.
If you like, you can double-check the driver state with the acfsdriverstate utility:
usage: acfsdriverstate [-orahome <ORACLE_HOME>] <installed | loaded | version | supported> [-s]
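For example, the following invocations report whether the drivers are installed and loaded, and which version is running:

```shell
$ $GRID_HOME/bin/acfsdriverstate installed
$ $GRID_HOME/bin/acfsdriverstate loaded
$ $GRID_HOME/bin/acfsdriverstate version
```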
As the oracle user, create an ASM volume for ACFS (run on one node only):
Source the grid environment.
$ . oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
Create the volume using the volcreate command.
You can use an existing disk group or create a separate one to house ACFS.
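If you opt for a dedicated disk group, a minimal sketch of creating one is below; the disk group name ACFSDG and the disk path are illustrative, and note that the compatible.advm attribute must be set for a disk group to host ACFS volumes:

```shell
$ sqlplus / as sysasm
SQL> CREATE DISKGROUP ACFSDG EXTERNAL REDUNDANCY
  2    DISK '/dev/mapper/acfsdisk1'
  3    ATTRIBUTE 'compatible.asm' = '12.1', 'compatible.advm' = '12.1';
```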
$ asmcmd
ASMCMD> volcreate -G DATA -s 10G ACFSVOL1
ASMCMD> volinfo --all
Diskgroup Name: DATA
         Volume Name: ACFSVOL1
         Volume Device: /dev/asm/acfsvol1-370
         State: ENABLED
         Size (MB): 1024
         Resize Unit (MB): 64
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:
As the oracle user, create the file system on the volume that was just created:
$ /sbin/mkfs -t acfs /dev/asm/acfsvol1-370
mkfs.acfs: version         = 12.1.0.2.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume          = /dev/asm/acfsvol1-370
mkfs.acfs: volume size     = 1073741824  ( 1.00 GB )
mkfs.acfs: Format complete.
As root, create an empty directory that will house the file system:
# mkdir -p /acfsmounts/acfsvol1
# chown root:oinstall /acfsmounts
# chmod 770 /acfsmounts
# chown -R oracle:oinstall /acfsmounts/acfsvol1
# chmod 775 /acfsmounts/acfsvol1
As root, set up the file system to be auto-mounted by clusterware:
In a RAC 11g environment, use acfsutil (srvctl may also be supported but was not tested; the -u option allows the oracle user to administer the mount):
# . /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# /sbin/acfsutil registry -a /dev/asm/acfsvol1-370 /acfsmounts/acfsvol1 -t "ACFS General Purpose Mount" -u oracle
In a RAC 12c GI environment, register it with clusterware using the following commands (the -user option allows the oracle user to administer the mount):
# . /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# srvctl add volume -volume ACFSVOL1 -diskgroup DATA -device /dev/asm/acfsvol1-370
# srvctl add filesystem -device /dev/asm/acfsvol1-370 -path /acfsmounts/acfsvol1 -diskgroup DATA -user oracle -fstype ACFS -description "ACFS General Purpose Mount"
# srvctl modify filesystem -device /dev/asm/acfsvol1-370 -fsoptions "rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,noac,vers=3,timeo=600"
At this point the mount should be ready for read/write and will be automatically mounted by clusterware.
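A quick sanity check at this point can confirm the mount is live and writable by oracle; this is a sketch, and the test file name is illustrative:

```shell
$ df -h /acfsmounts/acfsvol1
$ srvctl status filesystem -device /dev/asm/acfsvol1-370
$ touch /acfsmounts/acfsvol1/ggtest && rm /acfsmounts/acfsvol1/ggtest
```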
Administration of the ACFS mount:
If you need to resize the mount after creation, use acfsutil size (since you granted control to the oracle user, this command can also be executed as oracle). The file system can also be started and stopped manually with srvctl:
$ acfsutil size 25G /acfsmounts/acfsvol1
$ srvctl start filesystem -device /dev/asm/acfsvol1-370
$ srvctl stop filesystem -device /dev/asm/acfsvol1-370
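You can also inspect the registered file systems and their current state at any time, for example:

```shell
$ /sbin/acfsutil registry
$ /sbin/acfsutil info fs /acfsmounts/acfsvol1
```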