Reference MOS notes:
How To Setup 11.2 DBFS FileSystems Using the dbfs_client API Method
869822.1 Installing the DBFS
1150157.1 List of Critical Patches
In summary, the broad steps involved are:
1) Install and configure FUSE (Filesystem in Userspace)
2) Create the DBFS user and DBFS tablespaces
3) Mount the DBFS filesystem
4) Create symbolic links for the GoldenGate software directories dirchk, dirpcs, dirprm, dirdat and BR to point to directories on DBFS
5) Create the Application VIP
6) Download the mount-dbfs.sh script from MOS and edit it according to our environment
7) Create the DBFS cluster resource
8) Download and install the Oracle Grid Infrastructure bundled agent
9) Register GoldenGate with the bundled agent using the agctl utility
Install and Configure FUSE
Check whether FUSE is already installed using the following command:
lsmod | grep fuse
FUSE can be installed in a couple of ways – either via the Yum repository or using the RPMs available on the OEL software media.
Using Yum:
yum install kernel-devel
yum install fuse fuse-libs
Via RPM’s:
If installing from the media, these are the RPMs which are required:
kernel-devel-2.6.32-358.el6.x86_64.rpm
fuse-2.8.3-4.el6.x86_64.rpm
fuse-devel-2.8.3-4.el6.x86_64.rpm
fuse-libs-2.8.3-4.el6.x86_64.rpm
A group named fuse must be created and the OS user who will be mounting the DBFS filesystem needs to be added to the fuse group.
For example, if the OS user is ‘oracle’, we use the usermod command to modify the secondary group membership for the oracle user. It is important to ensure we do not drop any groups the user is already a member of.
# /usr/sbin/groupadd fuse
# usermod -G dba,fuse oracle
One of the mount options we will use is called “allow_other”, which enables users other than the user who mounted the DBFS filesystem to access the filesystem.
The /etc/fuse.conf file needs to have the “user_allow_other” option set as shown below.
# cat /etc/fuse.conf
user_allow_other

# chmod 644 /etc/fuse.conf
Important: Ensure that the LD_LIBRARY_PATH variable is set and includes the path to $ORACLE_HOME/lib. Otherwise we will get an error when we try to mount DBFS using the dbfs_client executable.
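For example, a minimal profile entry for the oracle user might look like the following sketch (the ORACLE_HOME path is the one used elsewhere in this article; adjust it to your environment):

export ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH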
Create the DBFS tablespaces and mount the DBFS
If the source database used by the GoldenGate Extract is running on RAC or hosted on Exadata, then we will create ONE tablespace for DBFS.
If the target database where Replicat will be applying changes is on RAC or Exadata, then we will create TWO tablespaces for DBFS, with each tablespace having different logging and caching settings – typically one tablespace will be used for the GoldenGate trail files and the other for the GoldenGate checkpoint files.
If using Exadata, then typically an ASM disk group called DBFS_DG will already be available for us to use; otherwise, on a non-Exadata platform we will create a separate ASM disk group for holding DBFS files.
Note that since we will be storing GoldenGate trail files on DBFS, a best practice is to allocate enough disk space/tablespace space to retain at least 12-24 hours of trail files. We need to keep that in mind when we create the ASM disk group or the DBFS tablespace.
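On a non-Exadata platform, a dedicated disk group for DBFS could be created along the lines of the sketch below (run from the ASM instance; the disk paths, redundancy level and compatibility attributes are illustrative assumptions):

CREATE DISKGROUP DBFS_DG EXTERNAL REDUNDANCY
  DISK '/dev/mapper/dbfs_disk1',
       '/dev/mapper/dbfs_disk2'
  ATTRIBUTE 'compatible.asm' = '12.1', 'compatible.rdbms' = '12.1';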
CREATE BIGFILE TABLESPACE dbfs_ogg_big
  DATAFILE '+DBFS_DG' SIZE 1000M AUTOEXTEND ON NEXT 100M
  LOGGING
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
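On the Replicat side, where two tablespaces are used, a second smaller tablespace for the checkpoint files could be created in the same way (the name and sizes below are purely illustrative):

CREATE BIGFILE TABLESPACE dbfs_ogg_sm
  DATAFILE '+DBFS_DG' SIZE 200M AUTOEXTEND ON NEXT 50M
  LOGGING
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;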
Create the DBFS user
CREATE USER dbfs_user IDENTIFIED BY dbfs_pswd
  DEFAULT TABLESPACE dbfs_ogg_big
  QUOTA UNLIMITED ON dbfs_ogg_big;

GRANT create session, create table, create view, create procedure, dbfs_role TO dbfs_user;
Create the DBFS Filesystem
To create the DBFS filesystem, we connect as the DBFS_USER Oracle user account and run either the dbfs_create_filesystem.sql or the dbfs_create_filesystem_advanced.sql script located under the $ORACLE_HOME/rdbms/admin directory.
For example:
cd $ORACLE_HOME/rdbms/admin
sqlplus dbfs_user/dbfs_pswd

SQL> @dbfs_create_filesystem dbfs_ogg_big gg_source

OR

SQL> @dbfs_create_filesystem_advanced.sql dbfs_ogg_big gg_source nocompress nodeduplicate noencrypt non-partition
Where …
dbfs_ogg_big: tablespace for the DBFS database objects
gg_source: filesystem name, this can be any string and will appear as a directory under the mount point
If we are configuring DBFS on the GoldenGate target or Replicat side, it is recommended to use the NOCACHE LOGGING attributes for the tablespace which holds the trail files, because of the sequential reading and writing nature of the trail files.
For the checkpoint files, on the other hand, it is recommended to use the CACHE LOGGING attributes instead.
The example shown below illustrates how we can modify the LOB attributes.
(assuming we have created two DBFS tablespaces)
SQL> SELECT table_name, segment_name, cache, logging
     FROM dba_lobs
     WHERE tablespace_name LIKE 'DBFS%';

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             NO        YES

SQL> ALTER TABLE dbfs_user.T_DBFS_SM MODIFY LOB (FILEDATA) (CACHE LOGGING);

SQL> SELECT table_name, segment_name, cache, logging
     FROM dba_lobs
     WHERE tablespace_name LIKE 'DBFS%';

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             YES       YES
As the user root, now create the DBFS mount point on ALL nodes of the RAC cluster (or Exadata compute servers).
# cd /mnt
# mkdir dbfs
# chown oracle:oinstall dbfs/
Create a custom tnsnames.ora file in a separate location (on each node of the RAC cluster).
In our two-node RAC cluster, for example, these are the entries we will make for the ORCL RAC database.
Node A
orcl =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL=BEQ)
      (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
      (ARGV0=oracleorcl1)
      (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl1')
    )
    (CONNECT_DATA=(SID=orcl1))
  )
Node B
orcl =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL=BEQ)
      (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
      (ARGV0=oracleorcl2)
      (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl2')
    )
    (CONNECT_DATA=(SID=orcl2))
  )
We will need to provide the password for the DBFS_USER database user account when we mount the DBFS filesystem via the dbfs_client command. We can either store the password in a text file or use an Oracle Wallet to encrypt and store the password.
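If we did want to use the Oracle Wallet instead, the approach would look roughly like the sketch below (the wallet directory is an assumption; the sqlnet.ora under TNS_ADMIN would also need WALLET_LOCATION and SQLNET.WALLET_OVERRIDE=TRUE entries pointing at that wallet):

$ mkstore -wrl /u02/app/oracle/admin/wallet -create
$ mkstore -wrl /u02/app/oracle/admin/wallet -createCredential orcl dbfs_user dbfs_pswd
$ nohup $ORACLE_HOME/bin/dbfs_client -o wallet,allow_other,direct_io /@orcl /mnt/dbfs &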
In this example we are not using the Oracle Wallet, so we need to create a file (on all nodes of the RAC cluster) which will contain the DBFS_USER password.
For example:
echo dbfs_pswd > passwd.txt
nohup $ORACLE_HOME/bin/dbfs_client dbfs_user@orcl -o allow_other,direct_io /mnt/dbfs < ~/passwd.txt &
After the DBFS filesystem is mounted successfully, we can see it via the ‘df’ command as shown below. Note that in this case we had created a tablespace of 5 GB for DBFS, and the allocated and used space reflects that.
$ df -h | grep dbfs
dbfs-dbfs_user@:/     4.9G   11M  4.9G   1% /mnt/dbfs
The command used to unmount the DBFS filesystem would be:
fusermount -u /mnt/dbfs
Create links from Oracle Goldengate software directories to DBFS
Create the following directories on DBFS:

$ mkdir /mnt/dbfs/gg_source/goldengate
$ cd /mnt/dbfs/gg_source/goldengate
$ mkdir dirchk
$ mkdir dirpcs
$ mkdir dirprm
$ mkdir dirdat
$ mkdir BR
Make the symbolic links from Goldengate software directories to DBFS
cd /u03/app/oracle/goldengate
mv dirchk dirchk.old
mv dirdat dirdat.old
mv dirpcs dirpcs.old
mv dirprm dirprm.old
mv BR BR.old
ln -s /mnt/dbfs/gg_source/goldengate/dirchk dirchk
ln -s /mnt/dbfs/gg_source/goldengate/dirdat dirdat
ln -s /mnt/dbfs/gg_source/goldengate/dirprm dirprm
ln -s /mnt/dbfs/gg_source/goldengate/dirpcs dirpcs
ln -s /mnt/dbfs/gg_source/goldengate/BR BR
For example:

[oracle@rac2 goldengate]$ ls -l dirdat
lrwxrwxrwx 1 oracle oinstall 26 May 16 15:53 dirdat -> /mnt/dbfs/gg_source/goldengate/dirdat
Also copy the jagent.prm file, which ships out of the box in the dirprm directory, to the dirprm directory on DBFS.
[oracle@rac2 dirprm.old]$ pwd
/u03/app/oracle/goldengate/dirprm.old
[oracle@rac2 dirprm.old]$ cp jagent.prm /mnt/dbfs/gg_source/goldengate/dirprm
Note – in the Extract parameter file(s) we need to include the BR parameter pointing to the BR directory stored on DBFS:
BR BRDIR /mnt/dbfs/gg_source/goldengate/BR
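For example, a minimal Extract parameter file using the BR parameter might look like the sketch below; the Extract name, credentials, trail prefix and TABLE specification are illustrative only, and ./dirdat resolves to DBFS through the symbolic link created earlier:

EXTRACT ext1
USERID ogg_user, PASSWORD ogg_pswd
BR BRDIR /mnt/dbfs/gg_source/goldengate/BR
EXTTRAIL ./dirdat/lt
TABLE scott.*;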
Create the Application VIP
Typically the GoldenGate source and target databases will not be located on the same Exadata machine, and even in a non-Exadata RAC environment the source and target databases are usually on different RAC clusters. In that case we have to use an Application VIP, which is a cluster resource managed by Oracle Clusterware; the VIP assigned to one node will be seamlessly transferred to another surviving node in the event of a RAC (or Exadata compute) node failure.
Run the appvipcfg command to create the Application VIP as shown in the example below.
$GRID_HOME/bin/appvipcfg create -network=1 -ip=192.168.56.90 -vipname=gg_vip_source -user=root
We have to assign an unused IP address to the Application VIP. We run the following command to identify the value we use for the network parameter as well as the subnet for the VIP.
$ crsctl stat res -p | grep -ie .network -ie subnet | grep -ie name -ie subnet
NAME=ora.net1.network
USR_ORA_SUBNET=192.168.56.0
As root, give the Oracle Database software owner permission to start the VIP.
$GRID_HOME/bin/crsctl setperm resource gg_vip_source -u user:oracle:r-x
As the Oracle Database software owner, start the VIP.
$GRID_HOME/bin/crsctl start resource gg_vip_source
Verify the status of the Application VIP
$GRID_HOME/bin/crsctl status resource gg_vip_source
Download the mount-dbfs.sh script from MOS
Download the mount-dbfs.sh script from MOS note 1054431.1.
Copy it to a temporary location on one of the Linux RAC nodes and run the command as root:
# dos2unix /tmp/mount-dbfs.sh
Change the ownership of the file to the Oracle Grid Infrastructure owner and also copy the file to the $GRID_HOME/crs/script directory location.
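For example, assuming the Grid Infrastructure owner is oracle and $GRID_HOME is /u02/app/12.1.0/grid (the path used later in the add-dbfs-resource.sh script):

# chown oracle:oinstall /tmp/mount-dbfs.sh
# cp /tmp/mount-dbfs.sh /u02/app/12.1.0/grid/crs/script/
# chmod 750 /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh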
Next, make the required changes to the environment variable settings section of the mount-dbfs.sh script. These are the changes we made for this environment.
### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
DBNAME=orcl

### Mount point where DBFS should be mounted
MOUNT_POINT=/mnt/dbfs

### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user

### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1

### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=dbfs_pswd

### TNS_ADMIN is the directory containing tnsnames.ora and sqlnet.ora used by DBFS
TNS_ADMIN=/u02/app/oracle/admin

### TNS alias used for mounting with wallets
DBFS_LOCAL_TNSALIAS=orcl
Create the DBFS Cluster Resource
Before creating the cluster resource for DBFS, test the mount-dbfs.sh script:
$ ./mount-dbfs.sh start
$ ./mount-dbfs.sh status
Checking status now
Check – ONLINE
$ ./mount-dbfs.sh stop
As the Grid Infrastructure owner, create a script called add-dbfs-resource.sh and store it in the $GRID_HOME/crs/script directory.
This script will create a cluster-managed resource called dbfs_mount by calling the action script mount-dbfs.sh which we downloaded and edited earlier.
Edit the following variables in the script as shown below:
ACTION_SCRIPT
RESNAME
DEPNAME (this can be the Oracle database or a database service)
ORACLE_HOME
#!/bin/bash
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DEPNAME=ora.orcl.db
ORACLE_HOME=/u01/app/12.1.0.2/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME

crsctl add resource $RESNAME \
  -type cluster_resource \
  -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
         CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
         START_DEPENDENCIES='hard($DEPNAME)pullup($DEPNAME)',\
         STOP_DEPENDENCIES='hard($DEPNAME)',\
         SCRIPT_TIMEOUT=300"
Execute the script – it should produce no output.
./add-dbfs-resource.sh
Download and Install the Oracle Grid Infrastructure Bundled Agent
Starting with Oracle 11.2.0.3 on 64-bit Linux, out-of-the-box Oracle Grid Infrastructure bundled agents were introduced which had predefined clusterware resources for applications like Siebel and Goldengate.
The bundled agent for Goldengate provided integration between Oracle Goldengate and dependent resources like the database, filesystem and the network.
The AGCTL agent command line utility can be used to start and stop Goldengate as well as relocate Goldengate resources between nodes in the cluster.
Download the latest version of the agent (6.1) from the URL below:
http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html
The downloaded file will be xagpack_6.zip.
The $GRID_HOME root directory already contains an xag/bin directory with an agctl executable.
We need to install the newer bundled agent into a separate directory (the xagsetup.sh installer lets us deploy it to specific nodes via --nodes <node1,node2[,...]> or to all nodes via --all_nodes) and ensure that $PATH picks up the new location rather than the default $GRID_HOME/xag/bin.
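For example, the agent could be installed under /home/oracle/xagent (the location used when running agctl later in this article); the staging directory and unzip layout below are illustrative:

$ cd /tmp
$ unzip xagpack_6.zip
$ cd xag
$ ./xagsetup.sh --install --directory /home/oracle/xagent --all_nodes
$ export PATH=/home/oracle/xagent/bin:$PATH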
Register Goldengate with the bundled agents using agctl utility
Using agctl utility create the GoldenGate configuration.
Ensure that we are running agctl from the downloaded bundled agent directory and not from the $GRID_HOME/xag/bin directory or ensure that the $PATH variable has been amended as described earlier.
/home/oracle/xagent/bin/agctl add goldengate gg_source \
  --gg_home /u03/app/oracle/goldengate \
  --instance_type source \
  --nodes rac1,rac2 \
  --vip_name gg_vip_source \
  --filesystems dbfs_mount \
  --databases ora.orcl.db \
  --oracle_home /u02/app/oracle/product/12.1.0/dbhome_1 \
  --monitor_extracts ext1,extdp1
Once GoldenGate is registered with the bundled agent, we should only use agctl to start and stop the GoldenGate processes. The agctl command will start the Manager process, which in turn will start the other processes such as Extract, Data Pump and Replicat if we have configured them for automatic restart.
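For example, automatic start and restart of the Extract and Replicat processes could be configured in the Manager parameter file (mgr.prm) along these lines (the retry values shown are illustrative):

AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 2, RESETMINUTES 60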
Let us look at some examples of using agctl.
Check the Status – note the DBFS filesystem is also mounted currently on node rac2
$ pwd
/home/oracle/xagent/bin
$ ./agctl status goldengate gg_source
Goldengate instance 'gg_source' is running on rac2

$ cd /mnt/dbfs/
$ ls -lrt
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source
Stop the Goldengate environment
$ ./agctl stop goldengate gg_source
$ ./agctl status goldengate gg_source
Goldengate instance 'gg_source' is not running

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     STOPPED
EXTRACT     STOPPED     EXT1        00:00:03      00:01:19
EXTRACT     STOPPED     EXTDP1      00:00:00      00:01:18
Start the Goldengate environment – note the resource has relocated to node rac1 from rac2 and the Goldengate processes on rac2 have been stopped and started on node rac1.
$ ./agctl start goldengate gg_source
$ ./agctl status goldengate gg_source
Goldengate instance 'gg_source' is running on rac1

GGSCI (rac2.localdomain) 2> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     STOPPED

GGSCI (rac1.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:09      00:00:06
EXTRACT     RUNNING     EXTDP1      00:00:00      00:05:22
We can also see that the agctl has unmounted DBFS on rac2 and mounted it on rac1 automatically.
[oracle@rac1 goldengate]$ ls -l /mnt/dbfs
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

[oracle@rac2 goldengate]$ ls -l /mnt/dbfs
total 0
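We can also move the whole GoldenGate environment between nodes manually; for example, a relocation back to rac2 would look something like this:

$ ./agctl relocate goldengate gg_source --node rac2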
Let's test the whole thing!
Now that we see that the Goldengate resources are running on node rac1, let us see what happens when we reboot that node to simulate a node failure when Goldengate is up and running and the Extract and Data Pump processes are running on the source.
AGCTL and the Clusterware will seamlessly relocate all the GoldenGate resources, the VIP and DBFS to the other node, and we see that the Extract and Data Pump processes have been automatically started on node rac2.
[oracle@rac1 goldengate]$ su -
Password:
[root@rac1 ~]# shutdown -h now

Broadcast message from oracle@rac1.localdomain
        (/dev/pts/0) at 19:45 ...

The system is going down for halt NOW!
Connect to the surviving node rac2 and check ……
[oracle@rac2 bin]$ ./agctl status goldengate gg_source
Goldengate instance 'gg_source' is running on rac2

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:07      00:00:02
EXTRACT     RUNNING     EXTDP1      00:00:00      00:00:08
Check the Cluster Resource ….
[oracle@rac2 bin]$ crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------