DBFS and XAG

#1 The remote_listener parameter needs to be set to the SCAN name and the port number
#2 Create the service with the srvctl command, and start it up
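
A minimal sketch of those two prerequisites, using illustrative names (SCAN exa-scan, port 1521, database DBFSDB, service dbfs_svc, instances DBFSDB1 and DBFSDB2):

sqlplus / as sysdba << EOF
alter system set remote_listener='exa-scan:1521' scope=both sid='*';
EOF

srvctl add service -d DBFSDB -s dbfs_svc -r DBFSDB1,DBFSDB2
srvctl start service -d DBFSDB -s dbfs_svc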

After configuring DBFS to use Oracle Wallet to store the password, either via "Configuring DBFS on Oracle Database Machine (Doc ID 1054431.1)" or by following the DBFS documentation in Oracle Database SecureFiles and Large Objects Developer's Guide 11g Release 2 (11.2), Part Number E18294-01, the /dbfs directory mounts, but any attempt to access the directory returns "Input/output error", i.e.

# df -k /dbfs
df: `/dbfs': Input/output error

# ls -dl /dbfs
ls: /dbfs: Input/output error

Bringing dbfs_mount online via "crsctl start resource dbfs_mount" from any cluster node, which mounts the file system on all nodes, shows no errors.

(oracle)$ crsctl start resource dbfs_mount
CRS-2672: Attempting to start 'dbfs_mount' on 'host05'
CRS-2672: Attempting to start 'dbfs_mount' on 'host06'
CRS-2676: Start of 'dbfs_mount' on 'host06' succeeded
CRS-2676: Start of 'dbfs_mount' on 'host05' succeeded

The "crsctl stat res dbfs_mount -t" confirms that dbfs is online

(oracle)$ crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
               ONLINE  ONLINE       host05
               ONLINE  ONLINE       host06

By setting the trace parameter as part of the mount command, we find that accessing the /dbfs directory results in "ORA-01017: invalid username/password; logon denied".

To turn on tracing, modify /etc/fstab to use the following mount entry and remount /dbfs via the "crsctl stop/start resource dbfs_mount" commands:

/sbin/mount.dbfs#/@dbfs_mount.local /dbfs fuse rw,user,direct_io,allow_other,wallet,trace_file=/tmp/dbfs_mount.trc,trace_level=1,trace_size=0,noauto 0 0

After running "df -k /dbfs" command and getting error you will see following in /tmp/dbfs_mount.trc

[5bbf7940 09/09/11 11:24:42.818800 LcdfLock.cpp:230 ] DEBUG: Global Lock - unlock
[5bbf7940 09/09/11 11:24:42.818806 LcdfLock.cpp:242 ] DEBUG: Global Lock (R) rdwr:0
[45bec940 09/09/11 11:24:42.857258 LcdfDBPool.cpp:398 ] ERROR: ERROR 1017 - ORA-01017: invalid username/password; logon denied

[45bec940 09/09/11 11:24:42.857287 LcdfJob.cpp:304 ] DEBUG: Exit LcdfJob::DoJobInt job:0x2aaaac0bc0e0 op(1)
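
Since the ORA-01017 points at the credential stored in the wallet, a quick check is to list (and, if stale, recreate) the wallet credential with mkstore. A sketch, assuming the wallet lives at /home/oracle/dbfs/wallet (adjust to your setup); the alias dbfs_mount.local comes from the fstab entry above:

# List the credential that dbfs_client will use
mkstore -wrl /home/oracle/dbfs/wallet -listCredential

# Replace the stored password if it no longer matches the database user
mkstore -wrl /home/oracle/dbfs/wallet -modifyCredential dbfs_mount.local dbfs_user <new_password>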

Using GoldenGate on the Oracle Database File System (DBFS)

https://asiandba.com/2016/10/07/oracle-goldengate-on-dbfs-for-rac-and-exadata/

Full Stack Role Transition DBFS & Dataguard 2016


############################################################
# DBFS Functions
# Use the following functions to install and maintain dbfs
############################################################

############################################################
# dbfs_init           - Initialize variables or other functions
# dbfs_pre_inst       - Check for OS params
# dbfs_inst_fuse      - Install FUSE libraries
# dbfs_cfg_fuse       - Config FUSE libraries
# dbfs_chk_db_usr     - Check the DBFS database user and connectivity
# dbfs_cr_tblspc      - Create DBFS tablespace
# dbfs_drp_tblspc     - Drop DBFS tablespace
# dbfs_cr_usr         - Create DBFS user
# dbfs_drp_usr        - Drop DBFS user
# dbfs_cr_objs        - Create DBFS objects
# dbfs_cr_mnt_pnt     - Create the DBFS mountpoint
# dbfs_test_access    - Test access to the DBFS directories
# dbfs_inst_qa        - QA the install
# dbfs_sho            - Stub only; needs code
# dbfs_status         - Stub only; needs code
############################################################

sho_dbfs_cmds()
{

echo "DBFS Installation commands"
echo "=========================="
echo "dbfs_init_fn.ksh           - "

echo "dbfs_cr_tblspc_cicm_fn.ksh - Create tablespace for CICM "
echo ""
echo ""
echo ""
echo ""
echo ""
echo ""
echo ""
echo ""
echo ""
echo ""
echo ""
echo ""

}
dbfs_init()
{

echo "dbfs_init........"
echo "This is the var init process....."
echo "Put any variables here needed for the process..."

}

dbfs_dbrd()
{

echo "DBFS Dashboard..............:"
echo "============================="
echo
echo "Oracle Home.............: "$ORACLE_HOME
echo "Oracle Client Home......: "$ORA_CLI_HOME
echo "ASM Diskgroup for DBFS..: "
echo "Tablespace for DBFS.....: "
echo "Mountpoint for DBFS.....: "$mntpnt
echo "Date for logfile........: "$usedate


echo
echo "DBFS Mounts identified......."
echo "============================="
echo

df -ha|grep dbfs

echo
echo

dbfs_svc_stat

}


dbfs_pre_inst()
{

echo
echo "dbfs_pre_inst........"
echo "Checking for uname -a for dbfs pre check....."
echo
echo
uname -a
echo
echo
echo "Checking for cat release....."
echo
echo
cat /etc/*-release

}

dbfs_inst_fuse()
############################################################
# Install the FUSE libraries
############################################################
{

############################################################
# not sure how this will get done in the bank environment
############################################################
yum install fuse fuse-libs kernel-devel

}

dbfs_cfg_fuse()
############################################################
# this is done as root
# Config the FUSE sub-system
############################################################
{

ls -l /usr/bin/fusermount

/sbin/modprobe fuse

chmod 666 /dev/fuse

echo "/sbin/modprobe fuse ">> /etc/rc.modules

grep fuse /etc/group

usermod -a -G fuse oracle

grep fuse /etc/group

id oracle

}


dbfs_chk_db_usr()
############################################################
# this is done as oracle
# Check the user and database from SQL*Plus
############################################################
{

sqlplus / as sysdba << EOF

show user

select name, created from v\$database;

exit

EOF

}


dbfs_cr_tblspc()
{

sqlplus / as sysdba << EOF

create bigfile tablespace dbfs_ts
datafile 'XXXXXXX'
size 1024M
autoextend on next 100M maxsize 3G nologging extent management
local autoallocate segment space management auto;

exit

EOF

}


dbfs_drp_tblspc()
{

echo "TBD..."

}
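
# A minimal sketch of what dbfs_drp_tblspc could run, mirroring
# dbfs_cr_tblspc (assumes the DBFS filesystem has been dropped and the
# tablespace is no longer in use); hypothetical helper, not yet wired in:

dbfs_drp_tblspc_sketch()
{

sqlplus / as sysdba << EOF

drop tablespace dbfs_ts including contents and datafiles;

exit

EOF

}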



dbfs_cr_usr()
{

sqlplus / as sysdba << EOF

create user dbfs_user identified by dbfs_user
default tablespace dbfs_ts
quota unlimited on dbfs_ts;

grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;

grant resource to dbfs_user;

exit

EOF

}


dbfs_drp_usr()
{

echo "TBD..."

}
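
# Likewise, a sketch for dbfs_drp_usr, assuming the dbfs_user account
# created by dbfs_cr_usr; hypothetical helper, not yet wired in:

dbfs_drp_usr_sketch()
{

sqlplus / as sysdba << EOF

drop user dbfs_user cascade;

exit

EOF

}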



dbfs_cr_objs()
############################################################
# Create the DBFS filesystem objects as the DBFS user
############################################################
{

sqlplus dbfs_user/dbfs_user@DBFSDB << EOF

show user

@?/rdbms/admin/dbfs_create_filesystem.sql dbfs_ts FS1

exit

EOF

# The script output walks through the create store, register store,
# mount store and chmod store steps, e.g.:
# begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS1', tbl_name => 'FS1',
#   tbl_tbs => 'dbfs_ts', lob_tbs => 'dbfs_ts', do_partition => false); end;

}


dbfs_cr_mnt_pnt()
######################################
# create mount point
# run as root
######################################
{

flsystm=/u06dbfs

echo
echo "create dirs"
echo

mkdir $flsystm

chown oracle:dba $flsystm

ls -ld $flsystm

}


dbfs_test_access()
{

df -k

sqlplus dbfs_user/dbfs_user@DBFSDB << EOF

show user

exit


EOF

id
dbfs_client dbfs_user/dbfs_user@DBFSDB /u06dbfs


}


Configure Oracle DBFS or ACFS on Exadata

Create the dbs_group file for dcli

As the user root, create a text file in the home directory of the root user called dbs_group. This file will contain the names of both the X5-2 compute nodes.

We will be using the dcli utility to run commands on all compute nodes in the Exadata box, and this file will be used for that purpose when we run the 'dcli -g' command.

[root@exdb1db01 ~]# dcli -g dbs_group -l root hostname
exdb1db01: exdb1db01.gavin.com.au
exdb1db02: exdb1db02.gavin.com.au

1.1 Add the oracle user to the fuse group
[root@exdb1db01 ~]# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle

1.2 Add the user_allow_other option to the fuse.conf file

[root@exdb1db01 ~]# dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
[root@exdb1db01 ~]# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf

Note – on the Exadata servers, the required fuse RPM packages are installed by default.

1.3 Create the mount points and give appropriate permissions
On both compute nodes we will create mount points which will be used to mount the DBFS file system.
Since the objective is to have multiple mount points where each mount point is dedicated to a separate database or environment, we will create the mount point with the naming convention /dbfs/<dbname>.
Change the ownership of the mount points to the oracle user

dcli -g ~/dbs_group -l root mkdir /dbfs/dev2
dcli -g ~/dbs_group -l root chown oracle:oinstall /dbfs/dev2

1.4 Create tablespace and users
As the user SYS, we will create two tablespaces which will be used to store the LOB objects associated with the DBFS file system.
We will create the dbfs_gg_dirtmp tablespace with the recommended NOLOGGING attribute as it will be used to store the contents of the GoldenGate dirtmp directory.
Note: The size of the tablespace will depend on the amount of trail files which are expected to be generated as well as the required retention period for those trail files.
While the example shows the DBFS_DG ASM disk group being used for hosting the DBFS related tablespaces, any ASM disk group with the required amount of free disk space can be used.
The DBFS_USER Oracle database user will be the owner of the DBFS related database objects and we create the user and grant the appropriate privileges especially the DBFS_ROLE database role.
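
To sanity-check the free space in a candidate disk group before creating the tablespaces, asmcmd can be queried from the grid user's environment, for example:

asmcmd lsdg DBFS_DG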

create bigfile tablespace dbfs_gg_dirsrc
datafile '+DBFS_DG' size 32g autoextend on next 2g
LOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

create bigfile tablespace dbfs_gg_dirtmp
datafile '+DBFS_DG' size 10g autoextend on next 2g
NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

create user dbfs_user identified by Oracle#123
default tablespace dbfs_gg_dirsrc
temporary tablespace temp
quota unlimited on dbfs_gg_dirsrc
quota unlimited on dbfs_gg_dirtmp;

GRANT create session, create table, create view, create procedure, dbfs_role TO dbfs_user;

1.5 Create the DBFS file system
We will next connect as the DBFS_USER and run the dbfs_create_filesystem.sql script to create the necessary DBFS related database objects.

The dbfs_create_filesystem.sql script takes two parameters – the tablespace name and the DBFS file system name.

SQL> conn dbfs_user/
Connected.

SQL> @?/rdbms/admin/dbfs_create_filesystem dbfs_gg_dirsrc ogg_dev2

No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_OGG_DEV2', tbl_name =>
'T_OGG_DEV2', tbl_tbs => 'dbfs_gg_dirsrc', lob_tbs => 'dbfs_gg_dirsrc',
do_partition => false, partition_key => 1, do_compress => false, compression =>
'', do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_OGG_DEV2',
provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_OGG_DEV2',
store_mount=>'ogg_dev2'); end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/ogg_dev2', 16895); end;
No errors.

SQL>@?/rdbms/admin/dbfs_create_filesystem dbfs_gg_dirtmp ogg_dirtmp_dev2

No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_OGG_DIRTMP_DEV2',
tbl_name => 'T_OGG_DIRTMP_DEV2', tbl_tbs => 'dbfs_gg_dirtmp', lob_tbs =>
'dbfs_gg_dirtmp', do_partition => false, partition_key => 1, do_compress =>
false, compression => '', do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_OGG_DIRTMP_DEV2',
provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_OGG_DIRTMP_DEV2',
store_mount=>'ogg_dirtmp_dev2'); end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/ogg_dirtmp_dev2', 16895);
end;
No errors.

1.6 Verify the DBFS LOB segment attributes
SQL> SELECT table_name, segment_name, logging, cache
2 FROM dba_lobs WHERE tablespace_name like 'DBFS%';

TABLE_NAME                     SEGMENT_NAME                   LOGGING CACHE
------------------------------ ------------------------------ ------- ----------
T_OGG_DEV2                     LOB_SFS$_FST_1                 YES     NO
T_OGG_DIRTMP_DEV2              LOB_SFS$_FST_11                NO      NO

1.7 Edit and customize the Oracle supplied mount-dbfs.sh script
Download the file mount-dbfs-20160215.zip from the MOS note 1054431.1 (Configuring DBFS on Oracle Exadata Database Machine).
Copy the file to a temporary directory on one of the database compute nodes and as the user root, extract the file.
We will now have two files – mount-dbfs.conf and mount-dbfs.sh.
Copy mount-dbfs.sh to mount-dbfs_<dbname>.sh and mount-dbfs.conf to mount-dbfs_<dbname>.conf
[root@exdb1db01 ~]# cd /tmp
[root@exdb1db01 tmp]# cp mount-dbfs.sh mount-dbfs_dev2.sh
[root@exdb1db01 tmp]# cp mount-dbfs.conf mount-dbfs_dev2.conf

Edit the mount-dbfs_<dbname>.sh script to reference the customized CONFIG file

[root@exdb1db01 tmp]# vi mount-dbfs_dev2.sh

### Ensure that when multiple mounts are used, there are separate copies
### of mount-dbfs.sh that reference separate CONFIG file pathnames
CONFIG=/etc/oracle/mount-dbfs_dev2.conf

1.8 Edit and customize the Oracle supplied mount-dbfs.conf script
Change the values for :
• DBNAME
• MOUNT_POINT
• DBFS_USER
• ORACLE_HOME
• GRID_HOME
• DBFS_PASSWORD

### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
### If using PDB/CDB, this should be set to the CDB name
DBNAME=DEV2

### Mount point where DBFS should be mounted
MOUNT_POINT=/dbfs/dev2

### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user

### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u01/app/oracle/product/11.2.0/shieldnp_1

### GRID HOME directory path
GRID_HOME=/u01/app/12.1.0/grid_1

###########################################
### If using password-based authentication, set these
###########################################
### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=Oracle#123

1.9 Copy the modified files to $GRID_HOME/crs/script as well as /etc/oracle and grant appropriate privileges
dcli -g ~/dbs_group -l root -d /u01/app/12.1.0/grid_1/crs/script -f /tmp/mount-dbfs_dev2.sh
dcli -g ~/dbs_group -l root chown oracle:oinstall /u01/app/12.1.0/grid_1/crs/script/mount-dbfs_dev2.sh
dcli -g ~/dbs_group -l root chmod 750 /u01/app/12.1.0/grid_1/crs/script/mount-dbfs_dev2.sh
dcli -g ~/dbs_group -l root -d /etc/oracle -f /tmp/mount-dbfs_dev2.conf
dcli -g ~/dbs_group -l root chown oracle:oinstall /etc/oracle/mount-dbfs_dev2.conf
dcli -g ~/dbs_group -l root chmod 640 /etc/oracle/mount-dbfs_dev2.conf

1.10 Create the script for mounting the DBFS File System
We will create the add-dbfs-resource_<dbname>.sh script. This script will be used to create the clusterware resource for mounting the DBFS file system.
Note that the add-dbfs-resource script will be sourcing the customized mount-dbfs_<dbname>.sh script which we created earlier.

[root@exdb1db01 tmp]# cd /u01/app/12.1.0/grid_1/crs/script
[root@exdb1db01 script]# vi add-dbfs-resource_dev2.sh
##### start script add-dbfs-resource_dev2.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/12.1.0/grid_1/crs/script/mount-dbfs_dev2.sh
RESNAME=dbfs_mount_dev2
DBNAME=DEV2
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/oracle/product/11.2.0/shieldnp_1
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
/u01/app/12.1.0/grid_1/bin/crsctl add resource $RESNAME \
-type local_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource_dev2.sh

Change the ownership of the script to oracle

[root@exdb1db01 script]# chown oracle:oinstall add-dbfs-resource_dev2.sh

1.11 As the OS user oracle run the add-dbfs-resource script to create the resource
[root@exdb1db01 script]# su - oracle
[oracle@exdb1db01 ~]$ cd /u01/app/12.1.0/grid_1/crs/script
[oracle@exdb1db01 script]$ ./add-dbfs-resource_dev2.sh

1.12 As oracle start the resource using crsctl – this will mount the DBFS file system
[oracle@exdb1db01 ~]$ cd /u01/app/12.1.0/grid_1/bin
[oracle@exdb1db01 bin]$ ./crsctl start resource dbfs_mount_dev2
CRS-2672: Attempting to start 'dbfs_mount_dev2' on 'exdb1db01'
CRS-2672: Attempting to start 'ora.dev2.db' on 'exdb1db02'
CRS-2676: Start of 'dbfs_mount_dev2' on 'exdb1db01' succeeded
CRS-2676: Start of 'dbfs_mount_dev2' on 'exdb1db02' succeeded

1.13 Check the status of the resource
[oracle@exdb1db01 bin]$ ./crsctl stat res dbfs_mount_dev2
NAME=dbfs_mount_dev2
TYPE=local_resource
TARGET=ONLINE , ONLINE
STATE=ONLINE on exdb1db01, ONLINE on exdb1db02

[oracle@exdb1db01 bin]$ exit
logout

1.14 As root create the Application VIP
[root@exdb1db01 script]# cd /u01/app/12.1.0/grid_1/bin

[root@exdb1db01 bin]# ./appvipcfg create -network=1 -ip=10.100.24.28 -vipname=ogg_vip_dev2 -user=root

[root@exdb1db01 bin]# ./crsctl setperm resource ogg_vip_dev2 -u user:oracle:r-x
[root@exdb1db01 bin]# ./crsctl setperm resource ogg_vip_dev2 -u user:grid:r-x
[root@exdb1db01 bin]# ./crsctl start resource ogg_vip_dev2
CRS-2672: Attempting to start 'ogg_vip_dev2' on 'exdb1db02'
CRS-2676: Start of 'ogg_vip_dev2' on 'exdb1db02' succeeded

We can see that the VIP is running on exdb1db02 - we can relocate it to exdb1db01

[root@exdb1db01 bin]# ./crsctl relocate resource ogg_vip_dev2
CRS-2673: Attempting to stop 'ogg_vip_dev2' on 'exdb1db02'
CRS-2677: Stop of 'ogg_vip_dev2' on 'exdb1db02' succeeded
CRS-2672: Attempting to start 'ogg_vip_dev2' on 'exdb1db01'
CRS-2676: Start of 'ogg_vip_dev2' on 'exdb1db01' succeeded

Now check the status of the resource – we can see it running on exdb1db01

[root@exdb1db01 bin]# ./crsctl status resource ogg_vip_dev2
NAME=ogg_vip_dev2
TYPE=app.appvipx.type
TARGET=ONLINE
STATE=ONLINE on exdb1db01

1.15 Check if the DBFS file systems for each database environment are mounted and directories are present
[root@exdb1db01 bin]# df -k |grep dbfs
dbfs-dbfs_user@:/ 56559616 232 56559384 1% /dbfs_dev2

[root@exdb1db01 bin]# cd /dbfs_dev2/
[root@exdb1db01 dbfs_dev2]# ls -l
total 0
drwxrwxrwx 3 root root 0 Feb 25 11:56 ogg_dev2
drwxrwxrwx 3 root root 0 Feb 25 11:57 ogg_dirtmp_dev2


2 Configure Grid Infrastructure Agent

2.1 Create the directories on the DBFS file system

[oracle@exdb1db01 ogg_dev2pd]$ pwd
/dbfs/dev2pd/ogg_dev2pd

[oracle@exdb1db01 ]$ mkdir dirpcs
[oracle@exdb1db01 ]$ mkdir dirchk
[oracle@exdb1db01 ]$ mkdir dirdat
[oracle@exdb1db01 ]$ mkdir dirprm
[oracle@exdb1db01 ]$ mkdir dircrd
[oracle@exdb1db01 ]$ mkdir BR

[oracle@exdb1db01 dev2pd]$ cd ogg_dirtmp_dev2pd
[oracle@exdb1db01 ogg_dirtmp_dev2pd]$ pwd
/dbfs/dev2pd/ogg_dirtmp_dev2pd

[oracle@exdb1db01 ogg_dirtmp]$ mkdir dirtmp

2.2 On each compute node rename the existing directories in the GoldenGate software home

[oracle@exdb1db01 dev2]$ mkdir BR

[oracle@exdb1db01 dev2]$ mv dirchk dirchk.bkp
[oracle@exdb1db01 dev2]$ mv dirdat dirdat.bkp
[oracle@exdb1db01 dev2]$ mv dirpcs dirpcs.bkp
[oracle@exdb1db01 dev2]$ mv dirprm dirprm.bkp
[oracle@exdb1db01 dev2]$ mv dircrd dircrd.bkp
[oracle@exdb1db01 dev2]$ mv dirtmp dirtmp.bkp

2.3 Create the symbolic links
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirdat dirdat
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirchk dirchk
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirpcs dirpcs
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirprm dirprm
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/BR BR
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dircrd dircrd
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dirtmp_dev2pd/dirtmp dirtmp

2.4 Download Oracle Grid Infrastructure Agent

From the URL below download the file: xagpack_7b.zip

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html

2.5 Copy the downloaded xagpack_7b.zip file to Grid user $HOME and unzip

[grid@exdb1db01 ~]$ ls
xagpack_7b.zip

[grid@exdb1db01 ~]$ unzip xagpack_7b.zip
Archive: xagpack_7b.zip
creating: xag/
inflating: xag/xagsetup.bat
creating: xag/lib/
inflating: xag/lib/facility.lis
inflating: xag/agcommon.pm
inflating: xag/agjdeas.pm
creating: xag/bin/
inflating: xag/bin/oerr.pl
inflating: xag/xagsetup.sh



inflating: xag/mesg/xagus.be
inflating: xag/mesg/xagus.msg
inflating: xag/mesg/xagus.msb
inflating: xag/agmysqlmonas.pm
inflating: xag/readme.txt
inflating: xag/agwl.pm

2.6 Two directories will be created - xag and xagent

[grid@exdb1db01 xag]$ pwd
/home/grid/xag
[grid@exdb1db01 xag]$ cd ..
[grid@exdb1db01 ~]$ ls
xag xagent xagpack_7b.zip

2.7 Run the xagsetup.sh script (as the Grid Infrastructure owner)

Note – this will install the Grid Infrastructure Agent files in the xagent directory (on both compute nodes)

[grid@exdb1db01 xag]$ ./xagsetup.sh --install --directory /u01/app/grid/xagent --all_nodes
Installing Oracle Grid Infrastructure Agents on: exdb1db01
Installing Oracle Grid Infrastructure Agents on: exdb1db02

If we try and install the Grid Infrastructure Agents under the $GRID_HOME we will see an error as shown below:

[grid@exdb1db01 xag]$ ./xagsetup.sh --install --directory /u01/app/12.1.0/grid_1/xagent --all_nodes
Installation directory cannot be under Clusterware home.

2.8 As oracle we run the AGCTL command to create the GoldenGate resource

[root@exdb1db01 bin]# su - oracle
[oracle@exdb1db01 ~]$ cd /u01/app/grid/xagent/bin

[oracle@exdb1db01 bin]$ ./agctl add goldengate ogg_dev2 --gg_home /u01/app/oracle/product/gg12.2/dev2 --instance_type source --nodes exdb1db01,exdb1db02 --vip_name ogg_vip_dev2 --filesystems dbfs_mount_dev2pd --databases ora.dev2pd.db --oracle_home /u01/app/oracle/product/11.2.0/shieldnp_1

2.9 Start and Stop Goldengate using AGCTL

[oracle@exdb1db01 bin]$ ./agctl status goldengate ogg_dev2
Goldengate instance 'ogg_dev2' is not running

[oracle@exdb1db01 bin]$ ./agctl start goldengate ogg_dev2

[oracle@exdb1db01 bin]$ ./agctl status goldengate ogg_dev2
Goldengate instance 'ogg_dev2' is running on exdb1db01

If we check via GGSCI, we can see the manager process is now up and running on compute node exdb1db01

[oracle@exdb1db01 bin]$ cd -
/u01/app/oracle/product/gg12.2/dev2
[oracle@exdb1db01 dev2]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (exdb1db01.gavin.com.au) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER RUNNING

Note that manager is stopped on compute node exdb1db02

GGSCI (exdb1db02.gavin.com.au) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER STOPPED

2.10 Relocate GoldenGate using AGCTL

[oracle@exdb1db01 bin]$ ./agctl relocate goldengate ogg_dev2

[oracle@exdb1db01 bin]$ ./agctl status goldengate ogg_dev2
Goldengate instance 'ogg_dev2' is running on exdb1db02

Now manager is running on exdb1db02

GGSCI (exdb1db02.gavin.com.au) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER RUNNING

How To Setup 11.2 DBFS FileSystems Using the dbfs_client API Method (April 15, 2015)

869822.1   Installing the DBFS

1150157.1  List of Critical patches

 

In summary the broad steps involved are:

1) Install and configure FUSE (Filesystem in Userspace)
2) Create the DBFS user and DBFS tablespaces
3) Create and mount the DBFS filesystem
4) Create symbolic links for the Goldengate software directories dirchk, dirpcs, dirdat, BR to point to directories on DBFS
5) Create the Application VIP
6) Download the mount-dbfs.sh script from MOS and edit according to our environment
7) Create the DBFS Cluster Resource
8) Download and install the Oracle Grid Infrastructure Bundled Agent
9) Register Goldengate with the bundled agents using the agctl utility

 

Install and Configure FUSE

Using the following command check if FUSE has been installed:

lsmod | grep fuse

FUSE can be installed in a couple of ways – either via the Yum repository or using the RPMs available on the OEL software media.

Using Yum:

yum install kernel-devel
yum install fuse fuse-libs

Via RPMs:

If installing from the media, then these are the RPMs which are required:

kernel-devel-2.6.32-358.el6.x86_64.rpm
fuse-2.8.3-4.el6.x86_64.rpm
fuse-devel-2.8.3-4.el6.x86_64.rpm
fuse-libs-2.8.3-4.el6.x86_64.rpm

A group named fuse must be created and the OS user who will be mounting the DBFS filesystem needs to be added to the fuse group.

For example, if the OS user is 'oracle', then we use the usermod command to modify the secondary group membership for the oracle user. It is important to ensure we do not exclude any current groups the user is already a member of.

# /usr/sbin/groupadd fuse
# usermod -G dba,fuse oracle

One of the mount options which we will use is called "allow_other"; it enables users other than the user who mounted the DBFS file system to access the file system.

The /etc/fuse.conf file needs to have the "user_allow_other" option as shown below.

# cat /etc/fuse.conf
user_allow_other

chmod 644 /etc/fuse.conf

Important: Ensure that the variable LD_LIBRARY_PATH is set and includes the path to $ORACLE_HOME/lib. Otherwise we will get an error when we try to mount the DBFS using the dbfs_client executable.
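
For example (paths are illustrative; adjust to your RDBMS home):

export ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH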

Create the DBFS tablespaces and mount the DBFS

If the source database used by the Goldengate Extract is running on RAC or hosted on Exadata, then we will create ONE tablespace for DBFS.

If the target database where Replicat will be applying changes is on RAC or Exadata, then we will create TWO tablespaces for DBFS, with each tablespace having different logging and caching settings – typically one tablespace will be used for the Goldengate trail files and the other for the Goldengate checkpoint files.

If using Exadata, then typically an ASM disk group called DBFS_DG will already be available for us to use; otherwise, on a non-Exadata platform, we will create a separate ASM disk group for holding DBFS files.

Note that since we will be storing Goldengate trail files on DBFS, a best practice is to allocate enough disk space/tablespace space to retain at least 12-24 hours of trail files. We need to keep that in mind when we create the ASM diskgroup or the DBFS tablespace.
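
A rough sizing sketch, under assumed numbers: if the Extracts generate about 2 GB of trail per hour and we want 24 hours of retention, that is 2 GB/hr x 24 hr = 48 GB, so with some headroom a tablespace of roughly 60 GB would be a reasonable starting point.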

CREATE bigfile TABLESPACE dbfs_ogg_big datafile '+DBFS_DG' SIZE
1000M autoextend ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL
AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;

Create the DBFS user

CREATE USER dbfs_user IDENTIFIED BY dbfs_pswd 
DEFAULT TABLESPACE dbfs_ogg_big
QUOTA UNLIMITED ON dbfs_ogg_big;

GRANT create session, 
      create table, 
      create view, 
      create procedure, 
      dbfs_role 
TO    dbfs_user; 


Create the DBFS Filesystem

To create the DBFS filesystem we connect as the DBFS_USER Oracle user account and either run the dbfs_create_filesystem.sql or dbfs_create_filesystem_advanced.sql script located under $ORACLE_HOME/rdbms/admin directory.

For example:

cd $ORACLE_HOME/rdbms/admin 

sqlplus dbfs_user/dbfs_pswd 


SQL> @dbfs_create_filesystem dbfs_ogg_big gg_source

OR

SQL> @dbfs_create_filesystem_advanced.sql dbfs_ogg_big  gg_source
      nocompress nodeduplicate noencrypt non-partition 

Where …
dbfs_ogg_big:  tablespace for the DBFS database objects
gg_source:     filesystem name; this can be any string and will appear as a directory under the mount point

If we were configuring DBFS on the Goldengate target or Replicat side of things, it is recommended to use the NOCACHE LOGGING attributes for the tablespace which holds the trail files because of the sequential reading and writing nature of the trail files.

For the checkpoint files, on the other hand, it is recommended to use the CACHE and LOGGING attributes instead.

The example shown below illustrates how we can modify the LOB attributes.

(assuming we have created two DBFS tablespaces)

SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%'; 

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             NO        YES



SQL> ALTER TABLE dbfs_user.T_DBFS_SM 
     MODIFY LOB (FILEDATA) (CACHE LOGGING); 


SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%';  

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             YES       YES


As the user root, now create the DBFS mount point on ALL nodes of the RAC cluster (or Exadata compute servers).

# cd /mnt
# mkdir dbfs
# chown oracle:oinstall dbfs/

Create a custom tnsnames.ora file in a separate location (on each node of the RAC cluster).

In our 2-node RAC cluster, for example, these are the entries we will make for the ORCL RAC database.

Node A

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl1)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl1')
      )
  (CONNECT_DATA=(SID=orcl1))
)

Node B

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl2)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl2')
      )
  (CONNECT_DATA=(SID=orcl2))
)
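
Before relying on these aliases, a quick sanity test from each node is worthwhile. A sketch, assuming the custom tnsnames.ora lives under /u02/app/oracle/admin (the same TNS_ADMIN used in the mount-dbfs.sh settings later):

export TNS_ADMIN=/u02/app/oracle/admin
sqlplus -L dbfs_user/dbfs_pswd@orcl << EOF
show user
exit
EOF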


 

We will need to provide the password for the DBFS_USER database user account when we mount the DBFS filesystem via the dbfs_mount command. We can either store the password in a text file or we can use Oracle Wallet to encrypt and store the password.

In this example we are not using the Oracle Wallet, so we need to create a file (on all nodes of the RAC cluster) which will contain the DBFS_USER password.

For example:
echo dbfs_pswd > passwd.txt

nohup $ORACLE_HOME/bin/dbfs_client dbfs_user@orcl -o allow_other,direct_io /mnt/dbfs < ~/passwd.txt &
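
If the wallet route is chosen instead, the equivalent mount command takes no password on stdin; a sketch, assuming a client wallet configured per MOS note 1054431.1 and referenced from sqlnet.ora:

nohup $ORACLE_HOME/bin/dbfs_client /@orcl -o allow_other,direct_io,wallet /mnt/dbfs &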

After the DBFS filesystem is mounted successfully we can now see it via the ‘df’ command like shown below. Note in this case we had created a tablespace of 5 GB for DBFS and the space allocated and used displays that.

$  df -h |grep dbfs

dbfs-dbfs_user@:/     4.9G   11M  4.9G   1% /mnt/dbfs

The command used to unmount the DBFS filesystem would be:

fusermount -u /mnt/dbfs

Create links from Oracle Goldengate software directories to DBFS

Create the following directories on DBFS

$ mkdir /mnt/dbfs/gg_source/goldengate
$ cd /mnt/dbfs/gg_source/goldengate
$ mkdir dirchk
$ mkdir dirpcs 
$ mkdir dirprm
$ mkdir dirdat
$ mkdir BR

Make the symbolic links from Goldengate software directories to DBFS

cd /u03/app/oracle/goldengate
mv dirchk dirchk.old
mv dirdat dirdat.old
mv dirpcs dirpcs.old
mv dirprm dirprm.old
mv BR BR.old
ln -s /mnt/dbfs/gg_source/goldengate/dirchk dirchk
ln -s /mnt/dbfs/gg_source/goldengate/dirdat dirdat
ln -s /mnt/dbfs/gg_source/goldengate/dirprm dirprm
ln -s /mnt/dbfs/gg_source/goldengate/dirpcs dirpcs
ln -s /mnt/dbfs/gg_source/goldengate/BR BR

For example :

[oracle@rac2 goldengate]$ ls -l dirdat
lrwxrwxrwx 1 oracle oinstall 26 May 16 15:53 dirdat -> /mnt/dbfs/gg_source/goldengate/dirdat

Also copy the jagent.prm file, which comes out of the box in the dirprm directory

[oracle@rac2 dirprm.old]$ pwd
/u03/app/oracle/goldengate/dirprm.old
[oracle@rac2 dirprm.old]$ cp jagent.prm /mnt/dbfs/gg_source/goldengate/dirprm

Note – in the Extract parameter file(s) we need to include the BR parameter pointing to the DBFS stored directory

BR BRDIR /mnt/dbfs/gg_source/goldengate/BR

Create the Application VIP

Typically the Goldengate source and target databases will not be located in the same Exadata machine, and even in a non-Exadata RAC environment the source and target databases are usually on different RAC clusters. In that case we have to use an Application VIP, which is a cluster resource managed by Oracle Clusterware; the VIP assigned to one node will be seamlessly transferred to another surviving node in the event of a RAC (or Exadata compute) node failure.

Run the appvipcfg command to create the Application VIP as shown in the example below.

$GRID_HOME/bin/appvipcfg create -network=1 -ip=192.168.56.90 -vipname=gg_vip_source -user=root

We have to assign an unused IP address to the Application VIP. We run the following command to identify the value to use for the network parameter as well as the subnet for the VIP.

$ crsctl stat res -p |grep -ie .network -ie subnet |grep -ie name -ie subnet

NAME=ora.net1.network
USR_ORA_SUBNET=192.168.56.0

As root give the Oracle Database software owner permissions to start the VIP.

$GRID_HOME/bin/crsctl setperm resource gg_vip_source -u user:oracle:r-x 

As the Oracle database software owner start the VIP

$GRID_HOME/bin/crsctl start resource gg_vip_source

Verify the status of the Application VIP

$GRID_HOME/bin/crsctl status resource gg_vip_source

 

Download the mount-dbfs.sh script from MOS

Download the mount-dbfs.sh script from MOS note 1054431.1.

Copy it to a temporary location on one of the Linux RAC nodes and run the command as root:

# dos2unix /tmp/mount-dbfs.sh

Change the ownership of the file to the Oracle Grid Infrastructure owner and also copy the file to the $GRID_HOME/crs/script directory location.

Next make changes to the environment variable settings section of the mount-dbfs.sh script as required. These are the changes I made to the script.

### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
DBNAME=orcl

### Mount point where DBFS should be mounted
MOUNT_POINT=/mnt/dbfs

### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user

### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1

### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=dbfs_user

### TNS_ADMIN is the directory containing tnsnames.ora and sqlnet.ora used by DBFS
TNS_ADMIN=/u02/app/oracle/admin

### TNS alias used for mounting with wallets
DBFS_LOCAL_TNSALIAS=orcl

Create the DBFS Cluster Resource

Before creating the cluster resource for DBFS, test the mount-dbfs.sh script

$ ./mount-dbfs.sh start
$ ./mount-dbfs.sh status
Checking status now
Check – ONLINE

$ ./mount-dbfs.sh stop

As the Grid Infrastructure owner create a script called add-dbfs-resource.sh and store it in the $GRID_HOME/crs/script directory.

This script will create a Cluster Managed Resource called dbfs_mount by calling the Action Script mount-dbfs.sh which we had created earlier.

Edit the following variables in the script as shown below:

ACTION_SCRIPT
RESNAME
DEPNAME (this can be the Oracle database or a database service)
ORACLE_HOME

#!/bin/bash
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DEPNAME=ora.orcl.db
ORACLE_HOME=/u01/app/12.1.0.2/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type cluster_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard($DEPNAME)pullup($DEPNAME)',\
STOP_DEPENDENCIES='hard($DEPNAME)',\
SCRIPT_TIMEOUT=300"

Execute the script – it should produce no output.

./add-dbfs-resource.sh

 

Download and Install the Oracle Grid Infrastructure Bundled Agent

Starting with Oracle 11.2.0.3 on 64-bit Linux, out-of-the-box Oracle Grid Infrastructure bundled agents were introduced which had predefined clusterware resources for applications like Siebel and Goldengate.

The bundled agent for Goldengate provided integration between Oracle Goldengate and dependent resources like the database, filesystem and the network.

The AGCTL agent command line utility can be used to start and stop Goldengate as well as relocate Goldengate resources between nodes in the cluster.

Download the latest version of the agent (6.1) from the URL below:

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html

The downloaded file will be xagpack_6.zip.

There is an xag/bin directory with the agctl executable already existing in the $GRID_HOME root directory.

We need to install the new bundled agent in a separate directory and ensure the $PATH includes the bin directory of that new agent home.
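
For example, if the agent was installed under /home/oracle/xagent (an illustrative path), the PATH amendment would look like:

export PATH=/home/oracle/xagent/bin:$PATH
which agctl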

Register Goldengate with the bundled agents using agctl utility

Using the agctl utility, create the GoldenGate configuration.

Ensure that we are running agctl from the downloaded bundled agent directory and not from the $GRID_HOME/xag/bin directory or ensure that the $PATH variable has been amended as described earlier.

/home/oracle/xagent/bin/agctl add goldengate gg_source --gg_home /u03/app/oracle/goldengate \
--instance_type source \
--nodes rac1,rac2 \
--vip_name gg_vip_source \
--filesystems dbfs_mount --databases ora.orcl.db \
--oracle_home /u02/app/oracle/product/12.1.0/dbhome_1 \
--monitor_extracts ext1,extdp1
 

Once GoldenGate is registered with the bundled agent, we should only use agctl to start and stop the Goldengate processes. The agctl command will start the Manager process, which in turn will start the other processes such as Extract, Data Pump and Replicat if we have configured them for automatic restart.
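
The automatic restart behaviour comes from the Manager parameters. A sketch of the relevant entries in dirprm/mgr.prm (values are illustrative):

AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 2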

Let us look at some examples of using agctl.

Check the Status – note the DBFS filesystem is also mounted currently on node rac2

$ pwd
/home/oracle/xagent/bin
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2


$ cd /mnt/dbfs/
$ ls -lrt
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

Stop the Goldengate environment

$ ./agctl stop goldengate gg_source 
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is not running

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
EXTRACT     STOPPED     EXT1        00:00:03      00:01:19
EXTRACT     STOPPED     EXTDP1      00:00:00      00:01:18

Start the Goldengate environment – note the resource has relocated to node rac1 from rac2 and the Goldengate processes on rac2 have been stopped and started on node rac1.

$ ./agctl start goldengate gg_source
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac1

GGSCI (rac2.localdomain) 2> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     STOPPED

GGSCI (rac1.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:09      00:00:06
EXTRACT     RUNNING     EXTDP1      00:00:00      00:05:22

We can also see that agctl has unmounted DBFS on rac2 and mounted it on rac1 automatically.

[oracle@rac1 goldengate]$ ls -l /mnt/dbfs
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

[oracle@rac2 goldengate]$ ls -l /mnt/dbfs
total 0

Let's test the whole thing!

Now that we see that the Goldengate resources are running on node rac1, let us see what happens when we reboot that node to simulate a node failure when Goldengate is up and running and the Extract and Data Pump processes are running on the source.

AGCTL and Cluster Services will relocate all the Goldengate resources, VIP, DBFS to the other node seamlessly and we see that the Extract and Data Pump processes have been automatically started up on node rac2.

[oracle@rac1 goldengate]$ su -
Password:
[root@rac1 ~]# shutdown -h now

Broadcast message from oracle@rac1.localdomain
[root@rac1 ~]#  (/dev/pts/0) at 19:45 ...

The system is going down for halt NOW!

Connect to the surviving node rac2 and check ……

[oracle@rac2 bin]$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:07      00:00:02
EXTRACT     RUNNING     EXTDP1      00:00:00      00:00:08

Check the Cluster Resource ….

[oracle@rac2 bin]$ crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
