Configure Oracle DBFS or ACFS on Exadata

Create the dbs_group file for dcli

As the user root, create a text file in the home directory of the root user called dbs_group. This file will contain the names of both the X5-2 compute nodes.

We will use the dcli utility to run commands on all compute nodes in the Exadata rack, and this file will be passed to dcli whenever we run a 'dcli -g' command.
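The dbs_group file itself is just a plain-text list of host names, one per line. A minimal sketch (the node names are this environment's; the file is written to the current directory here for illustration, whereas on Exadata it lives in the root user's home directory):

```shell
# dbs_group is a plain-text list of compute node names, one per line.
# Written to the current directory for illustration; on Exadata create
# it as ~/dbs_group for the root user.
cat > dbs_group <<'EOF'
exdb1db01
exdb1db02
EOF
cat dbs_group
```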

[root@exdb1db01 ~]# dcli -g dbs_group -l root hostname
exdb1db01: exdb1db01.gavin.com.au
exdb1db02: exdb1db02.gavin.com.au

1.1 Add the oracle user to the fuse group
[root@exdb1db01 ~]# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle

1.2 Add the user_allow_other option to the fuse.conf file

[root@exdb1db01 ~]# dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
[root@exdb1db01 ~]# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf

Note – on the Exadata servers, the required fuse RPM packages are installed by default.

1.3 Create the mount points and give appropriate permissions
On both compute nodes we will create mount points which will be used to mount the DBFS file system.
Since the objective is to have multiple mount points, each dedicated to a separate database or environment, we will create the mount points using the naming convention /dbfs/<environment>.
Change the ownership of the mount points to the oracle user

dcli -g ~/dbs_group -l root mkdir /dbfs/dev2
dcli -g ~/dbs_group -l root chown oracle:oinstall /dbfs/dev2
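For additional environments, the mount-point path can be derived from the environment name per the /dbfs/<environment> convention; a minimal sketch (ENV is an example value, and a scratch directory stands in for the real /dbfs root; the chown step is omitted here):

```shell
# Derive a per-environment mount point from the environment name.
# ENV is an example value; ROOT stands in for /dbfs on a real system.
ENV=dev2
ROOT=$(mktemp -d)
MOUNT_POINT="$ROOT/$ENV"
mkdir -p "$MOUNT_POINT"
echo "$MOUNT_POINT"
```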

1.4 Create tablespaces and the DBFS user
As the user SYS, we will create two tablespaces which will be used to store the LOB objects associated with the DBFS file system.
We will create the dbfs_gg_dirtmp tablespace with the recommended NOLOGGING attribute as it will be used to store the contents of the GoldenGate dirtmp directory.
Note: The size of the tablespace will depend on the volume of trail files expected to be generated, as well as the required retention period for those trail files.
While the example shows the DBFS_DG ASM disk group being used to host the DBFS-related tablespaces, any ASM disk group with the required amount of free space can be used.
The DBFS_USER database user will own the DBFS-related database objects; we create the user and grant the appropriate privileges, in particular the DBFS_ROLE database role.
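The sizing note above reduces to simple arithmetic; a sketch with assumed figures (the daily trail volume and retention period are hypothetical, not from this environment):

```shell
# Rough tablespace sizing: daily trail volume x retention days, plus 20% headroom.
# Both input figures are assumptions for illustration only.
TRAIL_GB_PER_DAY=20
RETENTION_DAYS=3
REQUIRED_GB=$(( TRAIL_GB_PER_DAY * RETENTION_DAYS * 120 / 100 ))
echo "${REQUIRED_GB}G"   # 72G
```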

create bigfile tablespace dbfs_gg_dirsrc
datafile '+DBFS_DG' size 32g autoextend on next 2g
LOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

create bigfile tablespace dbfs_gg_dirtmp
datafile '+DBFS_DG' size 10g autoextend on next 2g
NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

create user dbfs_user identified by Oracle#123
default tablespace dbfs_gg_dirsrc
temporary tablespace temp
quota unlimited on dbfs_gg_dirsrc
quota unlimited on dbfs_gg_dirtmp;

GRANT create session, create table, create view, create procedure, dbfs_role TO dbfs_user;

1.5 Create the DBFS file system
We will next connect as the DBFS_USER and run the dbfs_create_filesystem.sql script to create the necessary DBFS related database objects.

The dbfs_create_filesystem.sql script takes two parameters: the tablespace name and the DBFS file system name.

SQL> conn dbfs_user/
Connected.

SQL> @?/rdbms/admin/dbfs_create_filesystem dbfs_gg_dirsrc ogg_dev2

No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_OGG_DEV2', tbl_name =>
'T_OGG_DEV2', tbl_tbs => 'dbfs_gg_dirsrc', lob_tbs => 'dbfs_gg_dirsrc',
do_partition => false, partition_key => 1, do_compress => false, compression =>
'', do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_OGG_DEV2',
provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_OGG_DEV2',
store_mount=>'ogg_dev2'); end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/ogg_dev2', 16895); end;
No errors.

SQL>@?/rdbms/admin/dbfs_create_filesystem dbfs_gg_dirtmp ogg_dirtmp_dev2

No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_OGG_DIRTMP_DEV2',
tbl_name => 'T_OGG_DIRTMP_DEV2', tbl_tbs => 'dbfs_gg_dirtmp', lob_tbs =>
'dbfs_gg_dirtmp', do_partition => false, partition_key => 1, do_compress =>
false, compression => '', do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_OGG_DIRTMP_DEV2',
provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_OGG_DIRTMP_DEV2',
store_mount=>'ogg_dirtmp_dev2'); end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/ogg_dirtmp_dev2', 16895);
end;
No errors.

1.6 Verify the DBFS LOB segment attributes
SQL> SELECT table_name, segment_name, logging, cache
2 FROM dba_lobs WHERE tablespace_name like 'DBFS%';

TABLE_NAME SEGMENT_NAME LOGGING CACHE
------------------------------ ------------------------------ ------- ----------
T_OGG_DEV2 LOB_SFS$_FST_1 YES NO
T_OGG_DIRTMP_DEV2 LOB_SFS$_FST_11 NO NO

1.7 Edit and customize the Oracle supplied mount-dbfs.sh script
Download the file mount-dbfs-20160215.zip from the MOS note 1054431.1 (Configuring DBFS on Oracle Exadata Database Machine).
Copy the file to a temporary directory on one of the database compute nodes and as the user root, extract the file.
We will now have two files – mount-dbfs.conf and mount-dbfs.sh.
Copy mount-dbfs.sh to mount-dbfs_<dbname>.sh and mount-dbfs.conf to mount-dbfs_<dbname>.conf:
[root@exdb1db01 ~]# cd /tmp
[root@exdb1db01 tmp]# cp mount-dbfs.sh mount-dbfs_dev2.sh
[root@exdb1db01 tmp]# cp mount-dbfs.conf mount-dbfs_dev2.conf

Edit the mount-dbfs_<dbname>.sh script to reference the customized CONFIG file:

[root@exdb1db01 tmp]# vi mount-dbfs_dev2.sh

### Ensure that when multiple mounts are used, there are separate copies
### of mount-dbfs.sh that reference separate CONFIG file pathnames
CONFIG=/etc/oracle/mount-dbfs_dev2.conf

1.8 Edit and customize the Oracle supplied mount-dbfs.conf script
Change the values for:
• DBNAME
• MOUNT_POINT
• DBFS_USER
• ORACLE_HOME
• GRID_HOME
• DBFS_PASSWD

### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
### If using PDB/CDB, this should be set to the CDB name
DBNAME=DEV2

### Mount point where DBFS should be mounted
MOUNT_POINT=/dbfs/dev2

### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user

### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u01/app/oracle/product/11.2.0/shieldnp_1

### GRID HOME directory path
GRID_HOME=/u01/app/12.1.0/grid_1

###########################################
### If using password-based authentication, set these
###########################################
### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=Oracle#123

1.9 Copy the modified files to $GRID_HOME/crs/script as well as /etc/oracle and grant appropriate privileges
dcli -g ~/dbs_group -l root -d /u01/app/12.1.0/grid_1/crs/script -f /tmp/mount-dbfs_dev2.sh
dcli -g ~/dbs_group -l root chown oracle:oinstall /u01/app/12.1.0/grid_1/crs/script/mount-dbfs_dev2.sh
dcli -g ~/dbs_group -l root chmod 750 /u01/app/12.1.0/grid_1/crs/script/mount-dbfs_dev2.sh
dcli -g ~/dbs_group -l root -d /etc/oracle -f /tmp/mount-dbfs_dev2.conf
dcli -g ~/dbs_group -l root chown oracle:oinstall /etc/oracle/mount-dbfs_dev2.conf
dcli -g ~/dbs_group -l root chmod 640 /etc/oracle/mount-dbfs_dev2.conf

1.10 Create the script for mounting the DBFS File System
We will create the add-dbfs-resource_<dbname>.sh script. This script will be used to create the clusterware resource that mounts the DBFS file system.
Note that the add-dbfs-resource script references, as its action script, the customized mount-dbfs_<dbname>.sh script we created earlier.

[root@exdb1db01 tmp]# cd /u01/app/12.1.0/grid_1/crs/script
[root@exdb1db01 script]# vi add-dbfs-resource_dev2.sh
##### start script add-dbfs-resource_dev2.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/12.1.0/grid_1/crs/script/mount-dbfs_dev2.sh
RESNAME=dbfs_mount_dev2
DBNAME=DEV2
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/oracle/product/11.2.0/shieldnp_1
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
/u01/app/12.1.0/grid_1/bin/crsctl add resource $RESNAME \
-type local_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource_dev2.sh

Change the ownership of the script to oracle

[root@exdb1db01 script]# chown oracle:oinstall add-dbfs-resource_dev2.sh

1.11 As the OS user oracle run the add-dbfs-resource script to create the resource
[root@exdb1db01 script]# su - oracle
[oracle@exdb1db01 ~]$ cd /u01/app/12.1.0/grid_1/crs/script
[oracle@exdb1db01 script]$ ./add-dbfs-resource_dev2.sh

1.12 As oracle start the resource using crsctl – this will mount the DBFS file system
[oracle@exdb1db01 ~]$ cd /u01/app/12.1.0/grid_1/bin
[oracle@exdb1db01 bin]$ ./crsctl start resource dbfs_mount_dev2
CRS-2672: Attempting to start 'dbfs_mount_dev2' on 'exdb1db01'
CRS-2672: Attempting to start 'ora.dev2.db' on 'exdb1db02'
CRS-2676: Start of 'dbfs_mount_dev2' on 'exdb1db01' succeeded
CRS-2676: Start of 'dbfs_mount_dev2' on 'exdb1db02' succeeded

1.13 Check the status of the resource
[oracle@exdb1db01 bin]$ ./crsctl stat res dbfs_mount_dev2
NAME=dbfs_mount_dev2
TYPE=local_resource
TARGET=ONLINE , ONLINE
STATE=ONLINE on exdb1db01, ONLINE on exdb1db02

[oracle@exdb1db01 bin]$ exit
logout

1.14 As root create the Application VIP
[root@exdb1db01 script]# cd /u01/app/12.1.0/grid_1/bin

[root@exdb1db01 bin]# ./appvipcfg create -network=1 -ip=10.100.24.28 -vipname=ogg_vip_dev2 -user=root

[root@exdb1db01 bin]# ./crsctl setperm resource ogg_vip_dev2 -u user:oracle:r-x
[root@exdb1db01 bin]# ./crsctl setperm resource ogg_vip_dev2 -u user:grid:r-x
[root@exdb1db01 bin]# ./crsctl start resource ogg_vip_dev2
CRS-2672: Attempting to start 'ogg_vip_dev2' on 'exdb1db02'
CRS-2676: Start of 'ogg_vip_dev2' on 'exdb1db02' succeeded

We can see that the VIP is running on exdb1db02, so we can relocate it to exdb1db01:

[root@exdb1db01 bin]# ./crsctl relocate resource ogg_vip_dev2
CRS-2673: Attempting to stop 'ogg_vip_dev2' on 'exdb1db02'
CRS-2677: Stop of 'ogg_vip_dev2' on 'exdb1db02' succeeded
CRS-2672: Attempting to start 'ogg_vip_dev2' on 'exdb1db01'
CRS-2676: Start of 'ogg_vip_dev2' on 'exdb1db01' succeeded

Now check the status of the resource – we can see it running on exdb1db01

[root@exdb1db01 bin]# ./crsctl status resource ogg_vip_dev2
NAME=ogg_vip_dev2
TYPE=app.appvipx.type
TARGET=ONLINE
STATE=ONLINE on exdb1db01

1.15 Check if the DBFS file systems for each database environment are mounted and directories are present
[root@exdb1db01 bin]# df -k |grep dbfs
dbfs-dbfs_user@:/ 56559616 232 56559384 1% /dbfs_dev2

[root@exdb1db01 bin]# cd /dbfs_dev2/
[root@exdb1db01 dbfs_dev2]# ls -l
total 0
drwxrwxrwx 3 root root 0 Feb 25 11:56 ogg_dev2
drwxrwxrwx 3 root root 0 Feb 25 11:57 ogg_dirtmp_dev2
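Beyond df and ls, a quick read/write check confirms the FUSE mount is actually usable; a sketch (a scratch directory stands in for the real DBFS mount point here):

```shell
# Simple read/write sanity check for a mounted filesystem.
# MNT is a scratch directory for illustration; on Exadata point it at the
# real DBFS mount, e.g. /dbfs/dev2.
MNT=$(mktemp -d)
echo "dbfs write test" > "$MNT/.dbfs_check"
CONTENT=$(cat "$MNT/.dbfs_check")
rm -f "$MNT/.dbfs_check"
echo "$CONTENT"
```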


2 Configure Grid Infrastructure Agent

2.1 Create the directories on the DBFS file system

[oracle@exdb1db01 ogg_dev2pd]$ pwd
/dbfs/dev2pd/ogg_dev2pd

[oracle@exdb1db01 ogg_dev2pd]$ mkdir dirpcs
[oracle@exdb1db01 ogg_dev2pd]$ mkdir dirchk
[oracle@exdb1db01 ogg_dev2pd]$ mkdir dirdat
[oracle@exdb1db01 ogg_dev2pd]$ mkdir dirprm
[oracle@exdb1db01 ogg_dev2pd]$ mkdir dircrd
[oracle@exdb1db01 ogg_dev2pd]$ mkdir BR

[oracle@exdb1db01 dev2pd]$ cd ogg_dirtmp_dev2pd
[oracle@exdb1db01 ogg_dirtmp_dev2pd]$ pwd
/dbfs/dev2pd/ogg_dirtmp_dev2pd

[oracle@exdb1db01 ogg_dirtmp_dev2pd]$ mkdir dirtmp

2.2 On each compute node rename the existing directories in the GoldenGate software home

[oracle@exdb1db01 dev2]$ mkdir BR

[oracle@exdb1db01 dev2]$ mv dirchk dirchk.bkp
[oracle@exdb1db01 dev2]$ mv dirdat dirdat.bkp
[oracle@exdb1db01 dev2]$ mv dirpcs dirpcs.bkp
[oracle@exdb1db01 dev2]$ mv dirprm dirprm.bkp
[oracle@exdb1db01 dev2]$ mv dircrd dircrd.bkp
[oracle@exdb1db01 dev2]$ mv dirtmp dirtmp.bkp

2.3 Create the symbolic links
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirdat dirdat
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirchk dirchk
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirpcs dirpcs
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dirprm dirprm
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/BR BR
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dev2pd/dircrd dircrd
[oracle@exdb1db01 dev2]$ ln -s /dbfs/dev2pd/ogg_dirtmp_dev2pd/dirtmp dirtmp
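The rename-and-link steps in 2.2 and 2.3 can be scripted as a loop; a sketch using scratch directories in place of the real GoldenGate home and DBFS paths:

```shell
# Back up each GoldenGate directory and replace it with a symlink to DBFS.
# Scratch directories stand in for the real paths
# (/u01/app/oracle/product/gg12.2/dev2 and /dbfs/dev2pd/ogg_dev2pd).
GG_HOME=$(mktemp -d)
DBFS_DIR=$(mktemp -d)
for d in dirchk dirdat dirpcs dirprm dircrd BR; do
  mkdir -p "$GG_HOME/$d" "$DBFS_DIR/$d"
  mv "$GG_HOME/$d" "$GG_HOME/$d.bkp"
  ln -s "$DBFS_DIR/$d" "$GG_HOME/$d"
done
ls -l "$GG_HOME"
```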

2.4 Download Oracle Grid Infrastructure Agent

From the URL below download the file: xagpack_7b.zip

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html

2.5 Copy the downloaded xagpack_7b.zip file to Grid user $HOME and unzip

[grid@exdb1db01 ~]$ ls
xagpack_7b.zip

[grid@exdb1db01 ~]$ unzip xagpack_7b.zip
Archive: xagpack_7b.zip
creating: xag/
inflating: xag/xagsetup.bat
creating: xag/lib/
inflating: xag/lib/facility.lis
inflating: xag/agcommon.pm
inflating: xag/agjdeas.pm
creating: xag/bin/
inflating: xag/bin/oerr.pl
inflating: xag/xagsetup.sh

inflating: xag/mesg/xagus.be
inflating: xag/mesg/xagus.msg
inflating: xag/mesg/xagus.msb
inflating: xag/agmysqlmonas.pm
inflating: xag/readme.txt
inflating: xag/agwl.pm

2.6 Two directories will be created - xag and xagent

[grid@exdb1db01 xag]$ pwd
/home/grid/xag
[grid@exdb1db01 xag]$ cd ..
[grid@exdb1db01 ~]$ ls
xag xagent xagpack_7b.zip

2.7 Run the xagsetup.sh script (as the Grid Infrastructure owner)

Note – this will install the Grid Infrastructure Agent files in the xagent directory (on both compute nodes)

[grid@exdb1db01 xag]$ ./xagsetup.sh --install --directory /u01/app/grid/xagent --all_nodes
Installing Oracle Grid Infrastructure Agents on: exdb1db01
Installing Oracle Grid Infrastructure Agents on: exdb1db02

If we try to install the Grid Infrastructure Agents under the $GRID_HOME, we will see an error as shown below:

[grid@exdb1db01 xag]$ ./xagsetup.sh --install --directory /u01/app/12.1.0/grid_1/xagent --all_nodes
Installation directory cannot be under Clusterware home.

2.8 As oracle we run the AGCTL command to create the GoldenGate resource

[root@exdb1db01 bin]# su - oracle
[oracle@exdb1db01 ~]$ cd /u01/app/grid/xagent/bin

[oracle@exdb1db01 bin]$ ./agctl add goldengate ogg_dev2 --gg_home /u01/app/oracle/product/gg12.2/dev2 --instance_type source --nodes exdb1db01,exdb1db02 --vip_name ogg_vip_dev2 --filesystems dbfs_mount_dev2pd --databases ora.dev2pd.db --oracle_home /u01/app/oracle/product/11.2.0/shieldnp_1

2.9 Start and Stop Goldengate using AGCTL

[oracle@exdb1db01 bin]$ ./agctl status goldengate ogg_dev2
Goldengate instance 'ogg_dev2' is not running

[oracle@exdb1db01 bin]$ ./agctl start goldengate ogg_dev2

[oracle@exdb1db01 bin]$ ./agctl status goldengate ogg_dev2
Goldengate instance 'ogg_dev2' is running on exdb1db01

If we check via GGSCI, we can see the manager process is now up and running on compute node exdb1db01

[oracle@exdb1db01 bin]$ cd -
/u01/app/oracle/product/gg12.2/dev2
[oracle@exdb1db01 dev2]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (exdb1db01.gavin.com.au) 1> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING

Note that manager is stopped on compute node exdb1db02

GGSCI (exdb1db02.gavin.com.au) 3> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER STOPPED

2.10 Relocate GoldenGate using AGCTL

[oracle@exdb1db01 bin]$ ./agctl relocate goldengate ogg_dev2

[oracle@exdb1db01 bin]$ ./agctl status goldengate ogg_dev2
Goldengate instance 'ogg_dev2' is running on exdb1db02

Now manager is running on exdb1db02

GGSCI (exdb1db02.gavin.com.au) 3> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING

Oracle Goldengate on DBFS for RAC and Exadata

Let us take a look at the process of configuring GoldenGate 12c to work in an Oracle 12c Grid Infrastructure RAC or Exadata environment using DBFS on Linux x86-64.

This is now supported on ACFS as well, which may outperform the DBFS solution.

Simply put, the Oracle Database File System (DBFS) is a standard file system interface on top of files and directories that are stored in database tables as LOBs.

Until recently Exadata did not support ACFS, but ACFS is now supported with version 12.1.0.2 of the RAC Grid Infrastructure.

In summary the steps involved are:

1) Install and configure FUSE (Filesystem in Userspace)
2) Create the DBFS user and DBFS tablespaces
3) Mount the DBFS filesystem
4) Create symbolic links for the GoldenGate software directories dirchk, dirpcs, dirdat, BR to point to directories on DBFS
5) Create the Application VIP
6) Download the mount-dbfs.sh script from MOS and edit according to our environment
7) Create the DBFS Cluster Resource
8) Download and install the Oracle Grid Infrastructure Bundled Agent
9) Register GoldenGate with the bundled agents using the agctl utility

Important Fixes required to implement Oracle Automatic Storage Management File System (ACFS) on Oracle Exadata Database Machine (Doc ID 2022172.1)

Per the below noted doc ID ACFS on Exadata is now supported for GoldenGate files

Oracle ACFS Support on Oracle Exadata Database Machine (Linux only) (Doc ID 1929629.1)

Oracle ACFS use cases on Exadata Database Machine

With the latest software release for the Oracle Database Appliance (ODA), Oracle has decided to make ACFS (ASM Cluster FileSystem) the default storage for all new databases. This is a major change for Oracle since ACFS previously had not allowed Oracle data files to be stored on it. ACFS was introduced in Oracle 11.2 and has been proven to be stable and feature rich. However, not until recently has Oracle compared the performance of ACFS to Native ASM.

ACFS is a full-featured cluster filesystem that was introduced with Oracle database release 11gR2 but has been enhanced with each release since then. It now supports all Oracle data file types and is the recommended choice for storage starting with ODA software release 12.x. ASM and ACFS have been optimized for the highest level of performance with Oracle databases as well as general storage requirements such as Virtual Machine (VM) files.

With the 12.x release of the ODA software all databases created with oakcli will create an ACFS filesystem and repository (if it doesn’t already exist) for database storage. With the latest 12.x release of Exadata with OVM (Oracle VM) this will also be the default storage option. Unfortunately if you decide to use ACFS for Exadata database storage you will currently be unable to take advantage of the Oracle Smart Scan option.

In addition to being a general-purpose clustered filesystem, ACFS offers a number of features beyond plain storage.

The requirement here is a Linux mount point with sufficient space for the GoldenGate binaries and trail files.

While you can use the Database File System (DBFS) option, its use creates additional database objects and unnecessary additional database I/O, as well as additional redo and RMAN activity.

Another option is to use the Oracle ASM Cluster File System (ACFS) for this use case.

It is much faster to set up and is available on all nodes by default, which allows GoldenGate to fail over to other nodes.

In addition, ACFS does not require the database to be up, so the filesystem can also be used for other purposes.

If you are using this mount solely for GoldenGate, make sure you follow the best-practices document, which is updated periodically (Oracle GoldenGate Best Practice: NFS Mount options for use with GoldenGate (Doc ID 1232303.1)).

***  Refer to the following steps at your own risk and always test for your use case prior to using in a production setting.

Configuration:

Verify that ACFS/ADVM modules are present in memory (on each node):

$ lsmod | grep oracle

If the modules are not present, the command will return something similar to:
oracleasm              53591  1

If the modules are present, the command will return something similar to:
oracleacfs 3308260 0
oracleadvm 508030 0
oracleoks 506741 2 oracleacfs,oracleadvm
oracleasm 53591 1

If the modules are not present or you would like to ensure that the latest version is loaded, run the following before proceeding (as the root user):

$ . oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# $GRID_HOME/bin/acfsroot install

Reboot the node if the modules were already present and you are reloading them.

Start the ACFS modules on each node:

On each node and as the root user:

# $GRID_HOME/bin/acfsload start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed

Once installation is complete, and the mount is registered with clusterware, these modules will be loaded automatically.

If you like, you can double-check the driver state using the acfsdriverstate utility:
usage: acfsdriverstate [-orahome <ORACLE_HOME>] <installed | loaded | version | supported> [-s]

As the oracle user, create an ASM volume for ACFS (run on one node only):

Source the grid environment.

$ . oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle

Create the volume using the volcreate command.
You can use an existing disk group or create a separate one to house ACFS.

$ asmcmd
ASMCMD> volcreate -G DATA -s 1G ACFSVOL1
ASMCMD> volinfo --all
Diskgroup Name: DATA
Volume Name: ACFSVOL1
Volume Device: /dev/asm/acfsvol1-370
State: ENABLED
Size (MB): 1024
Resize Unit (MB): 64
Redundancy: UNPROT
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:

As the oracle user, create the file system on the volume which was just created:

$ /sbin/mkfs -t acfs /dev/asm/acfsvol1-370
mkfs.acfs: version = 12.1.0.2.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/acfsvol1-370
mkfs.acfs: volume size = 1073741824 ( 1.00 GB )
mkfs.acfs: Format complete.

As root, create an empty directory which will house the file system:

# mkdir -p /acfsmounts/acfsvol1
# chown root:oinstall /acfsmounts
# chmod 770 /acfsmounts
# chown -R oracle:oinstall /acfsmounts/acfsvol1
# chmod 775 /acfsmounts/acfsvol1

As root, set up the file system to be auto-mounted by clusterware:

In a RAC 11g environment, use acfsutil to register the mount (srvctl may also be supported but was not tested here; the -u option allows the oracle user to administer the mount):
# . /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# /sbin/acfsutil registry -a /dev/asm/acfsvol1-370 /acfsmounts/acfsvol1 -t "ACFS General Purpose Mount" -u oracle
In a RAC 12c GI environment, register it with clusterware using the following commands (the -user option allows the oracle user to administer the mount):
# /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# srvctl add volume -volume ACFSVOL1 -diskgroup DATA -device /dev/asm/acfsvol1-370
# srvctl add filesystem -device /dev/asm/acfsvol1-370 -path /acfsmounts/acfsvol1 -diskgroup DATA -user oracle -fstype ACFS -description "ACFS General Purpose Mount"
# srvctl modify filesystem -device /dev/asm/acfsvol1-370 -fsoptions "rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,noac,vers=3,timeo=600"

At this point the mount should be ready for read/write and will be automatically mounted by clusterware.

Administration of the ACFS mount:

If you need to resize the mount once created, use acfsutil (since the oracle user was granted control, the command can also be executed by the oracle user):

$ acfsutil size 25G /acfsmounts/acfsvol1
$ srvctl start filesystem -device /dev/asm/acfsvol1-370
$ srvctl stop filesystem -device /dev/asm/acfsvol1-370
