Incremental Merge Backup

#############################################################################
# #
# FILENAME: rman.args #
# #
# Purpose: Supplies the RMAN scripts with the necessary parameters. #
# #
#############################################################################
# #
# Revision History: #
# #
# REV DATE BY DESCRIPTION #
# --- ------ -------------- ------------------------------------------- #
# 1.0 053105 INFRASTRUCTURE Initial Release #
# 1.2 081605 Fix bug in CHK_LSNR for standby databases #
# 1.3 041107 Add an option of using Flash Recovery Area #
# for 10g / RACPlex #
# #
#############################################################################
# #
# Edit any of the following variables for your database. #
# #
# Note TARGET_DATABASE and TARGET_INSTANCE are case-sensitive! #
# #
# Use FULL PATHs for all variables, do not substitute with environment #
# variables as they will not be parsed correctly. #
# #
# In the comment part of the line, Y|N means [Y]es or [N]o #
# #
#############################################################################
#CATALOG_DATABASE=ddsrmand # Name of RMAN Catalog database
#CATALOG_USERNAME=rcvcat104 # Name of RMAN account for Catalog database
#CATALOG_PASSWORD=RMANDBA # Password for RMAN account in Catalog db

TARGET_DATABASE=DBFSXD01
TARGET_INSTANCE=DBFSXD011 # Name of TARGET instance (Case-Sensitive)
WEBDB_PASSWORD=st4nd4rd # Password for orastd id in WEBDB database #

# The following indicates where temporary command files are created #
RMAN_CMD=/oracle/admin/DBFSXD01/local/rman/cmd # Path to temp command files

# The following indicates whether to run the scripts in debug mode or not #
DEBUGMODE=N # Y|N - Run scripts in verbose mode? #

# The following variable will be used to determine whether to run #
# the backup or NOT to run the backup. This is used rather than the oratab.#
# If set to N, no error will be raised by skipping the backup. #
#
DOBACKUP=Y # Y|N - Run Backup? [Y]es, [N]o #

# The following variable will be used for alerting your on-call group #
# NO SPACES in the On-Call group name for NEWS notification/lookup #
# Examples: "Midrange-Oracle-Concord", "MRDBA-Jacksonville", #
# "Charlotte-UDB-Oracle-DBA", etc. #
ONCALLGRP="Group-Use-Dashes" # OnCallGroup (no spaces or use dashes)
PAGE=N # Y|N Tivoli/Page notification? [Y]es, [N]o #
FOGLIGHT=N # Write to Foglight alert log? [Y]es, [N]o #

EMAIL=Y # Y|N Email notification? [Y]es, [N]o #
# The following variable will qualify which "type" of condition to send an #
# email for. The default, if not specified, is to email on Failures only. #
# That setting would be: EMAILTYPE=F. However, if you wanted to be emailed #
# on successful backups, then use EMAILTYPE=A and you will get email #
# regardless of whether the backup was successful or not. #
#
EMAILTYPE=F # Email on backup Failures (F) or All (A) #

# If EMAIL set to Y, then need to set EMAIL_DIST filename with list of #
# email addresses. #
# The following variable contains a list of email recipients #
# EMAIL_DIST file format: name1@mrcconsulting.com (one email per line) #
# name2@mrcconsulting.com (one email per line) #
#

EMAIL_DIST=/oracle/admin/DBFSXD01/local/rman/.email # Email recipients list

# Compression - due to space limitations, selecting this option will #
# keep two generations of backups in compressed format on disk. #
# If set to "Y", then the value of OLDDIRNAME must also be supplied. #
#
COMPRESS=N # Y|N - Compress backups? [Y]es, [N]o #
#OLDDIRNAME=/oracle/admin/BSG1/backup/save # Previous compressed directory

# The following variable will be used to determine how many archive logs #
# to keep based on a time string. Whole numbers indicate days. #
# Hours represented as fractions, for example: 2/24 = 2 hours, which means #
# archive logs would remain online from the current time - 2 hours #
TIMETOKEEP="0" # Time unit to keep archive logs
#TIMETOKEEP="2/24" # Example: time unit of 2 hours
#TIMETOKEEP="3" # Example: time unit of 3 days

# The following variable is used to keep a certain number of archive logs #
# on disk. The script will determine up to which archive log has been #
# applied to the standby, and will purge the archive logs up to this #
# sequence number MINUS the value of STANDBYKEEP, to assure that a handful #
# of archive logs are kept on disk. #
STANDBYKEEP=10 # Number of archive logs to keep

# The following variable indicates the type of backup media #
BACKUP_DEVICE=D # [D]isk, [T]ape. Indicates device type #

# Webdb notification. The variable WEBDB will indicate whether to send an #
# update to the DBA Support website, for posting onto the site. #
# Set webdb to 'Y' to send a record of success/failure to the DBA website. #
# Set webdb to 'N' to skip sending a record of success/fail. to the website.#
WEBDB=Y # Y|N - Update DBA Website with bkup status?#

# The following variable indicates the number of script logs to keep in dir #
KEEP=30 # If not set, it will default. #
USE_FRA=Y # Set USE_FRA to "Y" to send backups to FRA #
# Default is "N" #
COMPRESSED_BACKUPSET=N
#COMPRESSED_BACKUPSET=Y
#NB_ORA_CLASS=" " #optional
#NB_ORA_SCHED="Daily" #optional
SENDNEWS=N
#NEWSGROUP=
#NEWSPIN=
#

RMAN Scripts

RMAN Full backup script with compression
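The compressed full-backup script itself is not reproduced here; the following is a minimal sketch in the same ksh/heredoc style as the scripts later in this document. The connection method, log path, and channel count are illustrative assumptions.

#!/bin/ksh
# Minimal sketch: full database backup as a compressed backup set.
# OS authentication and a single disk channel are assumed.
rman <<EOF
connect target /
spool log to logs/full_backup_compressed.log
run {
  ALLOCATE CHANNEL dev1 DEVICE TYPE DISK;
  BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
}
spool log off
EOF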

Upgrading A Database Using Recovery Manager (RMAN) Duplicate Command In Oracle 12c

Introduction

Up to and including Oracle 11gR2, Oracle Recovery Manager (RMAN) duplicates a database to a different Oracle home and opens it with the RESETLOGS option. Starting with Oracle 12cR1, RMAN introduced the NOOPEN option, which duplicates the database to a new location and leaves it in an unopened (mounted) state. Using this feature we can duplicate a database to a more recent target database version, allowing us to upgrade an existing database to the newer release.

RMAN duplicate command using backup:

The duplicate process can make use of an existing source database backup. All backup pieces should be kept in a single location, and that location must be the same on both the source and destination servers.

Configuring the source server:
•Switch the source database to archive log mode if it is not already:

SQL> select log_mode from v$database;

In the mount state change the LOG mode of the database:

SQL> alter database archivelog;
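For clarity, the complete sequence on the source database is shown below (a sketch; the single ALTER DATABASE ARCHIVELOG command above assumes the database is already mounted but not open):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;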

Execute the pre-upgrade script at the source database server and make the necessary changes for the upgrade.
The pre-upgrade scripts (preupgrd.sql, utluppkg.sql) are located in the rdbms/admin directory of the new (12c) Oracle home.

• Copy the file(s) to source Oracle home and execute it as ‘SYS’ user. This will validate the source database and provide recommendations for performing the upgrade process. This is a mandatory step for the upgrade process.
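As an illustration (the Oracle home paths below are assumptions; substitute your own 12c and 10g homes):

$ cp /u01/app/oracle/product/12.1.0/rdbms/admin/preupgrd.sql \
     /u01/app/oracle/product/12.1.0/rdbms/admin/utluppkg.sql \
     /u01/app/oracle/product/10.2.0/rdbms/admin/
$ cd /u01/app/oracle/product/10.2.0/rdbms/admin
$ sqlplus / as sysdba
SQL> @preupgrd.sql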

• Take a full RMAN backup of the source database, including archive logs. The database can be in either the OPEN or MOUNT state. Use the following commands:

RMAN> connect target /
RMAN> connect catalog <catalog_schema>/<password>@<catalog database>
RMAN> backup database plus archivelog;

This will generate a backup set.

Create pfile from spfile:

SQL> create pfile from spfile;

Move your database backup and the backup of pfile to the target database server.
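For example, copying the backup pieces and the pfile with scp (the host name is illustrative; the directories follow the scenario later in this document):

$ scp -r /oradata/DB10g/DB10G/backupset target-host:/oradata/DB12c/backup/
$ scp $ORACLE_HOME/dbs/initDB10g.ora target-host:/tmp/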

Configuring the Target Database Server
• Create a password file for the database service

◦ orapwd file=<12c password file> password=<password for sys user>

• Make the necessary changes to the initialization file at the target database server.

Change the compatible parameter value if it is recommended by the pre-upgrade script.
• Start the database in nomount state:

SQL> startup nomount pfile=<modified pfile>

• Ensure that the backup pieces are kept in the same location as on the source database server. This is required because the catalog has recorded the backup location.

• Execute the duplicate database command with the “NOOPEN” option using backup location:

RMAN> connect auxiliary /
RMAN> duplicate database to 'DB12c' noopen backup location
      '<location of backup>';

•Start the database in upgrade mode with the RESETLOGS option:
SQL> alter database open resetlogs upgrade;
•Execute the catupgrd.sql script using catctl.pl:
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl -n <parallelism> -l <log directory> catupgrd.sql
•Check the database component status:
col comp_id format a10
col comp_name format a30
col version format a10
col status format a8
select substr(comp_id,1,15) comp_id, substr(comp_name,1,30) comp_name, substr(version,1,10) version, status
  from dba_registry;
•Make an entry in the listener file for the database service.
•Create a tnsnames.ora entry for the newly-created database service.
•Restart the listener (a sketch of these entries follows below).
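A minimal sketch of these entries, assuming the listener uses the default port 1521, the host is target-host, and the new database is DB12c (all names here are illustrative):

# listener.ora - static registration for the new database
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0)
      (SID_NAME = DB12c)
    )
  )

# tnsnames.ora
DB12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = DB12c))
  )

$ lsnrctl reload      # or: lsnrctl stop; lsnrctl start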

Scenario: Upgrading an Oracle 10g database to an Oracle 12c database using the RMAN Duplicate method. The versions of the source and target databases are:
1.Source database: DB10g (10.2.0.5.0)
2.Target database: DB12c (12.1.0.1.0)

The Steps:
1.Switch the source database from NOARCHIVELOG mode to ARCHIVELOG mode if it is not already in that mode:
SYS:DB10g> startup mount
ORACLE instance started.
Total System Global Area 876609536 bytes
Fixed Size 2100232 bytes
Variable Size 234882040 bytes
Database Buffers 633339904 bytes
Redo Buffers 6287360 bytes
Database mounted.
•Check the LOG Mode from v$database view
SYS:DB10g> select log_mode from v$database;
LOG_MODE
----------------
NOARCHIVELOG
SYS:DB10g> alter database archivelog;
Database altered.
•Check the LOG Mode after converting from v$database view
SYS:DB10g> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
SYS:DB10g> alter database open;
Database altered.
2.Execute the pre-upgrade script at the source database server, review the output and make necessary changes for the upgrade.
SYS:DB10g> @preupgrd.sql
Loading Pre-Upgrade Package...
Executing Pre-Upgrade Checks...
Pre-Upgrade Checks Complete.

Results of the checks are located at:
/u01/app/oracle/cfgtoollogs/DB10g/preupgrade/preupgrade.log

Pre-Upgrade Fixup Script (run in source database environment):
/u01/app/oracle/cfgtoollogs/DB10g/preupgrade/preupgrade_fixups.sql

Post-Upgrade Fixup Script (run shortly after upgrade):
/u01/app/oracle/cfgtoollogs/DB10g/preupgrade/postupgrade_fixups.sql
************************************************************
Fixup scripts must be reviewed prior to being executed.
************************************************************
************************************************************
====>> USER ACTION REQUIRED <<====
************************************************************

The following are Error level conditions that must be addressed prior to attempting your upgrade. Failure to do so will result in a failed upgrade:

•Check Tag 1: COMPATIBLE_PARAMETER
Check Summary: Verify that the compatible parameter value is valid
Fixup Summary: ""compatible" parameter must be increased manually prior to upgrade."
+++ Source Database Manual Action Required +++

You must resolve the above error prior to upgrade. The minimum COMPATIBLE parameter value for an Oracle 12c database is 11.1.0.0.0. This needs to be modified in the parameter file. Check /u01/app/oracle/cfgtoollogs/DB10g/preupgrade/preupgrade.log for recommended changes to the database for performing the upgrade.

3.Take a full backup of the 10g database including archive logs:
oracle@localhost:/u01/app/oracle/product/10.2.0/rdbms/admin$ rman target /
Recovery Manager: Release 10.2.0.5.0 - Production on Fri Jun 12 18:21:21 2015
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: DB10G (DBID=269577939)
RMAN> backup database plus archivelog tag 'Full database backup';

Starting backup at 12-JUN-15
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=143 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=19 stamp=882072342
input archive log thread=1 sequence=2 recid=20 stamp=882073330
input archive log thread=1 sequence=4 recid=21 stamp=882210136
channel ORA_DISK_1: starting piece 1 at 12-JUN-15
channel ORA_DISK_1: finished piece 1 at 12-JUN-15
piece handle=/oradata/DB10g/DB10G/backupset/2015_06_12/o1_mf_annnn_FULL_DATABASE_BACKUP_bqp8ttl8_.bkp tag=FULL DATABASE BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=11 recid=2 stamp=881380916
input archive log thread=1 sequence=12 recid=4 stamp=881380916
input archive log thread=1 sequence=13 recid=5 stamp=881380916
channel ORA_DISK_1: starting piece 1 at 12-JUN-15
channel ORA_DISK_1: finished piece 1 at 12-JUN-15
piece handle=/oradata/DB10g/DB10G/backupset/2015_06_12/o1_mf_annnn_FULL_DATABASE_BACKUP_bqp8v2mr_.bkp tag=FULL DATABASE BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=7 stamp=881383026
input archive log thread=1 sequence=2 recid=8 stamp=881383027
input archive log thread=1 sequence=3 recid=9 stamp=881383032
input archive log thread=1 sequence=4 recid=10 stamp=881383032
input archive log thread=1 sequence=5 recid=11 stamp=881383035
input archive log thread=1 sequence=6 recid=12 stamp=881383035
input archive log thread=1 sequence=7 recid=13 stamp=881383038
input archive log thread=1 sequence=8 recid=17 stamp=881383308
input archive log thread=1 sequence=9 recid=18 stamp=881383308
input archive log thread=1 sequence=10 recid=16 stamp=881383307
channel ORA_DISK_1: starting piece 1 at 12-JUN-15
channel ORA_DISK_1: finished piece 1 at 12-JUN-15
piece handle=/oradata/DB10g/DB10G/backupset/2015_06_12/o1_mf_annnn_FULL_DATABASE_BACKUP_bqp8v4pz_.bkp tag=FULL DATABASE BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=1 stamp=881380916
channel ORA_DISK_1: starting piece 1 at 12-JUN-15
channel ORA_DISK_1: finished piece 1 at 12-JUN-15
piece handle=/oradata/DB10g/DB10G/backupset/2015_06_12/o1_mf_annnn_FULL_DATABASE_BACKUP_bqp8v6s3_.bkp tag=FULL DATABASE BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=3 stamp=881380916
channel ORA_DISK_1: starting piece 1 at 12-JUN-15
channel ORA_DISK_1: finished piece 1 at 12-JUN-15
piece handle=/oradata/DB10g/DB10G/backupset/2015_06_12/o1_mf_annnn_FULL_DATABASE_BACKUP_bqp8v8vq_.bkp tag=FULL DATABASE BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 12-JUN-15

Starting backup at 12-JUN-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/oradata/DB10g/system01.dbf
input datafile fno=00003 name=/oradata/DB10g/sysaux01.dbf
input datafile fno=00002 name=/oradata/DB10g/undotbs01.dbf
input datafile fno=00004 name=/oradata/DB10g/users01.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUN-15
channel ORA_DISK_1: finished piece 1 at 12-JUN-15
piece handle=/oradata/DB10g/DB10G/backupset/2015_06_12/o1_mf_nnndf_TAG20150612T182233_bqp8vbp6_.bkp tag=TAG20150612T182233 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45

channel ORA_DISK_1: starting full datafile backupset

channel ORA_DISK_1: specifying datafile(s) in backupset

including current control file in backupset

including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 12-JUN-15
channel ORA_DISK_1: finished piece 1 at 12-JUN-15
piece handle=/oradata/DB10g/DB10G/backupset/2015_06_12/o1_mf_ncsnf_TAG20150612T182233_bqp8wrnd_.bkp tag=TAG20150612T182233 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
Finished backup at 12-JUN-15

Starting backup at 12-JUN-15
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=5 recid=22 stamp=882210203
channel ORA_DISK_1: starting piece 1 at 12-JUN-15
channel ORA_DISK_1: finished piece 1 at 12-JUN-15
piece handle=/oradata/DB10g/DB10G/backupset/2015_06_12/o1_mf_annnn_FULL_DATABASE_BACKUP_bqp8wwrc_.bkp tag=FULL DATABASE BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 12-JUN-15
4.Create a pfile from the spfile:
SYS:DB10g> create pfile from spfile;
File created.
5.Move the Oracle 10g database backup and pfile to the target database server.

Create a password file for the database service. The database name can differ from that of the source database.
oracle@localhost:~$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0
oracle@localhost:~$ export PATH=$ORACLE_HOME/bin:$PATH
oracle@localhost:/u01/app/oracle/product/12.1.0/dbs$ orapwd file=orapwDB12c password=syspwd entries=5
oracle@localhost:/u01/app/oracle/product/12.1.0/dbs$ ls -lrt orapwDB12c
-rw-r----- 1 oracle oinstall 5120 Jun 12 18:28 orapwDB12c
6.Make the necessary changes to the initialization file at target database server. Change the compatible parameter value if it is recommended by the Pre-upgrade script. For Oracle 12c, the minimum required compatible parameter value is 11.1.0.0.0.
oracle@localhost:/u01/app/oracle/product/12.1.0/dbs$ cat initDB12c.ora

*.audit_file_dest='/u01/app/oracle/admin/DB12c/adump'
*.diagnostic_dest='/u01/app/oracle/diag'
*.compatible='12.1.0.1.0'
*.control_files='/oradata/DB12c/control01.ctl','/oradata/DB12c/control02.ctl','/oradata/DB12c/control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='DB12c'
*.db_recovery_file_dest_size=1073741824
*.db_recovery_file_dest='/oradata/DB12c'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DB12cXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=291504128
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=876609536
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.db_file_name_convert='/oradata/DB10g','/oradata/DB12c/data'
*.log_file_name_convert='/oradata/DB10g','/oradata/DB12c/log'
7.Start the database in NOMOUNT state using pfile.
oracle@localhost:/u01/app/oracle/product/12.1.0/dbs$ sqlplus "/ as sysdba"

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jun 12 18:38:25 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SYS:DB12c> startup nomount pfile=initDB12c.ora
ORACLE instance started.
Total System Global Area 872685568 bytes
Fixed Size 2293680 bytes
Variable Size 289407056 bytes
Database Buffers 574619648 bytes
Redo Buffers 6365184 bytes
SYS:DB12c>
8.Specify the backup piece location while executing the Duplicate command, since this location is not cataloged in the database. Execute the duplicate database command with the “NOOPEN” option.
oracle@localhost:/oradata/DB12c/backup$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Fri Jun 12 18:48:36 2015
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect auxiliary sys/syspwd
connected to auxiliary database: DB12C (not mounted)

RMAN> Duplicate database to 'DB12c' noopen
backup location '/oradata/DB12c/backup/';
Starting Duplicate Db at 12-JUN-15
contents of Memory Script:
{
.
.
.
cataloged datafile copy
datafile copy file name=/oradata/DB12c/data/undotbs01.dbf RECID=1 STAMP=882211842
cataloged datafile copy
datafile copy file name=/oradata/DB12c/data/sysaux01.dbf RECID=2 STAMP=882211842
cataloged datafile copy
datafile copy file name=/oradata/DB12c/data/users01.dbf RECID=3 STAMP=882211842
datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=882211842 file name=/oradata/DB12c/data/undotbs01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=882211842 file name=/oradata/DB12c/data/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=882211842 file name=/oradata/DB12c/data/users01.dbf
Leaving database unopened, as requested
Cannot remove created server parameter file
Finished Duplicate Db at 12-JUN-15
9.Start the database in upgrade mode with the RESETLOGS option:
SYS:DB12c> alter database open resetlogs upgrade;
Database altered.
10.Execute the catupgrd.sql script using catctl.pl:
oracle@localhost:/u01/app/oracle/product/12.1.0/rdbms/admin
$ /u01/app/oracle/product/12.1.0/perl/bin/perl catctl.pl -n 4 -l /u01/app/oracle/log catupgrd.sql
Analyzing file catupgrd.sql
Log files in /u01/app/oracle/log
Parallelism 4 is used
11.The database upgrade is executed in parallel and in phases using the Perl script "catctl.pl", a new piece of functionality introduced in 12c. If any particular phase fails, the upgrade can be re-executed from that phase using the command:
$ORACLE_HOME/perl/bin/perl catctl.pl -n 4 -l $ORACLE_HOME/diagnostics -p catupgrd.sql

Once the upgrade completes successfully, check the database component status through the dba_registry view.

Summary

This feature is possible only when the Duplicate command is performed from a backup; it is not possible with the active duplication option. It is an alternative to the RMAN backup, restore, and recovery upgrade method.


#!/bin/ksh
############################################################################
# Script Name..: 01 backup_copy_lvl0.ksh
# Description..:
# Author.......: M. Culp
# Date.........: 07/09/2013
# Version......:
# Modified By..:
# Date Modified:
# Comments.....:
# Schema owner.:
#    alter session set current
# Login User...:
# Run Order....:
# Dependent on.:
# Script type..:
#############################################################################

rman  << EOF 
connect target /
connect catalog <username>/<password>@<connect string> 
spool log to logs/1backup_copy_lvl0.log 
run { ALLOCATE CHANNEL dev1 DEVICE TYPE DISK; 
      ALLOCATE CHANNEL dev2 DEVICE TYPE DISK; 
      BACKUP AS COPY INCREMENTAL LEVEL 0 DATAFILE "+SHARED_DATA_DG01/cdahxf02/datafile/cdahx_data.848.806779823"  FORMAT "+TEMP_TEST_DG" TAG 'ORA_ASM_MIGRATE'; 
    } 
list copy of datafile 7; 
spool log off 
EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: 02 backup_copy_lvl1.ksh 
# Description..: Create level 1 incremental copy 
# Author.......: Culp 
# Date.........: 07/09/2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: 
# Schema owner.: 
#    alter session set current 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# ##########################################################################

rman  <<EOF 
connect target /
connect catalog rcvcat109/RMANDBA@ddsrmand 
spool log to logs/2backup_copy_lvl1.log 
run { ALLOCATE CHANNEL dev1 DEVICE TYPE DISK; 
      ALLOCATE CHANNEL dev2 DEVICE TYPE DISK; 
      BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'ORA_ASM_MIGRATE' DATAFILECOPY  FORMAT '+TEMP_TEST_DG' datafile 7; } 
list copy of datafile 7; 
spool log off 
EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: 3 recover_copy.ksh 
# Description..: Apply incremental backup to the level 0 data file copy 
# Author.......: Dugan 
# Date.........: 07/09/2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: 
# Schema owner.: 
#    alter session set current 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# #############################################################################

rman  << EOF 
connect target /
connect catalog rcvcat109/RMANDBA@ddsrmand 
spool log to logs/3recover_copy.log 
run { ALLOCATE CHANNEL dev1 DEVICE TYPE DISK; 
      ALLOCATE CHANNEL dev2 DEVICE TYPE DISK; 
      RECOVER COPY OF DATAFILE 7 WITH TAG 'ORA_ASM_MIGRATE'; 
    } 
list copy of datafile 7; 
spool log off 
EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: 4restricted_sessions.ksh 
# Description..: Enable Restricted Sessions 
# Author.......: Culp 
# Date.........: 06/20/2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: This script must be run on each instance 
# Schema owner.: 
#    alter session set current 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# ############################################################################

sqlplus -s "/ as sysdba" <<EOF

spool logs/4restricted_sessions.log
set lines 120 
set pages 999

col name for a55 col value for a15

SELECT instance_name 
      ,(SELECT to_char(sysdate , 'HH24:MI:SS') "Date" 
          FROM dual) "Date" 
  FROM v\$instance;

SELECT username
     , status
     , count(*) 
  FROM gv\$session 
 GROUP BY username, status;

-- you may need to terminate sessions first
ALTER SYSTEM ENABLE RESTRICTED SESSION;

SELECT username, status, count(*) FROM gv\$session GROUP BY username, status;

alter system set job_queue_processes=0 scope=both sid='*'; 
alter system set aq_tm_processes=0 scope=both sid='*';

spool off

EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: 5datafile_offline.ksh 
# Description..: Takes specified datafile offline 
# Author.......: Culp
# Date.........: 07/10/2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: 
# Schema owner.: 
#    alter session set current 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# ############################################################################

sqlplus -s "/ as sysdba" <<EOF

spool logs/5datafile_offline.log 
set lines 120 
set pages 999

col name for a65 col value for a15

SELECT instance_name 
      ,(SELECT to_char(sysdate , 'HH24:MI:SS') "Date" 
          FROM dual) "Date" 
 FROM v\$instance;

SELECT file#
      ,name
      ,status 
 FROM v\$datafile 
ORDER BY 1; 

prompt Altering datafile offline

ALTER database datafile 7 offline;

SELECT file#
     , name
     , status 
  FROM v\$datafile 
 ORDER BY 1;

spool off

EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: 6switch_datafile_to_copy.ksh 
# Description..: Switch the datafile to the latest image copy and recover the datafile 
# Author.......: Culp
# Date.........: 07/09/2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: 
# Schema owner.: 
#    alter session set current 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# #########################################################################

rman  <<EOF 
connect target /
connect catalog rcvcat109/RMANDBA@ddsrmand 
spool log to logs/6switch_datafile_to_copy.log 
switch datafile 7 to copy; 
spool log off 
EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: 7recover_datafile.ksh 
# Description..: Recover the datafile 
# Author.......: Dugan 
# Date.........: 07/09/2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: 
# Schema owner.: 
#    alter session set current 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# #############################################################################

rman  <<EOF
connect target /
connect catalog rcvcat109/RMANDBA@ddsrmand 
spool log to logs/7recover_datafile.log 
run { ALLOCATE CHANNEL dev1 DEVICE TYPE DISK; 
      ALLOCATE CHANNEL dev2 DEVICE TYPE DISK; 
      RECOVER DATAFILE 7 ;
    } 
list copy of datafile 7; 
spool log off 
EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: 8datafile_online.ksh 
# Description..: Bring datafile online 
# Author.......: Sara Dugan 
# Date.........: 07/10/2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: 
# Schema owner.: 
#    alter session set current 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# ############################################################################

sqlplus -s "/ as sysdba" <<EOF

spool logs/8datafile_online.log 
set lines 120 
set pages 999

col name for a65 col value for a15

SELECT instance_name 
      ,(SELECT to_char(sysdate , 'HH24:MI:SS') "Date" 
          FROM dual) "Date" 
 FROM v\$instance;

SELECT file#
     , name
     , status 
  FROM v\$datafile 
 ORDER BY 1; 

prompt Altering datafile online

ALTER database datafile 7 online;

SELECT file#
     ,name
     ,status 
 FROM v\$datafile 
ORDER BY 1;

spool off

EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: 9restricted_sessions_off.ksh 
# Description..: Disable Restricted Sessions 
# Author.......: Culp
# Date.........: 06/20/2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: This script must be run on each instance 
# Schema owner.: 
#    alter session set current 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# ############################################################################

sqlplus -s "/ as sysdba" <<EOF

spool logs/9restricted_sessions_off.log 
set lines 120 
set pages 999

col name for a55 col value for a15

SELECT instance_name ,(SELECT to_char(sysdate , 'HH24:MI:SS') "Date" 
                         FROM dual) "Date" 
  FROM v\$instance;

ALTER SYSTEM DISABLE RESTRICTED SESSION;

alter system set job_queue_processes=1000 scope=both sid='*'; 
alter system set aq_tm_processes =1 scope=both sid='*'; 
spool off

EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: xxx.ksh 
# Description..: 
# Author.......: Michael Culp 
# Date.........: 04/ /2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: 
# Schema owner.: 
#              : alter session set current should be used to change schemas 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# ############################################################################

sqlplus -s "/ as sysdba" <<EOF 
set lines 150 
set pages 150 
-- spool <some file name>

set lines 122 pages 9999
col name for a63

select file#
,      name
,      status
,      checkpoint_change# chkpt#
,      to_char(checkpoint_time,'DD-MON-YYYY HH24:MI:SS') chkpt_date   
 from v\$datafile  
where lower(name) like '%cdahx_data%';

-- spool off
EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: yyy.ksh 
# Description..: 
# Author.......: Michael Culp 
# Date.........: 04/ /2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: 
# Schema owner.: 
#              : alter session set current should be used to change schemas 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# ############################################################################

sqlplus -s "/ as sysdba" <<EOF 
set lines 150 
set pages 150 
-- spool <some file name>

SELECT SID
      , SERIAL#
      , CONTEXT
      , SOFAR
      , TOTALWORK
      ,ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"   
 FROM V\$SESSION_LONGOPS  
WHERE OPNAME LIKE 'RMAN%' 
  AND OPNAME NOT LIKE '%aggregate%' 
  AND TOTALWORK != 0 
  AND SOFAR  != TOTALWORK;

-- spool off
EOF

############################################################################
#!/bin/ksh 
############################################################################ 
# Script Name..: zzz.ksh 
# Description..: 
# Author.......: Michael Culp 
# Date.........: 04/ /2013 
# Version......: 
# Modified By..: 
# Date Modified: 
# Comments.....: 
# Schema owner.: 
#              : alter session set current should be used to change schemas 
# Login User...: 
# Run Order....: 
# Dependent on.: 
# Script type..: 
# ############################################################################

sqlplus -s "/ as sysdba" <<EOF 
set lines 150 
set pages 150 
-- spool <some file name>

set lines 122 pages 9999 
col name for a63

select name
,      state
,      total_mb   
  from v\$asm_diskgroup  
 where lower(name) like '%shared_data_dg%' 
    or lower(name) like '%temp%' 
    or lower(name) like '%shared_data_hp%'  
 order by name;

-- spool off
EOF

Making Compressed Backups

For any use of the BACKUP command that creates backup sets, you can take advantage of RMAN support for binary compression of backup sets. Specify the AS COMPRESSED BACKUPSET option to the BACKUP command.

RMAN compresses the backup set contents before writing them to disk. The details of which binary compression level is used are automatically recorded in the backup set. There is no need to explicitly mention the type of compression used or how to decompress the backup set in the recovery operation.
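For example (an illustrative RMAN session, not taken from any environment shown in this document):

RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;

No special option is needed on RESTORE; RMAN detects the compression and decompresses the backup set automatically.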

Binary compression creates some performance overhead during backup and restore operations. Binary compression consumes CPU resources, so do not routinely schedule compressed backups when CPU usage is high. However, the performance penalty may be worth paying when disk space, tape capacity, or network bandwidth for the backup is limited.

Transportable Tablespaces from RMAN backup

http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmttbsb.htm#BRADV05141

 

Creating Transportable Tablespace Sets

This chapter explains how to use RMAN to create transportable tablespace sets by restoring backups. This discussion assumes that you are familiar with the transportable tablespace procedure described in Oracle Database Administrator's Guide. The procedure in this chapter is an alternative technique for generating transportable tablespace sets. This chapter contains the following sections:

Overview of Creating Transportable Tablespace Sets

Customizing Initialization Parameters for the Auxiliary Instance

Creating a Transportable Tablespace Set

Troubleshooting the Creation of Transportable Tablespace Sets

Transportable Tablespace Set Scenarios

Overview of Creating Transportable Tablespace Sets

This section explains the basic concepts and tasks involved in creating transportable tablespace sets from RMAN backups.

Purpose of Creating Transportable Tablespace Sets

A transportable tablespace set contains data files for a set of tablespaces and an export file containing structural metadata for the set of tablespaces. The export file is generated by Data Pump Export.

One use of transportable tablespace sets is to create a tablespace repository. For example, if you have a database with some tablespaces used for quarterly reporting, you can create transportable sets for these tablespaces for storage in a tablespace repository. Subsequently, versions of the tablespace can be requested from the repository and attached to another database for use in generating reports.

Another use for transportable tablespaces is in an Oracle Streams environment. When preparing to use Oracle Streams to keep a destination database synchronized with a source database, you must perform Oracle Streams instantiation. You must bring the destination database up to a given SCN at which the two databases were known to be synchronized before you can use Oracle Streams to move subsequent updates from the source database to the destination database. You can create transportable tablespace sets from backups as part of the Oracle Streams instantiation.

A key benefit of the RMAN TRANSPORT TABLESPACE command is that it does not need access to the live data files from the tablespaces to be transported. In contrast, the transportable tablespace technique described in Oracle Database Administrator's Guide requires that the tablespaces to be transported are open read-only during the transport. Thus, transporting from backups improves database availability, especially for large tablespaces, because the tablespaces to be transported can remain open for writes during the operation. Also, placing a tablespace in read-only mode can take a long time, depending on current database activity.

The RMAN TRANSPORT TABLESPACE command also enables you to specify a target point in time, SCN, or restore point during your recovery window and transport tablespace data as it existed at that time (see "Creating a Transportable Tablespace Set at a Specified Time or SCN"). For example, if your backup retention policy guarantees a 1 week recovery window, and if you want to create transportable tablespaces based on the contents of the database on the last day of the month, then RMAN can perform this task at any time during the first week of the next month.

See Also:

Oracle Database Backup and Recovery Reference for reference information about the TRANSPORT TABLESPACE command

Oracle Streams Replication Administrator's Guide for more details on RMAN and tablespace repositories

Oracle Streams Replication Administrator's Guide for more details on RMAN and Oracle Streams instantiations

Basic Concepts of Transportable Tablespace Sets

You create a transportable tablespace set by connecting RMAN to a source database as TARGET and then executing the TRANSPORT TABLESPACE command. The source database contains the tablespaces to be transported.

You must have a backup of all needed tablespaces and archived redo log files available for use by RMAN that can be recovered to the target point in time for the TRANSPORT TABLESPACE operation. Figure 26-1 illustrates the basic process of transportable tablespace creation.

Figure 26-1 RMAN Transportable Tablespace from Backup: Architecture

Description of "Figure 26-1 RMAN Transportable Tablespace from Backup: Architecture"

The process shown in Figure 26-1 occurs in the following phases:

RMAN starts an auxiliary instance.

An auxiliary instance is created by RMAN on the same host as the source database to perform the restore and recovery of the tablespaces. RMAN automatically creates an initialization parameter file for the auxiliary instance and starts it NOMOUNT.

RMAN restores a backup of the source database control file to serve as the auxiliary instance control file and mounts this control file.

RMAN restores auxiliary set and transportable set data files from the backups of the source database.

The auxiliary set includes data files and other files required for the tablespace transport but which are not themselves part of the transportable tablespace set. The auxiliary set typically includes the SYSTEM and SYSAUX tablespaces, temp files, and data files containing rollback or undo segments. The auxiliary instance has other files associated with it, such as its own control file, parameter file, and online logs, but they are not part of the auxiliary set.

RMAN stores the auxiliary data files in the selected auxiliary destination. The auxiliary destination is a disk location where RMAN can store auxiliary set files such as the parameter file, data files (other than those in the transportable set), control files, and online logs of the auxiliary instance during the transport. If the transport succeeds, then RMAN deletes these files.

RMAN stores the transportable set files in the tablespace destination. The tablespace destination is a disk location that by default contains the data file copies and other output files when the tablespace transport command completes.

RMAN performs database point-in-time recovery (DBPITR) at the auxiliary instance.

The recovery updates auxiliary and transportable set data files to their contents as of the target time specified for the TRANSPORT TABLESPACE command. If no target time is specified, then RMAN recovers with all available redo. RMAN restores archived redo logs from backup as necessary at the auxiliary destination (or other location) and deletes them after they are applied.

RMAN opens the auxiliary database with the RESETLOGS options.

The data files now reflect the tablespace contents as of the target SCN for the tablespace transport operation.

RMAN places the transportable set tablespaces of the auxiliary instance into read-only mode. RMAN also invokes Data Pump Export in transportable tablespace mode to create the export dump file for the transportable set.

By default, the dump file is located in the tablespace destination. To specify the dump file location, see "Specifying Locations for Data Pump Files".

RMAN also generates the sample Data Pump import script for use when plugging in the transported tablespaces at a target database. The contents of this script are written to a file named impscript.sql in the tablespace destination. The commands for the script are also included in the RMAN command output.

If the preceding steps are successful, then RMAN shuts down the auxiliary instance and deletes all files created during the TRANSPORT TABLESPACE operation except for the transportable set files, the Data Pump Export file, and the sample import script.

Basic Steps of Creating Transportable Tablespace Sets

Before creating transportable tablespace sets you must meet several prerequisites. These prerequisites are described in the TRANSPORT TABLESPACE entry in Oracle Database Backup and Recovery Reference.

The basic steps of creating transportable tablespace sets are as follows:

Start the RMAN client and connect to the source database and, if used, the recovery catalog.

If necessary, set additional parameters in the auxiliary instance parameter file.

This task is described in "Customizing Initialization Parameters for the Auxiliary Instance".

Execute the TRANSPORT TABLESPACE command.

This basic technique is described in "Creating a Transportable Tablespace Set". Variations on this technique are described in "Transportable Tablespace Set Scenarios".

If the TRANSPORT TABLESPACE command fails, troubleshoot the problem and then retry the command until it succeeds.

This technique is described in "Troubleshooting the Creation of Transportable Tablespace Sets".

Return to the procedure for transporting tablespaces described in Oracle Database Administrator's Guide.

Customizing Initialization Parameters for the Auxiliary Instance

When RMAN creates the auxiliary instance, it creates an initialization parameter file. The default values should work for nearly all TRANSPORT TABLESPACE cases, especially if you specify the AUXILIARY DESTINATION option on the TRANSPORT TABLESPACE command.

RMAN can also use an auxiliary instance parameter file that contains values for additional initialization parameters. These values override the values of parameters defined in the default initialization parameter file. You might use an auxiliary instance parameter file for the following reasons:

To increase STREAMS_POOL_SIZE and SHARED_POOL_SIZE if needed for Data Pump Export.

To manage locations for auxiliary instance data files (see "Using Initialization Parameters to Name Auxiliary Files"). For example, you do not want all auxiliary instance data files stored in the same location on disk, but you do not want to specify the location of every file individually.

To specify names for online redo logs with LOG_FILE_NAME_CONVERT (see "Using Initialization Parameters to Name Auxiliary Files").

The auxiliary instance parameter file is not intended to be a complete initialization parameter file for the auxiliary instance. Any parameters specified are added to or override the default parameters for the auxiliary instance. It is not necessary to specify parameters in the initialization file that you do not intend to override.

Setting Initialization Parameters for the Auxiliary Instance

RMAN defines the basic initialization parameters in Table 26-1 for the automatic auxiliary instance.

Table 26-1 Default Initialization Parameters for the Auxiliary Instance

Initialization Parameter    Value
DB_NAME                     Same as DB_NAME of the source database.
COMPATIBLE                  Same as the COMPATIBLE setting of the source database.
DB_UNIQUE_NAME              Generated unique value based on DB_NAME.
DB_BLOCK_SIZE               Same as the DB_BLOCK_SIZE of the source database.
DB_FILES                    Same value as DB_FILES for the source database.
SGA_TARGET                  280M recommended value.
DB_CREATE_FILE_DEST         Auxiliary destination (only if the AUXILIARY DESTINATION argument to TRANSPORT TABLESPACE is set). RMAN creates Oracle managed control files and online logs in this location.

 

Overriding a basic initialization parameter in Table 26-1 with an inappropriate value in the auxiliary instance parameter file can cause TRANSPORT TABLESPACE to fail. If you encounter a problem, then try returning the initialization parameter to its default value.

See Also:

"Using Initialization Parameters to Name Auxiliary Files" to learn how to use DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT to name files

Setting the Location of the Auxiliary Instance Parameter File

By default, RMAN looks for the auxiliary initialization parameter file at an operating system-dependent location on the host running the RMAN client. This location may not be on the host running the auxiliary instance. For UNIX systems, this location is ?/rdbms/admin/params_auxint.ora, where the question mark (?) stands for ORACLE_HOME on the host running RMAN. If no file is found in the default location, then RMAN does not generate an error.

If you use the default initialization parameters for the auxiliary instance, then check whether an auxiliary instance parameter file exists before running TRANSPORT TABLESPACE.
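For example, on a UNIX host running the RMAN client, a quick check against the default location described above:

$ ls -l $ORACLE_HOME/rdbms/admin/params_auxint.ora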

To specify a different location for the auxiliary instance parameter file, you can use the RMAN SET AUXILIARY INSTANCE PARAMETER FILE command in a RUN block before the TRANSPORT TABLESPACE command. As with the default location of the auxiliary instance parameter file, the path specified when using the SET AUXILIARY INSTANCE PARAMETER FILE command is a client-side path.

Assume that you create a file named /tmp/auxinstparams.ora on the host running the RMAN client. This file contains the following initialization parameter:

SHARED_POOL_SIZE=150M;

You can then use the initialization parameter file with TRANSPORT TABLESPACE as shown in Example 26-1. The SHARED_POOL_SIZE parameter in /tmp/auxinstparams.ora overrides the default value used for SHARED_POOL_SIZE when RMAN creates the auxiliary instance.

Example 26-1 Specifying an Auxiliary Instance Parameter File

RUN
{
  SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/auxinstparams.ora';
  TRANSPORT TABLESPACE tbs_2
    TABLESPACE DESTINATION '/disk1/transportdest'
    AUXILIARY DESTINATION '/disk1/auxdest';
}

Creating a Transportable Tablespace Set

This section describes the use of TRANSPORT TABLESPACE in the most basic and automated case. See "Transportable Tablespace Set Scenarios" for variations on the basic case.

It is assumed that you have met the prerequisites described in the TRANSPORT TABLESPACE entry in Oracle Database Backup and Recovery Reference. It is also assumed that you have met the requirements described in Oracle Database Administrator's Guide:

Confirmed that tablespace transport is supported between your source and destination platforms

Identified a self-contained set of tablespaces to include in the transportable set

To create a transportable tablespace set:

Start the RMAN client and connect to the source database and, if used, the recovery catalog database.

Run the TRANSPORT TABLESPACE command in RMAN.

In the most basic case, you specify an AUXILIARY DESTINATION clause, which is optional but recommended. RMAN uses default values that work for most cases. If you do not specify an auxiliary location, then ensure that locations are specified for all auxiliary instance files. See the rules described in "Specifying Auxiliary File Locations" to learn how to name auxiliary files.

Example 26-2 creates a transportable tablespace set that includes tablespaces tbs_2 and tbs_3.

Example 26-2 Creating a Transportable Tablespace Set

TRANSPORT TABLESPACE tbs_2, tbs_3
  TABLESPACE DESTINATION '/disk1/transportdest'
  AUXILIARY DESTINATION '/disk1/auxdest';

After the command completes successfully, note the following results:

The transportable set data files are left in the location /disk1/transportdest with their original names. The transportable tablespace set data files are not automatically converted to the endian format of the destination database by TRANSPORT TABLESPACE. If necessary, use the RMAN CONVERT command to convert the data files to the endian format of the destination database after creating the transportable set (a sketch of the CONVERT step appears after this list).

The Data Pump Export dump file for the transportable set is named dmpfile.dmp, the export log is named explog.log, and the sample import script is named impscrpt.sql.

All files are created in /disk1/transportdest. If a file under the name of the export dump file already exists in the tablespace destination, then TRANSPORT TABLESPACE fails when it calls Data Pump Export. If you are repeating a previous TRANSPORT TABLESPACE operation, delete the previous output files, including the export dump file.

The auxiliary set files are removed from /disk1/auxdest.
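If conversion is required, a sketch of the CONVERT step run on the source platform is shown below; the data file name, destination platform string, and output format are illustrative assumptions:

RMAN> CONVERT DATAFILE '/disk1/transportdest/tbs_2_01.dbf'
      TO PLATFORM 'Linux IA (32-bit)'
      FORMAT '/disk1/transportdest/converted/%U';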

If necessary, edit the sample import script.

The sample import script assumes that the files used to import the tablespaces into the destination database are stored in the same locations where they were created by TRANSPORT TABLESPACE. If files have been moved to new disk locations before being plugged in, then you must update the sample script with the new locations of the files before using the script to plug in the transported tablespaces.

Return to the process for transporting tablespaces described in Oracle Database Administrator's Guide.

Troubleshooting the Creation of Transportable Tablespace Sets

When the RMAN TRANSPORT TABLESPACE command fails, the failed auxiliary instance files are left intact in the auxiliary instance destination for troubleshooting.

If your SET NEWNAME, CONFIGURE AUXNAME, and DB_FILE_NAME_CONVERT settings cause multiple files in the auxiliary or transportable tablespace sets to have the same name, then RMAN reports an error during the TRANSPORT TABLESPACE command. To correct the problem, use different values for these parameters to ensure that duplicate file names are not created. Naming techniques are described in "Specifying Auxiliary File Locations".

Transportable Tablespace Set Scenarios

This section contains the following topics:

Creating a Transportable Tablespace Set at a Specified Time or SCN

Specifying Locations for Data Pump Files

Specifying Auxiliary File Locations

Creating a Transportable Tablespace Set at a Specified Time or SCN

You can specify a target time or SCN with the TRANSPORT TABLESPACE command. During the tablespace transport operation, RMAN restores the tablespace at the auxiliary instance with backups from before the target time and performs point-in-time recovery on the auxiliary database to the specified target time. Backups and archived redo logs needed for this point-in-time recovery must be available.

You can specify the target time with an SCN (in the current incarnation or its ancestors) as shown in Example 26-3.

Example 26-3 Specifying an End SCN

TRANSPORT TABLESPACE tbs_2
  TABLESPACE DESTINATION '/disk1/transportdest'
  AUXILIARY DESTINATION '/disk1/auxdest'
  UNTIL SCN 11379;

You can also specify a restore point as shown in Example 26-4.

Example 26-4 Specifying an End Restore Point

TRANSPORT TABLESPACE tbs_2
  TABLESPACE DESTINATION '/disk1/transportdest'
  AUXILIARY DESTINATION '/disk1/auxdest'
  TO RESTORE POINT 'before_upgrade';

You can also specify an end time as shown in Example 26-5.

Example 26-5 Specifying an End Time

TRANSPORT TABLESPACE tbs_2
  TABLESPACE DESTINATION '/disk1/transportdest'
  AUXILIARY DESTINATION '/disk1/auxdest'
  UNTIL TIME 'SYSDATE-1';

Specifying Locations for Data Pump Files

You can change the names of the Data Pump Export dump file for the transportable set, the sample import script for use at the target database, the log file generated by Data Pump Export, and the directory to which they are written.

By default, these files are stored in the tablespace destination and named as follows:

The Data Pump Export dump file is named dmpfile.dmp.

The export log file is named explog.log.

The sample import script is named impscrpt.sql.

You can place the dump file and the export log in a different directory by using the DATAPUMP DIRECTORY clause of the TRANSPORT TABLESPACE command, passing in the name of a database directory object. The database directory object used by the DATAPUMP DIRECTORY clause is not the directory path of an actual file system directory. The value passed corresponds to the DIRECTORY command-line argument of Data Pump Export. See Oracle Database Utilities for more details on the use of directory objects with Data Pump Export.

You can rename these files with the DUMP FILE, EXPORT LOG, and IMPORT SCRIPT clauses of TRANSPORT TABLESPACE. The file names cannot contain full file paths with directory names. If the DUMP FILE or EXPORT LOG file names specify file paths, then TRANSPORT TABLESPACE fails when it attempts to generate the export dump files. Use the DATAPUMP DIRECTORY clause to specify a database directory object that identifies a location for the outputs of Data Pump Export.

The following scenario illustrates the use of TRANSPORT TABLESPACE with the DATAPUMP DIRECTORY, DUMP FILE, EXPORT LOG, and IMPORT SCRIPT file names specified. Assume that you create a database directory object as follows for use with Data Pump Export:

CREATE OR REPLACE DIRECTORY mypumpdir as '/datapumpdest';

Example 26-6 shows a TRANSPORT TABLESPACE command with optional arguments that specify output file locations.

Example 26-6 Specifying Output File Locations

TRANSPORT TABLESPACE tbs_2
  TABLESPACE DESTINATION '/transportdest'
  AUXILIARY DESTINATION '/auxdest'
  DATAPUMP DIRECTORY mypumpdir
  DUMP FILE 'mydumpfile.dmp'
  IMPORT SCRIPT 'myimportscript.sql'
  EXPORT LOG 'myexportlog.log';

 

After a successful run, RMAN cleans up the auxiliary destination, creates the Data Pump Export dump file and the export log in the directory referenced by DATAPUMP DIRECTORY (/datapumpdest/mydumpfile.dmp and /datapumpdest/myexportlog.log), and stores the transportable set data files in /transportdest.

Specifying Auxiliary File Locations

Several rules affect the location of auxiliary instance files created during the transport.

If RMAN determines that any of the auxiliary files, designated by any of the methods for specifying auxiliary file locations, already contain a data file copy that is suitable to be used for the desired point in time for this transport operation, then that data file copy is used instead of restoring the data file.

Any data file copies that are present, but not suitable for this transport operation, because they are more recent than the requested point in time, or are not recognized as part of the target database, are overwritten when the data files are restored.

The simplest technique is to use the AUXILIARY DESTINATION clause of the TRANSPORT TABLESPACE command and let RMAN manage all file locations automatically. To relocate some or all auxiliary instance files, the following options for specifying file locations appear in order of precedence:

SET NEWNAME FOR DATAFILE

SET NEWNAME FOR TABLESPACE

SET NEWNAME FOR DATABASE

See "Using SET NEWNAME for Auxiliary Data Files".

CONFIGURE AUXNAME

As described in "Using CONFIGURE AUXNAME for Auxiliary Data Files", you can use this command to specify names for data files.

AUXILIARY DESTINATION clause of the TRANSPORT TABLESPACE command

As described in "Using AUXILIARY DESTINATION to Specify a Location for Auxiliary Files", you can use this option to specify a location for auxiliary files.

LOG_FILE_NAME_CONVERT and DB_FILE_NAME_CONVERT in the initialization parameter file

As described in "Using Initialization Parameters to Name Auxiliary Files", you can use these initialization parameters to specify a location for auxiliary files.

If you use several of these options, then the first option in the list that applies to a file determines the file name.

Using SET NEWNAME for Auxiliary Data Files

You can use the following SET NEWNAME commands in a RUN block to specify file names for use in the TRANSPORT TABLESPACE command:

SET NEWNAME FOR DATAFILE

SET NEWNAME FOR DATABASE

SET NEWNAME FOR TABLESPACE

The SET NEWNAME FOR DATAFILE commands in Example 26-7 cause the auxiliary instance data files to be restored to the locations named instead of to /disk1/auxdest.

Example 26-7 Using SET NEWNAME FOR DATAFILE to Name Auxiliary Data Files

RUN
{
  SET NEWNAME FOR DATAFILE '/oracle/dbs/tbs_12.f'
    TO '/bigdrive/auxdest/tbs_12.f';
  SET NEWNAME FOR DATAFILE '/oracle/dbs/tbs_11.f'
    TO '/bigdrive/auxdest/tbs_11.f';
  TRANSPORT TABLESPACE tbs_2
    TABLESPACE DESTINATION '/disk1/transportdest'
    AUXILIARY DESTINATION '/disk1/auxdest';
}

SET NEWNAME is best used with one-time operations. If you expect to create transportable tablespaces from backup regularly for a particular set of tablespaces, then consider using CONFIGURE AUXNAME instead of SET NEWNAME to make persistent settings for the location of the auxiliary instance data files.

Using CONFIGURE AUXNAME for Auxiliary Data Files

You can use the CONFIGURE AUXNAME command to specify persistent locations for transportable tablespace set or auxiliary set data files. RMAN restores each data file for which a CONFIGURE AUXNAME command has been used to the specified location before recovery. RMAN deletes auxiliary set data files when the operation is complete, unless the operation failed.

An example illustrates the relationship between the CONFIGURE AUXNAME and TRANSPORT ... AUXILIARY DESTINATION commands. Suppose that you want to transport tablespace tbs_11. The tablespace tbs_12, which contains data file tbs_12.f, is part of the auxiliary set. You execute the following steps:

You use the CONFIGURE AUXNAME statement to set a persistent nondefault location for the auxiliary set data file /oracle/dbs/tbs_12.f.

For example, you enter the following command:

CONFIGURE AUXNAME FOR '/oracle/dbs/tbs_12.f'
  TO '/disk1/auxdest/tbs_12.f';

You execute the TRANSPORT TABLESPACE command with the AUXILIARY DESTINATION parameter.

For example, you enter the following command:

TRANSPORT TABLESPACE tbs_11
  AUXILIARY DESTINATION '/myauxdest';

In the preceding scenario, RMAN restores the auxiliary set copy of data file /oracle/dbs/tbs_12.f to /disk1/auxdest/tbs_12.f instead of the location specified by AUXILIARY DESTINATION. The CONFIGURE AUXNAME setting is higher in the order of precedence than AUXILIARY DESTINATION.

Note:

You can view any current CONFIGURE AUXNAME settings by executing the SHOW AUXNAME command, which is described in Oracle Database Backup and Recovery Reference.
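For example, after connecting RMAN to the target database:

RMAN> SHOW AUXNAME;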

Using AUXILIARY DESTINATION to Specify a Location for Auxiliary Files

If you use an AUXILIARY DESTINATION argument with TRANSPORT TABLESPACE, then any auxiliary set file that is not moved to another location using a SET NEWNAME or CONFIGURE AUXNAME command is stored in the auxiliary destination during the TRANSPORT TABLESPACE operation.

If you do not use AUXILIARY DESTINATION, then you must use LOG_FILE_NAME_CONVERT to specify the location of the online redo log files for the auxiliary instance. Neither SET NEWNAME nor CONFIGURE AUXNAME can affect the location of the auxiliary instance online redo logs. Thus, if you do not use AUXILIARY DESTINATION or LOG_FILE_NAME_CONVERT, then RMAN has no information about where to create the online redo logs.

Using Initialization Parameters to Name Auxiliary Files

You can use the LOG_FILE_NAME_CONVERT and DB_FILE_NAME_CONVERT initialization parameters in an auxiliary instance parameter file to determine the names for online redo logs and other database files at the auxiliary instance. If no AUXILIARY DESTINATION clause is specified on the TRANSPORT TABLESPACE command, then these parameters determine the location of any files for which no CONFIGURE AUXNAME or SET NEWNAME command was run.
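For instance, an auxiliary instance parameter file might contain entries such as the following (the paths are illustrative assumptions, written in the same style as the pfile examples earlier in this document):

db_file_name_convert='/oradata/prod','/disk1/auxdest'
log_file_name_convert='/oradata/prod/redo','/disk1/auxdest/redo'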

You cannot use LOG_FILE_NAME_CONVERT or DB_FILE_NAME_CONVERT to generate new Oracle Managed Files (OMF) file names for files at the auxiliary instance when the original files are OMF files. The database manages the generation of unique file names in each OMF destination. You must use an AUXILIARY DESTINATION clause to control the location of the online redo log files. You must use the AUXILIARY DESTINATION clause, SET NEWNAME or CONFIGURE AUXNAME commands, or DB_CREATE_FILE_DEST initialization parameter to specify the location for OMF data files.

See Also:

Oracle Database Reference for more details on the LOG_FILE_NAME_CONVERT and DB_FILE_NAME_CONVERT initialization

 
