initialize ()
{
   initialize code
}

function1 ()
{
   function stuff
}

Functions are the key to writing just about ANY program that is longer than a page or so of text. Essentially, it's all a matter of breaking up a large program into smaller, manageable chunks. Ideally, functions are sort of like 'objects' for program flow. You pick a part of your program that is pretty much self-contained, and make it into its own 'function'.

Why are functions critical?

Properly written functions exist by themselves, and affect few things external to themselves. You should DOCUMENT what a function changes external to itself. Then you can look very carefully at just the function, and determine whether it actually does what you think it should do.

When your program isn't working properly (WHEN, not if), you can put little debug notes to yourself in the approximate section you think is broken. If you suspect a function is not working, then all you have to verify is:

Is the INPUT to the function correct?
Is the OUTPUT from the function correct?

Once you have done that, you know the entire function is correct for that particular set of input(s), and you can look for errors elsewhere.
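One low-tech way to check a function's input and output is to echo debug notes to stderr, so they do not mix with the function's real output on stdout. A minimal sketch (the sumtwo function here is made up for illustration):

```shell
#!/bin/sh

# Hypothetical function: adds two numbers.
# Debug notes go to stderr (>&2), so backtick callers
# capturing stdout still get a clean result.
sumtwo() {
    echo "DEBUG sumtwo: input is '$1' '$2'" >&2
    result=$(($1 + $2))
    echo "DEBUG sumtwo: output is $result" >&2
    echo $result
}

total=`sumtwo 2 3`
echo "total is $total"
```

Redirect stderr to a file (2>/tmp/debug.log) when you want the debug trail without cluttering the screen.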
A trivial function

printmessage() {
echo "Hello, this is the printmessage function"
}

printmessage

The first part, from the first "printmessage()" all the way through the final '}', is the function definition. It only defines what the function does, when you decide to call it. It does not DO anything, until you actually say "I want to call this function now".

You call a function in ksh by pretending it is a regular command, as shown above: just use the function name as the first word of your line, or any other place a command can go. For example,

echo The message is: `printmessage`

Remember: a function acts just like its own separate shellscript. This means that if you access "$1" in a function, it is the first argument passed in to the function, not to the shellscript.
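A quick way to see this in action (the showargs function name is made up):

```shell
#!/bin/sh

# Hypothetical demo: inside a function, $1 is the
# function's first argument, not the script's.
showargs() {
    echo "inside function, \$1 is: $1"
}

showargs hello
showargs "$1"   # pass the script's own first argument along explicitly
```

Run the script with an argument, and the second call shows the only way the script's $1 reaches the function is by passing it explicitly.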
Debugging your functions

If you are really really having difficulties, it should be easy to copy the entire function into another file, and test it separately from the main program.

This same type of modularity can be achieved by making separate script files instead of functions. In some ways that is almost preferable, because it is then easier to test each part by itself. But functions run much faster than separate shellscripts.

A nice way to start a large project is to begin with multiple, separate shellscripts, then encapsulate them as functions in your main script once you are happy with how they work.
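A related pattern is to keep the finished functions in a common library file and pull it into your main script with the '.' (dot) command. A minimal sketch (the file name and greet function are made up):

```shell
#!/bin/sh

# Write a tiny, hypothetical function library to a temp file...
cat > /tmp/mylib.sh <<'EOF'
greet() {
    echo "hello from the library, $1"
}
EOF

# ...then source it into the CURRENT shell process,
# making its functions callable here.
. /tmp/mylib.sh
greet world
```

Because '.' runs the file in the current shell rather than a child process, the functions (and any variables) it defines stay available to the rest of your script.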

CRITICAL ISSUE: exit vs return

THE main difference when converting between shellscripts and functions is the use of "exit".

'exit' will exit the entire script, whether it is called inside a function or not.
'return' will just quit the function. Like 'exit', however, it can return the default "success" value of 0, or any number from 1-255 that you specify. You can then check the return value of a function, just as you can check the return value of an external program, with the $? variable.

# This is just a dummy script. It does not DO anything

fatal(){
echo FATAL ERROR
# This will quit the 'fatal' function, and the entire script that
# it is in!
exit
}

lessthanfour(){
if [[ "$1" = "" ]] ; then echo "hey, give me an argument" ; return 1; fi

# we should use 'else' here, but this is just a demonstration
if [[ $1 -lt 4 ]] ; then
echo Argument is less than 4
# We are DONE with this function. Don't do anything else in
# here. But the shellscript will continue at the caller
return
fi

echo Argument is equal to or GREATER than 4
echo We could do other stuff if we wanted to now
}

echo note that the above functions are not even called. They are just
echo defined

A bare "return" in a shellscript is an error. It can only be used inside a function.
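Once a function like the one above is defined, the caller can branch on its return status via $?, exactly as with an external command. A self-contained sketch, re-declaring a compact version of lessthanfour:

```shell
#!/bin/sh

# Returns 1 if no argument is given; otherwise returns 0
# after reporting whether the argument is below 4.
lessthanfour() {
    if [ "$1" = "" ] ; then
        echo "hey, give me an argument"
        return 1
    fi
    if [ "$1" -lt 4 ] ; then
        echo "Argument is less than 4"
        return 0
    fi
    echo "Argument is equal to or GREATER than 4"
}

lessthanfour
echo "status was $?"
lessthanfour 2
echo "status was $?"
```

Remember that $? holds the status of the most recent command, so capture it immediately after the function call if you need it later.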

CRITICAL ISSUE: "scope" for function variables!
Be warned: functions act almost just like external scripts... except that, by default, all variables are SHARED within the same ksh process! If you change a variable inside a function... that variable's value will still be changed after you have left the function!! Run this script to see what I mean.

#!/bin/sh
# Acts the same with /bin/sh, or /bin/ksh, or /bin/bash
subfunc(){
echo sub: var starts as $var
var=2
echo sub: var is now $var
}
var=1
echo var starts as $var, before calling function '"subfunc"'
subfunc # calls the function
echo var after function is now $var

To avoid this behaviour, and give what is known as "local scope" to a variable, you can use the typeset command to define the variable as local to the function.

#!/bin/ksh
# You must use a modern sh like /bin/ksh, or /bin/bash for this
subfunc(){
typeset var
echo sub: var starts as $var '(empty)'
var=2
echo sub: var is now $var
}
var=1
echo var starts as $var, before calling function '"subfunc"'
subfunc # calls the function
echo var after function is now $var

Another exception to this is if you call a function in the 'background', or as part of a pipe (like echo val | function). Either of these makes the function run in a separate ksh process, which cannot dynamically share variables back to the parent shell. The same thing happens if you use backticks to call the function: this treats the function like an external call and forks a new shell, so a variable set in the function will not be updated in the parent. Eg:

func() {
    newval=$(($1 + 1))
    echo $newval
    echo in func: newval ends as $newval
}

newval=1
echo newval in main is $newval
output=`func $newval`
func $newval
echo output is : $output
echo newval finishes in main as $newval
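The pipe case can be demonstrated the same way. When a function's output feeds a pipe, the function runs in a subshell, so its variable assignments are lost to the parent (the double function is made up, and the exact subshell rules can vary a little between sh, ksh, and bash):

```shell
#!/bin/sh

double() {
    newval=$(($1 * 2))
    echo "in func: newval is $newval"
}

newval=3
# The function runs in a subshell here, because its output feeds a pipe,
# so the assignment to newval inside it never reaches this shell.
double $newval | cat
echo "newval in main is still $newval"
```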

Write Comments!
Lastly, as mentioned in the good practices chapter, don't forget to comment your functions! While shellscripts are generally easier to read than most programming languages, you really can't beat actual human language for explaining what a function is doing!

Exadata Best Practices White Paper

Exadata Consolidation Best Practices

Exadata Health and Resource Usage Monitoring White Paper Nov 2014

Exadata Oracle Info

Carlos Sierra Exadata Scripts

How to disable cores

Exadata X6

Exadata Database Machine Eighth Rack reconfiguration required after restore/rescue (Doc ID 1538561.1)
How to properly disable cores on Exadata database nodes (Doc ID 1499114.1)

SQL Dev CI

SQL Developer Oracle Site

SQL Developer Data Modeler Oracle Site

APEX 5 EA

REST Services


Dataguard Broker Concepts

Dataguard Broker is best used in a configuration where there is a separate machine dedicated to monitoring the up/down status of the components.

For example, running broker on the primary would not help in the event the primary was down or unavailable, and the same holds true for the standby. Therefore it is best to understand the requirement for additional resources and plan for them. Many companies do not use broker precisely because of this critical configuration requirement.

A common misconception is that DGMGRL can be used to create new standby databases. Here is an excerpt from Oracle's website:

Although DGMGRL cannot automatically create a new standby database, you can use DGMGRL commands to configure and monitor an existing standby database, including those created using Enterprise Manager.

The Data Guard command-line interface (DGMGRL) enables you to control and monitor a Data Guard configuration from the DGMGRL prompt or within scripts. You can perform most of the activities required to manage and monitor the databases in the configuration using DGMGRL.

DG Broker Config

DG Broker Setup Article


ORAchk Health Checks For The Oracle Stack 1268927.1

ORAchk replaces the popular RACcheck tool, extending its coverage based on prioritization of the top issues reported by users, to proactively scan for known problems.

Oracle Exadata Best Practices 757552.1
Trace File Analyzer Collector (aka TFA) - Tool for Enhanced Diagnostic Gathering 1513912.1
OSWatcher 301137.1
Procwatcher 459694.1
ORATOP 1500864.1

Oratop DOC

SQLT 215187.1

Tutorial

RDA 314442.1 - DA Diagnostic Assistant (a GUI to the RDA); both are covered by the same doc id

Service Tools Bundle

DCLI    Doc

ED360

eAdam

doc for eAdam

Tanel Poder Scripts

SQL Developer

SQL Developer Data Modeler

Start with....

select :WORKSPACE_ID from dual;

in the SQL Workshop's "SQL Command Processor" to determine your workspace's workspace_id (also known internally as a security_group_id). Assuming that query came back with a value of 12345, you could then run a block like:


BEGIN
    wwv_flow_api.set_security_group_id(p_security_group_id=>12345);

    wwv_flow_fnd_user_api.create_fnd_user(
    p_user_name     => 'regular_user',
    p_email_address => 'myemail@mydomain.com',
    p_web_password  => 'regular1') ;

    wwv_flow_fnd_user_api.create_fnd_user(
    p_user_name       => 'developer_user',
    p_email_address   => 'myemail@mydo.com',
    p_web_password    => 'dev1',
    p_developer_privs => 'ADMIN') ;

end;
/

Create page items

Oracle provides several v$ views to expose ASM details: information about ASM disks, disk groups, and other internals.

There are seven main v$ views in Oracle Database for monitoring ASM structures. What each view contains depends on whether you query it in an ASM instance or in a database (DB) instance:

v$asm_diskgroup: In an ASM instance, describes a disk group (number, name, size-related info, state, and redundancy type). In a DB instance, contains one row for every open ASM disk group.
v$asm_client: In an ASM instance, identifies databases using disk groups managed by the ASM instance. In a DB instance, contains no rows.
v$asm_disk: In an ASM instance, contains one row for every disk discovered by the ASM instance, including disks that are not part of any disk group. In a DB instance, contains rows only for disks in the disk groups in use by that DB instance.
v$asm_file: In an ASM instance, contains one row for every ASM file in every disk group mounted by the ASM instance. In a DB instance, contains rows only for files that are currently open in the DB instance.
v$asm_template: In an ASM instance, contains one row for every template present in every disk group mounted by the ASM instance. In a DB instance, contains no rows.
v$asm_alias: In an ASM instance, contains one row for every alias present in every disk group mounted by the ASM instance. In a DB instance, contains no rows.
v$asm_operation: In an ASM instance, contains one row for every active ASM long-running operation executing in the ASM instance. In a DB instance, contains no rows.

Oracle v$ views for ASM and their x$ tables

The v$ views for ASM are built upon several ASM fixed tables, called x$ tables. The x$ tables are not really tables; they are C-language structures inside the SGA:

X$ Table          v$ View
X$KFGRP           V$ASM_DISKGROUP
X$KFGRP_STAT      V$ASM_DISKGROUP_STAT
X$KFDSK           V$ASM_DISK
X$KFKID           V$ASM_DISK
X$KFDSK_STAT      V$ASM_DISK_STAT
X$KFKID           V$ASM_DISK_STAT
X$KFFIL           V$ASM_FILE
X$KFALS           V$ASM_ALIAS
X$KFTMTA          V$ASM_TEMPLATE
X$KFNCL           V$ASM_CLIENT
X$KFGMG           V$ASM_OPERATION
X$KFENV           V$ASM_ATTRIBUTE
X$KFNSDSKIOST     V$ASM_DISK_IOSTAT

This script is a great way to find the transfer rate of log apply, which is useful in a number of ways.
With a Data Guard physical standby, each time you start a managed recovery process a series of 9 rows is entered into the view. Each row has a type, but while recovering the standby the type is always Media Recovery.

The average apply rate includes the time spent waiting for redo to arrive. The active time spent applying redo is only a small proportion of the total elapsed time since managed recovery started.

The active apply rate therefore gives a better indication of how fast you can actually apply redo on your standby. If you are generating redo faster than this rate, you may well be falling behind on your standby.

As indicated in the documentation, the V$RECOVERY_PROGRESS view is actually just a subset of the V$SESSION_LONGOPS view. While all the information is available there too, V$RECOVERY_PROGRESS summarizes the relevant data for media recovery progress on a standby quite nicely.

I made this in the form of a function to add to a common library:

#!/bin/ksh

dg_rate_apply()
{

sqlplus -s "/ as sysdba" <<EOF

set linesize 400
col Values for a65
col Recover_start for a21

select to_char(START_TIME,'dd.mm.yyyy hh24:mi:ss') "Recover_start",
to_char(item)||' = '||to_char(sofar)||' '||to_char(units)||' '|| to_char(TIMESTAMP,'dd.mm.yyyy hh24:mi') "Values"
 from v\$recovery_progress 
where start_time=(select max(start_time) 
                    from v\$recovery_progress);

EOF
}

Copyright 2024 IT Remote dot com