Information Center: Oracle Application Express (APEX) (Doc ID 1418083.2)
Oracle RESTful Data Services FAQ (Doc ID 2085904.1)
TFA (Trace File Analyzer) is a diagnostic tool that collects logs and diagnostic data from Oracle databases, and is useful for collecting from various Grid Infrastructure components as well.
TFA is managed as root, or through a sudo setup; access for other OS users is controlled with tfactl:
tfactl access lsusers
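A few other common tfactl operations, as a sketch; exact options vary by TFA version, so check "tfactl -help" on your system:
###################################################################
# list users allowed to run tfactl
tfactl access lsusers
# allow the oracle OS user to run tfactl
tfactl access add -user oracle
# collect diagnostics from all nodes for the last hour
tfactl diagcollect -since 1h
###################################################################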
ClusterWare Troubleshooting w/TFA
APEX application DBAdmin
APEX Server Info
Change APEX Port
###################################################################
#!/bin/ksh
sqlplus -s "/ as sysdba" <<EOF
-- disable the current XDB HTTP port, then set the new port
exec dbms_xdb.sethttpport(0);
commit;
exec dbms_xdb.sethttpport(8081);
commit;
EOF
###################################################################
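To verify the change took effect, the port can be queried back from XDB using the same ksh/sqlplus pattern:
###################################################################
#!/bin/ksh
sqlplus -s "/ as sysdba" <<EOF
select dbms_xdb.gethttpport from dual;
EOF
###################################################################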
Pull Registry Information
#################################################################
#!/bin/ksh
sqlplus -s "/ as sysdba" <<EOF
col comp_name format a35
col status format a12
select comp_name, status, version
from dba_registry
order by comp_name;
EOF
###################################################################
EPG Status
##################################################################
#!/bin/ksh
sqlplus -s "/ as sysdba" <<EOF
@/oracle/product/11.2.0/db_1/rdbms/admin/epgstat.sql
EOF
##################################################################
This is an APEX application that tracks databases and other related information.
Objects include:
Database -
CREATE TABLE "DATABASE"
( "DB_ID" VARCHAR2(10),
"DB_NM" VARCHAR2(20),
"DB_DSCR" VARCHAR2(1000),
"DB_CMT" VARCHAR2(4000),
"DB_QADT" DATE,
"DB_INST_NM" VARCHAR2(15),
"DB_TP" VARCHAR2(3),
"DB_VER" VARCHAR2(15),
"DB_OS" VARCHAR2(4),
"DB_OS_VER" VARCHAR2(3),
"DB_UPG_VER" VARCHAR2(10),
"DB_UPG_STAT" VARCHAR2(3),
"DB_DBA_CRT" VARCHAR2(4),
"DB_APL" VARCHAR2(4),
"DB_DBA_CUR" VARCHAR2(4),
"DB_CRT_DT" DATE,
"DB_DT_NXT_UPG" DATE,
"DB_DT_UPG" DATE,
"DB_CMP" VARCHAR2(3),
"DB_DBA_PRIM" VARCHAR2(4),
"DB_DBA_SECD" VARCHAR2(4),
"DB_CST_ID" NUMBER,
"DB_CPU" NUMBER,
"DB_DSK_USED" NUMBER,
"DB_DSK_ALLOC" NUMBER,
"DB_MEM_PGA" NUMBER,
"DB_MEM_SGA" NUMBER,
"DB_MEM_TGT" NUMBER,
"DB_DBID" VARCHAR2(25),
"DB_BKP_FRQ" VARCHAR2(2000),
"DB_SVC_NM" VARCHAR2(50),
"DB_CAT_TP" VARCHAR2(35),
"DB_DBA_LU" VARCHAR2(25),
"DB_DBA_NU" VARCHAR2(25),
"DB_MCH_ID" VARCHAR2(10),
"DB_DT_DECOMM" DATE,
"DB_STORM" VARCHAR2(4000),
"DB_UNQ_NM" VARCHAR2(20),
"DB_AIT" VARCHAR2(20),
"DB_STO_DATA" NUMBER,
"DB_STO_SYS" NUMBER,
"DB_STO_FRA" NUMBER,
"DB_UPG_CMTS" CLOB,
"DB_PRCS" NUMBER,
"DB_SESS" NUMBER,
"DB_QST" VARCHAR2(4000),
"DB_HC_DBID" NUMBER,
"DB_STO_DATA_USD" NUMBER,
"DB_STO_FRA_USD" NUMBER,
"DB_STO_SYS_USD" NUMBER,
"DB_AUD_TP" VARCHAR2(50),
"DB_BCK" VARCHAR2(1),
"DB_SQLT" VARCHAR2(1),
"DB_CRTAB" VARCHAR2(1),
"DB_DDL" VARCHAR2(1),
"DB_ENV" VARCHAR2(1),
"DB_RMAN_ARGS" VARCHAR2(1),
"DB_RON_FO_TST" VARCHAR2(1),
"DB_RISK_RTG" VARCHAR2(4000),
"DB_NA_DT" DATE,
"DB_VFY_ASM" VARCHAR2(1),
"DB_DBCA" VARCHAR2(1),
"DB_DBCA_SL" VARCHAR2(4000),
"DB_TST_NWS" VARCHAR2(1),
"DB_ESM" VARCHAR2(1),
"DB_TST_CNCT" VARCHAR2(1),
"DB_PORT" VARCHAR2(4000),
"DB_HST" VARCHAR2(4000),
"DB_SVC_NM02" VARCHAR2(4000),
"DB_LOG_CLN" VARCHAR2(1),
"DB_LSN_CLN" VARCHAR2(1),
"DB_CRON_TXT" VARCHAR2(4000),
-- stores the crontab entry text for this database
"DB_CLN_RUN" VARCHAR2(1),
"DB_MON_TST" VARCHAR2(1),
"DB_MRD" VARCHAR2(1),
"DB_SPM" VARCHAR2(1),
"DB_AWR" VARCHAR2(1),
"DB_INST_CG" VARCHAR2(3),
"DB_FOG" VARCHAR2(1),
"DB_SPT" VARCHAR2(1),
"DB_ORATAB" VARCHAR2(1),
"DB_90_CFG" VARCHAR2(1),
"DB_100_TBSPC" VARCHAR2(1),
"DB_QA_CMTS" CLOB,
"DB_STO_SAUX" NUMBER,
"DB_STO_SAUX_USD" NUMBER,
"DB_STO_USERS" NUMBER,
"DB_STO_USERS_USD" NUMBER,
"DB_STO_UNDO" NUMBER,
"DB_STO_UNDO_USD" NUMBER,
"DB_STO_TOOLS" NUMBER,
"DB_STO_TOOLS_USD" NUMBER,
"DB_STO_TEMP" NUMBER,
"DB_STO_TEMP_USD" NUMBER,
"DB_STO_ONREDO" NUMBER,
"DB_STO_ONREDO_USD" NUMBER,
CONSTRAINT "DATABASE_PK" PRIMARY KEY ("DB_ID") ENABLE
) ;
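As an example of how the storage columns are intended to be used, a report of allocated versus used disk per database might look like the following (a sketch; it assumes the table has been populated):
###################################################################
#!/bin/ksh
sqlplus -s "/ as sysdba" <<EOF
col db_nm format a20
select db_nm, db_dsk_alloc, db_dsk_used,
       round(db_dsk_used / nullif(db_dsk_alloc,0) * 100, 1) pct_used
  from "DATABASE"
 order by db_nm;
EOF
###################################################################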
Cluster -
Problem tickets are tracked in the incident table, whose fields include:
inc_no
Ticket Number
Severity
Time Opened
Time Resolved
Total hrs worked
After hours worked
SLA Missed
Hostname
Resolved Group
Root Cause - drop down?
Duplicate? - has this ticket occurred before
Detailed description
Brief Description
Work log
Comments
Detailed Resolution
Architecting GoldenGate
Binaries consume approximately 600 MB of disk.
Each process consumes about 50 MB of memory, which can add up quickly.
A base configuration (Manager, Extract, data pump, and Replicat processes) consumes roughly 200 MB.
Assume each Extract or Replicat process uses one CPU core.
GoldenGate requires a number of TCP/IP ports in order to operate, and your network firewall must allow traffic on them. One port is used solely for communication between the Manager process and other GoldenGate processes; this is normally port 7809, but it can be changed. A range of other ports is used for local GoldenGate communications. This can be the default range, starting at port 7840, or a predefined range of up to 256 ports.
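These settings correspond to the Manager parameter file; a minimal sketch using the default ports mentioned above (adjust to your environment):
###################################################################
-- mgr.prm
PORT 7809
DYNAMICPORTLIST 7840-7939
###################################################################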
Oracle recommends at least 256 GB of space per Extract process for the dirtmp subdirectory.
Sample configuration for handling collisions
Customizing GoldenGate Processing
Using "EXITS" to customize GoldenGate processing
Architecture Plans
Maintain Live Standby w/GoldenGate
This is the entry point into Oracle's APEX main site.
APEX Download - the Oracle site where downloads are obtained.
APEX 5.0 Documentation - the link for the documentation.
ORDS - the ORDS documentation site.
This is Oracle's main page for GoldenGate
This page also contains information regarding Oracle Cloud Marketplace
GoldenGate downloads are available here
GoldenGate Documentation is available here
12.3.1.1.1 for Big Data
12.2.0.2.2 for Linux
GoldenGate 12c (12.1.2) Documentation
Oracle GoldenGate Best Practices: Heartbeat Table for Monitoring Lag times (Doc ID 1299679.1)
The mining database, from which the primary Extract captures log change records via the logmining server, can be either local to or downstream from the source database.
These steps configure the primary Extract to capture transaction data in integrated mode from either location.
See Appendix B, "Configuring a Downstream Mining Database" and Appendix C, "Example Downstream Mining Configuration" for more information about capturing from a downstream mining database.
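A minimal sketch of setting up an integrated-mode primary Extract from GGSCI (the group name ext1 and the ggadmin credential are placeholders):
###################################################################
GGSCI> DBLOGIN USERID ggadmin, PASSWORD <password>
GGSCI> REGISTER EXTRACT ext1 DATABASE
GGSCI> ADD EXTRACT ext1, INTEGRATED TRANLOG, BEGIN NOW
###################################################################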
Basically, every download of GoldenGate is a 'full' install.
Extract the file into a directory, run the "ggsci" executable in that directory,
type "CREATE SUBDIRS" at the command prompt, and you're done.
If you want to 'patch' GoldenGate, you could extract the 'patch' over the old version of GoldenGate. In general this works very well, but realize if you are using the Management Pack for GoldenGate (which works with Enterprise Manager 12c Cloud Control), you'll want to save a copy of your CONFIG.PROPERTIES file (it's in the cfg directory) before you do this. Typically that's going to be the only file that would get 'overwritten' during the patch install that you'd actually be concerned about. Everything else you'll WANT to be overwritten during the 'patch' install.
Remember, it's not a bad idea to back everything up before you do this.
At least 25 to 55 MB of RAM is required for each GoldenGate Extract and Replicat process. Each GoldenGate instance can support up to 300 concurrent Extract and Replicat processes combined (increased to 5,000 in newer releases), but be sure to leave enough system resources available for the OS.
The best way to assess the total memory requirement is to use GGSCI to view the current report file and examine the PROCESS AVAIL VM FROM OS (min) value to determine whether you have sufficient swap memory for your platform.
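For example, to check a process's memory headroom from its report file (ext1 is a placeholder group name):
###################################################################
GGSCI> VIEW REPORT ext1
###################################################################
Then search the report output for the line beginning with PROCESS AVAIL VM FROM OS (min).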
GoldenGate will typically use only about 5% of a system's CPU resources. Modern operating systems share available resources very efficiently; it is still important to size your requirements carefully, balancing the maximum possible number of concurrent processes against the number of CPUs. GoldenGate will use one CPU core per Extract or Replicat process.
For more details, refer to the GoldenGate installation and administration guides.