MAA GoldenGate Performance

https://www.oracle.com/technetwork/database/availability/maa-gg-performance-1969630.pdf

 

Best Practices 2017

GoldenGate 23c

https://www.oracle.com/middleware/technologies/goldengate-downloads.html

HandleCollisions parameter

Use the HANDLECOLLISIONS and NOHANDLECOLLISIONS parameters to control whether or not Replicat tries to resolve duplicate-record and missing-record errors when applying SQL on the target. These errors, called collisions, occur during an initial load, when data from source tables is being loaded to target tables while Oracle GoldenGate is replicating transactional changes that are being made to those tables. When Oracle GoldenGate applies the replicated changes after the load is finished, HANDLECOLLISIONS provides Replicat with error-handling logic for these collisions.
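For the initial-load window this might look like the following Replicat parameter file sketch (the group name rload, credential alias ggadmin, and the hr schema mapping are hypothetical):

REPLICAT rload
USERIDALIAS ggadmin
-- Resolve duplicate- and missing-record collisions while replicated
-- changes overlap with the initial load
HANDLECOLLISIONS
MAP hr.*, TARGET hr.*;

Once Replicat has applied all the changes that overlapped the load, collision handling can be switched off without stopping the process:

GGSCI> SEND REPLICAT rload, NOHANDLECOLLISIONS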

Active-Active Replication (Nick Wagner)

Restart After Failure

GoldenGate Veridata

Coordinated Apply

Use DELETE EXTTRAIL to delete the record of checkpoints associated with a trail on a local system. Checkpoints are maintained in a file bearing the same name as the group in the dirchk sub-directory of the Oracle GoldenGate directory.

This command only deletes references to the specified trail from the checkpoint file. It does not delete the trail files themselves. To delete the trail files, use standard operating system commands for removing files.
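A minimal sketch (the trail name dirdat/aa is hypothetical):

GGSCI> DELETE EXTTRAIL ./dirdat/aa

The trail files themselves would then be removed at the operating-system level, for example with rm dirdat/aa* on Unix.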

This section explains a little bit about the LogMiner dictionaries. This is something that many folks do not understand when doing Integrated Capture, and it goes into some detail with examples.

Desc    Version    Patch Level
test    01         12.2.16

Integrated Capture LogMiner Dictionaries

Oracle Doc for Logminer 18c

When trying to set up a GoldenGate extract: when registering an integrated extract, behind the scenes it dumps the "LogMiner dictionary" into the log file, and we can query the v$archived_log.dictionary_begin and dictionary_end columns to find out when the dictionary was created. Say archive log sequence 100 has the dictionary; does this mean we have to keep this and all of the following archive logs for GG extract parsing? If not, does it mean the dictionary is not stored in the log file?
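The kind of query being referred to would look something like this sketch:

SELECT sequence#, first_change#, dictionary_begin, dictionary_end
  FROM v$archived_log
 WHERE dictionary_begin = 'YES'
    OR dictionary_end = 'YES'
 ORDER BY sequence#;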

If the dictionary is in archive seq 100 and I later create a new table in the DB, how does that become part of the dictionary? Do I have to re-generate the dictionary after every DDL? If my v$archived_log shows a dictionary which is one month old but all those archive logs are deleted, can I register a new extract using that SCN [shared model]? For example:

register extract test, database scn <SCN from last available dictionary> share automatic

Do I really need to run

DBMS_LOGMNR_D.BUILD

regularly to build the dictionary, even though I am not registering any new extracts?
=========================================================================================

The Integrated Extract process requires a data dictionary that is separate from the primary data dictionary for the source database. This separate data dictionary is called a LogMiner data dictionary. This data dictionary information is placed in the redo log. You can perform a build of data dictionary information in the redo log multiple times.

The redo log contains raw data. It does not contain database object names and column names in tables. Instead, it uses object numbers and internal column numbers for database objects and columns, respectively. Therefore, when a change is extracted, the Integrated Extract process must reference a data dictionary to determine the details of the change. If an Integrated Extract ever needs to start capturing changes from an earlier time than the current one, it is possible to do so as long as there is a LogMiner dictionary build from that time.

And if that is the case, then take into account that ALL archived redo logs since that time will need to be available on disk, visible in V$ARCHIVED_LOG as Available, and registered for that IE.
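A quick sanity check along these lines would confirm that (the <dictionary_build_SCN> placeholder stands for whichever dictionary-build SCN is registered):

SELECT sequence#, name, status
  FROM v$archived_log
 WHERE first_change# >= <dictionary_build_SCN>
   AND standby_dest = 'NO'
 ORDER BY sequence#;

Every row returned should have a non-NULL name and status 'A' (Available).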

The following document details this scenario a bit more:

How to configure a new Integrated Extract to start at a historical time. (Doc ID 2046831.1)

Best Regards,  Jose Perez

==========================================================================================

The main focus of my question is how it actually works. For example, if dictionary_begin is at log 100 and dictionary_end is at 101, what does that actually mean? Like you said, the LogMiner dictionary is in the log files, so does that mean there is no dictionary in log seq 102? And do we have to have seq 100 and 101 on disk all the time, even just to process log seq 102?

===============================================================================================================

In order for a new IE to capture transactions earlier than that time, you need to search for a dictionary build that was created earlier than the historical time and register the IE to begin capture at the SCN of that dictionary build.

To find the SCNs of all previous dictionary builds, issue:


SELECT first_change# 
  FROM v$archived_log 
 WHERE dictionary_begin = 'YES' 
   AND standby_dest = 'NO' 
   AND name IS NOT NULL 
   AND status = 'A';

Convert each SCN displayed to a timestamp via:

SELECT SCN_TO_TIMESTAMP(<SCN>) FROM DUAL;

Choose the timestamp that is earlier and closest to the time you want to start the new IE, and register the IE as follows:

dblogin userid <userid>, password <password>
register extract etint, database scn <SCN>
Then add the IE as usual with the historical time determined above:
add ext etint, integrated tranlog,  begin <yyyy-mm-dd hh:mm>

Please refer to the Reference manual for more details on ADD EXTRACT for configuring an Integrated Extract.

You can also change an existing IE to go to a timestamp earlier than when it was first configured, also known as the first SCN, which can be obtained from:

GGSCI> INFO EXTRACT <group>, DETAIL

Unregister this IE and register it again with the SCN option above.
Then alter the extract to the timestamp as required.
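Sketched with the etint group from the earlier example, using placeholders in the document's style:

GGSCI> UNREGISTER EXTRACT etint DATABASE
GGSCI> REGISTER EXTRACT etint, DATABASE SCN <SCN>
GGSCI> ALTER EXTRACT etint, BEGIN <yyyy-mm-dd hh:mm>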

Note: all archive log files from the logminer build to the starting point of the extract must be available.
If there are archive logs missing, the extract cannot start.

Example:

The current date is 21-Aug-2015, and we need to capture transactions using IE from 04-Aug-2015.

SELECT first_change# 
  FROM v$archived_log 
 WHERE dictionary_begin = 'YES' 
   AND standby_dest = 'NO' 
   AND name IS NOT NULL 
   AND status = 'A';

FIRST_CHANGE#
-------------
18975095
19465792
19510832

SELECT SCN_TO_TIMESTAMP(18975095) FROM DUAL;
SCN_TO_TIMESTAMP(18975095)
--------------------------------------------------
04-AUG-15 02.15.57.000000000 PM

register extract myinteg, database scn 18975095
Extract myinteg successfully registered with database at SCN 18975095.

add ext myinteg, integrated tranlog,  begin 2015-08-04 03:00
EXTRACT added.

add exttrail dirdat\ie, ext myinteg
EXTTRAIL added.

Other Notes:

A new dictionary is built when you create a new database, and generally, for normal database operations, you do not need to create any more new dictionaries.

 
The

ggsci> register extract <group>, database

command creates a new dictionary, and that dictionary can thus be shared if there are existing IEs.
If you regularly create a dictionary, you give new or existing IEs the ability to capture from a historical timestamp by running:

sql> execute DBMS_LOGMNR_D.BUILD( options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

For example, schedule this daily and you can go back in 24-hour increments.
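One way to schedule that daily is a DBMS_SCHEDULER job; a minimal sketch, where the job name daily_lm_dict_build is hypothetical:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'daily_lm_dict_build',          -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_LOGMNR_D.BUILD(options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE);
END;
/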

If you need to go back further than the available listed dictionary SCNs, you can restore archived logs, provided these logs contain dictionary builds, and catalog these restored logs with RMAN.
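The cataloging step would look roughly like this (the restore directory is hypothetical):

RMAN> CATALOG START WITH '/restore/arch/' NOPROMPT;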

Do the SELECT to retrieve the dictionary build SCNs again and you may see more SCNs.

 

The LogMiner stored procedures are used for starting and ending a LogMiner session, as well as for specifying various configurations such as what kind of dictionary to use. The stored procedures are DBMS_LOGMNR.ADD_LOGFILE, DBMS_LOGMNR.REMOVE_LOGFILE, DBMS_LOGMNR.START_LOGMNR, DBMS_LOGMNR.END_LOGMNR, DBMS_LOGMNR_D.BUILD, and DBMS_LOGMNR_D.SET_TABLESPACE.
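As a sketch of how they fit together, mining a range with the dictionary that was stored in the redo logs (the archived log file names are hypothetical):

BEGIN
  -- start a new log file list, then add the logs that hold the dictionary build
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/1_100_900000000.dbf', options => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/1_101_900000000.dbf', options => DBMS_LOGMNR.ADDFILE);
  -- use the dictionary captured in the redo logs rather than the online catalog
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
END;
/
-- query V$LOGMNR_CONTENTS here, then end the session:
EXECUTE DBMS_LOGMNR.END_LOGMNR;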
