Integrated Application Heartbeat for Oracle GoldenGate
Over the last two weeks I’ve been working on a heartbeat monitoring solution for a client. It is not the standard heartbeat solution that Oracle provides in note 1299679.1, yet it is very similar. The approach I configured follows a traditional heartbeat setup, but it uses the same extracts and replicats that the database/application is already using. For simplicity, I like to call this approach an Integrated Application Heartbeat.
In order to set up this style of heartbeat monitoring, the following items are needed:
1. Identify the heartbeat table in the source database. In this configuration, let’s call this table SETTINGS. Keep in mind that nothing will change with this table (no DDL changes).
2. Identify and generate DDL for the target heartbeat table. Let’s call this table GG_STATUS_HB.
3. Identify and generate DDL for the target heartbeat history table. Let’s call this table GG_STATUS_HB_HIST.
4. Two triggers for updating information in the target heartbeat tables (one example below).
5. A few macros, one for each of the processes in the configuration.
6. Either a crontab job or a DBMS_JOB/DBMS_SCHEDULER job to run the heartbeat update SQL.
Now that the components have been identified, let’s take a look at what needs to be done.
The source-side heartbeat table has already been identified. In the database, the table name is SETTINGS. The SETTINGS table has a single column as its primary key. This table is updated by a crontab job that runs the following SQL. The <schema> variable is used because this table (SETTINGS) exists in more than one schema.
update <schema>.settings set id=id where rownum < 2;
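If you would rather drive the heartbeat from inside the database than from cron, a DBMS_SCHEDULER job (the modern form of the DBMS_JOB option in the list above) is the alternative. A minimal sketch, assuming a one-minute interval; the job name and interval are placeholders of my own, not part of the client setup:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'GG_HEARTBEAT_UPDATE',      -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN UPDATE <schema>.settings SET id = id WHERE rownum < 2; COMMIT; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1', -- adjust to the interval you want to trend on
    enabled         => TRUE);
END;
/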
On the target side, the target heartbeat table and heartbeat history table need to be created. These tables will not match the columns in the SETTINGS table, which means all of the data replicated through the SETTINGS table still has to be replicated along with the table serving as a heartbeat. The DDL to create these two tables is as follows:
CREATE TABLE <schema>.GG_STATUS_HB
(
  SITE_ID        NUMBER(10),
  DB_SCHEMA      VARCHAR2(30),
  CSN_TS         TIMESTAMP(6),
  CSN_NB         NUMBER(18),
  EXT_GROUP      VARCHAR2(10),
  EXT_TIME       TIMESTAMP(6),
  EXT_LAG        NUMBER,
  PMP_GROUP      VARCHAR2(10),
  PMP_TIME       TIMESTAMP(6),
  PMP_LAG        NUMBER,
  TGT_DB_NAME    VARCHAR2(15),
  REP_GROUP      VARCHAR2(10),
  REP_TIME       TIMESTAMP(6),
  REP_LAG        NUMBER,
  TOTAL_LAG      NUMBER,
  TOTAL_LAG_MIN  NUMBER,
  UPDATE_TS      TIMESTAMP(6),
  CONSTRAINT GG_STATUS_HB_PK PRIMARY KEY (SITE_ID) ENABLE
);

CREATE TABLE <schema>.GG_STATUS_HB_HIST
(
  SITE_ID        NUMBER(10),
  DB_SCHEMA      VARCHAR2(30),
  CSN_TS         TIMESTAMP(6),
  CSN_NB         NUMBER(18),
  EXT_GROUP      VARCHAR2(10),
  EXT_TIME       TIMESTAMP(6),
  EXT_LAG        NUMBER,
  PMP_GROUP      VARCHAR2(10),
  PMP_TIME       TIMESTAMP(6),
  PMP_LAG        NUMBER,
  TGT_DB_NAME    VARCHAR2(15),
  REP_GROUP      VARCHAR2(10),
  REP_TIME       TIMESTAMP(6),
  REP_LAG        NUMBER,
  TOTAL_LAG      NUMBER,
  TOTAL_LAG_MIN  NUMBER,
  UPDATE_TS      TIMESTAMP(6),
  CONSTRAINT GG_STATUS_HB_HIST_PK PRIMARY KEY (SITE_ID, DB_SCHEMA, CSN_TS, CSN_NB, UPDATE_TS) ENABLE
);
In mapping these tables, I found it easier to create macros to handle the mappings. Macros are especially handy here since the SETTINGS table exists in multiple schemas and has to be mapped through existing GoldenGate processes. For more on macros and passing parameters, I’ve written a post highlighting macros here.
Examples of the macros that are used in this configuration are as follows:
Macro for the Extract process:
This macro is used to capture and map the extract name, the time the transaction was extracted, the timestamp of the commit, and the System Change Number.
--Heartbeat Extract Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC

-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #hb_ext_details
BEGIN
EXT_GROUP = @GETENV("GGENVIRONMENT","GROUPNAME"), &
EXT_TIME = @DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
SRC_CSN_TS = @GETENV("GGHEADER","COMMITTIMESTAMP"), &
SRC_TRANS_CSN = @GETENV("TRANSACTION","CSN")
END;

-- *****************************************
-- Application Specific Macros for Heartbeat
-- *****************************************
MACRO #opshb_info
BEGIN
#hb_ext_details()
END;
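For the tokens to be captured, the extract parameter file has to load this library and attach the tokens to the SETTINGS table. A sketch of the relevant lines, assuming the library is saved as dirprm/hb_extract_macro.lib (the file name is my own placeholder):

-- load the heartbeat macro library
INCLUDE ./dirprm/hb_extract_macro.lib

-- attach the heartbeat tokens to the SETTINGS capture
TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );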
Macro for the Pump process:
This macro captures the pump name and the time the transaction passed through the pump.
--Heartbeat Pump Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC

-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #hb_pmp_details
BEGIN
PMP_GROUP = @GETENV("GGENVIRONMENT","GROUPNAME"), &
PMP_TIME = @DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP"))
END;

-- *****************************************
-- Application Specific Macros for Heartbeat
-- *****************************************
MACRO #opshb_info
BEGIN
#hb_pmp_details()
END;
Macro for the Replicat process:
This macro maps all of the tokens that have been passed from the other GoldenGate processes into the target tables.
--Heartbeat Replicat Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC

-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #opshb_rep_details
BEGIN
SITE_ID = @TOKEN("NHIN_STORE_ID"), &
DB_SCHEMA = @TOKEN("SRC_DB_SCHEMA"), &
CSN_TS = @TOKEN("SRC_CSN_TS"), &
CSN_NB = @TOKEN("SRC_TRANS_CSN"), &
EXT_GROUP = @TOKEN("EXT_GROUP"), &
EXT_TIME = @TOKEN("EXT_TIME"), &
PMP_GROUP = @TOKEN("PMP_GROUP"), &
PMP_TIME = @TOKEN("PMP_TIME"), &
TGT_DB_NAME = @GETENV("DBENVIRONMENT","DBNAME"), &
REP_GROUP = @GETENV("GGENVIRONMENT","GROUPNAME"), &
REP_TIME = @DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
UPDATE_TS = @DATENOW()
END;

-- *****************************************
-- Application Specific Macros for Heartbeat
-- *****************************************
MACRO #hb_info
PARAMS (#src_schema)
BEGIN
MAP #src_schema.SETTINGS, TARGET <target schema>.GG_STATUS_HB, &
KEYCOLS (SITE_ID), &
INSERTMISSINGUPDATES, &
COLMAP ( #opshb_rep_details() );

MAP #src_schema.SETTINGS, TARGET <target schema>.GG_STATUS_HB_HIST, &
KEYCOLS (SITE_ID, DB_SCHEMA, CSN_TS, CSN_NB, UPDATE_TS), &
INSERTALLRECORDS, HANDLECOLLISIONS, &
COLMAP ( #opshb_rep_details() );
END;
For each of the target heartbeat tables, a trigger is needed to calculate the lag and timestamp information. The triggers I created are based on the Oracle-supplied examples in note 1299679.1. To keep this blog somewhat short, I’m including just one sample of the triggers used.
Trigger Example:
CREATE OR REPLACE TRIGGER <target schema>.GG_STATUS_HB_TRIG
BEFORE INSERT OR UPDATE ON <target schema>.GG_STATUS_HB
FOR EACH ROW
BEGIN
  SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS), 1,
           INSTR(:NEW.EXT_TIME - :NEW.CSN_TS, ' ')))) * 86400
       + TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS),
           INSTR((:NEW.EXT_TIME - :NEW.CSN_TS), ' ') + 1, 2)) * 3600
       + TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS),
           INSTR((:NEW.EXT_TIME - :NEW.CSN_TS), ' ') + 4, 2)) * 60
       + TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS),
           INSTR((:NEW.EXT_TIME - :NEW.CSN_TS), ' ') + 7, 2))
       + TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS),
           INSTR((:NEW.EXT_TIME - :NEW.CSN_TS), ' ') + 10, 6)) / 1000000
    INTO :NEW.EXT_LAG
    FROM DUAL;

  SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), 1,
           INSTR(:NEW.PMP_TIME - :NEW.EXT_TIME, ' ')))) * 86400
       + TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),
           INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), ' ') + 1, 2)) * 3600
       + TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),
           INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), ' ') + 4, 2)) * 60
       + TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),
           INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), ' ') + 7, 2))
       + TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),
           INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), ' ') + 10, 6)) / 1000000
    INTO :NEW.PMP_LAG
    FROM DUAL;

  SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME), 1,
           INSTR(:NEW.REP_TIME - :NEW.PMP_TIME, ' ')))) * 86400
       + TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME),
           INSTR((:NEW.REP_TIME - :NEW.PMP_TIME), ' ') + 1, 2)) * 3600
       + TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME),
           INSTR((:NEW.REP_TIME - :NEW.PMP_TIME), ' ') + 4, 2)) * 60
       + TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME),
           INSTR((:NEW.REP_TIME - :NEW.PMP_TIME), ' ') + 7, 2))
       + TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME),
           INSTR((:NEW.REP_TIME - :NEW.PMP_TIME), ' ') + 10, 6)) / 1000000
    INTO :NEW.REP_LAG
    FROM DUAL;

  SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS), 1,
           INSTR(:NEW.REP_TIME - :NEW.CSN_TS, ' ')))) * 86400
       + TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS),
           INSTR((:NEW.REP_TIME - :NEW.CSN_TS), ' ') + 1, 2)) * 3600
       + TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS),
           INSTR((:NEW.REP_TIME - :NEW.CSN_TS), ' ') + 4, 2)) * 60
       + TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS),
           INSTR((:NEW.REP_TIME - :NEW.CSN_TS), ' ') + 7, 2))
       + TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS),
           INSTR((:NEW.REP_TIME - :NEW.CSN_TS), ' ') + 10, 6)) / 1000000
    INTO :NEW.TOTAL_LAG
    FROM DUAL;

  SELECT ROUND((:NEW.TOTAL_LAG / 60), 1) INTO :NEW.TOTAL_LAG_MIN FROM DUAL;
  SELECT SYSTIMESTAMP INTO :NEW.UPDATE_TS FROM DUAL;
END;
/
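As a side note, the SUBSTR/INSTR arithmetic above comes straight from the note’s example and leans on the string form of a timestamp difference. A small helper function of my own (purely a sketch, not part of the original configuration) computes the same number of seconds with interval extraction, which would shrink each SELECT in the trigger to a single assignment such as :NEW.EXT_LAG := ts_diff_seconds(:NEW.EXT_TIME, :NEW.CSN_TS);

CREATE OR REPLACE FUNCTION ts_diff_seconds (p_end TIMESTAMP, p_start TIMESTAMP)
  RETURN NUMBER
IS
  -- subtracting two timestamps yields an INTERVAL DAY TO SECOND
  l_diff INTERVAL DAY(9) TO SECOND(6) := p_end - p_start;
BEGIN
  RETURN EXTRACT(DAY    FROM l_diff) * 86400
       + EXTRACT(HOUR   FROM l_diff) * 3600
       + EXTRACT(MINUTE FROM l_diff) * 60
       + EXTRACT(SECOND FROM l_diff);
END;
/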
After all these items are in place, the next thing that has to be done is updating the parameter files. As with any macros, the parameter files have to reference the macro library via an INCLUDE statement. Then the mappings for the SETTINGS table have to be added to each parameter file before restarting the process. More information on macros can be found here.
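As a sketch of what that looks like on the replicat side (the library path and schema names are placeholders of my own for illustration):

-- load the heartbeat macro library
INCLUDE ./dirprm/hb_replicat_macro.lib

-- one invocation per source schema that owns a SETTINGS table
#hb_info (APP_SCHEMA1)
#hb_info (APP_SCHEMA2)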
The biggest change that has to be made to existing processes comes in the pump. The pump has to run in PASSTHRU mode for every table except the SETTINGS table; in order to assign tokens to the SETTINGS table, the pump has to handle it in NOPASSTHRU mode. To do this, the pump parameter file needs to be updated with something similar to the following:
NODYNAMICRESOLUTION
WILDCARDRESOLVE IMMEDIATE
NOPASSTHRU
TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );

DYNAMICRESOLUTION
WILDCARDRESOLVE DYNAMIC
PASSTHRU
TABLEEXCLUDE <schema>.SETTINGS;
TABLE <schema>.*;
With everything in place and the processes restarted, the crontab job should be updating the heartbeat on its scheduled interval. Reviewing the GG_STATUS_HB table gives you a quick view of the application heartbeat, and the GG_STATUS_HB_HIST table shows how much lag is occurring in your GoldenGate environment over time, without the additional overhead of a separate heartbeat process.
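To put the history table to work, a query along these lines (my own example, using the columns from the DDL above) trends the lag by hour over the last day:

SELECT site_id,
       db_schema,
       TRUNC(update_ts, 'HH24')  AS hour_bucket,
       ROUND(AVG(total_lag), 1)  AS avg_lag_sec,
       ROUND(MAX(total_lag), 1)  AS max_lag_sec
  FROM <target schema>.GG_STATUS_HB_HIST
 WHERE update_ts > SYSDATE - 1
 GROUP BY site_id, db_schema, TRUNC(update_ts, 'HH24')
 ORDER BY site_id, hour_bucket;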
Enjoy!
about.me: http://about.me/dbasolved
Bobby Curtis
I’m Bobby Curtis and I’m just your normal average guy who has been working in the technology field for a while (I started when I was 18 with the US Army). The goal of this blog has changed a bit over the years. Initially, it was a general blog where I wrote my thoughts down. Then it shifted to focus on the Oracle Database, Oracle Enterprise Manager, and eventually Oracle GoldenGate.
If you want to follow me in a more timely manner, I can be found on Twitter at @dbasolved or on LinkedIn under “Bobby Curtis MBA”.