Integrated Application Heartbeat for Oracle GoldenGate

Over the last two weeks I’ve been working on a heartbeat monitoring solution for a client.  This is not the standard heartbeat solution provided by Oracle, as described in note 1299679.1, yet it is very similar.  The approach I configured resembles a traditional heartbeat setup but uses the same extracts and replicats already serving the database/application.  For simplicity, I like to call this approach an Integrated Application Heartbeat.
In order to set up this style of heartbeat monitoring, the following items are needed:
1.  Identify the heartbeat table in the source database.  In this configuration, let’s call this table SETTINGS (keep in mind nothing about the table will change; no DDL changes).
2.  Identify and generate DDL for the target heartbeat table.  Let’s call this table GG_STATUS_HB.
3.  Identify and generate DDL for the target heartbeat history table.  Let’s call this table GG_STATUS_HB_HIST.
4.  Two triggers for updating information in the target heartbeat tables (one example below)
5.  A few macros, one for each of the processes in the configuration
6.  Either a crontab job or a DBMS_SCHEDULER/DBMS_JOB job to drive the heartbeat update (a sketch of the scheduler approach follows the crontab SQL below)
Now that the components have been identified, let’s take a look at what needs to be done.
The source-side heartbeat table has already been identified.  In the database, the table name is SETTINGS.  The SETTINGS table has a single column for a primary key.  This table is updated by a crontab job that runs the following SQL.  The <schema> variable is used because the SETTINGS table exists in more than one schema.

update <schema>.settings set id=id where rownum < 2;

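If a database-side job is preferred over cron, the same update can be driven with DBMS_SCHEDULER.  Below is a minimal sketch; the job name and the one-minute interval are assumptions, not part of the client configuration.

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GG_HEARTBEAT_UPDATE',  -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN UPDATE <schema>.settings SET id = id WHERE ROWNUM < 2; COMMIT; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',  -- assumed heartbeat interval
    enabled         => TRUE);
END;
/
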
On the target side, the heartbeat and heartbeat history tables need to be created.  These tables will not match the columns in the SETTINGS table.  This means that all the data replicated in the SETTINGS table still has to be replicated, along with the table being used as a heartbeat.  The DDL to create these two tables is as follows:

CREATE TABLE <schema>.GG_STATUS_HB
(
SITE_ID                    NUMBER(10),
DB_SCHEMA                  VARCHAR2(30),
CSN_TS                     TIMESTAMP(6),
CSN_NB                     NUMBER(18),
EXT_GROUP                  VARCHAR2(10),
EXT_TIME                   TIMESTAMP(6),
EXT_LAG                    NUMBER,
PMP_GROUP                  VARCHAR2(10),
PMP_TIME                   TIMESTAMP(6),
PMP_LAG                    NUMBER,
TGT_DB_NAME                VARCHAR2(15),
REP_GROUP                  VARCHAR2(10),
REP_TIME                   TIMESTAMP(6),
REP_LAG                    NUMBER,
TOTAL_LAG                  NUMBER,
TOTAL_LAG_MIN              NUMBER,
UPDATE_TS                  TIMESTAMP(6),
CONSTRAINT GG_STATUS_HB_PK PRIMARY KEY (SITE_ID) ENABLE
);
CREATE TABLE <schema>.GG_STATUS_HB_HIST
(
SITE_ID                    NUMBER(10),
DB_SCHEMA                  VARCHAR2(30),
CSN_TS                     TIMESTAMP(6),
CSN_NB                     NUMBER(18),
EXT_GROUP                  VARCHAR2(10),
EXT_TIME                   TIMESTAMP(6),
EXT_LAG                    NUMBER,
PMP_GROUP                  VARCHAR2(10),
PMP_TIME                   TIMESTAMP(6),
PMP_LAG                    NUMBER,
TGT_DB_NAME                VARCHAR2(15),
REP_GROUP                  VARCHAR2(10),
REP_TIME                   TIMESTAMP(6),
REP_LAG                    NUMBER,
TOTAL_LAG                  NUMBER,
TOTAL_LAG_MIN              NUMBER,
UPDATE_TS                  TIMESTAMP(6),
CONSTRAINT GG_STATUS_HB_HIST_PK PRIMARY KEY (SITE_ID, DB_SCHEMA, CSN_TS, CSN_NB, UPDATE_TS) ENABLE
);

In mapping these tables, I found it easier to create macros to handle the mappings.  Macros are especially handy since the SETTINGS table exists in multiple schemas and needs to be mapped through existing GoldenGate processes.  For more on macros and passing parameters, I’ve written a post that highlights macros (here).
Examples of the macros that are used in this configuration are as follows:
Macro for the Extract process:
This macro is used to capture and map the extract name, the time the transaction was extracted, the timestamp of the commit, and the System Change Number (CSN).

--Heartbeat Extract Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC
-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #hb_ext_details
BEGIN
EXT_GROUP = @GETENV("GGENVIRONMENT","GROUPNAME"), &
EXT_TIME = @DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
CSN_TS = @GETENV("GGHEADER","COMMITTIMESTAMP"), &
TRANS_CSN = @GETENV("TRANSACTION","CSN")
END;
-- *****************************************
-- Application Specific Macros for Heartbeat
-- *****************************************
MACRO #opshb_info
BEGIN
       #hb_ext_details()
END;

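In the extract parameter file, the macro library is included and the macro is invoked in the TOKENS clause of the TABLE statement for SETTINGS.  A minimal sketch follows; the library file name and the literal values for the NHIN_STORE_ID and SRC_DB_SCHEMA tokens (which the replicat macro reads later) are assumptions for illustration.

-- Extract parameter file (excerpt)
INCLUDE ./dirprm/hb_ext_macros.prm

TABLE <schema>.SETTINGS, TOKENS ( &
NHIN_STORE_ID = "1", &
SRC_DB_SCHEMA = "<schema>", &
#opshb_info() );
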
Macro for the Pump process:
This macro captures the pump name and the time the transaction passed through the pump.

--Heartbeat Pump Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC
-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #hb_pmp_details
BEGIN
PMP_GROUP=@GETENV("GGENVIRONMENT","GROUPNAME"), &
PMP_TIME=@DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP"))
END;
-- *****************************************
-- Application Specific Macros for Heartbeat
-- *****************************************
MACRO #opshb_info
BEGIN
        #hb_pmp_details()
END;

Macro for the Replicat process:
This macro does the mapping of all the tokens that have been passed from the other GoldenGate processes to the target tables.

--Heartbeat Replicat Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC
-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #opshb_rep_details
BEGIN
SITE_ID=@TOKEN("NHIN_STORE_ID"), &
DB_SCHEMA=@TOKEN("SRC_DB_SCHEMA"), &
CSN_TS=@TOKEN("CSN_TS"), &
CSN_NB=@TOKEN("TRANS_CSN"), &
EXT_GROUP=@TOKEN("EXT_GROUP"), &
EXT_TIME=@TOKEN("EXT_TIME"), &
PMP_GROUP=@TOKEN("PMP_GROUP"), &
PMP_TIME=@TOKEN("PMP_TIME"), &
TGT_DB_NAME=@GETENV("DBENVIRONMENT","DBNAME"), &
REP_GROUP=@GETENV("GGENVIRONMENT","GROUPNAME"), &
REP_TIME=@DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
UPDATE_TS=@DATENOW()
END;
-- *****************************************
-- Application Specific Macros for Heartbeat
-- *****************************************
MACRO #hb_info
PARAMS (#src_schema)
BEGIN
MAP #src_schema.SETTINGS, TARGET <target schema>.GG_STATUS_HB, &
KEYCOLS (SITE_ID), &
INSERTMISSINGUPDATES, &
COLMAP ( #opshb_rep_details() );

MAP #src_schema.SETTINGS, TARGET <target schema>.GG_STATUS_HB_HIST, &
KEYCOLS (SITE_ID, DB_SCHEMA, CSN_TS, CSN_NB, UPDATE_TS), &
INSERTALLRECORDS, HANDLECOLLISIONS, &
COLMAP ( #opshb_rep_details() );
END;

For each of the target heartbeat tables, a trigger is needed to calculate the lag and timestamp information.  The triggers that I created are based on the Oracle-supplied examples in note 1299679.1.  To keep this blog somewhat short, I’m including just one sample of the triggers used.
Trigger Example:

CREATE OR REPLACE TRIGGER <target schema>.GG_STATUS_HB_TRIG
BEFORE INSERT OR UPDATE ON <target schema>.GG_STATUS_HB
FOR EACH ROW
BEGIN
SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS),1, INSTR(:NEW.EXT_TIME - :NEW.CSN_TS,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS), INSTR((:NEW.EXT_TIME - :NEW.CSN_TS),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS), INSTR((:NEW.EXT_TIME - :NEW.CSN_TS),' ')+4,2)) * 60
+ TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS), INSTR((:NEW.EXT_TIME - :NEW.CSN_TS),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS), INSTR((:NEW.EXT_TIME - :NEW.CSN_TS),' ')+10,6)) / 1000000
INTO :NEW.EXT_LAG
FROM DUAL;

SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),1, INSTR(:NEW.PMP_TIME - :NEW.EXT_TIME,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),' ')+4,2)) * 60
+ TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),' ')+10,6)) / 1000000
INTO :NEW.PMP_LAG
FROM DUAL;

SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME),1, INSTR(:NEW.REP_TIME - :NEW.PMP_TIME,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME), INSTR((:NEW.REP_TIME - :NEW.PMP_TIME),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME), INSTR((:NEW.REP_TIME - :NEW.PMP_TIME),' ')+4,2)) * 60
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME), INSTR((:NEW.REP_TIME - :NEW.PMP_TIME),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME), INSTR((:NEW.REP_TIME - :NEW.PMP_TIME),' ')+10,6)) / 1000000
INTO :NEW.REP_LAG
FROM DUAL;

SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS),1, INSTR(:NEW.REP_TIME - :NEW.CSN_TS,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS), INSTR((:NEW.REP_TIME - :NEW.CSN_TS),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS), INSTR((:NEW.REP_TIME - :NEW.CSN_TS),' ')+4,2)) * 60
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS), INSTR((:NEW.REP_TIME - :NEW.CSN_TS),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS), INSTR((:NEW.REP_TIME - :NEW.CSN_TS),' ')+10,6)) / 1000000
INTO :NEW.TOTAL_LAG
FROM DUAL;

SELECT ROUND((:NEW.TOTAL_LAG/60),1) INTO :NEW.TOTAL_LAG_MIN FROM DUAL;
SELECT SYSTIMESTAMP INTO :NEW.UPDATE_TS FROM DUAL;
END;
/

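The SUBSTR/INSTR arithmetic above parses the string form of each timestamp difference (an INTERVAL DAY TO SECOND) into seconds.  The same conversion can be written more directly with EXTRACT; this is a sketch of just the EXT_LAG calculation, not the method from the Oracle note:

-- Equivalent lag calculation using EXTRACT on the interval (sketch)
SELECT EXTRACT(DAY    FROM (:NEW.EXT_TIME - :NEW.CSN_TS)) * 86400
     + EXTRACT(HOUR   FROM (:NEW.EXT_TIME - :NEW.CSN_TS)) * 3600
     + EXTRACT(MINUTE FROM (:NEW.EXT_TIME - :NEW.CSN_TS)) * 60
     + EXTRACT(SECOND FROM (:NEW.EXT_TIME - :NEW.CSN_TS))
INTO :NEW.EXT_LAG
FROM DUAL;
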
After all these items are in place, the next step is updating the parameter files.  As with any macros, the parameter files have to reference the macro library via an INCLUDE statement.  Then the mapping for the SETTINGS table has to be added to each parameter file before restarting the process.  More information on macros can be found here.
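For example, a replicat parameter file would include the macro library and invoke #hb_info once per source schema.  This is a minimal sketch; the library file name and the schema names are assumptions:

-- Replicat parameter file (excerpt)
INCLUDE ./dirprm/hb_rep_macros.prm

#hb_info(SCHEMA1)
#hb_info(SCHEMA2)
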
The biggest change to existing processes comes in the pump.  The pump has to be configured as PASSTHRU for all tables except the SETTINGS table.  In order to assign tokens to the SETTINGS table, the pump needs to process it in NOPASSTHRU mode.  To do this, the pump parameter file needs to be updated with something similar to the following:

NODYNAMICRESOLUTION
WILDCARDRESOLVE IMMEDIATE
NOPASSTHRU
TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );
DYNAMICRESOLUTION
WILDCARDRESOLVE DYNAMIC
PASSTHRU
TABLEEXCLUDE <schema>.SETTINGS;
TABLE <schema>.*;

With everything in place and the processes restarted, the crontab job should be updating the heartbeat on its scheduled interval.  Reviewing the GG_STATUS_HB table gives you a quick view of the application heartbeat, while the GG_STATUS_HB_HIST table shows how much lag occurs in your GoldenGate environment over time, without the additional overhead of a separate heartbeat process.
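For a quick look at the current lag numbers, a query like the following works (a hypothetical example; adjust the schema to match your target):

-- Current per-site lag from the heartbeat table
SELECT site_id, db_schema, ext_lag, pmp_lag, rep_lag, total_lag_min, update_ts
FROM <target schema>.gg_status_hb
ORDER BY site_id;
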
Enjoy!
about.me: http://about.me/dbasolved