
Wednesday, December 30, 2015

Uninstalling GoldenGate software

GoldenGate software can be uninstalled using the deinstall script available in the deinstall directory under the GoldenGate home. The following walks through the steps.

[oracle@12r1-rac1 12121]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (12r1-rac1.selvapc.com) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED                                           
EXTRACT     ABENDED     DEMO1E      00:00:02      154:53:03   

GGSCI (12r1-rac1.selvapc.com) 2>

The extract had already abended because of some changes I made earlier (out of scope for this article). If a process is running, stop it with the STOP command.
STOP EXTRACT DEMO1E

All extract and replicat processes can be stopped at once by issuing the following command:
STOP ER *

Stop the manager process if it is running; in our case it is already stopped, so that step is skipped. The following DELETE EXTRACT step is also optional, since the GoldenGate deinstall removes everything by itself.

GGSCI (12r1-rac1.selvapc.com) 2> delete ext *
Are you sure you want to delete all groups? y
Deleted EXTRACT DEMO1E.


GGSCI (12r1-rac1.selvapc.com) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED                                           


GGSCI (12r1-rac1.selvapc.com) 4> exit

Run the following scripts to disable and remove DDL replication objects and marker objects.

[oracle@12r1-rac1 12121]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Thu Dec 31 12:17:03 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> @ddl_disable    

Trigger altered.

SQL> @ddl_remove

DDL replication removal script.
WARNING: this script removes all DDL replication objects and data.

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.

Enter Oracle GoldenGate schema name:ggs_dba
Working, please wait ...
Spooling to file ddl_remove_spool.txt

Script complete.
SQL> @remove_marker
SP2-0310: unable to open file "remove_marker.sql"
SQL> @marker_remove

Marker removal script.
WARNING: this script removes all marker objects and data.

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.

Enter Oracle GoldenGate schema name:ggs_dba

PL/SQL procedure successfully completed.


Sequence dropped.


Table dropped.


Script complete.
-- The following revoke is optional, as the ggs_dba user will be dropped once GoldenGate is uninstalled.
-- For a target database, the privilege_type is 'apply', or '*' if both capture and apply were granted.
SQL> exec dbms_goldengate_auth.revoke_admin_privilege(grantee => 'ggs_dba', privilege_type => 'capture');

PL/SQL procedure successfully completed.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Change to deinstall directory under GoldenGate Home and run the deinstall script.

[oracle@12r1-rac1 12121]$ cd deinstall/
[oracle@12r1-rac1 deinstall]$ ./deinstall.sh

ALERT: Ensure all the processes running from the current Oracle Home are shutdown prior to running this software uninstallation script.

Proceed with removing Oracle GoldenGate home: /u02/gg_home/12121 (yes/no)? [no] 
yes
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3068 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-12-31_12-22-16PM. Please wait ...Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Starting deinstall


Deinstall in progress (Thursday, December 31, 2015 12:22:36 PM IST)
............................................................... 100% Done.

Deinstall successful


End of install phases.(Thursday, December 31, 2015 12:23:02 PM IST)
End of deinstallations
Please check '/oracle/oraInventory/logs/silentInstall2015-12-31_12-22-16PM.log' for more details.
[oracle@12r1-rac1 deinstall]$ 
[oracle@12r1-rac1 deinstall]$ cd ..
cd: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
[oracle@12r1-rac1 ..]$ pwd
/u02/gg_home/12121/deinstall/..
[oracle@12r1-rac1 ..]$ cd /u02/gg_home/
[oracle@12r1-rac1 gg_home]$ ls -lrt
total 0

The getcwd error above is expected: the deinstall removed the GoldenGate home, including the shell's current working directory. Once the GoldenGate software is uninstalled, the GoldenGate schema can be dropped.

[oracle@12r1-rac1 gg_home]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Thu Dec 31 12:24:20 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> drop user ggs_dba cascade;

User dropped.

SQL> 

Everything's done!
Happy deinstall!!

Friday, December 4, 2015

OGG-01423 No valid default archive log destination directory found for thread 1

Today, while trying to set up GoldenGate sync between an Oracle 11.2.0.1 database running on Windows 10 and an Oracle 12.1.0.2 database running on openSUSE (a lab setup), I received the following error when attempting to start the real-time extract.

ggsci > ADD EXTRACT DEMO1E, TRANLOG, BEGIN NOW
ggsci > ADD EXTTRAIL E:\oracle\ogg112101\dirdat\ea, EXTRACT DEMO1E, Megabytes 200
ggsci > start extract DEMO1E
The INFO ALL command showed the process as ABENDED, and the report file DEMO1E.rpt shows the following.

...
...
...

2015-12-04 02:48:50  WARNING OGG-01423  No valid default archive log destination directory found for thread 1.

2015-12-04 02:48:50  INFO    OGG-01515  Positioning to begin time Dec 4, 2015 1:59:16 AM.

Source Context :
  SourceModule            : [er.common]
  SourceID                : [er/common.cpp]
  SourceFunction          : [extract_start_point]
  SourceLine              : [2090]

2015-12-04 02:49:10  ERROR   OGG-00446  Error 5 (Access is denied.) opening log file E:\ORACLE\ORADATA\ORCL\REDO03.LOG for sequence 57. Not able to establish initial position for begin time 2015-12-04 01:59:16.

2015-12-04 02:49:10  ERROR   OGG-01668  PROCESS ABENDING.
A similar issue may arise under a different scenario when the source is a RAC database, but here the source is a standalone database.

So why is this error reported?
Looking at the archive log destination in the database, it is defined to use the Fast Recovery Area (FRA).

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     55
Next log sequence to archive   57
Current log sequence           57

SQL> sho parameter log_archive_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string      

Also, no value is defined for the default archive location (the database was created with DBCA during the Oracle software installation, using default values).

Hence, I gave the archive log destination an explicit value so that GoldenGate has a location to check. I did the below.

SQL> alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST' scope=both;

SQL> sho parameter log_archive_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------------
log_archive_dest_1                   string      location=USE_DB_RECOVERY_FILE_DEST

Now, after setting the location, I started the extract again, and this time the response was positive.

ggsci > start extract DEMO1E
The report file contents...

...
...
...
             
2015-12-04 02:57:23  WARNING OGG-01423  No valid default archive log destination directory found for thread 1.

2015-12-04 02:57:24  INFO    OGG-01515  Positioning to begin time Dec 4, 2015 2:56:48 AM.

2015-12-04 02:57:24  INFO    OGG-01052  No recovery is required for target file E:\oracle\ogg112101\dirdat\ea000000, at RBA 0 (file not opened).

2015-12-04 02:57:24  INFO    OGG-01478  Output file E:\oracle\ogg112101\dirdat\ea is using format RELEASE 11.2.

***********************************************************************
**                     Run Time Messages                             **
***********************************************************************


2015-12-04 02:57:25  INFO    OGG-01515  Positioning to begin time Dec 4, 2015 2:56:48 AM.

2015-12-04 02:57:25  INFO    OGG-01516  Positioned to Sequence 57, RBA 3447824, SCN 0.0, Dec 4, 2015 2:56:48 AM.

2015-12-04 02:57:25  INFO    OGG-01517  Position of first record processed Sequence 57, RBA 3447824, SCN 0.2086081, Dec 4, 2015 2:57:15 AM.

We still see the same warning message, but the extract has now positioned itself and processed the first record. (I'm still wondering why it throws "Access is denied" for the online redo log when all the required permissions are available ;-) )

Happy Troubleshooting!!

Saturday, September 19, 2015

ODI - RCU error, Can't find javaw

Today I installed ODI 12c and was trying to configure the Master and Work repositories using the RCU tool. A how-to can be found in this link.
The RCU tool has to be invoked from the ODI_HOME\oracle_common\bin directory. The OS I was working on was Windows 10 Enterprise, and the ODI version is 12.1.3.

I invoked rcu.bat as below, and it threw an error that javaw could not be found under the ODI_HOME.

Microsoft Windows [Version 10.0.10240]
(c) 2015 Microsoft Corporation. All rights reserved.

C:\WINDOWS\system32>e:

E:\>cd E:\oracle\Middleware\Oracle_Home\oracle_common\bin

E:\oracle\Middleware\Oracle_Home\oracle_common\bin>rcu.bat


Here the ODI software assumes that a Java JDK is bundled with it and therefore defaults to the ODI_HOME path for Java. As a prerequisite for installing ODI, a compatible JDK is installed separately (I used JDK 1.8). So we need to set both JAVA_HOME and ODI_JAVA_HOME to the directory where the JDK is installed. In my case the path is C:\Program Files\Java\jdk1.8.0_60.

So set both variables in the command prompt and run the RCU utility as below.

E:\oracle\Middleware\Oracle_Home\oracle_common\bin>set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_60

E:\oracle\Middleware\Oracle_Home\oracle_common\bin>set ODI_JAVA_HOME=C:\Program Files\Java\jdk1.8.0_60

E:\oracle\Middleware\Oracle_Home\oracle_common\bin>rcu.bat

E:\oracle\Middleware\Oracle_Home\oracle_common\bin>
Now the utility starts and shows the welcome page without errors.

Happy Troubleshooting!!

Sunday, August 23, 2015

Oracle GoldenGate simple hands-on demo

I was working on a project that involved Oracle GoldenGate for data migration from one database to another. The project can be termed cross-platform data replication (and a migration and upgrade, if done for the whole database), as the source and target operating systems and database versions differ from each other.
I would like to show a simple demo of installing Oracle GoldenGate and configuring it for migration or replication of data between databases (Oracle in our case), although Oracle GoldenGate can be used across heterogeneous platforms as well.

The post consists of the following
1. Environment details
2. GoldenGate installation
3. Prerequisites and Oracle GoldenGate software set up
4. Enabling TRANDATA
5. Param files set up
6. The GoldenGate process (Commands and Explanations)
7. Real time sample demo

Now let's get into the details.

1. Environment details

Source
Oracle 9.2.0.7.0 on SunOS operating system
OGG_source $ uname -sr
SunOS 5.10
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.7.0 – Production

Destination
Oracle 11.2.0.4 on Linux 2.6.32-358.0.1.el6.x86_64
OGG_target $ uname -sr
Linux 2.6.32-358.0.1.el6.x86_64
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

GoldenGate software used
Source: Oracle GoldenGate Version 11.1.1.1 (build 13067027)
Destination: Oracle GoldenGate Version 11.2.1.0.17

2. GoldenGate Installation

For GoldenGate installation please see Oracle GoldenGate Installation

3. Prerequisites and Oracle GoldenGate software set up

3.1 Prerequisites
As a prerequisite for enabling and utilizing OGG, archive logging must be enabled in the source database. For bidirectional replication, both databases must run in ARCHIVELOG mode.

For enabling Archivelog mode, see Enabling Archivelog


Once ARCHIVELOG mode has been enabled, enable supplemental logging on the source by issuing the command below:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
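To confirm the change took effect, a quick check against V$DATABASE (a standard dictionary view; the column should now report YES for minimal supplemental logging):

```sql
-- Verify minimal supplemental logging is enabled on the source
SELECT supplemental_log_data_min FROM v$database;
```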

3.2 OGG software setup
The following has to be performed on the source and target databases to set up the GoldenGate owner and to create and grant the privileges and roles required for GoldenGate activities.
On Source:
export ORACLE_BASE=/r01/app/oracle/
export ORACLE_HOME=/r01/app/oracle/product/9.2.0
export PATH=$PATH:$ORACLE_HOME/bin:/backup/PRNETSDB/ogg/11.1
export ORACLE_SID=PRNETSDB
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export SHLIB_PATH=$ORACLE_HOME/lib
export GGH=/backup/PRNETSDB/ogg/11.1

$ cd $GGH
$ ggsci

ggsci> create subdirs
All required GoldenGate directories will be created under the GoldenGate home directory.
SQL> create user ggs_dba identified by ggs_dba default tablespace TOOLS temporary tablespace TEMP profile DEFAULT;
SQL> grant CONNECT to ggs_dba;
SQL> grant SELECT ANY TABLE to ggs_dba; -- This can be restricted to tables that are only involved
SQL> grant ALTER SESSION to ggs_dba;
Change to the OGG HOME (GGH) directory and execute the scripts.
cd $GGH
-- Run the following as SYSDBA user with input "GGS_DBA"
SQL> @marker_setup.sql -- creates the DDL marker table under GGS_DBA, required for DDL replication
SQL> @ddl_setup.sql -- To support DDL replication
SQL> @role_setup.sql
SQL> grant GGS_GGSUSER_ROLE to ggs_dba;
SQL> @ddl_enable.sql -- Enables GGS trigger
Change to the dirprm directory, which was created earlier, and create the manager parameter file as below:
cd $GGH/dirprm
vi mgr.prm
PORT 15000
DYNAMICPORTLIST 15010-15020
USERID GGS_DBA, PASSWORD ggs_dba

ggsci> start manager

ggsci>

On Target:
export ORACLE_BASE=/ne01/app/oracle
export ORACLE_HOME=/ne01/app/oracle/product/11.2.0.4
export PATH=$PATH:$ORACLE_HOME/bin:/backup/gg_kt/ogg/11.2
export ORACLE_SID=STNETSDB
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/backup/gg_kt/ogg/11.2
export SHLIB_PATH=$ORACLE_HOME/lib
export GGH=/backup/gg_kt/ogg/11.2

$ cd $GGH
$ ggsci

ggsci> create subdirs

All required GoldenGate directories will be created under the GoldenGate home directory.
SQL> create user ggs_dba identified by ggs_dba default tablespace TOOLS temporary tablespace TEMP profile DEFAULT;
SQL> grant CONNECT to ggs_dba;
SQL> grant SELECT ANY TABLE to ggs_dba; -- This can be restricted to tables that are only involved
SQL> grant ALTER SESSION to ggs_dba;
-- Grant insert, update, delete privilege on tables replicated
SQL> grant insert, delete, update on [list of tables] to ggs_dba;
Alternatively, grant the Oracle GoldenGate user the DBA role until the migration activity is complete, and then revoke the grant if required.
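A minimal sketch of that alternative, run as SYSDBA (the ggs_dba schema name is from this setup; whether this broad grant is acceptable depends on your security policy):

```sql
-- Broad privilege for the duration of the migration only
GRANT DBA TO ggs_dba;

-- ... perform the migration ...

-- Revoke once the migration activity is complete
REVOKE DBA FROM ggs_dba;
```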

Change to the OGG HOME (GGH) directory and execute the scripts.
cd $GGH
Run the following as SYSDBA user with input "GGS_DBA"
SQL> @marker_setup.sql -- creates the DDL marker table under GGS_DBA, required for DDL replication
SQL> @ddl_setup.sql -- To support DDL replication
SQL> @role_setup.sql
SQL> grant GGS_GGSUSER_ROLE to ggs_dba;
SQL> @ddl_enable.sql -- Enable GGS trigger
Change to the dirprm directory, which was created earlier, and create the manager parameter file as below:
cd $GGH/dirprm
vi mgr.prm
PORT 12000
DYNAMICPORTLIST 12010-12020
USERID GGS_DBA, PASSWORD ggs_dba
cd $GGH
ggsci
ggsci 1> start manager
ggsci 2> edit params ./GLOBALS -- This creates a vi file named GLOBALS* under the current directory
GGSCHEMA ggs_dba 
CHECKPOINTTABLE ggs_dba.checkpoint
:wq – to save the file
ggsci 3> dblogin userid ggs_dba Password ggs_dba
ggsci 4> add checkpointtable ggs_dba.checkpoint
Set the ENABLE_GOLDENGATE_REPLICATION parameter; this is a mandatory step if the database is Oracle version 11g or above.
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION = TRUE SCOPE=BOTH;
* To learn more about GLOBALS and other parameter files, refer to this Oracle documentation link.

4. Enabling TRANDATA

Enabling TRANDATA (supplemental transaction data logging) is explained in Enable TRANDATA.
Once TRANDATA is enabled for the individual tables, the parameter files can be created.
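As a rough sketch (the schema, table, and credentials here are the ones used in this demo), enabling and checking TRANDATA for a table from GGSCI typically looks like:

```
GGSCI> dblogin userid ggs_dba password ggs_dba
GGSCI> ADD TRANDATA sdas.ogg_test
GGSCI> INFO TRANDATA sdas.ogg_test
```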

5. Param files set up

All the parameter files required for our demo are given in this link: Simple parameter files for Oracle GoldenGate.
Once the parameter files are created, we can start the Oracle GoldenGate replication process.

6. The GoldenGate process (Commands and Explanations)

Please see this link, OGG step by step process, for the Oracle GoldenGate commands, the step-by-step process, and simple explanations of each.
With that, our Oracle GoldenGate setup and replication is complete.

7. Real time sample demo

Now that we have completed Oracle GoldenGate replication between two databases of different versions and platforms, a real-time demo will be very useful for understanding the process better.

Please see here for the demo - Oracle GoldenGate sample demo.

Happy Replicating!!

OGG sample real time demo

This post gives a real time demo of Oracle GoldenGate replication. For complete installation and set up, see this link

Terminologies used
OGG_source $ -- refers to commands executed from the source server
OGG_target $ -- refers to commands executed from the target server
SQL@OGG_source > -- refers to commands executed from the source SQL*Plus session
SQL@OGG_target > -- refers to commands executed from the target SQL*Plus session
GGSCI (source) -- refers to commands executed from the source GGSCI session
GGSCI (target) -- refers to commands executed from the target GGSCI session

In the session below, the initial extract is started from the OS as a background process. The initial extract may run for hours depending on the amount of data it has to unload from the source database, so it is run in the background to let it complete without interruption.
We also grep the start time of the extract from its report file, to be used later in the replication: the ALTER REPLICAT ... BEGIN step on the target uses this captured time.
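As a small sketch of that time capture (the report file name and the "Starting at" line format are taken from the demo session below; the /tmp path is only for illustration):

```shell
# Sketch: recover the initial extract's start time from its report file and
# build the ALTER REPLICAT command that positions the real-time replicat.
# Simulate the report file for illustration:
cat > /tmp/DEMO1I.rpt <<'EOF'
                    Starting at 2015-08-07 08:02:26
EOF

# Keep only the timestamp from the "Starting at" line
START_TIME=$(grep "Starting at" /tmp/DEMO1I.rpt | sed 's/.*Starting at //')
echo "ALTER REPLICAT DEMO1R, BEGIN ${START_TIME}"
```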
OGG_source $ cd $GGH
OGG_source $ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.1 13067027
Solaris, sparc, 64bit (optimized), Oracle 9i on Oct 14 2011 08:02:21

Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.



GGSCI (source) 1> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING


GGSCI (source) 2> ADD EXTRACT DEMO1E, TRANLOG, THREADS 2, BEGIN NOW
EXTRACT added.


GGSCI (source) 3> ADD EXTTRAIL /backup/PRNETSDB/ogg/11.1/dirdat/ea, EXTRACT DEMO1E, Megabytes 200
EXTTRAIL added.


GGSCI (source) 4> start extract DEMO1E

Sending START request to MANAGER ...
EXTRACT DEMO1E starting


GGSCI (source) 5> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     DEMO1E      00:00:07      00:00:06


GGSCI (source) 6> ADD EXTRACT DEMO1P, EXTTRAILSOURCE /backup/PRNETSDB/ogg/11.1/dirdat/ea
EXTRACT added.


GGSCI (source) 7> ADD RMTTRAIL /backup/gg_kt/ogg/11.2/dirdat/pa, EXTRACT DEMO1P, Megabytes 100
RMTTRAIL added.


GGSCI (source) 8> start extract DEMO1P

Sending START request to MANAGER ...
EXTRACT DEMO1P starting


GGSCI (source) 9> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     DEMO1E      00:00:00      00:00:06
EXTRACT     RUNNING     DEMO1P      00:00:00      00:00:22


GGSCI (source) 10> ADD EXTRACT DEMO1I, sourceistable
EXTRACT added.


GGSCI (source) 11> exit
OGG_source $ nohup extract paramfile $GGH/dirprm/DEMO1I.prm reportfile $GGH/dirrpt/DEMO1I.rpt > DEMO1I.out 2>&1 &
[1]     17810
OGG_source $
[1] +  Done                    nohup extract paramfile $GGH/dirprm/DEMO1I.prm reportfile $GGH/dirrpt/DEMO1I.rpt > DEMO1I.out 2>&1 &
OGG_source $ cd dirrpt
OGG_source $ more DEMO1I.rpt|grep "Starting at"
                    Starting at 2015-08-07 08:02:26
OGG_source $ sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.7.0 - Production on Fri Aug 7 08:03:15 2015

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.7.0 - Production

SQL@OGG_source > select count(*) from sdas.ogg_test;

  COUNT(*)
----------
      7004

********** Operation on Target system starts here **********

SQL@OGG_target > select count(*) from sdas.tgt_ogg_test;

  COUNT(*)
----------
         0

SQL@OGG_target > !
OGG_target $ cd $GGH
OGG_target $ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.17 17888650 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_131215.2328_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 16 2013 03:43:25

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.



GGSCI (target) 1> add rep TDEMO1I, exttrail /backup/gg_kt/ogg/11.2/dirdat/ia, nodbcheckpoint
REPLICAT added.


GGSCI (target) 2> start replicat TDEMO1I

Sending START request to MANAGER ...
REPLICAT TDEMO1I starting


GGSCI (target) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     TDEMO1I     00:00:00      00:00:01


GGSCI (target) 4> stats replicat TDEMO1I

Sending STATS request to REPLICAT TDEMO1I ...

Start of Statistics at 2015-08-07 08:08:46.

Replicating from SDAS.OGG_TEST to SDAS.TGT_OGG_TEST:

*** Total statistics since 2015-08-07 08:08:29 ***
        Total inserts                                   7004.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                7004.00

*** Daily statistics since 2015-08-07 08:08:29 ***
        Total inserts                                   7004.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                7004.00

*** Hourly statistics since 2015-08-07 08:08:29 ***
        Total inserts                                   7004.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                7004.00

*** Latest statistics since 2015-08-07 08:08:29 ***
        Total inserts                                   7004.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                7004.00

End of Statistics.


GGSCI (target) 5> send rep TDEMO1I, getlag

Sending GETLAG request to REPLICAT TDEMO1I ...
Last record lag: 359 seconds.
At EOF, no more records to process.


GGSCI (target) 6> stop rep TDEMO1I

Sending STOP request to REPLICAT TDEMO1I ...
Request processed.


GGSCI (target) 7> exit
OGG_target $ exit
exit

SQL@OGG_target > select count(*) from sdas.tgt_ogg_test;

  COUNT(*)
----------
      7004

********** Insert and update some values in the source ************

SQL@OGG_source > insert into sdas.ogg_test values (20202,'Selvakumar Nagulan',sysdate,'N',20202);

1 row created.

SQL@OGG_source > update sdas.ogg_test set emp_name='Priyanka' where emp_id=7000;

1 row updated.

SQL@OGG_source > commit;

Commit complete.

SQL@OGG_source > select count(*) from sdas.ogg_test;

  COUNT(*)
----------
      7005

*********** Check the changes in target which still didn't reflect here ***********
*********** Do the rest of OGG operations to enable real time sync ****************

SQL@OGG_target > select * from sdas.tgt_ogg_test where emp_id=7000;

EMP_ID     EMP_NAME                                           JOINDATE  S     SALARY
---------- -------------------------------------------------- --------- - ----------
7000       Priya                                              20-MAY-15 A       7000

SQL@OGG_target > select * from sdas.tgt_ogg_test where emp_id=20202;

no rows selected

SQL@OGG_target > !
OGG_target $ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.17 17888650 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_131215.2328_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 16 2013 03:43:25

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.



GGSCI (target) 1> dblogin userid ggs_dba password ggs_dba
Successfully logged into database.

GGSCI (target) 2> ADD REPLICAT DEMO1R, EXTTRAIL /backup/gg_kt/ogg/11.2/dirdat/pa, checkpointtable ggs_dba.CHECKPOINT
REPLICAT added.


GGSCI (target) 3> ALTER REPLICAT DEMO1R, BEGIN 2015-08-07 08:02:26
REPLICAT altered.


GGSCI (target) 4> start replicat DEMO1R

Sending START request to MANAGER ...
REPLICAT DEMO1R starting


GGSCI (target) 5> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     DEMO1R      00:06:19      00:00:08
REPLICAT    STOPPED     TDEMO1I     00:00:00      00:02:48


GGSCI (target) 6> stats rep DEMO1R

Sending STATS request to REPLICAT DEMO1R ...

Start of Statistics at 2015-08-07 08:12:03.

Replicating from SDAS.OGG_TEST to SDAS.TGT_OGG_TEST:

*** Total statistics since 2015-08-07 08:11:44 ***
        Total inserts                                      1.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

*** Daily statistics since 2015-08-07 08:11:44 ***
        Total inserts                                      1.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

*** Hourly statistics since 2015-08-07 08:11:44 ***
        Total inserts                                      1.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

*** Latest statistics since 2015-08-07 08:11:44 ***
        Total inserts                                      1.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

End of Statistics.


GGSCI (target) 7> send rep DEMO1R, ggetlag

Sending GGETLAG request to REPLICAT DEMO1R ...

ERROR: unknown request.

2015-08-07 08:12:28  WARNING OGG-01742  Command sent to REPLICAT DEMO1R returned with an ERROR response.


GGSCI (target) 8> send rep DEMO1R, getlag

Sending GETLAG request to REPLICAT DEMO1R ...
Last record lag: 379 seconds.
At EOF, no more records to process.


GGSCI (target) 9> stop rep DEMO1R

Sending STOP request to REPLICAT DEMO1R ...
Request processed.


GGSCI (target) 10> exit
OGG_target $ exit
exit

SQL@OGG_target > select count(*) from sdas.tgt_ogg_test;

  COUNT(*)
----------
      7005

SQL@OGG_target > select * from sdas.tgt_ogg_test where emp_id=7000;

EMP_ID     EMP_NAME                                           JOINDATE  S     SALARY
---------- -------------------------------------------------- --------- - ----------
7000       Priyanka                                           20-MAY-15 A       7000

SQL@OGG_target > select * from sdas.tgt_ogg_test where emp_id=20202;

EMP_ID     EMP_NAME                                           JOINDATE  S     SALARY
---------- -------------------------------------------------- --------- - ----------
20202      Selvakumar Nagulan                                 07-AUG-15 N      20202

SQL@OGG_target > !
OGG_target $ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.17 17888650 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_131215.2328_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 16 2013 03:43:25

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.



GGSCI (target) 1> edit params DEMO1R

********* Here remove the HANDLECOLLISIONS line from the param file and start the replicat **********

GGSCI (target) 2> start rep DEMO1R

Sending START request to MANAGER ...
REPLICAT DEMO1R starting


GGSCI (target) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     DEMO1R      00:00:00      00:00:06
REPLICAT    STOPPED     TDEMO1I     00:00:00      00:05:50


GGSCI (target) 4> exit
OGG_target $ exit
exit

********* The source and target are now in sync with real time replication enabled **********

SQL@OGG_source > insert into sdas.ogg_test values (30303,'Rafael Nadal',sysdate,'N',30303);

1 row created.

SQL@OGG_source > commit;

Commit complete.

SQL@OGG_source >

SQL@OGG_target >
SQL@OGG_target > select * from sdas.tgt_ogg_test where emp_id=30303;

EMP_ID     EMP_NAME                                           JOINDATE  S     SALARY
---------- -------------------------------------------------- --------- - ----------
30303      Rafael Nadal                                       07-AUG-15 N      30303

SQL@OGG_target >

Happy Replicating!!

OGG step by step process

In this post, let's look at the Oracle GoldenGate step-by-step process for replicating data between source and target databases. For the installation, prerequisites, and OGG setup, see this link.

Now that all the prerequisites and other setup are complete, we can start the actual GoldenGate replication. The data extraction and replication involve the steps below.

Start real time Extract process (Source)
Start Pump process (Source)
Start Initial extract (Source)
Start Initial replication (Target)
Start real time replication with handle collisions (Target)
Start real time replication without handle collisions (Target)

Let's see each step in action below.

1. Start real time Extract process (Source)
ggsci> ADD EXTRACT DEMO1E, TRANLOG, THREADS 2, BEGIN NOW
ggsci> ADD EXTTRAIL /backup/PRNETSDB/ogg/11.1/dirdat/ea, EXTRACT DEMO1E, Megabytes 200
ggsci> start extract DEMO1E
Explanation of commands:
  1. Add an extract DEMO1E with the transaction log as the data source. When TRANLOG is specified it must be accompanied by BEGIN; here we begin at the current time. THREADS is 2 to match the two-node cluster setup.
  2. Add a local trail file for extract DEMO1E, with each trail file limited to 200MB
  3. Start the extract
2. Start Pump process (Source)
ggsci> ADD EXTRACT DEMO1P, EXTTRAILSOURCE /backup/PRNETSDB/ogg/11.1/dirdat/ea
ggsci> ADD RMTTRAIL /backup/gg_kt/ogg/11.2/dirdat/pa, EXTRACT DEMO1P, Megabytes 100
ggsci> start extract DEMO1P
Explanation of commands:
  1. Add an extract DEMO1P that reads from the local trail created in the previous step
  2. Add a remote trail file to extract DEMO1P with size limited to 100MB
  3. Start the extract
3. Start Initial extract (Source)
ggsci> ADD EXTRACT DEMO1I, sourceistable
OGG_source $ nohup extract paramfile $GGH/dirprm/DEMO1I.prm reportfile $GGH/dirrpt/DEMO1I.rpt > DEMO1I.out 2>&1 &
Explanation of commands:
  1. Add an extract DEMO1I with the SOURCEISTABLE keyword, which means a table is the source for this extract. This is used for the initial load and is a one-time-run extract.
  2. Since we don’t know how long the extract will run, we run it as a background job so that it completes without interruption.
  3. Extract using the parameter file DEMO1I.prm and use DEMO1I.rpt as a report file.
4. Start Initial replication (Target)
ggsci> add rep TDEMO1I, exttrail /backup/gg_kt/ogg/11.2/dirdat/ia, nodbcheckpoint
ggsci> start replicat TDEMO1I
Explanation of commands:
  1. Add a replicat named TDEMO1I with the trail file name (this should match the RMTFILE name used by the initial extract at the source). NODBCHECKPOINT ensures that this replicat will not write checkpoints to a checkpoint table
  2. Start the replicat
5. Start real time replication with handle collisions (Target)
ggsci> dblogin userid ggs_dba password ggs_dba
ggsci> ADD REPLICAT DEMO1R, EXTTRAIL /backup/gg_kt/ogg/11.2/dirdat/pa, checkpointtable ggs_dba.CHECKPOINT
ggsci> ALTER REPLICAT DEMO1R, BEGIN 2014-12-23 04:30:03 -- (Initial extract start time, format YYYY-MM-DD HH:MI:SS)
ggsci> start replicat DEMO1R
Explanation of commands:
  1. Login to database via OGG as OGG user
  2. Add replicat DEMO1R with the trail file name (this should be the remote trail written by the pump extract at the source) and specify a CHECKPOINTTABLE table name to write checkpoint information.
  3. Alter the replicat to begin at a specific time. This is the time when the initial extract is started. You can note the time when extract is started or you can find this from the report file of the initial extract.
  4. Start the replicat
The parameter file will contain a keyword HANDLECOLLISIONS which is used to detect and resolve conflicts in data integrity. More on this keyword can be found in this oracle documentation link

6. Start real time replication without handle collisions (Target)
Now that all the initial data has been loaded into the target table and no more data is pending from the initial extract (see section 7 below for how to check status), we can restart the replicat to apply changes without collision detection by removing the HANDLECOLLISIONS keyword.
ggsci> stop replicat DEMO1R
ggsci> edit params DEMO1R
/* Remove the HANDLECOLLISIONS lines from DEMO1R.prm file
using edit option or with any other file editor */
ggsci> start replicat DEMO1R
Explanation of commands:
  1. Stop the replicat
  2. Edit the parameter file DEMO1R.prm and remove handlecollisions
  3. Start the replicat
7. Information/status about the processes
One can check the status of any process by using the below commands.
ggsci> info DEMO1E  -- (we will be able to see the Lag)
ggsci> stats extract DEMO1E -- (summary of records processed)
ggsci> info manager -- (or info mgr)
ggsci> send extract DEMO1E, getlag -- To get lag time
Happy replicating!!

OGG simple parameter files

In this post, we will see about the simple parameter files required for the Oracle GoldenGate replication operation.
For details on installation, prerequisites, set up and demo see this link

Parameter files are an important part of the OGG setup, as they provide the details required for the extract/replicat operations. Tasks can be separated into streams. For example, if we have 100 tables to be replicated, it is not necessary to include every table in a single parameter file; sets of tables can be segregated under different streams. Since we are dealing with only one table for demo purposes, we have one stream with one table, which requires 3 parameter files on the source and 2 on the target system.

I have used I, E, P and R terminologies to indicate whether it is Initial extract, Extract, Pump or Replicat parameter file.

1. Source Param files

Initial extract param file - DEMO1I.prm
EXTRACT DEMO1I
SOURCEISTABLE
setenv NLS_CHARACTERSET = "WE8MSWIN1252"
setenv NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252"
USERID ggs_dba, PASSWORD ggs_dba
RMTHOST 172.20.161.193, MGRPORT 12000
RMTFILE /backup/gg_kt/ogg/11.2/dirdat/ia, maxfiles 25, MEGABYTES 1000, PURGE
TABLE SDAS.OGG_TEST;
Extract param file - DEMO1E.prm
EXTRACT DEMO1E
setenv NLS_CHARACTERSET = "WE8MSWIN1252"
setenv NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252"
USERID ggs_dba, PASSWORD ggs_dba
THREADOPTIONS PROCESSTHREADS EXCEPT 2
EXTTRAIL /backup/PRNETSDB/ogg/11.1/dirdat/ea
DISCARDFILE /backup/PRNETSDB/ogg/11.1/dirrpt/DEMO1E.dsc, APPEND
REPORTCOUNT EVERY 100 RECORDS, RATE
GETTRUNCATES
TABLE SDAS.OGG_TEST;
Data pump param file - DEMO1P.prm
EXTRACT DEMO1P
setenv NLS_CHARACTERSET = "WE8MSWIN1252"
setenv NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252"
PASSTHRU
RMTHOST 172.20.161.193, MGRPORT 12000
RMTTRAIL /backup/gg_kt/ogg/11.2/dirdat/pa
GETTRUNCATES
TABLE SDAS.OGG_TEST;
The above .prm files should be created under $GGH/dirprm of the source server using vi or any other text editor.

Note: When the source database is an ASM instance, then Oracle GoldenGate change data capture requires ASM authentication for accessing the transaction logs. The extract parameter TRANLOGOPTIONS handles authentication to the ASM instance.

TRANLOGOPTIONS ASMUSER SYS@ASM_connect_string, ASMPASSWORD password [ENCRYPTKEY {keyname | DEFAULT}]
Eg: TRANLOGOPTIONS ASMUSER sysasm@ASMDB, ASMPASSWORD password

With Oracle GoldenGate and database version 11.2.0.4 and above, we can use the integrated capture method, which is preferred over the classic capture method explained in this demo. To learn more about integrated capture and apply methods, see this oracle documentation link

2. Target Param files

Replicat file - DEMO1R.prm
REPLICAT DEMO1R
setenv NLS_CHARACTERSET = "WE8MSWIN1252"
setenv NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252"
USERID ggs_dba, PASSWORD ggs_dba
DISCARDFILE /backup/gg_kt/ogg/11.2/dirrpt/DEMO1R.dsc, APPEND
REPORTCOUNT EVERY 100 RECORDS, RATE
ASSUMETARGETDEFS
HANDLECOLLISIONS
MAP SDAS.OGG_TEST, TARGET SDAS.TGT_OGG_TEST;
Initial Replicat file - TDEMO1I.prm
REPLICAT TDEMO1I
setenv NLS_CHARACTERSET = "WE8MSWIN1252"
setenv NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252"
USERID ggs_dba, PASSWORD ggs_dba
DISCARDFILE /backup/gg_kt/ogg/11.2/dirrpt/TDEMO1I.dsc, APPEND
REPORTCOUNT EVERY 100 RECORDS, RATE
ASSUMETARGETDEFS
MAP SDAS.OGG_TEST, TARGET SDAS.TGT_OGG_TEST;
The above .prm files should be created under $GGH/dirprm of the target server using vi command or any other text editors.

Happy replication!!

Enabling TRANDATA for Oracle GoldenGate

One of the prerequisites for Oracle GoldenGate replication is to enable TRANDATA for the tables or whole schema. For other prerequisites, installation and complete process see here.

Use ADD TRANDATA to enable Oracle GoldenGate to acquire the transaction information it needs from the transaction records. This can be done from either the ggsci or sqlplus command prompt.

Examples of both are shown below.
ggsci 1> dblogin userid ggs_dba Password ggs_dba
ggsci 2> add trandata SCHEMA.TABLE_NAME
or from the database as follows:
To specify an unconditional supplemental log group for PRIMARY KEY column(s):
SQL > ALTER TABLE SCHEMA.TABLE_NAME ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
To specify an unconditional supplemental log group that includes ALL TABLE columns:
SQL > ALTER TABLE SCHEMA.TABLE_NAME ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
To specify an unconditional supplemental log group that includes SELECTED columns:
SQL> ALTER TABLE SCHEMA.TABLE_NAME ADD SUPPLEMENTAL LOG GROUP t1_g1 (C1,C2) ALWAYS;
See this oracle documentation link for more details on why and how supplemental logging has to be enabled.
One can also use ADD SCHEMATRANDATA to enable supplemental logging for the whole schema and use exclusion parameter to exclude objects that are not required to be replicated. More about ADD SCHEMATRANDATA can be seen in this oracle documentation link
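As a sketch (using the demo schema name from this post; assuming the DBLOGIN user has the required privileges), enabling schema-level supplemental logging would look like:

```
GGSCI (source) 1> dblogin userid ggs_dba password ggs_dba
GGSCI (source) 2> add schematrandata sdas
```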

For our demo purposes, I’ll be replicating the OGG_TEST table under the SDAS user
GGSCI (source) 3> add trandata sdas.ogg_test

2015-08-06 10:08:53  WARNING OGG-00869  No unique key is defined for table OGG_TEST. All viable columns will be used to represent the key, but may not guarantee uniqueness.  KEYCOLS may be used to define the key.

Logging of supplemental redo data enabled for table SDAS.OGG_TEST.

This is because we don’t have a unique or primary key defined on the table. I’ll now alter the table to drop the supplemental log group, create a primary key on a column and then enable the supplemental log group again. Let’s see how it behaves.
SQL> alter table sdas.ogg_test drop SUPPLEMENTAL LOG GROUP GGS_OGG_TEST_39719;

Table altered.

SQL> alter table sdas.ogg_test add constraint ogg_test_pk primary key (emp_id) using index;

Table altered.
Now try adding trandata..
GGSCI (source) 5> add trandata sdas.ogg_test

Logging of supplemental redo data enabled for table SDAS.OGG_TEST.

Supplemental log data is now enabled using the primary key, with no warning raised.

Happy OGG set up!!

Oracle GoldenGate Installation

In this post, let's look at how to install Oracle GoldenGate as part of our demonstration. For other prerequisites, set up and process please see this link

Source:
Download the Oracle GoldenGate software from here (Choose correct version as per your server environment)
The file name we use here is p13087235_111110_SOLARIS64.zip
Copy it to the desired OGG home location, unzip the software bundle and untar the extracted file. Once untarred, all the binaries required to run Oracle GoldenGate are in place and the installation is considered complete.
$ cd $OGG
$ unzip p13087235_111110_SOLARIS64.zip
$ tar -xvf ggs_Solaris_sparc_ora9i_64bit.tar
Destination:
Download the Oracle GoldenGate software from here (Choose correct version as per your server environment)
The file name we use here is p17952585_1121017_Linux-x86-64.zip
Copy the software to the target server OGG home location and unzip the software bundle using unzip. Untar to complete installation.
$ cd $OGG
$ unzip p17952585_1121017_Linux-x86-64.zip
$ tar -xvf fbo_ggs_Linux_x64_ora11g_64bit.tar
Note: Until Oracle GoldenGate version 11.2, unzipping the bundle provides all the required executables/binaries for working with GoldenGate. Starting with Oracle GoldenGate version 12.1, the extracted files are used to install Oracle GoldenGate via runInstaller. The steps are as below.
$ unzip 121210_fbo_ggs_Linux_x64_shiphome.zip
$ ls -lrt
total 256
drwxr-xr-x 3 oracle dba   4096 Aug  7  2014 fbo_ggs_Linux_x64_shiphome
-rw-r--r-- 1 oracle dba   1540 Aug 11  2014 OGG-12.1.2.1.0-README.txt
-rw-r--r-- 1 oracle dba 246718 Aug 11  2014 OGG-12.1.2.1.0-ReleaseNotes.pdf
$ cd /oracle/U04/gg_home/12.1/fbo_ggs_Linux_x64_shiphome/Disk1/response
Edit all the required parameters (using any text editor) in the response file under /oracle/U04/gg_home/12.1/fbo_ggs_Linux_x64_shiphome/Disk1/response for the GoldenGate software installation.

Parameters required:
INSTALL_OPTION - specify whether for ORA12c or ORA11g
SOFTWARE_LOCATION - specify the GG software install location (OGG home location)
INVENTORY_LOCATION - specify the inventory location
UNIX_GROUP_NAME - specify unix group name

Other optional parameters are START_MANAGER, MANAGER_PORT and DATABASE_LOCATION.
Save the file oggcore.rsp and start the installation as below.
$ pwd
/oracle/U04/gg_home/12.1/fbo_ggs_Linux_x64_shiphome/Disk1
$ ./runInstaller -silent -responseFile /oracle/U04/gg_home/12.1/fbo_ggs_Linux_x64_shiphome/Disk1/response/oggcore.rsp

You would get something like below to say Software installation is successful.
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 14621 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2832 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-08-03_02-49-44PM. Please wait ...$
…
…
…
The installation of Oracle GoldenGate Core was successful.
Please check '/apps/app/oracle/oraInventory/logs/silentInstall2015-08-03_02-49-44PM.log' for more details.
Successfully Setup Software.
Happy Installation!

Enabling Archivelog

For a non-clustered database, archive logging can be enabled by the method below.

Set the below parameters if Flash Recovery Area is being used
SQL> ALTER SYSTEM SET db_recovery_file_dest='/u01/FRA/ORCL/'; -- starting from version 10g and above
SQL> ALTER SYSTEM SET db_recovery_file_dest_size=10G;

If you decide not to use FRA, then the below parameters should be set
SQL> ALTER SYSTEM SET log_archive_start=TRUE SCOPE=spfile; -- required for 9i
SQL> ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/ORCL/archive/' SCOPE=spfile;
SQL> ALTER SYSTEM SET log_archive_format='arch_%t_%s.arc' SCOPE=spfile;

Once the above has been set, perform the below.
SQL> SHUT IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

For a clustered database, follow the steps below.
Log on to any of the nodes in the cluster
$ sqlplus / as sysdba
Set the relevant archive parameters such as log_archive_dest_1 and log_archive_format as explained above and perform the below steps
SQL> ALTER SYSTEM SET cluster_database=FALSE SCOPE=spfile; -- required for DB version 10g and below
$ srvctl stop database -d DB_NAME
SQL> STARTUP MOUNT;
SQL> ARCHIVE LOG START; -- required for DB version 9i only
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=spfile; -- required for DB version 10g and below
SQL> SHUTDOWN IMMEDIATE;
$ srvctl start database -d DB_NAME

Now archive log has been turned on.

Happy logging!!

Thursday, August 6, 2015

Dropping a Goldengate user schema - ORA-20782

Dropping an Oracle Goldengate user schema involves dropping Goldengate related triggers first before schema drop.
The reason is the error we receive when trying to drop the schema directly.
ORA-00604: error occurred at recursive SQL level 2
ORA-20782: GoldenGate DDL Replication Error: Code :ORA-20782: Cannot DROP
object used in GoldenGate replication while trigger is enabled. Consult
GoldenGate documentation and/or call GoldenGate Technical Support if you wish
to do so., error stack: Error stack is avalaible only on Oracle 10.1 and above
ORA-06512: at line 951

We have 2 options to drop the Oracle GoldenGate user schema. Both are presented below and are straightforward.

Option 1:

Run the below 3 scripts as the SYSDBA user from the GoldenGate home directory, providing the OGG schema name as input

@ddl_disable.sql
@ddl_remove.sql
@marker_remove.sql

SQL> drop user ggs_dba cascade;
drop user ggs_dba cascade
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-20782: Oracle GoldenGate DDL Replication Error: Code :ORA-20782: Cannot
DROP object used in Oracle GoldenGate replication while trigger is enabled.
Consult Oracle GoldenGate documentation and/or call Oracle GoldenGate Technical
Support if you wish to do so., error stack: ORA-06512: at line 231
ORA-06512: at line 1030


SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$ cd $GGH
$ sqlplus  / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu Aug 6 06:42:11 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @ddl_disable.sql

Trigger altered.

SQL> @ ddl_remove.sql

DDL replication removal script.
WARNING: this script removes all DDL replication objects and data.

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.

Enter Oracle GoldenGate schema name:GGS_DBA
Working, please wait ...
Spooling to file ddl_remove_spool.txt

Script complete.
SQL> @marker_remove.sql

Marker removal script.
WARNING: this script removes all marker objects and data.

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.

Enter Oracle GoldenGate schema name:GGS_DBA

PL/SQL procedure successfully completed.


Sequence dropped.


Table dropped.


Script complete.
SQL>
SQL> drop user ggs_dba cascade;

User dropped.


Option 2:

Drop the trigger that is preventing the schema drop, then drop the user.
Use the below query to find the GGS-related trigger.
SELECT A.OBJ#, A.SYS_EVTS, B.NAME
FROM TRIGGER$ A,OBJ$ B
WHERE A.SYS_EVTS > 0
AND A.OBJ#=B.OBJ#
AND BASEOBJECT = 0;

Complete process is as below.
SQL> drop user GGS_DBA cascade;
drop user GGS_DBA cascade
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-20782: GoldenGate DDL Replication Error: Code :ORA-20782: Cannot DROP
object used in GoldenGate replication while trigger is enabled. Consult
GoldenGate documentation and/or call GoldenGate Technical Support if you wish
to do so., error stack: Error stack is avalaible only on Oracle 10.1 and above
ORA-06512: at line 951


SQL> select * from dba_triggers db where owner='GGS_DBA';

no rows selected

SQL> SELECT A.OBJ#, A.SYS_EVTS, B.NAME
FROM TRIGGER$ A,OBJ$ B
WHERE A.SYS_EVTS > 0
AND A.OBJ#=B.OBJ#
AND BASEOBJECT = 0;  2    3    4    5

      OBJ#   SYS_EVTS NAME
---------- ---------- ------------------------------
      5115        128 AW_DROP_TRG
      6261         16 SYS_LOGOFF
      6263          8 SYS_LOGON
      6265        128 NO_VM_DROP
      6267         32 NO_VM_CREATE
     39446     524256 GGS_DDL_TRIGGER_BEFORE

6 rows selected.

SQL> drop trigger GGS_DDL_TRIGGER_BEFORE;

Trigger dropped.

SQL> drop user GGS_DBA cascade;

User dropped.


Happy dropping!

Monday, July 27, 2015

Unix - Bits and Pieces Part 1

In this blog post series, I'll update the post at regular intervals with random Unix commands that might be of use to DBAs.

1. Trim Alert log contents
It becomes a tedious job when we have to deal with a huge alert log, and it gets worse if we have to dig through a month-old or week-old alert log for troubleshooting. The below command can be used to trim the contents of the alert log for a particular period of time.

# This is the format used
sed -n '/From date pattern / , /To date pattern/p' < input_file > output_file
# This is an example
sed -n '/Jul 23 / , /Jul 26/p' < alert_ORACLESID.log > 4days_trimmed.txt

2. Details of CPU available

Memory information for the different OS platforms is covered in this post:
Memory Information of different OS platforms
The details of the CPUs available can be found by using the commands below on Linux servers.
# -- gives number of CPU
nproc
# Details of CPU from /proc/cpuinfo in human readable format
lscpu
# Details from cpuinfo directly
cat /proc/cpuinfo

For HP-UX the below commands can be used
# For details of procs
ioscan -fnk|grep proc 
# For number of  procs
ioscan -k|grep processor|wc -l
# For details of overall system 
top
For AIX use any of the below.
prtconf -s
pmcycles -m
lsdev -Cc processor
bindprocessor -q

3. Create multiple directories under all directories

If we are required to create multiple directories inside all the available directories, the below command can be used.
For example, let's take a directory envmt that has 4 directories: dir1, dir2, dir3, dir4.
Now we need to create control, bad, param and log directories under each directory in envmt. The illustration is as follows.
$ cd envmt
$ ls -lrt
total 16
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:07 dir4
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:07 dir3
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:07 dir2
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:07 dir1
$ ls -l dir1
total 0
$ ls -l dir2
total 0
$ for dir in */; do mkdir -- "$dir"/{control,bad,param,log}; done
$ ls -l dir1
total 16
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:08 bad
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:08 control
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:08 log
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:08 param
$ ls -l dir2
total 16
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:08 bad
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:08 control
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:08 log
drwxr-xr-x 2 oracle dba 4096 Jul 27 17:08 param
Now control,bad,param,log directories are created in dir1, dir2, dir3, dir4 directories. This saves a lot of time while doing any job.

4. Search and replace in all files

The following command can be used to search and replace a word in all the files inside a directory. This is useful when we have stored configuration files in which every file should be updated with the same username, servername, etc.
# This is the format used
grep -rl 'search_string' ./ | xargs sed -i 's/search_string/replace_with_string/g'
# This is an example
grep -rl 'STDBCS01' ./ | xargs sed -i 's/STDBCS01/DVDBCS01/g'
If this find and replace is required for a single file, then it can be done by opening the file in vi editor
# open file
vi file_name
# Use below command to find and replace 
# g denotes global. Remove g if find and replace is required only for the particular line
:%s/search_string/replace_with_string/g
# save file and exit
:wq

5. Email attachment from servers

Email attachments can be sent with uuencode and the command is as below.
Tested in AIX and Linux
uuencode file1 file1 |mailx -s "Subject" nagulan.selvakumar@my.company.com
The below command is used to send multiple attachments in a single email
uuencode r1.tar.gz r1.tar.gz > /tmp/out.mail
uuencode r2.tar.gz r2.tar.gz >> /tmp/out.mail
uuencode r3.tar.gz r3.tar.gz >> /tmp/out.mail
echo "msg body"  >> /tmp/out.mail
mail -s "Reports" nagulan.selvakumar@my.company.com < /tmp/out.mail
We do have another package "mutt" to send emails which can be utilized as below.
Tested in Linux
echo "This is the message body" | mutt -a file_to_attach -s "subject of message" nagulan.selvakumar@my.company.com 

Happy working!

Tuesday, July 21, 2015

Job Intervals and Date Arithmetic

There are times when a DBA has to schedule jobs from inside the database, either via dbms_job or dbms_scheduler, to automate tasks. Most of the time the confusion is about how to schedule the tasks at the desired date and time. The expressions below can be used in the interval parameter to schedule a job at its desired time.

Every Second.
interval => 'SYSDATE +1/86400';

Every Minute.
interval => 'SYSDATE +1/1440';

Every Ten Minutes.
interval => 'SYSDATE +10/1440';

Every Half Hour.
interval => 'SYSDATE +30/1440';

Every Hour.
interval => 'SYSDATE +60/1440';

Every Day.
interval => 'SYSDATE +1';

Every Day with Fixed Time. ex: every night 8:00 pm.
interval => 'trunc(SYSDATE +1) + 20/24'

Every Midnight 12:00.
interval => 'trunc(SYSDATE +1)'

Every Week
interval => 'trunc(SYSDATE +7)'

Every Month
interval => 'add_months(SYSDATE, 1)'
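All of these intervals are fractions of a day: 1/1440 is one minute because a day has 1440 minutes, and 1/86400 is one second. As a sketch of how an interval plugs into a job definition (the procedure name my_stats_proc is hypothetical), an hourly dbms_job submission from SQL*Plus might look like:

```sql
VARIABLE jobno NUMBER
BEGIN
   DBMS_JOB.submit (
      job       => :jobno,
      what      => 'my_stats_proc;',      -- hypothetical procedure to run
      next_date => SYSDATE,               -- first run: now
      interval  => 'SYSDATE + 60/1440');  -- then every hour
   COMMIT;                                -- dbms_job changes require a commit
END;
/
```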

Happy scheduling!

Wednesday, July 15, 2015

MS SQL vs Oracle database operational differences

I was working on a project where I needed to migrate a database running on MS SQL Server 2000 to Oracle 11gR2. I used Oracle SQL Developer to convert code such as functions, procedures, etc., and had to make a lot of programming logic changes. I noticed a few differences between the two flavours of RDBMS, which I'll try to cover in this post.
This post covers only the technical coding differences, not general differences such as adaptability, licensing, usage, etc.

1. MS SQL 'IF EXISTS'

We have codes in MS SQL like
begin
if exists
(select * from TABLE_01 where COL_01 = @var_01)
insert into .. 
else 
do something ..
end if
end
In Oracle, the "if exists then update else insert" situation can be handled with the MERGE statement, but only when exactly two tables are involved.
An example of MERGE statement is as below
MERGE INTO TABLE_01 a
USING TABLE_02 b
 ON (a.first_name = b.first_name)
-- orabliss.blogspot.com
WHEN MATCHED
 THEN
  UPDATE
  SET a.last_name = b.last_name
WHEN NOT MATCHED
 THEN
  INSERT (
   first_name
   ,last_name
   ,school
   )
  VALUES (
   b.first_name
   ,b.last_name
   ,b.school
   );
You can't use more than 2 tables in a MERGE statement. Also, if the logic is not an "if exists then insert or update", Oracle can handle it as below.
DECLARE v_temp NUMBER(1,0) := 0;
BEGIN
 SELECT COUNT(*)
 INTO v_temp
 FROM TABLE_01
 WHERE KEY_COL = v_var_01; -- v_var_01 is value passed or already declared
 IF v_temp > 0 -- This becomes "IF EXISTS" of MS SQL
 Do some operation
 ELSE 
 Do other operation
 END IF;
END;
/

2. IDENTITY data type

In MS SQL database we have IDENTITY type which is basically a primary key sequentially inserted value column defined as below.
CREATE TABLE [dbo].[TABLE_01] (
 [SERIAL_ID] [SMALLINT] IDENTITY (1, 1) NOT NULL ,
 [COL_02] [CHAR] NOT NULL ,
 ..
 ..
 )
GO
Oracle does not have this functionality until release 12c; we have to create a sequence and a trigger, and use the trigger to fetch the next value from the sequence every time a new row is inserted into the table.
Starting with Oracle 12c, the IDENTITY column is implemented: it internally creates a sequence and increments it when a row is inserted.
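A minimal sketch of the pre-12c workaround and its 12c equivalent (table, column and sequence names here are hypothetical):

```sql
-- Pre-12c: sequence + trigger emulating IDENTITY
CREATE SEQUENCE serial_id_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER table_01_bi
BEFORE INSERT ON table_01
FOR EACH ROW
BEGIN
   :NEW.serial_id := serial_id_seq.NEXTVAL;  -- fetch next value on every insert
END;
/

-- 12c and above: IDENTITY column, sequence managed internally
CREATE TABLE table_01 (
   serial_id NUMBER GENERATED ALWAYS AS IDENTITY,
   col_02    CHAR(1) NOT NULL
);
```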

3. Data handling

We are fetching data into variable from a table to do some process. The predicate filter columns are not primary key columns.

MS SQL code
SELECT @Id_No = ID_NO
 ,@First_Name = FIRST_NAME
 ,@R_Code = R_CODE
 ,@Last_Name = LAST_NAME
FROM TABLE_01
WHERE CLASS_NO = @Class_No
 AND STD_QTY = @StdQty;

Oracle Code
SELECT ID_NO
 ,FIRST_NAME
 ,R_CODE
 ,LAST_NAME
INTO v_Id_No
 ,v_First_Name
 ,v_R_Code
 ,v_Last_Name
FROM TABLE_01
WHERE CLASS_NO = v_Class_No
 AND STD_QTY = v_StdQty;
Here we have three possible conditions:
1. We have a single row returned, as expected
2. We don't have any matching rows
3. We have more than 1 row for the combination of predicates

When we have the first condition satisfied, then both MS SQL and Oracle handles data in the same way.
When we have no matching rows, MS SQL by default assigns null values to all the variables, whereas in Oracle the code errors out with a NO_DATA_FOUND exception.
So here we need to handle the exception explicitly. One might think this is bad, but it is actually an advantage over MS SQL: the exception handler tells us whether we have data for this combination before we start processing it, as below.
EXCEPTION 
 WHEN NO_DATA_FOUND THEN
 -- handle exception stmt
Now coming to the third condition, when we have more than 1 row, MS SQL by default assigns the last fetched data to the variables ignoring all other values.
Oracle errors out with another exception TOO_MANY_ROWS.
If filter column is a primary key, we won't encounter this issue, but in our case it is not.
When one encounters TOO_MANY_ROWS exception (if expected), this can be handled through a loop statement to process the operation for every rows returned.
EXCEPTION
 WHEN TOO_MANY_ROWS THEN
  FOR i IN
  (SELECT ID_NO, FIRST_NAME, R_CODE, LAST_NAME
   FROM TABLE_01
   WHERE CLASS_NO = v_Class_No
    AND STD_QTY = v_StdQty)
  LOOP
   -- Do something with i.ID_NO, i.FIRST_NAME, etc.
  END LOOP;
If you are concerned only about particular fields and any one matching row will do, this can also be handled by limiting the query to a single (arbitrary) row as below.
SELECT ID_NO
 ,FIRST_NAME
 ,R_CODE
 ,LAST_NAME
INTO v_Id_No
 ,v_First_Name
 ,v_R_Code
 ,v_Last_Name
FROM TABLE_01
WHERE CLASS_NO = v_Class_No
 AND STD_QTY = v_StdQty 
 -- This will limit the code to fetch only one row
 and rownum =1;

4. Transaction Control

In MS SQL, when we run a block of statements/queries, each statement/query is treated as a single unit. For example, check the block below.
insert into table t1 values (v1);
insert into table t1 values (v2);
delete from t1 where field=v1;
exec some_proc;
We have four statements, and when we run this block in an MS SQL database each statement is committed individually, irrespective of whether the previous statement succeeded or failed. In Oracle, all four statements are treated as a single unit and must be committed (COMMIT) or rolled back (ROLLBACK) together for the changes to take effect. As queries are executed and commands are issued, changes are made only in memory, and nothing is committed until an explicit COMMIT statement is given (with the exception of DDL statements, where an immediate commit is issued after execution).

To make MS SQL group all the statements into a single transaction, BEGIN TRANSACTION with either COMMIT or ROLLBACK has to be specified as below.
BEGIN TRANSACTION
insert into table t1 values (v1);
insert into table t1 values (v2);
delete from t1 where field=v1;
exec some_proc;
COMMIT; -- or ROLLBACK
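For comparison, a sketch of the same block in Oracle (table, column and procedure names are the placeholders used above): nothing is permanent until the final COMMIT, and a ROLLBACK undoes all of it.

```sql
BEGIN
   INSERT INTO t1 VALUES (v1);
   INSERT INTO t1 VALUES (v2);
   DELETE FROM t1 WHERE field = v1;
   some_proc;
   COMMIT;        -- all four changes become permanent together
EXCEPTION
   WHEN OTHERS THEN
      ROLLBACK;   -- on any error, undo all of them
      RAISE;
END;
/
```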

5. Find and replace

In MS SQL, the STUFF function (which deletes part of a string and inserts another string in its place) serves the find-and-replace task and is used in many places.
Oracle's REGEXP_REPLACE function is even more powerful than MS SQL's STUFF, as the pattern-based control it gives the user over how to find and replace is very useful in complex programming.
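A quick side-by-side sketch (the input strings are made up for illustration):

```sql
-- MS SQL: from position 4, delete 2 characters and insert 'DEF'
SELECT STUFF('ABC--XYZ', 4, 2, 'DEF');                        -- ABCDEFXYZ

-- Oracle: replace every run of digits with a single '#'
SELECT REGEXP_REPLACE('a1b22c333', '[0-9]+', '#') FROM dual;  -- a#b#c#
```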

6. Objects

MS SQL stored procedures can return values, whereas Oracle procedures do not return values; we use OUT or IN OUT arguments to get the required output. Oracle FUNCTIONs serve the same purpose as procedures and can return values.
In Oracle, a group of procedures and functions can be collectively put into a PACKAGE, whereas in MS SQL this can't be done.
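A minimal sketch of an Oracle package specification grouping a function and a procedure (all names hypothetical):

```sql
CREATE OR REPLACE PACKAGE emp_pkg AS
   FUNCTION  get_salary (p_emp_id IN NUMBER) RETURN NUMBER;        -- returns a value
   PROCEDURE set_salary (p_emp_id IN NUMBER, p_salary IN NUMBER);  -- no return value
END emp_pkg;
/
```

Callers reference the members as emp_pkg.get_salary(...), and the package body holds the implementations.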

7. Miscellaneous

a) There are a few data type differences that have to be taken care of while converting from an MS SQL to an Oracle database. A few noticeable differences are in this Oracle document link.

b) User-defined error messages can be created using the sp_addmessage stored procedure in MS SQL. In Oracle, user messages don't need to be created separately; they can be raised as user-defined exceptions with error numbers in the range -20000 to -20999.
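A sketch of raising such a user-defined error in Oracle (the error number and message are chosen arbitrarily within the allowed range):

```sql
BEGIN
   IF 1 = 1 THEN  -- stand-in for some business-rule check
      RAISE_APPLICATION_ERROR(-20001, 'Custom validation failed');
   END IF;
END;
/
-- The caller sees: ORA-20001: Custom validation failed
```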

The more we share, the more we learn. I'll update the post as and when I get additional differences while working on the flavours again.

Happy working!

Monday, July 13, 2015

Query elapsed time

Many times, from a performance point of view, we would like to know the elapsed time of a query, perhaps for comparison purposes or for documenting the elapsed time in good and bad scenarios. The following piece of code will get you the elapsed time for any query.
This code calls the dbms_utility.get_time function before and after the query is run to compute the elapsed time. Make sure you run the whole block at once so that no extra time is added to the difference.
set serveroutput on
variable n number
exec :n := dbms_utility.get_time
-- Place your actual query here
select * from dual;
-- Query ends here
begin
   dbms_output.put_line
   ( (round((dbms_utility.get_time - :n)/100,2)) || ' seconds' );
end;
/

Reference oracle_document and Asktom site

Happy Working!

Friday, July 10, 2015

Datapump Export using Datapump API

The Data Pump API, DBMS_DATAPUMP, provides a high-speed mechanism to move all or part of the data and metadata for a site from one database to another. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.

In this post lets see how to perform a schema level export using the datapump API.
There are a few reasons (that I can think of right now) to perform export/import through the API rather than the command-line interface:
1. When you don't have access to database server to perform an export but you only have access to the database and you know the pre created directory object.
2. When you perform a remote operation on a database from another server (say an application server) so you would like to perform export from the same connection
3. When you would like to schedule a database job from within the database itself without the necessity to create OS scripts to invoke export.

As a prerequisite, to perform an export using the Datapump API, the exp_full_database role has to be granted to the user directly rather than through a role. (I could not find a citation for this, but granting it through a role failed in my tests.)
I've also modified the API code to accept the schema name to be exported as input, so that we can store it as a procedure and invoke it as and when required. The code is as below.
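For example, the setup would look like the following (the user name and directory path are assumptions for illustration; the DATAPUMP directory object must match the one referenced later):

```sql
-- Hypothetical user name; grant the role directly, not via another role
GRANT exp_full_database TO scott;

-- The directory object used by the export must exist;
-- the OS path below is an assumption for illustration
CREATE OR REPLACE DIRECTORY datapump AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY datapump TO scott;
```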
CREATE OR REPLACE PROCEDURE Proc_bkp_schema 
-- orabliss.blogspot.com
(v_schema IN VARCHAR2)
IS
   dp_handle    NUMBER;
   job_status   VARCHAR2 (30);
   v_dt         NUMBER;
   v_sch_name   VARCHAR2(30);
   v_filename   VARCHAR2(30);
   v_logname    VARCHAR2(30);
BEGIN
   SELECT TO_NUMBER (TO_CHAR (SYSDATE, 'yyyymmdd')) INTO v_dt FROM DUAL;
   v_sch_name := 'IN ('''||v_schema||''')';
   v_filename := v_schema||'_'||v_dt||'.dmp'; -- dumpfile name
   v_logname  := v_schema||'_'||v_dt||'.log'; -- logfile name

   -- schema export mode
   dp_handle := DBMS_DATAPUMP.open (operation => 'EXPORT', job_mode => 'SCHEMA');

   -- dump file
   DBMS_DATAPUMP.add_file (
      handle      => dp_handle,
      filename    => v_filename,
      directory   => 'DATAPUMP',
      filetype    => SYS.DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

   -- log file
   DBMS_DATAPUMP.add_file (
      handle      => dp_handle,
      filename    => v_logname,
      directory   => 'DATAPUMP',
      filetype    => SYS.DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

   -- specify schema name
   DBMS_DATAPUMP.metadata_filter (handle   => dp_handle,
                                  name     => 'SCHEMA_EXPR',
                                  VALUE    => v_sch_name);

   DBMS_DATAPUMP.start_job (dp_handle);

   DBMS_DATAPUMP.wait_for_job (handle => dp_handle, job_state => job_status);

   DBMS_OUTPUT.put_line (
         'DataPump Export - '
      || TO_CHAR (SYSDATE, 'DD/MM/YYYY HH24:MI:SS')
      || ' Status '
      || job_status);

   DBMS_DATAPUMP.detach (handle => dp_handle);
END;
/

The procedure can be called as below.
begin PROC_BKP_SCHEMA ('ORACLE'); end;
/

The procedure above accepts a single schema as input. The same procedure can be altered to accept many users through a FOR loop and export several schemas.
You might also make use of metadata_filter with 'NAME_EXPR' to export only the required tables, and data_filter to restrict the export to particular partitions of a table.
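As a sketch of the NAME_EXPR variant (the table names here are assumptions for illustration), the extra filter call inside the same procedure would look like:

```sql
-- Hypothetical addition to the procedure above: restrict the export
-- to specific tables within the schema (table names are assumed)
DBMS_DATAPUMP.metadata_filter (
   handle      => dp_handle,
   name        => 'NAME_EXPR',
   VALUE       => 'IN (''EMP'', ''DEPT'')',
   object_path => 'TABLE');
```

Note the doubled single quotes, since the table list is passed as a string expression, just as the schema list is in v_sch_name above.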

Happy working!

Create and manage ACL

Oracle allows access to external network services through several PL/SQL APIs (UTL_TCP, UTL_SMTP, UTL_MAIL, UTL_HTTP and UTL_INADDR), all of which are implemented on top of the TCP protocol. Sometimes, when configuring jobs inside the database to send emails from within the database, we get the following error.

ORA-24247: network access denied by access control list (ACL)

The reason is that the user is not authorised by Oracle to send emails. In this case one should create an ACL (Access Control List) and grant permission to access the external network or send emails. Oracle 11g introduced fine-grained access to network services using access control lists (ACLs) in the XML DB repository, allowing control over which users access which network resources, regardless of package grants. So, as a prerequisite, the XML DB component has to be installed in the database.

The following code creates an ACL and grants permission to a user.

BEGIN
  DBMS_NETWORK_ACL_ADMIN.create_acl (
    acl          => 'email_access.xml', -- ACL file name
    description  => 'Email access to users', -- Description
    principal    => 'USER', -- username
    is_grant     => TRUE, 
    privilege    => 'connect',
    start_date   => SYSTIMESTAMP,
    end_date     => NULL);
  COMMIT;
END;
/

BEGIN
  DBMS_NETWORK_ACL_ADMIN.assign_acl (
    acl         => 'email_access.xml',
    host        => '*', -- all hosts, you can specify which hosts can be accessed
    lower_port  => NULL,
    upper_port  => NULL); 

 COMMIT;
END;
/

Now, after creating the ACL, granting it to the user and assigning the hosts, the user can send emails from inside the database without issue.

To set the email server, the following code can be issued.

-- replace localhost with mail server name
execute dbms_scheduler.set_scheduler_attribute('email_server','localhost');

We can now confirm whether the user is able to send emails from the database with the following block. Connect as the granted user and run this:

DECLARE
   v_mailserver_host VARCHAR2(30) := 'localhost'; -- replace with mail server name
   v_mailserver_port PLS_INTEGER  := 25; -- 25 is the default SMTP port
   l_mail_conn       UTL_SMTP.CONNECTION;
BEGIN
   l_mail_conn := UTL_SMTP.OPEN_CONNECTION (v_mailserver_host, v_mailserver_port);
   UTL_SMTP.QUIT (l_mail_conn); -- close the connection once the test succeeds
END;
/

If the block completes without an ORA-24247 error, the user can reach the mail server and emails are sent without any issues.

If we need to add another user, 'USER2', to the ACL, the following can be issued in the database.

  exec DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE('email_access.xml' ,'USER2', TRUE, 'connect');
  exec DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE('email_access.xml' ,'USER2', TRUE, 'resolve');

Now users USER and USER2 will be able to send emails from inside the database using PL/SQL APIs listed above.
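To verify which ACLs, hosts and privileges are in place, the standard data dictionary views can be queried, for example:

```sql
-- List ACL-to-host assignments
SELECT acl, host, lower_port, upper_port
FROM   dba_network_acls;

-- List principals and privileges granted in each ACL
SELECT acl, principal, privilege, is_grant
FROM   dba_network_acl_privileges;
```

This is a quick way to confirm that both USER and USER2 appear against email_access.xml with the expected privileges.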

Happy working!