
Tuesday, July 31, 2018

Filtering by date with GoldenGate

Today, while trying to break a big table into chunks to speed up GoldenGate initial loading, I tried to make use of the time-based partitioning already available on the table. The table is partitioned weekly, so if I filter rows on the partition key, each chunk can be extracted cheaply through partition pruning. I have to use a classic extract, since the database we are upgrading is on 11.2.0.2, which doesn't support integrated extract.
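Before carving out chunks, it's worth confirming that a date predicate actually prunes. A minimal sketch, assuming creation_date is the partitioning key of those weekly partitions:

explain plan for
  select count(*) from ccedba.customer
   where creation_date >= to_date('2018-06-15', 'yyyy-mm-dd')
     and creation_date <  to_date('2018-06-22', 'yyyy-mm-dd');

-- Pstart/Pstop in the plan output should point at a single weekly partition
select * from table(dbms_xplan.display);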

The technical environment is as follows:
==> [aiacl00378] [oracce] [DVCCE1DB] $ggsci -v

Oracle GoldenGate Command Interpreter for Oracle
Version 12.3.0.1.4 OGGCORE_12.3.0.1.0_PLATFORMS_180415.0359_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr 15 2018 21:16:09

Copyright (C) 1995, 2018, Oracle and/or its affiliates. All rights reserved.


==> [aiacl00378] [oracce] [DVCCE1DB] $uname -a
Linux aiacl00378 2.6.32-642.3.1.el6.x86_64 #1 SMP Sun Jun 26 18:16:44 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
As per Oracle support note Doc ID 1458966.1, if you are trying to filter records based on a date column, the following can be used (the 'JUL' output format makes @DATE return a Julian value, so the two dates are compared numerically):

TABLE {source table},
FILTER (@COMPUTE (@DATE("JUL", "YYYY-MM-DD", creation_time) >=
@DATE("JUL", "YYYY-MM-DD", "2012-01-01")) > 0);

My extract parameter file, written along the lines of the note, was as below.
EXTRACT IEXT1
setenv NLS_CHARACTERSET="AL32UTF8"
setenv NLS_LANG="AMERICAN_AMERICA.AL32UTF8"
setenv ORACLE_SID="DV1DB"
USERID ggs_dba, PASSWORD ggs_dba
RMTHOST ********, MGRPORT 12000
RMTFILE /u01/app/oracle/product/gg12.3/dirdat/c1, megabytes 2000, PURGE
table ccedba.CUSTOMER, filter(@compute(@date('YYYY-MM-DD HH:MI:SS','DD-MMM-YY HH:MI:SS',creation_date) < @date('YYYY-MM-DD HH:MI:SS','DD-MMM-YY HH:MI:SS','22-JUN-18 00:00:00'))>0);

When started, this extract pulled no records at all.
Excerpt from the report file the extract process generated:

***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************


Report at 2018-07-30 07:37:11 (activity since 2018-07-30 07:37:02)

Output to /u01/app/oracle/product/gg12.3/dirdat/c1:

No records extracted.

The database had a few thousand records matching this filter.
SQL> select count(*) from ccedba.customer
  2  where creation_date < to_date('2018-06-22 00:00:00', 'yyyy-mm-dd hh24:mi:ss');

  COUNT(*)
----------
      3444

SQL> 
After trying different ways to make the above work, I came across another support note, Doc ID 966966.1, which discusses the default date formats in both Oracle and GoldenGate.

So why not use the date format GoldenGate defaults to, so that the records get filtered? And yes, it worked like a charm.
The parameter file looks like the below.
EXTRACT IEXT1
setenv NLS_CHARACTERSET="AL32UTF8"
setenv NLS_LANG="AMERICAN_AMERICA.AL32UTF8"
setenv ORACLE_SID="DV1DB"
USERID ggs_dba, PASSWORD ggs_dba
RMTHOST ********, MGRPORT 12000
RMTFILE /u01/app/oracle/product/gg12.3/dirdat/c1, megabytes 2000, PURGE
table ccedba.customer, WHERE (CREATION_DATE < '2018-06-22 00:00:00');

Excerpt from the report file generated for the above parameter file.
2018-07-30 09:02:03  INFO    OGG-01478  Output file /u01/app/oracle/product/gg12.3/dirdat/c1 is using format RELEASE 12.3.

2018-07-30 09:02:03  INFO    OGG-02911  Processing table CCEDBA.CUSTOMER.

***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************


Report at 2018-07-30 09:02:06 (activity since 2018-07-30 09:01:58)

Output to /u01/app/oracle/product/gg12.3/dirdat/c1:

From Table CCEDBA.CUSTOMER:
       #                   inserts:      3444
       #                   updates:         0
       #                   deletes:         0
       #                  discards:         0


Look at the WHERE clause used for the table extraction. It really was that easy: all I changed was presenting the date in the 'YYYY-MM-DD HH:MI:SS' format.
I'm not sure whether the same approach works with GoldenGate versions below 12.3, as I didn't test them.
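In hindsight, the original FILTER would probably also have worked if its input mask had matched that default trail format. An untested sketch, following the pattern of the support note quoted above (the mask strings are my assumption):

table ccedba.customer, filter(@compute(@date('JUL','YYYY-MM-DD HH:MI:SS',creation_date) < @date('JUL','YYYY-MM-DD HH:MI:SS','2018-06-22 00:00:00'))>0);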

Note:
I had tried options such as FILTER, SQLPREDICATE, and WHERE clauses with various combinations of date formats before arriving at this simple solution. With the format sorted out, the chunking idea from the top of the post also falls into place, as sketched below.
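Splitting the initial load into parallel extracts over non-overlapping date ranges lets each SELECT prune to its own weekly partitions. A sketch with hypothetical extract names; each range would go into its own parameter file:

-- extract IEXTA (hypothetical): rows before the boundary date
table ccedba.customer, WHERE (CREATION_DATE < '2018-06-22 00:00:00');

-- extract IEXTB (hypothetical): rows from the boundary date onwards
table ccedba.customer, WHERE (CREATION_DATE >= '2018-06-22 00:00:00');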

Happy Filtering!!!

Thursday, April 26, 2018

ORA_ROWSCN, SCN_TO_TIMESTAMP and ROWDEPENDENCIES

In one of the applications I work on, the developers used the ORA_ROWSCN pseudocolumn to filter records for deletion, as below, and ran into an error.

delete from FOCUS_TRANS where scn_to_timestamp(ora_rowscn) < sysdate - :1

ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1
08181. 00000 - "specified number is not a valid system change number"
*Cause: supplied scn was beyond the bounds of a valid scn.
*Action: use a valid scn.


Though I'm not a big fan of using pseudocolumns as record-selection criteria, I thought I'd study this a bit. Here are the things one should think about when using ORA_ROWSCN.

ORA_ROWSCN reflects the system change number (SCN) of the most recent change to a row. The granularity is one of two levels: block (coarse) or row (fine-grained); we'll come back to this later. You can find more details on ORA_ROWSCN in the Oracle documentation.

Now, why does the query fail with this error? An SCN is stored for every record that was inserted or updated, so converting that SCN to a timestamp with the SCN_TO_TIMESTAMP function shouldn't be a problem, should it?

Here is the catch with SCN_TO_TIMESTAMP, a note from the Oracle documentation:

"The association between an SCN and a timestamp when the SCN is generated is remembered by the database for a limited period of time. This period is the maximum of the auto-tuned undo retention period, if the database runs in the Automatic Undo Management mode, and the retention times of all flashback archives in the database, but no less than 120 hours. The time for the association to become obsolete elapses only when the database is open. An error is returned if the SCN specified for the argument to SCN_TO_TIMESTAMP is too old."

The above means Oracle keeps the SCN-to-timestamp association only for a limited window: the auto-tuned undo retention (when the database runs in automatic undo management) or the flashback archive retention, whichever is greater, but never less than 120 hours (5 days). Once a row's SCN falls outside this window, it is out of range for SCN_TO_TIMESTAMP, and hence the error.
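You can get a feel for this window on your own database. A rough check, using the internal sys.smon_scn_time mapping table (internal, so not a supported API) alongside v$undostat:

-- oldest and newest times the database can still map to an SCN
select min(time_dp) oldest_mapped, max(time_dp) newest_mapped
  from sys.smon_scn_time;

-- current auto-tuned undo retention, converted from seconds to hours
select max(tuned_undoretention)/3600 tuned_hours from v$undostat;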

Let's see which records are causing the error in this database.

I'm going to find the minimum SCN for which the database still holds a timestamp mapping.
SQL> select min(scn) min_scn from sys.SMON_SCN_TIME;

        MIN_SCN
---------------
 13777748906536

So converting any SCN older than this minimum should throw the error.
In the table, here are the details.
SQL> select * from (select ora_rowscn, customer_number from FOCUS_TRANS
  2  where ora_rowscn < 13777748906536 order by ora_rowscn desc) where rownum < 6;

     ORA_ROWSCN CUSTOMER_NUMBER
--------------- --------------------
 13777629123985 00000000000019982791
 13777629123983 00000000000023280153
 13777572276494 00000000000023280145
 13777572273829 00000000000023283108
 13777572273825 00000000000023283108

SQL>

Now, the first of these values, being older than the minimum mapped SCN, should throw the error when we attempt to convert it to a timestamp.
SQL> select ora_rowscn, scn_to_timestamp(ora_rowscn), customer_number, insert_date_time from FOCUS_TRANS
  2  where ora_rowscn = 13777629123985;
select ora_rowscn, scn_to_timestamp(ora_rowscn), customer_number, insert_date_time from FOCUS_TRANS
                   *
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1


SQL>

So we found which records (out of range) cause this error. If you're wondering whether a value within the range converts fine, let's try that too.
SQL> select * from (select ora_rowscn, customer_number from FOCUS_TRANS
  2  where ora_rowscn >= 13777748906536 order by ora_rowscn) where rownum < 3;

     ORA_ROWSCN CUSTOMER_NUMBER
--------------- --------------------
 13777817297849 00000000000024408803
 13777983336842 00000000000024855932

SQL> select ora_rowscn, scn_to_timestamp(ora_rowscn), customer_number, insert_date_time from FOCUS_TRANS
  2  where ora_rowscn = 13777817297849;

     ORA_ROWSCN SCN_TO_TIMESTAMP(ORA_ROWSCN)                                                CUSTOMER_NUMBER      INSERT_DA
--------------- --------------------------------------------------------------------------- -------------------- ---------
 13777817297849 14-APR-18 11.59.33.000000000 AM                                             00000000000024408803 19-DEC-17

SQL>

That worked :)

Now, coming back to why I don't want the developers to use ORA_ROWSCN as the selection criterion in this case:

First:
This approach works only when the insert, update, and delete all happen within 5 days (or, more precisely, within the window governed by tuned_undoretention); beyond that it fails, and you can't do anything about it. A safer variant is sketched below.
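If the pattern must stay, one way to sidestep ORA-08181 is to convert the cutoff once, on the constant side, rather than converting every row's SCN. A sketch, valid only while sysdate - :1 itself still falls inside the mapping window (TIMESTAMP_TO_SCN raises the same error once it doesn't):

-- compare raw SCNs; the cutoff is converted exactly once
delete from FOCUS_TRANS
 where ora_rowscn < timestamp_to_scn(cast(sysdate - :1 as timestamp));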

Second:
For ORA_ROWSCN to work at row-level granularity, the table must have been created with ROWDEPENDENCIES.

Let's take the example below to see how ROWDEPENDENCIES works. Note the difference between the two CREATE TABLE statements, and how SCN_TO_TIMESTAMP behaves once ROWDEPENDENCIES is enabled.
SQL> create table rowscn_test (id number, name varchar2(20));

Table created.

SQL> insert into rowscn_test values (1, 'Selva');

1 row created.

SQL> insert into rowscn_test values (2, 'Kumar');

1 row created.

SQL> commit;

Commit complete.


SQL> select ora_rowscn, id, name from rowscn_test;

ORA_ROWSCN         ID NAME
---------- ---------- --------------------
6.8198E+10          1 Selva
6.8198E+10          2 Kumar

SQL> select scn_to_timestamp(ora_rowscn), id, name from rowscn_test;

SCN_TO_TIMESTAMP(ORA_ROWSCN)
---------------------------------------------------------------------------
        ID NAME
---------- --------------------
25-APR-18 06.36.07.000000000 AM
         1 Selva

25-APR-18 06.36.07.000000000 AM
         2 Kumar


SQL> update rowscn_test set name='SELVA' where id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select scn_to_timestamp(ora_rowscn), id, name from rowscn_test;

SCN_TO_TIMESTAMP(ORA_ROWSCN)
---------------------------------------------------------------------------
        ID NAME
---------- --------------------
25-APR-18 06.37.49.000000000 AM
         1 SELVA

25-APR-18 06.37.49.000000000 AM
         2 Kumar


SQL> create table rowscn_test_rld (id number, name varchar2(20)) rowdependencies;

Table created.

SQL> insert into rowscn_test_rld values (1, 'Selva');

1 row created.

SQL> insert into rowscn_test_rld values (2, 'Kumar');

1 row created.

SQL> commit;

Commit complete.

SQL> select scn_to_timestamp(ora_rowscn), id, name from rowscn_test_rld;

SCN_TO_TIMESTAMP(ORA_ROWSCN)
---------------------------------------------------------------------------
        ID NAME
---------- --------------------
25-APR-18 06.42.19.000000000 AM
         1 Selva

25-APR-18 06.42.19.000000000 AM
         2 Kumar


SQL> update rowscn_test_rld set name='SELVA' where id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select scn_to_timestamp(ora_rowscn), id, name from rowscn_test_rld;

SCN_TO_TIMESTAMP(ORA_ROWSCN)
---------------------------------------------------------------------------
        ID NAME
---------- --------------------
25-APR-18 06.42.52.000000000 AM
         1 SELVA

25-APR-18 06.42.19.000000000 AM
         2 Kumar

Now you can see why this matters at design time: if the table is created without ROWDEPENDENCIES, every row in a block carries the block's most recent SCN, so a selection on ORA_ROWSCN will also pick up rows that don't really meet the criteria. In my case, the table is created with ROWDEPENDENCIES :)
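If you're unsure how a table was created, dba_tables shows it; as far as I know, ROWDEPENDENCIES cannot be switched on afterwards with ALTER TABLE, so the table would have to be rebuilt:

-- DEPENDENCIES reads ENABLED when the table was created with ROWDEPENDENCIES
select table_name, dependencies
  from dba_tables
 where table_name = 'FOCUS_TRANS';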


Hope this post helps you understand ORA_ROWSCN and ROWDEPENDENCIES better.

Happy SCNing !!! :)

Wednesday, February 28, 2018

Dropping pluggable database

This is a small post on how to drop a pluggable database when you are on the multitenant architecture of Oracle Database 12c.
The commands are below, and the explanations follow the command output.
[oracle@xxxx ~]$ dblogin

SQL*Plus: Release 12.1.0.2.0 Production on Tue Feb 20 01:56:45 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Enter value for container:
old   1: alter session set container=&container
new   1: alter session set container=
alter session set container=
                           *
ERROR at line 1:
ORA-65015: missing or invalid container name


SQL> select name, TOTAL_MB, FREE_MB, CON_ID from  v$asm_diskgroup;

NAME                             TOTAL_MB    FREE_MB     CON_ID
------------------------------ ---------- ---------- ----------
RECOD1                           59731200   53564384          0
DBFS_DG                           1030816    1012720          0
DATAD1                          209018880   99355460          0

SQL> sho pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
  ..
  ..
        16 QADB26                         READ WRITE NO
        17 PTETFSA                        READ WRITE NO
        18 DVBSODB                        READ WRITE NO
  ..
  ..
  
SQL> alter pluggable database PTETFSA close;
^Calter pluggable database PTETFSA close
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation

SQL>  alter pluggable database PTETFSA close immediate;

Pluggable database altered.

SQL> DROP PLUGGABLE DATABASE ptetfsa INCLUDING DATAFILES;

Pluggable database dropped.

SQL> select name, TOTAL_MB, FREE_MB, CON_ID from  v$asm_diskgroup;

NAME                             TOTAL_MB    FREE_MB     CON_ID
------------------------------ ---------- ---------- ----------
RECOD1                           59731200   53564384          0
DBFS_DG                           1030816    1012720          0
DATAD1                          209018880  101575024          0

SQL>

Explanations are as below.

dblogin is an alias defined in my profile file:
alias dblogin='sqlplus sys/***** as sysdba @set_con'

set_con is a SQL script with the following line:
alter session set container=&container;

If you don't supply a PDB name at the prompt, the ALTER SESSION fails with ORA-65015 and the session simply stays in the root container, which is where we need to be for this operation anyway.

The v$asm_diskgroup queries check the space in the ASM disk groups before and after the drop; note the FREE_MB of DATAD1 going up once the datafiles are gone.

The first ALTER PLUGGABLE DATABASE ... CLOSE took too long, so I cancelled it (hence the ORA-01013) and used CLOSE IMMEDIATE instead. As this is a RAC setup, the pluggable database has to be closed on all instances before it can be dropped.

The INCLUDING DATAFILES clause of DROP PLUGGABLE DATABASE is required to delete the files associated with the pluggable database; the default is KEEP DATAFILES.
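For reference, the same sequence as a compact sketch; the INSTANCES=ALL clause is my assumption for the RAC-wide close, as the transcript above was taken from a single instance:

-- connect to the root container as SYSDBA
alter session set container = CDB$ROOT;
-- close the PDB across all RAC instances
alter pluggable database PTETFSA close immediate instances=all;
-- drop the PDB and remove its datafiles (the default is KEEP DATAFILES)
drop pluggable database PTETFSA including datafiles;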


Happy dropping!!