Thursday, December 29, 2022
Troubleshooting Kerberos and Oracle CMU configuration
Monday, December 12, 2022
Performance slowness due to Spatial component
We had a complaint from the application team that their database was running slow after the quarterly patching. We had patched many databases, and only one of them was complaining of slowness. The environment is ExaCC and the database is a container (running version 19.16) with multiple PDBs, so the initial thought was that patching couldn't be the reason. So we stepped in to investigate what was causing the slowness.
As a first step, we pulled an AWR report for the time frame when they ran some DB operations.
We can see from the above that most of the waits are related to Cluster, and topping the list are gc cr multi block mixed, gc cr block 2-way and gc buffer busy acquire.
So this means some query is running very frequently, accessing 1 or 2 table blocks continuously. We need to figure out which query is running, what table/index it accesses, how frequently it runs, and what the elapsed time of the query is. This information will let us move forward.
Since the waits are related to Cluster, let's look at the SQL ordered by Cluster Wait Time (Global) section under SQL Statistics.
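The same ranking can also be pulled straight from the AWR base tables if you prefer a query over the report; a minimal sketch (the snapshot range is illustrative):

```sql
-- Top SQL by cluster wait time between two AWR snapshots
-- (snap_id range 100-101 is illustrative; adjust to your window)
select sql_id,
       sum(cluster_wait_time_delta) / 1e6 as cluster_wait_secs,
       sum(executions_delta)              as execs
from   dba_hist_sqlstat
where  snap_id between 100 and 101
group  by sql_id
order  by 2 desc
fetch first 10 rows only;
```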
SELECT /* chk_journal_reg */ rowid FROM "MDSYS"."SDO_TXN_JOURNAL_REG" WHERE sid = :1 AND sdo_txn_idx_id = :2 AND indexptniden = :3 AND operation = :4 AND rid = :5

SELECT /* sel_dist_upd */ ROWID, ....., ins_no, del_no FROM "MDSYS"."SDO_DIST_METADATA_TABLE" WHERE sdo_index_owner = :own AND sdo_index_name = :nam AND sdo_partition_name = :ptn FOR UPDATE
OK, these 2 queries look simple and not complicated, so what's causing these buffer busy issues? The investigation went in the following direction:
Check size of tables: They are too small.
Checking the execution plan of the first query with high cluster waits gave us the clue. Though we have a primary key defined on all five columns (SID, SDO_TXN_IDX_ID, INDEXPTNIDEN, OPERATION, RID), the query took the INDEX STORAGE FAST FULL SCAN path, which traverses all the index blocks unnecessarily. The query took around 4 seconds per execution.
I hinted the query to use the index already available (/*+ index(SDO_TXN_JOURNAL_REG SYS_C0014253) */), which resulted in a much better plan: execution time dropped to milliseconds. Remember, this query was executed around 53k times, so this makes a big difference in application performance. While checking for any plan change, I found that the index path had been used until the patch was applied, after which the plan flipped.
SQL> select distinct sql_id, plan_hash_value, id, operation, options
  2* from dba_hist_sql_plan where sql_id='fw4q2s9j3bckt' order by plan_hash_value, id;

SQL_ID           PLAN_HASH_VALUE    ID   OPERATION            OPTIONS
________________ __________________ ____ ____________________ _________________________
fw4q2s9j3bckt    1765869626         0    SELECT STATEMENT
fw4q2s9j3bckt    1765869626         1    INDEX                STORAGE FAST FULL SCAN
fw4q2s9j3bckt    2134037127         0    SELECT STATEMENT
fw4q2s9j3bckt    2134037127         1    INDEX                UNIQUE SCAN

SQL>
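The interim fix was delivered as a SQL profile rather than an application code change; a minimal sketch of how such a hint can be attached with DBMS_SQLTUNE (the profile name and the SEL$1 query block names are illustrative assumptions):

```sql
DECLARE
  v_sql CLOB;
BEGIN
  -- Grab the exact statement text from the shared pool
  SELECT sql_fulltext INTO v_sql
  FROM   v$sqlarea
  WHERE  sql_id = 'fw4q2s9j3bckt';

  -- Attach a profile forcing the primary key index path
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => v_sql,
    profile     => sqlprof_attr('INDEX(@"SEL$1" "SDO_TXN_JOURNAL_REG"@"SEL$1" "SYS_C0014253")'),
    name        => 'PROF_fw4q2s9j3bckt',
    force_match => TRUE);
END;
/
```

force_match => TRUE keeps the profile matching even if literal values in the statement vary.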
The reason for the slowness is:
Running a MV refresh using the DBMS_SCHEDULER package causes spatial to use the SDO_TXN_JOURNAL_REG transaction journal table, instead of the SDO_TXN_JOURNAL_GTT transaction journal table.
So now the solution is to revert the SQL profile we created and apply the Spatial bundle bug fix Patch 34725493, or to apply RU 19.17.
Since we had already applied another one-off patch for a previous issue with Spatial, we couldn't apply this bug fix patch as it conflicted with the one-off patch. We raised an SR with Oracle to get a merge patch for the two conflicting patches, but Oracle couldn't provide any commitment on when the merge patch would be ready.
As we needed to get the production database back to normal as soon as possible, we had to roll back the entire 19.16 RU to bring it back to the pre-patch state (RU 19.15), after which the application verified that all the jobs run without issues. We also no longer saw any cluster related waits after the patch rollback. We rolled back because 19.17 was yet to be tested in our lower environments before being promoted to production.
Interestingly, Oracle maintains a note on all the patches that need to be applied after an RU under What Is The Latest Spatial Patch Bundle for 12c and Higher Databases? (Doc ID 2514624.1). So every time we apply an RU to the database, we need to refer to this note and apply the patch mentioned for that RU as well.
This investigation was not only a troubleshooting experience but also a knowledge-gathering experience with the Oracle Spatial component: the bugs related to Spatial, the separate Spatial patch bundle, and so on.
Happy Troubleshooting and Patching...!!!
Tuesday, November 22, 2022
Oracle Database and Anti Virus Softwares
If you search for antivirus related issues in Oracle support, you will hit many notes about AVs causing issues with the Oracle database; this is not specific to any one AV but applies in general to all AV software.
Taking a look at the Oracle support note How To Configure Anti-Virus On Oracle Database Server (Doc ID 782354.1), Oracle mentions the below:
we recommend the following files to be excluded from online anti-virus scanning.
- Oracle datafiles
- Control files
- Redo-log files
- Archived redo-log files if database is in archive log mode
- Files with extension '.ora'
- Password file
- Files in the ADR directory
- Oracle Software directories (including Oracle_HOME and Oracle base)
- /dev/shm on Linux platform
So, is that it? Are we good if we just exclude all the files mentioned above?
I'll discuss 2 scenarios which caused outages to the database w.r.t. AV software (I'm not mentioning the names of the AV software here).
Scenario 1:
We had installed one of the leading AV software packages on all our DB servers. During the testing phase everything went smoothly. A few files were excluded as suggested by Oracle, and we finally agreed to deploy on the production servers after rigorous testing in the lower environments.
Soon after the production deployment, we started having issues with ASM disk writes, as the ASM disks became invisible to Oracle. The following errors popped up frequently on busy, high-load databases.
ORA-00202: control file: '+DATA/ABC/cntrlabc.dbf'
ORA-15081: failed to submit an I/O operation to a disk

ORA-00312: online log 5 thread 1: '+LOG/ABC/ONLINELOG/group_5.261.981173379'
ORA-15081: failed to submit an I/O operation to a disk

So what's really happening here?
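Before digging further, it helps to quantify how often the I/O submission failures occur; a small helper to count ORA-15081 hits in an alert log (the default path is an illustrative assumption, adjust it to your diag destination):

```shell
# Default alert log location below is illustrative; override via ALERT_LOG
ALERT_LOG="${ALERT_LOG:-/u02/app/oracle/diag/rdbms/abc/ABC1/trace/alert_ABC1.log}"

# Print the number of lines mentioning ORA-15081 (failed I/O submission)
ora15081_count() {
  grep -c 'ORA-15081' "$1" 2>/dev/null
}
```

Run it as `ora15081_count "$ALERT_LOG"` and trend the count against the AV deployment date.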
- DB server login should be restricted to DB admins only, specifically a senior DBA who does admin work (such as startup and shutdown using a sys account) and not management work like adding/extending tablespaces. Those other activities can be done remotely using the system account or a customized DBA account. This ensures very limited logins are available directly on the server, unlike having 1 common account (in most cases oracle) that all team members use to log in, which poses a very serious security risk.
- Place all DB servers in a subnet with tight firewall rules: auto log off in x seconds/minutes, full activity tracing, and frequent audits of what commands have been run. Only the SA and DBA should have direct login accounts.
- OS login accounts should be coupled with AD and should use MFA to authenticate. The configuration seems complicated, but it is a one-time setup that only needs the initial effort.
- To limit direct access to the DB, use Kerberos/RADIUS authentication or Centrally Managed Users (available from version 18c), which couples enterprise users with AD accounts and groups, allowing users to connect with their enterprise logins. This also provides double protection: once a member leaves the organization, his/her account is wiped out entirely, keeping the server/DB secure.
- Use the TCP.VALIDNODE_CHECKING = YES, TCP.EXCLUDED_NODES and TCP.INVITED_NODES sqlnet.ora parameters to accept connections only from specific IP addresses or subnets, tightly controlling application access to the database.
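A minimal sqlnet.ora sketch of the valid node checking described above (all addresses are illustrative assumptions):

```
# sqlnet.ora on the database server
TCP.VALIDNODE_CHECKING = YES
# Only these clients may connect; everything else is rejected
TCP.INVITED_NODES  = (10.10.10.5, 10.10.10.6, appserver1.example.com)
# Explicitly ban a known-bad host
TCP.EXCLUDED_NODES = (10.10.10.99)
```

When both lists are set, TCP.INVITED_NODES takes precedence, and the listener must be restarted for the changes to take effect.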
Tuesday, August 23, 2022
Oracle Centrally Managed Users (CMU) configuration
"In order for the Oracle Database CMU with Active Directory integration to work, the Oracle database must be able to login to a service account specifically created for the database in Active Directory. The database uses this service account to query Active Directory for user and group information when a user logs into the database. This Active Directory service account must have all the privileges required to query the user and group information as well as being able to write updates related to the password policies in Active Directory (for example, failed login attempts, clear failed login attempts). Users can authenticate using passwords, Kerberos, or PKI and either be assigned to an exclusive schema or a shared schema. Mapping of an Active Directory user to a shared schema is determined by the association of the user to an Active Directory group that is mapped to the shared schema. Active Directory groups can also be mapped to database global roles. An Active Directory security administrator can assign a user to groups that are mapped to shared database global users (schemas) and/or database global roles, and hence update privileges and roles that are assigned to the Active Directory user in a database."
In this post, I'll be explaining how to configure Oracle Centrally Managed Users using Windows server 2019 Active Directory and Oracle database version 19c.
Since the configuration is a bit long, with multiple screenshots I have made, I have divided the steps into 4 major parts, as below:
- Part A: AD Configuration
- Part B: Database Server Configuration
- Part C: Enhancing to CMU
- Part D: Enhancing CMU with Shared users and roles
All the steps are explained in the downloadable document. Click the link below to download the file.
>>> Centrally Managed Users - orabliss.com <<<
Please make use of the document and share your feedback or any clarifications you need; I'll try to help solve them.
I had to set up my own AD on VirtualBox; I have not covered those steps in the document, but the first 2 reference links will help you set one up if you need to.
References:
Install the Certification Authority | Microsoft Docs
Authenticate Oracle Database users with MS Active Directory - YouTube
Make Someone Else do the Work - Managing Oracle Database 19c Users in Active Directory (part 1 - Kerberos)
Configuring Centrally Managed Users with Microsoft Active Directory (oracle.com)
PART 1-4: Creating an Oracle 18c Centrally Managed Users Testbed using Oracle Cloud Infrastructure
How to Configure Centrally Managed Users For Database Release 18c or Later Releases (Doc ID 2462012.1)
Configuring ASO Kerberos Authentication with a Microsoft Windows 2008 R2 Active Directory KDC (Doc ID 1304004.1)
Kerberos Troubleshooting Guide (Doc ID 185897.1)
ktpass | Microsoft Docs
Happy CMing Users!!! :)
Thursday, June 30, 2022
RMAN Merged incremental backups
We are changing our backup solution vendor in our company, and the new vendor utilizes RMAN merged incremental backups. So let's see what this is and what the advantages of this RMAN capability are.
Merged Incremental creates a level 0 image copy backup of all of your database's datafiles in a disk location. All datafiles are backed up using the same tag name. Block Change Tracking can be used with the incremental level 1 backups for Fast Incremental Backups.

run {
allocate channel c1 device type disk format '/media/sf_Oracle/RMAN-training/%U';
recover copy of database with tag 'Demo_merged_l0';
backup incremental level 1 copies=1 for recover of copy with tag 'Demo_merged_l0' database;
}
[oracle@linux-8 RMAN-training]$ rman target sys/oracle@odb1

Recovery Manager: Release 19.0.0.0.0 - Production on Thu Jun 30 01:10:09 2022
Version 19.10.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB19:ODB1 (DBID=223390358)

RMAN> run {
2> allocate channel c1 device type disk format '/media/sf_Oracle/RMAN-training/%U';
3> recover copy of database with tag 'Demo_merged_l0';
4> backup incremental level 1 copies=1 for recover of copy with tag 'Demo_merged_l0' database;
5> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=414 device type=DISK

Starting recover at 30-JUN-22
no copy of datafile 16 found to recover
no copy of datafile 17 found to recover
no copy of datafile 18 found to recover
no copy of datafile 19 found to recover
Finished recover at 30-JUN-22

Starting backup at 30-JUN-22
no parent backup or copy of datafile 18 found
no parent backup or copy of datafile 16 found
no parent backup or copy of datafile 17 found
no parent backup or copy of datafile 19 found
channel c1: starting datafile copy
input datafile file number=00018 name=+DATA/ODB1/DATAFILE/undotbs1.273.1084998671
output file name=/media/sf_Oracle/RMAN-training/data_D-CDB19_I-3846007102_TS-UNDOTBS1_FNO-18_kr11ag49 tag=DEMO_MERGED_L0 RECID=95 STAMP=1108689035
channel c1: datafile copy complete, elapsed time: 00:00:03
channel c1: starting datafile copy
input datafile file number=00016 name=+DATA/ODB1/DATAFILE/system.272.1084998669
output file name=/media/sf_Oracle/RMAN-training/data_D-CDB19_I-3846007102_TS-SYSTEM_FNO-16_ks11ag4c tag=DEMO_MERGED_L0 RECID=96 STAMP=1108689038
channel c1: datafile copy complete, elapsed time: 00:00:03
channel c1: starting datafile copy
input datafile file number=00017 name=+DATA/ODB1/DATAFILE/sysaux.274.1084998671
output file name=/media/sf_Oracle/RMAN-training/data_D-CDB19_I-3846007102_TS-SYSAUX_FNO-17_kt11ag4f tag=DEMO_MERGED_L0 RECID=97 STAMP=1108689040
channel c1: datafile copy complete, elapsed time: 00:00:01
channel c1: starting datafile copy
input datafile file number=00019 name=+DATA/CDB19/CD7591F2458BB5B4E053B538A8C08E48/DATAFILE/users.275.1108164853
output file name=/media/sf_Oracle/RMAN-training/data_D-CDB19_I-3846007102_TS-USERS_FNO-19_ku11ag4g tag=DEMO_MERGED_L0 RECID=98 STAMP=1108689040
channel c1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 30-JUN-22
released channel: c1

RMAN> exit

Recovery Manager complete.
[oracle@linux-8 RMAN-training]$ ls -lrt
...
-rwxrwx---. 1 root vboxsf 1447043072 Jun 30 01:10 data_D-CDB19_I-3846007102_TS-UNDOTBS1_FNO-18_kr11ag49
-rwxrwx---. 1 root vboxsf 1258299392 Jun 30 01:10 data_D-CDB19_I-3846007102_TS-SYSTEM_FNO-16_ks11ag4c
-rwxrwx---. 1 root vboxsf  650125312 Jun 30 01:10 data_D-CDB19_I-3846007102_TS-SYSAUX_FNO-17_kt11ag4f
-rwxrwx---. 1 root vboxsf   41951232 Jun 30 01:10 data_D-CDB19_I-3846007102_TS-USERS_FNO-19_ku11ag4g

We can see we get "no copy of datafile # found to recover", as this is the first run and we don't have any copy made before to recover.
RMAN> run {
2> allocate channel c1 device type disk format '/media/sf_Oracle/RMAN-training/%U';
3> recover copy of database with tag 'Demo_merged_l0';
4> backup incremental level 1 copies=1 for recover of copy with tag 'Demo_merged_l0' database;
5> }

allocated channel: c1
channel c1: SID=414 device type=DISK

Starting recover at 30-JUN-22
no copy of datafile 16 found to recover
no copy of datafile 17 found to recover
no copy of datafile 18 found to recover
no copy of datafile 19 found to recover
Finished recover at 30-JUN-22

Starting backup at 30-JUN-22
channel c1: starting compressed incremental level 1 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00018 name=+DATA/ODB1/DATAFILE/undotbs1.273.1084998671
input datafile file number=00016 name=+DATA/ODB1/DATAFILE/system.272.1084998669
input datafile file number=00017 name=+DATA/ODB1/DATAFILE/sysaux.274.1084998671
input datafile file number=00019 name=+DATA/CDB19/CD7591F2458BB5B4E053B538A8C08E48/DATAFILE/users.275.1108164853
channel c1: starting piece 1 at 30-JUN-22
channel c1: finished piece 1 at 30-JUN-22
piece handle=/media/sf_Oracle/RMAN-training/kv11agra_671_1_1 tag=DEMO_MERGED_L0 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:03
Finished backup at 30-JUN-22
released channel: c1

RMAN> exit

[oracle@linux-8 RMAN-training]$ ls -lrt
...
-rwxrwx---. 1 root vboxsf 1447043072 Jun 30 01:10 data_D-CDB19_I-3846007102_TS-UNDOTBS1_FNO-18_kr11ag49
-rwxrwx---. 1 root vboxsf 1258299392 Jun 30 01:10 data_D-CDB19_I-3846007102_TS-SYSTEM_FNO-16_ks11ag4c
-rwxrwx---. 1 root vboxsf  650125312 Jun 30 01:10 data_D-CDB19_I-3846007102_TS-SYSAUX_FNO-17_kt11ag4f
-rwxrwx---. 1 root vboxsf   41951232 Jun 30 01:10 data_D-CDB19_I-3846007102_TS-USERS_FNO-19_ku11ag4g
-rwxrwx---. 1 root vboxsf    2482176 Jun 30 01:22 kv11agra_671_1_1

We can still see "no copy of datafile # found to recover", as we don't have a previous level 1 backup to apply. Remember that run 1 created the level 0 image copies.
Now, the second run created an incremental level 1 backup, which you can see from the "starting compressed incremental level 1 datafile backup set" line and also from the new file kv11agra_671_1_1 in the listing.
RMAN> run {
2> allocate channel c1 device type disk format '/media/sf_Oracle/RMAN-training/%U';
3> recover copy of database with tag 'Demo_merged_l0';
4> backup incremental level 1 copies=1 for recover of copy with tag 'Demo_merged_l0' database;
5> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=414 device type=DISK

Starting recover at 30-JUN-22
channel c1: starting incremental datafile backup set restore
channel c1: specifying datafile copies to recover
recovering datafile copy file number=00016 name=/media/sf_Oracle/RMAN-training/data_D-CDB19_I-3846007102_TS-SYSTEM_FNO-16_ks11ag4c
recovering datafile copy file number=00017 name=/media/sf_Oracle/RMAN-training/data_D-CDB19_I-3846007102_TS-SYSAUX_FNO-17_kt11ag4f
recovering datafile copy file number=00018 name=/media/sf_Oracle/RMAN-training/data_D-CDB19_I-3846007102_TS-UNDOTBS1_FNO-18_kr11ag49
recovering datafile copy file number=00019 name=/media/sf_Oracle/RMAN-training/data_D-CDB19_I-3846007102_TS-USERS_FNO-19_ku11ag4g
channel c1: reading from backup piece /media/sf_Oracle/RMAN-training/kv11agra_671_1_1
channel c1: piece handle=/media/sf_Oracle/RMAN-training/kv11agra_671_1_1 tag=DEMO_MERGED_L0
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:01
Finished recover at 30-JUN-22

Starting backup at 30-JUN-22
channel c1: starting compressed incremental level 1 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00018 name=+DATA/ODB1/DATAFILE/undotbs1.273.1084998671
input datafile file number=00016 name=+DATA/ODB1/DATAFILE/system.272.1084998669
input datafile file number=00017 name=+DATA/ODB1/DATAFILE/sysaux.274.1084998671
input datafile file number=00019 name=+DATA/CDB19/CD7591F2458BB5B4E053B538A8C08E48/DATAFILE/users.275.1108164853
channel c1: starting piece 1 at 30-JUN-22
channel c1: finished piece 1 at 30-JUN-22
piece handle=/media/sf_Oracle/RMAN-training/l011ahbn_672_1_1 tag=DEMO_MERGED_L0 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:03
Finished backup at 30-JUN-22
released channel: c1

RMAN> exit

[oracle@linux-8 RMAN-training]$ ls -lrt
...
-rwxrwx---. 1 root vboxsf    2482176 Jun 30 01:22 kv11agra_671_1_1
-rwxrwx---. 1 root vboxsf   41951232 Jun 30 01:31 data_D-CDB19_I-3846007102_TS-USERS_FNO-19_ku11ag4g
-rwxrwx---. 1 root vboxsf  650125312 Jun 30 01:31 data_D-CDB19_I-3846007102_TS-SYSAUX_FNO-17_kt11ag4f
-rwxrwx---. 1 root vboxsf 1447043072 Jun 30 01:31 data_D-CDB19_I-3846007102_TS-UNDOTBS1_FNO-18_kr11ag49
-rwxrwx---. 1 root vboxsf 1258299392 Jun 30 01:31 data_D-CDB19_I-3846007102_TS-SYSTEM_FNO-16_ks11ag4c
-rwxrwx---. 1 root vboxsf    2580480 Jun 30 01:31 l011ahbn_672_1_1

Now we can see that the previous level 1 backup has been applied to the datafile copies (note their timestamps moved to 01:31) and a new incremental level 1 backup has been taken.
The same strategy can also be run with the backup step first, so that the level 1 just taken is merged into the image copy within the same run:

run {
allocate channel c1 device type disk format '/media/sf_Oracle/RMAN-training/%U';
backup incremental level 1 copies=1 for recover of copy with tag 'Demo_merged_l0' database;
recover copy of database with tag 'Demo_merged_l0';
}
To maintain a recovery window instead of rolling the copy fully forward, the recover step can be limited with an UNTIL TIME clause, which keeps the image copy lagging (here by 31 days) behind the current time:

RECOVER COPY OF DATABASE WITH TAG 'Demo_merged_l0' UNTIL TIME "SYSDATE-31" FROM TAG 'Demo_merged_l0';
- RMAN: Merged Incremental Backup Strategies (Doc ID 745798.1)
- Incrementally updating backups
- Image courtesy: https://www.rubrik.com
Thursday, May 26, 2022
Script to gather details while using Datapump for DB migration
Here is a script that can be used to gather source database details when we are attempting to upgrade from one DB version to another or to migrate an Oracle DB from one server to another.
The purpose is to gather as many details as possible so that the target DB can be prepared in such a way that we don't have much work left after importing the required schemas.
/* -----------------------------------------------------------------------------------------
   Script name    : export_details.sql
   Usage          : sqlplus / as sysdba @export_details.sql
   Instructions   : Replace all SCHEMA1 and SCHEMA2 with the schemas you intend to export and
                    save the file as export_details.sql; use as many users as needed
   Important Note : Once details are gathered, export NLS_LANG as specified below to set the
                    env variable and start the export using the options mentioned at the end
                    of this file
                    export NLS_LANG=AMERICAN_AMERICA.AL32UTF8 - Check what NLS_LANG is needed as per app
   This script will create 3 o/p files, 1 html and 2 sql files.
------------------------------------------------------------------------------------------ */
set echo on pages 999 long 90000
col OWNER for a30
col USERNAME for a30
col PROFILE for a30
col GRANTED_ROLE for a30
col GRANTEE for a30
col PROPERTY_NAME for a40
col PROPERTY_VALUE for a40
col DISPLAY_VALUE for a50
col DEFAULT_VALUE for a50
set markup html on spool on pre off
spool source_details.html

Prompt Parameter details:
------ ------------------
select name, DISPLAY_VALUE from v$parameter where name in ('service_names','compatible','sga_max_size','sga_target','log_buffer','db_cache_size','pga_aggregate_target','pga_aggregate_limit','cpu_count','session_cached_cursors','open_cursors','processes') order by name;

Prompt User details:
------ -------------
select count(*) "Total Schemas" from dba_users;
select sum(bytes)/1024/1024/1024 Total_DB_used_Size_In_GB from dba_segments;
select sum(bytes)/1024/1024/1024 Total_DB_allocated_Size_In_GB from dba_data_files;
SELECT owner, sum(bytes)/1024/1024/1024 Size_In_GB from dba_segments WHERE owner IN ('SCHEMA1','SCHEMA2') group by owner order by owner;

Prompt Info about Schemas:
------ -------------------
select USERNAME,ACCOUNT_STATUS,LOCK_DATE,EXPIRY_DATE,DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE,PROFILE from dba_users where USERNAME in ('SCHEMA1','SCHEMA2') order by 1;

set markup html on spool on pre on
Prompt Profile details for user:
------ -------------------------
SELECT dbms_metadata.get_ddl('PROFILE', p.profile) || ';' from dba_profiles p where profile not in ('DEFAULT') and resource_name='COMPOSITE_LIMIT';

set markup html on spool on pre off
Prompt Tablespace Size for particular schema:
------ --------------------------------------
select owner, TABLESPACE_NAME, sum(BYTES)/1024/1024/1024 Size_In_GB from dba_segments where OWNER in ('SCHEMA1','SCHEMA2') group by owner,TABLESPACE_NAME order by owner,TABLESPACE_NAME;

set markup html on spool on pre on
Prompt DDL for Schema, Non default Roles and Tablespace:
------ -------------------------------------------------
SELECT dbms_metadata.get_ddl('USER',u.username) || ';' as USERS from dba_users u where username in ('SCHEMA1','SCHEMA2');
select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) || ';' as Tablespaces from dba_tablespaces tb where tablespace_name not in ('SYSTEM','SYSAUX','UNDOTBS','TEMP');
select dbms_metadata.get_ddl('ROLE', r.role) || ';' as Roles from dba_roles r where ORACLE_MAINTAINED<>'Y';

set markup html on spool on pre off
Prompt Temp tablespace size:
------ ---------------------
select tablespace_name, sum(BYTES)/1024/1024/1024 from dba_temp_files group by tablespace_name;

Prompt Objects counts:
------ ---------------
select owner, object_type, status, count(*) from dba_objects where owner in ('SCHEMA1','SCHEMA2') group by object_type,owner,status order by 1,2,3;

Prompt Roles:
------ ------
select GRANTEE, GRANTED_ROLE, ADMIN_OPTION from dba_role_privs where grantee in ('SCHEMA1','SCHEMA2') order by 1,2;
select GRANTEE, PRIVILEGE, ADMIN_OPTION from dba_sys_privs where grantee in ('SCHEMA1','SCHEMA2') order by 1,2;

Prompt System privileges having a property value = 1
Prompt Use specific DBMS packages such as DBMS_RESOURCE_MANAGER_PRIVS to provide grants to these objects.
------ --------------------------------------------------------------------------------------------------
select p.grantee, m.name from system_privilege_map m, dba_sys_privs p where m.name=p.privilege and m.property=1 and p.grantee in ('SCHEMA1','SCHEMA2') order by p.grantee, m.name;

Prompt DB Time Zone Details:
------ ---------------------
select sysdate,systimestamp,current_timestamp,sessiontimezone,dbtimezone from dual;

Prompt DB Characterset:
------ ----------------
select PROPERTY_NAME,PROPERTY_VALUE from database_properties where PROPERTY_NAME in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
spool off

Prompt Generating grants script:
------ -------------------------
set markup html off
set lines 200 pages 0 head off feed off
spool source_all_privs.sql
select 'grant ' || privilege ||' on '|| owner || '.' || table_name ||' to '||grantee||';' from dba_tab_privs where grantee in ('SCHEMA1','SCHEMA2') and table_name NOT like 'BIN%';
spool off
spool source_all_synonyms.sql
SELECT 'CREATE OR REPLACE PUBLIC SYNONYM '||SYNONYM_NAME||' FOR '||TABLE_OWNER||'.'||TABLE_NAME||';' FROM ALL_SYNONYMS WHERE OWNER='PUBLIC' AND TABLE_OWNER IN ('SCHEMA1','SCHEMA2');
spool off
exit

Once the details are gathered, we can use the generated html file to prepare the target DB, such as creating the required tablespaces, roles, etc.
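With the environment details captured, the export itself can be started; a minimal expdp sketch, assuming a DATA_PUMP_DIR directory object exists (dump file names and parallelism are illustrative):

```
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8   # match what the app requires

expdp \"/ as sysdba\" \
  schemas=SCHEMA1,SCHEMA2 \
  directory=DATA_PUMP_DIR \
  dumpfile=source_schemas_%U.dmp \
  logfile=source_schemas_exp.log \
  parallel=4 \
  flashback_time=systimestamp
```

flashback_time=systimestamp gives a consistent export even while the source database is still active.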
Saturday, May 7, 2022
Datapump & system privileges with property value = 1
SQL> select name from system_privilege_map where property=1 order by name;

NAME
----------------------------------------
ADMINISTER RESOURCE MANAGER
ALTER ANY EVALUATION CONTEXT
ALTER ANY RULE
ALTER ANY RULE SET
CREATE ANY EVALUATION CONTEXT
CREATE ANY RULE
CREATE ANY RULE SET
CREATE EVALUATION CONTEXT
CREATE RULE
CREATE RULE SET
DEQUEUE ANY QUEUE
DROP ANY EVALUATION CONTEXT
DROP ANY RULE
DROP ANY RULE SET
ENQUEUE ANY QUEUE
EXECUTE ANY EVALUATION CONTEXT
EXECUTE ANY RULE
EXECUTE ANY RULE SET
MANAGE ANY FILE GROUP
MANAGE ANY QUEUE
MANAGE FILE GROUP
READ ANY FILE GROUP

22 rows selected.

SQL>

All of the above are granted via specific packages or via explicit grants on the system.
SQL> select p.grantee, m.name from system_privilege_map m, dba_sys_privs p
  2  where m.name=p.privilege and m.property=1 and p.grantee in ('DUMMY')
  3  order by p.grantee, m.name;

GRANTEE                        NAME
------------------------------ ----------------------------------------
DUMMY                          ADMINISTER RESOURCE MANAGER
DUMMY                          READ ANY FILE GROUP

SQL>

I have got 2 such privileges that need explicit grants on the target DB.
SQL> @check_user_privs.sql

GRANTEE                        TYP PRIVILEGE OR ROLE
------------------------------ --- ---------------------------------------------------------------------------
DUMMY                          PRV ALTER SESSION
DUMMY                              CREATE ANY DIRECTORY
DUMMY                              CREATE DATABASE LINK
DUMMY                              CREATE JOB
DUMMY                              CREATE PROCEDURE
DUMMY                              CREATE SEQUENCE
DUMMY                              CREATE SESSION
DUMMY                              CREATE SYNONYM
DUMMY                              CREATE TABLE
DUMMY                              CREATE TYPE
DUMMY                              CREATE VIEW
DUMMY                              SELECT ANY DICTIONARY
DUMMY                              UNLIMITED TABLESPACE

13 rows selected.

SQL> BEGIN
  2    DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE(
  3      GRANTEE_NAME   => 'DUMMY',
  4      PRIVILEGE_NAME => 'ADMINISTER_RESOURCE_MANAGER',
  5      ADMIN_OPTION   => FALSE);
  6  END;
  7  /

PL/SQL procedure successfully completed.

SQL> grant READ ANY FILE GROUP to DUMMY;

Grant succeeded.

SQL> @check_user_privs.sql

GRANTEE                        TYP PRIVILEGE OR ROLE
------------------------------ --- ---------------------------------------------------------------------------
DUMMY                          PRV ADMINISTER RESOURCE MANAGER
DUMMY                              ALTER SESSION
DUMMY                              CREATE ANY DIRECTORY
DUMMY                              CREATE DATABASE LINK
DUMMY                              CREATE JOB
DUMMY                              CREATE PROCEDURE
DUMMY                              CREATE SEQUENCE
DUMMY                              CREATE SESSION
DUMMY                              CREATE SYNONYM
DUMMY                              CREATE TABLE
DUMMY                              CREATE TYPE
DUMMY                              CREATE VIEW
DUMMY                              READ ANY FILE GROUP
DUMMY                              SELECT ANY DICTIONARY
DUMMY                              UNLIMITED TABLESPACE

15 rows selected.

SQL>

Now we are all set. While performing an export and import, we need to make sure to take care of these privilege grants without fail for the application to run without any issues.
For reference, the query used above to check user privileges:

select grantee, 'PRV' type, privilege pv
from dba_sys_privs where grantee = 'DUMMY'
union
select username grantee, '---' type, 'empty user ---' pv from dba_users
where not username in (select distinct grantee from dba_role_privs) and
      not username in (select distinct grantee from dba_sys_privs) and
      not username in (select distinct grantee from dba_tab_privs) and
      username like 'DUMMY'
group by username
order by grantee, type, pv;
Monday, March 21, 2022
PDB Cloning random time out issue
This post discusses an issue similar to one I covered in a previous blog post.
When I tried to clone a pluggable database from one cluster to another, I got the following error.
SQL> CREATE PLUGGABLE DATABASE PRD01 from QA01@QA01 keystore identified by xxxxx no data;
CREATE PLUGGABLE DATABASE PRD01 from QA01@QA01 keystore identified by xxxxx no data
*
ERROR at line 1:
ORA-65169: error encountered while attempting to copy file
+DATAC7/XXX04/D84A38E95B599A60E053E1A1C30AC20D/DATAFILE/undo_14.847.1096976301
ORA-17627: ORA-12170: TNS:Connect timeout occurred
ORA-17629: Cannot connect to the remote database server

SQL>
It seems something is timing out, and we need to figure out where. The alert log will help us; its relevant contents are below.
***********************************************************************
Fatal NI connect error 12170, connecting to:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=dev2-scan.xxx.com)(Port=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=QA01_SVC.xxx.com)(CID=(PROGRAM=oracle)(HOST=s23)(USER=oracle))))

VERSION INFORMATION:
TNS for Linux: Version 19.0.0.0.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production
Version 19.12.0.0.0
Time: 18-MAR-2022 18:07:48
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 0
nt OS err code: 0
Errors in file /u02/app/oracle/diag/rdbms/xxx04/XXX041/trace/XXX041_ora_295605.trc:
ORA-17627: ORA-12170: TNS:Connect timeout occurred
ORA-17629: Cannot connect to the remote database server
********************************************************
Undo Create of Pluggable Database PRD01 with pdb id - 3.
********************************************************

Sometimes I also got the below error, for which I don't know the exact reason, as I mentioned in my previous post.
TNS-00505: Operation timed out
nt secondary err code: 0
nt OS err code: 0
2022-03-18T18:14:25.706649+00:00
Errors in file /u02/app/oracle/diag/rdbms/xxx04/XXX041/trace/XXX041_ora_269420.trc:
ORA-17627: ORA-01017: invalid username/password; logon denied
ORA-17629: Cannot connect to the remote database server
2022-03-18T18:14:25.706956+00:00
********************************************************
Undo Create of Pluggable Database PRD01 with pdb id - 3.
********************************************************

We can see nt secondary err code: 0, meaning the timeout is caused by the listener or the database, and not by something external such as server or firewall settings.
SQL> select sysdate from dual@QA01;
select sysdate from dual@QA01
*
ERROR at line 1:
ORA-12170: TNS:Connect timeout occurred

SQL> /

SYSDATE
---------
18-MAR-22

The db link times out on one attempt and works on the next. So where are we getting the timeout from?
SQL> sho parameter local

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (ADDRESS=(PROTOCOL=TCP)(HOST=10.195.161.228)(PORT=1521)),(ADDRESS=(PROTOCOL=TCP)(HOST=10.195.161.228)(PORT=1522)),(ADDRESS=(PROTOCOL=TCPS)(HOST=10.195.161.228)(PORT=2484))

I then checked the listener status to figure out which ports it is configured to listen on.
[oracle@s11 ~]$ lsnrctl status|more
LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 20-MAR-2022 19:08:12
Copyright (c) 1991, 2021, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                13-FEB-2022 09:20:10
Uptime                    35 days 9 hr. 48 min. 1 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19.0.0.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/s11/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=10.195.161.226)(PORT=2484)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.195.161.225)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.195.161.226)(PORT=1521)))
Services Summary...
Service "+APX" has 1 instance(s).
  Instance "+APX1", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
...
...
...

As you can clearly see, the listener is listening only on ports 1521 and 2484; there is no endpoint for port 1522, even though local_listener registers it.
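The mismatch can also be spotted mechanically by diffing the ports registered in local_listener against the ports in the Listening Endpoints Summary. A minimal sketch using the values from the outputs above (the endpoint set is transcribed by hand, not parsed from lsnrctl):

```python
import re

# local_listener value as shown by 'sho parameter local'.
local_listener = (
    "(ADDRESS=(PROTOCOL=TCP)(HOST=10.195.161.228)(PORT=1521)),"
    "(ADDRESS=(PROTOCOL=TCP)(HOST=10.195.161.228)(PORT=1522)),"
    "(ADDRESS=(PROTOCOL=TCPS)(HOST=10.195.161.228)(PORT=2484))"
)
# Ports from the 'Listening Endpoints Summary' of lsnrctl status.
endpoint_ports = {1521, 2484}

def ports_in_descriptor(desc):
    """Return every PORT=nnnn value in a TNS address descriptor."""
    return {int(p) for p in re.findall(r"\(PORT=(\d+)\)", desc)}

# Ports the instance registers with but nothing actually listens on.
dead_ports = ports_in_descriptor(local_listener) - endpoint_ports
print(dead_ports)
```

Any port in the result is registered by the instance but served by no listener, which is exactly the situation that produced the random timeouts here.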
SQL> alter system set local_listener="(ADDRESS=(PROTOCOL=TCP)(HOST=10.195.161.226)(PORT=1521)),(ADDRESS=(PROTOCOL=TCPS)(HOST=10.195.161.226)(PORT=2484))" sid='XXX041';

System altered.

SQL> sho parameter local

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (ADDRESS=(PROTOCOL=TCP)(HOST=10.195.161.226)(PORT=1521)),(ADDRESS=(PROTOCOL=TCPS)(HOST=10.195.161.226)(PORT=2484))

After resetting local_listener on the source database, I was able to clone the PDB without any issues.
SQL> CREATE PLUGGABLE DATABASE PRD01 from QA01@QA01 keystore identified by xxxxx no data;

Pluggable database created.
The SCAN listener hands the connection over to the VIP of a node in the cluster, which in turn passes it to the local listener. Since local_listener here was also configured with port 1522, which did not exist, we got random timeouts whenever the connection handoff went to 1522. Utmost care has to be taken while configuring such parameters to avoid these issues.
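As a rough mental model (this is an illustration, not Oracle code): if handoffs are spread evenly across the registered addresses, one dead address out of three means roughly a one-in-three chance that any given connection times out, which matches the intermittent feel of the failures. A toy simulation under that simplifying assumption:

```python
import random

# Illustrative model only: three registered local_listener addresses,
# but only two have a listener actually serving them.
addresses = ["tcp:1521", "tcp:1522", "tcps:2484"]
alive = {"tcp:1521", "tcps:2484"}

random.seed(7)  # fixed seed so the run is repeatable
attempts = 10_000
timeouts = sum(1 for _ in range(attempts)
               if random.choice(addresses) not in alive)
print(f"{timeouts / attempts:.0%} of handoffs hit the dead endpoint")
```

Whether the real redirection is uniformly random is an assumption of this sketch; the point is simply that one bad address among several produces a partial, hard-to-reproduce failure rate rather than a hard failure.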
Monday, January 17, 2022
Wrong ORA error during PDB cloning!
Last week, while trying to clone a PDB under a CDB on one cluster (non-prod) to a PDB under a different CDB on another cluster (prod), I encountered a series of errors. The first two were something one could expect, but I couldn't figure out how the last one got generated. Let's take a look at it in detail.
Set up:
- Updated the tnsnames.ora file on all nodes of the cluster
- Created a DB link from the prod CDB to the non-prod CDB to connect to the PDB
- Tested the connection
QA01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(Host = scanname.example.com)(Port = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = QA01_SVC.example.com)
    )
  )
SQL> CREATE DATABASE LINK QA01 CONNECT TO c##remote_user IDENTIFIED BY xxx USING 'QA01';

Database link created.

SQL> select sysdate from dual@QA01;

SYSDATE
---------
12-JAN-22

SQL> exit

So far, so good. We can connect to the PDB via the DB link, so we can proceed to clone the PDB from the source CDB to the target CDB.
SQL> CREATE PLUGGABLE DATABASE PROD01 from QA01@QA01 keystore identified by xxx;
CREATE PLUGGABLE DATABASE PROD01 from QA01@QA01 keystore identified by xxx
*
ERROR at line 1:
ORA-17628: Oracle error 46659 returned by remote Oracle server
ORA-46659: master keys for the given PDB not found

Per MOS Doc ID 2778618.1, to resolve the above error, add "including shared key" at the end of the CREATE PLUGGABLE DATABASE statement.
SQL> CREATE PLUGGABLE DATABASE PROD01 from QA01@QA01 keystore identified by xxx including shared key;
CREATE PLUGGABLE DATABASE PROD01 from QA01@QA01 keystore identified by xxx including shared key
*
ERROR at line 1:
ORA-65169: error encountered while attempting to copy file +DATAC3/CDB01/CE3C246E805F6F3AE053D8A1C30A498C/DATAFILE/libraryd.2661.1085857629
ORA-17627: ORA-12170: TNS:Connect timeout occurred
ORA-17629: Cannot connect to the remote database server

Cool, at least we now have a different error than before. From the alert log..
Fatal NI connect error 12170, connecting to:
 (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=scanname.example.com)(Port=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=QA01_SVC.example.com)(CID=(PROGRAM=oracle)(HOST=pd07)(USER=oracle))))

VERSION INFORMATION:
TNS for Linux: Version 19.0.0.0.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production
Version 19.12.0.0.0
Time: 12-JAN-2022 18:21:25
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 0
nt OS err code: 0
2022-01-12T18:21:25.948274+00:00
Errors in file /u02/app/oracle/diag/rdbms/***/****/trace/****_p000_82225.trc:
ORA-17627: ORA-12170: TNS:Connect timeout occurred
ORA-17629: Cannot connect to the remote database server

The nt secondary and OS err codes are 0, which means this is purely a timeout from the database side. We had not set any timeout value, so the default INBOUND_CONNECT_TIMEOUT of 60 seconds kicked in and the connection timed out. No issues with this. All cool.
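For reference, the timeout in play here is controlled by two parameters, one on the database side and one on the listener side, both of which default to 60 seconds in current releases. A sketch of where they would be set; the values shown are only for illustration, we did not change them in this case:

```
# sqlnet.ora on the database server side
SQLNET.INBOUND_CONNECT_TIMEOUT = 60

# listener.ora counterpart, for a listener named LISTENER
INBOUND_CONNECT_TIMEOUT_LISTENER = 60
```

Raising these can mask a slow network or a misconfigured endpoint, so it is usually better to find the real cause of the delay, as this post goes on to do.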
SQL> CREATE PLUGGABLE DATABASE PROD01 from QA01@QA01 keystore identified by xxx including shared key;
CREATE PLUGGABLE DATABASE PROD01 from QA01@QA01 keystore identified by xxx including shared key
*
ERROR at line 1:
ORA-65169: error encountered while attempting to copy file +DATAC3/CDB01/CE3C246E805F6F3AE053D8A1C30A498C/DATAFILE/ddtbsd.2636.1085857609
ORA-17627: ORA-01017: invalid username/password; logon denied
ORA-17629: Cannot connect to the remote database server

This attempt turned out to be strange: the same command, run just a minute later, threw a totally different error, ORA-01017. Had we got this error on the first attempt, I might have suspected the password provided and rechecked it, but the password that worked fine during testing is now giving us ORA-01017.
2022-01-12T18:23:27.300138+00:00
Errors in file /u02/app/oracle/diag/rdbms/***/***/trace/***_p004_82257.trc:
ORA-17627: ORA-01017: invalid username/password; logon denied
ORA-17629: Cannot connect to the remote database server
2022-01-12T18:24:26.097107+00:00
Thread 7 advanced to log sequence 94 (LGWR switch), current SCN: 37359630199069
Current log# 26 seq# 94 mem# 0: +DATAC4/CDB0101/ONLINELOG/group_26.2116.1092461287
2022-01-12T18:24:29.260711+00:00
Deleted file +DATAC4/CDB01/D566C47DE983511DE05396A1C30A5D5E/DATAFILE/libraryd.2124.1093803079.
2022-01-12T18:24:29.287000+00:00
Deleted file +DATAC4/CDB01/D566C47DE983511DE05396A1C30A5D5E/DATAFILE/users.2133.1093803079.
2022-01-12T18:24:29.408203+00:00
Deleted file +DATAC4/CDB01/D566C47DE983511DE05396A1C30A5D5E/DATAFILE/pltbsd.2046.1093803079.
2022-01-12T18:24:29.514934+00:00
Deleted file +DATAC4/CDB01/D566C47DE983511DE05396A1C30A5D5E/DATAFILE/pptbsx.2050.1093803079.
2022-01-12T18:24:29.547221+00:00
Deleted file +DATAC4/CDB01/D566C47DE983511DE05396A1C30A5D5E/DATAFILE/cltbsd.2126.1093803079.
2022-01-12T18:24:29.558701+00:00
Deleted file +DATAC4/CDB01/D566C47DE983511DE05396A1C30A5D5E/DATAFILE/dmtbsx.2131.1093803079.
2022-01-12T18:24:29.610720+00:00
**************************************************************
Undo Create of Pluggable Database PROD01 with pdb id - 3.
**************************************************************

Checking the trace file associated with the error, there are multiple session IDs under the same process ID, so a deeper look reveals the following.
... trimmed o/p ...
*** 2022-01-10T16:28:38.119788+00:00 (CDB$ROOT(1))
*** SESSION ID:(1188.31010) 2022-01-10T16:28:38.119813+00:00
*** CLIENT ID:() 2022-01-10T16:28:38.119816+00:00
*** SERVICE NAME:(pdb01) 2022-01-10T16:28:38.119819+00:00
*** MODULE NAME:(sqlplus@pd07 (TNS V1-V3)) 2022-01-10T16:28:38.119823+00:00
*** ACTION NAME:() 2022-01-10T16:28:38.119826+00:00
*** CLIENT DRIVER:() 2022-01-10T16:28:38.119829+00:00
*** CONTAINER ID:(1) 2022-01-10T16:28:38.119832+00:00
OSSIPC: IPCDAT DESTROY QP for qp 0x18b96e48 ep 0x18b96cd0 drain start time 146888533
OSSIPC:IPCDAT DESTROY QP for qp 0x18b96e48 ep 0x18b96cd0 drain complete num reqs drained 0 drain end time 146888534 drain time 1 msec time slept 0
... trimmed o/p ...
*** 2022-01-12T17:47:46.797793+00:00 ((6))
*** SESSION ID:(1191.49246) 2022-01-12T17:47:46.799247+00:00
*** SERVICE NAME:() 2022-01-12T17:47:46.799261+00:00
*** MODULE NAME:(sqlplus@pd07 (TNS V1-V3)) 2022-01-12T17:47:46.799264+00:00
*** ACTION NAME:() 2022-01-12T17:47:46.799268+00:00
*** CONTAINER ID:(6) 2022-01-12T17:47:46.799274+00:00
Using key: kcbtek structure 0x7ffd3b4751e0 utsn: 0x10030000001e (4099/30), alg: 0 keystate: 0 inv?: 0 usesalt?: 0 enctbs?: 0 obf?: 0 keyver: 4294967295 fbkey?: 0 fullenc?: 0 frn?: 0 rcv?: 0 skipchk?: 0 use_for_dec?: 0
encrypted key: 0000000000000000000000000000000000000000000000000000000000000000
mklocact 0 mkloc 0, mkid: 00000000000000000000000000000000
kcl: [0x7f344f3a15b8,0x7f3446627b30] kverl: [0x7f3446627af0,0x7f3446627af0]
kpdbfRekeyFileBlock: decrypting block file 100 block 56737 (<6/30, 0xdda1) failed with error 3. Continue with operation..
kcbtse_encdec_tbsblk: WARNING cannot decrypt encrypted block since a valie key is not found. <4099/30, 0xdda2 (afn 100 block 56738)>
... trimmed o/p ...
encrypted key: 0000000000000000000000000000000000000000000000000000000000000000
mklocact 0 mkloc 0, mkid: 00000000000000000000000000000000
kcl: [0x7f344f3a15b8,0x7f3446627b30] kverl: [0x7f3446627af0,0x7f3446627af0]
kpdbfRekeyFileBlock: decrypting block file 100 block 109895 (<6/30, 0x1ad47) failed with error 3. Continue with operation..
OCI error val is 1017 and errmsg is 'ORA-01017: invalid username/password; logon denied
*** 2022-01-12T18:23:27.299715+00:00 (CDB$ROOT(1))
*** SESSION ID:(1191.47408) 2022-01-12T18:23:27.299745+00:00
*** SERVICE NAME:(SYS$USERS) 2022-01-12T18:23:27.299749+00:00
*** MODULE NAME:(sqlplus@pd07 (TNS V1-V3)) 2022-01-12T18:23:27.299753+00:00
*** ACTION NAME:() 2022-01-12T18:23:27.299757+00:00
*** CONTAINER ID:(1) 2022-01-12T18:23:27.299760+00:00
'
ORA-17627: ORA-01017: invalid username/password; logon denied
<error barrier> at 0x7ffd3b473998 placed ksrpc.c@5166
ORA-17629: Cannot connect to the remote database server
<error barrier> at 0x7ffd3b47cf80 placed kpoodr.c@237
OSSIPC: IPCDAT DESTROY QP for qp 0x18bc3ce8 ep 0x18b50e90 drain start time 472374487
*** 2022-01-14T13:03:35.743519+00:00 (CDB$ROOT(1))
*** SESSION ID:(1195.64486) 2022-01-14T13:03:35.743542+00:00
*** SERVICE NAME:() 2022-01-14T13:03:35.743548+00:00
*** MODULE NAME:() 2022-01-14T13:03:35.743552+00:00
*** ACTION NAME:() 2022-01-14T13:03:35.743557+00:00
OSSIPC:IPCDAT DESTROY QP for qp 0x18bc3ce8 ep 0x18b50e90 drain complete num reqs drained 0 drain end time 472374494 drain time 7 msec time slept 0
OSSIPC: IPCDAT DESTROY QP for qp 0x18b5f038 ep 0x18b5ee10 drain start time 472374537
OSSIPC:IPCDAT DESTROY QP for qp 0x18b5f038 ep 0x18b5ee10 drain complete num reqs drained 0 drain end time 472374537 drain time 0 msec time slept 0
Process termination requested for pid 82257 [source = rdbms], [info = 2] [request issued by pid: 199810, uid: 1001]

The trace file records a new timestamp whenever a new SESSION ID is logged under the same process ID.
We can see that a session had already started at 2022-01-12T17:47:46 with SESSION ID (1191.49246), while our actual session of interest is logged at 2022-01-12T18:23:27 with SESSION ID (1191.47408). Match these timestamps against the alert log entries from when we issued the clone command, and note that the serial# differs between the two sessions.
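Since the trace interleaves several sessions under one process, it is easy to read the wrong stanza. A small sketch that lists every SESSION ID marker with its timestamp makes the "same sid, different serial#" pattern obvious (the trace excerpt is transcribed from the listings above; in practice you would read the .trc file):

```python
import re

# Two SESSION ID markers from the trace, trimmed. SESSION ID is
# <sid>.<serial#>, so 1191.49246 and 1191.47408 are two distinct
# sessions that reused the same server process.
trace = """
*** 2022-01-12T17:47:46.797793+00:00 ((6))
*** SESSION ID:(1191.49246) 2022-01-12T17:47:46.799247+00:00
*** 2022-01-12T18:23:27.299715+00:00 (CDB$ROOT(1))
*** SESSION ID:(1191.47408) 2022-01-12T18:23:27.299745+00:00
"""

sessions = re.findall(r"SESSION ID:\((\d+)\.(\d+)\)\s+(\S+)", trace)
for sid, serial, ts in sessions:
    print(f"sid={sid} serial#={serial} at {ts}")
```

Matching each marker's timestamp against the alert log is then a simple visual diff instead of scrolling through thousands of trace lines.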
SQL> CREATE PLUGGABLE DATABASE PROD01 from QA01@QA01 keystore identified by xxx including shared key;

Pluggable database created.

On this third attempt, the same command that had failed twice created the pluggable database without any issues.