This is my first post of 2016. I hope to post more and contribute to the Oracle community!
We will see how to convert a single instance Oracle 12c database to a RAC database by following the procedure below.
We have a single instance database running on a server where cluster software is already installed and configured.
You can use this link to create a single instance (standalone) 12c database. Change the compatible value to 12.1.0.0.0 and check the relevant path settings. If you are using ASM, then use the ASM path in the CREATE DATABASE command accordingly.
If you would like the database to be a container database, i.e. if you want to plug other databases into this main database, then set the following parameter in the parameter file:
enable_pluggable_database=true
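For reference, the compatible and ASM-related settings mentioned above could look like this in the parameter file (a hedged sketch; the +DATA disk group name is illustrative, and db_create_file_dest is just one way of pointing datafile creation at ASM):

*.compatible='12.1.0.0.0'
*.db_create_file_dest='+DATA'        # only when the datafiles are to live in ASM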
The existing setup is as follows.
Oracle database version: 12.1.0.1.0
Oracle clusterware version: 12.1.0.1.0
ASM version: 12.1.0.1.0
Non-RAC DB & instance name: NONPLUG
Servers in cluster: 12r1-rac1.selvapc.com (12r1-rac1) & 12r1-rac2.selvapc.com (12r1-rac2)
Step 1: Environment Check
I have a single instance, non-container 12c database with the following properties.
SQL> select name, instance_name from v$database, v$instance;

NAME      INSTANCE_NAME
--------- ----------------
NONPLUG   nonplug

SQL> sho parameter enable_pluggable

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_pluggable_database            boolean     FALSE

SQL> sho parameter cluster

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1
cluster_interconnects                string

Check that we have ASM software of a version equal to or higher than the RDBMS that we are converting.
Here in our case we are good.
SQL> select software_version from v$asm_client;

SOFTWARE_VERSION
------------------------------------------------------------
12.1.0.1.0

The command below shows the clusterware version on both of the nodes involved.
[oracle@12r1-rac1 ~]$ crsctl query crs softwareversion 12r1-rac1
Oracle Clusterware version on node [12r1-rac1] is [12.1.0.1.0]
[oracle@12r1-rac1 ~]$ crsctl query crs softwareversion 12r1-rac2
Oracle Clusterware version on node [12r1-rac2] is [12.1.0.1.0]
[oracle@12r1-rac1 ~]$

Step 2: DB files
Restore the backup of the datafiles, redo logs and control file to a shared location on the cluster.
Here, as we already have the single instance database running on storage that is shared by the nodes, we can skip this step. If the database is not on shared storage, a migration from non-ASM to ASM has to be performed first: RMAN can be used, or Data Pump if the database is small in size. The choice is yours; one possible RMAN sketch follows.
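This is only a minimal sketch of the RMAN image-copy approach, assuming the +DATA disk group already exists; control files, online redo logs and temp files still need to be relocated separately.

$ rman target /
RMAN> # take image copies of all datafiles into the +DATA disk group
RMAN> backup as copy database format '+DATA';
RMAN> shutdown immediate
RMAN> startup mount
RMAN> # repoint the control file at the ASM copies, then open
RMAN> switch database to copy;
RMAN> alter database open;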
Step 3: PFILE
Create pfile from memory
SQL> create pfile='/oracle/app/product/12.1/dbs/initnonplug1.ora' from memory;

File created.

SQL>

Add the following lines to the pfile that we have for the single instance database.
*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_management=AUTO
nonplug1.undo_tablespace=undotbs1 # (undo tablespace which already exists)
nonplug1.instance_name=nonplug1
nonplug1.instance_number=1
nonplug1.thread=1
nonplug1.local_listener=LISTENER_12r1rac1 # <LISTENERNAME>_<HOSTNAME1>
nonplug2.instance_name=nonplug2
nonplug2.instance_number=2
nonplug2.local_listener=LISTENER_12r1rac2 # <LISTENERNAME>_<HOSTNAME2>
nonplug2.thread=2
nonplug2.undo_tablespace=UNDOTBS2
nonplug2.cluster_database = TRUE
nonplug2.cluster_database_instances = 2
Edit the created pfile to add the above parameters and copy it to the other cluster node as shown below.
[oracle@12r1-rac1 dbs]$ hostname
12r1-rac1.selvapc.com
[oracle@12r1-rac1 dbs]$ vi initnonplug1.ora
[oracle@12r1-rac1 dbs]$ scp initnonplug1.ora oracle@12r1-rac2.selvapc.com:/oracle/app/product/12.1/dbs/initnonplug2.ora
initnonplug1.ora                              100% 1039     1.0KB/s   00:00

Step 4: Create redo and undo for second instance
SQL> alter database add logfile thread 2 group 3 ('+DATA') size 100m;

Database altered.

SQL> alter database add logfile thread 2 group 4 ('+DATA') size 100m;

Database altered.

SQL> alter database enable public thread 2;

Database altered.

SQL> create undo tablespace UNDOTBS2 datafile '+DATA' size 200M;

Tablespace created.

Step 5: Listener check
If the single instance database listener is running and using the same port as the RAC listener, stop it and configure the listener through NETCA using the cluster RDBMS home.
I now have one local listener on each server (LISTENER_12R1RAC1 and LISTENER_12R1RAC2) and three SCAN listeners in total. The local listeners run on port 1522 and the SCAN listeners on the default port 1521.
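The listeners can also be sanity-checked from the command line; this is a hedged sketch using lsnrctl against the local listener names in this setup and srvctl for the SCAN listeners:

[oracle@12r1-rac1 ~]$ lsnrctl status LISTENER_12R1RAC1      # local listener on node 1
[oracle@12r1-rac2 ~]$ lsnrctl status LISTENER_12R1RAC2      # local listener on node 2
[oracle@12r1-rac1 ~]$ srvctl config scan_listener           # SCAN listener configuration and ports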
[oracle@12r1-rac1 ~]$ ps -ef|grep inhe
oracle    3449     1  0 Jan01 ?    00:00:00 /u01/oracle/app/product/12.1/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
oracle    3459     1  0 Jan01 ?    00:00:00 /u01/oracle/app/product/12.1/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
oracle   11313     1  0 00:32 ?    00:00:00 /oracle/app/product/12.1/bin/tnslsnr LISTENER_12R1RAC1 -inherit
oracle   11474  9469  0 00:35 pts/0    00:00:00 grep inhe
[oracle@12r1-rac2 ~]$ ps -ef|grep inhe
oracle    3188     1  0 Jan01 ?    00:00:00 /u01/oracle/app/product/12.1/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
oracle    9969     1  0 00:34 ?    00:00:00 /oracle/app/product/12.1/bin/tnslsnr LISTENER_12R1RAC2 -inherit
oracle   10042  5415  0 00:36 pts/0    00:00:00 grep inhe
[oracle@12r1-rac2 ~]$

On startup you might get the error below:
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_12r1rac1'
To avoid this, add entries to tnsnames.ora as shown below on server 12r1-rac1 (LISTENER_12r1rac1) and server 12r1-rac2 (LISTENER_12r1rac2).
LISTENER_12r1rac1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 12r1-rac1)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = nonplug)
)
)
LISTENER_12r1rac2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 12r1-rac2)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = nonplug)
)
)
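Once the entries are in place, name resolution can be sanity-checked with tnsping from each node (a hedged example using the alias names above):

[oracle@12r1-rac1 ~]$ tnsping LISTENER_12r1rac1
[oracle@12r1-rac2 ~]$ tnsping LISTENER_12r1rac2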
Step 6: Shut down the database
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

Step 7: Password file and instance startup
Recreate the password file so that each instance has its own copy, set the new environment, and start up the database on each node.
[oracle@12r1-rac1 dbs]$ export ORACLE_SID=nonplug1
[oracle@12r1-rac1 dbs]$ export ORACLE_HOME=/oracle/app/product/12.1
[oracle@12r1-rac1 dbs]$ orapwd file=$ORACLE_HOME/dbs/orapwnonplug1 password=xxxxxxx
[oracle@12r1-rac1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jan 2 01:16:19 2016
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2296576 bytes
Variable Size             348128512 bytes
Database Buffers          713031680 bytes
Redo Buffers                5480448 bytes
Database mounted.
Database opened.
SQL>
[oracle@12r1-rac2 ~]$ export ORACLE_SID=nonplug2
[oracle@12r1-rac2 ~]$ export ORACLE_HOME=/oracle/app/product/12.1
[oracle@12r1-rac2 ~]$ orapwd file=$ORACLE_HOME/dbs/orapwnonplug2 password=xxxxxxx
[oracle@12r1-rac2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jan 2 01:19:38 2016
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2296576 bytes
Variable Size             348128512 bytes
Database Buffers          713031680 bytes
Redo Buffers                5480448 bytes
Database mounted.
Database opened.
SQL>

Step 8: Register RAC instances with CRS
Register the database and instances with CRS using the SRVCTL utility, then stop and start the database with SRVCTL to complete the conversion process.
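Optionally, srvctl config database can be run before and after the registration to see what CRS knows about the database (a hedged sketch):

[oracle@12r1-rac1 ~]$ srvctl config database                # lists databases registered with CRS
[oracle@12r1-rac1 ~]$ srvctl config database -d nonplug     # detailed configuration once registered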
[oracle@12r1-rac1 dbs]$ srvctl add database -d nonplug -o /oracle/app/product/12.1
[oracle@12r1-rac1 dbs]$ srvctl add instance -d nonplug -i nonplug1 -n 12r1-rac1
[oracle@12r1-rac1 dbs]$ srvctl add instance -d nonplug -i nonplug2 -n 12r1-rac2
[oracle@12r1-rac1 ~]$ srvctl stop database -d nonplug
PRCC-1016 : nonplug was already stopped
[oracle@12r1-rac1 ~]$ srvctl status database -d nonplug
Instance nonplug1 is not running on node 12r1-rac1
Instance nonplug2 is not running on node 12r1-rac2
[oracle@12r1-rac1 ~]$ srvctl start database -d nonplug
[oracle@12r1-rac1 ~]$ srvctl status database -d nonplug
Instance nonplug1 is running on node 12r1-rac1
Instance nonplug2 is running on node 12r1-rac2
[oracle@12r1-rac1 ~]$

Step 9: Create spfile
Create the spfile and restart the database. Best practice is to keep the spfile in ASM storage, so let's create it there with the following command.
SQL> create spfile='+DATA/spfilenonplug.ora' from pfile;

File created.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
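For the instances to pick up the spfile from ASM, each node's init<SID>.ora under $ORACLE_HOME/dbs would typically contain just a pointer to it, and the CRS database resource can be told about it as well. This is a hedged sketch using the names from this setup:

# contents of initnonplug1.ora on 12r1-rac1 (and initnonplug2.ora on 12r1-rac2)
SPFILE='+DATA/spfilenonplug.ora'

# record the spfile location in the CRS resource so srvctl-driven startups use it
[oracle@12r1-rac1 ~]$ srvctl modify database -d nonplug -p '+DATA/spfilenonplug.ora'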
[oracle@12r1-rac1 ~]$ srvctl stop database -d nonplug
[oracle@12r1-rac1 ~]$ ps -ef|grep pmon
oracle    3156     1  0 09:01 ?        00:00:01 asm_pmon_+ASM1
oracle   27179 21479  0 12:08 pts/0    00:00:00 grep pmon
[oracle@12r1-rac1 ~]$ srvctl start database -d nonplug
[oracle@12r1-rac1 ~]$ ps -ef|grep pmon
oracle    3156     1  0 09:01 ?        00:00:01 asm_pmon_+ASM1
oracle   27229     1  0 12:09 ?        00:00:00 ora_pmon_nonplug1
oracle   27716 21479  0 12:10 pts/0    00:00:00 grep pmon
[oracle@12r1-rac1 ~]$

Step 10: Post conversion verification and checks
Perform the following verification checks.
SQL> select * from v$active_instances;

INST_NUMBER INST_NAME                                    CON_ID
----------- ---------------------------------------- ----------
          1 12r1-rac1.selvapc.com:nonplug1                    0
          2 12r1-rac2.selvapc.com:nonplug2                    0

Check whether all the resources are online with the following commands.
[oracle@12r1-rac2 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       12r1-rac1                STABLE
               ONLINE  ONLINE       12r1-rac2                STABLE
ora.asm
               ONLINE  ONLINE       12r1-rac1                Started,STABLE
               ONLINE  ONLINE       12r1-rac2                Started,STABLE
ora.net1.network
               ONLINE  ONLINE       12r1-rac1                STABLE
               ONLINE  ONLINE       12r1-rac2                STABLE
ora.ons
               ONLINE  ONLINE       12r1-rac1                STABLE
               ONLINE  ONLINE       12r1-rac2                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12r1-rac1.vip
      1        ONLINE  ONLINE       12r1-rac1                STABLE
ora.12r1-rac2.vip
      1        ONLINE  ONLINE       12r1-rac2                STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       12r1-rac2                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       12r1-rac2                STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       12r1-rac2                STABLE
ora.cvu
      1        ONLINE  ONLINE       12r1-rac2                STABLE
ora.nonplug.db
      1        ONLINE  ONLINE       12r1-rac1                Open,STABLE
      2        ONLINE  ONLINE       12r1-rac2                Open,STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       12r1-rac2                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       12r1-rac2                STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       12r1-rac2                STABLE
--------------------------------------------------------------------------------

[oracle@12r1-rac2 ~]$ crs_stat -t    # deprecated since Oracle 11gR2, but still works
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    12r1-rac1
ora....ac1.ons application    ONLINE    ONLINE    12r1-rac1
ora....ac1.vip ora....t1.type ONLINE    ONLINE    12r1-rac1
ora....SM2.asm application    ONLINE    ONLINE    12r1-rac2
ora....ac2.ons application    ONLINE    ONLINE    12r1-rac2
ora....ac2.vip ora....t1.type ONLINE    ONLINE    12r1-rac2
ora.DATA.dg    ora....up.type ONLINE    ONLINE    12r1-rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    12r1-rac2
ora....N2.lsnr ora....er.type ONLINE    ONLINE    12r1-rac2
ora....N3.lsnr ora....er.type ONLINE    ONLINE    12r1-rac2
ora.asm        ora.asm.type   ONLINE    ONLINE    12r1-rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    12r1-rac2
ora....network ora....rk.type ONLINE    ONLINE    12r1-rac1
ora.nonplug.db ora....se.type ONLINE    ONLINE    12r1-rac1
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    12r1-rac1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    12r1-rac2
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    12r1-rac2
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    12r1-rac2
[oracle@12r1-rac2 ~]$
If the whole stack ever needs to be bounced, the cluster can be stopped on all nodes with:

crsctl stop cluster -all
Verify the cluster with the cluvfy utility using the following command.
[oracle@12r1-rac1 ~]$ cluvfy comp crs -n all -verbose

Verifying CRS integrity
Checking CRS integrity...
Clusterware version consistency passed.
The Oracle Clusterware is healthy on node "12r1-rac1"
The Oracle Clusterware is healthy on node "12r1-rac2"
CRS integrity check passed
Verification of CRS integrity was successful.
[oracle@12r1-rac1 ~]$

Other optional checks include reviewing the cluster, database and instance logs after the installation and any subsequent operations, to make sure there are no unexpected issues.
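As an additional quick check, the status of both instances can be confirmed from any node via gv$instance (a hedged example; the exact column formatting will differ):

SQL> select inst_id, instance_name, host_name, status from gv$instance;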
Also configure backups and all monitoring scripts as per your organisation's standards.
Go through the checklist once again to make sure nothing has been missed.
Happy RACing!!