You can refer to the link below to learn more about the cloning process and its advantages.
Note that the link refers to Oracle version 11.2; in this post I'll take you through the process with version 19.7.
The clone.pl script referred to in that document is deprecated from Oracle 19c onwards, so we will use gridSetup.sh to create the new grid home.
OS: Red Hat Enterprise Linux Server release 7.7 (Maipo)
Oracle: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.7.0.0.0
Step 1: Create the Oracle Grid Home Gold image
Set the environment variables to the +ASM instance and run the command below on the source server where the Oracle Grid Infrastructure software is already installed.
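For example, assuming the grid home on the source is registered in /etc/oratab, the environment can be set with oraenv (the +ASM SID and the Oracle base shown here are from this post's environment; yours may differ):

$ . oraenv
ORACLE_SID = [oracle] ? +ASM
The Oracle base has been set to /oracle/grid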
In the 11.2 reference document linked above, Oracle suggests removing unnecessary files and creating an exclude list before copying the Oracle home. In recent versions these steps are handled by Oracle itself, and the zip file is also created by the -creategoldimage command.
$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /oracle/media/Grid_GoldImg/19.7_goldImage -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /oracle/media/Grid_GoldImg/19.7_goldImage/grid_home_2020-07-27_11-24-58PM.zip
Once the software zip is created, transfer it to the target server. All the steps below are to be executed on the target server.
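For example, the gold image can be copied over with scp (the target hostname and destination directory here are placeholders; use whatever transfer method and paths suit your environment):

$ scp /oracle/media/Grid_GoldImg/19.7_goldImage/grid_home_2020-07-27_11-24-58PM.zip oracle@target-server:/oracle/media/Grid_GoldImg/19.7_goldImage/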
Step 2: Satisfy the installation prerequisites
As with a normal Oracle database server installation, you should satisfy all the Oracle installation prerequisites. Refer to the Database Installation Guide for Linux for the full and proper prerequisites. For easy reference, please satisfy all the points mentioned in the document here: Prereq-linux-19c
Along with the prereq document, the user and group requirements need to be satisfied as well. Simple commands for reference are below (a sketch for creating the required groups follows them); modify them according to your environment.
#
# CREATE ORACLE ACCOUNT
# useradd -u 600 -g oinstall -s /bin/bash -d /home/oracle -m oracle
#
#
# MODIFY ORACLE ACCOUNT
# usermod -a -G dba,oper,asmadmin,asmoper,asmdba oracle
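The useradd/usermod commands above assume the OS groups already exist on the target. If they do not, they can be created first; the group IDs below are only illustrative, so pick values that match your own standards:

# groupadd -g 600 oinstall
# groupadd -g 601 dba
# groupadd -g 602 oper
# groupadd -g 603 asmadmin
# groupadd -g 604 asmdba
# groupadd -g 605 asmoper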
Step 3: Create the grid home directory and unzip the transferred Gold Image zip file
$ mkdir -p /oracle/grid/19.0.0
$ cd /oracle/grid/19.0.0
$ unzip /oracle/media/Grid_GoldImg/19.7_goldImage/grid_home_2020-07-27_11-24-58PM.zip
Archive:  /oracle/media/Grid_GoldImg/19.7_goldImage/grid_home_2020-07-27_11-24-58PM.zip
   creating: cv/
   creating: cv/admin/
  inflating: cv/admin/cvunetquery.bin
  inflating: cv/admin/cvusys.sql
  inflating: cv/admin/cvu_config
  inflating: cv/admin/odnsdlite.bin
  inflating: cv/admin/cvunetquery
  inflating: cv/admin/odnsdlite
   creating: cv/cvdata/
   creating: cv/cvdata/101/
  inflating: cv/cvdata/101/crsinst_prereq.xml
  inflating: cv/cvdata/101/dbcfg_prereq.xml
  inflating: cv/cvdata/101/dbinst_prereq.xml
....
....
  inflating: install/.img.bin
finishing deferred symbolic links:
  bin/lbuilder           -> ../nls/lbuilder/lbuilder
  lib/libocci.so         -> libocci.so.19.1
  lib/libodm19.so        -> libodmd19.so
  lib/libagtsh.so        -> libagtsh.so.1.0
....
....
  javavm/lib/security/README.txt -> ../../../javavm/jdk/jdk8/lib/security/README.txt
  javavm/lib/sunjce_provider.jar -> ../../javavm/jdk/jdk8/lib/sunjce_provider.jar
  javavm/lib/security/java.security -> ../../../javavm/jdk/jdk8/lib/security/java.security
Step 4: Install cvuqdisk rpm
The cvuqdisk RPM is bundled with the Oracle software. Install it as the root user or via sudo.
$ cd cv/rpm/
$ ls -lrt
total 12
-rw-r--r--. 1 oracle oinstall 11620 Apr 11 08:19 cvuqdisk-1.0.10-1.rpm
$ sudo rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Using default group oinstall to install package
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
$
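The output above shows the RPM using the default inventory group oinstall. If your inventory group is different, the Oracle installation guide describes setting the CVUQDISK_GRP environment variable before installing the RPM; the group name below is only an example:

# export CVUQDISK_GRP=dba
# rpm -ivh cvuqdisk-1.0.10-1.rpm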
Step 5: Run runcluvfy.sh
runcluvfy.sh can be run to check whether all the prerequisites are properly satisfied before we run the grid setup.
$ cd
$ /oracle/grid/19.0.0/runcluvfy.sh stage -pre hacfg -verbose > mycluvfy.out
$ tail -f mycluvfy.out
...
...
Pre-check for Oracle Restart configuration was unsuccessful.
Failures were encountered during execution of CVU verification request "stage -pre hacfg".
Verifying OS Kernel Parameter: semopm ...FAILED
xxxxxx: PRVG-1205 : OS kernel parameter "semopm" does not have expected current value on node "xxxxxx" [Expected = "100" ; Current = "32"; Configured = "undefined"].
...
...

The above shows an example of a failed prerequisite check. If all the prerequisites have been fulfilled, the command will not report any failures; otherwise, it will suggest altering settings, installing required RPMs, and so on. Those items should be addressed before proceeding further.
Please check the mycluvfy.out file for failures and fix them.
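As an illustration, the semopm failure shown above can be fixed by adjusting the kernel.sem parameter (its four values are semmsl, semmns, semopm and semmni; the values below are the ones commonly recommended for Oracle, so verify them against your own sizing before applying):

# echo "kernel.sem = 250 32000 100 128" >> /etc/sysctl.conf
# sysctl -p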
Once all the prerequisites are satisfied, output similar to the below will be logged, indicating that the prerequisite checks were successful.
Verifying Root user consistency ...
  Node Name                             Status
  ------------------------------------  ------------------------
  xxxxxx                                passed
Verifying Root user consistency ...PASSED

Pre-check for Oracle Restart configuration was successful.

CVU operation performed:      stage -pre hacfg
Date:                         Jul 29, 2020 5:32:56 AM
CVU home:                     /oracle/grid/19.0.0/
User:                         oracle
$
Step 6: Prepare the response file and run gridSetup.sh
Prepare the response file and run gridSetup.sh. Previously this step used clone.pl, but it has now been replaced by gridSetup.sh, which configures the already compiled grid home with all the patches included.
Example response file grid.rsp
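Since the full response file is too long to reproduce here, below is a trimmed-down sketch of what grid.rsp might contain for an Oracle Restart (HA_CONFIG) installation. It is only illustrative: the parameter names come from the gridsetup.rsp template shipped under $ORACLE_HOME/install/response, while the paths, group names, disk names and password placeholders are assumptions based on this post's environment. Start from the shipped template and adjust it for yours.

oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/oracle/oraInventory
oracle.install.option=HA_CONFIG
ORACLE_BASE=/oracle/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.asm.SYSASMPassword=<sys_password>
oracle.install.asm.monitorPassword=<asmsnmp_password>
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disks=/dev/oracleasm/DATA1,/dev/oracleasm/DATA2
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/*
oracle.install.asm.configureAFD=false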
$ /oracle/grid/19.0.0/gridSetup.sh -silent -responseFile /home/oracle/grid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-30543] Installer has detected that one or more of the selected disk(s) is of size larger than 2 terabyte(TB). Because of this, the Oracle Database compatibility level (COMPATIBLE.RDBMS attribute) of the diskgroup (DATA) will be set to 12.1. This means that, in order to use the diskgroup for data storage, the COMPATIBLE initialization parameter of the Oracle Database should be greater than or equal to 12.1.
   CAUSE: The following disks are of size larger than 2 terabyte(TB): [/dev/oracleasm/DATA1, /dev/oracleasm/DATA2]
[WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended standards.
   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
   ACTION: Provide a password that conforms to the Oracle recommended standards.
[WARNING] [INS-30011] The ASMSNMP password entered does not conform to the Oracle recommended standards.
   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
   ACTION: Provide a password that conforms to the Oracle recommended standards.
[WARNING] [INS-32047] The location (/oracle/oraInventory) specified for the central inventory is not empty.
   ACTION: It is recommended to provide an empty location for the inventory.
The response file for this session can be found at:
 /oracle/grid/19.0.0/install/response/grid_2020-07-29_10-04-33AM.rsp

You can find the log of this install session at:
 /tmp/GridSetupActions2020-07-29_10-04-33AM/gridSetupActions2020-07-29_10-04-33AM.log

As a root user, execute the following script(s):
        1. /oracle/oraInventory/orainstRoot.sh
        2. /oracle/grid/19.0.0/root.sh

Execute /oracle/grid/19.0.0/root.sh on the following nodes:
[XXXXXX]

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /oracle/grid/19.0.0/gridSetup.sh -executeConfigTools -responseFile /home/oracle/grid.rsp [-silent]

Moved the install session logs to:
 /oracle/oraInventory/logs/GridSetupActions2020-07-29_10-04-33AM
$

Make sure to provide a strong password to avoid the password warnings above.
Step 7: Execute the scripts as the root and oracle users, as directed by the installer above
The lines beginning with $ or # in the output below are the commands executed on the terminal.
$ # Switching to root user
$ sudo su -
# /oracle/oraInventory/orainstRoot.sh
Changing permissions of /oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete.
# /oracle/grid/19.0.0/root.sh
Check /oracle/grid/19.0.0/install/root_XXXXXX.dc.honeywell.com_2020-07-29_10-09-01-095734828.log for the output of root script
#
# cat /oracle/grid/19.0.0/install/root_XXXXXX.dc.honeywell.com_2020-07-29_10-09-01-095734828.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/grid/19.0.0
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/grid/19.0.0/crs/install/crsconfig_params
The log of current session can be found at:
  /oracle/grid/crsdata/xxxxxx/crsconfig/roothas_2020-07-29_10-09-01AM.log
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node xxxxxx successfully pinned.
2020/07/29 10:09:13 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

xxxxxx     2020/07/29 10:13:26     /oracle/grid/crsdata/xxxxxx/olr/backup_20200729_101326.olr     3030781831
2020/07/29 10:13:27 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
# exit
$ # Running the next script as Oracle user
$ /oracle/grid/19.0.0/gridSetup.sh -executeConfigTools -responseFile /home/oracle/grid.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/oracle/oraInventory/logs/GridSetupActions2020-07-29_10-16-06AM

You can find the log of this install session at:
 /oracle/oraInventory/logs/UpdateNodeList2020-07-29_10-16-06AM.log
Successfully Configured Software.
$

By now /etc/oratab will have the +ASM entry added, since the configuration was successful (all comment lines are removed in the output below). You can also see that all the HAS resources are running.
$ cat /etc/oratab
#Backup file is  /oracle/grid/crsdata/xxxxxxx/output/oratab.bak.xxxxxxx.oracle line added by Agent
#
+ASM:/oracle/grid/19.0.0:N              # line added by Agent
$
$ . oraenv
ORACLE_SID = [oracle] ? +ASM
The Oracle base has been set to /oracle/grid
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       xxxxxxx                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xxxxxxx                  STABLE
ora.asm
               ONLINE  ONLINE       xxxxxxx                  Started,STABLE
ora.ons
               OFFLINE OFFLINE      xxxxxxx                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       xxxxxxx                  STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       xxxxxxx                  STABLE
--------------------------------------------------------------------------------
$

We have successfully set up the grid home now. All the steps below are optional and can be done according to your environment.
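Before moving on to the optional steps, the Oracle Restart resources can also be cross-checked individually with srvctl if you prefer, for example:

$ srvctl status asm
$ srvctl config asm
$ srvctl status listener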
Step 8: (Optional) Alter or create additional diskgroups
Now we can alter the DATA diskgroup if more disks are available for it, and we can also create other diskgroups such as TEMP, LOG, etc.
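For instance, an extra disk can be added to the existing DATA diskgroup from SQL*Plus connected to the ASM instance with the SYSASM privilege; the disk path below is a placeholder for whatever device you have presented to the server:

$ sqlplus / as sysasm
SQL> alter diskgroup DATA add disk '/dev/oracleasm/DATA3';
SQL> exit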
Use asmca for creating disk groups as below
$ asmca -silent -createDiskGroup -diskGroupName TEMP -diskList /dev/oracleasm/TEMP1,/dev/oracleasm/TEMP2 -redundancy EXTERNAL -au_size 4

[DBT-30001] Disk groups created successfully. Check /oracle/grid/cfgtoollogs/asmca/asmca-200729AM105809.log for details.

$
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       xxxxxxx                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xxxxxxx                  STABLE
ora.LOG1.dg
               ONLINE  ONLINE       xxxxxxx                  STABLE
ora.LOG2.dg
               ONLINE  ONLINE       xxxxxxx                  STABLE
ora.RECO.dg
               ONLINE  ONLINE       xxxxxxx                  STABLE
ora.TEMP.dg
               ONLINE  ONLINE       xxxxxxx                  STABLE
ora.asm
               ONLINE  ONLINE       xxxxxxx                  Started,STABLE
ora.ons
               OFFLINE OFFLINE      xxxxxxx                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       xxxxxxx                  STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       xxxxxxx                  STABLE
--------------------------------------------------------------------------------
$
Step 9: (Optional) Change compatibility of diskgroups
The database compatibility (compatible.rdbms) of a diskgroup added via asmca typically defaults to 10.1 and can be set to a higher value if required.
SQL> select group_number,name,compatibility,database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME                           COMPATIBILITY        DATABASE_COMPATIBILI
------------ ------------------------------ -------------------- --------------------
           1 DATA                           19.0.0.0.0           12.1.0.0.0
           2 LOG1                           19.0.0.0.0           10.1.0.0.0
           3 LOG2                           19.0.0.0.0           10.1.0.0.0
           5 TEMP                           19.0.0.0.0           12.1.0.0.0
           4 RECO                           19.0.0.0.0           12.1.0.0.0

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0
$ asmcmd setattr -G DATA compatible.rdbms 12.1.0.2.0
$ asmcmd setattr -G RECO compatible.rdbms 12.1.0.2.0
$ asmcmd setattr -G LOG1 compatible.rdbms 12.1.0.2.0
$ asmcmd setattr -G LOG2 compatible.rdbms 12.1.0.2.0
$ asmcmd setattr -G TEMP compatible.rdbms 12.1.0.2.0
$ sqlplus / as sysdba
SQL> select group_number,name,compatibility,database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME                           COMPATIBILITY        DATABASE_COMPATIBILI
------------ ------------------------------ -------------------- --------------------
           1 DATA                           19.0.0.0.0           12.1.0.2.0
           2 LOG1                           19.0.0.0.0           12.1.0.2.0
           3 LOG2                           19.0.0.0.0           12.1.0.2.0
           5 TEMP                           19.0.0.0.0           12.1.0.2.0
           4 RECO                           19.0.0.0.0           12.1.0.2.0

SQL> exit
$

Now the grid home is ready for operation. You can see how to clone the Oracle RDBMS home in my next post here.
References:
How to Clone an 11.2 Grid Infrastructure Home and Clusterware (Doc ID 1413846.1)
19.x: Clone.pl Script Is Deprecated and How to Clone Using Gold-Image (Doc ID 2565006.1)
Happy cloning!!!
Comments:

Excellent work
Thank you 😊

excellent post
Thanks for the appreciation!

Hi Selva, a few questions - were you required to shut down the grid on the source before creating the gold image, as advised in the link https://docs.oracle.com/en/database/oracle/oracle-database/19/cwadd/cloning-oracle-clusterware.html#GUID-502ABA1D-8246-4A13-BE72-3E806B77AB8F? Does gridSetup.sh on the target take care of the OLR and voting disk migrations?

Hi Indraneil - Shutting down the grid is not required, but you might encounter an error if a file changes while it is being zipped. This happens mostly with log files or the $ORACLE_HOME/bin/oracle binary, so to avoid it one needs to shut down the grid. That said, 8 out of 10 times gold image creation works properly while the clusterware is up.
This is just cloning of the binaries, which eliminates applying patches and one-offs compared with the traditional method. You would still be required to run gridSetup.sh, which will take care of the OCR and voting disks since you provide those details while running it.
Thanks!

Does it take care of migrating the OLR/VD from the source to the target ASM?

The OLR is local; when you are cloning to another server you won't need it. The OCR/VD will be configured when running gridSetup.sh after cloning the binaries.