Silent Rollback From an Out Of Place Patching For Oracle 19c GI
My last post covered how to do Out of Place (OOP) Patching for Oracle GI. The next thing I wanted to try was the rollback procedure for this methodology, but as I searched MOS and the Oracle documentation, I couldn't find how to roll back from it using gridSetup.sh.
So the first thing I did was to try to follow MOS document 2419319.1 for an OOP rollback using opatchauto, but I faced the error below:
[root@node2 ~]# . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid

[root@node2 ~]# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /u01/app/19.8.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/node2/crsconfig/crsdowngrade_node2_2020-10-08_11-01-27AM.log
2020/10/08 11:01:29 CLSRSC-416: Failed to retrieve old Grid Infrastructure configuration data during downgrade
Died at /u01/app/19.8.0.0/grid/crs/install/crsdowngrade.pm line 760.
The command '/u01/app/19.8.0.0/grid/perl/bin/perl -I/u01/app/19.8.0.0/grid/perl/lib -I/u01/app/19.8.0.0/grid/crs/install -I/u01/app/19.8.0.0/grid/xag /u01/app/19.8.0.0/grid/crs/install/rootcrs.pl -downgrade' execution failed
Looking at the log, it mentions that there is no previous GI configuration for the CRS downgrade to fall back to. This is because I did an OOP patching to 19.8 from the source 19.3 binaries, so a previous version of the GI HOME was never in place prior to 19.8:

CLSRSC-416: Failed to retrieve old Grid Infrastructure configuration data during downgrade
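If you want to confirm this from the log yourself, a quick grep of the crsdowngrade log (using the log path from the output above) pulls out the same message. This is just a convenience check of my own, not part of any documented procedure:

[root@node2 ~]# grep "CLSRSC-416" /u01/app/grid/crsdata/node2/crsconfig/crsdowngrade_node2_2020-10-08_11-01-27AM.log
2020/10/08 11:01:29 CLSRSC-416: Failed to retrieve old Grid Infrastructure configuration data during downgrade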
Right after the error, I checked that everything was still OK with the 19.8 GI.
[root@node2 ~]# crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[root@node2 ~]# crsctl query crs releasepatch
Oracle Clusterware release patch level is [441346801] and the complete list of patches [31281355 31304218 31305087 31335188 ] have been applied on the local node. The release patch string is [19.8.0.0.0].

[root@node2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [441346801].
My next train of thought was that maybe I could use the same procedure I had used to switch from the 19.6 to the 19.8 GI. But since I had already detached the 19.6 GI HOME, I first had to reattach it to the inventory.
[grid@node1 ~]$ /u01/app/19.8.0.0/grid/oui/bin/runInstaller -attachhome \
-silent ORACLE_HOME="/u01/app/19.6.0.0/grid" \
ORACLE_HOME_NAME="OraGI196Home"
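To double-check that the attach worked, you can grep the central inventory. This is my own quick sanity check, assuming the default /u01/app/oraInventory location; the 19.6 home should show up registered again:

[grid@node1 ~]$ grep "19.6.0.0" /u01/app/oraInventory/ContentsXML/inventory.xml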
Again, I proceeded to unset my Oracle environment variables and set the 19.6 GI HOME.
[grid@node1 ~]$ unset ORACLE_BASE
[grid@node1 ~]$ unset ORACLE_HOME
[grid@node1 ~]$ unset ORA_CRS_HOME
[grid@node1 ~]$ unset ORACLE_SID
[grid@node1 ~]$ unset TNS_ADMIN
[grid@node1 ~]$ env | egrep "ORA|TNS" | wc -l
0
[grid@node1 ~]$ export ORACLE_HOME=/u01/app/19.6.0.0/grid
[grid@node1 ~]$ cd $ORACLE_HOME
Once I had reattached the 19.6 GI HOME and unset my variables, I went ahead and tried to do the switch, but I got a lot of permission errors. Be aware that you have to reattach the GI_HOME on all the nodes; the attachhome command shown earlier is just an example for node 1.
[grid@node1 grid]$ pwd
/u01/app/19.6.0.0/grid
[grid@node1 grid]$ ./gridSetup.sh -switchGridHome -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2020-10-08_11-37-23AM.log
Could not backup file /u01/app/19.6.0.0/grid/rootupgrade.sh to /u01/app/19.6.0.0/grid/rootupgrade.sh.ouibak
Could not backup file /u01/app/19.6.0.0/grid/perl/lib/5.28.1/x86_64-linux-thread-multi/perllocal.pod to /u01/app/19.6.0.0/grid/perl/lib/5.28.1/x86_64-linux-thread-multi/perllocal.pod.ouibak
...
[FATAL] Failed to restore the saved templates to the Oracle home being cloned. Aborting the clone operation.
I realized that I had forgotten to change the ownership back to grid:oinstall, as certain files and directories were still owned by root. I proceeded to change the ownership of the 19.6 GI HOME on both nodes.
[root@node1 ~]# cd /u01/app/19.6.0.0
[root@node1 19.6.0.0]# chown -R grid:oinstall ./grid

[root@node2 ~]# cd /u01/app/19.6.0.0
[root@node2 19.6.0.0]# chown -R grid:oinstall ./grid
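Before retrying the switch, a quick find (again, my own sanity check, not a documented step) can confirm that nothing under the 19.6 home is still owned by root:

[root@node1 ~]# find /u01/app/19.6.0.0/grid -user root | head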
After I had changed the ownership, I unset the Oracle variables again and ran gridSetup.sh as the grid owner. This is the same command that I used for the 19.8 patching, but now from the 19.6 GI HOME.
[grid@node1 grid]$ pwd
/u01/app/19.6.0.0/grid
[grid@node1 grid]$ ./gridSetup.sh -switchGridHome -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2020-10-08_11-40-15AM.log

As a root user, execute the following script(s):
        1. /u01/app/19.6.0.0/grid/root.sh

Execute /u01/app/19.6.0.0/grid/root.sh on the following nodes:
[node1, node2]

Run the scripts on the local node first. After successful completion, run the scripts in sequence on all other nodes.

Successfully Setup Software.
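Note that at this point nothing has actually switched yet; my understanding is that with -switchGridHome the stack keeps running from the 19.8 home until root.sh restarts it on each node. You can see that with a quick query (output abridged to the relevant line, which matches the 19.8 output earlier):

[grid@node1 ~]$ crsctl query crs releasepatch
... The release patch string is [19.8.0.0.0].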
Now the only thing that I needed to do after gridSetup.sh finished was to run the root.sh script on each node.
[root@node1 ~]# /u01/app/19.6.0.0/grid/root.sh
Check /u01/app/19.6.0.0/grid/install/root_node1_2020-10-08_12-01-01-027843996.log for the output of root script

[root@node2 ~]# /u01/app/19.6.0.0/grid/root.sh
Check /u01/app/19.6.0.0/grid/install/root_node2_2020-10-08_12-09-00-516251584.log for the output of root script
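root.sh can take several minutes per node; if you want to follow along, you can tail the log file it prints (path taken straight from the output above) in another session. Purely optional:

[root@node1 ~]# tail -f /u01/app/19.6.0.0/grid/install/root_node1_2020-10-08_12-01-01-027843996.log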
The last thing I did was verify that everything was running correctly from the 19.6 GI HOME.
[grid@node1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2701864972] and the complete list of patches [30489227 30489632 30557433 30655595 ] have been applied on the local node. The release patch string is [19.6.0.0.0].

[grid@node1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2701864972].

[grid@node1 ~]$ ./rac_status.sh -a

                Cluster rene-ace-c

      Type       |      Name       |  node1  |  node2  |
---------------------------------------------------------------------
  asm            | asm             | Online  | Online  |
  asmnetwork     | asmnet1         | Online  | Online  |
  chad           | chad            | Online  | Online  |
  cvu            | cvu             | Online  |    -    |
  dg             | DATA            | Online  | Online  |
  dg             | RECO            | Online  | Online  |
  network        | net1            | Online  | Online  |
  ons            | ons             | Online  | Online  |
  qosmserver     | qosmserver      | Online  |    -    |
  vip            | node1           | Online  |    -    |
  vip            | node2           |    -    | Online  |
  vip            | scan1           |    -    | Online  |
  vip            | scan2           | Online  |    -    |
  vip            | scan3           | Online  |    -    |
---------------------------------------------------------------------
  x : Resource is disabled
    : Has been restarted less than 24 hours ago

     Listener    |      Port       |  node1  |  node2  |   Type   |
------------------------------------------------------------------------------------
  ASMNET1LSNR_ASM| TCP:1525        | Online  | Online  | Listener |
  LISTENER       | TCP:1521        | Online  | Online  | Listener |
  LISTENER_SCAN1 | TCP:1521        |    -    | Online  |   SCAN   |
  LISTENER_SCAN2 | TCP:1521        | Online  |    -    |   SCAN   |
  LISTENER_SCAN3 | TCP:1521        | Online  |    -    |   SCAN   |
------------------------------------------------------------------------------------

       DB        |     Version     |  node1  |  node2  |  DB Type   |
---------------------------------------------------------------------------------------
  renedev        | 19.6.0.0 (1)    |    -    |  Open   | SINGLE (P) |
  reneqa         | 19.6.0.0 (2)    |  Open   |  Open   |  RAC (P)   |
---------------------------------------------------------------------------------------
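One last check that I like to do (my own habit, not from any MOS note): on Linux, /etc/oracle/olr.loc records the crs_home of the active GI stack, so after the rollback it should point at the 19.6 home. The exact output may vary, but you should see something like this:

[grid@node1 ~]$ grep crs_home /etc/oracle/olr.loc
crs_home=/u01/app/19.6.0.0/grid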
I hope these two posts help you out if you are trying this methodology. Should you face an error, be sure to let me know and I'll try to help out.