While upgrading from GI 12c to 19c +asm1(19c) does not see +asm2(12c)

A few weeks ago I was working on a GI upgrade from 12.2 to 19.6, but after running the upgrade on node 1, +ASM1 (19c) could not see +ASM2 (12c).

Below you will see what I did and how it was fixed.

I started by running the prechecks, and everything passed successfully.

[grid@vxe-dev-rac-01 bin]$ ./cluvfy stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.2.0.1/grid -dest_crshome /u01/app/19.6.0.0/grid -dest_version 19.1.0.0.0 -fixupnoexec -verbose
...
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  reneace02  no                        yes                       passed
  reneace01  no                        yes                       passed
Verifying Node Application Existence ...PASSED
Verifying Check incorrectly sized ASM Disks ...PASSED
Verifying ASM disk group free space ...PASSED
Verifying Network configuration consistency checks ...PASSED
Verifying File system mount options for path GI_HOME ...PASSED
Verifying /boot mount ...PASSED
Verifying OLR Integrity ...PASSED
Verifying Verify that the ASM instance was configured using an existing ASM parameter file. ...PASSED
Verifying User Equivalence ...PASSED
Verifying RPM Package Manager database ...INFORMATION (PRVG-11250)
Verifying Network interface bonding status of private interconnect network interfaces ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying DefaultTasksMax parameter ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED
Verifying Systemd login manager IPC parameter ...PASSED
Verifying Kernel retpoline support ...PASSED

Pre-check for cluster services setup was successful.
Verifying RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.


CVU operation performed:      stage -pre crsinst
Date:                         Feb 20, 2020 11:28:43 AM
CVU home:                     /u01/temp/19.6.0.0/cvu/bin/../
User:                         grid

I created the following response file so that I could run the GI upgrade in silent mode:

[grid@reneace01 grid]$ cat /tmp/gridresponse.rsp | grep -v "#"
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=
oracle.install.asm.OSOPER=
oracle.install.asm.OSASM=
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.SCANClientDataFile=
oracle.install.crs.config.gpnp.scanName=
oracle.install.crs.config.gpnp.scanPort=
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.memberClusterManifestFile=
oracle.install.crs.config.clusterName=vxe-dev-cluster
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.gpnp.gnsClientDataFile=
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.sites=
oracle.install.crs.config.clusterNodes=reneace01:reneace01-vip,reneace02:reneace02-vip
oracle.install.crs.config.networkInterfaceList=
oracle.install.crs.configureGIMR=false
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.useIPMI=false
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.SYSASMPassword=
oracle.install.asm.diskGroup.name=ORACRS
oracle.install.asm.diskGroup.redundancy=
oracle.install.asm.diskGroup.AUSize=
oracle.install.asm.diskGroup.FailureGroups=
oracle.install.asm.diskGroup.disksWithFailureGroupNames=
oracle.install.asm.diskGroup.disks=
oracle.install.asm.diskGroup.quorumFailureGroupNames=
oracle.install.asm.diskGroup.diskDiscoveryString=
oracle.install.asm.monitorPassword=
oracle.install.asm.gimrDG.name=
oracle.install.asm.gimrDG.redundancy=
oracle.install.asm.gimrDG.AUSize=4
oracle.install.asm.gimrDG.FailureGroups=
oracle.install.asm.gimrDG.disksWithFailureGroupNames=
oracle.install.asm.gimrDG.disks=
oracle.install.asm.gimrDG.quorumFailureGroupNames=
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsHost=
oracle.install.config.omsPort=
oracle.install.config.emAdminUser=
oracle.install.config.emAdminPassword=
oracle.install.crs.rootconfig.executeRootScript=false
oracle.install.crs.rootconfig.configMethod=
oracle.install.crs.rootconfig.sudoPath=
oracle.install.crs.rootconfig.sudoUserName=
oracle.install.crs.config.batchinfo=
oracle.install.crs.app.applicationAddress=
oracle.install.crs.deleteNode.nodes=

I ran the upgrade and, as you can see below, there were no failures during rootupgrade.sh.

[grid@reneace01 grid]$ pwd
/u01/app/19.6.0.0/grid
[grid@reneace01 grid]$ ./gridSetup.sh -silent -responseFile /tmp/gridresponse.rsp -applyRU /u06/patches/oracle19c/30501910
Preparing the home to patch...
Applying the patch /u06/patches/oracle19c/30501910...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2020-02-26_07-27-27PM/installerPatchActions_2020-02-26_07-27-27PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2020-02-26_07-27-27PM/gridSetupActions2020-02-26_07-27-27PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2020-02-26_07-27-27PM/gridSetupActions2020-02-26_07-27-27PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.6.0.0/grid/install/response/grid_2020-02-26_07-27-27PM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2020-02-26_07-27-27PM/gridSetupActions2020-02-26_07-27-27PM.log


As a root user, execute the following script(s):
        1. /u01/app/19.6.0.0/grid/rootupgrade.sh

Execute /u01/app/19.6.0.0/grid/rootupgrade.sh on the following nodes:
[reneace01, reneace02]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software with warning(s).
As install user, execute the following command to complete the configuration.
        /u01/app/19.6.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/gridresponse.rsp [-silent]
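
As the installer output says, rootupgrade.sh has to be run as root, starting with the local node. Below is a minimal sketch of that step on node 1, using the paths from this upgrade (the script itself prints the location of the rootcrs log, which you can follow from another session):

# As root on node 1 (reneace01)
/u01/app/19.6.0.0/grid/rootupgrade.sh

# From a second session, progress can be followed in the rootcrs log the script reports,
# e.g. /u01/app/grid/crsdata/reneace01/crsconfig/rootcrs_reneace01_<timestamp>.log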


[grid@reneace01 GridSetupActions2020-02-26_07-27-27PM]$ tail -f /u01/app/19.6.0.0/grid/install/root_reneace01.network_2020-02-26_20-09-57-369528654.log
    ORACLE_HOME=  /u01/app/19.6.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.6.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/reneace01/crsconfig/rootcrs_reneace01_2020-02-26_08-10-30PM.log
2020/02/26 20:11:02 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2020/02/26 20:11:02 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/02/26 20:11:02 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2020/02/26 20:11:09 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2020/02/26 20:11:09 CLSRSC-464: Starting retrieval of the cluster configuration data
2020/02/26 20:11:46 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2020/02/26 20:14:49 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2020/02/26 20:17:53 CLSRSC-693: CRS entities validation completed successfully.
2020/02/26 20:18:02 CLSRSC-515: Starting OCR manual backup.
2020/02/26 20:18:18 CLSRSC-516: OCR manual backup successful.
2020/02/26 20:18:28 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2020/02/26 20:18:28 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2020/02/26 20:18:28 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2020/02/26 20:18:38 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2020/02/26 20:18:39 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2020/02/26 20:18:40 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2020/02/26 20:18:46 CLSRSC-363: User ignored prerequisites during installation
2020/02/26 20:19:02 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2020/02/26 20:19:02 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2020/02/26 20:22:26 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2020/02/26 20:22:26 CLSRSC-482: Running command: '/u01/app/12.2.0.1/grid/bin/crsctl start rollingupgrade 19.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2020/02/26 20:22:31 CLSRSC-482: Running command: '/u01/app/19.6.0.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.2.0.1/grid -oldCRSVersion 12.2.0.1.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2020/02/26 20:25:27 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2020/02/26 20:25:34 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2020/02/26 20:26:08 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2020/02/26 20:26:11 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2020/02/26 20:26:12 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2020/02/26 20:26:30 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2020/02/26 20:26:30 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2020/02/26 20:26:41 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2020/02/26 20:26:51 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2020/02/26 20:26:51 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2020/02/26 20:29:52 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2020/02/26 20:33:58 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2020/02/26 20:34:07 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2020/02/26 21:22:11 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2020/02/26 21:22:36 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2020/02/26 21:22:41 CLSRSC-474: Initiating upgrade of resource types
2020/02/26 22:07:33 CLSRSC-475: Upgrade of resource types successfully initiated.
2020/02/26 22:07:48 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2020/02/26 22:07:58 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
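
Before moving on to node 2, a quick sanity check of the stack from the new home does not hurt. This is a minimal sketch of what I would check at this point (during a rolling upgrade the active version stays at 12.2.0.1.0 until the last node is upgraded, while the software version on node 1 already reports 19c):

# As grid on node 1, from the new 19c home
export ORACLE_HOME=/u01/app/19.6.0.0/grid
export PATH=$ORACLE_HOME/bin:$PATH

crsctl check cluster -all          # stack status on every node
crsctl query crs softwareversion   # 19.0.0.0.0 on the upgraded node
crsctl query crs activeversion     # still 12.2.0.1.0 until the last node is done
srvctl status asm                  # quick check of where ASM is running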

But after this, my team and I saw that the +ASM1 instance could not see the +ASM2 instance. After troubleshooting, we found that the private interconnect (HAIP) had a different netmask on node 1 than on node 2.

# Node 1
eth1:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.13.192 netmask 255.255.224.0 broadcast 169.254.31.255
ether 00:50:56:8e:21:77 txqueuelen 1000 (Ethernet)

# Node 2
eth1:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.60.115 netmask 255.255.0.0 broadcast 169.254.255.255
ether 00:50:56:8e:b4:4a txqueuelen 1000 (Ethernet)
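
If you hit the same symptom, the mismatch can also be cross-checked from the ASM side. This is a minimal sketch, assuming the standard views and tools (on the broken node, GV$ views only return local rows because the instances cannot see each other, so V$CLUSTER_INTERCONNECTS plus the ifconfig output above is what you rely on):

# As grid on each node, against the local ASM instance
# (node 1 is on the 19c home at this point, node 2 is still on 12.2)
export ORACLE_SID=+ASM1                       # +ASM2 on node 2
export ORACLE_HOME=/u01/app/19.6.0.0/grid     # /u01/app/12.2.0.1/grid on node 2
export PATH=$ORACLE_HOME/bin:$PATH

# HAIP address the local instance is actually using for the cluster interconnect
sqlplus -s / as sysasm <<'EOF'
SELECT name, ip_address FROM v$cluster_interconnects;
EOF

# Interconnect definition registered in the cluster; this should be identical on all nodes
oifcfg getif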

After a lot of back and forth with Oracle, we were able to confirm that this is a known issue in 19c.

The following documents and bugs are internal to Oracle Support:

  • ASM Fails to Start After Running “rootupgrade.sh” on First Node While Upgrading to 19c Grid Infrastructure (Doc ID 2606735.1)
  • Bug 30265357 – FAILS TO START ORA.ASM WHEN UPGRADE FROM 12.1.0.2 TO 19.3.0
  • Bug 30452852 – DURING UPGRADE FROM 12C TO 19C +ASM1(19C) FAILS TO JOIN ALREADY RUNNING +ASM2(12C)
  • Bug 29379299 – HAIP SUBNET FLIPS UPON BOTH LINK DOWN/UP EVENT

So the fix for this is the following (a command sketch follows the list):

  1. Shut down node 1 (crsctl stop crs).
  2. Once it is down, run rootupgrade.sh on node 2.
  3. Once rootupgrade.sh completes on node 2, bring up both nodes.
  4. To finish the upgrade, run /u01/app/19.6.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/gridresponse.rsp -silent on node 1.
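
Putting the steps together, this is roughly what the fix looks like on the command line (a sketch using the homes and paths from this upgrade; node 1 is already running out of the 19c home at this point, and node 2's stack is brought back up by its own rootupgrade.sh):

# 1. As root on node 1, stop the stack (node 1 already runs from the 19c home)
/u01/app/19.6.0.0/grid/bin/crsctl stop crs

# 2. As root on node 2, run the upgrade root script
/u01/app/19.6.0.0/grid/rootupgrade.sh

# 3. Once node 2 completes, bring node 1 back up as root
/u01/app/19.6.0.0/grid/bin/crsctl start crs

# 4. As the grid user on node 1, finish the upgrade
/u01/app/19.6.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/gridresponse.rsp -silent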

So if you are going to run a GI upgrade from 12.x to 19.6, you need to be aware of this bug, as the upgrade becomes a full outage rather than a rolling upgrade.

Rene Antunez
Comments
  • Kala
    Posted at 14:50h, 03 April

    Hi,

    Such a useful post, and I would like to thank you for this detailed post, as I have a RAC upgrade coming soon. I am a little confused with the solution, so what you meant is:
    run rootupgrade on node1
    stop crs on node1
    run rootupgrade on node2
    stop crs on node2
    start crs on node1
    start crs on node2
    run /u01/app/19.6.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/gridresponse.rsp -silent on node1

    • Rene Antunez
      Posted at 16:56h, 03 April

      Hi Kala
      These are the steps
      1. Run gridSetup.sh -silent -responseFile /tmp/gridresponse.rsp -applyRU /u06/patches/oracle19c/30501910
      2. Run rootupgrade.sh on node1
      3. Stop crs on node1
      4. Run rootupgrade.sh on node2
      5. Start crs on node1
      6. Run /u01/app/19.6.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/gridresponse.rsp -silent on node1
      There is no need to stop CRS on node 2 after you run rootupgrade.sh on that node.

  • Kala
    Posted at 16:36h, 07 April

    Thanks a lot 🙂
