TNS-12564 Errors and the Importance of the Parameter LOCAL_LISTENER During a Data Guard Switchover


A couple of weeks ago a client of mine had a Data Guard (DG) switchover that should have taken 5 minutes turn into more than an hour of troubleshooting and resolution. Right after the switchover completed, their application could not connect to the database. What we found was that the database parameter LOCAL_LISTENER was set incorrectly.

We fixed the parameter and rebuilt the DG broker configuration, and immediately after that everything went back to normal.

In this blog post I will build an example of how this happened. It is a long read, so please bear with me.

In this example I am using a single-instance (non-RAC) database, so the setting of LOCAL_LISTENER is a bit different than it is for RAC. Further below I will show how to set LOCAL_LISTENER in 19c for both RAC and single-instance databases.
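Before reproducing the problem, it is worth knowing how to check what LOCAL_LISTENER currently resolves to and what the instance has actually registered. A minimal check (a sketch, assuming a SYSDBA SQL*Plus session on each host) looks like this:

SQL> show parameter local_listener

SQL> select type, value from v$listener_network;

The second query lists the listener endpoints and service names the instance is registering, which is a quick way to spot a LOCAL_LISTENER value that points at a listener that no longer exists.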

First, I want to show that when I started, everything in the environment looked normal for the switchover to proceed.

[oracle@primary ~]$ dgmgrl sys@TESTDB
Password:
Connected to "TESTDB"
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - db_broker_config

  Protection Mode: MaxPerformance
  Members:
  testdb       - Primary database
    testdb_stdby - Physical standby database 

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 13 seconds ago)

DGMGRL> show database testdb;

Database - testdb

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    TESTDB

Database Status:
SUCCESS

DGMGRL> show database testdb_stdby;

Database - testdb_stdby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 1.83 MByte/s
  Real Time Query:    ON
  Instance(s):
    TESTDB

Database Status:
SUCCESS

DGMGRL> validate database testdb_stdby;

  Database Role:     Physical standby database
  Primary Database:  testdb

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    testdb      :  On
    testdb_stdby:  Off

  Managed by Clusterware:
    testdb      :  NO             
    testdb_stdby:  NO             
    Validating static connect identifier for the primary database testdb...
    The static connect identifier allows for a connection to database "testdb".

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status       
              (testdb)               (testdb_stdby)                      
    1         3                       1                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status       
              (testdb_stdby)         (testdb)                            
    1         3                       0                       Insufficient SRLs
    Warning: standby redo logs not configured for thread 1 on testdb

After this, I ran a health check using the scripts in MOS note 1581388.1 on both the primary and the standby. Everything looked good for the switchover: no lag and no errors in v$dataguard_status on the standby.
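For reference, the core of those checks can be run without the MOS scripts. A minimal version of the lag and error queries on the standby (a sketch, not the full health check) looks like this:

SQL> select name, value, time_computed, datum_time from v$dataguard_stats
     where name in ('transport lag', 'apply lag');

SQL> select timestamp, severity, error_code, message from v$dataguard_status
     where severity in ('Error', 'Fatal') and timestamp > sysdate - 1/24;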

################################################################################################
# Brief summary from Primary HC
################################################################################################

Primary Site last generated SCN

*******************************

DB_UNIQUE_NAME	SWITCHOVER_STATUS	  CURRENT_SCN
--------------- -------------------- ----------------
DB193H1 	TO STANDBY		      2784649

1 row selected.

...
Data Guard Redo Shipping Progress

*********************************

SYSTIMESTAMP
---------------------------------------------------------------------------
27-OCT-20 09.49.07.725370 AM -06:00

1 row selected.


PROCESS STATUS	     CLIENT_P CLIENT_PID	  THREAD#	 SEQUENCE#	     BLOCK#    ACTIVE_AGENTS	 KNOWN_AGENTS
------- ------------ -------- ---------- ---------------- ---------------- ---------------- ---------------- ----------------
ARCH	CLOSING      ARCH     22960			1		58		  1		   0		    0
ARCH	CLOSING      ARCH     22956			1		58		  1		   0		    0
ARCH	CLOSING      ARCH     22958			1		59		  1		   0		    0
ARCH	CLOSING      ARCH     22950			1		60	      16384		   0		    0
LNS	WRITING      LNS      1288			1		61		423		   0		    0
DGRD	ALLOCATED    N/A      N/A			0		 0		  0		   0		    0
DGRD	ALLOCATED    N/A      N/A			0		 0		  0		   0		    0
DGRD	ALLOCATED    N/A      N/A			0		 0		  0		   0		    0
DGRD	ALLOCATED    N/A      N/A			0		 0		  0		   0		    0

9 rows selected.
...

################################################################################################
# Brief summary from Standby HC
################################################################################################
...
Data Guard Apply Status

***********************

SYSTIMESTAMP
---------------------------------------------------------------------------
27-OCT-20 09.48.33.410311 AM -06:00

1 row selected.


PROCESS STATUS	     CLIENT_P CLIENT_PID	  THREAD#	 SEQUENCE#	     BLOCK#    ACTIVE_AGENTS	 KNOWN_AGENTS
------- ------------ -------- ---------- ---------------- ---------------- ---------------- ---------------- ----------------
ARCH	CONNECTED    ARCH     30543			0		 0		  0		   0		    0
ARCH	CONNECTED    ARCH     30546			0		 0		  0		   0		    0
ARCH	CONNECTED    ARCH     30548			0		 0		  0		   0		    0
ARCH	CLOSING      ARCH     30550			1		60	      16384		   0		    0
RFS	IDLE	     Archival 22952			1		 0		  0		   0		    0
RFS	IDLE	     LGWR     1288			1		61		381		   0		    0
DGRD	ALLOCATED    N/A      N/A			0		 0		  0		   0		    0
DGRD	ALLOCATED    N/A      N/A			0		 0		  0		   0		    0
MRP0	APPLYING_LOG N/A      N/A			1		61		381		   3		    3
RFS	IDLE	     UNKNOWN  22958			0		 0		  0		   0		    0
RFS	IDLE	     UNKNOWN  22956			0		 0		  0		   0		    0

11 rows selected.
...
Data Guard Apply Lag

********************

NAME	     LAG_TIME		  DATUM_TIME	       TIME_COMPUTED
------------ -------------------- -------------------- --------------------
apply lag    +00 00:00:00	  10/27/2020 09:48:52  10/27/2020 09:48:53

1 row selected.


Data Guard Gap Problems

***********************

no rows selected


Data Guard Errors in the Last Hour

**********************************

no rows selected

So I went ahead and started the switchover, and as you will see in the output below, it seemed like the switchover did not happen.

[oracle@primary ~]$ dgmgrl sys@TESTDB
Password:
Connected to "TESTDB"
Connected as SYSDBA.
DGMGRL> switchover to 'testdb_stdby';
Performing switchover NOW, please wait...
Operation requires a connection to database "testdb_stdby"
Connecting ...
Connected to "TESTDB_STDBY"
Connected as SYSDBA.
Error: ORA-1034: ORACLE not available
Error: ORA-16625: cannot reach member "testdb"

Failed.
Unable to switchover, primary database is still "testdb"

But looking at the logs of the primary database, it seemed the switchover did happen. What seemed fishy was that I started getting ORA-12514 (the listener does not know of the requested service) and TNS-12564 (connection refused) errors. Before chasing those, though, I wanted to concentrate on the status of both databases and review the logs.

################################################################################################
# From the Primary Database Log
################################################################################################

2020-10-27T12:12:34.240-06:00
Forwarding MON_PROPERTY operation to member testdb_stdby for processing
2020-10-27T12:22:17.368-06:00
Forwarding MON_PROPERTY operation to member testdb_stdby for processing
Forwarding MON_PROPERTY operation to member testdb_stdby for processing
2020-10-27T12:22:33.178-06:00
Forwarding MON_PROPERTY operation to member testdb_stdby for processing
2020-10-27T12:27:39.365-06:00
Initiating a healthcheck...
SWITCHOVER TO testdb_stdby
Switchover to physical standby database cannot be initiated from the primary database
redirecting connection to switchover target database testdb_stdby...
...using connect identifier: testdb_stdby
SWITCHOVER TO testdb_stdby
Notifying Oracle Clusterware to prepare primary database for switchover
2020-10-27T12:27:40.847-06:00
Executing SQL: [ALTER DATABASE SWITCHOVER TO 'testdb_stdby']
2020-10-27T12:27:57.750-06:00
SQL [ALTER DATABASE SWITCHOVER TO 'testdb_stdby'] executed successfully
2020-10-27T12:28:12.255-06:00
Switchover in progress...
2020-10-27T12:30:19.497-06:00
Failed to connect to remote database testdb_stdby. Error is ORA-12514
Failed to send message to member testdb_stdby. Error code is ORA-12514.
***********************************************************************

Fatal NI connect error 12514, connecting to:
 (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=standby)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=testdb_stdby)(INSTANCE_NAME=TESTDB)(CID=(PROGRAM=oracle)(HOST=primary)(USER=oracle))))

  VERSION INFORMATION:
    TNS for Linux: Version 19.0.0.0.0 - Production
    TCP/IP NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production
  Version 19.9.0.0.0
  Time: 27-OCT-2020 12:30:20
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12564
    
TNS-12564: TNS:connection refused
    ns secondary err code: 0
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0


################################################################################################
# From the Primary Database DataGuard Broker Log
################################################################################################

2020-10-27T12:27:39.535487-06:00
SWITCHOVER VERIFY: Send VERIFY request to switchover target testdb_stdby
SWITCHOVER VERIFY COMPLETE: READY FOR SWITCHOVER
SWITCHOVER VERIFY: Send VERIFY request to switchover target testdb_stdby
SWITCHOVER VERIFY COMPLETE: READY FOR SWITCHOVER
2020-10-27T12:27:40.848477-06:00
ALTER DATABASE SWITCHOVER TO 'testdb_stdby'
2020-10-27T12:27:40.848600-06:00
RSM0 (PID:23638): The Time Management Interface (TMI) is being enabled for role transition
RSM0 (PID:23638): information.  This will result in messages beingoutput to the alert log
RSM0 (PID:23638): file with the prefix 'TMI: '.  This is being enabled to make the timing of
RSM0 (PID:23638): the various stages of the role transition available for diagnostic purposes.
RSM0 (PID:23638): This output will end when the role transition is complete.
TMI: dbsdrv switchover to target BEGIN 2020-10-27 12:27:40.849258
RSM0 (PID:23638): Starting switchover [Process ID: 23638]
TMI: kcv_switchover_to_target convert to physical BEGIN 2020-10-27 12:27:40.927250
2020-10-27T12:27:40.927438-06:00
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY [Process Id: 23638] (TESTDB)
...
RSM0 (PID:23638): Sending request(convert to primary database) to switchover target testdb_stdby
2020-10-27T12:27:44.474112-06:00
Process (ospid 22733) is suspended due to switchover to physical standby operation.
2020-10-27T12:27:45.753622-06:00
Process (ospid 22479) is suspended due to switchover to physical standby operation.
2020-10-27T12:27:57.749843-06:00
RSM0 (PID:23638): Switchover complete. Database shutdown required
TMI: dbsdrv switchover to target END 2020-10-27 12:27:57.749864
Completed: ALTER DATABASE SWITCHOVER TO 'testdb_stdby'
2020-10-27T12:27:58.811463-06:00

Next I checked the standby database alert log and Data Guard broker log, and confirmed that the switchover did in fact happen.

################################################################################################
# From the Standby Database log
################################################################################################

PR00 (PID:19640): MRP0: Background Media Recovery cancelled with status 16037
2020-10-27T12:27:44.243985-06:00
Errors in file /u01/app/oracle/diag/rdbms/testdb_stdby/TESTDB/trace/TESTDB_pr00_19640.trc:
ORA-16037: user requested cancel of managed recovery operation
PR00 (PID:19640): Managed Standby Recovery not using Real Time Apply
2020-10-27T12:27:44.353706-06:00
Recovery interrupted!
stopping change tracking
...
Standby became primary SCN: 2408641
 rmi (PID:20473): RT: Role transition work is not done
 rmi (PID:20473): The Time Management Interface (TMI) is being enabled for role transition
 rmi (PID:20473): information.  This will result in messages beingoutput to the alert log
 rmi (PID:20473): file with the prefix 'TMI: '.  This is being enabled to make the timing of
 rmi (PID:20473): the various stages of the role transition available for diagnostic purposes.
 rmi (PID:20473): This output will end when the role transition is complete.
 rmi (PID:20473): Redo network throttle feature is disabled at mount time
AUDIT_TRAIL initialization parameter is changed back to its original value as specified in the parameter file.
2020-10-27T12:27:57.747028-06:00
 rmi (PID:20473): Database role cleared from PHYSICAL STANDBY [kcvs.c:1069]
Switchover: Complete - Database mounted as primary
TMI: kcv_commit_to_so_to_primary Switchover from physical END 2020-10-27 12:27:57.747449
SWITCHOVER: completed request from primary database.
2020-10-27T12:28:43.126856-06:00
ARC0 (PID:18672): Becoming the 'no SRL' ARCH
2020-10-27T12:30:19.584435-06:00
ALTER SYSTEM SET fal_server='testdb' SCOPE=BOTH;
2020-10-27T12:43:00.679979-06:00

################################################################################################
# From the Standby Database DataGuard Broker Log
################################################################################################

Forwarding EDIT_RES_PROP operation to member testdb for processing
Apply Instance for Database testdb_stdby set to TESTDB
2020-10-27T12:07:54.013-06:00
Updated broker configuration file available, loading from "/u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/dr1TESTDB_STDBY.dat"
2020-10-27T12:08:07.080-06:00
Starting redo apply services...
2020-10-27T12:11:51.444-06:00
Stopping apply to allow database open to proceed
2020-10-27T12:27:39.368-06:00
Initiating a healthcheck...
Forwarding CTL_SWITCH operation to member testdb for processing
FSFO SetState(st=12 "SET OBID", fl=0x0 "", ob=0x1, tgt=0, v=0)
Updated broker configuration file available, loading from "/u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/dr2TESTDB_STDBY.dat"
2020-10-27T12:27:40.839-06:00
Switchover processing to this database has started
Notifying Oracle Clusterware to prepare target standby database for switchover
2020-10-27T12:27:54.410-06:00
Switchover in progress...
2020-10-27T12:30:19.564-06:00
Updated broker configuration file available, loading from "/u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/dr1TESTDB_STDBY.dat"
2020-10-27T12:42:46.387-06:00
Failed to connect to remote database testdb. Error is ORA-1034
Failed to send message to member testdb. Error code is ORA-1034.
2020-10-27T12:43:00.701-06:00
Failed to connect to remote database testdb. Error is ORA-12514
Failed to send message to member testdb. Error code is ORA-12514.
2020-10-27T12:44:02.112-06:00
Failed to connect to remote database testdb. Error is ORA-12514
Failed to send message to member testdb. Error code is ORA-12514.

Next I verified the status of both the new primary and the new standby. The new primary had never moved from MOUNT to OPEN, so I opened it. Similarly, the new standby had never been started, so I started it.

################################################################################################
# New Primary Database
################################################################################################

SQL> set lines 200 pages 99
SELECT NAME,DB_UNIQUE_NAME,OPEN_MODE,PROTECTION_MODE,
       PROTECTION_LEVEL,
       DATABASE_ROLE ROLE,
       SWITCHOVER_STATUS
from gv$DATABASE;  

NAME      DB_UNIQUE_NAME         OPEN_MODE          PROTECTION_MODE       PROTECTION_LEVEL    ROLE         SWITCHOVER_STATUS
--------- ------------------------------ -------------------- -------------------- -------------------- ---------------- --------------------
TESTDB   TESTDB_STDBY          MOUNTED          MAXIMUM PERFORMANCE  UNPROTECTED        PRIMARY      NOT ALLOWED

SQL> alter database open;

Database altered.

SELECT NAME,DB_UNIQUE_NAME,OPEN_MODE,PROTECTION_MODE,
       PROTECTION_LEVEL,
       DATABASE_ROLE ROLE,
       SWITCHOVER_STATUS
from gv$DATABASE;SQL>   2    3    4    5  

NAME	  DB_UNIQUE_NAME		 OPEN_MODE	      PROTECTION_MODE	   PROTECTION_LEVEL	ROLE		 SWITCHOVER_STATUS
--------- ------------------------------ -------------------- -------------------- -------------------- ---------------- --------------------
DB193H1   DB193H1_STDBY 		 READ WRITE	      MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE	PRIMARY 	 TO STANDBY

################################################################################################
# New Standby Database
################################################################################################

[oracle@primary ~]$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Oct 28 11:03:09 2020
Version 19.9.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

Enter user-name: /as sysdba
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 2415917880 bytes
Fixed Size		    8899384 bytes
Variable Size		  520093696 bytes
Database Buffers	 1879048192 bytes
Redo Buffers		    7876608 bytes
Database mounted.
Database opened.
SQL> set lines 200 pages 99
SELECT NAME,DB_UNIQUE_NAME,OPEN_MODE,PROTECTION_MODE,
       PROTECTION_LEVEL,
       DATABASE_ROLE ROLE,
       SWITCHOVER_STATUS
from gv$DATABASE;  SQL>   2    3    4    5  

NAME	  DB_UNIQUE_NAME		 OPEN_MODE	      PROTECTION_MODE	   PROTECTION_LEVEL	ROLE		 SWITCHOVER_STATUS
--------- ------------------------------ -------------------- -------------------- -------------------- ---------------- --------------------
DB193H1   DB193H1			 READ ONLY	      MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE	PHYSICAL STANDBY NOT ALLOWED

Next I checked the broker status, and everything looked wrong: as you can see below, the broker was still showing the old roles of the databases, even though, as shown above, the roles had actually switched correctly.

[oracle@standby ~]$ dgmgrl sys@TESTDB_STDBY
Password:
Connected to "TESTDB_STDBY"
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - db_broker_config

  Protection Mode: MaxPerformance
  Members:
  testdb       - Primary database
    Error: ORA-16810: multiple errors or warnings detected for the member

    testdb_stdby - Physical standby database 
      Error: ORA-16810: multiple errors or warnings detected for the member

Fast-Start Failover:  Disabled

Configuration Status:
ERROR   (status updated 25 seconds ago)

Instead of fixing the broker configuration first, I focused on the errors that appeared right after the switchover: the TNS-12564: TNS:connection refused errors.

################################################################################################
# From the new Primary Database log
################################################################################################

SWITCHOVER: completed request from primary database.
2020-10-27T06:55:26.572528-06:00
ARC0 (PID:18472): Becoming the 'no SRL' ARCH
2020-10-27T07:10:55.228901-06:00
***********************************************************************

Fatal NI connect error 12514, connecting to:
 (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=primary)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=db193h1)(INSTANCE_NAME=DB193H1)(CID=(PROGRAM=oracle)(HOST=standby)(USER=oracle))))

  VERSION INFORMATION:
        TNS for Linux: Version 19.0.0.0.0 - Production
        TCP/IP NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production
  Version 19.9.0.0.0
  Time: 27-OCT-2020 07:10:55
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12564

TNS-12564: TNS:connection refused
    ns secondary err code: 0
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0

################################################################################################
# From the new Standby Database log
################################################################################################

TT02 (PID:13281): All non-current ORLs have been archived
2020-10-27T07:16:08.979966-06:00
***********************************************************************
Fatal NI connect error 12514, connecting to:
 (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=standby)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=db193h1_stdby)(CID=(PROGRAM=oracle)(HOST=primary)(USER=oracle))))

  VERSION INFORMATION:
        TNS for Linux: Version 19.0.0.0.0 - Production
        TCP/IP NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production
  Version 19.9.0.0.0
  Time: 27-OCT-2020 07:16:08
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12564

TNS-12564: TNS:connection refused
    ns secondary err code: 0
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0

After troubleshooting connectivity using the connect descriptor from the old primary database log above, I found that the service db193h1_stdby was not registered with the standby listener; in fact, the only service registered was the wrong one. The same was true for the old primary database (a quick way to reproduce the failure by hand is shown after the listener output below).

[oracle@standby ~]$ lsnrctl services
...
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=standby)(PORT=1521)))
Services Summary...
Service "TESTDB_DGMGRL" has 1 instance(s).
  Instance "TESTDB", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:31 refused:0
         LOCAL SERVER
The command completed successfully
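One quick way to reproduce the failure outside of the broker is to connect using the same kind of connect descriptor that appears in the log above (a sketch; the descriptor is abbreviated from the Fatal NI connect error entry):

[oracle@primary ~]$ sqlplus sys@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=standby)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=db193h1_stdby)))' as sysdba

With the service not registered with the standby listener, this fails with ORA-12514, which is exactly what the broker and the redo transport processes were hitting.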

Next I verified the value of the parameter LOCAL_LISTENER on the new primary and saw that it was incorrect: it pointed to an old listener that no longer existed. So I changed it to the current LISTENER.

SQL> show parameter local_listener

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
local_listener			     string	 LISTENER_STBY

SQL> alter system set local_listener='LISTENER';

System altered.

It is important to note that as of Oracle 12.1, in a RAC Data Guard environment the documentation says you should not set LOCAL_LISTENER yourself, as Oracle Clusterware manages it. So in RAC you do not want to do what I did above; instead, reset the parameter:

SQL> ALTER SYSTEM RESET LOCAL_LISTENER SCOPE=BOTH SID='*';
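For a single-instance database, an alternative that removes the dependency on tnsnames.ora is to set LOCAL_LISTENER to the full listener address instead of an alias (a sketch using this environment's standby host and default port):

SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=standby)(PORT=1521))' scope=both;

SQL> alter system register;

ALTER SYSTEM REGISTER simply forces the instance to re-register its services with the listener right away instead of waiting for the next LREG cycle.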

Because I set LOCAL_LISTENER to a tnsnames alias, I also had to verify that the tnsnames.ora file contained that network name resolving to the listener address. It did not, so I added it. Remember, this is only for non-RAC instances; there is no need to add the address to tnsnames.ora in a RAC environment.

[oracle@standby admin]$ cat tnsnames.ora | grep LISTENER | wc -l
0
[oracle@standby admin]$ vi tnsnames.ora
[oracle@standby admin]$ cat tnsnames.ora | head -2
LISTENER =
  (ADDRESS = (PROTOCOL = TCP)(HOST = standby)(PORT = 1521))
[oracle@standby admin]$ lsnrctl reload
...
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=standby)(PORT=1521)))
The command completed successfully

[oracle@standby admin]$ lsnrctl services
...
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=standby)(PORT=1521)))
Services Summary...
Service "DB193H1XDB" has 1 instance(s).
  Instance "DB193H1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:0 refused:0 current:0 max:1022 state:ready
         DISPATCHER <machine: standby, pid: 2094>
         (ADDRESS=(PROTOCOL=tcp)(HOST=standby.localdomain)(PORT=21723))
Service "DB193H1_CFG" has 1 instance(s).
  Instance "DB193H1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "DB193H1_STDBY" has 1 instance(s).
  Instance "DB193H1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "DB193H1_STDBY_DGMGRL" has 1 instance(s).
  Instance "DB193H1", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
         LOCAL SERVER
The command completed successfully

Once I had corrected this, I proceeded to recreate the Data Guard broker configuration: after changing the LOCAL_LISTENER parameter, the existing broker configuration kept behaving abnormally, so it had to be rebuilt.


Recreating the DataGuard Broker

1.- From the New Primary

[oracle@standby ~]$ dgmgrl sys@TESTDB_STDBY
Password:
Connected to "TESTDB_STDBY"
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - db_broker_config

  Protection Mode: MaxPerformance
  Members:
  testdb       - Primary database
    Error: ORA-16810: multiple errors or warnings detected for the member

    testdb_stdby - Physical standby database 

Fast-Start Failover:  Disabled

Configuration Status:
ERROR   (status updated 25 seconds ago)

DGMGRL> remove configuration;
Removed configuration

DGMGRL> exit

[oracle@standby ~]$ sqlplus / as sysdba

SQL> alter system set dg_broker_start=false scope=both;

System altered.

SQL> show parameter dg_broker_config_file

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1		     string	 /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/dr1TESTDB_STDBY.dat
dg_broker_config_file2		     string	 /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/dr2TESTDB_STDBY.dat

SQL> ! rm /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/dr*TESTDB_STDBY.dat

SQL> alter system set dg_broker_start=true scope=both;

System altered.
SQL> exit

2.- From the New Standby

[oracle@primary ~]$ sqlplus / as sysdba

SQL> alter system set dg_broker_start=false scope=both;

System altered.

SQL> show parameter dg_broker_config_file

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1		     string	 /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/dr1TESTDB.dat
dg_broker_config_file2		     string	 /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/dr2TESTDB.dat

SQL> ! rm /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/dr*TESTDB.dat

SQL> alter system set dg_broker_start=true scope=both;

System altered.

3.- From the New Primary

[oracle@primary ~]$ dgmgrl sys@TESTDB_STDBY
Password:
Connected to "TESTDB_STDBY"
Connected as SYSDBA.

DGMGRL> CREATE CONFIGURATION db_broker_config AS PRIMARY DATABASE IS testdb_stdby CONNECT IDENTIFIER IS testdb_stdby;
Configuration "db_broker_config" created with primary database "testdb_stdby"

DGMGRL> ADD DATABASE testdb AS CONNECT IDENTIFIER IS testdb;
Database "testdb" added

DGMGRL> ENABLE CONFIGURATION;
Enabled.

Finally, I was able to switch back and forth between the two databases without any issues.

DGMGRL> switchover to testdb;     
Performing switchover NOW, please wait...
Operation requires a connection to database "testdb"
Connecting ...
Connected to "TESTDB"
Connected as SYSDBA.
New primary database "testdb" is opening...
Operation requires start up of instance "TESTDB" on database "testdb_stdby"
Starting instance "TESTDB"...
Connected to an idle instance.
ORACLE instance started.
Connected to "TESTDB_STDBY"
Database mounted.
Database opened.
Connected to "TESTDB_STDBY"
Switchover succeeded, new primary is "testdb"

DGMGRL> switchover to testdb_stdby;
Performing switchover NOW, please wait...
Operation requires a connection to database "testdb_stdby"
Connecting ...
Connected to "TESTDB_STDBY"
Connected as SYSDBA.
New primary database "testdb_stdby" is opening...
Operation requires start up of instance "TESTDB" on database "testdb"
Starting instance "TESTDB"...
Connected to an idle instance.
ORACLE instance started.
Connected to "TESTDB"
Database mounted.
Database opened.
Connected to "TESTDB"
Switchover succeeded, new primary is "testdb_stdby"

As you can see, it is important to have the database parameter LOCAL_LISTENER set properly; otherwise you will face a set of headaches during a switchover, even when everything looks like it is going to work out.
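If you want to catch this before it bites you, a quick pre-switchover sanity check on both hosts (a sketch of what I now run before any role transition) is:

SQL> show parameter local_listener

[oracle@primary ~]$ lsnrctl status
[oracle@primary ~]$ lsnrctl services

LOCAL_LISTENER should resolve to the listener that is actually running on that host, and lsnrctl services should show the db_unique_name services with status READY, plus the static <db_unique_name>_DGMGRL service the broker uses for its static connect identifier.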
