Solving MGMTDB errors during 18c GI RU apply

Recently I upgraded Oracle GI to 19c, from 18.6.0.0 to 19.5.0.0. One step that was not shown there is that, because of requirements, the GI was first upgraded from 18.2.0.0 to 18.6.0.0. That upgrade is just a Release Update (RU) apply using the opatchauto command.

During this upgrade from 18.2 to 18.6, I hit MGMTDB-related errors more than once (five times, to be precise). These were the errors:

  • ORA-12514, TNS: Listener does not currently know of service requested in connect descriptor
  • ORA-01017: invalid username/password; logon denied
  • MGTCA-1005 : Could not connect to the GIMR.
  • CRS-10407: (:CLSCRED1079:)Credential domain does not exist.

Here I will show how to solve these errors and how to verify that everything is fine so you can continue. Keep in mind that this is just an example; always open a support SR to identify the root cause of the error.

RU and Patch Process

Applying an 18c RU over GI is simple: basically, you call opatchauto on every node. One example of a correct execution:

[root@exa01vm01 ~]# opatchauto apply /u01/patches/grid/29301682 -oh /u01/app/18.0.0/grid

OPatchauto session is initiated at Mon Jan 20 12:48:25 2020

System initialization log file is /u01/app/18.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-01-20_12-48-48PM.log.

Session log file is /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-01-20_12-49-44PM.log
The id for this session is RQ3F

Executing OPatch prereq operations to verify patch applicability on home /u01/app/18.0.0/grid
Patch applicability verified successfully on home /u01/app/18.0.0/grid


Bringing down CRS service on home /u01/app/18.0.0/grid
CRS service brought down successfully on home /u01/app/18.0.0/grid


Start applying binary patch on home /u01/app/18.0.0/grid
Binary patch applied successfully on home /u01/app/18.0.0/grid


Starting CRS service on home /u01/app/18.0.0/grid
CRS service started successfully on home /u01/app/18.0.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:exa01vm01
CRS Home:/u01/app/18.0.0/grid
Version:18.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/patches/grid/29301682/28435192
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-20_12-51-42PM_1.log

Patch: /u01/patches/grid/29301682/28547619
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-20_12-51-42PM_1.log

Patch: /u01/patches/grid/29301682/29301631
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-20_12-51-42PM_1.log

Patch: /u01/patches/grid/29301682/29301643
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-20_12-51-42PM_1.log

Patch: /u01/patches/grid/29301682/29302264
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-20_12-51-42PM_1.log



OPatchauto session completed at Mon Jan 20 13:00:01 2020
Time taken to complete the session 11 minutes, 36 seconds
[root@exa01vm01 ~]#



####################################################
#Execute in node 2
####################################################


[root@exa02vm01 ~]# opatchauto apply /u01/patches/grid/29301682 -oh /u01/app/18.0.0/grid

OPatchauto session is initiated at Tue Jan 21 13:12:37 2020

System initialization log file is /u01/app/18.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-01-21_01-12-51PM.log.

Session log file is /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-01-21_01-13-37PM.log
The id for this session is NFXL

Executing OPatch prereq operations to verify patch applicability on home /u01/app/18.0.0/grid
Patch applicability verified successfully on home /u01/app/18.0.0/grid


Bringing down CRS service on home /u01/app/18.0.0/grid
CRS service brought down successfully on home /u01/app/18.0.0/grid


Start applying binary patch on home /u01/app/18.0.0/grid
Binary patch applied successfully on home /u01/app/18.0.0/grid


Starting CRS service on home /u01/app/18.0.0/grid
CRS service started successfully on home /u01/app/18.0.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:exa02vm01
CRS Home:/u01/app/18.0.0/grid
Version:18.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/patches/grid/29301682/28435192
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-21_13-15-58PM_1.log

Patch: /u01/patches/grid/29301682/28547619
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-21_13-15-58PM_1.log

Patch: /u01/patches/grid/29301682/29301631
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-21_13-15-58PM_1.log

Patch: /u01/patches/grid/29301682/29301643
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-21_13-15-58PM_1.log

Patch: /u01/patches/grid/29301682/29302264
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-21_13-15-58PM_1.log



OPatchauto session completed at Tue Jan 21 13:35:21 2020
Time taken to complete the session 22 minutes, 44 seconds
[root@exa02vm01 ~]#

As you can see, you run it on one node first and then on the second node. During this process, opatchauto applies the binary updates to the GI home, restarts GI (so it comes up with the new version), and applies the needed SQL patches to MGMTDB and ASM.
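
After each node completes, it is worth confirming that the patches really landed in the home. Below is a minimal sketch that checks the patch IDs from the summary above against `opatch lspatches`-style output; the sample output, its `id;description` format, and the patch descriptions are assumptions for illustration, while the patch IDs come from the session above.

```shell
# Patch IDs taken from the opatchauto summary above.
expected="28435192 28547619 29301631 29301643 29302264"

# In a real check you would capture: $ORACLE_HOME/OPatch/opatch lspatches
# The sample output below is illustrative; the "id;description" format
# and the descriptions are assumptions.
lspatches_output='29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)'

# Report each expected patch as applied or missing.
for p in $expected; do
  if printf '%s\n' "$lspatches_output" | grep -q "^$p;"; then
    echo "patch $p: applied"
  else
    echo "patch $p: MISSING"
  fi
done
```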

MGMTDB and errors

Some errors can occur during the apply, mainly around MGMTDB. They are usually related to communication between MGMTDB and CRS resources (the listener and others), and they typically show up when patching the last node of the cluster, because MGMTDB is relocated from node to node (ending up on the last one) during the patch application.
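
Before patching the last node, you can check which node currently hosts MGMTDB. The sketch below parses sample output from `srvctl status mgmtdb`; the exact message format is an assumption based on typical srvctl output, and the node name is illustrative.

```shell
# Real command (not run here): $GRID_HOME/bin/srvctl status mgmtdb
# Sample output below; the message format is an assumption.
status_output='Database is enabled
Instance -MGMTDB is running on node exa01vm01'

# Extract the node name from the "is running on node X" line.
node=$(printf '%s\n' "$status_output" | sed -n 's/.*is running on node \(.*\)$/\1/p')
echo "MGMTDB currently runs on: $node"
```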

Some examples of errors that I faced:

ORA-12514, TNS: Listener does not currently know of service requested in connect descriptor

...
CRS-4123: Oracle High Availability Services has been started.
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [4127601284].
SQL Patching tool version 18.0.0.0.0 Production on Wed Jan 15 13:57:08 2020
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_360667_2020_01_15_13_57_08/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    18.6.0.0.0 Release_Update 1903190102: Installed
  PDB CDB$ROOT:
    Applied 18.2.0.0.0 Release_Update 1804041635 successfully on 02-OCT-18 03.11.47.950696 PM
  PDB PDB$SEED:
    Applied 18.2.0.0.0 Release_Update 1804041635 successfully on 02-OCT-18 03.16.34.680672 PM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    No interim patches need to be rolled back
    Patch 29301631 (Database Release Update : 18.6.0.0.190416 (29301631)):
      Apply from 18.2.0.0.0 Release_Update 1804041635 to 18.6.0.0.0 Release_Update 1903190102
    No interim patches need to be applied

Installing patches...
Patch installation complete.  Total patches installed: 2

Validating logfiles...done
Patch 29301631 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/29301631/22832106/29301631_apply__MGMTDB_CDBROOT_2020Jan15_13_58_14.log (no errors)
Patch 29301631 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/29301631/22832106/29301631_apply__MGMTDB_PDBSEED_2020Jan15_14_01_22.log (no errors)
SQL Patching tool complete on Wed Jan 15 14:02:41 2020
MGTCA-1005 : Could not connect to the GIMR.

Listener refused the connection with the following error:

ORA-12514, TNS:listener does not currently know of service requested in connect descriptor



2020/01/15 14:02:49 CLSRSC-180: An error occurred while executing the command '/u01/app/18.0.0/grid/bin/mgmtca applysql'

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Wed Jan 15 14:02:50 2020
Time taken to complete the session 28 minutes, 34 seconds

 opatchauto failed with error code 42
[root@exa03vm02 ~]#

ORA-01017: invalid username/password; logon denied

CRS-4123: Oracle High Availability Services has been started.
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [4127601284].
SQL Patching tool version 18.0.0.0.0 Production on Fri Nov 29 10:32:41 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_128123_2019_11_29_10_32_41/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    18.6.0.0.0 Release_Update 1903190102: Installed
  PDB CDB$ROOT:
    Applied 18.2.0.0.0 Release_Update 1804041635 successfully on 02-OCT-18 01.32.26.561113 PM
  PDB GIMR_DSCREP_10:
    Applied 18.2.0.0.0 Release_Update 1804041635 successfully on 02-OCT-18 01.37.12.290147 PM
  PDB PDB$SEED:
    Applied 18.2.0.0.0 Release_Update 1804041635 successfully on 02-OCT-18 01.37.12.290147 PM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED GIMR_DSCREP_10
    No interim patches need to be rolled back
    Patch 29301631 (Database Release Update : 18.6.0.0.190416 (29301631)):
      Apply from 18.2.0.0.0 Release_Update 1804041635 to 18.6.0.0.0 Release_Update 1903190102
    No interim patches need to be applied

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...done
Patch 29301631 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/29301631/22832106/29301631_apply__MGMTDB_CDBROOT_2019Nov29_10_33_45.log (no errors)
Patch 29301631 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/29301631/22832106/29301631_apply__MGMTDB_PDBSEED_2019Nov29_10_36_02.log (no errors)
Patch 29301631 apply (pdb GIMR_DSCREP_10): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/29301631/22832106/29301631_apply__MGMTDB_GIMR_DSCREP_10_2019Nov29_10_36_02.log (no errors)
SQL Patching tool complete on Fri Nov 29 10:38:04 2019
MGTCA-1005 : Could not connect to the GIMR.

ORA-01017: invalid username/password; logon denied



2019/11/29 10:38:13 CLSRSC-180: An error occurred while executing the command '/u01/app/18.0.0/grid/bin/mgmtca applysql'

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Fri Nov 29 10:38:15 2019
Time taken to complete the session 26 minutes, 48 seconds

 opatchauto failed with error code 42
[root@exa01vm02 ~]#

Inside the opatchauto log you can see something like:

SQL Patching tool complete on Fri Nov 29 10:44:09 2019
MGTCA-1005 : Could not connect to the GIMR.

ORA-01017: invalid username/password; logon denied



2019/11/29 10:44:18 CLSRSC-180: An error occurred while executing the command '/u01/app/18.0.0/grid/bin/mgmtca applysql'

After fixing the cause of failure Run opatchauto resume

]]. Failures:
OPATCHAUTO-68067: Check the details to determine the cause of the failure.
        at com.oracle.glcm.patch.auto.action.PatchActionExecutor.execute(PatchActionExecutor.java:172)
        at com.oracle.glcm.patch.auto.wizard.silent.tasks.PatchActionTask.execute(PatchActionTask.java:102)
        ... 2 more
2019-11-29 10:44:19,660 INFO  [1] com.oracle.glcm.patch.auto.db.integration.model.productsupport.DBBaseProductSupport - Space available after session: 29898 MB
2019-11-29 10:44:19,728 SEVERE [1] com.oracle.glcm.patch.auto.OPatchAuto - OPatchAuto failed.
com.oracle.glcm.patch.auto.OPatchAutoException: OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
        at com.oracle.glcm.patch.auto.OrchestrationEngineImpl.orchestrate(OrchestrationEngineImpl.java:40)
        at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:858)
        at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:398)
        at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:344)
        at com.oracle.glcm.patch.auto.OPatchAuto.main(OPatchAuto.java:212)
2019-11-29 10:44:19,729 INFO  [1] com.oracle.cie.common.util.reporting.CommonReporter - Reporting console output : Message{id='null', message='OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.'}
2019-11-29 10:44:19,729 INFO  [1] com.oracle.cie.common.util.reporting.CommonReporter - Reporting console output : Message{id='null', message='OPatchAuto failed.'}
^C
[root@exa01vm02 ~]#

The common point in all cases is that the failure occurred during the “/u01/app/18.0.0/grid/bin/mgmtca applysql” call: opatchauto detected that the patch needed to be applied to MGMTDB, tried to apply it, and hit the error.
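
A quick way to confirm you hit this same scenario is to scan the logs for the error signatures shown above. Here is a minimal sketch; the helper function name is hypothetical, the sample log fragment is copied from the failed session in this post, and the log path in a real run is environment-specific.

```shell
# Hypothetical helper: prints matching lines (and returns success) if the log
# shows the MGMTDB failure signatures discussed in this post.
check_mgmtdb_failure() {
  grep -E 'MGTCA-1005|ORA-12514|ORA-01017|CLSRSC-180' "$1"
}

# Sample fragment copied from the failed session above.
cat > /tmp/sample_opatch.log <<'EOF'
MGTCA-1005 : Could not connect to the GIMR.
ORA-01017: invalid username/password; logon denied
2019/11/29 10:38:13 CLSRSC-180: An error occurred while executing the command '/u01/app/18.0.0/grid/bin/mgmtca applysql'
EOF

check_mgmtdb_failure /tmp/sample_opatch.log && echo "MGMTDB failure signature found"
```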

Solving the issue

After digging through the web and searching MOS/Metalink, nothing was found: no matching error, hint, or anything related. Even after opening an SR, the report was always the same: inconclusive and not enough to build a fix. But there is one workaround that can be applied, and it worked every time for me.

First, just to show that if you look inside MGMTDB itself, you can see the patch was applied correctly:

[grid@exa01vm02 -MGMTDB]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Thu Nov 14 09:19:05 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
-MGMTDB

SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0
[grid@exa01vm02 -MGMTDB]$
[grid@exa01vm02 -MGMTDB]$
[grid@exa01vm02 -MGMTDB]$ cd $ORACLE_HOME/OPatch
[grid@exa01vm02 -MGMTDB]$ ./datapatch -verbose
SQL Patching tool version 18.0.0.0.0 Production on Thu Nov 14 09:21:33 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_242627_2019_11_14_09_21_33/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    18.6.0.0.0 Release_Update 1903190102: Installed
  PDB CDB$ROOT:
    Applied 18.6.0.0.0 Release_Update 1903190102 successfully on 13-NOV-19 05.17.48.183101 PM
  PDB PDB$SEED:
    Applied 18.6.0.0.0 Release_Update 1903190102 successfully on 13-NOV-19 05.17.50.152110 PM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    No interim patches need to be rolled back
    No release update patches need to be installed
    No interim patches need to be applied

SQL Patching tool complete on Thu Nov 14 09:21:48 2019
[grid@exa01vm02 -MGMTDB]$
[grid@exa01vm02 -MGMTDB]$
[grid@exa01vm02 -MGMTDB]$ 
[grid@exa01vm02 -MGMTDB]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Thu Nov 14 09:23:35 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0

SQL> set linesize 250
SQL> col comments format a50
SQL> select ACTION_TIME,VERSION,COMMENTS from dba_registry_history;

ACTION_TIME                                                                 VERSION                        COMMENTS
--------------------------------------------------------------------------- ------------------------------ --------------------------------------------------
                                                                            18                             RDBMS_18.6.0.0.0DBRU_LINUX.X64_190313.1
19-JUN-18 01.51.48.687187 PM                                                18.0.0.0.0                     Patch applied from 18.1.0.0.0 to 18.2.0.0.0
13-NOV-19 05.16.14.040821 PM                                                18.0.0.0.0                     Patch applied from 18.2.0.0.0 to 18.6.0.0.0

SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0
[grid@exa01vm02 -MGMTDB]$
As you can see above, both datapatch and the database itself report that the patch was applied correctly.

The Workaround

The workaround bypasses some of the checks opatchauto performs against MGMTDB. If you look at the errors above, they were caused by CRS failing to communicate correctly with the database, possibly because of a credential failure inside CRS or a listener registration problem.

The first step is to completely remove the MGMTDB. To do that, you can use dbca (from the GI home, on one node only):

[grid@exa01vm02 +ASM2]$ $ORACLE_HOME/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
35% complete
39% complete
42% complete
45% complete
48% complete
52% complete
65% complete
Updating network configuration files
68% complete
Deleting instance and datafiles
84% complete
100% complete
Database deletion completed.
Look at the log file "/u01/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details.
[grid@exa01vm02 +ASM2]$

After the removal, you just need to recreate the MGMTDB. This can be done manually or with a script from MOS. A good source of information is the note How to Move/Recreate GI Management Repository (GIMR / MGMTDB) to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1), which covers both the manual steps and the script (the script just calls dbca).

Here I used the script; you can download it from the note MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1). The script calls dbca, and you just need to specify the diskgroup you want to use to store the database files.

In this case, I ran:

[grid@exa01vm02 +ASM2]$ /tmp/mdbutil.pl --addmdb --target=+DATAC2
mdbutil.pl version : 1.98
2019-11-14 09:54:58: I Starting To Configure MGMTDB at +DATAC2...
2019-11-14 09:55:19: I Container database creation in progress... for GI 18.0.0.0.0
2019-11-14 10:12:50: I Plugable database creation in progress...
2019-11-14 10:16:59: I Executing "/tmp/mdbutil.pl --addchm" on exa01vm02 as root to configure CHM.
root@exa01vm02's password:
2019-11-14 10:17:26: I Executing "/tmp/mdbutil.pl --addchm" on exa01vm02 as root to configure CHM.
root@exa01vm02's password:
2019-11-14 10:17:34: I MGMTDB & CHM configuration done!
[grid@exa01vm02 +ASM2]$

You can even call it in debug mode:

[grid@exa02vm03 -MGMTDB]$ /tmp/mdbutil.pl --addmdb --target=+DATAC3 --debug
mdbutil.pl version : 1.98
2020-01-15 15:30:58: D Executing: /u01/app/18.0.0/grid/bin/srvctl status diskgroup -g DATAC3
2020-01-15 15:30:59: D Exit code: 0
2020-01-15 15:30:59: D Output of last command execution:
Disk Group DATAC3 is running on exa02vm03,exa01vm03
2020-01-15 15:30:59: I Starting To Configure MGMTDB at +DATAC3...
2020-01-15 15:30:59: D Executing: /u01/app/18.0.0/grid/bin/srvctl status mgmtlsnr
2020-01-15 15:30:59: D Exit code: 0
2020-01-15 15:30:59: D Output of last command execution:
Listener MGMTLSNR is enabled
2020-01-15 15:30:59: D Executing: /u01/app/18.0.0/grid/bin/srvctl status mgmtdb
2020-01-15 15:31:00: D Exit code: 1
2020-01-15 15:31:00: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2020-01-15 15:31:00: D Executing: /u01/app/18.0.0/grid/bin/srvctl status mgmtdb
2020-01-15 15:31:01: D Exit code: 1
2020-01-15 15:31:01: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2020-01-15 15:31:01: D Executing: /u01/app/18.0.0/grid/bin/srvctl stop mgmtlsnr
2020-01-15 15:31:05: D Exit code: 0
2020-01-15 15:31:05: D Output of last command execution:
2020-01-15 15:31:05: D Executing: /u01/app/18.0.0/grid/bin/crsctl query crs activeversion
2020-01-15 15:31:05: D Exit code: 0
2020-01-15 15:31:05: D Output of last command execution:
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
2020-01-15 15:31:05: D Executing: /u01/app/18.0.0/grid/bin/srvctl enable qosmserver
2020-01-15 15:31:06: D Exit code: 2
2020-01-15 15:31:06: D Output of last command execution:
PRKF-1321 : QoS Management Server is already enabled.
2020-01-15 15:31:06: D Executing: /u01/app/18.0.0/grid/bin/srvctl start qosmserver
2020-01-15 15:31:06: D Exit code: 2
2020-01-15 15:31:06: D Output of last command execution:
PRCC-1014 : qosmserver was already running
2020-01-15 15:31:06: I Container database creation in progress... for GI 18.0.0.0.0
2020-01-15 15:31:06: D Executing: /u01/app/18.0.0/grid/bin/dbca  -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName DATAC3 -datafileJarLocation /u01/app/18.0.0/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
2020-01-15 15:47:19: D Exit code: 0
2020-01-15 15:47:19: D Output of last command execution:
Prepare for db operation
2020-01-15 15:47:19: I Plugable database creation in progress...
2020-01-15 15:47:19: D Executing: /u01/app/18.0.0/grid/bin/mgmtca -local
2020-01-15 15:49:28: D Exit code: 0
2020-01-15 15:49:28: D Output of last command execution:
2020-01-15 15:49:28: D Executing: scp /tmp/mdbutil.pl exa01vm03:/tmp/
2020-01-15 15:49:28: D Exit code: 0
2020-01-15 15:49:28: D Output of last command execution:
2020-01-15 15:49:28: I Executing "/tmp/mdbutil.pl --addchm" on exa01vm03 as root to configure CHM.
2020-01-15 15:49:28: D Executing: ssh root@exa01vm03 "/tmp/mdbutil.pl --addchm"
root@exa01vm03's password:
2020-01-15 15:49:39: D Exit code: 0
2020-01-15 15:49:39: D Output of last command execution:
mdbutil.pl version : 1.98
2020-01-15 15:49:39: D Executing: scp /tmp/mdbutil.pl exa02vm03:/tmp/
2020-01-15 15:49:39: D Exit code: 0
2020-01-15 15:49:39: D Output of last command execution:
2020-01-15 15:49:39: I Executing "/tmp/mdbutil.pl --addchm" on exa02vm03 as root to configure CHM.
2020-01-15 15:49:39: D Executing: ssh root@exa02vm03 "/tmp/mdbutil.pl --addchm"
root@exa02vm03's password:
2020-01-15 15:49:46: D Exit code: 0
2020-01-15 15:49:46: D Output of last command execution:
mdbutil.pl version : 1.98
2020-01-15 15:49:46: I MGMTDB & CHM configuration done!
[grid@exa02vm03 -MGMTDB]$
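
As the debug output shows, mdbutil.pl is essentially a wrapper: it creates the container database with dbca and then runs mgmtca. The sketch below reconstructs the creation command from that debug output; GRID_HOME and the diskgroup name are assumptions for your environment, and the sketch only builds and prints the command rather than executing it.

```shell
# Values below are assumptions; adjust to your environment.
GRID_HOME=/u01/app/18.0.0/grid
DISKGROUP=DATAC3

# Reconstructed from the mdbutil.pl --debug output above.
create_cmd="$GRID_HOME/bin/dbca -silent -createDatabase \
-createAsContainerDatabase true -templateName MGMTSeed_Database.dbc \
-sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName $DISKGROUP \
-datafileJarLocation $GRID_HOME/assistants/dbca/templates \
-characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck"

echo "$create_cmd"
# After the CDB exists, mdbutil.pl runs "$GRID_HOME/bin/mgmtca -local"
# to create the cluster PDB and register the CRS credentials.
```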

Resume and finish

After recreating the MGMTDB, we can resume opatchauto:

[root@exa02vm03 ~]# opatchauto resume

OPatchauto session is initiated at Wed Jan 15 15:50:15 2020
Session log file is /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-01-15_03-50-16PM.log
Resuming existing session with id NRZ1

Starting CRS service on home /u01/app/18.0.0/grid
CRS service started successfully on home /u01/app/18.0.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:exa02vm03
CRS Home:/u01/app/18.0.0/grid
Version:18.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/patches/grid/29301682/28435192
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-15_13-40-42PM_1.log

Patch: /u01/patches/grid/29301682/28547619
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-15_13-40-42PM_1.log

Patch: /u01/patches/grid/29301682/29301631
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-15_13-40-42PM_1.log

Patch: /u01/patches/grid/29301682/29301643
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-15_13-40-42PM_1.log

Patch: /u01/patches/grid/29301682/29302264
Log: /u01/app/18.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-15_13-40-42PM_1.log



OPatchauto session completed at Wed Jan 15 15:57:24 2020
Time taken to complete the session 7 minutes, 9 seconds
[root@exa02vm03 ~]#

But why does this work? The idea is that when opatchauto checks whether the patch needs to be applied to MGMTDB, it can now reach the database, verify that everything is already in place, and the mgmtca applysql call succeeds. Since the credentials between GI and MGMTDB were recreated and are fine, the check completes correctly. Another point is that you are recreating the database with a binary that already has the RU applied.
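
Before resuming, you can also run datapatch against the recreated MGMTDB yourself and confirm nothing is pending. The sketch below checks sample `datapatch -verbose` output copied from the session earlier in this post; in a real run you would capture the output of `$ORACLE_HOME/OPatch/datapatch -verbose` as the grid user instead.

```shell
# Real commands (not run here), as the grid user with MGMTDB environment set:
#   cd $ORACLE_HOME/OPatch && ./datapatch -verbose
# Sample output below is copied from the session shown earlier in this post.
datapatch_out='Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    No interim patches need to be rolled back
    No release update patches need to be installed
    No interim patches need to be applied'

# If no release update patches are pending, it is safe to resume opatchauto.
if printf '%s\n' "$datapatch_out" | grep -q 'No release update patches need to be installed'; then
  echo "MGMTDB SQL patch level is current"
fi
```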

Again, this solution applies to these errors when communication between MGMTDB and GI fails because of credential or listener errors. I included the complete opatchauto output so you can identify the exact error (it happens only during mgmtca applysql) and confirm that the SQL changes were applied by datapatch and are registered inside the database. To finish: always open an SR.

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”
