Yesterday, we were applying the Oracle October 2022 Release Update (19.17) to the Grid home. Node 1 patched successfully, but the patching failed on node 2 with the following error.
OPatchauto session is initiated at Sun Feb 19 13:46:57 2023

Session log file is /GRIDHOME/oracle/app/product/grid/19.3.0/cfgtoollogs/opatchauto/opatchauto2023-02-19_01-46-59PM.log
Resuming existing session with id BD4L

Performing postpatch operations on CRS - starting CRS service on home /GRIDHOME/oracle/app/product/grid/19.3.0
Postpatch operation log file location: /GRIDHOME/oracle/orabase/crsdata/oracledbworld02/crsconfig/crs_postpatch_apply_inplace_oracledbworld02_2023-02-19_01-47-25PM.log
Failed to start CRS service on home /GRIDHOME/oracle/app/product/grid/19.3.0

Execution of [GIStartupAction] patch action failed, check log for more details. Failures:
Patch Target : oracledbworld02->/GRIDHOME/oracle/app/product/grid/19.3.0 Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /GRIDHOME/oracle/app/product/grid/19.3.0, host: oracledbworld02.
Command failed: /GRIDHOME/oracle/app/product/grid/19.3.0/perl/bin/perl -I/GRIDHOME/oracle/app/product/grid/19.3.0/perl/lib -I/GRIDHOME/oracle/app/product/grid/19.3.0/opatchautocfg/db/dbtmp/bootstrap_oracledbworld02/patchwork/crs/install -I/GRIDHOME/oracle/app/product/grid/19.3.0/opatchautocfg/db/dbtmp/bootstrap_oracledbworld02/patchwork/xag /GRIDHOME/oracle/app/product/grid/19.3.0/opatchautocfg/db/dbtmp/bootstrap_oracledbworld02/patchwork/crs/install/rootcrs.pl -postpatch -norestart
Command failure output:
Using configuration parameter file: /GRIDHOME/oracle/app/product/grid/19.3.0/opatchautocfg/db/dbtmp/bootstrap_oracledbworld02/patchwork/crs/install/crsconfig_params
The log of current session can be found at:
  /GRIDHOME/oracle/orabase/crsdata/oracledbworld02/crsconfig/crs_postpatch_apply_inplace_oracledbworld02_2023-02-19_01-47-25PM.log
Oracle Clusterware active version on the cluster is [19.0.0.0.0].
The cluster upgrade state is [NORMAL]. The cluster active patch level is [1583550732].
PRCR-1105 : Failed to relocate resource ora.mgmtdb to node oracledbworld02
PRCR-1089 : Failed to relocate resource ora.mgmtdb.
CRS-2549: Resource 'ora.mgmtdb' cannot be placed on 'oracledbworld02' as it is not a valid candidate as per the placement policy
2023/02/19 14:02:33 CLSRSC-180: An error occurred while executing the command '/GRIDHOME/oracle/app/product/grid/19.3.0/bin/srvctl relocate mgmtdb -n oracledbworld02'
2023/02/19 14:02:33 CLSRSC-180: An error occurred while executing the command 'srvctl relocate mgmtdb'

After fixing the cause of failure Run opatchauto resume
]

OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.

OPatchAuto failed.

OPatchauto session completed at Sun Feb 19 14:02:38 2023
Time taken to complete the session 15 minutes, 42 seconds
Just to confirm, we first tried to relocate the MGMTDB to node 2 manually, but that failed with the same errors. The commands were roughly as sketched below.
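A minimal sketch of the manual relocation attempt, run as the grid user (node and resource names are from this environment; adjust for yours):

# Check where the management database is currently running
srvctl status mgmtdb

# Attempt to relocate it to node 2 (this failed here with PRCR-1105 / CRS-2549)
srvctl relocate mgmtdb -node oracledbworld02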
From past experience we knew that RESOURCE_USE_ENABLED must be set to 1 for a node to host resources such as ASM and the MGMTDB. So we enabled RESOURCE_USE_ENABLED on the second node so the MGMTDB could be relocated there, and it worked for us.
Note that a change to RESOURCE_USE_ENABLED only takes effect after the Clusterware stack (Oracle High Availability Services) on that node is restarted, as shown below.
-bash-5.1# ./crsctl stat server -f
NAME=oracledbworld01
MEMORY_SIZE=86016
CPU_COUNT=48
CPU_CLOCK_RATE=5067
CPU_HYPERTHREADING=1
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
CSS_CRITICAL=no
CSS_CRITICAL_TOTAL=0
RESOURCE_TOTAL=2
SITE_NAME=icoresng-cls
STATE=ONLINE
ACTIVE_POOLS=ora.namrata ora.oracledbworld Generic
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

NAME=oracledbworld02
MEMORY_SIZE=117760
CPU_COUNT=48
CPU_CLOCK_RATE=5067
CPU_HYPERTHREADING=1
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=0
SERVER_LABEL=
PHYSICAL_HOSTNAME=
CSS_CRITICAL=no
CSS_CRITICAL_TOTAL=0
RESOURCE_TOTAL=0
SITE_NAME=icoresng-cls
STATE=ONLINE
ACTIVE_POOLS=Free
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

-bash-5.1# ./crsctl set resource use 1
CRS-4416: Server attribute 'RESOURCE_USE_ENABLED' successfully changed. Restart Oracle High Availability Services for new value to take effect.

Then we restarted CRS on node 2:

-bash-5.1# ./crsctl stop crs -f > stop_log_<date>.log
-bash-5.1# ./crsctl start crs -wait > start_log_<date>.log

-bash-5.1# ./crsctl stat server -f
NAME=oracledbworld01
MEMORY_SIZE=86016
CPU_COUNT=48
CPU_CLOCK_RATE=5067
CPU_HYPERTHREADING=1
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
CSS_CRITICAL=no
CSS_CRITICAL_TOTAL=0
RESOURCE_TOTAL=2
SITE_NAME=icoresng-cls
STATE=ONLINE
ACTIVE_POOLS=ora.namrata ora.oracledbworld Generic
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

NAME=oracledbworld02
MEMORY_SIZE=117760
CPU_COUNT=48
CPU_CLOCK_RATE=5067
CPU_HYPERTHREADING=1
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
CSS_CRITICAL=no
CSS_CRITICAL_TOTAL=0
RESOURCE_TOTAL=0
SITE_NAME=icoresng-cls
STATE=ONLINE
ACTIVE_POOLS=ora.namrata ora.oracledbworld Generic
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub
After the restart, the manual relocate worked. So, to complete the post-patch steps, we ran opatchauto resume; the rough sequence is sketched below.
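A sketch of the sequence at this point, not the exact session output (the opatchauto invocation assumes the grid home's OPatch directory is in root's PATH):

# As the grid user: relocate the management database to node 2 and verify
srvctl relocate mgmtdb -node oracledbworld02
srvctl status mgmtdb

# As root: resume the interrupted opatchauto session
opatchauto resume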
But it failed again with the same error:
PRCR-1105 : Failed to relocate resource ora.mgmtdb to node oracledbworld02
PRCR-1089 : Failed to relocate resource ora.mgmtdb.
This was odd. What I observed in our case is that running opatchauto resume reset RESOURCE_USE_ENABLED back to 0 on node 2.
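A quick way to confirm this behaviour right after the resume fails (assuming the grid home's bin directory is in PATH):

# Show only the server names and their RESOURCE_USE_ENABLED values
crsctl stat server -f | grep -E '^(NAME|RESOURCE_USE_ENABLED)='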
Since the MGMTDB step is usually the last step of the post-patch phase, I was honestly not too worried about it. But because this was a production database server, we wanted the patching to finish cleanly.
So we raised an SR with Oracle Support, and the solution we received is described below.
The Oracle engineer first requested the following details, which showed that the patch inventory on both nodes was clean:
Node 1::
======
grid@oracledbworld01:$ORACLE_HOME/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
1583550732

grid@oracledbworld01:$ORACLE_HOME/bin/kfod op=patches
---------------
List of Patches
===============
33575402
34419443
34428761
34444834
34580338

grid@oracledbworld01:$crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
grid@oracledbworld01:/GRIDHOME/oracle$

Node 2::
======
grid@oracledbworld02:$ORACLE_HOME/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
1583550732
grid@oracledbworld02:

grid@oracledbworld02:$ORACLE_HOME/bin/kfod op=patches
---------------
List of Patches
===============
33575402
34419443
34428761
34444834
34580338

grid@oracledbworld02:crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
grid@oracledbworld02:
Solution for PRCR-1089 : Failed to relocate resource ora.mgmtdb (Oracle 19c)
The fix was to run the post-patch step ($GI_HOME/crs/install/rootcrs.sh -postpatch, which invokes rootcrs.pl) as root on the second node, where the post-patch phase had failed for us:
# hostname
oracledbworld02
# id
uid=0(root) gid=0(system) groups=2(bin),3(sys),7(security),8(cron),10(audit),11(lp)
# /GRIDHOME/oracle/app/product/grid/19.3.0/crs/install/rootcrs.sh -postpatch
Using configuration parameter file: /GRIDHOME/oracle/app/product/grid/19.3.0/crs/install/crsconfig_params
The log of current session can be found at:
  /GRIDHOME/oracle/orabase/crsdata/oracledbworld02/crsconfig/crs_postpatch_apply_inplace_oracledbworld02_2023-02-19_04-20-31PM.log
Oracle Clusterware active version on the cluster is [19.0.0.0.0].
The cluster upgrade state is [NORMAL]. The cluster active patch level is [1583550732].
SQL Patching tool version 19.17.0.0.0 Production on Sun Feb 19 16:29:53 2023
Copyright (c) 2012, 2022, Oracle.  All rights reserved.

Log file for this invocation: /GRIDHOME/oracle/orabase/cfgtoollogs/sqlpatch/sqlpatch_5706166_2023_02_19_16_29_54/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    19.17.0.0.0 Release_Update 220928055956: Installed
  PDB CDB$ROOT:
    Applied 19.11.0.0.0 Release_Update 210415114417 successfully on 27-JUL-21 07.57.55.701659 PM
  PDB GIMR_DSCREP_10:
    Applied 19.11.0.0.0 Release_Update 210415114417 successfully on 27-JUL-21 07.58.02.482673 PM
  PDB PDB$SEED:
    Applied 19.11.0.0.0 Release_Update 210415114417 successfully on 27-JUL-21 07.57.59.056439 PM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED GIMR_DSCREP_10
    No interim patches need to be rolled back
    Patch 34419443 (Database Release Update : 19.17.0.0.221018 (34419443)):
      Apply from 19.11.0.0.0 Release_Update 210415114417 to 19.17.0.0.0 Release_Update 220928055956
    No interim patches need to be applied

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...done
Patch 34419443 apply (pdb CDB$ROOT): SUCCESS
  logfile: /GRIDHOME/oracle/orabase/cfgtoollogs/sqlpatch/34419443/24972392/34419443_apply__MGMTDB_CDBROOT_2023Feb19_16_31_11.log (no errors)
Patch 34419443 apply (pdb PDB$SEED): SUCCESS
  logfile: /GRIDHOME/oracle/orabase/cfgtoollogs/sqlpatch/34419443/24972392/34419443_apply__MGMTDB_PDBSEED_2023Feb19_16_33_38.log (no errors)
Patch 34419443 apply (pdb GIMR_DSCREP_10): SUCCESS
  logfile: /GRIDHOME/oracle/orabase/cfgtoollogs/sqlpatch/34419443/24972392/34419443_apply__MGMTDB_GIMR_DSCREP_10_2023Feb19_16_33_38.log (no errors)

Automatic recompilation incomplete; run utlrp.sql to revalidate.
  PDBs: GIMR_DSCREP_10 PDB$SEED

SQL Patching tool complete on Sun Feb 19 16:34:55 2023
2023/02/19 16:37:00 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2023/02/19 16:37:08 CLSRSC-672: Post-patch steps for patching GI home successfully completed.
#
After completion we checked, and everything came up normally:
Node 1 -->
grid@oracledbworld01:/GRIDHOME/oracle$crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [1583550732].
grid@oracledbworld01:/GRIDHOME/oracle$

Node 2 -->
grid@oracledbworld02:/GRIDHOME/oracle$crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [1583550732].
grid@oracledbworld02:/GRIDHOME/oracle$
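Beyond the active version, a few extra sanity checks we like to run after GI patching (a sketch, assuming the grid user environment is set for the grid home):

# Confirm the management database is running and on the expected node
srvctl status mgmtdb

# Confirm the ora.mgmtdb resource state across the cluster
crsctl stat res ora.mgmtdb -t

# Confirm the binary patch inventory in the grid home
$ORACLE_HOME/OPatch/opatch lspatches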
Hope this solution will work for you as well! Happy Learning 🙂