Similar to databases, storage also comes with support timelines, and this is one of the major reasons the storage team will ask you to migrate ASM disks from old storage to new. For external redundancy diskgroups, adding the new disks and dropping the old ones can be done even with a minor difference in size, but this is not as simple for high or normal redundancy diskgroups, and the chances of getting new disks identical to the old ones are slim. Hence we should be extra careful while migrating the OCR/voting diskgroup. Today we will cover how to migrate the OCR, voting files, ASM spfile, and ASM password file to a new diskgroup.
Please note that all these commands were tested on an Oracle 19c cluster.
Create New diskgroup with Disks from new storage for OCR and Voting files
Check for new disks provided from new storage
Let's get started. Once the disks are provided to us, we need to check whether they are visible to ASM; I will check on both the nodes.
SQL> select path from v$asm_disk where header_status='CANDIDATE' order by 1;

PATH
---------------------------------
/dev/ASM/OCR_VOTE_01
/dev/ASM/OCR_VOTE_02
/dev/ASM/OCR_VOTE_03
/dev/ASM/OCR_VOTE_04
/dev/ASM/OCR_VOTE_05

SQL>
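Alternatively, the same candidate disks can be listed through ASMCMD (a convenience check, assuming the grid environment is set; the SQL query above remains the authoritative source):

# lists disk paths whose header status is CANDIDATE
asmcmd lsdsk --candidate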
Create Diskgroup with allocated disks from new Storage
Now I will create a diskgroup named OCR_VOTE_NEW with the new storage disks that are showing as CANDIDATE. Please note we are going with high redundancy; you can choose the redundancy level based on your company policy. A high redundancy diskgroup that stores voting files needs five failure groups (five voting file copies), which is why five disks are used here.
export ORACLE_SID=+ASM1
sqlplus / as sysasm

SQL> create diskgroup OCR_VOTE_NEW high redundancy
     disk '/dev/ASM/OCR_VOTE_01','/dev/ASM/OCR_VOTE_02','/dev/ASM/OCR_VOTE_03',
          '/dev/ASM/OCR_VOTE_04','/dev/ASM/OCR_VOTE_05'
     attribute 'compatible.rdbms'='11.2.0.0', 'compatible.asm'='19.0.0.0';
Mount the newly created diskgroup on other nodes
export ORACLE_SID=+ASM2
sqlplus / as sysasm

SQL> select * from v$asm_diskgroup where state='DISMOUNTED';
SQL> alter diskgroup OCR_VOTE_NEW mount;
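As a quick sanity check on each node (a sketch, assuming the grid environment is set), ASMCMD can confirm the diskgroup is mounted:

# shows state, redundancy type, and space for the new diskgroup
asmcmd lsdg OCR_VOTE_NEW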
Final Check
You need to check the diskgroup size (total_mb, free_mb) and the diskgroup state on both the nodes –
set lines 3000 pages 300
select * from gv$asm_diskgroup;
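If the full select * output is too wide to read, a trimmed version of the same check (a sketch selecting only the columns mentioned above) works as well:

set lines 200 pages 300
select inst_id, name, state, type, total_mb, free_mb
from gv$asm_diskgroup
where name = 'OCR_VOTE_NEW';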
Change ASM SPFILE and Password file location from current Diskgroup to New Diskgroup
Change of ASM SPFILE from current diskgroup to New Diskgroup
Let's get the current spfile location –
$ORACLE_HOME/bin/gpnptool get
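The output is the complete GPnP profile XML; the spfile location sits in the SPFile attribute of the ASM profile element, so you can narrow the output down (a convenience sketch):

# extract just the SPFile attribute from the GPnP profile
$ORACLE_HOME/bin/gpnptool get 2>/dev/null | grep -o 'SPFile="[^"]*"'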
Now let's create a pfile from the current spfile, then create the new spfile in the new diskgroup from that pfile.
export ORACLE_SID=+ASM1
sqlplus / as sysasm

SQL> create pfile='/tmp/init_asm.ora' from spfile;

File created.

SQL> create spfile='+OCR_VOTE_NEW' from pfile='/tmp/init_asm.ora';

File created.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.14.0.0.0
Now let's cross-check the spfile location for ASM –
$ORACLE_HOME/bin/gpnptool get
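ASMCMD offers a second way to confirm the registered spfile location (assuming the grid environment is set); it should now print a path under +OCR_VOTE_NEW:

# prints the spfile location registered in the GPnP profile
asmcmd spget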
Move ASM Password file from current diskgroup to New diskgroup
Check the current password file location in the ASM configuration –
grid@oracledbworld.com:~$ srvctl config asm
ASM home: <CRS home>
Password file: +OCR_VOTE/orapwASM
Backup of Password file: +MGMT/orapwASM_backup
ASM listener: LISTENER
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM
grid@oracledbworld.com:~$
Now let's check the password file location through asmcmd –
grid@oracledbworld.com:~$ asmcmd pwget --asm
+OCR_VOTE/orapwASM
Let's move the password file using the following command –
grid@oracledbworld.com:~$ asmcmd pwmove --asm -f +OCR_VOTE/orapwASM +OCR_VOTE_NEW/orapwASM
moving +OCR_VOTE/orapwASM -> +OCR_VOTE_NEW/orapwASM
If you don't use the -f option, you will get ASMCMD-8028 –
grid@oracledbworld.com:~$ asmcmd pwmove --asm +OCR_VOTE/orapwASM +OCR_VOTE_NEW/orapwASM
ASMCMD-8028: Password file '+OCR_VOTE/orapwASM' is associated with 'asm' already. Use the force option.
Now check the configuration to confirm the change –
grid@oracledbworld.com:~$ srvctl config asm
ASM home: <CRS home>
Password file: +OCR_VOTE_NEW/orapwASM
Backup of Password file: +MGMT/orapwASM_backup
ASM listener: LISTENER
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM
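Re-running pwget (the same command used earlier) should now also report the new diskgroup:

grid@oracledbworld.com:~$ asmcmd pwget --asm
+OCR_VOTE_NEW/orapwASM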
Change OCR and Vote disks from Current diskgroup to New diskgroup
Check the current OCR diskgroup and its integrity as the root user –
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84956
         Available space (kbytes) :     406728
         ID                       : 1317199346
         Device/File Name         :   +REDO_01
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
root@oracledbworld.com:~#
Validate the current vote disk status
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name                 Disk group
--  -----    -----------------                ---------                 ----------
 1. ONLINE   410c026e40234f78bf9fbb0da838be6f (/dev/ASM/REDO_DUB_01)    [REDO_01]
Located 1 voting disk(s).
root@oracledbworld.com:~#
Backup of current OCR Voting disk
The following command will take a backup in the backup location (diskgroup) that was configured earlier –
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/ocrconfig -manualbackup
or
You can take a backup at a specific location –
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/ocrconfig -export <location>
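You can verify the backups exist before proceeding; the export file name below is just a hypothetical example:

# list automatic and manual OCR backups
/GRIDHOME/oracle/app/product/grid/19.3.0/bin/ocrconfig -showbackup
# export to a hypothetical file name under /tmp
/GRIDHOME/oracle/app/product/grid/19.3.0/bin/ocrconfig -export /tmp/ocr_before_migration.ocr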
Main Activity
Keep the CRS alert log tailed in one session; in another session you can execute the diskgroup change commands –
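On 19c the CRS alert log normally sits under the Grid Infrastructure ADR; a sketch, assuming the default location (the hostname path component will differ on your system):

# run as the grid owner; ORACLE_BASE is the Grid Infrastructure base
tail -f $ORACLE_BASE/diag/crs/$(hostname -s)/crs/trace/alert.log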
Check cluster status –
crsctl stat res -t -init
crsctl stat res -t
crsctl check crs
crsctl check cluster -all
Let's move the OCR and voting disks from the current diskgroup to the new diskgroup –
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/ocrconfig -add +OCR_VOTE_NEW
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/ocrconfig -delete +REDO_01
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/crsctl replace votedisk +OCR_VOTE_NEW
Successful addition of voting disk 55c21209de7d4f73bf1d0a1b2bbb0b48.
Successful addition of voting disk a9de904b4d414feebf4784e1a19ae1ac.
Successful addition of voting disk 7e519858f0944ffbbf4003dc798811b8.
Successful addition of voting disk bc3a763700a24fecbf340e8dc10b5771.
Successful addition of voting disk 40ec69d53a254feebf2739743c03415d.
Successful deletion of voting disk 410c026e40234f78bf9fbb0da838be6f.
Successfully replaced voting disk group with +OCR_VOTE_NEW.
CRS-4266: Voting file(s) successfully replaced
Check the cluster registry integrity and Vote disk status to revalidate –
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84956
         Available space (kbytes) :     406728
         ID                       : 1317199346
         Device/File Name         : +OCR_VOTE_NEW
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name                 Disk group
--  -----    -----------------                ---------                 ----------
 1. ONLINE   55c21209de7d4f73bf1d0a1b2bbb0b48 (/dev/ASM/OCR_VOTE_01)    [OCR_VOTE_NEW]
 2. ONLINE   a9de904b4d414feebf4784e1a19ae1ac (/dev/ASM/OCR_VOTE_02)    [OCR_VOTE_NEW]
 3. ONLINE   7e519858f0944ffbbf4003dc798811b8 (/dev/ASM/OCR_VOTE_03)    [OCR_VOTE_NEW]
 4. ONLINE   bc3a763700a24fecbf340e8dc10b5771 (/dev/ASM/OCR_VOTE_04)    [OCR_VOTE_NEW]
 5. ONLINE   40ec69d53a254feebf2739743c03415d (/dev/ASM/OCR_VOTE_05)    [OCR_VOTE_NEW]
Located 5 voting disk(s).
root@oracledbworld.com:~#
Keep monitoring the CRS alert log to ensure there are no errors.
Cluster bounce is required
Everything up to this point has been an online activity.
To start ASM with the new spfile, and to ensure everything works fine after our activity, we will bounce the cluster services node by node.
Please check with the application team before the cluster bounce and schedule it in a lean period, as service fluctuation can be observed during the bounce.
Prechecks for Node 1
crsctl stat res -t -init
crsctl stat res -t
crsctl check crs
crsctl check cluster -all
Stop the database and cluster services on first node
From the database owner –
srvctl config database
srvctl status database -d <db_name> -v
srvctl stop instance -d <db_name> -i <first_instance>
srvctl status database -d <db_name> -v
From the root user –
ps -ef | grep tns
ps -ef | grep pmon
ps -ef | grep d.bin
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/crsctl stop crs -f
Monitor the CRS alert log, confirm that the cluster services are down on node 1, and then start them again –
ps -ef | grep d.bin
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/crsctl start crs -wait
Postchecks for Node 1
crsctl stat res -t -init
crsctl stat res -t
crsctl check crs
crsctl check cluster -all
Once the cluster services are up and all resources are online, start the database instance as the database owner –
srvctl config database
srvctl status database -d <db_name> -v
srvctl start instance -d <db_name> -i <first_instance>
srvctl status database -d <db_name> -v
Please monitor the database alert log while executing the above commands.
Prechecks for Node 2
crsctl stat res -t -init
crsctl stat res -t
crsctl check crs
crsctl check cluster -all
Stop the database and cluster services on second node
From the database owner –
srvctl config database
srvctl status database -d <db_name> -v
srvctl stop instance -d <db_name> -i <second_instance>
srvctl status database -d <db_name> -v
From the root user –
ps -ef | grep tns
ps -ef | grep pmon
ps -ef | grep d.bin
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/crsctl stop crs -f
Monitor the CRS alert log, confirm that the cluster services are down on node 2, and then start them again –
ps -ef | grep d.bin
root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/crsctl start crs -wait
Postchecks for Node 2
crsctl stat res -t -init
crsctl stat res -t
crsctl check crs
crsctl check cluster -all
Once the cluster services are up and all resources are online, start the database instance as the database owner –
srvctl config database
srvctl status database -d <db_name> -v
srvctl start instance -d <db_name> -i <second_instance>
srvctl status database -d <db_name> -v
Change the backup location if it is pointing to the old diskgroup which you want to drop
ocrconfig -backuploc <new location>
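For example, to point the OCR backup location at the new diskgroup (a sketch; a file system path of your choice works as well):

root@oracledbworld.com:~# /GRIDHOME/oracle/app/product/grid/19.3.0/bin/ocrconfig -backuploc +OCR_VOTE_NEW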
Final Check
Once everything looks clean, dismount the old diskgroup from both the nodes –
export ORACLE_SID=+ASM1
sqlplus / as sysasm
SQL> alter diskgroup REDO_01 dismount;

export ORACLE_SID=+ASM2
sqlplus / as sysasm
SQL> alter diskgroup REDO_01 dismount;
Wait for a day; then you can drop the diskgroup and release the storage to the system administrator and storage team.
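A sketch of that final cleanup, assuming the old diskgroup holds no other files; the diskgroup must be mounted on exactly one node before it can be dropped:

export ORACLE_SID=+ASM1
sqlplus / as sysasm
-- mount on this node only, in restricted mode, then drop with contents
SQL> alter diskgroup REDO_01 mount restricted;
SQL> drop diskgroup REDO_01 including contents;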
Reference
Move OCR, Vote File, ASM SPFILE to New Diskgroup (Doc ID 1638177.1)