Hello readers! You might be wondering why we would clear network socket files in an Oracle 19c RAC configuration: how to clear them, how often they need clearing, where the socket files are located, and what happens if you clear them while the cluster services are up. We will answer all of these questions here.
Let's begin.
What Happens When You Delete Socket Files While the Cluster Is Running?
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
TNS-12546: TNS:permission denied
TNS-12560: TNS:protocol adapter error
TNS-00516: Permission denied
Solaris Error: 13: Permission denied
ORA-29701: unable to connect to Cluster Manager
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01154: database busy. Open, close, mount, and dismount not allowed now
Different sessions hit different errors; honestly, it depends on which component is trying to use the sockets at that moment.
What is the Location of Socket Files in Oracle
In my experience the location varies by operating system, but one or more of the directories below will contain the socket files –
/tmp/.oracle, /var/tmp/.oracle or /usr/tmp/.oracle
Don't delete the folders themselves; delete only their contents.
When to Clear Socket Files
Inconsistent network socket files cause many errors, a few of which are listed above as examples. The inconsistency is usually caused either by automated Linux cleanup jobs or by someone clearing the socket files manually. In addition, whenever we patch a server and the cluster services do not come up, the SR engineer usually asks us to clear the socket files and try bringing up the cluster services again.
Step by Step How to Delete Socket Files
Prechecks ->
Before going for a server reboot, make sure you have enough information to recover in case the reboot messes up the configuration. Hence we will note down the following things ->
$GRID_HOME/bin/crsctl stat res -t
$GRID_HOME/bin/crsctl stat res -t -init
ps -ef | grep pmon
ps -ef | grep tns
ps -ef | grep d.bin
ifconfig -a
df -gt or df -kh
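The prechecks above can be collected into one timestamped log so the state can be compared after the reboot. This is a minimal sketch; GRID_HOME is an assumption you must set to your Grid Infrastructure home, and the log path is an arbitrary example.

```shell
#!/bin/sh
# Sketch: save the precheck output to one timestamped log (path is an example).
LOG="/tmp/precheck_$(uname -n)_$(date +%Y%m%d%H%M%S).log"

capture() {                       # run one command, labelled, into the log
  echo "===== $1 =====" >>"$LOG"
  sh -c "$1" >>"$LOG" 2>&1
}

# GRID_HOME must be exported beforehand; skipped when it is not set
[ -n "$GRID_HOME" ] && capture "$GRID_HOME/bin/crsctl stat res -t"
[ -n "$GRID_HOME" ] && capture "$GRID_HOME/bin/crsctl stat res -t -init"
capture 'ps -ef | grep pmon'
capture 'ps -ef | grep tns'
capture 'ps -ef | grep d.bin'
capture 'ifconfig -a'
capture 'df -kh'                  # use df -gt where df -kh is unavailable
echo "prechecks saved to $LOG"
```

Keeping the "before" log on shared storage (not only on the node being rebooted) makes the post-reboot comparison easier.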
Log in to the database and check the following ->
select name,open_mode,database_role from gv$database;
show parameter spfile
show parameter control
select path,os_mb from v$asm_disk where header_status='MEMBER';
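The database-side checks can also be run in a single sqlplus session and spooled for later reference. This sketch assumes OS authentication as the RDBMS owner and sqlplus on PATH; the log path is an arbitrary example, and the script only prints a notice when sqlplus is unavailable.

```shell
#!/bin/sh
# Sketch: run the database prechecks above in one sqlplus call and keep a copy.
out=$(
  if command -v sqlplus >/dev/null 2>&1; then
    sqlplus -S / as sysdba <<'EOF'
set lines 200 pages 100
select name, open_mode, database_role from gv$database;
show parameter spfile
show parameter control
select path, os_mb from v$asm_disk where header_status = 'MEMBER';
EOF
  else
    echo "sqlplus not on PATH - run the queries above manually"
  fi
)
printf '%s\n' "$out" | tee /tmp/db_precheck.log
```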
Step 1 – Stop database services on specific node
Make sure you have shut down the instances on the node where you plan to shut down the cluster.
Log in as the RDBMS owner ->
srvctl stop home -o <$ORACLE_HOME> -n <nodename> -s <statfile> -t immediate
The command above stops all instances running from the given ORACLE_HOME. If you have multiple database instances running from the same RDBMS home and want to stop them all with a single command, this is the easiest way. Otherwise you can go the traditional way –
srvctl config database
srvctl stop instance -d <db_unique_name> -i <instance_name>
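When you go the traditional way on a node with several instances, the instance names can be read off the pmon processes. The helper below is a sketch: it only prints the `srvctl stop instance` commands for review, and it assumes the db_unique_name is the instance name minus its trailing digit, which you should verify with `srvctl config database` before running anything.

```shell
#!/bin/sh
# Sketch: derive "srvctl stop instance" commands from pmon process names.
pmon_to_stop_cmds() {
  grep -o 'ora_pmon_[A-Za-z0-9_$]*' |
  sed 's/^ora_pmon_//' |
  while read -r inst; do
    db=$(printf '%s' "$inst" | sed 's/[0-9]*$//')   # assumption: db = inst minus digit
    printf 'srvctl stop instance -d %s -i %s\n' "$db" "$inst"
  done
}

# example with captured "ps -ef" output standing in for a live node
sample="oracle 4211 1 0 ora_pmon_orcl1
oracle 4377 1 0 ora_pmon_hrdb1"
printf '%s\n' "$sample" | pmon_to_stop_cmds
# prints: srvctl stop instance -d orcl -i orcl1
#         srvctl stop instance -d hrdb -i hrdb1
```

On a real node you would pipe `ps -ef` into the function instead of the sample text.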
Step 2 – Stop the Cluster Services on the Specific Node and Disable CRS Autostart
Stopping cluster services on a node requires root access. If the sysadmins do not share the root password, ask them to execute the commands for you.
Log in as root –
$GRID_HOME/bin/crsctl check crs
$GRID_HOME/bin/crsctl stop crs -f
or
$GRID_HOME/bin/crsctl stop crs
If you are planning to reboot the server, go ahead and disable the automatic start of the cluster services on reboot.
crsctl disable crs
Step 3 – Server Reboot with Help of System Administrator.
We usually let the System Administrator take care of these points because –
1. There is role segregation in most companies.
2. They know which logs to collect before the restart.
Step 4 – Remove the Socket files
Once the server reboot is complete and the system administrator has released the system back to us, verify that no cluster services are running. Now let's delete the socket files – only on the node that was released to us after the reboot.
Delete the files under folders –
/tmp/.oracle, /var/tmp/.oracle or /usr/tmp/.oracle
DON’T DELETE THE .ORACLE FOLDER
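The rule above – empty the directories, never remove them – can be wrapped in a small guard function. The real targets are /tmp/.oracle, /var/tmp/.oracle and /usr/tmp/.oracle; the demo below is a sketch that runs against a scratch directory (with made-up file names standing in for real socket files), so it is safe to try anywhere.

```shell
#!/bin/sh
# clear_socket_dir DIR: remove the socket files inside DIR while keeping
# the directory itself, exactly as the warning above requires.
clear_socket_dir() {
  d="$1"
  [ -d "$d" ] || return 0          # nothing to do if the directory is absent
  rm -f "$d"/* 2>/dev/null
  return 0
}

# Demo on a scratch directory standing in for /tmp/.oracle. On a real node
# you would loop over /tmp/.oracle /var/tmp/.oracle /usr/tmp/.oracle as root,
# only after the clusterware stack is down.
demo="$(mktemp -d)/.oracle"
mkdir -p "$demo"
touch "$demo/sLISTENER" "$demo/npohasd"   # hypothetical stand-in file names
clear_socket_dir "$demo"
ls -Ad "$demo"    # the directory survives; its contents are gone
```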
Step 5 – Start the Cluster Services
Log in as root and execute the following command –
$GRID_HOME/bin/crsctl start crs
Step 6 – Check Cluster Status and Enable the Autostart
You can execute these commands as the GI owner or as root.
$GRID_HOME/bin/crsctl check crs
$GRID_HOME/bin/crsctl stat res -t -init
$GRID_HOME/bin/crsctl stat res -t
Once all services are up, we have to re-enable CRS autostart –
$GRID_HOME/bin/crsctl enable crs
$GRID_HOME/bin/crsctl config crs
Step 7 – Start the Instances That Were Running on the Node
Run the following command as the RDBMS owner –
srvctl start home -o <$ORACLE_HOME> -n <nodename> -s <statfile>
or
srvctl start instance -d <db_unique_name> -i <instance_name>
Once node 1 is complete, move to node 2 and repeat steps 1 through 7.
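The per-node sequence above can be sketched as a wrapper script. This is hypothetical glue, not a supported tool: the srvctl/crsctl invocations are the ones shown in the steps, while NODE, ORACLE_HOME, GRID_HOME and the statfile path are placeholders you must adjust (and the crsctl calls must run as root). Outside a cluster node, where srvctl is not on PATH, it only prints the sequence instead of executing it.

```shell
#!/bin/sh
# Hypothetical wrapper stringing steps 1-7 together for one node (sketch only).
NODE="${1:-node1}"
STATE="/tmp/stop_home_${NODE}.stat"      # statfile path is an example

run() { echo "+ $*"; "$@" || { echo "FAILED: $*" >&2; exit 1; }; }
if ! command -v srvctl >/dev/null 2>&1; then
  run() { echo "+ $*"; }                 # dry-run mode: just print the sequence
fi

run srvctl stop home -o "$ORACLE_HOME" -n "$NODE" -s "$STATE" -t immediate
run "$GRID_HOME/bin/crsctl" stop crs               # as root
run "$GRID_HOME/bin/crsctl" disable crs            # before the reboot
echo ">> reboot with the sysadmin, clear the socket files, then continue:"
run "$GRID_HOME/bin/crsctl" start crs
run "$GRID_HOME/bin/crsctl" enable crs
run srvctl start home -o "$ORACLE_HOME" -n "$NODE" -s "$STATE"
```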
The activity is complete. Thanks for reading! Happy learning 🙂