Changing Internet Protocol (IP) Address on Oracle Real Application Clusters (RAC)

Abstract

When you are required to move an Oracle database that runs in a Real Application Clusters (RAC) configuration to a new network/IP address, it can take some time and be overwhelming if you don’t know what is involved. The steps that we followed made the move straightforward and successful for both our test and production RAC systems: we changed the IP addresses within a six-hour window and ensured that all services came back up and were ready to use.

I have provided these steps below for general-purpose information. The format in which this post is written is similar to an Oracle white paper that you can find on Metalink.

Documented Steps to Change RAC IP Addresses

  1. Log on to one of the cluster servers.
  2. Change directory to $CRS_HOME\bin
  3. Query the OCR for the current settings in the RAC. This will provide the current settings for the networks that are set up in the cluster.
    1. $CRS_HOME\bin\oifcfg iflist
  4. Query the OCR to find out which network is using which interface. This is similar to the command above; however, it returns only the public and cluster_interconnect settings.
    1. $CRS_HOME\bin\oifcfg getif
  5. Next we need to find out what VIP addresses are assigned to the node apps.
    1. $CRS_HOME\bin\srvctl config nodeapps -n <node name> -a -g -l -s
  6. Once we have all of the current settings documented, we can bring down the cluster and all of the associated services. This could be done with crs_stop -all, but bringing the cluster down that way can cause problems and should not be used unless Oracle Support recommends it. Instead, we will run a series of SRVCTL commands that bring down the cluster and notify us of any errors that come up (a consolidated worked example of the full command sequence appears after this list).
    1. $CRS_HOME\bin\srvctl stop database -d orcl -o immediate
    2. $CRS_HOME\bin\srvctl stop asm -n <node name>
      1. repeat for each ASM instance in the cluster
    3. $CRS_HOME\bin\srvctl stop nodeapps -n <node name>
      1. run the command for each node in the cluster
  7. Back up the voting disk and OCR before proceeding. This will ensure that both the voting disk and OCR can be recovered if needed.
    1. $CRS_HOME\bin\crsctl query css votedisk
    2. ocopy \\.\votedisk1 <file system location>
    3. ocrconfig -export <file system location> -s online
  8. Restart the Cluster Ready Services (CRS)
    1. $CRS_HOME\bin\srvctl start nodeapps -n <node name>
      1. run the command for each node in the cluster
  9. Change the IP addresses for all of the public and private interfaces that the RAC uses
    1. Change the public interface
      1. $CRS_HOME\bin\oifcfg delif -global <interface name>
      2. $CRS_HOME\bin\oifcfg setif -global <interface name>/#.#.#.0:public
    2. Change the interconnect interface (private)
      1. $CRS_HOME\bin\oifcfg delif -global <interface name>
      2. $CRS_HOME\bin\oifcfg setif -global <interface name>/#.#.#.0:cluster_interconnect
  10. Change the VIP addresses for the node apps
    1. $CRS_HOME\bin\srvctl modify nodeapps -n <node name> -A <new vip address>/<subnet mask>/<interface name>
      1. run this command for each node in the cluster that will be assigned a new vip address
  11. Update the tnsnames.ora and listener.ora entries to reflect the IP address changes for the VIPs, if needed. If you are using DNS names this should not be necessary, but it is worth checking.
  12. Flush the DNS cache on each server (Windows only)
    1. ipconfig /flushdns
  13. Ensure all of the proper entries are in the /etc/hosts file (on Windows, C:\Windows\System32\drivers\etc\hosts). Be sure not to put tabs in the file; tabs will prevent the nodes from pinging each other over the cluster_interconnect. A sample hosts layout is shown after this list.
  14. Bring up the rest of the cluster applications (ASM, database, and listener)
    1. $CRS_HOME\bin\srvctl start asm -n <node name>
    2. $CRS_HOME\bin\srvctl start database -d <database name> -o open
    3. $CRS_HOME\bin\srvctl start listener -n <node name> -l <listener name>
  15. At this point all cluster services should be online.
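
To make the steps above concrete, below is a consolidated sketch of the command sequence for a hypothetical two-node cluster. The node names (rac1, rac2), database name (orcl), interface names (Public, Private), subnets, VIP addresses, and backup paths are all placeholders for illustration; substitute the values returned by your own oifcfg and srvctl config output.

    REM Document the current configuration (steps 3-5)
    $CRS_HOME\bin\oifcfg iflist
    $CRS_HOME\bin\oifcfg getif
    $CRS_HOME\bin\srvctl config nodeapps -n rac1 -a -g -l -s
    $CRS_HOME\bin\srvctl config nodeapps -n rac2 -a -g -l -s

    REM Stop the database, ASM instances, and node apps (step 6)
    $CRS_HOME\bin\srvctl stop database -d orcl -o immediate
    $CRS_HOME\bin\srvctl stop asm -n rac1
    $CRS_HOME\bin\srvctl stop asm -n rac2
    $CRS_HOME\bin\srvctl stop nodeapps -n rac1
    $CRS_HOME\bin\srvctl stop nodeapps -n rac2

    REM Back up the voting disk and OCR (step 7); backup locations are placeholders
    $CRS_HOME\bin\crsctl query css votedisk
    ocopy \\.\votedisk1 d:\backup\votedisk1.bak
    ocrconfig -export d:\backup\ocr_before_ip_change.exp -s online

    REM Restart the node apps so the OCR can be updated (step 8)
    $CRS_HOME\bin\srvctl start nodeapps -n rac1
    $CRS_HOME\bin\srvctl start nodeapps -n rac2

    REM Re-register the public and interconnect networks (step 9)
    $CRS_HOME\bin\oifcfg delif -global Public
    $CRS_HOME\bin\oifcfg setif -global Public/10.10.10.0:public
    $CRS_HOME\bin\oifcfg delif -global Private
    $CRS_HOME\bin\oifcfg setif -global Private/192.168.1.0:cluster_interconnect

    REM Point each node's VIP at its new address (step 10)
    $CRS_HOME\bin\srvctl modify nodeapps -n rac1 -A 10.10.10.21/255.255.255.0/Public
    $CRS_HOME\bin\srvctl modify nodeapps -n rac2 -A 10.10.10.22/255.255.255.0/Public

    REM Bring the stack back up (step 14)
    $CRS_HOME\bin\srvctl start asm -n rac1
    $CRS_HOME\bin\srvctl start asm -n rac2
    $CRS_HOME\bin\srvctl start database -d orcl -o open
    $CRS_HOME\bin\srvctl start listener -n rac1
    $CRS_HOME\bin\srvctl start listener -n rac2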
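
For step 13, the hosts file on every node needs to carry the public, VIP, and interconnect names at their new addresses. The names and addresses below are placeholders only; separate the columns with spaces, not tabs.

    # Public addresses
    10.10.10.11    rac1.example.com      rac1
    10.10.10.12    rac2.example.com      rac2
    # Virtual IPs
    10.10.10.21    rac1-vip.example.com  rac1-vip
    10.10.10.22    rac2-vip.example.com  rac2-vip
    # Cluster interconnect
    192.168.1.1    rac1-priv
    192.168.1.2    rac2-priv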

In the environment in which our cluster runs, we use Oracle Grid Control (GC) to manage the cluster and other databases. When the IP addresses change, the cache on the Oracle Management Server (OMS) side of GC needs to be cleared out. Clearing the cache allows the new IP addresses to be associated with the cluster host names in GC and allows the flow of data during uploads. A short example of the bounce, including a status check, follows the steps below.

  1. Log in to the Grid Control OMS host server
  2. Stop the OMS servers
    1. $OMS_HOME\opmn\bin\opmnctl stopall
  3. Start the OMS server
    1. $OMS_HOME\opmn\bin\opmnctl startall
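
As a sketch, assuming $OMS_HOME points at your OMS installation, the bounce looks like the following; the final opmnctl status call is just a quick check that all OPMN-managed components report Alive afterwards.

    $OMS_HOME\opmn\bin\opmnctl stopall
    $OMS_HOME\opmn\bin\opmnctl startall
    $OMS_HOME\opmn\bin\opmnctl status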

References

Oracle Metalink Note 276434.1
Oracle Metalink Note 283684.1
Oracle Database 10g: Real Application Clusters Handbook
