Guenadi N Jilevski's Oracle BLOG

Oracle RAC, DG, EBS, DR and HA DBA BLOG

Clone GI and RDBMS homes in Oracle RAC 11.2.0.3 with clone.pl


In this article you will look at how to use the clone.pl script to clone the GI and RDBMS homes of an existing Oracle RAC database installation in order to add a new node and extend the existing cluster. For additional approaches to adding or deleting a node in an Oracle RAC cluster click here. The clone.pl script can also be used to clone the existing GI and RDBMS homes of one cluster to create a new Oracle cluster on another set of servers. You will also have a look at how to manually add an RDBMS instance on the new node without using dbca. The article refers to an existing Oracle cluster consisting of nodes raclinux1 and raclinux2 as described here and later upgraded to 11.2.0.3 as described here.

You will look at the following topics related to extending a cluster to a new node using clone.pl:

  1. Clone GI home to the new node
  2. Clone RDBMS home to the new node
  3. Manually add an instance to the new node

Cloning an existing home, either to extend a cluster to a new node or to create a new cluster, has the advantage of considerably reducing deployment time: cloning copies the original source GI and RDBMS homes along with all patches and creates an identical image on the target node(s). Apart from distributing all patches from the source to the target location, the cloning mechanism maintains the Oracle inventory on the target location. As a result, the cloned Oracle GI and RDBMS homes can be maintained and patched like any standard Oracle home installation using opatch, OUI or OEM.
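
As a rough sketch of what the GI cloning step looks like, the commands below show an 11.2-style clone.pl invocation. All paths, the home name and the node list are assumptions and must be adjusted to your environment; the command is only printed (dry run), not executed.

```shell
# Dry-run sketch of cloning a GI home with clone.pl (11.2 syntax).
# All paths, the home name and the node names are assumed values.
GRID_HOME=/u01/app/11.2.0.3/grid
ORACLE_BASE=/u01/app/oracle
NEW_NODE=raclinux3

# clone.pl re-registers the copied home in the central inventory so the
# cloned home can later be patched with opatch/OUI like any other home.
CLONE_CMD="perl ${GRID_HOME}/clone/bin/clone.pl \
ORACLE_BASE=${ORACLE_BASE} \
ORACLE_HOME=${GRID_HOME} \
ORACLE_HOME_NAME=Ora11g_gridinfrahome1 \
'-O\"CLUSTER_NODES={raclinux1,raclinux2,${NEW_NODE}}\"' \
'-O\"LOCAL_NODE=${NEW_NODE}\"' \
CRS=TRUE"
echo "${CLONE_CMD}"
```

On a real system the home is first archived on the source node, extracted into ${GRID_HOME} on the new node, and only then is clone.pl run there.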

November 17, 2011 Posted by | oracle | Leave a comment

Adding and Deleting a Node from Oracle RAC 11.2.0.3


In this article you will have a look at the steps for extending an existing Oracle cluster by adding a node, and at the opposite steps for removing a node from the cluster. Oracle 11.2.0.3 is used for verifying the reconfiguration steps. A two-node Oracle cluster consisting of nodes (raclinux1, raclinux2) with the specifications described here, plus a third node raclinux3, is used as the basis for the reconfiguration test. GI is installed using a non-GNS setup and the database is admin-managed. You can find information related to the upgrade to 11.2.0.3 here. The article will cover the following topics:

  • Add a node
    • Add GI home to the new node
    • Add RDBMS home to the new node
    • Add instance to the new node
  • Remove a node
    • Remove an instance from the node to be removed
    • Remove RDBMS home from the node to be removed
    • Remove GI home from the node to be removed
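
The add-node flow above can be sketched with the standard addNode.sh call. The node and VIP names are assumptions, and the command is only printed here rather than executed:

```shell
# Dry-run sketch: extend the GI home to a new node with addNode.sh.
# In a non-GNS setup the new node's VIP name must be supplied explicitly.
GRID_HOME=/u01/app/11.2.0.3/grid
ADD_NODE_CMD="${GRID_HOME}/oui/bin/addNode.sh -silent \
\"CLUSTER_NEW_NODES={raclinux3}\" \
\"CLUSTER_NEW_VIRTUAL_HOSTNAMES={raclinux3-vip}\""
echo "${ADD_NODE_CMD}"
```

The RDBMS home is extended the same way from $ORACLE_HOME/oui/bin, without the VIP argument.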

November 17, 2011 Posted by | oracle | 89 Comments

Upgrade Oracle RAC to 11.2.0.3 from 11.2.0.2 on Linux


In this article you will have a look at the steps to upgrade a two-node Oracle 11.2.0.2 RAC cluster on Linux to Oracle 11.2.0.3. You will check the prerequisites for upgrading the cluster, upgrade the GI software, upgrade the RDBMS software and, last but not least, upgrade the database. The setup consists of a two-node Oracle cluster running 11.2.0.2 on Linux as configured here.

Since Oracle patch set 11.2.0.3 is distributed as a complete installation (see here for a complete 11.2.0.3 installation), I expected it to perform an upgrade without asking for prerequisite patches, that is, to include all prerequisite patches and upgrade straight away from the previous patch set 11.2.0.2. However, during the upgrade of GI, OUI stopped me asking for Oracle patch 12539000.
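
A quick way to confirm where the cluster stands before and after such an upgrade is crsctl's version queries. The commands are printed as a dry run, and the output line below is a sample for illustration, not captured from a live cluster:

```shell
# Commands to check clusterware versions (printed as a dry run):
echo "crsctl query crs softwareversion"
echo "crsctl query crs activeversion"

# Illustrative output line you should see once the upgrade is complete
# on all nodes (sample text, not captured from a live cluster):
SAMPLE="Oracle Clusterware active version on the cluster is [11.2.0.3.0]"
echo "${SAMPLE}" | grep -q "11.2.0.3" && echo "cluster reports 11.2.0.3"
```

The active version only moves to 11.2.0.3 after the last node completes the rolling upgrade.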

November 13, 2011 Posted by | oracle | 8 Comments

Build HA for third party application with Oracle GI 11.2.0.3


In this article you will have a look at how to protect Apache from node failure by registering it with Oracle Clusterware (OC) for monitoring, restart, failover and high availability.

Overview of using Oracle Clusterware to protect third party applications

Starting with release 10.2, Oracle Clusterware has provided high availability for third party applications. An application is associated with a profile that contains a set of attributes allowing Oracle Clusterware (OC) to manage it (stop, start, monitor, restart, failover). OCR stores the application profile. Prior to Oracle 11.2, the crs_* utilities (crs_profile, crs_register, crs_setperm, crs_start, crs_relocate, crs_stop, crs_unregister) were used to manage third party applications brought under the control of OC. In Oracle 11.2 management of third party applications is standardized and has a common management interface using crsctl. Oracle 11.2 also introduces the appvipcfg utility for creating VIPs. Upon node failure the application VIP fails over to a surviving node along with the protected application. It is the application VIP that is used for accessing the application; thus, in case of failure, the application remains highly available.

In this article you will look at how to register Apache for management with OC 11.2.0.3. The process comprises several ordered steps, listed below.

  1. Create an application VIP MyTestVIP on 192.168.20.111.
  2. Modify the Apache configuration to listen on the newly created application VIP.
  3. Create a Perl action script to monitor, start, stop and clean up Apache.
  4. Create a MyTest resource.
  5. Test the implementation.
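
Steps 1 and 4 above can be sketched with the 11.2 utilities mentioned in the overview. The network number, user and action-script path are assumptions, and the commands are only printed, not executed:

```shell
# Dry-run sketch of registering Apache with Oracle Clusterware (11.2).
VIP_IP=192.168.20.111
ACTION_SCRIPT=/u01/app/grid/crs/public/apache.scr   # assumed path

# Step 1: create the application VIP (run as root on one node).
echo "appvipcfg create -network=1 -ip=${VIP_IP} -vipname=MyTestVIP -user=root"

# Step 4: add the resource, tying it to the VIP so they fail over together.
echo "crsctl add resource MyTest -type cluster_resource -attr \
\"ACTION_SCRIPT=${ACTION_SCRIPT},CHECK_INTERVAL=30,\
START_DEPENDENCIES=hard(MyTestVIP),STOP_DEPENDENCIES=hard(MyTestVIP)\""
```

The hard start/stop dependencies are what make Clusterware relocate Apache together with its VIP on node failure.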


November 13, 2011 Posted by | oracle | 7 Comments

Oracle GNS and converting Grid Infrastructure from static DNS SCAN/VIP to dynamic GNS provided SCAN/VIP


Note: As I have not seen official Oracle documentation on this subject, conduct thorough testing before relying on the procedure.

In this article you will have a look at the steps to convert an already installed Oracle Grid Infrastructure from static DNS-based SCAN and VIPs to a dynamic GNS-based configuration where the SCAN and node VIPs are provided by DHCP. You will refresh your understanding of GNS by briefly reviewing GNS concepts and benefits. You will look at the steps to set up and verify the GNS configuration as a prerequisite for implementing GNS in an existing Oracle 11.2.0.X GI/RAC installation.

Overview of GNS

Starting with release 11.2, Oracle introduced the Grid Naming Service (GNS), aimed at facilitating management of the SCAN and node VIPs. GNS provides dynamic, DHCP-allocated addresses for the SCAN and node VIPs. For GNS to operate, only one static address needs to be registered in DNS: the GNS VIP. A sub-domain in DNS must be configured to delegate to GNS, that is, queries for the defined sub-domain are forwarded to GNS at the specified GNS VIP. DHCP is used to provide the IP addresses. Because DHCP does not provide a name-to-IP-address mapping, mDNS is used to map the DHCP-assigned addresses to names. Within the cluster, nodes use mDNS for name resolution, while servers outside the cluster use DNS, which in turn forwards requests to GNS. GNS acts as a gateway between DNS and mDNS and assists with name resolution of the dynamically provided DHCP-based IP addresses. The benefits of GNS are apparent in multi-node clusters, where SCAN and node VIP management is simplified and transparent for the administrator. In a GNS configuration, adding and deleting nodes is even simpler than with static DNS entries for the node VIPs.

If you initially implemented RAC with static DNS, it makes sense to migrate to GNS if the number of nodes increases.
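
As an illustration of the delegation prerequisite described above, a BIND zone for gj.com might delegate a cluster sub-domain to the GNS VIP roughly as follows. The sub-domain name and addresses are hypothetical:

```
; Hypothetical fragment of the gj.com zone file (BIND syntax).
; One static A record for the GNS VIP, plus delegation of the
; cluster sub-domain to GNS.
cluster-gns.gj.com.    IN A   192.168.2.80
cluster.gj.com.        IN NS  cluster-gns.gj.com.
```

With this in place, queries for any name under cluster.gj.com are forwarded by DNS to the GNS daemon listening on the GNS VIP.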

November 11, 2011 Posted by | oracle | 4 Comments

Oracle SCAN and converting from a single entry SCAN to DNS SCAN


In this article you will have a look at the steps to modify the SCAN configuration, converting the SCAN from a single entry residing in /etc/hosts to a SCAN based on 3 DNS entries. SCAN was introduced in 11.2 and allows clients to use a single name to connect to the RAC cluster. SCAN load balances across the node listeners to connect to the instance that provides the best quality of service for a service. SCAN makes the cluster structure transparent as follows:

  1. You can add or remove cluster nodes without modifying the SCAN
  2. You can connect to a database without prior knowledge of which cluster nodes the database is running on

In this article it is assumed that Oracle GI 11.2.0.X is installed and running using a SCAN defined in /etc/hosts as shown below.

192.168.2.71 oel-cluster.gj.com oel-cluster

Oracle 11.2.0.3 is used for testing the steps.
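
Once the SCAN name resolves to three addresses in DNS (and the /etc/hosts entry is removed), the cluster can be pointed at the new resolution with srvctl. The SCAN name below comes from the example above, and the commands are only printed as a dry run:

```shell
# Dry-run sketch of refreshing the SCAN after DNS returns 3 addresses.
SCAN_NAME=oel-cluster.gj.com
for CMD in \
  "srvctl stop scan_listener" \
  "srvctl stop scan" \
  "srvctl modify scan -n ${SCAN_NAME}" \
  "srvctl modify scan_listener -u" \
  "srvctl start scan" \
  "srvctl start scan_listener"
do
  echo "${CMD}"
done
```

The `-u` flag updates the number of SCAN listeners to match the number of SCAN VIPs now provided by DNS.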

November 10, 2011 Posted by | oracle | 2 Comments