
Adding and Deleting a Node from Oracle RAC 11.2.0.3


In this article you will have a look at the steps to extend an existing Oracle cluster by adding a node, and at the opposite steps to remove a node from the cluster. Oracle 11.2.0.3 is used to verify the reconfiguration steps. A two-node Oracle cluster consisting of nodes raclinux1 and raclinux2, with specifications described here, and a third node, raclinux3, are used as a basis for the reconfiguration test described in the article. GI is installed using a non-GNS setup and the database is admin-managed. You can find information related to the upgrade to 11.2.0.3 here. The article will cover the following topics:

  • Add a node
    • Add GI home to the new node
    • Add RDBMS home to the new node
    • Add instance to the new node
  • Remove a node
    • Remove an instance from the node to be removed
    • Remove RDBMS home from the node to be removed
    • Remove GI home from the node to be removed
  1. Add a node

Before you add a node, make sure that the node is properly configured to support Oracle GI and RDBMS installation and database creation. For a detailed list of the prerequisites for installing Oracle GI and RDBMS click here. As the RAC3 VM running the raclinux3 node was cloned from RAC1 running raclinux1, the prerequisites are met. I will illustrate the Cluster Verification Utility (cluvfy) commands used to verify the prerequisites. The existing cluster nodes are raclinux1 and raclinux2. Node raclinux3 will be added.

Verify the prerequisites with cluvfy

Log in as the grid user on one of the existing cluster nodes, for example raclinux2, and run the following commands.

cluvfy stage -post hwos -n raclinux3 -verbose

cluvfy comp peer -refnode raclinux1 -n raclinux3 -orainv oinstall -osdba asmdba -verbose

cluvfy stage -pre nodeadd -n raclinux3 -verbose

cluvfy stage -pre crsinst -n raclinux3 -verbose

Fix any problems related to failed checks and make sure that the prerequisites are met before proceeding further. For the detailed output of the commands, see Annex 1.

1.1    Add GI home to the new node using addNode

In order to add GI to the new node raclinux3, addNode.sh will be run from an existing node, raclinux2. Run addNode.sh as the grid user from $GI_HOME/oui/bin, assuming that you have met the prerequisites. If you want to override pre-check errors that you have reviewed and accepted, set the IGNORE_PREADDNODE_CHECKS variable to Y.
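Note that this cluster does not use GNS, so the VIP host names are passed explicitly. On a GNS-configured cluster the VIP parameter would be omitted; a minimal sketch under that assumption:

./addNode.sh -silent "CLUSTER_NEW_NODES={raclinux3}"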

export IGNORE_PREADDNODE_CHECKS=Y

[grid@raclinux2 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={raclinux3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={raclinux3-vip}"

Performing pre-checks for node addition

Checking node reachability…

Node reachability check passed from node “raclinux2”

Checking user equivalence…

User equivalence check passed for user “grid”

Checking node connectivity…

Checking hosts config file…

Verification of the hosts config file successful

Check: Node connectivity for interface “eth0”

Node connectivity passed for interface “eth0”

TCP connectivity check passed for subnet “192.168.20.0”

Checking subnet mask consistency…

Subnet mask consistency check passed for subnet “192.168.20.0”.

Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Checking CRS integrity…

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources…

Checking CRS home location…

“/u01/app/11.2.0.3/grid” is shared

Shared resources check for node addition passed

Checking node connectivity…

Checking hosts config file…

Verification of the hosts config file successful

Check: Node connectivity for interface “eth0”

Node connectivity passed for interface “eth0”

TCP connectivity check passed for subnet “192.168.20.0”

Check: Node connectivity for interface “eth1”

Node connectivity passed for interface “eth1”

TCP connectivity check passed for subnet “10.10.20.0”

Checking subnet mask consistency…

Subnet mask consistency check passed for subnet “192.168.20.0”.

Subnet mask consistency check passed for subnet “10.10.20.0”.

Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Total memory check passed

Available memory check passed

Swap space check passed

Free disk space check passed for “raclinux3:/u01/app/11.2.0.3/grid”

Free disk space check passed for “raclinux2:/u01/app/11.2.0.3/grid”

Free disk space check passed for “raclinux3:/tmp”

Free disk space check passed for “raclinux2:/tmp”

Check for multiple users with UID value 1100 passed

User existence check passed for “grid”

Run level check passed

Hard limits check passed for “maximum open file descriptors”

Soft limits check passed for “maximum open file descriptors”

Hard limits check passed for “maximum user processes”

Soft limits check passed for “maximum user processes”

System architecture check passed

Kernel version check passed

Kernel parameter check passed for “semmsl”

Kernel parameter check passed for “semmns”

Kernel parameter check passed for “semopm”

Kernel parameter check passed for “semmni”

Kernel parameter check passed for “shmmax”

Kernel parameter check passed for “shmmni”

Kernel parameter check passed for “shmall”

Kernel parameter check passed for “file-max”

Kernel parameter check passed for “ip_local_port_range”

Kernel parameter check passed for “rmem_default”

Kernel parameter check passed for “rmem_max”

Kernel parameter check passed for “wmem_default”

Kernel parameter check passed for “wmem_max”

Kernel parameter check passed for “aio-max-nr”

Package existence check passed for “make”

Package existence check passed for “binutils”

Package existence check passed for “gcc(x86_64)”

Package existence check passed for “libaio(x86_64)”

Package existence check passed for “glibc(x86_64)”

Package existence check passed for “compat-libstdc++-33(x86_64)”

Package existence check passed for “elfutils-libelf(x86_64)”

Package existence check passed for “elfutils-libelf-devel”

Package existence check passed for “glibc-common”

Package existence check passed for “glibc-devel(x86_64)”

Package existence check passed for “glibc-headers”

Package existence check passed for “gcc-c++(x86_64)”

Package existence check passed for “libaio-devel(x86_64)”

Package existence check passed for “libgcc(x86_64)”

Package existence check passed for “libstdc++(x86_64)”

Package existence check passed for “libstdc++-devel(x86_64)”

Package existence check passed for “sysstat”

Package existence check passed for “ksh”

Check for multiple users with UID value 0 passed

Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user’s primary group passed

Checking OCR integrity…

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration…

Oracle Cluster Voting Disk configuration check passed

Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)…

NTP Configuration file check started…

NTP Configuration file check passed

Checking daemon liveness…

Liveness check passed for “ntpd”

Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon’s boot time configuration check for slewing option passed

NTP common Time Server Check started…

PRVF-5408 : NTP Time Server “78.47.24.68” is common only to the following nodes “raclinux2”

PRVF-5408 : NTP Time Server “192.43.244.18” is common only to the following nodes “raclinux2”

PRVF-5408 : NTP Time Server “129.69.1.153” is common only to the following nodes “raclinux2”

Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started…

Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

User “grid” is not part of “root” group. Check passed

Checking consistency of file “/etc/resolv.conf” across nodes

File “/etc/resolv.conf” does not have both domain and search entries defined

domain entry in file “/etc/resolv.conf” is consistent across nodes

search entry in file “/etc/resolv.conf” is consistent across nodes

PRVF-5636 : The DNS response time for an unreachable node exceeded “15000” ms on following nodes: raclinux3

File “/etc/resolv.conf” is not consistent across nodes

Checking VIP configuration.

Checking VIP Subnet configuration.

Check for VIP Subnet configuration passed.

Checking VIP reachability

Check for VIP reachability passed.

Pre-check for node addition was unsuccessful on all the nodes.

[grid@raclinux2 bin]$

Since the pre-check failed only on the DNS-related checks (PRVF-5636 and the /etc/resolv.conf consistency check), set IGNORE_PREADDNODE_CHECKS=Y and rerun addNode.sh:

export IGNORE_PREADDNODE_CHECKS=Y

[grid@raclinux2 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={raclinux3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={raclinux3-vip}"

Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 9951 MB Passed

Oracle Universal Installer, Version 11.2.0.3.0 Production

Copyright (C) 1999, 2011, Oracle. All rights reserved.

Performing tests to see whether nodes raclinux1,raclinux3 are available

……………………………………………………… 100% Done.


—————————————————————————–

Cluster Node Addition Summary

Global Settings

 Source: /u01/app/11.2.0.3/grid

New Nodes

Space Requirements

New Nodes

 raclinux3

 /u01: Required 4.30GB : Available 269.12GB

Installed Products

Product Names

Oracle Grid Infrastructure 11.2.0.3.0

 Sun JDK 1.5.0.30.03

Installer SDK Component 11.2.0.3.0

Oracle One-Off Patch Installer 11.2.0.1.7

Oracle Universal Installer 11.2.0.3.0

 Oracle USM Deconfiguration 11.2.0.3.0

 Oracle Configuration Manager Deconfiguration 10.3.1.0.0

Enterprise Manager Common Core Files 10.2.0.4.4

 Oracle DBCA Deconfiguration 11.2.0.3.0

 Oracle RAC Deconfiguration 11.2.0.3.0

Oracle Quality of Service Management (Server) 11.2.0.3.0

Installation Plugin Files 11.2.0.3.0

Universal Storage Manager Files 11.2.0.3.0

Oracle Text Required Support Files 11.2.0.3.0

Automatic Storage Management Assistant 11.2.0.3.0

 Oracle Database 11g Multimedia Files 11.2.0.3.0

Oracle Multimedia Java Advanced Imaging 11.2.0.3.0

Oracle Globalization Support 11.2.0.3.0

 Oracle Multimedia Locator RDBMS Files 11.2.0.3.0

Oracle Core Required Support Files 11.2.0.3.0

Bali Share 1.1.18.0.0

 Oracle Database Deconfiguration 11.2.0.3.0

Oracle Quality of Service Management (Client) 11.2.0.3.0

Expat libraries 2.0.1.0.1

Oracle Containers for Java 11.2.0.3.0

Perl Modules 5.10.0.0.1

Secure Socket Layer 11.2.0.3.0

 Oracle JDBC/OCI Instant Client 11.2.0.3.0

Oracle Multimedia Client Option 11.2.0.3.0

LDAP Required Support Files 11.2.0.3.0

Character Set Migration Utility 11.2.0.3.0

Perl Interpreter 5.10.0.0.2

PL/SQL Embedded Gateway 11.2.0.3.0

 OLAP SQL Scripts 11.2.0.3.0

Database SQL Scripts 11.2.0.3.0

Oracle Extended Windowing Toolkit 3.4.47.0.0

 SSL Required Support Files for InstantClient 11.2.0.3.0

SQL*Plus Files for Instant Client 11.2.0.3.0

Oracle Net Required Support Files 11.2.0.3.0

Oracle Database User Interface 2.2.13.0.0

 RDBMS Required Support Files for Instant Client 11.2.0.3.0

 RDBMS Required Support Files Runtime 11.2.0.3.0

XML Parser for Java 11.2.0.3.0

Oracle Security Developer Tools 11.2.0.3.0

Oracle Wallet Manager 11.2.0.3.0

Enterprise Manager plugin Common Files 11.2.0.3.0

Platform Required Support Files 11.2.0.3.0

 Oracle JFC Extended Windowing Toolkit 4.2.36.0.0

 RDBMS Required Support Files 11.2.0.3.0

Oracle Ice Browser 5.2.3.6.0

Oracle Help For Java 4.2.9.0.0

Enterprise Manager Common Files 10.2.0.4.3

 Deinstallation Tool 11.2.0.3.0

Oracle Java Client 11.2.0.3.0

Cluster Verification Utility Files 11.2.0.3.0

Oracle Notification Service (eONS) 11.2.0.3.0

Oracle LDAP administration 11.2.0.3.0

Cluster Verification Utility Common Files 11.2.0.3.0

 Oracle Clusterware RDBMS Files 11.2.0.3.0

Oracle Locale Builder 11.2.0.3.0

Oracle Globalization Support 11.2.0.3.0

 Buildtools Common Files 11.2.0.3.0

Oracle RAC Required Support Files-HAS 11.2.0.3.0

SQL*Plus Required Support Files 11.2.0.3.0

 XDK Required Support Files 11.2.0.3.0

Agent Required Support Files 10.2.0.4.3

Parser Generator Required Support Files 11.2.0.3.0

 Precompiler Required Support Files 11.2.0.3.0

Installation Common Files 11.2.0.3.0

Required Support Files 11.2.0.3.0

 Oracle JDBC/THIN Interfaces 11.2.0.3.0

Oracle Multimedia Locator 11.2.0.3.0

Oracle Multimedia 11.2.0.3.0

HAS Common Files 11.2.0.3.0

Assistant Common Files 11.2.0.3.0

PL/SQL 11.2.0.3.0

HAS Files for DB 11.2.0.3.0

Oracle Recovery Manager 11.2.0.3.0

Oracle Database Utilities 11.2.0.3.0

Oracle Notification Service 11.2.0.3.0

SQL*Plus 11.2.0.3.0

 Oracle Netca Client 11.2.0.3.0

Oracle Net 11.2.0.3.0

Oracle JVM 11.2.0.3.0

Oracle Internet Directory Client 11.2.0.3.0

Oracle Net Listener 11.2.0.3.0

Cluster Ready Services Files 11.2.0.3.0

 Oracle Database 11g 11.2.0.3.0

—————————————————————————–

Instantiating scripts for add node (Monday, November 14, 2011 10:56:28 PM EET)

. 1% Done.

Instantiation of add node scripts complete

Copying to remote nodes (Monday, November 14, 2011 10:56:40 PM EET)

………………………………………………………………………………….. 96% Done.

Home copied to new nodes

Saving inventory on nodes (Monday, November 14, 2011 11:15:12 PM EET)

. 100% Done.

Save inventory complete

WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.

To register the new inventory please run the script at ‘/u01/app/oraInventory/orainstRoot.sh’ with root privileges on nodes ‘raclinux3’.

If you do not register the inventory, you may not be able to update or patch the products you installed.

The following configuration scripts need to be executed as the “root” user in each new cluster node. Each script in the list below is followed by a list of nodes.

/u01/app/oraInventory/orainstRoot.sh #On nodes raclinux3

/u01/app/11.2.0.3/grid/root.sh #On nodes raclinux3

To execute the configuration scripts:

1. Open a terminal window

2. Log in as “root”

3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0.3/grid was successful.

Please check ‘/tmp/silentInstall.log’ for more details.

[grid@raclinux2 bin]$

After addNode.sh completes, open a new session on the node being added, raclinux3, and run the suggested scripts as the root user.

[root@raclinux3 grid]# /u01/app/oraInventory/orainstRoot.sh

Creating the Oracle inventory pointer file (/etc/oraInst.loc)

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

You have new mail in /var/spool/mail/root

[root@raclinux3 grid]#

[root@raclinux3 grid]# /u01/app/11.2.0.3/grid/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

 ORACLE_HOME= /u01/app/11.2.0.3/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

 Copying dbhome to /usr/local/bin …

 Copying oraenv to /usr/local/bin …

 Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params

Creating trace directory

OLR initialization – successful

Adding Clusterware entries to inittab

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node raclinux1, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user ‘root’, privgrp ‘root’..

Operation successful.

Configure Oracle Grid Infrastructure for a Cluster … succeeded

[root@raclinux3 grid]#
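At this point the clusterware stack should be up on all three nodes. Before adding the RDBMS home you can optionally confirm this from any node; a quick sketch:

crsctl check cluster -all

olsnodes -n -s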

1.2    Add an RDBMS home to the new node using addNode

After the GI home has been added to node raclinux3 successfully, run addNode.sh as the oracle user from $ORACLE_HOME/oui/bin on any existing node, for example raclinux2.
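For the RDBMS home only the CLUSTER_NEW_NODES parameter is strictly required, since the node VIP was already configured during the GI node addition; the run below passes CLUSTER_NEW_VIRTUAL_HOSTNAMES as well, which does no harm.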

[oracle@raclinux2 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={raclinux3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={raclinux3-vip}"

Performing pre-checks for node addition

Checking node reachability…

Node reachability check passed from node “raclinux2”

Checking user equivalence…

User equivalence check passed for user “oracle”

WARNING:

Node “raclinux3” already appears to be part of cluster

Pre-check for node addition was successful.

Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 9543 MB Passed

Oracle Universal Installer, Version 11.2.0.3.0 Production

Copyright (C) 1999, 2011, Oracle. All rights reserved.

Performing tests to see whether nodes raclinux1,raclinux3 are available

……………………………………………………… 100% Done.

..

—————————————————————————–

Cluster Node Addition Summary

Global Settings

 Source: /u01/app/oracle/product/11.2.0/db_3

New Nodes

Space Requirements

New Nodes

 raclinux3

 /u01: Required 4.53GB : Available 265.96GB

Installed Products

Product Names

 Oracle Database 11g 11.2.0.3.0

 Sun JDK 1.5.0.30.03

Installer SDK Component 11.2.0.3.0

Oracle One-Off Patch Installer 11.2.0.1.7

Oracle Universal Installer 11.2.0.3.0

 Oracle USM Deconfiguration 11.2.0.3.0

 Oracle Configuration Manager Deconfiguration 10.3.1.0.0

 Oracle DBCA Deconfiguration 11.2.0.3.0

 Oracle RAC Deconfiguration 11.2.0.3.0

 Oracle Database Deconfiguration 11.2.0.3.0

Oracle Configuration Manager Client 10.3.2.1.0

Oracle Configuration Manager 10.3.5.0.1

 Oracle ODBC Driverfor Instant Client 11.2.0.3.0

LDAP Required Support Files 11.2.0.3.0

 SSL Required Support Files for InstantClient 11.2.0.3.0

Bali Share 1.1.18.0.0

Oracle Extended Windowing Toolkit 3.4.47.0.0

 Oracle JFC Extended Windowing Toolkit 4.2.36.0.0

Oracle Real Application Testing 11.2.0.3.0

Oracle Database Vault J2EE Application 11.2.0.3.0

Oracle Label Security 11.2.0.3.0

 Oracle Data Mining RDBMS Files 11.2.0.3.0

 Oracle OLAP RDBMS Files 11.2.0.3.0

 Oracle OLAP API 11.2.0.3.0

Platform Required Support Files 11.2.0.3.0

Oracle Database Vault option 11.2.0.3.0

Oracle RAC Required Support Files-HAS 11.2.0.3.0

SQL*Plus Required Support Files 11.2.0.3.0

Oracle Display Fonts 9.0.2.0.0

Oracle Ice Browser 5.2.3.6.0

 Oracle JDBC Server Support Package 11.2.0.3.0

Oracle SQL Developer 11.2.0.3.0

Oracle Application Express 11.2.0.3.0

 XDK Required Support Files 11.2.0.3.0

 RDBMS Required Support Files for Instant Client 11.2.0.3.0

 SQLJ Runtime 11.2.0.3.0

Database Workspace Manager 11.2.0.3.0

 RDBMS Required Support Files Runtime 11.2.0.3.0

Oracle Globalization Support 11.2.0.3.0

 Exadata Storage Server 11.2.0.1.0

Provisioning Advisor Framework 10.2.0.4.3

Enterprise Manager Database Plugin — Repository Support 11.2.0.3.0

Enterprise Manager Repository Core Files 10.2.0.4.4

Enterprise Manager Database Plugin — Agent Support 11.2.0.3.0

Enterprise Manager Grid Control Core Files 10.2.0.4.4

Enterprise Manager Common Core Files 10.2.0.4.4

Enterprise Manager Agent Core Files 10.2.0.4.4

 RDBMS Required Support Files 11.2.0.3.0

regexp 2.1.9.0.0

Agent Required Support Files 10.2.0.4.3

 Oracle 11g Warehouse Builder Required Files 11.2.0.3.0

Oracle Notification Service (eONS) 11.2.0.3.0

Oracle Text Required Support Files 11.2.0.3.0

Parser Generator Required Support Files 11.2.0.3.0

 Oracle Database 11g Multimedia Files 11.2.0.3.0

Oracle Multimedia Java Advanced Imaging 11.2.0.3.0

Oracle Multimedia Annotator 11.2.0.3.0

 Oracle JDBC/OCI Instant Client 11.2.0.3.0

 Oracle Multimedia Locator RDBMS Files 11.2.0.3.0

 Precompiler Required Support Files 11.2.0.3.0

Oracle Core Required Support Files 11.2.0.3.0

Sample Schema Data 11.2.0.3.0

Oracle Starter Database 11.2.0.3.0

Oracle Message Gateway Common Files 11.2.0.3.0

Oracle XML Query 11.2.0.3.0

XML Parser for Oracle JVM 11.2.0.3.0

Oracle Help For Java 4.2.9.0.0

Installation Plugin Files 11.2.0.3.0

Enterprise Manager Common Files 10.2.0.4.3

Expat libraries 2.0.1.0.1

 Deinstallation Tool 11.2.0.3.0

Oracle Quality of Service Management (Client) 11.2.0.3.0

Perl Modules 5.10.0.0.1

 JAccelerator (COMPANION) 11.2.0.3.0

Oracle Containers for Java 11.2.0.3.0

Perl Interpreter 5.10.0.0.2

Oracle Net Required Support Files 11.2.0.3.0

Secure Socket Layer 11.2.0.3.0

Oracle Universal Connection Pool 11.2.0.3.0

 Oracle JDBC/THIN Interfaces 11.2.0.3.0

Oracle Multimedia Client Option 11.2.0.3.0

Oracle Java Client 11.2.0.3.0

Character Set Migration Utility 11.2.0.3.0

Oracle Code Editor 1.2.1.0.0I

PL/SQL Embedded Gateway 11.2.0.3.0

 OLAP SQL Scripts 11.2.0.3.0

Database SQL Scripts 11.2.0.3.0

Oracle Locale Builder 11.2.0.3.0

Oracle Globalization Support 11.2.0.3.0

SQL*Plus Files for Instant Client 11.2.0.3.0

Required Support Files 11.2.0.3.0

Oracle Database User Interface 2.2.13.0.0

Oracle ODBC Driver 11.2.0.3.0

Oracle Notification Service 11.2.0.3.0

XML Parser for Java 11.2.0.3.0

Oracle Security Developer Tools 11.2.0.3.0

Oracle Wallet Manager 11.2.0.3.0

Cluster Verification Utility Common Files 11.2.0.3.0

 Oracle Clusterware RDBMS Files 11.2.0.3.0

 Oracle UIX 2.2.24.6.0

Enterprise Manager plugin Common Files 11.2.0.3.0

HAS Common Files 11.2.0.3.0

 Precompiler Common Files 11.2.0.3.0

Installation Common Files 11.2.0.3.0

Oracle Help for the Web 2.0.14.0.0

Oracle LDAP administration 11.2.0.3.0

 Buildtools Common Files 11.2.0.3.0

Assistant Common Files 11.2.0.3.0

Oracle Recovery Manager 11.2.0.3.0

PL/SQL 11.2.0.3.0

Generic Connectivity Common Files 11.2.0.3.0

Oracle Database Gateway for ODBC 11.2.0.3.0

Oracle Programmer 11.2.0.3.0

Oracle Database Utilities 11.2.0.3.0

Enterprise Manager Agent 10.2.0.4.3

SQL*Plus 11.2.0.3.0

 Oracle Netca Client 11.2.0.3.0

Oracle Multimedia Locator 11.2.0.3.0

 Oracle Call Interface (OCI) 11.2.0.3.0

Oracle Multimedia 11.2.0.3.0

Oracle Net 11.2.0.3.0

Oracle XML Development Kit 11.2.0.3.0

Database Configuration and Upgrade Assistants 11.2.0.3.0

Oracle JVM 11.2.0.3.0

Oracle Advanced Security 11.2.0.3.0

Oracle Internet Directory Client 11.2.0.3.0

Oracle Enterprise Manager Console DB 11.2.0.3.0

HAS Files for DB 11.2.0.3.0

Oracle Net Listener 11.2.0.3.0

Oracle Text 11.2.0.3.0

Oracle Net Services 11.2.0.3.0

 Oracle Database 11g 11.2.0.3.0

 Oracle OLAP 11.2.0.3.0

Oracle Spatial 11.2.0.3.0

Oracle Partitioning 11.2.0.3.0

Enterprise Edition Options 11.2.0.3.0

—————————————————————————–

Instantiating scripts for add node (Tuesday, November 15, 2011 12:36:09 AM EET)

. 1% Done.

Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, November 15, 2011 12:36:23 AM EET)

………………………………………………………………………………….. 96% Done.

Home copied to new nodes

Saving inventory on nodes (Tuesday, November 15, 2011 12:58:10 AM EET)

. 100% Done.

Save inventory complete

WARNING:

The following configuration scripts need to be executed as the “root” user in each new cluster node. Each script in the list below is followed by a list of nodes.

/u01/app/oracle/product/11.2.0/db_3/root.sh #On nodes raclinux3

To execute the configuration scripts:

1. Open a terminal window

2. Log in as “root”

3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_3 was successful.

Please check ‘/tmp/silentInstall.log’ for more details.

[oracle@raclinux2 bin]$

Run root.sh as root on the new node raclinux3.

[root@raclinux3 bin]# /u01/app/oracle/product/11.2.0/db_3/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= oracle

 ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_3

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of “dbhome” have not changed. No need to overwrite.

The contents of “oraenv” have not changed. No need to overwrite.

The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@raclinux3 bin]#
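Optionally, confirm that the new RDBMS home is now registered in the central inventory on raclinux3; a quick check (a sketch, assuming the inventory location reported above):

grep db_3 /u01/app/oraInventory/ContentsXML/inventory.xml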

1.3    Add an instance on raclinux3 using dbca

    Log in as the oracle user on a cluster node where there is a running instance, for example raclinux2, and run dbca (a silent-mode alternative is sketched after these steps).

    Select Oracle RAC database and press Next.


    Select Instance management and press Next.


    Select Add Instance.


    Select a database and enter username and credentials.


    Press Next.


    Enter Instance name, select node and press Next.


    Press Next.


    Review and press OK.


    Wait until dbca completes.
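    If you prefer to script this step, dbca can also add the instance in silent mode. A minimal sketch, assuming the database name RACDB and instance name RACDB3 used in this article (the SYS password is a placeholder):

    dbca -silent -addInstance -nodeList raclinux3 -gdbName RACDB -instanceName RACDB3 -sysDBAUserName sys -sysDBAPassword <sys_password>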


1.4    Verification

    Verify that the instance has been added successfully.

    SQL> select * from v$active_instances;

    INST_NUMBER INST_NAME

    ———– ————————————————————

     1 raclinux1.gj.com:RACDB1

     2 raclinux2.gj.com:RACDB2

     3 raclinux3.gj.com:RACDB3

    SQL>

    [grid@raclinux3 grid]$ olsnodes -n -t

    raclinux1 1 Pinned

    raclinux2 2 Pinned

    raclinux3 3 Unpinned

    [grid@raclinux3 grid]$

    Run the following command as grid user.

    cluvfy stage -post nodeadd -n raclinux3 -verbose

    The output of the command is in Annex 1.
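    You can also confirm with srvctl that the database now runs three instances; a quick sketch, assuming the database name RACDB:

    srvctl status database -d RACDB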

  2. Remove a node

In order to remove node raclinux3 from the cluster, the following activities should take place in order:

  1. Remove instance RACDB3 on raclinux3
  2. Remove RDBMS home from raclinux3
  3. Remove GI home from raclinux3

2.1    Remove instance RACDB3 on raclinux3

Log in as the oracle user on a node that is to remain, for example raclinux2, and invoke dbca (a silent-mode alternative is sketched after these steps).

Select Oracle RAC database and press Next.


Select Instance Management.


Select Delete an Instance.


Select a database and enter username and credentials.


Select the instance


Press OK.


Press OK.


Wait until instance removal takes place.
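As with adding an instance, this step can be scripted; a silent-mode dbca sketch under the same assumptions (database RACDB, instance RACDB3, placeholder SYS password):

dbca -silent -deleteInstance -nodeList raclinux3 -gdbName RACDB -instanceName RACDB3 -sysDBAUserName sys -sysDBAPassword <sys_password>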

Verify from a node that is to remain that the instance has been removed.

SQL> select * from v$active_instances;

INST_NUMBER INST_NAME

———– ————————————————————

 1 raclinux1.gj.com:RACDB1

 2 raclinux2.gj.com:RACDB2

SQL>
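You can also confirm with srvctl that RACDB3 is no longer part of the database configuration; a quick sketch:

srvctl config database -d RACDB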

2.2    Remove RDBMS home from raclinux3

    On the node to be removed, log in as the oracle user and run the following command from $ORACLE_HOME/oui/bin.

    [oracle@raclinux3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_3 "CLUSTER_NODES={raclinux3}" -local

    Starting Oracle Universal Installer…

    Checking swap space: must be greater than 500 MB. Actual 9832 MB Passed

    The inventory pointer is located at /etc/oraInst.loc

    The inventory is located at /u01/app/oraInventory

    ‘UpdateNodeList’ was successful.

    [oracle@raclinux3 bin]$

Run deinstall with the -local option from $ORACLE_HOME/deinstall on the node to be removed (raclinux3) in order to remove the RDBMS binaries.

[oracle@raclinux3 deinstall]$ ./deinstall -local

Checking for required files and bootstrapping …

Please wait …

Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_3

Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database

Oracle Base selected for deinstall is: /u01/app/oracle

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0.3/grid

The following nodes are part of this cluster: raclinux3

Checking for sufficient temp space availability on node(s) : ‘raclinux3’

## [END] Install check configuration ##

Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2011-11-15_02-05-17-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2011-11-15_02-05-24-AM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2011-11-15_02-05-28-AM.log

Enterprise Manager Configuration Assistant END

Oracle Configuration Manager check START

OCM check log file location : /u01/app/oraInventory/logs//ocm_check8370.log

Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /u01/app/11.2.0.3/grid

The cluster node(s) on which the Oracle home deinstallation will be performed are:raclinux3

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, ‘raclinux3’, and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_3

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)

No Enterprise Manager ASM targets to update

No Enterprise Manager listener targets to migrate

Checking the config status for CCR

Oracle Home exists with CCR directory, but CCR is not configured

CCR check is finished

Do you want to continue (y – yes, n – no)? [n]: y

A log of this session will be written to: ‘/u01/app/oraInventory/logs/deinstall_deconfig2011-11-15_02-04-55-AM.out’

Any error messages from this session will be written to: ‘/u01/app/oraInventory/logs/deinstall_deconfig2011-11-15_02-04-55-AM.err’

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2011-11-15_02-05-28-AM.log

Updating Enterprise Manager ASM targets (if any)

Updating Enterprise Manager listener targets (if any)

Enterprise Manager Configuration Assistant END

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2011-11-15_02-06-10-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2011-11-15_02-06-10-AM.log

De-configuring Local Net Service Names configuration file…

Local Net Service Names configuration file de-configured successfully.

De-configuring backup files…

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START

OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean8370.log

Oracle Configuration Manager clean END

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home ‘/u01/app/oracle/product/11.2.0/db_3’ from the central inventory on the local node : Done

Delete directory ‘/u01/app/oracle/product/11.2.0/db_3’ on the local node : Done

The Oracle Base directory ‘/u01/app/oracle’ will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory ‘/tmp/deinstall2011-11-15_02-04-50AM’ on node ‘raclinux3’

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################

Cleaning the config for CCR

As CCR is not configured, so skipping the cleaning of CCR configuration

CCR clean is finished

Successfully detached Oracle home ‘/u01/app/oracle/product/11.2.0/db_3’ from the central inventory on the local node.

Successfully deleted directory ‘/u01/app/oracle/product/11.2.0/db_3’ on the local node.

Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[oracle@raclinux3 deinstall]$

From any node that will remain, for example raclinux2, run the following command from $ORACLE_HOME/oui/bin while logged in as oracle user.

[oracle@raclinux2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_3 "CLUSTER_NODES={raclinux1, raclinux2}"

Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 9579 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

‘UpdateNodeList’ was successful.

[oracle@raclinux2 bin]$

2.3    Remove GI home from raclinux3

2.3.1    While logged in as root, unpin the node to be removed; the command can be run from any node. (The unpin succeeds even if the node is already unpinned, as the earlier olsnodes output shows for raclinux3.)

[root@raclinux3 bin]# ./crsctl unpin css -n raclinux3

CRS-4667: Node raclinux3 successfully unpinned.

[root@raclinux3 bin]#
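To confirm the pin state you can rerun olsnodes, as shown earlier in the article:

olsnodes -n -t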

2.3.2    As root, from the node to be removed (raclinux3), go to $GI_HOME/crs/install and execute the following command: ./rootcrs.pl -deconfig -deinstall -force.

[root@raclinux3 install]# ./rootcrs.pl -deconfig -deinstall -force

Using configuration parameter file: ./crsconfig_params

Network exists: 1/192.168.20.0/255.255.255.0/eth0, type static

VIP exists: /raclinux1-vip/192.168.20.51/192.168.20.0/255.255.255.0/eth0, hosting node raclinux1

VIP exists: /raclinux2-vip/192.168.20.52/192.168.20.0/255.255.255.0/eth0, hosting node raclinux2

VIP exists: /raclinux3-vip/192.168.20.53/192.168.20.0/255.255.255.0/eth0, hosting node raclinux3

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

CRS-2673: Attempting to stop ‘ora.registry.acfs’ on ‘raclinux3’

CRS-2677: Stop of ‘ora.registry.acfs’ on ‘raclinux3’ succeeded

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘raclinux3’

CRS-2673: Attempting to stop ‘ora.crsd’ on ‘raclinux3’

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘raclinux3’

CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘raclinux3’

CRS-2673: Attempting to stop ‘ora.DATADG.dg’ on ‘raclinux3’

CRS-2677: Stop of ‘ora.DATADG.dg’ on ‘raclinux3’ succeeded

CRS-2677: Stop of ‘ora.DATA.dg’ on ‘raclinux3’ succeeded

CRS-2673: Attempting to stop ‘ora.asm’ on ‘raclinux3’

CRS-2677: Stop of ‘ora.asm’ on ‘raclinux3’ succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘raclinux3’ has completed

CRS-2677: Stop of ‘ora.crsd’ on ‘raclinux3’ succeeded

CRS-2673: Attempting to stop ‘ora.crf’ on ‘raclinux3’

CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘raclinux3’

CRS-2673: Attempting to stop ‘ora.evmd’ on ‘raclinux3’

CRS-2673: Attempting to stop ‘ora.asm’ on ‘raclinux3’

CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘raclinux3’

CRS-2677: Stop of ‘ora.evmd’ on ‘raclinux3’ succeeded

CRS-2677: Stop of ‘ora.mdnsd’ on ‘raclinux3’ succeeded

CRS-2677: Stop of ‘ora.crf’ on ‘raclinux3’ succeeded

CRS-2677: Stop of ‘ora.ctssd’ on ‘raclinux3’ succeeded

CRS-2677: Stop of ‘ora.asm’ on ‘raclinux3’ succeeded

CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘raclinux3’

CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘raclinux3’ succeeded

CRS-2673: Attempting to stop ‘ora.cssd’ on ‘raclinux3’

CRS-2677: Stop of ‘ora.cssd’ on ‘raclinux3’ succeeded

CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘raclinux3’

CRS-2677: Stop of ‘ora.gipcd’ on ‘raclinux3’ succeeded

CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘raclinux3’

CRS-2677: Stop of ‘ora.gpnpd’ on ‘raclinux3’ succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘raclinux3’ has completed

CRS-4133: Oracle High Availability Services has been stopped.

Successfully deconfigured Oracle clusterware stack on this node

[root@raclinux3 install]#

2.3.3    As root, execute the following command from a node that is to remain.

[root@raclinux2 bin]# ./crsctl delete node -n raclinux3

CRS-4661: Node raclinux3 successfully deleted.

[root@raclinux2 bin]#

2.3.4    From the node to be deleted, log in as the grid user and execute the following command from $GI_HOME/oui/bin.

[grid@raclinux3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0.3/grid "CLUSTER_NODES={raclinux3}" CRS=TRUE -local

Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 10001 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

‘UpdateNodeList’ was successful.

[grid@raclinux3 bin]$

2.3.5    From the node to be deleted, log in as the grid user, run the following deinstall command with the -local option from $GI_HOME/deinstall, and follow the prompts.

./deinstall -local

[grid@raclinux3 deinstall]$ ./deinstall -local

Checking for required files and bootstrapping …

Please wait …

Location of logs /tmp/deinstall2011-11-15_02-38-50AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0.3/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /u01/app/grid

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home

The following nodes are part of this cluster: raclinux3

Checking for sufficient temp space availability on node(s) : ‘raclinux3’

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2011-11-15_02-38-50AM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "raclinux3"[raclinux3-vip]

>

192.168.20.53

The following information can be collected by running “/sbin/ifconfig -a” on node “raclinux3”

Enter the IP netmask of Virtual IP "192.168.20.53" on node "raclinux3"[255.255.255.0]

>

Enter the network interface name on which the virtual IP address “192.168.20.53” is active

>

Enter an address or the name of the virtual IP[]

>

192.168.20.53

The following information can be collected by running “/sbin/ifconfig -a” on node “raclinux3”

Enter the IP netmask of the virtual IP “192.168.20.53”[]

>

255.255.255.0

Enter the network interface name on which the virtual IP address “192.168.20.53” is active

>

eth0

Enter an address or the name of the virtual IP[]

>

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2011-11-15_02-38-50AM/logs/netdc_check2011-11-15_02-40-55-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2011-11-15_02-38-50AM/logs/asmcadc_check2011-11-15_02-41-00-AM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is:

The cluster node(s) on which the Oracle home deinstallation will be performed are:raclinux3

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, ‘raclinux3’, and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/11.2.0.3/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER

Option -local will not modify any ASM configuration.

Do you want to continue (y – yes, n – no)? [n]: y

A log of this session will be written to: ‘/tmp/deinstall2011-11-15_02-38-50AM/logs/deinstall_deconfig2011-11-15_02-39-29-AM.out’

Any error messages from this session will be written to: ‘/tmp/deinstall2011-11-15_02-38-50AM/logs/deinstall_deconfig2011-11-15_02-39-29-AM.err’

######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2011-11-15_02-38-50AM/logs/asmcadc_clean2011-11-15_02-41-03-AM.log

ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2011-11-15_02-38-50AM/logs/netdc_clean2011-11-15_02-41-03-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER

 Stopping listener on node “raclinux3”: LISTENER

 Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

De-configuring Naming Methods configuration file…

Naming Methods configuration file de-configured successfully.

De-configuring backup files…

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

—————————————->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node “raclinux3”.

/tmp/deinstall2011-11-15_02-38-50AM/perl/bin/perl -I/tmp/deinstall2011-11-15_02-38-50AM/perl/lib -I/tmp/deinstall2011-11-15_02-38-50AM/crs/install /tmp/deinstall2011-11-15_02-38-50AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-11-15_02-38-50AM/response/deinstall_Ora11g_gridinfrahome2.rsp"

Press Enter after you finish running the above commands

<—————————————-

Remove the directory: /tmp/deinstall2011-11-15_02-38-50AM on node:

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home ‘/u01/app/11.2.0.3/grid’ from the central inventory on the local node : Done

Delete directory ‘/u01/app/11.2.0.3/grid’ on the local node : Done

Delete directory ‘/u01/app/oraInventory’ on the local node : Done

Delete directory ‘/u01/app/grid’ on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory ‘/tmp/deinstall2011-11-15_02-38-50AM’ on node ‘raclinux3’

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################

Following RAC listener(s) were de-configured successfully: LISTENER

Oracle Clusterware is stopped and successfully de-configured on node “raclinux3”

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home ‘/u01/app/11.2.0.3/grid’ from the central inventory on the local node.

Successfully deleted directory ‘/u01/app/11.2.0.3/grid’ on the local node.

Successfully deleted directory ‘/u01/app/oraInventory’ on the local node.

Successfully deleted directory ‘/u01/app/grid’ on the local node.

Oracle Universal Installer cleanup was successful.

Run ‘rm -rf /etc/oraInst.loc’ as root on node(s) ‘raclinux3’ at the end of the session.

Run ‘rm -rf /opt/ORCLfmap’ as root on node(s) ‘raclinux3’ at the end of the session.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[grid@raclinux3 deinstall]$

[root@raclinux3 grid]# /tmp/deinstall2011-11-15_02-38-50AM/perl/bin/perl -I/tmp/deinstall2011-11-15_02-38-50AM/perl/lib -I/tmp/deinstall2011-11-15_02-38-50AM/crs/install /tmp/deinstall2011-11-15_02-38-50AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-11-15_02-38-50AM/response/deinstall_Ora11g_gridinfrahome2.rsp"

Using configuration parameter file: /tmp/deinstall2011-11-15_02-38-50AM/response/deinstall_Ora11g_gridinfrahome2.rsp

****Unable to retrieve Oracle Clusterware home.

Start Oracle Clusterware stack and try again.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Modify failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Delete failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

################################################################

# You must kill processes or reboot the system to properly #

# cleanup the processes started by Oracle clusterware #

################################################################

ACFS-9313: No ADVM/ACFS installation detected.

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall

error: package cvuqdisk is not installed

Successfully deconfigured Oracle clusterware stack on this node

[root@raclinux3 grid]#

2.3.6    From any remaining node, for example raclinux2, while logged in as the grid user, execute the following command from $GI_HOME/oui/bin.

[grid@raclinux2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0.3/grid "CLUSTER_NODES={raclinux1, raclinux2}" CRS=TRUE

Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 9581 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

‘UpdateNodeList’ was successful.

[grid@raclinux2 bin]$

2.3.7    Verify that the node has been removed.

[grid@raclinux2 bin]$ olsnodes -n -t

raclinux1 1 Pinned

raclinux2 2 Pinned

[grid@raclinux2 bin]$

cluvfy stage -post nodedel -n raclinux3

[grid@raclinux2 bin]$ cluvfy stage -post nodedel -n raclinux3

Performing post-checks for node removal

Checking CRS integrity…

Clusterware version consistency passed

CRS integrity check passed

Node removal check passed

Post-check for node removal was successful.

[grid@raclinux2 bin]$

Annex 1

[grid@raclinux2 ~]$ cluvfy stage -post hwos -n raclinux3 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability…

Check: Node reachability from node “raclinux2”

Destination Node Reachable?

———————————— ————————

 raclinux3 yes

Result: Node reachability check passed from node “raclinux2”

Checking user equivalence…

Check: User equivalence for user “grid”

Node Name Status

———————————— ————————

 raclinux3 passed

Result: User equivalence check passed for user “grid”

Checking node connectivity…

Checking hosts config file…

Node Name Status

———————————— ————————

 raclinux3 passed

Verification of the hosts config file successful

Interface information for node “raclinux3”

 Name IP Address Subnet Gateway Def. Gateway HW Address MTU

—— ————— ————— ————— ————— —————– ——

 eth0 192.168.20.23 192.168.20.0 0.0.0.0 UNKNOWN 08:00:27:86:CA:20 1500

 eth1 10.10.20.23 10.10.20.0 0.0.0.0 UNKNOWN 08:00:27:A4:7B:A4 1500

 eth2 192.168.156.103 192.168.156.0 0.0.0.0 UNKNOWN 08:00:27:8C:A0:9B 1500

 eth3 192.168.2.24 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:E5:70:A8 1500

Check: Node connectivity of subnet “192.168.20.0”

Result: Node connectivity passed for subnet “192.168.20.0” with node(s) raclinux3

Check: TCP connectivity of subnet “192.168.20.0”

Source Destination Connected?

—————————— —————————— —————-

 raclinux2:192.168.20.22 raclinux3:192.168.20.23 passed

Result: TCP connectivity check passed for subnet “192.168.20.0”

Check: Node connectivity of subnet “10.10.20.0”

Result: Node connectivity passed for subnet “10.10.20.0” with node(s) raclinux3

Check: TCP connectivity of subnet “10.10.20.0”

Source Destination Connected?

—————————— —————————— —————-

 raclinux2:192.168.20.22 raclinux3:10.10.20.23 passed

Result: TCP connectivity check passed for subnet “10.10.20.0”

Check: Node connectivity of subnet “192.168.156.0”

Result: Node connectivity passed for subnet “192.168.156.0” with node(s) raclinux3

Check: TCP connectivity of subnet “192.168.156.0”

Source Destination Connected?

—————————— —————————— —————-

 raclinux2:192.168.20.22 raclinux3:192.168.156.103 passed

Result: TCP connectivity check passed for subnet “192.168.156.0”

Check: Node connectivity of subnet “192.168.2.0”

Result: Node connectivity passed for subnet “192.168.2.0” with node(s) raclinux3

Check: TCP connectivity of subnet “192.168.2.0”

Source Destination Connected?

—————————— —————————— —————-

 raclinux2:192.168.20.22 raclinux3:192.168.2.24 passed

Result: TCP connectivity check passed for subnet “192.168.2.0”

Interfaces found on subnet “192.168.20.0” that are likely candidates for a private interconnect are:

raclinux3 eth0:192.168.20.23

Interfaces found on subnet “10.10.20.0” that are likely candidates for a private interconnect are:

raclinux3 eth1:10.10.20.23

Interfaces found on subnet “192.168.156.0” that are likely candidates for a private interconnect are:

raclinux3 eth2:192.168.156.103

Interfaces found on subnet “192.168.2.0” that are likely candidates for a private interconnect are:

raclinux3 eth3:192.168.2.24

WARNING:

Could not find a suitable set of interfaces for VIPs

Result: Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “192.168.156.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.156.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “192.168.2.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.2.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

Check: Time zone consistency

Result: Time zone consistency check passed

Checking shared storage accessibility…

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sda raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdb raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdc raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdd raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sde raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdf raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdg raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdh raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdi raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdj raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdk raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdl raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdm raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdn raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdo raclinux3

Disk Sharing Nodes (1 in count)

———————————— ————————

 /dev/sdp raclinux3

Shared storage check was successful on nodes “raclinux3”

Post-check for hardware and operating system setup was successful.

[grid@raclinux2 ~]$

[grid@raclinux2 ~]$ cluvfy comp peer -refnode raclinux1 -n raclinux3 -orainv oinstall -osdba asmdba -verbose

Verifying peer compatibility

Checking peer compatibility…

Compatibility check: Physical memory [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 2.9476GB (3090796.0KB) 2.9476GB (3090796.0KB) matched

Physical memory check passed

Compatibility check: Available memory [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 2.7513GB (2884896.0KB) 1.292GB (1354780.0KB) mismatched

Available memory check failed

Compatibility check: Swap space [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 9.767GB (1.0241428E7KB) 9.767GB (1.0241428E7KB) matched

Swap space check passed

Compatibility check: Free disk space for “/u01/app/11.2.0.3/grid” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 288.9707GB (3.03007744E8KB) 247.0264GB (2.5902592E8KB) mismatched

Free disk space check failed

Compatibility check: Free disk space for “/tmp” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 9.2529GB (9702400.0KB) 9.2402GB (9689088.0KB) mismatched

Free disk space check failed

Compatibility check: User existence for “grid” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 grid(1100) grid(1100) matched

User existence for “grid” check passed

Compatibility check: Group existence for “oinstall” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 oinstall(501) oinstall(501) matched

Group existence for “oinstall” check passed

Compatibility check: Group existence for “asmdba” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 asmdba(1021) asmdba(1021) matched

Group existence for “asmdba” check passed

Compatibility check: Group membership for “grid” in “oinstall (Primary)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 yes yes matched

Group membership for “grid” in “oinstall (Primary)” check passed

Compatibility check: Group membership for “grid” in “asmdba” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 yes yes matched

Group membership for “grid” in “asmdba” check passed

Compatibility check: Run level [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 5 5 matched

Run level check passed

Compatibility check: System architecture [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 x86_64 x86_64 matched

System architecture check passed

Compatibility check: Kernel version [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 2.6.18-164.el5 2.6.18-164.el5 matched

Kernel version check passed

Compatibility check: Kernel param “semmsl” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 250 250 matched

Kernel param “semmsl” check passed

Compatibility check: Kernel param “semmns” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 32000 32000 matched

Kernel param “semmns” check passed

Compatibility check: Kernel param “semopm” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 100 100 matched

Kernel param “semopm” check passed

Compatibility check: Kernel param “semmni” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 128 128 matched

Kernel param “semmni” check passed

Compatibility check: Kernel param “shmmax” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 2074529792 2074529792 matched

Kernel param “shmmax” check passed

Compatibility check: Kernel param “shmmni” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 4096 4096 matched

Kernel param “shmmni” check passed

Compatibility check: Kernel param “shmall” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 2097152 2097152 matched

Kernel param “shmall” check passed

Compatibility check: Kernel param “file-max” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 6815744 6815744 matched

Kernel param “file-max” check passed

Compatibility check: Kernel param “ip_local_port_range” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 between 9000.0 & 65500.0 between 9000.0 & 65500.0 matched

Kernel param “ip_local_port_range” check passed

Compatibility check: Kernel param “rmem_default” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 1048586 1048586 matched

Kernel param “rmem_default” check passed

Compatibility check: Kernel param “rmem_max” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 4194304 4194304 matched

Kernel param “rmem_max” check passed

Compatibility check: Kernel param “wmem_default” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 1048586 1048586 matched

Kernel param “wmem_default” check passed

Compatibility check: Kernel param “wmem_max” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 1048586 1048586 matched

Kernel param “wmem_max” check passed

Compatibility check: Kernel param “aio-max-nr” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 1048576 1048576 matched

Kernel param “aio-max-nr” check passed

Compatibility check: Package existence for “make” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 make-3.81-3.el5 make-3.81-3.el5 matched

Package existence for “make” check passed

Compatibility check: Package existence for “binutils” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 binutils-2.17.50.0.6-12.el5 binutils-2.17.50.0.6-12.el5 matched

Package existence for “binutils” check passed

Compatibility check: Package existence for “gcc (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 gcc-4.1.2-46.el5 (x86_64) gcc-4.1.2-46.el5 (x86_64) matched

Package existence for “gcc (x86_64)” check passed

Compatibility check: Package existence for “libaio (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 libaio-0.3.106-3.2 (x86_64),libaio-0.3.106-3.2 (i386) libaio-0.3.106-3.2 (x86_64),libaio-0.3.106-3.2 (i386) matched

Package existence for “libaio (x86_64)” check passed

Compatibility check: Package existence for “glibc (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 glibc-2.5-42 (i686),glibc-2.5-42 (x86_64) glibc-2.5-42 (i686),glibc-2.5-42 (x86_64) matched

Package existence for “glibc (x86_64)” check passed

Compatibility check: Package existence for “compat-libstdc++-33 (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386) compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386) matched

Package existence for “compat-libstdc++-33 (x86_64)” check passed

Compatibility check: Package existence for “elfutils-libelf (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 elfutils-libelf-0.137-3.el5 (x86_64),elfutils-libelf-0.137-3.el5 (i386) elfutils-libelf-0.137-3.el5 (x86_64),elfutils-libelf-0.137-3.el5 (i386) matched

Package existence for “elfutils-libelf (x86_64)” check passed

Compatibility check: Package existence for “elfutils-libelf-devel” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.137-3.el5 matched

Package existence for “elfutils-libelf-devel” check passed

Compatibility check: Package existence for “glibc-common” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 glibc-common-2.5-42 glibc-common-2.5-42 matched

Package existence for “glibc-common” check passed

Compatibility check: Package existence for “glibc-devel (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 glibc-devel-2.5-42 (i386),glibc-devel-2.5-42 (x86_64) glibc-devel-2.5-42 (i386),glibc-devel-2.5-42 (x86_64) matched

Package existence for “glibc-devel (x86_64)” check passed

Compatibility check: Package existence for “glibc-headers” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 glibc-headers-2.5-42 glibc-headers-2.5-42 matched

Package existence for “glibc-headers” check passed

Compatibility check: Package existence for “gcc-c++ (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 gcc-c++-4.1.2-46.el5 (x86_64) gcc-c++-4.1.2-46.el5 (x86_64) matched

Package existence for “gcc-c++ (x86_64)” check passed

Compatibility check: Package existence for “libaio-devel (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 libaio-devel-0.3.106-3.2 (x86_64) libaio-devel-0.3.106-3.2 (x86_64) matched

Package existence for “libaio-devel (x86_64)” check passed

Compatibility check: Package existence for “libgcc (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 libgcc-4.1.2-46.el5 (i386),libgcc-4.1.2-46.el5 (x86_64) libgcc-4.1.2-46.el5 (i386),libgcc-4.1.2-46.el5 (x86_64) matched

Package existence for “libgcc (x86_64)” check passed

Compatibility check: Package existence for “libstdc++ (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 libstdc++-4.1.2-46.el5 (x86_64),libstdc++-4.1.2-46.el5 (i386) libstdc++-4.1.2-46.el5 (x86_64),libstdc++-4.1.2-46.el5 (i386) matched

Package existence for “libstdc++ (x86_64)” check passed

Compatibility check: Package existence for “libstdc++-devel (x86_64)” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 libstdc++-devel-4.1.2-46.el5 (x86_64) libstdc++-devel-4.1.2-46.el5 (x86_64) matched

Package existence for “libstdc++-devel (x86_64)” check passed

Compatibility check: Package existence for “sysstat” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 sysstat-7.0.2-3.el5 sysstat-7.0.2-3.el5 matched

Package existence for “sysstat” check passed

Compatibility check: Package existence for “ksh” [reference node: raclinux1]

Node Name Status Ref. node status Comment

———— ———————— ———————— ———-

 raclinux3 ksh-20080202-14.el5 ksh-20080202-14.el5 matched

Package existence for “ksh” check passed

Verification of peer compatibility was unsuccessful.

Checks did not pass for the following node(s):

 raclinux3

[grid@raclinux2 ~]$
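
Note that the mismatches above are comparisons against the reference node raclinux1, not checks against absolute minimums: cluvfy comp peer flags any difference in available memory or free disk space as “mismatched”, which is expected between a busy node and a freshly cloned one. A quick manual sanity check, sketched below for the paths used in this article and run on both raclinux1 and raclinux3, confirms that the new node still meets the installation requirements:

free -m                               # available memory on each node
df -h /u01/app/11.2.0.3/grid /tmp     # free space in the GI home and /tmp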

[grid@raclinux2 ~]$ cluvfy stage -pre nodeadd -n raclinux3

Performing pre-checks for node addition

Checking node reachability…

Node reachability check passed from node “raclinux2”

Checking user equivalence…

User equivalence check passed for user “grid”

Checking node connectivity…

Checking hosts config file…

Verification of the hosts config file successful

Check: Node connectivity for interface “eth0”

Node connectivity passed for interface “eth0”

TCP connectivity check passed for subnet “192.168.20.0”

Checking subnet mask consistency…

Subnet mask consistency check passed for subnet “192.168.20.0”.

Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Checking CRS integrity…

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources…

Checking CRS home location…

“/u01/app/11.2.0.3/grid” is shared

Shared resources check for node addition passed

Checking node connectivity…

Checking hosts config file…

Verification of the hosts config file successful

Check: Node connectivity for interface “eth0”

Node connectivity passed for interface “eth0”

TCP connectivity check passed for subnet “192.168.20.0”

Check: Node connectivity for interface “eth1”

Node connectivity passed for interface “eth1”

TCP connectivity check passed for subnet “10.10.20.0”

Checking subnet mask consistency…

Subnet mask consistency check passed for subnet “192.168.20.0”.

Subnet mask consistency check passed for subnet “10.10.20.0”.

Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Total memory check passed

Available memory check passed

Swap space check passed

Free disk space check passed for “raclinux3:/u01/app/11.2.0.3/grid”

Free disk space check passed for “raclinux2:/u01/app/11.2.0.3/grid”

Free disk space check passed for “raclinux3:/tmp”

Free disk space check passed for “raclinux2:/tmp”

Check for multiple users with UID value 1100 passed

User existence check passed for “grid”

Run level check passed

Hard limits check passed for “maximum open file descriptors”

Soft limits check passed for “maximum open file descriptors”

Hard limits check passed for “maximum user processes”

Soft limits check passed for “maximum user processes”

System architecture check passed

Kernel version check passed

Kernel parameter check passed for “semmsl”

Kernel parameter check passed for “semmns”

Kernel parameter check passed for “semopm”

Kernel parameter check passed for “semmni”

Kernel parameter check passed for “shmmax”

Kernel parameter check passed for “shmmni”

Kernel parameter check passed for “shmall”

Kernel parameter check passed for “file-max”

Kernel parameter check passed for “ip_local_port_range”

Kernel parameter check passed for “rmem_default”

Kernel parameter check passed for “rmem_max”

Kernel parameter check passed for “wmem_default”

Kernel parameter check passed for “wmem_max”

Kernel parameter check passed for “aio-max-nr”

Package existence check passed for “make”

Package existence check passed for “binutils”

Package existence check passed for “gcc(x86_64)”

Package existence check passed for “libaio(x86_64)”

Package existence check passed for “glibc(x86_64)”

Package existence check passed for “compat-libstdc++-33(x86_64)”

Package existence check passed for “elfutils-libelf(x86_64)”

Package existence check passed for “elfutils-libelf-devel”

Package existence check passed for “glibc-common”

Package existence check passed for “glibc-devel(x86_64)”

Package existence check passed for “glibc-headers”

Package existence check passed for “gcc-c++(x86_64)”

Package existence check passed for “libaio-devel(x86_64)”

Package existence check passed for “libgcc(x86_64)”

Package existence check passed for “libstdc++(x86_64)”

Package existence check passed for “libstdc++-devel(x86_64)”

Package existence check passed for “sysstat”

Package existence check passed for “ksh”

Check for multiple users with UID value 0 passed

Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user’s primary group passed

Checking OCR integrity…

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration…

Oracle Cluster Voting Disk configuration check passed

Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)…

NTP Configuration file check started…

NTP Configuration file check passed

Checking daemon liveness…

Liveness check passed for “ntpd”

Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon’s boot time configuration check for slewing option passed

NTP common Time Server Check started…

PRVF-5408 : NTP Time Server “78.47.24.68” is common only to the following nodes “raclinux2”

PRVF-5408 : NTP Time Server “192.43.244.18” is common only to the following nodes “raclinux2”

PRVF-5408 : NTP Time Server “129.69.1.153” is common only to the following nodes “raclinux2”

Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started…

Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

User “grid” is not part of “root” group. Check passed

Checking consistency of file “/etc/resolv.conf” across nodes

File “/etc/resolv.conf” does not have both domain and search entries defined

domain entry in file “/etc/resolv.conf” is consistent across nodes

search entry in file “/etc/resolv.conf” is consistent across nodes

PRVF-5636 : The DNS response time for an unreachable node exceeded “15000” ms on following nodes: raclinux3

File “/etc/resolv.conf” is not consistent across nodes

Pre-check for node addition was unsuccessful on all the nodes.

[grid@raclinux2 ~]$
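
The pre-check fails only because of PRVF-5636: the resolver on raclinux3 needed more than 15 seconds to answer for an unreachable name, which points to a slow or unresponsive name server in /etc/resolv.conf rather than to the cluster itself. One commonly used remedy, assuming your name servers tolerate it, is to make lookups fail fast by capping the resolver timeout and retry count. A sketch of the /etc/resolv.conf addition (keep the existing domain, search and nameserver lines; the values are illustrative):

options timeout:1 attempts:2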

[grid@raclinux2 ~]$ cluvfy stage -pre crsinst -n raclinux3

Performing pre-checks for cluster services setup

Checking node reachability…

Node reachability check passed from node “raclinux2”

Checking user equivalence…

User equivalence check passed for user “grid”

Checking node connectivity…

Checking hosts config file…

Verification of the hosts config file successful

Node connectivity passed for subnet “192.168.20.0” with node(s) raclinux3

TCP connectivity check passed for subnet “192.168.20.0”

Node connectivity passed for subnet “10.10.20.0” with node(s) raclinux3

TCP connectivity check passed for subnet “10.10.20.0”

Node connectivity passed for subnet “192.168.156.0” with node(s) raclinux3

TCP connectivity check passed for subnet “192.168.156.0”

Node connectivity passed for subnet “192.168.2.0” with node(s) raclinux3

TCP connectivity check passed for subnet “192.168.2.0”

Interfaces found on subnet “192.168.20.0” that are likely candidates for a private interconnect are:

raclinux3 eth0:192.168.20.23

Interfaces found on subnet “10.10.20.0” that are likely candidates for a private interconnect are:

raclinux3 eth1:10.10.20.23

Interfaces found on subnet “192.168.156.0” that are likely candidates for a private interconnect are:

raclinux3 eth2:192.168.156.103

Interfaces found on subnet “192.168.2.0” that are likely candidates for a private interconnect are:

raclinux3 eth3:192.168.2.24

WARNING:

Could not find a suitable set of interfaces for VIPs

Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “192.168.156.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.156.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “192.168.2.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.2.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Checking ASMLib configuration.

Check for ASMLib configuration passed.

Total memory check passed

Available memory check passed

Swap space check passed

Free disk space check passed for “raclinux3:/u01/app/11.2.0.3/grid”

Free disk space check passed for “raclinux3:/tmp”

Check for multiple users with UID value 1100 passed

User existence check passed for “grid”

Group existence check passed for “oinstall”

Group existence check passed for “dba”

Membership check for user “grid” in group “oinstall” [as Primary] passed

Membership check for user “grid” in group “dba” passed

Run level check passed

Hard limits check passed for “maximum open file descriptors”

Soft limits check passed for “maximum open file descriptors”

Hard limits check passed for “maximum user processes”

Soft limits check passed for “maximum user processes”

System architecture check passed

Kernel version check passed

Kernel parameter check passed for “semmsl”

Kernel parameter check passed for “semmns”

Kernel parameter check passed for “semopm”

Kernel parameter check passed for “semmni”

Kernel parameter check passed for “shmmax”

Kernel parameter check passed for “shmmni”

Kernel parameter check passed for “shmall”

Kernel parameter check passed for “file-max”

Kernel parameter check passed for “ip_local_port_range”

Kernel parameter check passed for “rmem_default”

Kernel parameter check passed for “rmem_max”

Kernel parameter check passed for “wmem_default”

Kernel parameter check passed for “wmem_max”

Kernel parameter check passed for “aio-max-nr”

Package existence check passed for “make”

Package existence check passed for “binutils”

Package existence check passed for “gcc(x86_64)”

Package existence check passed for “libaio(x86_64)”

Package existence check passed for “glibc(x86_64)”

Package existence check passed for “compat-libstdc++-33(x86_64)”

Package existence check passed for “elfutils-libelf(x86_64)”

Package existence check passed for “elfutils-libelf-devel”

Package existence check passed for “glibc-common”

Package existence check passed for “glibc-devel(x86_64)”

Package existence check passed for “glibc-headers”

Package existence check passed for “gcc-c++(x86_64)”

Package existence check passed for “libaio-devel(x86_64)”

Package existence check passed for “libgcc(x86_64)”

Package existence check passed for “libstdc++(x86_64)”

Package existence check passed for “libstdc++-devel(x86_64)”

Package existence check passed for “sysstat”

Package existence check passed for “ksh”

Check for multiple users with UID value 0 passed

Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user’s primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)…

NTP Configuration file check started…

NTP Configuration file check passed

Checking daemon liveness…

Liveness check passed for “ntpd”

Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon’s boot time configuration check for slewing option passed

NTP common Time Server Check started…

Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started…

Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User “grid” is not part of “root” group. Check passed

Default user file creation mask check passed

Checking consistency of file “/etc/resolv.conf” across nodes

File “/etc/resolv.conf” does not have both domain and search entries defined

domain entry in file “/etc/resolv.conf” is consistent across nodes

search entry in file “/etc/resolv.conf” is consistent across nodes

PRVF-5636 : The DNS response time for an unreachable node exceeded “15000” ms on following nodes: raclinux3

File “/etc/resolv.conf” is not consistent across nodes

Time zone consistency check passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

[grid@raclinux2 ~]$
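
Apart from the same /etc/resolv.conf inconsistency, this run also warns that it could not find a suitable set of interfaces for VIPs. With all four subnets on private address ranges, as in this test setup, the warning is usually harmless; what matters is that the interfaces on raclinux3 match the networks already registered in the cluster. A quick way to compare, run as the grid user on an existing node (the sample output is an assumption based on this article's setup):

oifcfg getif     # expected here: eth0 192.168.20.0 global public
                 #                eth1 10.10.20.0 global cluster_interconnect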

[grid@raclinux2 bin]$ cluvfy stage -post nodeadd -n raclinux3 -verbose

Performing post-checks for node addition

Checking node reachability…

Check: Node reachability from node “raclinux2”

Destination Node Reachable?

———————————— ————————

 raclinux3 yes

Result: Node reachability check passed from node “raclinux2”

Checking user equivalence…

Check: User equivalence for user “grid”

Node Name Status

———————————— ————————

 raclinux3 passed

Result: User equivalence check passed for user “grid”

Checking node connectivity…

Checking hosts config file…

Node Name Status

———————————— ————————

 raclinux3 passed

 raclinux2 passed

 raclinux1 passed

Verification of the hosts config file successful

Interface information for node “raclinux3”

 Name IP Address Subnet Gateway Def. Gateway HW Address MTU

—— ————— ————— ————— ————— —————– ——

 eth0 192.168.20.23 192.168.20.0 0.0.0.0 UNKNOWN 08:00:27:86:CA:20 1500

 eth0 192.168.20.53 192.168.20.0 0.0.0.0 UNKNOWN 08:00:27:86:CA:20 1500

 eth1 10.10.20.23 10.10.20.0 0.0.0.0 UNKNOWN 08:00:27:A4:7B:A4 1500

 eth1 169.254.237.177 169.254.0.0 0.0.0.0 UNKNOWN 08:00:27:A4:7B:A4 1500

 eth2 192.168.156.103 192.168.156.0 0.0.0.0 UNKNOWN 08:00:27:8C:A0:9B 1500

 eth3 192.168.2.24 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:E5:70:A8 1500

Interface information for node “raclinux2”

 Name IP Address Subnet Gateway Def. Gateway HW Address MTU

—— ————— ————— ————— ————— —————– ——

 eth0 192.168.20.22 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

 eth0 192.168.20.52 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

 eth1 10.10.20.22 10.10.20.0 0.0.0.0 10.0.5.2 08:00:27:41:52:72 1500

 eth1 169.254.206.240 169.254.0.0 0.0.0.0 10.0.5.2 08:00:27:41:52:72 1500

 eth2 192.168.156.102 192.168.156.0 0.0.0.0 10.0.5.2 08:00:27:13:BC:77 1500

 eth3 10.0.5.15 10.0.5.0 0.0.0.0 10.0.5.2 08:00:27:93:4A:17 1500

Interface information for node “raclinux1”

 Name IP Address Subnet Gateway Def. Gateway HW Address MTU

—— ————— ————— ————— ————— —————– ——

 eth0 192.168.20.21 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth0 192.168.20.111 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth0 192.168.20.100 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth0 192.168.20.112 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth0 192.168.20.51 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth1 10.10.20.21 10.10.20.0 0.0.0.0 10.0.5.2 08:00:27:FD:AA:42 1500

 eth1 169.254.89.140 169.254.0.0 0.0.0.0 10.0.5.2 08:00:27:FD:AA:42 1500

 eth2 192.168.156.101 192.168.156.0 0.0.0.0 10.0.5.2 08:00:27:B0:B4:C7 1500

 eth3 10.0.5.15 10.0.5.0 0.0.0.0 10.0.5.2 08:00:27:8D:38:97 1500

Check: Node connectivity for interface “eth0”

Source Destination Connected?

—————————— —————————— —————-

 raclinux3[192.168.20.23] raclinux3[192.168.20.53] yes

 raclinux3[192.168.20.23] raclinux2[192.168.20.22] yes

 raclinux3[192.168.20.23] raclinux2[192.168.20.52] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.21] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.111] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.100] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.112] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.51] yes

 raclinux3[192.168.20.53] raclinux2[192.168.20.22] yes

 raclinux3[192.168.20.53] raclinux2[192.168.20.52] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.21] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.111] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.100] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.112] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.51] yes

 raclinux2[192.168.20.22] raclinux2[192.168.20.52] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.21] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.111] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.100] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.112] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.51] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.21] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.111] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.100] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.112] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.51] yes

 raclinux1[192.168.20.21] raclinux1[192.168.20.111] yes

 raclinux1[192.168.20.21] raclinux1[192.168.20.100] yes

 raclinux1[192.168.20.21] raclinux1[192.168.20.112] yes

 raclinux1[192.168.20.21] raclinux1[192.168.20.51] yes

 raclinux1[192.168.20.111] raclinux1[192.168.20.100] yes

 raclinux1[192.168.20.111] raclinux1[192.168.20.112] yes

 raclinux1[192.168.20.111] raclinux1[192.168.20.51] yes

 raclinux1[192.168.20.100] raclinux1[192.168.20.112] yes

 raclinux1[192.168.20.100] raclinux1[192.168.20.51] yes

 raclinux1[192.168.20.112] raclinux1[192.168.20.51] yes

Result: Node connectivity passed for interface “eth0”

Check: TCP connectivity of subnet “192.168.20.0”

Source Destination Connected?

—————————— —————————— —————-

 raclinux2:192.168.20.22 raclinux3:192.168.20.23 passed

 raclinux2:192.168.20.22 raclinux3:192.168.20.53 passed

 raclinux2:192.168.20.22 raclinux2:192.168.20.52 passed

 raclinux2:192.168.20.22 raclinux1:192.168.20.21 passed

 raclinux2:192.168.20.22 raclinux1:192.168.20.111 passed

 raclinux2:192.168.20.22 raclinux1:192.168.20.100 passed

 raclinux2:192.168.20.22 raclinux1:192.168.20.112 passed

 raclinux2:192.168.20.22 raclinux1:192.168.20.51 passed

Result: TCP connectivity check passed for subnet “192.168.20.0”

Checking subnet mask consistency…

Subnet mask consistency check passed for subnet “192.168.20.0”.

Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Checking cluster integrity…

Node Name

————————————

 raclinux1

 raclinux2

 raclinux3

Cluster integrity check passed

Checking CRS integrity…

Clusterware version consistency passed

The Oracle Clusterware is healthy on node “raclinux3”

The Oracle Clusterware is healthy on node “raclinux2”

The Oracle Clusterware is healthy on node “raclinux1”

CRS integrity check passed

Checking shared resources…

Checking CRS home location…

“/u01/app/11.2.0.3/grid” is not shared

Result: Shared resources check for node addition passed

Checking node connectivity…

Checking hosts config file…

Node Name Status

———————————— ————————

 raclinux3 passed

 raclinux2 passed

 raclinux1 passed

Verification of the hosts config file successful

Interface information for node “raclinux3”

 Name IP Address Subnet Gateway Def. Gateway HW Address MTU

—— ————— ————— ————— ————— —————– ——

 eth0 192.168.20.23 192.168.20.0 0.0.0.0 UNKNOWN 08:00:27:86:CA:20 1500

 eth0 192.168.20.53 192.168.20.0 0.0.0.0 UNKNOWN 08:00:27:86:CA:20 1500

 eth1 10.10.20.23 10.10.20.0 0.0.0.0 UNKNOWN 08:00:27:A4:7B:A4 1500

 eth1 169.254.237.177 169.254.0.0 0.0.0.0 UNKNOWN 08:00:27:A4:7B:A4 1500

 eth2 192.168.156.103 192.168.156.0 0.0.0.0 UNKNOWN 08:00:27:8C:A0:9B 1500

 eth3 192.168.2.24 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:E5:70:A8 1500

Interface information for node “raclinux2”

 Name IP Address Subnet Gateway Def. Gateway HW Address MTU

—— ————— ————— ————— ————— —————– ——

 eth0 192.168.20.22 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

 eth0 192.168.20.52 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

 eth1 10.10.20.22 10.10.20.0 0.0.0.0 10.0.5.2 08:00:27:41:52:72 1500

 eth1 169.254.206.240 169.254.0.0 0.0.0.0 10.0.5.2 08:00:27:41:52:72 1500

 eth2 192.168.156.102 192.168.156.0 0.0.0.0 10.0.5.2 08:00:27:13:BC:77 1500

 eth3 10.0.5.15 10.0.5.0 0.0.0.0 10.0.5.2 08:00:27:93:4A:17 1500

Interface information for node “raclinux1”

 Name IP Address Subnet Gateway Def. Gateway HW Address MTU

—— ————— ————— ————— ————— —————– ——

 eth0 192.168.20.21 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth0 192.168.20.111 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth0 192.168.20.100 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth0 192.168.20.112 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth0 192.168.20.51 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

 eth1 10.10.20.21 10.10.20.0 0.0.0.0 10.0.5.2 08:00:27:FD:AA:42 1500

 eth1 169.254.89.140 169.254.0.0 0.0.0.0 10.0.5.2 08:00:27:FD:AA:42 1500

 eth2 192.168.156.101 192.168.156.0 0.0.0.0 10.0.5.2 08:00:27:B0:B4:C7 1500

 eth3 10.0.5.15 10.0.5.0 0.0.0.0 10.0.5.2 08:00:27:8D:38:97 1500

Check: Node connectivity for interface “eth0”

Source Destination Connected?

—————————— —————————— —————-

raclinux3[192.168.20.23] raclinux3[192.168.20.53] yes

 raclinux3[192.168.20.23] raclinux2[192.168.20.22] yes

 raclinux3[192.168.20.23] raclinux2[192.168.20.52] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.21] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.111] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.100] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.112] yes

 raclinux3[192.168.20.23] raclinux1[192.168.20.51] yes

 raclinux3[192.168.20.53] raclinux2[192.168.20.22] yes

 raclinux3[192.168.20.53] raclinux2[192.168.20.52] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.21] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.111] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.100] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.112] yes

 raclinux3[192.168.20.53] raclinux1[192.168.20.51] yes

 raclinux2[192.168.20.22] raclinux2[192.168.20.52] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.21] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.111] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.100] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.112] yes

 raclinux2[192.168.20.22] raclinux1[192.168.20.51] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.21] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.111] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.100] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.112] yes

 raclinux2[192.168.20.52] raclinux1[192.168.20.51] yes

 raclinux1[192.168.20.21] raclinux1[192.168.20.111] yes

 raclinux1[192.168.20.21] raclinux1[192.168.20.100] yes

 raclinux1[192.168.20.21] raclinux1[192.168.20.112] yes

 raclinux1[192.168.20.21] raclinux1[192.168.20.51] yes

 raclinux1[192.168.20.111] raclinux1[192.168.20.100] yes

 raclinux1[192.168.20.111] raclinux1[192.168.20.112] yes

 raclinux1[192.168.20.111] raclinux1[192.168.20.51] yes

 raclinux1[192.168.20.100] raclinux1[192.168.20.112] yes

 raclinux1[192.168.20.100] raclinux1[192.168.20.51] yes

raclinux1[192.168.20.112] raclinux1[192.168.20.51] yes

Result: Node connectivity passed for interface “eth0”

Check: TCP connectivity of subnet “192.168.20.0”

Source Destination Connected?

—————————— —————————— —————-

raclinux2:192.168.20.22 raclinux3:192.168.20.23 passed

raclinux2:192.168.20.22 raclinux3:192.168.20.53 passed

raclinux2:192.168.20.22 raclinux2:192.168.20.52 passed

raclinux2:192.168.20.22 raclinux1:192.168.20.21 passed

raclinux2:192.168.20.22 raclinux1:192.168.20.111 passed

raclinux2:192.168.20.22 raclinux1:192.168.20.100 passed

raclinux2:192.168.20.22 raclinux1:192.168.20.112 passed

raclinux2:192.168.20.22 raclinux1:192.168.20.51 passed

Result: TCP connectivity check passed for subnet “192.168.20.0”

Check: Node connectivity for interface “eth1”

Source Destination Connected?

—————————— —————————— —————-

raclinux3[10.10.20.23] raclinux2[10.10.20.22] yes

raclinux3[10.10.20.23] raclinux1[10.10.20.21] yes

raclinux2[10.10.20.22] raclinux1[10.10.20.21] yes

Result: Node connectivity passed for interface “eth1”

Check: TCP connectivity of subnet “10.10.20.0”

Source Destination Connected?

—————————— —————————— —————-

raclinux2:10.10.20.22 raclinux3:10.10.20.23 passed

raclinux2:10.10.20.22 raclinux1:10.10.20.21 passed

Result: TCP connectivity check passed for subnet “10.10.20.0”

Checking subnet mask consistency…

Subnet mask consistency check passed for subnet “192.168.20.0”.

Subnet mask consistency check passed for subnet “10.10.20.0”.

Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Checking node application existence…

Checking existence of VIP node application (required)

Node Name Required Running? Comment

———— ———————— ———————— ———-

 raclinux3 yes yes passed

 raclinux2 yes yes passed

 raclinux1 yes yes passed

VIP node application check passed

Checking existence of NETWORK node application (required)

Node Name Required Running? Comment

———— ———————— ———————— ———-

 raclinux3 yes yes passed

 raclinux2 yes yes passed

 raclinux1 yes yes passed

NETWORK node application check passed

Checking existence of GSD node application (optional)

Node Name Required Running? Comment

———— ———————— ———————— ———-

 raclinux3 no no exists

 raclinux2 no no exists

 raclinux1 no no exists

GSD node application is offline on nodes “raclinux3,raclinux2,raclinux1”

Checking existence of ONS node application (optional)

Node Name Required Running? Comment

———— ———————— ———————— ———-

raclinux3 no yes passed

raclinux2 no yes passed

raclinux1 no yes passed

ONS node application check passed

Checking Single Client Access Name (SCAN)…

 SCAN Name Node Running? ListenerName Port Running?

—————- ———— ———— ———— ———— ————

 rac-scan raclinux1 true LISTENER_SCAN1 1521 true

Checking TCP connectivity to SCAN Listeners…

 Node ListenerName TCP connectivity?

———— ———————— ————————

raclinux2 LISTENER_SCAN1 yes

TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for “rac-scan”…

ERROR:

PRVG-1101 : SCAN name “rac-scan” failed to resolve

SCAN Name IP Address Status Comment

———— ———————— ———————— ———-

 rac-scan 192.168.20.100 failed NIS Entry

ERROR:

PRVF-4657 : Name resolution setup check for “rac-scan” (IP address: 192.168.20.100) failed

ERROR:

PRVF-4664 : Found inconsistent name resolution entries for SCAN name “rac-scan”

Verification of SCAN VIP and Listener setup failed

Checking to make sure user “grid” is not in “root” group

Node Name Status Comment

———— ———————— ————————

raclinux3 passed does not exist

Result: User “grid” is not part of “root” group. Check passed

Checking if Clusterware is installed on all nodes…

Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes…

Check: CTSS Resource running on all nodes

Node Name Status

———————————— ————————

raclinux3 passed

Result: CTSS resource check passed

Querying CTSS for time offset on all nodes…

Result: Query of CTSS for time offset passed

Check CTSS state started…

Check: CTSS state

Node Name State

———————————— ————————

raclinux3 Observer

CTSS is in Observer state. Switching over to clock synchronization checks using NTP

Starting Clock synchronization checks using Network Time Protocol(NTP)…

NTP Configuration file check started…

The NTP configuration file “/etc/ntp.conf” is available on all nodes

NTP Configuration file check passed

Checking daemon liveness…

Check: Liveness for “ntpd”

Node Name Running?

———————————— ————————

raclinux3 yes

Result: Liveness check passed for “ntpd”

Check for NTP daemon or service alive passed on all nodes

Checking NTP daemon command line for slewing option “-x”

Check: NTP daemon command line

 Node Name Slewing Option Set?

———————————— ————————

raclinux3 yes

Result:

NTP daemon slewing option check passed

Checking NTP daemon’s boot time configuration, in file “/etc/sysconfig/ntpd”, for slewing option “-x”

Check: NTP daemon’s boot time configuration

 Node Name Slewing Option Set?

———————————— ————————

raclinux3 yes

Result:

NTP daemon’s boot time configuration check for slewing option passed

Checking whether NTP daemon or service is using UDP port 123 on all nodes

Check for NTP daemon or service using UDP port 123

Node Name Port Open?

———————————— ————————

raclinux3 yes

NTP common Time Server Check started…

NTP Time Server “.LOCL.” is common to all nodes on which the NTP daemon is running

Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started…

Checking on nodes “[raclinux3]”…

Check: Clock time offset from NTP Time Server

Time Server: .LOCL.

Time Offset Limit: 1000.0 msecs

Node Name Time Offset Status

———— ———————— ————————

raclinux3 0.0 passed

Time Server “.LOCL.” has time offsets that are within permissible limits for nodes “[raclinux3]”.

Clock time offset check passed

Result: Clock synchronization check using Network Time Protocol(NTP) passed

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was unsuccessful on all the nodes.

[grid@raclinux2 bin]$
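
The post-check is reported as unsuccessful solely because of the SCAN checks: “rac-scan” resolves through a hosts/NIS entry rather than through DNS, which cluvfy flags as inconsistent (PRVG-1101, PRVF-4657, PRVF-4664). In a non-GNS test setup with a single SCAN address this is a known verification failure that does not prevent the node addition; as the output shows, the clusterware itself is healthy on all three nodes. To confirm how the SCAN is registered and resolved (names follow this article's setup):

srvctl config scan     # SCAN name and VIP(s) registered with the cluster
nslookup rac-scan      # how the name actually resolves on the node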


November 17, 2011 | oracle

Comments

  1. Hi

    I am trying to install RAC on my VMware and I am referring to this document:
    http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVMwareServer2.php

    I've installed Grid Infrastructure on both nodes. (I got the same error about the SCAN IP address as shown in the document.)

    When I try to install the database, although I choose a RAC installation, the OUI doesn't detect the other node. What is the reason for this? I can confirm that ASM is up and running on both nodes.

    RAC1:

    [oracle@RAC1 database]$ crs_stat -t
    Name Type Target State Host
    ————————————————————
    ora.DATA.dg ora….up.type ONLINE ONLINE rac1
    ora….ER.lsnr ora….er.type ONLINE ONLINE rac1
    ora….N1.lsnr ora….er.type ONLINE ONLINE rac1
    ora.asm ora.asm.type ONLINE ONLINE rac1
    ora.cvu ora.cvu.type ONLINE ONLINE rac1
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora….network ora….rk.type ONLINE ONLINE rac1
    ora.oc4j ora.oc4j.type ONLINE ONLINE rac1
    ora.ons ora.ons.type ONLINE ONLINE rac1
    ora….SM1.asm application ONLINE ONLINE rac1
    ora….C1.lsnr application ONLINE ONLINE rac1
    ora.rac1.gsd application OFFLINE OFFLINE
    ora.rac1.ons application ONLINE ONLINE rac1
    ora.rac1.vip ora….t1.type ONLINE ONLINE rac1
    ora.scan1.vip ora….ip.type ONLINE ONLINE rac1

    RAC2:

    [oracle@RAC2 ~]$ crs_stat -t
    Name Type Target State Host
    ————————————————————
    ora.DATA.dg ora….up.type ONLINE ONLINE rac2
    ora….N1.lsnr ora….er.type OFFLINE OFFLINE
    ora.asm ora.asm.type ONLINE ONLINE rac2
    ora.cvu ora.cvu.type OFFLINE OFFLINE
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora….network ora….rk.type ONLINE ONLINE rac2
    ora.oc4j ora.oc4j.type OFFLINE OFFLINE
    ora.ons ora.ons.type ONLINE ONLINE rac2
    ora….SM1.asm application ONLINE ONLINE rac2
    ora.rac2.gsd application OFFLINE OFFLINE
    ora.rac2.ons application ONLINE ONLINE rac2
    ora.rac2.vip ora….t1.type ONLINE ONLINE rac2
    ora.scan1.vip ora….ip.type ONLINE OFFLINE

    [oracle@RAC1 ~]$ cluvfy stage -post crsinst -n rac1,rac2

    Performing post-checks for cluster services setup

    Checking node reachability…
    Node reachability check passed from node “RAC1”

    Checking user equivalence…
    PRVF-4007 : User equivalence check failed for user “oracle”
    Check failed on nodes:
    rac2

    WARNING:
    User equivalence is not set for nodes:
    rac2
    Verification will proceed with nodes:
    rac1

    Checking node connectivity…

    Checking hosts config file…

    Verification of the hosts config file successful

    Check: Node connectivity for interface “eth0”
    Node connectivity passed for interface “eth0”

    Check: Node connectivity for interface “eth1”
    Node connectivity passed for interface “eth1”

    Node connectivity check passed

    Time zone consistency check passed

    Checking Cluster manager integrity…

    Checking CSS daemon…
    Oracle Cluster Synchronization Services appear to be online.

    Cluster manager integrity check passed

    UDev attributes check for OCR locations started…
    UDev attributes check passed for OCR locations

    UDev attributes check for Voting Disk locations started…
    UDev attributes check passed for Voting Disk locations

    Default user file creation mask check passed

    Checking cluster integrity…

    Cluster integrity check passed

    Checking OCR integrity…

    Checking the absence of a non-clustered configuration…
    All nodes free of non-clustered, local-only configurations

    ASM Running check passed. ASM is running on all specified nodes

    Checking OCR config file “/etc/oracle/ocr.loc”…

    OCR config file “/etc/oracle/ocr.loc” check successful

    Disk group for ocr location “+DATA” available on all the nodes

    NOTE:
    This check does not verify the integrity of the OCR contents. Execute ‘ocrcheck’ as a privileged user to verify the contents of OCR.

    OCR integrity check passed

    Checking CRS integrity…

    CRS integrity check passed

    Checking node application existence…

    Checking existence of VIP node application (required)
    VIP node application check passed

    Checking existence of NETWORK node application (required)
    NETWORK node application check passed

    Checking existence of GSD node application (optional)
    GSD node application is offline on nodes “rac1”

    Checking existence of ONS node application (optional)
    ONS node application check passed

    Checking Single Client Access Name (SCAN)…

    Checking TCP connectivity to SCAN Listeners…
    TCP connectivity to SCAN Listeners exists on all cluster nodes

    Checking name resolution setup for “rac-scan”…

    ERROR:
    PRVF-4664 : Found inconsistent name resolution entries for SCAN name “rac-scan”

    ERROR:
    PRVF-4657 : Name resolution setup check for “rac-scan” (IP address: 192.168.161.132) failed

    ERROR:
    PRVF-4664 : Found inconsistent name resolution entries for SCAN name “rac-scan”

    Verification of SCAN VIP and Listener setup failed

    Checking OLR integrity…

    Checking OLR config file…

    OLR config file check successful

    Checking OLR file attributes…

    OLR file check successful

    WARNING:
    This check does not verify the integrity of the OLR contents. Execute ‘ocrcheck -local’ as a privileged user to verify the contents of OLR.

    OLR integrity check passed

    ACFS verification is not supported on platform “Linux (x86_64)”

    Checking Oracle Cluster Voting Disk configuration…

    ERROR:

    PRVF-4193 : Asm is not running on the following nodes. Proceeding with the remaining nodes.
    rac2

    Oracle Cluster Voting Disk configuration check passed

    User “oracle” is not part of “root” group. Check passed

    Checking if Clusterware is installed on all nodes…
    Check of Clusterware install passed

    Checking if CTSS Resource is running on all nodes…
    CTSS resource check passed

    Querying CTSS for time offset on all nodes…
    Query of CTSS for time offset passed

    Check CTSS state started…
    CTSS is in Observer state. Switching over to clock synchronization checks using NTP

    Starting Clock synchronization checks using Network Time Protocol(NTP)…

    NTP Configuration file check started…
    NTP Configuration file check passed

    Checking daemon liveness…
    Liveness check failed for “ntpd”
    Check failed on nodes:
    rac1
    PRVF-5494 : The NTP Daemon or Service was not alive on all nodes
    PRVF-5415 : Check to see if NTP daemon or service is running failed
    Clock synchronization check using Network Time Protocol(NTP) failed

    PRVF-9652 : Cluster Time Synchronization Services check failed
    Checking VIP configuration.
    Checking VIP Subnet configuration.
    Check for VIP Subnet configuration passed.
    Checking VIP reachability
    Check for VIP reachability passed.

    Post-check for cluster services setup was unsuccessful.
    Checks did not pass for the following node(s):
    rac2,rac1,RAC1

    Comment by Tjay | January 11, 2012

    • Hi,

      I assume you are installing 11.2. Correct?

      There are a lot of things that could go wrong!

      1. Issue olsnodes on both nodes to see whether the cluster is properly configured.

      2. Examine the oraInventory for corruption and make sure that both RDBMS homes exist on all nodes.

      3. Use cluvfy stage -pre dbcfg

      4. Check the cluster health with:
      4.1 crsctl check cluster -all
      4.2 crsctl stat res -t

      Start from the foregoing points to investigate!

      Regards,

      Comment by gjilevski | January 11, 2012

  2. Hi

    Thanks for your reply.
    Yes, I am trying to install 11.2.
    Everything seems to be working fine. Not sure why the OUI doesn't detect node2 when I try to install the database.

    [oracle@RAC1 database]$ olsnodes
    rac1

    [oracle@RAC2 ~]$ olsnodes
    rac2

    [oracle@RAC1 database]$ crsctl check cluster -all
    **************************************************************
    rac1:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************

    [oracle@RAC2 ~]$ crsctl check cluster -all
    **************************************************************
    rac2:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************

    [oracle@RAC1 database]$ crsctl stat res -t
    ——————————————————————————–
    NAME TARGET STATE SERVER STATE_DETAILS
    ——————————————————————————–
    Local Resources
    ——————————————————————————–
    ora.DATA.dg
    ONLINE ONLINE rac1
    ora.LISTENER.lsnr
    ONLINE ONLINE rac1
    ora.asm
    ONLINE ONLINE rac1 Started
    ora.gsd
    OFFLINE OFFLINE rac1
    ora.net1.network
    ONLINE ONLINE rac1
    ora.ons
    ONLINE ONLINE rac1
    ——————————————————————————–
    Cluster Resources
    ——————————————————————————–
    ora.LISTENER_SCAN1.lsnr
    1 ONLINE ONLINE rac1
    ora.cvu
    1 ONLINE ONLINE rac1
    ora.oc4j
    1 ONLINE ONLINE rac1
    ora.rac1.vip
    1 ONLINE ONLINE rac1
    ora.scan1.vip
    1 ONLINE ONLINE rac1

    *************************************************************************

    [oracle@RAC2 ~]$ crsctl stat res -t
    ——————————————————————————–
    NAME TARGET STATE SERVER STATE_DETAILS
    ——————————————————————————–
    Local Resources
    ——————————————————————————–
    ora.DATA.dg
    ONLINE ONLINE rac2
    ora.asm
    ONLINE ONLINE rac2
    ora.gsd
    OFFLINE OFFLINE rac2
    ora.net1.network
    ONLINE ONLINE rac2
    ora.ons
    ONLINE ONLINE rac2
    ——————————————————————————–
    Cluster Resources
    ——————————————————————————–
    ora.LISTENER_SCAN1.lsnr
    1 OFFLINE OFFLINE
    ora.cvu
    1 OFFLINE OFFLINE
    ora.oc4j
    1 OFFLINE OFFLINE
    ora.rac2.vip
    1 ONLINE ONLINE rac2
    ora.scan1.vip
    1 ONLINE OFFLINE

    Comment by Tjay | January 11, 2012

    • Hi,

      Your GI is not functioning as expected.

      olsnodes on each node should list both nodes. It should show:

      [oracle@RAC2 ~]$ olsnodes
      rac1
      rac2

      ……
      [oracle@RAC1 ~]$ olsnodes
      rac1
      rac2

      It should likewise show:
      [oracle@RAC1 database]$ crsctl check cluster -all

      rac1:
      CRS-4537: Cluster Ready Services is online
      CRS-4529: Cluster Synchronization Services is online
      CRS-4533: Event Manager is online
      **************************************************************
      rac2:
      CRS-4537: Cluster Ready Services is online
      CRS-4529: Cluster Synchronization Services is online
      CRS-4533: Event Manager is online
      **************************************************************

      And from the other node it should show:

      [oracle@RAC2 ~]$ crsctl check cluster -all
      rac1:
      CRS-4537: Cluster Ready Services is online
      CRS-4529: Cluster Synchronization Services is online
      CRS-4533: Event Manager is online
      **************************************************************
      rac2:
      CRS-4537: Cluster Ready Services is online
      CRS-4529: Cluster Synchronization Services is online
      CRS-4533: Event Manager is online
      **************************************************************

      How did you install GI?

      Did you run cluvfy stage -post crsinst -n rac1,rac2 -verbose?

      What your output shows is that each node is not aware of the other node. Until you fix that, you will never be able to install the RDBMS and configure the database as a RAC database.

      Fix your GI first.

      Regards,

      Comment by gjilevski | January 11, 2012

  3. Hi

    Appreciate your help.

    I installed GI using the OUI, and I specified both nodes during installation. (I followed the document http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVMwareServer2.php)

    Here is the output:

    [oracle@RAC1 ~]$ cluvfy stage -post crsinst -n rac1,rac2 -verbose

    Performing post-checks for cluster services setup

    Checking node reachability…

    Check: Node reachability from node “RAC1”
    Destination Node Reachable?
    ———————————— ————————
    rac2 yes
    rac1 yes
    Result: Node reachability check passed from node “RAC1”

    Checking user equivalence…

    Check: User equivalence for user “oracle”
    Node Name Comment
    ———————————— ————————
    rac2 failed
    rac1 passed
    Result: PRVF-4007 : User equivalence check failed for user “oracle”

    WARNING:
    User equivalence is not set for nodes:
    rac2
    Verification will proceed with nodes:
    rac1

    Checking node connectivity…

    Checking hosts config file…
    Node Name Status Comment
    ———— ———————— ————————
    rac1 passed

    Verification of the hosts config file successful

    Interface information for node “rac1”
    Name IP Address Subnet Gateway Def. Gateway HW Address MTU
    —— ————— ————— ————— ————— —————– ——
    eth0 192.168.161.128 192.168.161.0 0.0.0.0 192.168.161.1 00:0C:29:4D:A8:89 1500
    eth0 169.254.49.113 169.254.0.0 0.0.0.0 192.168.161.1 00:0C:29:4D:A8:89 1500
    eth1 192.168.161.129 192.168.161.0 0.0.0.0 192.168.161.1 00:0C:29:4D:A8:93 1500
    eth1 192.168.161.132 192.168.161.0 0.0.0.0 192.168.161.1 00:0C:29:4D:A8:93 1500
    eth1 192.168.161.135 192.168.161.0 0.0.0.0 192.168.161.1 00:0C:29:4D:A8:93 1500

    Check: Node connectivity for interface “eth0”
    Result: Node connectivity passed for interface “eth0”

    Check: Node connectivity for interface “eth1”
    Source Destination Connected?
    —————————— —————————— —————-
    rac1[192.168.161.129] rac1[192.168.161.132] yes
    rac1[192.168.161.129] rac1[192.168.161.135] yes
    rac1[192.168.161.132] rac1[192.168.161.135] yes
    Result: Node connectivity passed for interface “eth1”

    Result: Node connectivity check passed

    Check: Time zone consistency
    Result: Time zone consistency check passed

    Checking Cluster manager integrity…

    Checking CSS daemon…

    Node Name Status
    ———————————— ————————
    rac1 running

    Oracle Cluster Synchronization Services appear to be online.

    Cluster manager integrity check passed

    UDev attributes check for OCR locations started…
    Result: UDev attributes check passed for OCR locations

    UDev attributes check for Voting Disk locations started…
    Result: UDev attributes check passed for Voting Disk locations

    Check default user file creation mask
    Node Name Available Required Comment
    ———— ———————— ———————— ———-
    rac1 0022 0022 passed
    Result: Default user file creation mask check passed

    Checking cluster integrity…

    Node Name
    ————————————
    rac1

    Cluster integrity check passed

    Checking OCR integrity…

    Checking the absence of a non-clustered configuration…
    All nodes free of non-clustered, local-only configurations

    ASM Running check passed. ASM is running on all specified nodes

    Checking OCR config file “/etc/oracle/ocr.loc”…

    OCR config file “/etc/oracle/ocr.loc” check successful

    Disk group for ocr location “+DATA” available on all the nodes

    NOTE:
    This check does not verify the integrity of the OCR contents. Execute ‘ocrcheck’ as a privileged user to verify the contents of OCR.

    OCR integrity check passed

    Checking CRS integrity…
    The Oracle Clusterware is healthy on node “rac1”

    CRS integrity check passed

    Checking node application existence…

    Checking existence of VIP node application (required)
    Node Name Required Running? Comment
    ———— ———————— ———————— ———-
    rac1 yes yes passed
    VIP node application check passed

    Checking existence of NETWORK node application (required)
    Node Name Required Running? Comment
    ———— ———————— ———————— ———-
    rac1 yes yes passed
    NETWORK node application check passed

    Checking existence of GSD node application (optional)
    Node Name Required Running? Comment
    ———— ———————— ———————— ———-
    rac1 no no exists
    GSD node application is offline on nodes “rac1”

    Checking existence of ONS node application (optional)
    Node Name Required Running? Comment
    ———— ———————— ———————— ———-
    rac1 no yes passed
    ONS node application check passed

    Checking Single Client Access Name (SCAN)…
    SCAN Name Node Running? ListenerName Port Running?
    —————- ———— ———— ———— ———— ————
    rac-scan rac1 true LISTENER_SCAN1 1521 true

    Checking TCP connectivity to SCAN Listeners…
    Node ListenerName TCP connectivity?
    ———— ———————— ————————
    localnode LISTENER_SCAN1 yes
    TCP connectivity to SCAN Listeners exists on all cluster nodes

    Checking name resolution setup for “rac-scan”…

    ERROR:
    PRVF-4664 : Found inconsistent name resolution entries for SCAN name “rac-scan”
    SCAN Name IP Address Status Comment
    ———— ———————— ———————— ———-
    rac-scan 192.168.161.132 failed NIS Entry

    ERROR:
    PRVF-4657 : Name resolution setup check for “rac-scan” (IP address: 192.168.161.132) failed

    ERROR:
    PRVF-4664 : Found inconsistent name resolution entries for SCAN name “rac-scan”

    Verification of SCAN VIP and Listener setup failed

    Checking OLR integrity…

    Checking OLR config file…

    OLR config file check successful

    Checking OLR file attributes…

    OLR file check successful

    WARNING:
    This check does not verify the integrity of the OLR contents. Execute ‘ocrcheck -local’ as a privileged user to verify the contents of OLR.

    OLR integrity check passed

    ACFS verification is not supported on platform “Linux (x86_64)”

    Checking Oracle Cluster Voting Disk configuration…

    ERROR:

    PRVF-4193 : Asm is not running on the following nodes. Proceeding with the remaining nodes.
    rac2

    Oracle Cluster Voting Disk configuration check passed

    Checking to make sure user “oracle” is not in “root” group
    Node Name Status Comment
    ———— ———————— ————————
    rac1 does not exist passed
    Result: User “oracle” is not part of “root” group. Check passed

    Checking if Clusterware is installed on all nodes…
    Check of Clusterware install passed

    Checking if CTSS Resource is running on all nodes…
    Check: CTSS Resource running on all nodes
    Node Name Status
    ———————————— ————————
    rac1 passed
    Result: CTSS resource check passed

    Querying CTSS for time offset on all nodes…
    Result: Query of CTSS for time offset passed

    Check CTSS state started…
    Check: CTSS state
    Node Name State
    ———————————— ————————
    rac1 Observer
    CTSS is in Observer state. Switching over to clock synchronization checks using NTP

    Starting Clock synchronization checks using Network Time Protocol(NTP)…

    NTP Configuration file check started…
    The NTP configuration file “/etc/ntp.conf” is available on all nodes
    NTP Configuration file check passed

    Checking daemon liveness…

    Check: Liveness for “ntpd”
    Node Name Running?
    ———————————— ————————
    rac1 no
    Result: Liveness check failed for “ntpd”
    PRVF-5494 : The NTP Daemon or Service was not alive on all nodes
    PRVF-5415 : Check to see if NTP daemon or service is running failed
    Result: Clock synchronization check using Network Time Protocol(NTP) failed

    PRVF-9652 : Cluster Time Synchronization Services check failed
    Checking VIP configuration.
    Checking VIP Subnet configuration.
    Check for VIP Subnet configuration passed.
    Checking VIP reachability
    Check for VIP reachability passed.

    Post-check for cluster services setup was unsuccessful.
    Checks did not pass for the following node(s):
    rac2,rac1,RAC1
    [oracle@RAC1 ~]$

    Comment by Tjay | January 12, 2012 | Reply

    • Hi,

      Look at fixing the errors:

      Result: PRVF-4007 : User equivalence check failed for user “oracle”

      PRVF-4193 : Asm is not running on the following nodes. Proceeding with the remaining nodes.

      The idea is for each node to be able to see every other cluster node.
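
      To verify the user equivalence part specifically, cluvfy has a dedicated component check; a minimal sketch, assuming the oracle user and nodes rac1 and rac2:

      cluvfy comp admprv -n rac1,rac2 -o user_equiv -verbose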

      Regards,

      Comment by gjilevski | January 12, 2012 | Reply

    • Hi,

      Did GI install successfully? Did prerequisite checks pass?

      Regards,

      Comment by gjilevski | January 12, 2012 | Reply

  6. I ignored the NTP and SCAN IP address errors during the GI installation.
    I will uninstall GI and start from scratch.

    Many thanks for your help.

    Here are a couple of errors from the GI installation log:

    INFO: PRVF-5494 : The NTP Daemon or Service was not alive on all nodes
    INFO: PRVF-5415 : Check to see if NTP daemon or service is running failed
    INFO: Clock synchronization check using Network Time Protocol(NTP) failed
    INFO: PRVF-9652 : Cluster Time Synchronization Services check failed

    INFO: SEVERE: ERROR: rac-cluster: rac-cluster
    INFO: SEVERE: [FATAL] [INS-40922] Invalid SCAN – unresolvable to an IP address.
    INFO: CAUSE: SCAN provided does not resolve to an IP address.
    INFO: ACTION: Ensure that the SCAN is valid, and that it is resolvable to an IP address.

    INFO: ERROR:
    INFO: PRVF-4193 : Asm is not running on the following nodes. Proceeding with the remaining nodes.
    INFO: Check failed on nodes:
    INFO: rac2

    Comment by Tjay | January 12, 2012 | Reply

    • Hi,

      Make sure that you meet the prerequisites!!

      Use

      1. cluvfy stage -pre crsinst -n rac1,rac2 -verbose
      2. Pay attention to the OUI screens as well

      Regards

      Comment by gjilevski | January 12, 2012 | Reply

  7. Thanks a lot for your assistance

    Comment by Tjay | January 12, 2012 | Reply

  8. Hi again gjilevski,

    When I try to install Grid Infrastructure, I get the errors below:
    Any idea how to fix it?

    I've followed the following link to perform the installation: http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVMwareServer2.php

    The prerequisites failed with the error below:

    Task resolv.conf Integrity

    Verification result of failed node: rac2
    List of errors:
    -PRVG-5636: The DNS response time for an unreachable node exceeded “15000”ms in following nodes: rac1,rac2
    -Cause: The DNS response time for an unreachable node exceeded the value specified on the nodes specified.
    -Action: Make sure that ‘options timeout’, ‘options attempts’ and ‘nameserver’ entries in file resolv.conf are proper.
    On HpUX these entries will be ‘retrans’, ‘retry’ and ‘nameserver’.

    -PRVF-5622: search entry does not exist in file “/etc/resolv.conf” on nodes: “rac1”
    -Cause: The ‘search’ entry was not found on nodes indicated while it was present in others.
    -Action: Look at the file specified on all nodes. Make sure that either the ‘search’ entry is defined on all nodes or is not defined on any node.

    Verification result of failed node: rac1
    ..
    ..

    Comment by Tjay | February 14, 2012 | Reply

    • Hi again Tjay,

      The errors are self-explanatory.

      1. nslookup time should be less than the threshold specified.

      2. make sure that the file “/etc/resolv.conf” contains the same attributes and values on both nodes, e.g. compared as shown below.
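
      A minimal comparison sketch, assuming ssh user equivalence is already set up between the nodes:

      ssh rac1 cat /etc/resolv.conf
      ssh rac2 cat /etc/resolv.conf
      diff <(ssh rac1 cat /etc/resolv.conf) <(ssh rac2 cat /etc/resolv.conf)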

      You can ignore the errors if using a virtual environment that you set up as a playground. (VMware or VirtualBox)

      Regards,

      Comment by gjilevski | February 14, 2012 | Reply

  9. Thanks gjilevski.

    Since I want to install it on a virtual environment, I will ignore the resolv.conf integrity task error.

    I have got one last failed prerequisite

    Device Checks For ASM

    Operation Failed on Nodes: rac2,rac1

    Verification result of failed node : rac2

    List of errors:

    -PRVF-5150: Path ORCL:DISK4 is not a valid path on all nodes
    -PRVF-5150: Path ORCL:DISK3 is not a valid path on all nodes
    -PRVF-5150: Path ORCL:DISK2 is not a valid path on all nodes
    -PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes

    Verification result of failed node : rac1

    List of errors:

    -PRVF-5150: Path ORCL:DISK4 is not a valid path on all nodes
    -PRVF-5150: Path ORCL:DISK3 is not a valid path on all nodes
    -PRVF-5150: Path ORCL:DISK2 is not a valid path on all nodes
    -PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes

    Any idea about this one?

    Comment by Tjay | February 15, 2012 | Reply

    • Hi,

      For a playground or sandbox virtualization and installation you can ignore the errors. For production, pay attention and do due diligence.

      For the PRVF-5150 check if you REALLY have:

      1. properly configured shared disks
      2. The same disk name for each disk on all nodes. For example, /dev/sdb1 refers to the same disk on all nodes, /dev/sdc1 refers to the same disk on all nodes, etc.
      3. There is also a known bug for this PRVF-5150 error in some early 11gR2 releases.

      https://gjilevski.wordpress.com/2010/10/03/fresh-oracle-11-2-0-2-grid-infrastructure-installation-prvf-5150-prvf-5184/

      So if you are sure that 1 and 2 are OK, you can safely ignore the PRVF-5150 error and go ahead. The commands below can help confirm the disks.
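
      A quick sketch for confirming points 1 and 2 with ASMLib disks, assuming labels DISK1 through DISK4; run on every node as root:

      oracleasm scandisks
      oracleasm listdisks
      oracleasm querydisk DISK1

      Each disk should be reported as a valid ASM disk, and the same labels should be visible on all nodes.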

      Good luck!

      Regards,

      Comment by gjilevski | February 15, 2012 | Reply

  10. Hi gjilevski

    During the Grid Infrastructure installation, I was prompted to run the following scripts:

    /oracle/app/oraInventory/orainstRoot.sh rac1,rac2
    /oracle/app/11.2.0/grid/root.sh rac1,rac2

    I ran them on the local node successfully; however, I am getting the errors below when I try to run them on the second node. Can I ignore these errors?

    PRCR-1079 : Failed to start resource ora.scan1.vip
    CRS-5017: The resource action “ora.scan1.vip start” encountered the following error:
    CRS-5005: IP Address: 192.168.161.201 is already in use in the network

    CRS-2674: Start of ‘ora.scan1.vip’ on ‘rac2’ failed
    CRS-2632: There are no more servers to try to place resource ‘ora.scan1.vip’ on that would satisfy its placement policy

    start scan … failed
    FirstNode configuration failed at /oracle/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 8373.
    /oracle/app/11.2.0/grid/perl/bin/perl -I/oracle/app/11.2.0/grid/perl/lib -I/oracle/app/11.2.0/grid/crs/install /oracle/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

    Comment by Tjay | February 15, 2012 | Reply

    • Hi,

      It should complete successfully on both nodes.

      Try to troubleshoot it. If necessary, deconfigure and start from scratch using

      https://gjilevski.wordpress.com/2010/08/12/how-to-clean-up-after-a-failed-11g-crs-install-what-is-new-in-11g-r2-2/

      Deconfigure Oracle Clusterware without removing the binaries:


      Log in as the root user on a node where you encountered an error. Change directory to $GRID_HOME/crs/install. For example:

      # cd $GRID_HOME/crs/install


      Run rootcrs.pl with the -deconfig -force flags on all but the last node.

      # perl rootcrs.pl -deconfig -force


      If you are deconfiguring Oracle Clusterware on all nodes in the cluster, then on the last node add the -lastnode flag, which completes the deconfiguration of the cluster, including the OCR and the voting disks.

      # perl rootcrs.pl -deconfig -force -lastnode
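
      Once the deconfig completes, a quick sanity check (a sketch; run as root) is to confirm that nothing from the stack is left running:

      # ps -ef | grep -E 'ohasd|crsd|cssd' | grep -v grep
      # crsctl check crs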

      After that, fix the problem. Verify and start OUI again.

      Regards,

      Comment by gjilevski | February 15, 2012 | Reply

    • Hi,

      Check your network prerequisites. vips, subnets, interface names etc… the whole enchilada.

      Regards,

      Comment by gjilevski | February 15, 2012 | Reply

  11. Hi

    The deconfig process seems to be hung; I have been waiting more than 20 minutes with no response.

    [root@RAC2 install]# perl rootcrs.pl -deconfig -force
    Using configuration parameter file: ./crsconfig_params
    PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
    PRCR-1068 : Failed to query resources
    Cannot communicate with crsd
    PRCR-1070 : Failed to check if resource ora.gsd is registered
    Cannot communicate with crsd
    PRCR-1070 : Failed to check if resource ora.ons is registered
    Cannot communicate with crsd

    ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘2.6.32-200.13.1.el5uek

    ACFS-9201: Not Supported

    Comment by Tjay | February 16, 2012 | Reply

    • Hi,

      Reboot the node and try again.

      Regards,

      Comment by gjilevski | February 16, 2012 | Reply

  12. Hi

    I believe I should fix below issue before proceeding with the installation:

    PRCR-1079 : Failed to start resource ora.scan1.vip
    CRS-5017: The resource action “ora.scan1.vip start” encountered the following error:
    CRS-5005: IP Address: 192.168.161.201 is already in use in the network

    I can ping 192.168.161.201

    [oracle@RAC2 Software]$ ping 192.168.161.201
    PING 192.168.161.201 (192.168.161.201) 56(84) bytes of data.
    64 bytes from 192.168.161.201: icmp_seq=1 ttl=64 time=2.09 ms
    64 bytes from 192.168.161.201: icmp_seq=2 ttl=64 time=0.641 ms
    64 bytes from 192.168.161.201: icmp_seq=3 ttl=64 time=0.605 ms

    These are the current scan ip details in /etc/hosts

    # SCAN
    192.168.161.201 rac-scan.localdomain rac-scan
    192.168.161.202 rac-scan.localdomain rac-scan
    192.168.161.203 rac-scan.localdomain rac-scan

    I`ve updated scan ip details /etc/hosts

    new:

    # SCAN
    192.168.161.202 rac-scan.localdomain rac-scan
    192.168.161.203 rac-scan.localdomain rac-scan
    192.168.161.204 rac-scan.localdomain rac-scan

    I can not ping anyone of the scan ip addresses at the moment.

    I deconfigured Oracle Clusterware as you suggested (perl rootcrs.pl -deconfig -force)
    and rerun following scripts on rac2.

    /oracle/app/oraInventory/orainstRoot.sh
    /oracle/app/11.2.0/grid/root.sh

    It is still failing with below error. Appericate your suggestion.

    PRCS-1037 : Single Client Access Name VIPs already exist
    PRCS-1028 : Single Client Access Name listeners already exist
    OC4J could not be created as it already exists
    PRCR-1086 : resource ora.oc4j is already registered
    PRCR-1086 : resource ora.cvu is already registered
    PRCR-1079 : Failed to start resource ora.scan1.vip
    CRS-5017: The resource action “ora.scan1.vip start” encountered the following er ror:
    CRS-5005: IP Address: 192.168.161.201 is already in use in the network

    CRS-2674: Start of ‘ora.scan1.vip’ on ‘rac2’ failed
    CRS-2632: There are no more servers to try to place resource ‘ora.scan1.vip’ on that would satisfy its placement policy

    start scan … failed
    FirstNode configuration failed at /oracle/app/11.2.0/grid/crs/install/crsconfig_ lib.pm line 8373.
    /oracle/app/11.2.0/grid/perl/bin/perl -I/oracle/app/11.2.0/grid/perl/lib -I/orac le/app/11.2.0/grid/crs/install /oracle/app/11.2.0/grid/crs/install/rootcrs.pl ex ecution failed

    I realized that it is still failing with same error, appreciate your suggestion?

    Comment by Tjay | February 16, 2012 | Reply

    • Hi,

      Troubleshoot the error. Fix the networking problem and find out why it is happening.

      See MOS, there are some Notes for similar potential problems.

      It is only after you face some issues that you really learn something. Do some research on MOS (Metalink) and/or Google.

      Retry the GI installation or configuration.

      Regards,

      Comment by gjilevski | February 16, 2012 | Reply

  13. Hi

    Thanks,

    At the beginning of the installation, I had issues with virtual IPs. After further investigation and reading, I realized that the VIPs shouldn't be configured before the installation.

    I fixed that issue but now I am having issue with scan ip address.

    My question is, is it mandatory to configure SCAN IP addresses? Should I configure separate SCAN IP addresses on each node?

    Like I said, I am just following a step-by-step RAC installation document on VM; however, my issues are not mentioned in the document.

    http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVMwareServer2.php

    I believe if I fix the scan issue, I can successfully do the installation.

    Comment by Tjay | February 16, 2012 | Reply

    • Hi,

      You need the same SCAN for the whole cluster. It must be the same on all nodes.

      Regards,

      Comment by gjilevski | February 16, 2012 | Reply

  14. Hi gjilevski,

    Sorry to be a pain, I did try the installation from scratch.
    ./root.sh was successfully executed on node1.
    However, I am getting the error below on node2.
    Any idea what the issue might be? I'm really lost.

    [root@RAC2 grid]# ./root.sh
    Running Oracle 11g root.sh script…

    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle/app/11.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    Copying dbhome to /usr/local/bin …
    Copying oraenv to /usr/local/bin …
    Copying coraenv to /usr/local/bin …

    Creating /etc/oratab file…
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2012-02-28 11:19:23: Parsing the host name
    2012-02-28 11:19:23: Checking for super user privileges
    2012-02-28 11:19:23: User has super user privileges
    Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfi g_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user ‘root’, privgrp ‘root’..
    Operation successful.
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    acfsroot: ACFS-9301: ADVM/ACFS installation can not proceed:

    acfsroot: ACFS-9302: No installation files found at /oracle/app/11.2.0/grid/inst all/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5uek-x86_64/bin.

    CRS-2672: Attempting to start ‘ora.gipcd’ on ‘rac2’
    CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘rac2’
    CRS-2676: Start of ‘ora.gipcd’ on ‘rac2’ succeeded
    CRS-2676: Start of ‘ora.mdnsd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘rac2’
    CRS-2676: Start of ‘ora.gpnpd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘rac2’
    CRS-2676: Start of ‘ora.cssdmonitor’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.cssd’ on ‘rac2’
    CRS-2672: Attempting to start ‘ora.diskmon’ on ‘rac2’
    CRS-2676: Start of ‘ora.diskmon’ on ‘rac2’ succeeded
    CRS-2676: Start of ‘ora.cssd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.ctssd’ on ‘rac2’
    CRS-2676: Start of ‘ora.ctssd’ on ‘rac2’ succeeded

    Disk Group DATA already exists. Cannot be created again

    Configuration of ASM failed, see logs for details
    Did not succssfully configure and start ASM
    CRS-2500: Cannot stop resource ‘ora.crsd’ as it is not running
    CRS-4000: Command Stop failed, or completed with errors.
    Command return code of 1 (256) from command: /oracle/app/11.2.0/grid/bin/crsctl stop resource ora.crsd -init
    Stop of resource “ora.crsd -init” failed
    Failed to stop CRSD
    CRS-2500: Cannot stop resource ‘ora.asm’ as it is not running
    CRS-4000: Command Stop failed, or completed with errors.
    Command return code of 1 (256) from command: /oracle/app/11.2.0/grid/bin/crsctl stop resource ora.asm -init
    Stop of resource “ora.asm -init” failed
    Failed to stop ASM
    CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.ctssd’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.cssdmonitor’ on ‘rac2’
    CRS-2677: Stop of ‘ora.cssdmonitor’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.cssd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.cssd’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.gpnpd’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.gipcd’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.mdnsd’ on ‘rac2’ succeeded
    Initial cluster configuration failed. See /oracle/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_rac2.log for details

    Comment by Tjay | February 28, 2012 | Reply

    • Hi,

      Examine why the GI installation thinks that DATA exists!

      Did you retry the install from scratch but reuse the shared disks after a failed or partial GI install? If so, all shared disks need to be without ASM headers, as candidate disks. What does /oracle/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_rac2.log say?

      I do not know the contents of your logs, but if the above is true I would zero the shared disks to make sure that they all have CANDIDATE status.

      Use dd: dd if=/dev/zero of=/dev/sdb1 (or whatever the shared device is). Repeat for all shared devices and rerun the GI installation; a sketch follows below.
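
      A minimal wipe sketch, assuming the shared devices are /dev/sdb1 through /dev/sde1; zeroing the first 100 MB is usually enough to clear the ASM header:

      dd if=/dev/zero of=/dev/sdb1 bs=1M count=100
      dd if=/dev/zero of=/dev/sdc1 bs=1M count=100
      dd if=/dev/zero of=/dev/sdd1 bs=1M count=100
      dd if=/dev/zero of=/dev/sde1 bs=1M count=100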

      Otherwise, I would also examine why GI installation thinks that DATA exists.

      Regards,

      Comment by gjilevski | February 28, 2012 | Reply

    • Hi,

      Something is very wrong!

      Look here at how the second script run should look: https://gjilevski.wordpress.com/2011/10/04/virtualization-using-oracle-vm-virtualbox-for-building-two-node-oracle-rac-11gr2-11-2-0-3-cluster-on-oel-6-1-using-gns-based-on-dns-and-dhcp-with-multiple-private-interconnects-deploying-haip-feat/

      Once you run root.sh on the first node, root.sh on the second node should detect the cluster and join it. OK?

      Somehow it does not detect the already partially started cluster on node1. See what I have on the second node. In your case it seems that it is not seeing the first node. Look for the reason!

      [root@oel61b disks]# /u01/app/11.2.0/grid/root.sh

      Performing root user operation for Oracle 11g

      The following environment variables are set as:

      ORACLE_OWNER= grid

      ORACLE_HOME= /u01/app/11.2.0/grid

      Enter the full pathname of the local bin directory: [/usr/local/bin]:

      Copying dbhome to /usr/local/bin …

      Copying oraenv to /usr/local/bin …

      Copying coraenv to /usr/local/bin …

      Creating /etc/oratab file…

      Entries will be added to the /etc/oratab file as needed by

      Database Configuration Assistant when a database is created

      Finished running generic part of root script.

      Now product-specific root actions will be performed.

      Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

      Creating trace directory

      User ignored Prerequisites during installation

      OLR initialization – successful

      Adding Clusterware entries to upstart

      CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node oel61a, number 1, and is terminating

      An active cluster was found during exclusive startup, restarting to join the cluster

      Configure Oracle Grid Infrastructure for a Cluster … succeeded

      [root@oel61b disks]#

      Comment by gjilevski | February 29, 2012 | Reply

  15. Hi

    I've recreated the ASM disks per your advice; root.sh has been successfully executed on node1, however I am still getting the error below on node2. Can I ignore this error?
    Note that I am not using DNS and am trying to install 11.2.0.1 RAC.

    [root@RAC2 grid]# ./root.sh
    Running Oracle 11g root.sh script…

    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle/app/11.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    Copying dbhome to /usr/local/bin …
    Copying oraenv to /usr/local/bin …
    Copying coraenv to /usr/local/bin …

    Creating /etc/oratab file…
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2012-02-29 14:30:43: Parsing the host name
    2012-02-29 14:30:43: Checking for super user privileges
    2012-02-29 14:30:43: User has super user privileges
    Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfi g_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user ‘root’, privgrp ‘root’..
    Operation successful.
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    acfsroot: ACFS-9301: ADVM/ACFS installation can not proceed:

    acfsroot: ACFS-9302: No installation files found at /oracle/app/11.2.0/grid/inst all/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5uek-x86_64/bin.

    CRS-2672: Attempting to start ‘ora.gipcd’ on ‘rac2’
    CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘rac2’
    CRS-2676: Start of ‘ora.gipcd’ on ‘rac2’ succeeded
    CRS-2676: Start of ‘ora.mdnsd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘rac2’
    CRS-2676: Start of ‘ora.gpnpd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘rac2’
    CRS-2676: Start of ‘ora.cssdmonitor’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.cssd’ on ‘rac2’
    CRS-2672: Attempting to start ‘ora.diskmon’ on ‘rac2’
    CRS-2676: Start of ‘ora.diskmon’ on ‘rac2’ succeeded
    CRS-2676: Start of ‘ora.cssd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.ctssd’ on ‘rac2’
    CRS-2676: Start of ‘ora.ctssd’ on ‘rac2’ succeeded

    ASM created and started successfully.

    DiskGroup DATA created successfully.

    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user ‘root’, privgrp ‘root’..
    Operation successful.
    CRS-2672: Attempting to start ‘ora.crsd’ on ‘rac2’
    CRS-2676: Start of ‘ora.crsd’ on ‘rac2’ succeeded
    Successful addition of voting disk 3c3d6e20a8f84f39bf3484249fe6cf8b.
    Successfully replaced voting disk group with +DATA.
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    — —– —————– ——— ———
    1. ONLINE 3c3d6e20a8f84f39bf3484249fe6cf8b (ORCL:DISK1) [DATA]
    Located 1 voting disk(s).
    CRS-2673: Attempting to stop ‘ora.crsd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.crsd’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.asm’ on ‘rac2’
    CRS-2677: Stop of ‘ora.asm’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.ctssd’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.cssdmonitor’ on ‘rac2’
    CRS-2677: Stop of ‘ora.cssdmonitor’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.cssd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.cssd’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.gpnpd’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.gipcd’ on ‘rac2’ succeeded
    CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘rac2’
    CRS-2677: Stop of ‘ora.mdnsd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘rac2’
    CRS-2676: Start of ‘ora.mdnsd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.gipcd’ on ‘rac2’
    CRS-2676: Start of ‘ora.gipcd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘rac2’
    CRS-2676: Start of ‘ora.gpnpd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘rac2’
    CRS-2676: Start of ‘ora.cssdmonitor’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.cssd’ on ‘rac2’
    CRS-2672: Attempting to start ‘ora.diskmon’ on ‘rac2’
    CRS-2676: Start of ‘ora.diskmon’ on ‘rac2’ succeeded
    CRS-2676: Start of ‘ora.cssd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.ctssd’ on ‘rac2’
    CRS-2676: Start of ‘ora.ctssd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.asm’ on ‘rac2’
    CRS-2676: Start of ‘ora.asm’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.crsd’ on ‘rac2’
    CRS-2676: Start of ‘ora.crsd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.evmd’ on ‘rac2’
    CRS-2676: Start of ‘ora.evmd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.asm’ on ‘rac2’
    CRS-2676: Start of ‘ora.asm’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.DATA.dg’ on ‘rac2’
    CRS-2676: Start of ‘ora.DATA.dg’ on ‘rac2’ succeeded
    PRCR-1079 : Failed to start resource ora.scan1.vip
    CRS-5005: IP Address: 192.168.161.201 is already in use in the network
    CRS-2674: Start of ‘ora.scan1.vip’ on ‘rac2’ failed
    CRS-2632: There are no more servers to try to place resource ‘ora.scan1.vip’ on that would satisfy its placement policy

    start scan … failed
    Preparing packages for installation…
    cvuqdisk-1.0.7-1
    Configure Oracle Grid Infrastructure for a Cluster … failed
    Updating inventory properties for clusterware
    Starting Oracle Universal Installer…

    Checking swap space: must be greater than 500 MB. Actual 2003 MB Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /oracle/app/oraInventory
    ‘UpdateNodeList’ was successful.

    Comment by Tjay | February 29, 2012 | Reply

    • Hi,

      Try to figure out the error. This should not happen if the GI prerequisites are met. Google it or use MOS.

      PRCR-1079 : Failed to start resource ora.scan1.vip
      CRS-5005: IP Address: 192.168.161.201 is already in use in the network
      CRS-2674: Start of ‘ora.scan1.vip’ on ‘rac2′ failed
      CRS-2632: There are no more servers to try to place resource ‘ora.scan1.vip’ on that would satisfy its placement policy

      start scan … failed

      Should not be there!!!

      How do you define the SCANs? How do you define the VIPs? What is the /etc/hosts content?

      What are the public and private interface names on each node?

      Look at the network configuration.

      Does cluvfy show any errors if you run the following?

      cluvfy stage -pre crsinst -n n1,n2 -verbose
      cluvfy stage -post hwos -n n1,n2 -verbose

      Figure it out!

      Regards,

      Comment by gjilevski | February 29, 2012 | Reply

    • Hi,

      Compare your root.sh output with the output from the article that you use.

      Regards,

      Comment by gjilevski | February 29, 2012 | Reply

    • Hi,

      Did you have shared disks?

      On the second node it should recognise the vote disk created on node 1 instead of creating it again on node 2.

      Instead you have on node 2

      Successful addition of voting disk 3c3d6e20a8f84f39bf3484249fe6cf8b.
      Successfully replaced voting disk group with +DATA.
      CRS-4266: Voting file(s) successfully replaced
      ## STATE File Universal Id File Name Disk group
      – —– —————– ——— ———
      1. ONLINE 3c3d6e20a8f84f39bf3484249fe6cf8b (ORCL:DISK1) [DATA]

      Check the shared storage configuration. On node 2 it should join the cluster partially created by the node 1 root.sh script.

      Regards,

      Comment by gjilevski | February 29, 2012 | Reply

  16. Thanks for your comments gjilevski.
    I will double-check the shared disk configuration.

    Comment by Tjay | February 29, 2012 | Reply

  17. Hi,

    I've sorted out the shared disk issue.
    However, I am getting the error below on rac2.

    Have you ever experienced an error like this?

    [root@RAC2 grid]# ./root.sh
    Running Oracle 11g root.sh script…

    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle/app/11.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    Copying dbhome to /usr/local/bin …
    Copying oraenv to /usr/local/bin …
    Copying coraenv to /usr/local/bin …

    Creating /etc/oratab file…
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2012-03-02 13:33:18: Parsing the host name
    2012-03-02 13:33:18: Checking for super user privileges
    2012-03-02 13:33:18: User has super user privileges
    Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user ‘root’, privgrp ‘root’..
    Operation successful.
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    acfsroot: ACFS-9301: ADVM/ACFS installation can not proceed:

    acfsroot: ACFS-9302: No installation files found at /oracle/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5uek-x86_64/bin.

    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘rac2’
    CRS-2676: Start of ‘ora.mdnsd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.gipcd’ on ‘rac2’
    CRS-2676: Start of ‘ora.gipcd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘rac2’
    CRS-2676: Start of ‘ora.gpnpd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘rac2’
    CRS-2676: Start of ‘ora.cssdmonitor’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.cssd’ on ‘rac2’
    CRS-2672: Attempting to start ‘ora.diskmon’ on ‘rac2’
    CRS-2676: Start of ‘ora.diskmon’ on ‘rac2’ succeeded
    CRS-2676: Start of ‘ora.cssd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.ctssd’ on ‘rac2’
    CRS-2676: Start of ‘ora.ctssd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.asm’ on ‘rac2’
    CRS-2676: Start of ‘ora.asm’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.crsd’ on ‘rac2’
    CRS-2676: Start of ‘ora.crsd’ on ‘rac2’ succeeded
    CRS-2672: Attempting to start ‘ora.evmd’ on ‘rac2’
    CRS-2676: Start of ‘ora.evmd’ on ‘rac2’ succeeded
    Timed out waiting for the CRS stack to start.

    [root@RAC2 grid]# ps -ef |grep pmon
    oracle 11121 1 0 13:36 ? 00:00:00 asm_pmon_+ASM2
    root 11585 4295 0 13:38 pts/1 00:00:00 grep pmon

    Comment by Tjay | March 2, 2012 | Reply

    • Hi,

      Interesting. Do you have enough memory? What is the size of the VM?

      Try to get GI installed and resize the VMs later.

      Check from the first node

      crsctl check cluster -all
      crsctl stat res -t

      Do you see the resources on the second node? It could be a transient or intermittent issue resulting from an undersized VM.

      What do you see on the second node if you issue

      crsctl check cluster -all
      crsctl stat res -t

      Regards,

      Comment by gjilevski | March 2, 2012 | Reply

    • Hi,

      I have seen it, and in some cases it is OK. If you see all the resources and the cluster is healthy, it is fine.

      Otherwise re-run the root.sh script on node 2 after rebooting node 2.

      Regards,

      Comment by gjilevski | March 2, 2012 | Reply

  18. hi

    Below are the results on rac1 and rac2. I have allocated 2GB of memory to each VM.

    [oracle@RAC1 ~]$ crsctl check cluster -all
    **************************************************************
    rac1:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************
    rac2:
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************

    oracle@RAC1 ~]$ crs_stat -t
    Name Type Target State Host
    ————————————————————
    ora.DATA.dg ora….up.type ONLINE ONLINE rac1
    ora….N1.lsnr ora….er.type ONLINE ONLINE rac1
    ora.asm ora.asm.type ONLINE ONLINE rac1
    ora.eons ora.eons.type ONLINE ONLINE rac1
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora….network ora….rk.type ONLINE ONLINE rac1
    ora.oc4j ora.oc4j.type OFFLINE OFFLINE
    ora.ons ora.ons.type ONLINE ONLINE rac1
    ora….SM1.asm application ONLINE ONLINE rac1
    ora.rac1.gsd application OFFLINE OFFLINE
    ora.rac1.ons application ONLINE ONLINE rac1
    ora.rac1.vip ora….t1.type ONLINE ONLINE rac1
    ora.scan1.vip ora….ip.type ONLINE ONLINE rac1

    [oracle@RAC2 ~]$ crsctl check cluster -all
    **************************************************************
    rac1:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************
    rac2:
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************

    [oracle@RAC2 ~]$ crs_stat -t
    CRS-0184: Cannot communicate with the CRS daemon.

    Comment by Tjay | March 2, 2012 | Reply

    • Hi,

      I have mine with 4GB. It is going to be a real challenge to get Oracle RAC 11gR2 installed on two VMs with 2GB each.

      See if you could resize the VMs to something like 4GB to stay out of similar troubles.

      Regards,

      Comment by gjilevski | March 2, 2012 | Reply

  19. Hi

    I finally finished the GI installation successfully on both nodes (I have increased the memory to 3GB on both nodes).
    I shut down both VMs and took a backup.

    When I started them again, none of the RAC resources came up.

    For a standalone database, I use the “crsctl start has” command to start everything.

    How about in this case?

    [oracle@rac1 ~]$ crsctl start has
    CRS-4563: Insufficient user privileges.

    CRS-4000: Command Start failed, or completed with errors.

    [oracle@rac1 ~]$ ps -ef |grep pmon
    oracle 6819 6628 0 14:22 pts/1 00:00:00 grep pmon

    Comment by Tjay | March 8, 2012 | Reply

    • Hi,

      I strongly recommend looking at the 11gR2 manuals, as it will make things much easier for you!

      1. Oracle® Clusterware Administration and Deployment Guide 11g Release 2 (11.2)
      2. Oracle® Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2)
      3. Oracle® Automatic Storage Management Administrator’s Guide 11g Release 2 (11.2)

      In a GI cluster environment you do not use ‘crsctl start has’

      Look at the GI components in the manual:
      1. CRS stack
      2. High availability stack

      You use the following commands as root ONLY.

      crsctl start cluster -all
      crsctl start crs

      Checking can be done as root or as grid/oracle:

      crsctl check cluster -all
      crsctl check crs

      So to begin with, try to start it as root manually and verify the cluster.

      cluvfy stage -post crsinst -n all (as grid)
      crsctl stat res -t (as grid or root)

      If not, repeat the installation after cleanup on node1 and node2. If you still have OUI waiting on node 1, rerun root.sh on the second node after cleanup on node 2.
      Regards,

      Comment by gjilevski | March 8, 2012 | Reply

  20. Hi

    Thanks a lot for the useful information.
    root.sh was executed successfully on node1 and node2.
    Then I bounced the VMs.

    I am unable to start any resources after the bounce; any idea what the issue might be?

    [root@rac1 bin]# ./crsctl stat res -t
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4000: Command Status failed, or completed with errors.

    [root@rac1 bin]# ./crsctl check crs
    CRS-4638: Oracle High Availability Services is online
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online

    [oracle@rac1 ~]$ cluvfy stage -post crsinst -n all

    Performing post-checks for cluster services setup

    Checking node reachability…
    Node reachability check passed from node “rac1”

    Checking user equivalence…
    User equivalence check passed for user “oracle”
    Checking time zone consistency…
    Time zone consistency check passed.

    Checking Cluster manager integrity…

    Checking CSS daemon…
    Oracle Cluster Synchronization Services appear to be online.

    Cluster manager integrity check passed

    UDev attributes check for OCR locations started…
    UDev attributes check passed for OCR locations

    UDev attributes check for Voting Disk locations started…
    UDev attributes check passed for Voting Disk locations

    Default user file creation mask check passed

    Checking cluster integrity…

    Cluster integrity check passed

    Checking OCR integrity…

    Checking the absence of a non-clustered configuration…
    All nodes free of non-clustered, local-only configurations

    ERROR:

    PRVF-4194 : Asm is not running on any of the nodes. Verification cannot proceed.

    OCR integrity check failed

    Checking CRS integrity…

    ERROR:
    PRVF-5305 : The Oracle clusterware is not healthy on node “rac2”
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online

    ERROR:
    PRVF-5305 : The Oracle clusterware is not healthy on node “rac1”
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online

    CRS integrity check failed

    Checking node application existence…

    Checking existence of VIP node application (required)
    Check failed.
    Check failed on nodes:
    rac2,rac1

    Checking existence of ONS node application (optional)
    Check ignored.
    Check failed on nodes:
    rac2,rac1

    Checking existence of GSD node application (optional)
    Check ignored.
    Check failed on nodes:
    rac2,rac1

    Checking existence of EONS node application (optional)
    Check ignored.
    Check failed on nodes:
    rac2,rac1

    Checking existence of NETWORK node application (optional)
    Check ignored.
    Check failed on nodes:
    rac2,rac1

    Checking Single Client Access Name (SCAN)…

    ERROR:
    PRVF-5054 : Verification of SCAN VIP and Listener setup failed
    PRCR-1068 : Failed to query resources
    Cannot communicate with crsd
    OCR detected on ASM. Running ACFS Integrity checks…

    Starting check to see if ASM is running on all cluster nodes…
    PRVF-5137 : Failure while checking ASM status on node “rac2”
    PRVF-5137 : Failure while checking ASM status on node “rac1”

    Starting Disk Groups check to see if at least one Disk Group configured…
    PRVF-5112 : An Exception occurred while checking for Disk Groups
    PRVF-5114 : Disk Group check failed. No Disk Groups configured

    Task ACFS Integrity check failed

    Checking Oracle Cluster Voting Disk configuration…

    Oracle Cluster Voting Disk configuration check passed

    User “oracle” is not part of “root” group. Check passed

    Checking if Clusterware is installed on all nodes…
    Check of Clusterware install passed

    Checking if CTSS Resource is running on all nodes…
    CTSS resource check passed

    Querying CTSS for time offset on all nodes…
    Query of CTSS for time offset passed

    Check CTSS state started…
    CTSS is in Active state. Proceeding with check of clock time offsets on all nodes…
    Check of clock time offsets passed

    Oracle Cluster Time Synchronization Services check passed

    Post-check for cluster services setup was unsuccessful on all the nodes.

    Comment by Tjay | March 8, 2012 | Reply

    • Hi,

      Did OUI complete successfully? Did root.sh complete successfully? Did you verify it?

      Check the clusterware.

      ./crsctl check cluster -all

      After that verify what is going on

      [root@rac1 bin]# ./crsctl stat res -t

      If the root.sh scripts on both nodes completed successfully, I guess it might simply take a longer time to get Grid Infrastructure started and running.

      If it is stopped, you can try to start it manually.

      ./crsctl start cluster -all

      Check $GI_HOME/log/<hostname>/alert<hostname>.log for any messages indicating a potential problem. Tail it and see what is going on.

      Regards,

      Comment by gjilevski | March 8, 2012 | Reply

  21. Hi
    Yes, root.sh completed successfully on both nodes.

    [root@rac1 bin]# ./crsctl check cluster -all
    **************************************************************
    rac1:
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************
    rac2:
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************

    [root@rac1 bin]# ./crsctl start cluster -all
    CRS-2672: Attempting to start ‘ora.asm’ on ‘rac1’
    CRS-2672: Attempting to start ‘ora.asm’ on ‘rac2’
    CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac2/agent/ohasd/oraagent_oracle/oraagent_oracle.log”
    CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac1/agent/ohasd/oraagent_oracle/oraagent_oracle.log”
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    Linux-x86_64 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    ORA-27154: post/wait create failed
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    Linux-x86_64 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac1/agent/ohasd/oraagent_oracle/oraagent_oracle.log”
    ORA-27154: post/wait create failed
    CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac2/agent/ohasd/oraagent_oracle/oraagent_oracle.log”
    CRS-2674: Start of ‘ora.asm’ on ‘rac1’ failed
    CRS-2679: Attempting to clean ‘ora.asm’ on ‘rac1’
    CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac1/agent/ohasd/oraagent_oracle/oraagent_oracle.log”
    CRS-2674: Start of ‘ora.asm’ on ‘rac2’ failed
    CRS-2679: Attempting to clean ‘ora.asm’ on ‘rac2’
    CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac2/agent/ohasd/oraagent_oracle/oraagent_oracle.log”
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    Linux-x86_64 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    Linux-x86_64 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac2/agent/ohasd/oraagent_oracle/oraagent_oracle.log”
    CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac1/agent/ohasd/oraagent_oracle/oraagent_oracle.log”
    CRS-2681: Clean of ‘ora.asm’ on ‘rac2’ succeeded
    CRS-2681: Clean of ‘ora.asm’ on ‘rac1’ succeeded

    Comment by Tjay | March 8, 2012 | Reply

    • Hi,

      Debug further!! If you want to learn, now you have the chance. Where else will somebody let you practice for free?!

      How about the content of all logs like
      /oracle/app/11.2.0/grid/log/rac2/agent/ohasd/oraagent_oracle/oraagent_oracle.log
      /oracle/app/11.2.0/grid/log/rac1/agent/ohasd/oraagent_oracle/oraagent_oracle.log
      Why is it happening?

      What is the content of the cluster alert log?

      Look at it and find out why. It will give you a clue.

      ORA-01034: ORACLE not available
      ORA-27101: shared memory realm does not exist
      Linux-x86_64 Error: 2: No such file or directory
      Process ID: 0
      Session ID: 0 Serial number: 0
      ORA-27154: post/wait create failed

      You have to deal with it.

      Good luck.

      Regards,

      Comment by gjilevski | March 8, 2012 | Reply

  22. Hi

    I've already checked the “oraagent_oracle.log” logs. I see the same error.
    I am not sure how to find the root cause of the issue.

    vi oraagent_oracle.log

    2012-03-09 10:17:54.131: [ AGFW][1188714816] sending status msg [ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    Linux-x86_64 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    ] for clean for resource: ora.asm 1 1
    2012-03-09 10:17:54.132: [ AGFW][1155144000] Agent sending reply for: RESOURCE_CLEAN[ora.asm 1 1] ID 4100:3474
    2012-03-09 10:17:54.134: [ora.asm][1188714816] [clean] InstConnection:connect: excp
    2012-03-09 10:17:54.134: [ora.asm][1188714816] [clean] InstAgent::stop: connect1 errcode 1034
    2012-03-09 10:17:54.137: [ora.asm][1188714816] [clean] makeConnectStr = (DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/oracle/app/11.2.0/grid/bin/oracle)(ARGV0=oracle+ASM1)(ENVS=’ORACLE_HOME=/oracle/app/11.2.0/grid,ORACLE_SID=+ASM1′)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))’))(CONNECT_DATA=(SID=+ASM1)))
    2012-03-09 10:17:54.138: [ora.asm][1188714816] [clean] InstAgent::stop: connect2 oracleHome /oracle/app/11.2.0/grid oracleSid +ASM1
    2012-03-09 10:17:54.138: [ora.asm][1188714816] [clean] InstConnection::connectInt: server not attached
    2012-03-09 10:17:54.160: [ora.asm][1188714816] [clean] connect successful
    2012-03-09 10:17:54.160: [ora.asm][1188714816] [clean] AsmAgent::stopCbk: {
    2012-03-09 10:17:54.161: [ora.asm][1188714816] [clean] AsmAgent::stop: }
    2012-03-09 10:17:54.161: [ora.asm][1188714816] [clean] InstAgent::stop: }
    2012-03-09 10:17:54.161: [ora.asm][1188714816] [clean] clean }
    2012-03-09 10:17:54.161: [ora.asm][1188714816] [clean] clsn_agent::clean }
    2012-03-09 10:17:54.161: [ AGFW][1188714816] Command: clean for resource: ora.asm 1 1 completed with status: SUCCESS
    2012-03-09 10:17:54.162: [ AGFW][1155144000] Agent sending reply for: RESOURCE_CLEAN[ora.asm 1 1] ID 4100:3474
    2012-03-09 10:17:54.164: [ AGFW][1146751296] Executing command: check for resource: ora.asm 1 1
    2012-03-09 10:17:54.164: [ora.asm][1146751296] [check] Gimh::check OH /oracle/app/11.2.0/grid SID +ASM1
    2012-03-09 10:17:54.165: [ora.asm][1146751296] [check] Gimh::check condition (GIMH_NEXT_NUM) 0 exists
    2012-03-09 10:17:54.165: [ora.asm][1146751296] [check] (:CLSN00006:)AsmAgent::check failed gimh state 0
    2012-03-09 10:17:54.165: [ora.asm][1146751296] [check] Exception type=2 string=CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac1/agent/ohasd/oraagent_oracle/oraagent_oracle.log”

    2012-03-09 10:17:54.165: [ AGFW][1146751296] sending status msg [CRS-5011: Check of resource “+ASM” failed: details at “(:CLSN00006:)” in “/oracle/app/11.2.0/grid/log/rac1/agent/ohasd/oraagent_oracle/oraagent_oracle.log”
    ] for check for resource: ora.asm 1 1
    2012-03-09 10:17:54.165: [ora.asm][1146751296] [check] Crs returned nodeName rac1
    2012-03-09 10:17:54.166: [ora.asm][1146751296] [check] CrsCmd::ClscrsCmdData filter on LAST_SERVER eq rac1
    2012-03-09 10:17:54.166: [ora.asm][1146751296] [check] CrsCmd::ClscrsCmdData filter on NAME eq ora.asm
    2012-03-09 10:17:54.166: [ora.asm][1146751296] [check] CrsCmd::stat resName ora.asm statflag 0 useFilter 0
    2012-03-09 10:17:54.166: [ora.asm][1146751296] [check] CrsCmd::ClscrsCmdData::stat entity 1 statflag 0 useFilter 0
    2012-03-09 10:17:54.166: [ AGFW][1155144000] Agent sending reply for: RESOURCE_CLEAN[ora.asm 1 1] ID 4100:3474
    2012-03-09 10:17:54.506: [ COMMCRS][1289427264]clsc_connect: (0x2ee9af0) no listener at (ADDRESS=(PROTOCOL=IPC)(KEY=CRSD_UI_SOCKET))

    2012-03-09 10:17:54.507: [ USRTHRD][1146751296] clscconnect failed with clsc ret 9
    2012-03-09 10:17:54.507: [ USRTHRD][1146751296] error connecting to CRSD at [(ADDRESS=(PROTOCOL=IPC)(KEY=CRSD_UI_SOCKET))] clsccon 184

    Comment by Tjay | March 8, 2012 | Reply

    • Hi,

      cat /oracle/app/11.2.0/grid/log/rac1/agent/ohasd/oraagent_oracle/oraagent_oracle.log

      cat /oracle/app/11.2.0/grid/log/rac2/agent/ohasd/oraagent_oracle/oraagent_oracle.log

      Try to start ASM manually and see the reason for the ORA-27101

      2012-03-09 10:17:54.131: [ AGFW][1188714816] sending status msg [ORA-01034: ORACLE not available
      ORA-27101: shared memory realm does not exist
      Linux-x86_64 Error: 2: No such file or directory
      Process ID: 0
      Session ID: 0 Serial number: 0
      ] for clean for resource: ora.asm 1 1

      Regards,

      Comment by gjilevski | March 8, 2012 | Reply

    • Hi,

      Did you have shared memory configured in /etc/fstab?

      Put in /etc/fstab

      shmfs /dev/shm shm size=2g 0 0
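
      As a side note (a sketch, not a requirement): on a running node the tmpfs size can usually be changed without a reboot and then verified, though the reboot suggested next is the safer path:

      mount -o remount,size=2g /dev/shm
      df -h /dev/shm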

      Reboot all nodes and try again.

      Regards,

      Comment by gjilevski | March 8, 2012 | Reply

  23. Hi

    I just increased the shared memory from 1.5GB to 2GB; still no luck.

    [root@rac1 bin]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00
    15G 3.3G 11G 24% /
    /dev/sda1 99M 23M 71M 25% /boot
    /dev/sdb1 20G 9.9G 8.8G 53% /oracle
    shmfs 2.0G 0 2.0G 0% /dev/shm

    Anyway, I don't think I can find what the issue is, lol.
    I'd better start from scratch.

    Many thanks for your help

    Hope you enjoy rest of the day

    Comment by Tjay | March 9, 2012 | Reply

    • Hi,

      Did you reboot after resizing the shared memory? If so, look into

      ORA-27101: shared memory realm does not exist
      Linux-x86_64 Error: 2: No such file or directory
      Process ID: 0
      Session ID: 0 Serial number: 0

      Try to start ASM using sqlplus and see what the outcome is; a sketch follows below. It seems that ASM cannot start, and you need to find out why. Look at it as if it were a single-instance issue until you manage to start it on node 1 and node 2.
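
      A minimal sketch of a manual start, run as the user that installed GI, assuming the GI home and ASM SID that appear in your logs:

      export ORACLE_HOME=/oracle/app/11.2.0/grid
      export ORACLE_SID=+ASM1
      $ORACLE_HOME/bin/sqlplus / as sysasm

      SQL> startup

      Any ORA- error raised here is the real reason ASM will not start.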

      Regards,

      Comment by gjilevski | March 9, 2012 | Reply

    • Hi,

      Did you use two users for the installation (grid/oracle) or a single oracle user?

      Log in as the user that installed GI and troubleshoot the ASM startup.

      Regards,

      Comment by gjilevski | March 9, 2012 | Reply

  24. By the way, I've discovered the errors below in the GI logs.
    I think the issue is related to the ASM disks.

    crsd(5072)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /oracle/app/11.2.0/grid/log/rac1/crsd/crsd.log.
    2012-03-09 11:01:03.911
    [ohasd(4586)]CRS-2765:Resource ‘ora.crsd’ has failed on server ‘rac1’.
    2012-03-09 11:01:05.111
    [crsd(5085)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /oracle/app/11.2.0/grid/log/rac1/crsd/crsd.log.
    2012-03-09 11:01:05.997

    ORA-15077: could not locate ASM instance serving a required diskgroup

    2012-03-09 11:01:21.799: [ CRSOCR][2308081392] OCR context init failure. Error: PROC-26: Error while accessing the physical storage ASM error [SLOS: cat=7, opn=kgfoAl06, dep=15077, loc=kgfokge
    ORA-15077: could not locate ASM instance serving a required diskgroup
    ] [7]

    Comment by Tjay | March 9, 2012 | Reply

    • Hi,

      1. Make sure that you have shmfs /dev/shm shm size=2g 0 0 in /etc/fstab on all nodes. Reboot all nodes.

      2. If ASM cannot start automatically, try to start it manually. Debug further why you get the error; it could be settings or permissions of files/directories.

      ORA-27101: shared memory realm does not exist
      Linux-x86_64 Error: 2: No such file or directory

      Regards,

      Comment by gjilevski | March 9, 2012 | Reply

  25. Thanks a lot gjilevski
    I will do more digging.

    Have an awesome day

    Comment by Tjay | March 9, 2012 | Reply

  26. Hi

    I did start from scratch.
    The GI installation and root.sh were successful on both nodes.

    During the database installation, I encountered the errors below.
    Have you ever experienced this?
    I tried twice from scratch; unfortunately I hit the same error each time.

    PRCR-1079: Failed to start resource ora.rac.db
    ORA-01092: Oracle instance terminated. Disconnection forced.
    ORA-00704: bootstrap process failure
    ORA-00704: bootstrap process failure
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01578: Oracle data block corrupted (file#1,block #337)
    ORA-01110: datafile 1: ‘+DATA/rac/datafile/system.256.777565859’
    Process ID: 11394
    Session ID: 1 Serial number:3

    CRS-2674: Start of ‘ora.rac.db’ on ‘rac2’ failed
    CRS-2632: There are no more servers to try to place resource ‘ora.rac.db’ on that would satisfy its placement policy.

    Comment by Tjay | March 10, 2012 | Reply

    • Hi,

      Did you install the RDBMS software only? If not, do so by using OUI to install the RDBMS software only on the two nodes.

      Use dbca to create a database. What template are you using? Did you try the custom template?

      Do not forget to back up the VMs. In case something goes wrong, you can use that backup to restore without redoing the entire installation from the very beginning.

      Do verify that the GI and RDBMS binaries are installed and configured properly.

      Regards,

      Comment by gjilevski | March 10, 2012 | Reply

  27. Many thanks for your assistance gjilevski.
    I finally got it sorted.

    Apologies for any inconvenience

    Comment by Tjay | March 13, 2012 | Reply

    • Hi,

      You are welcome. What was it that made the database creation fail with corruption?

      Regards,

      Comment by gjilevski | March 13, 2012 | Reply

  28. I am not entirely sure.
    I just updated the virtual machine configuration file with the settings from the link below and it worked.

    http://startoracle.com/2007/09/30/so-you-want-to-play-with-oracle-11gs-rac-heres-how/

    One last question:
    in a RAC environment there should be one database and multiple instances.

    If I perform a database shutdown on node1, it says database closed and dismounted.
    However, the database is not actually shut down; only the instance is. The database is still available on rac2.
    Is this normal?

    [oracle@rac1 oracle]$ . oraenv
    ORACLE_SID = [rac1] ?
    The Oracle base for ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1 is /oracle/app/oracle

    SQL> select * from v$active_instances;

    INST_NAME
    ——————————————————————————–
    rac1.localdomain:rac1
    rac2.localdomain:rac2

    SQL> shut immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.

    [oracle@rac2 ~]

    SQL> select open_mode,name from v$database;

    OPEN_MODE NAME
    ——————– ———
    READ WRITE RAC

    Comment by Tjay | March 13, 2012 | Reply

    • Hi,

      In RAC you have one database on shared storage that can be opened on each node. Thus, it provides High Availability (HA) in case of

      1. Node failure
      2. Instance failure

      With multiple open RAC instances you can implement, apart from HA, load balancing, that is, create a service with 2 preferred instances and balance connections across the two preferred instances.
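
      A minimal sketch (the service name is a placeholder; the database and instance names are taken from this thread):

      srvctl add service -d rac -s oltp_svc -r rac1,rac2
      srvctl start service -d rac -s oltp_svc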

      A RAC database is considered opened if it is opened in 1 or more instances.

      A RAC database is considered closed if it is closed on all instances.

      Look at the Oracle documentation here http://www.oracle.com/pls/db112/portal.portal_db?selected=16&frame=

      1. Real Application Clusters Administration and Deployment Guide
      2. Clusterware Administration and Deployment Guide
      3. Automatic Storage Management Administrator’s Guide

      You will save a lot of time discovering things that are already documented.

      Good luck.

      Regards,

      Comment by gjilevski | March 13, 2012 | Reply

  29. Thanks heaps, I will go through the documentation

    Comment by Tjay | March 13, 2012 | Reply

  30. Hi

    With regards to removing a node from the cluster, do I first need to remove the instance and then the node?

    I checked the doc but still have doubts.

    http://docs.oracle.com/cd/E11882_01/rac.112/e17264/addnodes.htm

    “Removing a Node From the Cluster
    Removing a node from the cluster can be as easy as simply shutting down the server. If the node was not pinned and does not host any Oracle databases using Oracle Database 11g release 1 or earlier, then the node is automatically removed from the cluster when it is shut down. If the node was pinned or if it hosts a database instance from previous releases, then explicit deletion is needed.”

    Comment by Tjay | March 20, 2012 | Reply

    • Hi,

      You will need to follow section 2 (Remove a node); a command sketch follows the list:

      1. Remove instance
      2. Remove RDBMS software
      3. Remove GI software
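
      A minimal sketch of the three steps (the node, database and home names are assumed from this thread; check the syntax against the docs):

      # 1. Remove the instance (run dbca from a surviving node as the oracle user)
      dbca -silent -deleteInstance -nodeList rac2 -gdbName rac -instanceName rac2 \
        -sysDBAUserName sys -sysDBAPassword ***

      # 2. Remove the RDBMS home (run on the node being removed)
      $ORACLE_HOME/deinstall/deinstall -local

      # 3. Deconfigure GI (as root on the node being removed), then finish per the docs
      perl $GI_HOME/crs/install/rootcrs.pl -deconfig -force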

      Look at the manuals where the steps are explained.

      1. Real Application Clusters Administration and Deployment Guide
      2. Clusterware Administration and Deployment Guide
      3. Automatic Storage Management Administrator’s Guide

      Regards,

      Comment by gjilevski | March 20, 2012 | Reply

  31. Thanks.
    I followed the steps mentioned in the docs and removed everything on rac2.
    When I connect to the primary node, I still see rac2 resources.
    1)
    Is this expected? I presume I have to remove these resources manually.

    2) How come Oracle allows me to remove a node if I have only two nodes?
    I did the RAC installation on two nodes.
    If I remove a node, there will be one node left, which would make it a standalone database.
    How is this possible?

    [oracle@rac1 bin]$ crs_stat -t
    Name Type Target State Host
    ————————————————————
    ora.DATA.dg ora….up.type ONLINE ONLINE rac1
    ora….ER.lsnr ora….er.type OFFLINE OFFLINE
    ora….N1.lsnr ora….er.type ONLINE ONLINE rac1
    ora.asm ora.asm.type ONLINE ONLINE rac1
    ora.eons ora.eons.type ONLINE ONLINE rac1
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora….network ora….rk.type ONLINE ONLINE rac1
    ora.oc4j ora.oc4j.type OFFLINE OFFLINE
    ora.ons ora.ons.type ONLINE ONLINE rac1
    ora.rac.db ora….se.type ONLINE ONLINE rac1
    ora….SM1.asm application ONLINE ONLINE rac1
    ora….C1.lsnr application OFFLINE OFFLINE
    ora.rac1.gsd application OFFLINE OFFLINE
    ora.rac1.ons application ONLINE ONLINE rac1
    ora.rac1.vip ora….t1.type ONLINE ONLINE rac1
    ora….SM2.asm application ONLINE OFFLINE
    ora….C2.lsnr application OFFLINE OFFLINE
    ora.rac2.gsd application OFFLINE OFFLINE
    ora.rac2.ons application OFFLINE OFFLINE
    ora.scan1.vip ora….ip.type ONLINE ONLINE rac1

    Comment by Tjay | March 20, 2012 | Reply

    • Hi,

      Reboot the nodes. Do you still see the outcome you saw before?

      1. If you have properly followed the procedure for removing a node, you will NOT see node 2. Do you see node 2 when you run the following command from node 1?
      a) olsnodes -s -t

      What does cluvfy say if you run the following command after removing the node?

      b) cluvfy stage -post nodedel -n node2

      These two commands (a,b) are to verify node deletion.

      2. Wrong. You have a GI cluster installation and an RDBMS RAC installation. Removing a node will make it a one-node cluster. In the real world that does not make much sense, but it lets you easily add nodes later to extend the cluster. It is a single-instance clustered database protected by GI. This is NOT a standalone database.
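
      A quick way to see this (the database name is taken from this thread):

      srvctl config database -d rac     # still a clustered configuration
      srvctl status database -d rac     # a single running instance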

      Regards,

      Comment by gjilevski | March 20, 2012 | Reply

  32. Hi

    I've just rebooted the server and rac2 is no longer there.

    [oracle@rac1 ~]$ olsnodes -s -t
    rac1 Active Unpinned

    Thanks.

    You are a RAC master. Thanks for sharing your knowledge with the public.

    Much appreciated

    Comment by Tjay | March 21, 2012 | Reply

  33. Hi

    With regards to adding a new node. http://docs.oracle.com/cd/E11882_01/rac.112/e17264/addnodes.htm#CHDFBGFJ

    I have successfully extended the Oracle Grid Infrastructure home to the new node.
    root.sh was also successful on the new node.

    Now I am trying to extend the Oracle RAC home directory.
    However, I am constantly getting the error below. Any idea what the issue is?

    cd /oracle/app/oracle/product/11.2.0/dbhome_1/oui/bin

    [oracle@rac1 bin]$ ./addNode.sh -silent “CLUSTER_NEW_NODES={rac2}”
    Starting Oracle Universal Installer…

    Checking swap space: must be greater than 500 MB. Actual 4991 MB Passed
    Oracle Universal Installer, Version 11.2.0.1.0 Production
    Copyright (C) 1999, 2009, Oracle. All rights reserved.

    Performing tests to see whether nodes rac2,rac2 are available
    ……………………………………………………… 100% Done.

    SEVERE:The new nodes ‘rac2’ are already part of the cluster.

    [oracle@rac1 bin]$ olsnodes
    rac1
    rac2

    [oracle@rac2 oracle]$ ps -ef |grep pmon
    oracle 5783 1 0 16:04 ? 00:00:00 asm_pmon_+ASM2
    oracle 7760 7417 0 16:28 pts/1 00:00:00 grep pmon
    [oracle@rac2 oracle]$ . oraenv
    ORACLE_SID = [rac2] ? +ASM2
    The Oracle base for ORACLE_HOME=/oracle/app/11.2.0/grid is /oracle/app/oracle
    [oracle@rac2 oracle]$ crs_stat -t
    Name Type Target State Host
    ————————————————————
    ora.DATA.dg ora….up.type ONLINE ONLINE rac1
    ora….ER.lsnr ora….er.type ONLINE ONLINE rac1
    ora….N1.lsnr ora….er.type ONLINE ONLINE rac1
    ora.asm ora.asm.type ONLINE ONLINE rac1
    ora.eons ora.eons.type ONLINE ONLINE rac1
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora….network ora….rk.type ONLINE ONLINE rac1
    ora.oc4j ora.oc4j.type OFFLINE OFFLINE
    ora.ons ora.ons.type ONLINE ONLINE rac1
    ora.rac.db ora….se.type ONLINE ONLINE rac1
    ora….SM1.asm application ONLINE ONLINE rac1
    ora….C1.lsnr application ONLINE ONLINE rac1
    ora.rac1.gsd application OFFLINE OFFLINE
    ora.rac1.ons application ONLINE ONLINE rac1
    ora.rac1.vip ora….t1.type ONLINE ONLINE rac1
    ora….SM2.asm application ONLINE ONLINE rac2
    ora….C2.lsnr application ONLINE ONLINE rac2
    ora.rac2.gsd application OFFLINE OFFLINE
    ora.rac2.ons application ONLINE ONLINE rac2
    ora.rac2.vip ora….t1.type ONLINE ONLINE rac2
    ora.scan1.vip ora….ip.type ONLINE ONLINE rac1

    Comment by Tjay | March 21, 2012 | Reply

    • Hi,

      Which user, $ORACLE_HOME and ORACLE_SID are you using? Verify with pwd and echo of the environment variables.

      You should be running it from $RDBMS_ORACLE_HOME/oui/bin as the oracle RDBMS user, with $ORACLE_HOME and ORACLE_SID set.

      Follow item 1. https://gjilevski.wordpress.com/2011/11/17/adding-and-deleting-a-node-from-oracle-rac-11-2-0-3/

      1.1 Add the GI home. This was successful.
      1.2 Run addNode.sh from the RDBMS binaries as the user that installed the RDBMS, with the environment properly set. It should look like:

      [oracle@raclinux2 bin]$ ./addNode.sh -silent “CLUSTER_NEW_NODES={raclinux3}” “CLUSTER_NEW_VIRTUAL_HOSTNAMES={raclinux3-vip}”

      1.3 Add the instance (still to be done).

      If the problem persists and you are using the correct binary from the correct location, what is the content of your inventory on all nodes?

      For example

      
      [root@raclinux1 ContentsXML]# pwd
      /u01/app/oraInventory/ContentsXML
      [root@raclinux1 ContentsXML]# ls
      comps.xml  inventory.xml  libs.xml
      [root@raclinux1 ContentsXML]# cat inventory.xml
      <?xml version="1.0" standalone="yes" ?>
      <!-- Copyright (c) 1999, 2011, Oracle. All rights reserved. -->
      <!-- Do not modify the contents of this file by hand. -->
      <INVENTORY>
      <VERSION_INFO>
         <SAVED_WITH>11.2.0.3.0</SAVED_WITH>
         <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
      </VERSION_INFO>
      <HOME_LIST>
      <HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1">
         <NODE_LIST>
            <NODE NAME="raclinux1"/>
            <NODE NAME="raclinux2"/>
         </NODE_LIST>
      </HOME>
      <HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
         <NODE_LIST>
            <NODE NAME="raclinux1"/>
            <NODE NAME="raclinux2"/>
         </NODE_LIST>
      </HOME>
      <HOME NAME="OraDb10g_home1" LOC="/u01/app/oracle/product/10.2.0/db_1" TYPE="O" IDX="3">
         <NODE_LIST>
            <NODE NAME="raclinux1"/>
            <NODE NAME="raclinux2"/>
         </NODE_LIST>
      </HOME>
      <HOME NAME="Ora11g_gridinfrahome2" LOC="/u01/app/11.2.0.3/grid" TYPE="O" IDX="4" CRS="true">
         <NODE_LIST>
            <NODE NAME="raclinux1"/>
            <NODE NAME="raclinux2"/>
            <NODE NAME="raclinux3"/>
         </NODE_LIST>
      </HOME>
      <HOME NAME="OraDb11g_home2" LOC="/u01/app/oracle/product/11.2.0/db_3" TYPE="O" IDX="5">
         <NODE_LIST>
            <NODE NAME="raclinux1"/>
            <NODE NAME="raclinux2"/>
            <NODE NAME="raclinux3"/>
         </NODE_LIST>
      </HOME>
      </HOME_LIST>
      <COMPOSITEHOME_LIST>
      </COMPOSITEHOME_LIST>
      </INVENTORY>

      Make sure that on each node (Node 1 and Node 2) all nodes are registered properly for each $OH in the inventory.

      Regards,

      Comment by gjilevski | March 21, 2012 | Reply

  34. Hello

    [oracle@rac1 bin]$ . oraenv
    ORACLE_SID = [rac] ? rac1
    The Oracle base for ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1 is /oracle/app/oracle
    [oracle@rac1 bin]$ pwd
    /oracle/app/oracle/product/11.2.0/dbhome_1/oui/bin
    [oracle@rac1 bin]$ echo $ORACLE_HOME
    /oracle/app/oracle/product/11.2.0/dbhome_1
    [oracle@rac1 bin]$ echo $ORACLE_SID
    rac1
    [oracle@rac1 bin]$ ./addNode.sh -silent “CLUSTER_NEW_NODES={rac2}”
    Starting Oracle Universal Installer…

    Checking swap space: must be greater than 500 MB. Actual 4991 MB Passed
    Oracle Universal Installer, Version 11.2.0.1.0 Production
    Copyright (C) 1999, 2009, Oracle. All rights reserved.

    Performing tests to see whether nodes rac2,rac2 are available
    ……………………………………………………… 100% Done.

    SEVERE:The new nodes ‘rac2’ are already part of the cluster.

    [oracle@rac1 ContentsXML]$ cat inventory.xml

    [The XML markup was stripped by the comment form; only the version strings 11.2.0.1.0 and 2.1.0.6.0 came through.]

    I can confirm that there are no RDBMS binaries on the new node.

    [oracle@rac2 oracle]$ ls -l /oracle/app/oracle/product/11.2.0/dbhome_1

    ls: /oracle/app/oracle/product/11.2.0/dbhome_1: No such file or directory

    Comment by Tjay | March 21, 2012 | Reply

    • Hi,

      I cannot read your XML file. Please enclose it with [sourcecode] … [/sourcecode] tags.

      Post the file on node1 AND node2

      Pay attention to the RDBMS homes in the file on each node. Compare the files on the two nodes.

      Regards,

      Comment by gjilevski | March 21, 2012 | Reply

    • Hi,

      How many nodes do you have listed for $OH=/oracle/app/oracle/product/11.2.0/dbhome_1 in the inventory on

      1. Node 1
      2. Node 2

      Post the XML with the tags (without the tags the content is not displayed properly).

      Regards,

      Comment by gjilevski | March 21, 2012 | Reply

  35. [oracle@rac1 ContentsXML]$ cat inventory.xml

    [The XML markup was again stripped by the comment form; only the version strings 11.2.0.1.0 and 2.1.0.6.0 came through. The file is reposted in full below.]

    Comment by Tjay | March 21, 2012 | Reply

    • Hi,

      Compare the two files.

      Regards,

      Comment by gjilevski | March 22, 2012 | Reply

  36.  
    [oracle@rac1 ContentsXML]$ cat inventory.xml
    <?xml version="1.0" standalone="yes" ?>
    <!-- Copyright (c) 1999, 2009, Oracle. All rights reserved. -->
    <!-- Do not modify the contents of this file by hand. -->
    <INVENTORY>
    <VERSION_INFO>
       <SAVED_WITH>11.2.0.1.0</SAVED_WITH>
       <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
    </VERSION_INFO>
    <HOME_LIST>
    <HOME NAME="Ora11g_gridinfrahome1" LOC="/oracle/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
       <NODE_LIST>
          <NODE NAME="rac1"/>
          <NODE NAME="rac2"/>
       </NODE_LIST>
    </HOME>
    <HOME NAME="OraDb11g_home1" LOC="/oracle/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
       <NODE_LIST>
          <NODE NAME="rac1"/>
          <NODE NAME="rac2"/>
       </NODE_LIST>
    </HOME>
    </HOME_LIST>
    </INVENTORY>
    
    

    Comment by Tjay | March 22, 2012 | Reply

    • Hi,

      On Node 1 you have

      
      <HOME NAME="OraDb11g_home1" LOC="/oracle/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
         <NODE_LIST>
            <NODE NAME="rac1"/>
            <NODE NAME="rac2"/>
      

      On Node 2 you have nothing for

      <HOME NAME="OraDb11g_home1" LOC="/oracle/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">

      Remove

      <NODE NAME="rac2"/>

      from that home's node list on Node 1, and make it look like the snippet in the next reply.

      Regards,

      Comment by gjilevski | March 22, 2012 | Reply

      • Hi,

        Make Node 1 look like:

        <HOME NAME="OraDb11g_home1" LOC="/oracle/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
           <NODE_LIST>
              <NODE NAME="rac1"/>
           </NODE_LIST>
        </HOME>

        Comment by gjilevski | March 22, 2012

      • Hi,

        Make a copy of the inventory before any editing!!!

        Regards,

        Comment by gjilevski | March 22, 2012

  37. This is the new node:

    
    [oracle@rac2 ContentsXML]$ cat inventory.xml
    <?xml version="1.0" standalone="yes" ?>
    <!-- Copyright (c) 1999, 2009, Oracle. All rights reserved. -->
    <!-- Do not modify the contents of this file by hand. -->
    <INVENTORY>
    <VERSION_INFO>
       <SAVED_WITH>11.2.0.1.0</SAVED_WITH>
       <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
    </VERSION_INFO>
    <HOME_LIST>
    <HOME NAME="Ora11g_gridinfrahome1" LOC="/oracle/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
       <NODE_LIST>
          <NODE NAME="rac1"/>
          <NODE NAME="rac2"/>
       </NODE_LIST>
    </HOME>
    </HOME_LIST>
    </INVENTORY>
    
    

    Comment by Tjay | March 22, 2012 | Reply

  38. Hi

    I tried, but still no luck (I even bounced the server).

    
    [oracle@rac1 ContentsXML]$ cat inventory.xml
    <?xml version="1.0" standalone="yes" ?>
    <!-- Copyright (c) 1999, 2009, Oracle. All rights reserved. -->
    <!-- Do not modify the contents of this file by hand. -->
    <INVENTORY>
    <VERSION_INFO>
       <SAVED_WITH>11.2.0.1.0</SAVED_WITH>
       <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
    </VERSION_INFO>
    <HOME_LIST>
    <HOME NAME="Ora11g_gridinfrahome1" LOC="/oracle/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
       <NODE_LIST>
          <NODE NAME="rac1"/>
          <NODE NAME="rac2"/>
       </NODE_LIST>
    </HOME>
    <HOME NAME="OraDb11g_home1" LOC="/oracle/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
       <NODE_LIST>
          <NODE NAME="rac1"/>
       </NODE_LIST>
    </HOME>
    </HOME_LIST>
    </INVENTORY>
    

    [oracle@rac1 bin]$ . oraenv
    ORACLE_SID = [rac1] ?
    The Oracle base for ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1 is /oracle/app/oracle
    [oracle@rac1 bin]$ pwd
    /oracle/app/oracle/product/11.2.0/dbhome_1/oui/bin

    [oracle@rac1 bin]$ ./addNode.sh -silent “CLUSTER_NEW_NODES={rac2}”
    Starting Oracle Universal Installer…

    Checking swap space: must be greater than 500 MB. Actual 4991 MB Passed
    Oracle Universal Installer, Version 11.2.0.1.0 Production
    Copyright (C) 1999, 2009, Oracle. All rights reserved.

    Performing tests to see whether nodes rac2,rac2 are available
    ……………………………………………………… 100% Done.

    SEVERE:The new nodes ‘rac2’ are already part of the cluster.

    Comment by Tjay | March 22, 2012 | Reply

    • Hi,

      On node 1 from $RDBMS_HOME/oui/bin

      1. ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1 “CLUSTER_NODES=rac1”

      2. ./addNode.sh -silent “CLUSTER_NEW_NODES={rac2}” “CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}”

      Replace rac2-vip with your VIP for Node 2.

      Regards,

      Comment by gjilevski | March 22, 2012 | Reply

  39. Hi

    Thanks, this resolved the issue. I successfully cloned the RDBMS home.
    Not sure what the problem was.

    Now, I need to create an instance.

    Comment by Tjay | March 22, 2012 | Reply

    • Hi,

      The problem was that when you remove a node you need to follow the whole procedure. The last step is

      ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1 “CLUSTER_NODES=rac1”

      This cleans up the inventory. If the inventory is not cleaned, you cannot add the home on the new node 2, because the inventory already has information that the $OH exists there.

      You should really read the manuals if you want to understand what you are doing and why!

      Good luck.

      Regards,

      Comment by gjilevski | March 22, 2012 | Reply

  40. Thanks for your help.
    I was following the Oracle docs; I might have missed that step.
    The instance was also created successfully and I am done. 🙂

    Comment by Tjay | March 22, 2012 | Reply

  41. Hi

    If I connect to the RAC database with the SCAN name below, does it spread the backup across different nodes, or does it just connect to the least loaded node and kick off the backup on one node only?

    rman target sys/sys@ORCL

    ORCL = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = orcl-rac-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl) ) )

    Comment by john | September 22, 2014 | Reply

    • It will connect to one node via SCAN load balancing and perform the backup there. If you define parallelism in RMAN, or connect channels to all instances, then it will spread the backup. It is easy to test.
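
      For example, a minimal sketch with per-instance connect strings (the orcl1/orcl2 TNS aliases are assumed here, not defined in this thread):

      run {
        allocate channel ch1 device type disk connect 'sys/sys@orcl1';
        allocate channel ch2 device type disk connect 'sys/sys@orcl2';
        backup database;
      }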

      Comment by gjilevski | September 22, 2014 | Reply

  42. Thanks.

    Let's say I allocate 4 channels like below and connect to the SCAN name.
    Does it spread the channels across different nodes?

    rman target sys/sys@ORCL

    ALLOCATE CHANNEL CH01 TYPE DISK;
    ALLOCATE CHANNEL CH02 TYPE DISK;
    ALLOCATE CHANNEL CH03 TYPE DISK;
    ALLOCATE CHANNEL CH04 TYPE DISK;

    Comment by john | September 22, 2014 | Reply

  43. Thanks

    So in my example it will just take the backup on the least loaded node.
    It will not spread the backup across different nodes unless I explicitly specify something like the below:

    allocate channel ch1 type disk connect ‘sys/oracle@node1’;
    allocate channel ch2 type disk connect ‘sys/oracle@node2’;

    Is that correct ?

    Comment by john | September 23, 2014 | Reply

    • Seems so. Why not test it?

      Comment by gjilevski | September 23, 2014 | Reply


