Clone GI and RDBMS homes in Oracle RAC 11.2.0.3 with clone.pl

In this article you will look at how to use the clone.pl script to clone the GI and RDBMS homes of an existing Oracle RAC installation in order to add a new node and extend the existing cluster. For additional approaches to adding or deleting a node in an Oracle RAC cluster, click here. The clone.pl script can also be used to clone the existing GI and RDBMS homes of one cluster to create a new Oracle cluster on another set of servers. You will also see how to manually add an RDBMS instance on the new node without using DBCA. The article refers to an existing Oracle cluster consisting of nodes raclinux1 and raclinux2 as described here and later upgraded to 11.2.0.3 as described here.

You will look at the following topics related to extending the cluster to a new node using clone.pl:

  1. Clone GI home to the new node
  2. Clone RDBMS home to the new node
  3. Manually add an instance to the new node

Cloning an existing home, either to extend a cluster to a new node or to create a new cluster, has the advantage of considerably reducing deployment time: cloning copies the original source GI and RDBMS homes along with all patches and creates an identical image on the target node(s). In addition to carrying all patches from the source to the target location, the cloning mechanism maintains the Oracle inventory on the target, so the cloned GI and RDBMS homes can be maintained and patched like any standard Oracle home installation using OPatch, OUI or OEM.

  1. Clone GI home to the new node

    The section outlines the steps to clone an existing Oracle GI installation to a new node that will become a member of the cluster.

    1. Assuming that GI is installed and running on the source node, stop the GI stack using the following command (as root, from $GI_HOME/bin).

      [root@raclinux2 bin]# ./crsctl stop crs

    2. Create a stage directory and a tar ball of the source

      [root@raclinux2 grid]# mkdir /u01/stageGI

      [root@raclinux2 grid]# cp -prf /u01/app/11.2.0.3/grid /u01/stageGI

      [root@raclinux2 grid]# pwd

      /u01/stageGI/grid

      [root@raclinux2 grid]#

      [root@raclinux2 grid]# tar -cvf /tmp/tar11203.tar .

      [root@raclinux2 grid]#

    3. Start GI on the source node.

      [root@raclinux2 bin]# ./crsctl start crs
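
      Optionally, confirm that the stack is back up on the source node before proceeding (a quick check; output not shown here):

      [root@raclinux2 bin]# ./crsctl check crs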

    4. Create a software location on the new node raclinux3 and extract the tar ball.

      As root execute on the new node raclinux3

      mkdir -p /u01/app/11.2.0.3/grid

      mkdir -p /u01/app/grid

      mkdir -p /u01/app/oracle

      chown grid:oinstall /u01/app/11.2.0.3/grid

      chown grid:oinstall /u01/app/grid

      chown oracle:oinstall /u01/app/oracle

      chown -R grid:oinstall /u01

      chmod -R 775 /u01/

      As grid user execute on the new node raclinux3

      cd /u01/app/11.2.0.3/grid

      [grid@raclinux3 grid]$ tar -xvf /tmp/tar11203.tar

    5. Clean the node specific configuration details and set proper permissions and ownership.

      As root execute the following

      cd /u01/app/11.2.0.3/grid

      rm -rf raclinux2

      rm -rf log/raclinux2

      rm -rf gpnp/raclinux2

      rm -rf crs/init

      rm -rf cdata

      rm -rf crf

      find gpnp -type f -exec rm -f {} \;

      rm -rf network/admin/*.ora

      find . -name '*.ouibak' -exec rm {} \;

      find . -name '*.ouibak.1' -exec rm {} \;

      rm -rf root.sh*

      cd cfgtoollogs

      find . -type f -exec rm -f {} \;

      chown -R grid:oinstall /u01/app/11.2.0.3/grid

      chown -R grid:oinstall /u01

      chmod -R 775 /u01/

      As grid user execute

      [grid@raclinux3 cfgtoollogs]$ chmod u+s /u01/app/11.2.0.3/grid/bin/oracle

      [grid@raclinux3 cfgtoollogs]$ chmod g+s /u01/app/11.2.0.3/grid/bin/oracle

    6. Run clone.pl from $GI_HOME/clone/bin on the new node raclinux3 as grid user

      Before running clone.pl prepare the following information:

  • ORACLE_BASE=
    /u01/app/grid
  • ORACLE_HOME=
    /u01/app/11.2.0.3/grid
  • ORACLE_HOME_NAME=
    Ora11g_gridinfrahome2 – use OUI on any existing cluster node to check the home names already registered
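
If you prefer not to launch OUI just to look up a home name, the names already registered can also be read from the central inventory on an existing node. A minimal check, assuming the central inventory location /u01/app/oraInventory used in this installation:

[grid@raclinux2 ~]$ grep "HOME NAME" /u01/app/oraInventory/ContentsXML/inventory.xml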

Run clone.pl as specified below

[grid@raclinux3 bin]$ perl clone.pl ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome2 ORACLE_BASE=/u01/app/grid SHOW_ROOTSH_CONFIRMATION=false

./runInstaller -clone -waitForCompletion "ORACLE_HOME=/u01/app/11.2.0.3/grid" "ORACLE_HOME_NAME=Ora11g_gridinfrahome2" "ORACLE_BASE=/u01/app/grid" "SHOW_ROOTSH_CONFIRMATION=false" -silent -noConfig -nowait

Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 10001 MB Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-11-15_07-36-57PM. Please wait …Oracle Universal Installer, Version 11.2.0.3.0 Production

Copyright (C) 1999, 2011, Oracle. All rights reserved.

You can find the log of this install session at:

 /u01/app/oraInventory/logs/cloneActions2011-11-15_07-36-57PM.log

………………………………………………………………………………………. 100% Done.

Installation in progress (Tuesday, November 15, 2011 7:38:24 PM EET)

………………………………………………………………………………….. 95% Done.

Install successful

Linking in progress (Tuesday, November 15, 2011 7:38:33 PM EET)

Link successful

Setup in progress (Tuesday, November 15, 2011 7:39:36 PM EET)

Setup successful

End of install phases.(Tuesday, November 15, 2011 7:39:59 PM EET)

The cloning of Ora11g_gridinfrahome2 was successful.

Please check ‘/u01/app/oraInventory/logs/cloneActions2011-11-15_07-36-57PM.log’ for more details.

[grid@raclinux3 bin]$

Verify the log file to ensure that the cloning process is successful.
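
A minimal way to do that is to scan the log reported above for errors, for example:

[grid@raclinux3 bin]$ grep -Ei "error|fail" /u01/app/oraInventory/logs/cloneActions2011-11-15_07-36-57PM.log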

  1. Verify with cluvfy, from any existing cluster node (for example raclinux2), that the prerequisites for node addition are met

    [grid@raclinux2 gpnp]$ cluvfy stage -pre nodeadd -n raclinux3 -vip raclinux3-vip

    Performing pre-checks for node addition

    Checking node reachability…

    Node reachability check passed from node “raclinux2″

    Checking user equivalence…

    User equivalence check passed for user “grid”

    Checking node connectivity…

    Checking hosts config file…

    Verification of the hosts config file successful

    Check: Node connectivity for interface “eth0″

    Node connectivity passed for interface “eth0″

    TCP connectivity check passed for subnet “192.168.20.0″

    Checking subnet mask consistency…

    Subnet mask consistency check passed for subnet “192.168.20.0″.

    Subnet mask consistency check passed.

    Node connectivity check passed

    Checking multicast communication…

    Checking subnet “192.168.20.0″ for multicast communication with multicast group “230.0.1.0″…

    Check of subnet “192.168.20.0″ for multicast communication with multicast group “230.0.1.0″ passed.

    Check of multicast communication passed.

    Checking CRS integrity…

    Clusterware version consistency passed

    CRS integrity check passed

    Checking shared resources…

    Checking CRS home location…

    “/u01/app/11.2.0.3/grid” is not shared

    Shared resources check for node addition passed

    Checking node connectivity…

    Checking hosts config file…

    Verification of the hosts config file successful

    Check: Node connectivity for interface “eth0″

    Node connectivity passed for interface “eth0″

    TCP connectivity check passed for subnet “192.168.20.0″

    Check: Node connectivity for interface “eth1″

    Node connectivity passed for interface “eth1″

    TCP connectivity check passed for subnet “10.10.20.0″

    Checking subnet mask consistency…

    Subnet mask consistency check passed for subnet “192.168.20.0″.

    Subnet mask consistency check passed for subnet “10.10.20.0″.

    Subnet mask consistency check passed.

    Node connectivity check passed

    Checking multicast communication…

    Checking subnet “192.168.20.0″ for multicast communication with multicast group “230.0.1.0″…

    Check of subnet “192.168.20.0″ for multicast communication with multicast group “230.0.1.0″ passed.

    Checking subnet “10.10.20.0″ for multicast communication with multicast group “230.0.1.0″…

    Check of subnet “10.10.20.0″ for multicast communication with multicast group “230.0.1.0″ passed.

    Check of multicast communication passed.

    Total memory check passed

    Available memory check passed

    Swap space check passed

    Free disk space check passed for “raclinux3:/u01/app/11.2.0.3/grid”

    Free disk space check passed for “raclinux2:/u01/app/11.2.0.3/grid”

    Free disk space check passed for “raclinux3:/tmp”

    Free disk space check passed for “raclinux2:/tmp”

    Check for multiple users with UID value 1100 passed

    User existence check passed for “grid”

    Run level check passed

    Hard limits check passed for “maximum open file descriptors”

    Soft limits check passed for “maximum open file descriptors”

    Hard limits check passed for “maximum user processes”

    Soft limits check passed for “maximum user processes”

    System architecture check passed

    Kernel version check passed

    Kernel parameter check passed for “semmsl”

    Kernel parameter check passed for “semmns”

    Kernel parameter check passed for “semopm”

    Kernel parameter check passed for “semmni”

    Kernel parameter check passed for “shmmax”

    Kernel parameter check passed for “shmmni”

    Kernel parameter check passed for “shmall”

    Kernel parameter check passed for “file-max”

    Kernel parameter check passed for “ip_local_port_range”

    Kernel parameter check passed for “rmem_default”

    Kernel parameter check passed for “rmem_max”

    Kernel parameter check passed for “wmem_default”

    Kernel parameter check passed for “wmem_max”

    Kernel parameter check passed for “aio-max-nr”

    Package existence check passed for “make”

    Package existence check passed for “binutils”

    Package existence check passed for “gcc(x86_64)”

    Package existence check passed for “libaio(x86_64)”

    Package existence check passed for “glibc(x86_64)”

    Package existence check passed for “compat-libstdc++-33(x86_64)”

    Package existence check passed for “elfutils-libelf(x86_64)”

    Package existence check passed for “elfutils-libelf-devel”

    Package existence check passed for “glibc-common”

    Package existence check passed for “glibc-devel(x86_64)”

    Package existence check passed for “glibc-headers”

    Package existence check passed for “gcc-c++(x86_64)”

    Package existence check passed for “libaio-devel(x86_64)”

    Package existence check passed for “libgcc(x86_64)”

    Package existence check passed for “libstdc++(x86_64)”

    Package existence check passed for “libstdc++-devel(x86_64)”

    Package existence check passed for “sysstat”

    Package existence check passed for “ksh”

    Check for multiple users with UID value 0 passed

    Current group ID check passed

    Starting check for consistency of primary group of root user

    Check for consistency of root user’s primary group passed

    Checking OCR integrity…

    OCR integrity check passed

    Checking Oracle Cluster Voting Disk configuration…

    Oracle Cluster Voting Disk configuration check passed

    Time zone consistency check passed

    Starting Clock synchronization checks using Network Time Protocol(NTP)…

    NTP Configuration file check started…

    NTP Configuration file check passed

    Checking daemon liveness…

    Liveness check passed for “ntpd”

    Check for NTP daemon or service alive passed on all nodes

    NTP daemon slewing option check passed

    NTP daemon’s boot time configuration check for slewing option passed

    NTP common Time Server Check started…

    PRVF-5408 : NTP Time Server “80.190.97.205″ is common only to the following nodes “raclinux2″

    PRVF-5408 : NTP Time Server “192.43.244.18″ is common only to the following nodes “raclinux2″

    PRVF-5408 : NTP Time Server “129.69.1.153″ is common only to the following nodes “raclinux2″

    Check of common NTP Time Server passed

    Clock time offset check from NTP Time Server started…

    Clock time offset check passed

    Clock synchronization check using Network Time Protocol(NTP) passed

    User “grid” is not part of “root” group. Check passed

    Checking consistency of file “/etc/resolv.conf” across nodes

    File “/etc/resolv.conf” does not have both domain and search entries defined

    domain entry in file “/etc/resolv.conf” is consistent across nodes

    search entry in file “/etc/resolv.conf” is consistent across nodes

    PRVF-5636 : The DNS response time for an unreachable node exceeded “15000″ ms on following nodes: raclinux3

    File “/etc/resolv.conf” is not consistent across nodes

    Checking VIP configuration.

    Checking VIP Subnet configuration.

    Check for VIP Subnet configuration passed.

    Checking VIP reachability

    Check for VIP reachability passed.

    Pre-check for node addition was unsuccessful on all the nodes.

    [grid@raclinux2 gpnp]$

  2. From any existing cluster node, for example raclinux2, while logged in as the grid user, run the following command from $GI_HOME/oui/bin

    [grid@raclinux2 bin]$ ./addNode.sh -silent -noCopy ORACLE_HOME=/u01/app/11.2.0.3/grid "CLUSTER_NEW_NODES={raclinux3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={raclinux3-vip}" "CLUSTER_NEW_VIPS={raclinux3-vip}" CRS_ADDNODE=true CRS_DHCP_ENABLED=false

    Starting Oracle Universal Installer…

    Checking swap space: must be greater than 500 MB. Actual 9616 MB Passed

    Oracle Universal Installer, Version 11.2.0.3.0 Production

    Copyright (C) 1999, 2011, Oracle. All rights reserved.

    Performing tests to see whether nodes raclinux1,raclinux3 are available

    ……………………………………………………… 100% Done.

    .

    —————————————————————————–

    Cluster Node Addition Summary

    Global Settings

     Source: /u01/app/11.2.0.3/grid

    New Nodes

    Space Requirements

    New Nodes

     raclinux3

     /u01: Required 4.78GB : Available 265.43GB

    Installed Products

    Product Names

    Oracle Grid Infrastructure 11.2.0.3.0

     Sun JDK 1.5.0.30.03

    Installer SDK Component 11.2.0.3.0

    Oracle One-Off Patch Installer 11.2.0.1.7

    Oracle Universal Installer 11.2.0.3.0

     Oracle USM Deconfiguration 11.2.0.3.0

     Oracle Configuration Manager Deconfiguration 10.3.1.0.0

    Enterprise Manager Common Core Files 10.2.0.4.4

     Oracle DBCA Deconfiguration 11.2.0.3.0

     Oracle RAC Deconfiguration 11.2.0.3.0

    Oracle Quality of Service Management (Server) 11.2.0.3.0

    Installation Plugin Files 11.2.0.3.0

    Universal Storage Manager Files 11.2.0.3.0

    Oracle Text Required Support Files 11.2.0.3.0

    Automatic Storage Management Assistant 11.2.0.3.0

     Oracle Database 11g Multimedia Files 11.2.0.3.0

    Oracle Multimedia Java Advanced Imaging 11.2.0.3.0

    Oracle Globalization Support 11.2.0.3.0

     Oracle Multimedia Locator RDBMS Files 11.2.0.3.0

    Oracle Core Required Support Files 11.2.0.3.0

    Bali Share 1.1.18.0.0

     Oracle Database Deconfiguration 11.2.0.3.0

    Oracle Quality of Service Management (Client) 11.2.0.3.0

    Expat libraries 2.0.1.0.1

    Oracle Containers for Java 11.2.0.3.0

    Perl Modules 5.10.0.0.1

    Secure Socket Layer 11.2.0.3.0

     Oracle JDBC/OCI Instant Client 11.2.0.3.0

    Oracle Multimedia Client Option 11.2.0.3.0

    LDAP Required Support Files 11.2.0.3.0

    Character Set Migration Utility 11.2.0.3.0

    Perl Interpreter 5.10.0.0.2

    PL/SQL Embedded Gateway 11.2.0.3.0

     OLAP SQL Scripts 11.2.0.3.0

    Database SQL Scripts 11.2.0.3.0

    Oracle Extended Windowing Toolkit 3.4.47.0.0

     SSL Required Support Files for InstantClient 11.2.0.3.0

    SQL*Plus Files for Instant Client 11.2.0.3.0

    Oracle Net Required Support Files 11.2.0.3.0

    Oracle Database User Interface 2.2.13.0.0

     RDBMS Required Support Files for Instant Client 11.2.0.3.0

     RDBMS Required Support Files Runtime 11.2.0.3.0

    XML Parser for Java 11.2.0.3.0

    Oracle Security Developer Tools 11.2.0.3.0

    Oracle Wallet Manager 11.2.0.3.0

    Enterprise Manager plugin Common Files 11.2.0.3.0

    Platform Required Support Files 11.2.0.3.0

     Oracle JFC Extended Windowing Toolkit 4.2.36.0.0

     RDBMS Required Support Files 11.2.0.3.0

    Oracle Ice Browser 5.2.3.6.0

    Oracle Help For Java 4.2.9.0.0

    Enterprise Manager Common Files 10.2.0.4.3

     Deinstallation Tool 11.2.0.3.0

    Oracle Java Client 11.2.0.3.0

    Cluster Verification Utility Files 11.2.0.3.0

    Oracle Notification Service (eONS) 11.2.0.3.0

    Oracle LDAP administration 11.2.0.3.0

    Cluster Verification Utility Common Files 11.2.0.3.0

     Oracle Clusterware RDBMS Files 11.2.0.3.0

    Oracle Locale Builder 11.2.0.3.0

    Oracle Globalization Support 11.2.0.3.0

     Buildtools Common Files 11.2.0.3.0

    Oracle RAC Required Support Files-HAS 11.2.0.3.0

    SQL*Plus Required Support Files 11.2.0.3.0

     XDK Required Support Files 11.2.0.3.0

    Agent Required Support Files 10.2.0.4.3

    Parser Generator Required Support Files 11.2.0.3.0

     Precompiler Required Support Files 11.2.0.3.0

    Installation Common Files 11.2.0.3.0

    Required Support Files 11.2.0.3.0

     Oracle JDBC/THIN Interfaces 11.2.0.3.0

    Oracle Multimedia Locator 11.2.0.3.0

    Oracle Multimedia 11.2.0.3.0

    HAS Common Files 11.2.0.3.0

    Assistant Common Files 11.2.0.3.0

    PL/SQL 11.2.0.3.0

    HAS Files for DB 11.2.0.3.0

    Oracle Recovery Manager 11.2.0.3.0

    Oracle Database Utilities 11.2.0.3.0

    Oracle Notification Service 11.2.0.3.0

    SQL*Plus 11.2.0.3.0

     Oracle Netca Client 11.2.0.3.0

    Oracle Net 11.2.0.3.0

    Oracle JVM 11.2.0.3.0

    Oracle Internet Directory Client 11.2.0.3.0

    Oracle Net Listener 11.2.0.3.0

    Cluster Ready Services Files 11.2.0.3.0

     Oracle Database 11g 11.2.0.3.0

    —————————————————————————–

    Instantiating scripts for add node (Tuesday, November 15, 2011 8:22:13 PM EET)

    . 1% Done.

    Instantiation of add node scripts complete

    Saving inventory on nodes (Tuesday, November 15, 2011 8:22:18 PM EET)

    . 100% Done.

    Save inventory complete

    WARNING:

    The following configuration scripts need to be executed as the “root” user in each new cluster node. Each script in the list below is followed by a list of nodes.

    /u01/app/11.2.0.3/grid/root.sh #On nodes raclinux3

    To execute the configuration scripts:

    1. Open a terminal window

    2. Log in as “root”

    3. Run the scripts in each cluster node

    The Cluster Node Addition of /u01/app/11.2.0.3/grid was successful.

    Please check ‘/tmp/silentInstall.log’ for more details.

    [grid@raclinux2 bin]$

    DO NOT run root.sh yet after this completes; first copy the cluster configuration files and GPnP profile to the new node as described in the next step.

  3. Copy cluster configuration and gpnp profile and certificates to the new node raclinux3

    [grid@raclinux2 install]$ scp :/u01/app/11.2.0.3/grid/crs/install/crsconfig_params raclinux3:/u01/app/11.2.0.3/grid/crs/install/crsconfig_params

    crsconfig_params 100% 3965 3.9KB/s 00:00

    [grid@raclinux2 install]$ scp :/u01/app/11.2.0.3/grid/crs/install/crsconfig_addparams raclinux3:/u01/app/11.2.0.3/grid/crs/install/crsconfig_addparams

    crsconfig_addparams 100% 751 0.7KB/s 00:00

    [grid@raclinux2 install]$

    Copy the content of /u01/app/11.2.0.3/grid/gpnp on any existing cluster node, for example raclinux2 to /u01/app/11.2.0.3/grid/gpnp on raclinux3.
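
    One possible way to do this (assuming passwordless SSH for the grid user, which is already in place as a RAC prerequisite) is:

    [grid@raclinux2 gpnp]$ scp -r /u01/app/11.2.0.3/grid/gpnp/* raclinux3:/u01/app/11.2.0.3/grid/gpnp/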

  4. Run root.sh from raclinux3 as root user

    If the relink request shown below does not appear, the configuration may simply complete successfully at this point.

    [root@raclinux3 grid]# ./root.sh

    Check /u01/app/11.2.0.3/grid/install/root_raclinux3.gj.com_2011-11-15_20-19-20.log for the output of root script

    [root@raclinux3 grid]# cat /u01/app/11.2.0.3/grid/install/root_raclinux3.gj.com_2011-11-15_20-19-20.log

    Performing root user operation for Oracle 11g

    The following environment variables are set as:

    ORACLE_OWNER= grid

     ORACLE_HOME= /u01/app/11.2.0.3/grid

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root script.

    Now product-specific root actions will be performed.

    Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params

    The oracle binary is currently linked with RAC disabled.

    Please execute the following steps to relink oracle binary

    and rerun the command with RAC enabled:

      setenv ORACLE_HOME <ORACLE_HOME>

      cd <ORACLE_HOME>/rdbms/lib

      make -f ins_rdbms.mk rac_on ioracle

    /u01/app/11.2.0.3/grid/perl/bin/perl -I/u01/app/11.2.0.3/grid/perl/lib -I/u01/app/11.2.0.3/grid/crs/install /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl execution failed

    [root@raclinux3 grid]#

    Re-link the GI binaries and then rerun root.sh.

    As root execute on the new node raclinux3

    chown grid:oinstall /u01/app/11.2.0.3/grid

    chmod -R 775 /u01/app/11.2.0.3/grid

    Re-link the GI binaries as the grid user, from $GI_HOME/rdbms/lib.

    [grid@raclinux3 lib]$ make -f ins_rdbms.mk rac_on ioracle

    rm -f /u01/app/11.2.0.3/grid/lib/libskgxp11.so

    cp /u01/app/11.2.0.3/grid/lib//libskgxpg.so /u01/app/11.2.0.3/grid/lib/libskgxp11.so

     - Use stub SKGXN library

    cp /u01/app/11.2.0.3/grid/lib/libskgxns.so /u01/app/11.2.0.3/grid/lib/libskgxn2.so

    /usr/bin/ar d /u01/app/11.2.0.3/grid/rdbms/lib/libknlopt.a ksnkcs.o

    /usr/bin/ar cr /u01/app/11.2.0.3/grid/rdbms/lib/libknlopt.a /u01/app/11.2.0.3/grid/rdbms/lib/kcsm.o

    chmod 755 /u01/app/11.2.0.3/grid/bin

    - Linking Oracle

    rm -f /u01/app/11.2.0.3/grid/rdbms/lib/oracle

    gcc -o /u01/app/11.2.0.3/grid/rdbms/lib/oracle -m64 -L/u01/app/11.2.0.3/grid/rdbms/lib/ -L/u01/app/11.2.0.3/grid/lib/ -L/u01/app/11.2.0.3/grid/lib/stubs/ -Wl,-E /u01/app/11.2.0.3/grid/rdbms/lib/opimai.o /u01/app/11.2.0.3/grid/rdbms/lib/ssoraed.o /u01/app/11.2.0.3/grid/rdbms/lib/ttcsoi.o -Wl,–whole-archive -lperfsrv11 -Wl,–no-whole-archive /u01/app/11.2.0.3/grid/lib/nautab.o /u01/app/11.2.0.3/grid/lib/naeet.o /u01/app/11.2.0.3/grid/lib/naect.o /u01/app/11.2.0.3/grid/lib/naedhs.o /u01/app/11.2.0.3/grid/rdbms/lib/config.o -lserver11 -lodm11 -lcell11 -lnnet11 -lskgxp11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lclient11 -lvsn11 -lcommon11 -lgeneric11 -lknlopt `if /usr/bin/ar tv /u01/app/11.2.0.3/grid/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo “-loraolap11″ ; fi` -lslax11 -lpls11 -lrt -lplp11 -lserver11 -lclient11 -lvsn11 -lcommon11 -lgeneric11 `if [ -f /u01/app/11.2.0.3/grid/lib/libavserver11.a ] ; then echo “-lavserver11″ ; else echo “-lavstub11″; fi` `if [ -f /u01/app/11.2.0.3/grid/lib/libavclient11.a ] ; then echo “-lavclient11″ ; fi` -lknlopt -lslax11 -lpls11 -lrt -lplp11 -ljavavm11 -lserver11 -lwwg `cat /u01/app/11.2.0.3/grid/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/app/11.2.0.3/grid/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lmm -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lztkg11 `cat /u01/app/11.2.0.3/grid/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/app/11.2.0.3/grid/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 `if /usr/bin/ar tv /u01/app/11.2.0.3/grid/rdbms/lib/libknlopt.a | grep “kxmnsd.o” > /dev/null 2>&1 ; then echo ” ” ; else echo “-lordsdo11″; fi` -L/u01/app/11.2.0.3/grid/ctx/lib/ -lctxc11 -lctx11 -lzx11 -lgx11 -lctx11 -lzx11 -lgx11 -lordimt11 -lclsra11 -ldbcfg11 -lhasgen11 -lskgxn2 -lnnz11 -lzt11 -lxml11 -locr11 -locrb11 -locrutl11 -lhasgen11 -lskgxn2 -lnnz11 -lzt11 -lxml11 -loraz -llzopro -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged -lippsmerged -lippcore -lippcpemerged -lippcpmerged -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lsnls11 -lunls11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lasmclnt11 -lcommon11 -lcore11 -laio `cat /u01/app/11.2.0.3/grid/lib/sysliblist` -Wl,-rpath,/u01/app/11.2.0.3/grid/lib -lm `cat /u01/app/11.2.0.3/grid/lib/sysliblist` -ldl -lm -L/u01/app/11.2.0.3/grid/lib

    test ! -f /u01/app/11.2.0.3/grid/bin/oracle ||\

     mv -f /u01/app/11.2.0.3/grid/bin/oracle /u01/app/11.2.0.3/grid/bin/oracleO

    mv /u01/app/11.2.0.3/grid/rdbms/lib/oracle /u01/app/11.2.0.3/grid/bin/oracle

    chmod 6751 /u01/app/11.2.0.3/grid/bin/oracle

    [grid@raclinux3 lib]$

    Rerun root.sh

    [root@raclinux3 grid]# ./root.sh

    Check /u01/app/11.2.0.3/grid/install/root_raclinux3.gj.com_2011-11-15_23-34-00.log for the output of root script

    [root@raclinux3 grid]# cat /u01/app/11.2.0.3/grid/install/root_raclinux3.gj.com_2011-11-15_23-34-00.log

    Performing root user operation for Oracle 11g

    The following environment variables are set as:

    ORACLE_OWNER= grid

     ORACLE_HOME= /u01/app/11.2.0.3/grid

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root script.

    Now product-specific root actions will be performed.

    Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params

    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node raclinux1, number 1, and is terminating

    An active cluster was found during exclusive startup, restarting to join the cluster

    clscfg: EXISTING configuration version 5 detected.

    clscfg: version 5 is 11g Release 2.

    Successfully accumulated necessary OCR keys.

    Creating OCR keys for user ‘root’, privgrp ‘root’..

    Operation successful.

    Preparing packages for installation…

    cvuqdisk-1.0.9-1

    Configure Oracle Grid Infrastructure for a Cluster … succeeded

    [root@raclinux3 grid]#

  5. Verify that Oracle Clusterware (OC) started.

    Use crsctl check cluster -all

    [root@raclinux3 bin]# ./crsctl check cluster -all

    **************************************************************

    raclinux1:

    CRS-4537: Cluster Ready Services is online

    CRS-4529: Cluster Synchronization Services is online

    CRS-4533: Event Manager is online

    **************************************************************

    raclinux2:

    CRS-4537: Cluster Ready Services is online

    CRS-4529: Cluster Synchronization Services is online

    CRS-4533: Event Manager is online

    **************************************************************

    raclinux3:

    CRS-4537: Cluster Ready Services is online

    CRS-4529: Cluster Synchronization Services is online

    CRS-4533: Event Manager is online

    **************************************************************

    [root@raclinux3 bin]#
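
    In addition to crsctl, olsnodes (also in $GI_HOME/bin) can be used to confirm that raclinux3 is now an active cluster member, for example:

    [root@raclinux3 bin]# ./olsnodes -n -s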

    Use the following command to verify that the node was successfully added. Ignore the failed SCAN check; it is caused by the SCAN name being registered as a single entry in /etc/hosts.

    [grid@raclinux2 gpnp]$ cluvfy stage -post nodeadd -n raclinux3 -verbose

    Performing post-checks for node addition

    Checking node reachability…

    Check: Node reachability from node “raclinux2″

    Destination Node Reachable?

    ———————————— ————————

     raclinux3 yes

    Result: Node reachability check passed from node “raclinux2″

    Checking user equivalence…

    Check: User equivalence for user “grid”

    Node Name Status

    ———————————— ————————

     raclinux3 passed

    Result: User equivalence check passed for user “grid”

    Checking node connectivity…

    Checking hosts config file…

    Node Name Status

    ———————————— ————————

     raclinux3 passed

     raclinux2 passed

     raclinux1 passed

    Verification of the hosts config file successful

    Interface information for node “raclinux3″

     Name IP Address Subnet Gateway Def. Gateway HW Address MTU

    —— ————— ————— ————— ————— —————– ——

     eth0 192.168.20.23 192.168.20.0 0.0.0.0 UNKNOWN 08:00:27:86:CA:20 1500

     eth0 192.168.20.53 192.168.20.0 0.0.0.0 UNKNOWN 08:00:27:86:CA:20 1500

     eth1 10.10.20.23 10.10.20.0 0.0.0.0 UNKNOWN 08:00:27:A4:7B:A4 1500

     eth1 169.254.105.52 169.254.0.0 0.0.0.0 UNKNOWN 08:00:27:A4:7B:A4 1500

     eth2 192.168.156.103 192.168.156.0 0.0.0.0 UNKNOWN 08:00:27:8C:A0:9B 1500

     eth3 192.168.2.24 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:E5:70:A8 1500

    Interface information for node “raclinux2″

     Name IP Address Subnet Gateway Def. Gateway HW Address MTU

    —— ————— ————— ————— ————— —————– ——

     eth0 192.168.20.22 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth0 192.168.20.111 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth0 192.168.20.100 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth0 192.168.20.112 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth0 192.168.20.52 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth1 10.10.20.22 10.10.20.0 0.0.0.0 10.0.5.2 08:00:27:41:52:72 1500

     eth1 169.254.206.240 169.254.0.0 0.0.0.0 10.0.5.2 08:00:27:41:52:72 1500

     eth2 192.168.156.102 192.168.156.0 0.0.0.0 10.0.5.2 08:00:27:13:BC:77 1500

     eth3 10.0.5.15 10.0.5.0 0.0.0.0 10.0.5.2 08:00:27:93:4A:17 1500

    Interface information for node “raclinux1″

     Name IP Address Subnet Gateway Def. Gateway HW Address MTU

    —— ————— ————— ————— ————— —————– ——

     eth0 192.168.20.21 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

     eth0 192.168.20.51 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

     eth1 10.10.20.21 10.10.20.0 0.0.0.0 10.0.5.2 08:00:27:FD:AA:42 1500

     eth1 169.254.89.140 169.254.0.0 0.0.0.0 10.0.5.2 08:00:27:FD:AA:42 1500

     eth2 192.168.156.101 192.168.156.0 0.0.0.0 10.0.5.2 08:00:27:B0:B4:C7 1500

     eth3 10.0.5.15 10.0.5.0 0.0.0.0 10.0.5.2 08:00:27:8D:38:97 1500

    Check: Node connectivity for interface “eth0″

    Source Destination Connected?

    —————————— —————————— —————-

     raclinux3[192.168.20.23] raclinux3[192.168.20.53] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.22] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.111] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.100] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.112] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.52] yes

     raclinux3[192.168.20.23] raclinux1[192.168.20.21] yes

     raclinux3[192.168.20.23] raclinux1[192.168.20.51] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.22] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.111] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.100] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.112] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.52] yes

     raclinux3[192.168.20.53] raclinux1[192.168.20.21] yes

     raclinux3[192.168.20.53] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.22] raclinux2[192.168.20.111] yes

     raclinux2[192.168.20.22] raclinux2[192.168.20.100] yes

     raclinux2[192.168.20.22] raclinux2[192.168.20.112] yes

     raclinux2[192.168.20.22] raclinux2[192.168.20.52] yes

     raclinux2[192.168.20.22] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.22] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.111] raclinux2[192.168.20.100] yes

     raclinux2[192.168.20.111] raclinux2[192.168.20.112] yes

     raclinux2[192.168.20.111] raclinux2[192.168.20.52] yes

     raclinux2[192.168.20.111] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.111] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.100] raclinux2[192.168.20.112] yes

     raclinux2[192.168.20.100] raclinux2[192.168.20.52] yes

     raclinux2[192.168.20.100] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.100] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.112] raclinux2[192.168.20.52] yes

     raclinux2[192.168.20.112] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.112] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.52] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.52] raclinux1[192.168.20.51] yes

     raclinux1[192.168.20.21] raclinux1[192.168.20.51] yes

    Result: Node connectivity passed for interface “eth0″

    Check: TCP connectivity of subnet “192.168.20.0″

    Source Destination Connected?

    —————————— —————————— —————-

     raclinux2:192.168.20.22 raclinux3:192.168.20.23 passed

     raclinux2:192.168.20.22 raclinux3:192.168.20.53 passed

     raclinux2:192.168.20.22 raclinux2:192.168.20.111 passed

     raclinux2:192.168.20.22 raclinux2:192.168.20.100 passed

     raclinux2:192.168.20.22 raclinux2:192.168.20.112 passed

     raclinux2:192.168.20.22 raclinux2:192.168.20.52 passed

     raclinux2:192.168.20.22 raclinux1:192.168.20.21 passed

     raclinux2:192.168.20.22 raclinux1:192.168.20.51 passed

    Result: TCP connectivity check passed for subnet “192.168.20.0″

    Checking subnet mask consistency…

    Subnet mask consistency check passed for subnet “192.168.20.0″.

    Subnet mask consistency check passed.

    Result: Node connectivity check passed

    Checking multicast communication…

    Checking subnet “192.168.20.0″ for multicast communication with multicast group “230.0.1.0″…

    Check of subnet “192.168.20.0″ for multicast communication with multicast group “230.0.1.0″ passed.

    Check of multicast communication passed.

    Checking cluster integrity…

    Node Name

    ————————————

     raclinux1

     raclinux2

     raclinux3

    Cluster integrity check passed

    Checking CRS integrity…

    Clusterware version consistency passed

    The Oracle Clusterware is healthy on node “raclinux3″

    The Oracle Clusterware is healthy on node “raclinux2″

    The Oracle Clusterware is healthy on node “raclinux1″

    CRS integrity check passed

    Checking shared resources…

    Checking CRS home location…

    “/u01/app/11.2.0.3/grid” is not shared

    Result: Shared resources check for node addition passed

    Checking node connectivity…

    Checking hosts config file…

    Node Name Status

    ———————————— ————————

     raclinux3 passed

     raclinux2 passed

     raclinux1 passed

    Verification of the hosts config file successful

    Interface information for node “raclinux3″

     Name IP Address Subnet Gateway Def. Gateway HW Address MTU

    —— ————— ————— ————— ————— —————– ——

     eth0 192.168.20.23 192.168.20.0 0.0.0.0 UNKNOWN 08:00:27:86:CA:20 1500

     eth0 192.168.20.53 192.168.20.0 0.0.0.0 UNKNOWN 08:00:27:86:CA:20 1500

     eth1 10.10.20.23 10.10.20.0 0.0.0.0 UNKNOWN 08:00:27:A4:7B:A4 1500

     eth1 169.254.105.52 169.254.0.0 0.0.0.0 UNKNOWN 08:00:27:A4:7B:A4 1500

     eth2 192.168.156.103 192.168.156.0 0.0.0.0 UNKNOWN 08:00:27:8C:A0:9B 1500

     eth3 192.168.2.24 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:E5:70:A8 1500

    Interface information for node “raclinux2″

     Name IP Address Subnet Gateway Def. Gateway HW Address MTU

    —— ————— ————— ————— ————— —————– ——

     eth0 192.168.20.22 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth0 192.168.20.111 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth0 192.168.20.100 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth0 192.168.20.112 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth0 192.168.20.52 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:F7:87:C6 1500

     eth1 10.10.20.22 10.10.20.0 0.0.0.0 10.0.5.2 08:00:27:41:52:72 1500

     eth1 169.254.206.240 169.254.0.0 0.0.0.0 10.0.5.2 08:00:27:41:52:72 1500

     eth2 192.168.156.102 192.168.156.0 0.0.0.0 10.0.5.2 08:00:27:13:BC:77 1500

     eth3 10.0.5.15 10.0.5.0 0.0.0.0 10.0.5.2 08:00:27:93:4A:17 1500

    Interface information for node “raclinux1″

     Name IP Address Subnet Gateway Def. Gateway HW Address MTU

    —— ————— ————— ————— ————— —————– ——

     eth0 192.168.20.21 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

     eth0 192.168.20.51 192.168.20.0 0.0.0.0 10.0.5.2 08:00:27:80:E3:C1 1500

     eth1 10.10.20.21 10.10.20.0 0.0.0.0 10.0.5.2 08:00:27:FD:AA:42 1500

     eth1 169.254.89.140 169.254.0.0 0.0.0.0 10.0.5.2 08:00:27:FD:AA:42 1500

     eth2 192.168.156.101 192.168.156.0 0.0.0.0 10.0.5.2 08:00:27:B0:B4:C7 1500

     eth3 10.0.5.15 10.0.5.0 0.0.0.0 10.0.5.2 08:00:27:8D:38:97 1500

    Check: Node connectivity for interface “eth0″

    Source Destination Connected?

    —————————— —————————— —————-

     raclinux3[192.168.20.23] raclinux3[192.168.20.53] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.22] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.111] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.100] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.112] yes

     raclinux3[192.168.20.23] raclinux2[192.168.20.52] yes

     raclinux3[192.168.20.23] raclinux1[192.168.20.21] yes

     raclinux3[192.168.20.23] raclinux1[192.168.20.51] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.22] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.111] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.100] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.112] yes

     raclinux3[192.168.20.53] raclinux2[192.168.20.52] yes

     raclinux3[192.168.20.53] raclinux1[192.168.20.21] yes

     raclinux3[192.168.20.53] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.22] raclinux2[192.168.20.111] yes

     raclinux2[192.168.20.22] raclinux2[192.168.20.100] yes

     raclinux2[192.168.20.22] raclinux2[192.168.20.112] yes

     raclinux2[192.168.20.22] raclinux2[192.168.20.52] yes

     raclinux2[192.168.20.22] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.22] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.111] raclinux2[192.168.20.100] yes

     raclinux2[192.168.20.111] raclinux2[192.168.20.112] yes

     raclinux2[192.168.20.111] raclinux2[192.168.20.52] yes

     raclinux2[192.168.20.111] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.111] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.100] raclinux2[192.168.20.112] yes

     raclinux2[192.168.20.100] raclinux2[192.168.20.52] yes

     raclinux2[192.168.20.100] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.100] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.112] raclinux2[192.168.20.52] yes

     raclinux2[192.168.20.112] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.112] raclinux1[192.168.20.51] yes

     raclinux2[192.168.20.52] raclinux1[192.168.20.21] yes

     raclinux2[192.168.20.52] raclinux1[192.168.20.51] yes

     raclinux1[192.168.20.21] raclinux1[192.168.20.51] yes

    Result: Node connectivity passed for interface “eth0″

    Check: TCP connectivity of subnet “192.168.20.0″

    Source Destination Connected?

    —————————— —————————— —————-

     raclinux2:192.168.20.22 raclinux3:192.168.20.23 passed

     raclinux2:192.168.20.22 raclinux3:192.168.20.53 passed

    raclinux2:192.168.20.22 raclinux2:192.168.20.111 passed

    raclinux2:192.168.20.22 raclinux2:192.168.20.100 passed

    raclinux2:192.168.20.22 raclinux2:192.168.20.112 passed

    raclinux2:192.168.20.22 raclinux2:192.168.20.52 passed

     raclinux2:192.168.20.22 raclinux1:192.168.20.21 passed

     raclinux2:192.168.20.22 raclinux1:192.168.20.51 passed

    Result: TCP connectivity check passed for subnet “192.168.20.0″

    Check: Node connectivity for interface “eth1″

    Source Destination Connected?

    —————————— —————————— —————-

    raclinux3[10.10.20.23] raclinux2[10.10.20.22] yes

     raclinux3[10.10.20.23] raclinux1[10.10.20.21] yes

     raclinux2[10.10.20.22] raclinux1[10.10.20.21] yes

    Result: Node connectivity passed for interface “eth1″

    Check: TCP connectivity of subnet “10.10.20.0″

    Source Destination Connected?

    —————————— —————————— —————-

    raclinux2:10.10.20.22 raclinux3:10.10.20.23 passed

     raclinux2:10.10.20.22 raclinux1:10.10.20.21 passed

    Result: TCP connectivity check passed for subnet “10.10.20.0″

    Checking subnet mask consistency…

    Subnet mask consistency check passed for subnet “192.168.20.0″.

    Subnet mask consistency check passed for subnet “10.10.20.0″.

    Subnet mask consistency check passed.

    Result: Node connectivity check passed

    Checking multicast communication…

    Checking subnet “192.168.20.0″ for multicast communication with multicast group “230.0.1.0″…

    Check of subnet “192.168.20.0″ for multicast communication with multicast group “230.0.1.0″ passed.

    Checking subnet “10.10.20.0″ for multicast communication with multicast group “230.0.1.0″…

    Check of subnet “10.10.20.0″ for multicast communication with multicast group “230.0.1.0″ passed.

    Check of multicast communication passed.

    Checking node application existence…

    Checking existence of VIP node application (required)

    Node Name Required Running? Comment

    ———— ———————— ———————— ———-

     raclinux3 yes yes passed

     raclinux2 yes yes passed

     raclinux1 yes yes passed

    VIP node application check passed

    Checking existence of NETWORK node application (required)

    Node Name Required Running? Comment

    ———— ———————— ———————— ———-

     raclinux3 yes yes passed

     raclinux2 yes yes passed

     raclinux1 yes yes passed

    NETWORK node application check passed

    Checking existence of GSD node application (optional)

    Node Name Required Running? Comment

    ———— ———————— ———————— ———-

     raclinux3 no no exists

     raclinux2 no no exists

     raclinux1 no no exists

    GSD node application is offline on nodes “raclinux3,raclinux2,raclinux1″

    Checking existence of ONS node application (optional)

    Node Name Required Running? Comment

    ———— ———————— ———————— ———-

    raclinux3 no yes passed

    raclinux2 no yes passed

    raclinux1 no yes passed

    ONS node application check passed

    Checking Single Client Access Name (SCAN)…

     SCAN Name Node Running? ListenerName Port Running?

    —————- ———— ———— ———— ———— ————

     rac-scan raclinux2 true LISTENER_SCAN1 1521 true

    Checking TCP connectivity to SCAN Listeners…

     Node ListenerName TCP connectivity?

    ———— ———————— ————————

    raclinux2 LISTENER_SCAN1 yes

    TCP connectivity to SCAN Listeners exists on all cluster nodes

    Checking name resolution setup for “rac-scan”…

    ERROR:

    PRVG-1101 : SCAN name “rac-scan” failed to resolve

    SCAN Name IP Address Status Comment

    ———— ———————— ———————— ———-

     rac-scan 192.168.20.100 failed NIS Entry

    ERROR:

    PRVF-4657 : Name resolution setup check for “rac-scan” (IP address: 192.168.20.100) failed

    ERROR:

    PRVF-4664 : Found inconsistent name resolution entries for SCAN name “rac-scan”

    Verification of SCAN VIP and Listener setup failed

    Checking to make sure user “grid” is not in “root” group

    Node Name Status Comment

    ———— ———————— ————————

    raclinux3 passed does not exist

    Result: User “grid” is not part of “root” group. Check passed

    Checking if Clusterware is installed on all nodes…

    Check of Clusterware install passed

    Checking if CTSS Resource is running on all nodes…

    Check: CTSS Resource running on all nodes

    Node Name Status

    ———————————— ————————

    raclinux3 passed

    Result: CTSS resource check passed

    Querying CTSS for time offset on all nodes…

    Result: Query of CTSS for time offset passed

    Check CTSS state started…

    Check: CTSS state

    Node Name State

    ———————————— ————————

    raclinux3 Observer

    CTSS is in Observer state. Switching over to clock synchronization checks using NTP

    Starting Clock synchronization checks using Network Time Protocol(NTP)…

    NTP Configuration file check started…

    The NTP configuration file “/etc/ntp.conf” is available on all nodes

    NTP Configuration file check passed

    Checking daemon liveness…

    Check: Liveness for “ntpd”

    Node Name Running?

    ———————————— ————————

    raclinux3 yes

    Result: Liveness check passed for “ntpd”

    Check for NTP daemon or service alive passed on all nodes

    Checking NTP daemon command line for slewing option “-x”

    Check: NTP daemon command line

     Node Name Slewing Option Set?

    ———————————— ————————

    raclinux3 yes

    Result:

    NTP daemon slewing option check passed

    Checking NTP daemon’s boot time configuration, in file “/etc/sysconfig/ntpd”, for slewing option “-x”

    Check: NTP daemon’s boot time configuration

     Node Name Slewing Option Set?

    ———————————— ————————

    raclinux3 yes

    Result:

    NTP daemon’s boot time configuration check for slewing option passed

    Checking whether NTP daemon or service is using UDP port 123 on all nodes

    Check for NTP daemon or service using UDP port 123

    Node Name Port Open?

    ———————————— ————————

    raclinux3 yes

    NTP common Time Server Check started…

    NTP Time Server “.LOCL.” is common to all nodes on which the NTP daemon is running

    Check of common NTP Time Server passed

    Clock time offset check from NTP Time Server started…

    Checking on nodes “[raclinux3]“…

    Check: Clock time offset from NTP Time Server

    Time Server: .LOCL.

    Time Offset Limit: 1000.0 msecs

    Node Name Time Offset Status

    ———— ———————— ————————

    raclinux3 0.0 passed

    Time Server “.LOCL.” has time offsets that are within permissible limits for nodes “[raclinux3]“.

    Clock time offset check passed

    Result: Clock synchronization check using Network Time Protocol(NTP) passed

    Oracle Cluster Time Synchronization Services check passed

    Post-check for node addition was unsuccessful on all the nodes.

    [grid@raclinux2 gpnp]$

  6. Conclusion

    For some reason the cloning process asked for the binaries to be re-linked, despite the fact that a cluster-enabled home was used as the source. In the end, after root.sh completed, Oracle Clusterware was up and running on the new node.

  2. Clone RDBMS home to the new node

    Cloning the RDBMS home is conceptually similar to cloning the GI home and involves the steps described below. The new node is raclinux3 and the target directory is /u01/app/oracle/product/11.2.0/db_3.

    1. Prepare a stage directory and a tar ball

      [root@raclinux2 bin]# mkdir -p /u01/stageRDBMS

      [root@raclinux2 bin]# chmod -R 775 /u01/stageRDBMS

      [root@raclinux2 bin]# chown -R oracle:oinstall /u01/stageRDBMS

      [root@raclinux2 bin]#

      [root@raclinux2 stageRDBMS]# cp -prf /u01/app/oracle/product/11.2.0/db_3 /u01/stageRDBMS

      [root@raclinux2 stageRDBMS]# cd /u01/app/oracle/product/11.2.0/db_3

      [root@raclinux2 db_3]#

      [root@raclinux2 db_3]# tar -cvf /tmp/tarRDBMS.tar .

      [root@raclinux2 db_3]#

    2. Prepare the destination directory on raclinux3 (see the sketch below), extract the tar ball into it, and set ownership and permissions
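
    The captured output does not show the destination directory being created on raclinux3. A minimal preparation, assuming the same path layout as the source, might look like this (as root on raclinux3):

    mkdir -p /u01/app/oracle/product/11.2.0/db_3

    chown oracle:oinstall /u01/app/oracle/product/11.2.0/db_3

    Then, as the oracle user, change into the new directory and extract the tar ball: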

    [oracle@raclinux3 db_3]$ tar -xvf /tmp/tarRDBMS.tar

    [root@raclinux3 db_3]# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_3

    [root@raclinux3 db_3]# chmod u+s /u01/app/oracle/product/11.2.0/db_3/bin/oracle

    [root@raclinux3 db_3]# chmod g+s /u01/app/oracle/product/11.2.0/db_3/bin/oracle

    [root@raclinux3 db_3]#

    3. Prepare a script, for example named start_clone.sh, that calls clone.pl

      Log in as the oracle user on raclinux3 and create a start_clone.sh script with the following content.

      ORACLE_BASE=/u01/app/oracle

      ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_3

      cd $ORACLE_HOME/clone/bin

      THISNODE=`hostname -s`

      E01=ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_3

      E02=ORACLE_HOME_NAME=OraDb11g_home1

      E03=ORACLE_BASE=/u01/app/oracle

      C01="CLUSTER_NODES={raclinux1,raclinux2,raclinux3}"

      ##C01="-O'CLUSTER_NODES={raclinux1, raclinux2, raclinux3}'"

      C02="LOCAL_NODE=raclinux3"

      perl $ORACLE_HOME/clone/bin/clone.pl $E01 $E02 $E03 $C01 $C02
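
      Make the script executable before running it (if it is not already):

      [oracle@raclinux3 bin]$ chmod u+x start_clone.sh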

    4. While logged in as the oracle user on raclinux3, execute start_clone.sh

      [oracle@raclinux3 bin]$ ./start_clone.sh

      ./runInstaller -clone -waitForCompletion "ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_3" "ORACLE_HOME_NAME=OraDb11g_home1" "ORACLE_BASE=/u01/app/oracle" "CLUSTER_NODES={raclinux1,raclinux2,raclinux3}" "LOCAL_NODE=raclinux3" -silent -noConfig -nowait

      Starting Oracle Universal Installer…

      Checking swap space: must be greater than 500 MB. Actual 9818 MB Passed

      Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-11-16_03-07-49AM. Please wait …Oracle Universal Installer, Version 11.2.0.3.0 Production

      Copyright (C) 1999, 2011, Oracle. All rights reserved.

      You can find the log of this install session at:

       /u01/app/oraInventory/logs/cloneActions2011-11-16_03-07-49AM.log

      …..

      Performing tests to see whether nodes raclinux1,raclinux2 are available

      ……………………………………………………… 100% Done.

      Installation in progress (Wednesday, November 16, 2011 3:11:03 AM EET)

      …………………………………………………………………….. 80% Done.

      Install successful

      Linking in progress (Wednesday, November 16, 2011 3:11:17 AM EET)

      Link successful

      Setup in progress (Wednesday, November 16, 2011 3:13:51 AM EET)

      Setup successful

      End of install phases.(Wednesday, November 16, 2011 3:14:40 AM EET)

      WARNING:

      The following configuration scripts need to be executed as the “root” user in each new cluster node. Each script in the list below is followed by a list of nodes.

      /u01/app/oracle/product/11.2.0/db_3/root.sh #On nodes raclinux3

      To execute the configuration scripts:

      1. Open a terminal window

      2. Log in as “root”

      3. Run the scripts in each cluster node

      The cloning of OraDb11g_home1 was successful.

      Please check ‘/u01/app/oraInventory/logs/cloneActions2011-11-16_03-07-49AM.log’ for more details.

      [oracle@raclinux3 bin]$

    5. Log in as root on raclinux3 and execute root.sh

      [root@raclinux3 bin]# /u01/app/oracle/product/11.2.0/db_3/root.sh

      Check /u01/app/oracle/product/11.2.0/db_3/install/root_raclinux3.gj.com_2011-11-16_03-21-22.log for the output of root script

      [root@raclinux3 bin]#

      [root@raclinux3 bin]# cat /u01/app/oracle/product/11.2.0/db_3/install/root_raclinux3.gj.com_2011-11-16_03-21-22.log

      Performing root user operation for Oracle 11g

      The following environment variables are set as:

      ORACLE_OWNER= oracle

       ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_3

      Entries will be added to the /etc/oratab file as needed by

      Database Configuration Assistant when a database is created

      Finished running generic part of root script.

      Now product-specific root actions will be performed.

      Finished product-specific root actions.

      [root@raclinux3 bin]#

  3. Manually add an instance to the new node

    While DBCA and OEM can be used to add an instance on raclinux3, this section explains how to add the instance manually.

    All steps are to be executed as the oracle user from any node of the cluster, for example raclinux2.
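
    The SQL*Plus steps below assume a SYSDBA connection to one of the existing instances; for example, on raclinux2 (instance RACDB2, environment values as used throughout this article):

    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_3

    export ORACLE_SID=RACDB2

    $ORACLE_HOME/bin/sqlplus / as sysdba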

    1. Add an instance RACDB3 to the RACDB database on node raclinux3

      [oracle@raclinux1 ~]$ srvctl add instance -d racdb -i RACDB3 -n raclinux3

      [oracle@raclinux1 ~]$

    2. Set config parameters

      SQL> show parameter create

      NAME TYPE VALUE

      ———————————— ———– ——————————

      create_bitmap_area_size integer 8388608

      create_stored_outlines string

      db_create_file_dest string +DATA

      db_create_online_log_dest_1 string

      db_create_online_log_dest_2 string

      db_create_online_log_dest_3 string

      db_create_online_log_dest_4 string

      db_create_online_log_dest_5 string

      SQL>

      SQL> alter system set thread=3 scope=both sid='RACDB3';

      System altered.

      SQL>

      SQL> alter system set instance_number=3 scope=spfile sid='RACDB3';

      System altered.

      SQL>

      SQL> alter system set UNDO_TABLESPACE=UNDOTBS3 scope=both sid='RACDB3';

      System altered.

      SQL>
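
      Note that the undo tablespace UNDOTBS3 referenced above must exist before the RACDB3 instance is started. If it has not been created yet, a statement along these lines creates it (the size is only illustrative; with db_create_file_dest set to +DATA the datafile is OMF-managed):

      SQL> create undo tablespace UNDOTBS3 datafile size 500M autoextend on;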

    3. Create and enable redo log thread for the RACDB3 instance

      SQL> alter database add logfile thread 3 group 8;

      Database altered.

      SQL> alter database add logfile thread 3 group 9;

      Database altered.

      SQL> alter database add logfile thread 3 group 10;

      Database altered.

      SQL>

      SQL> alter database enable thread 3;

      Database altered.

      SQL>
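
      An optional sanity check is to confirm that the thread 3 log groups are now visible, for example:

      SQL> select thread#, group#, bytes/1024/1024 mb, status from v$log order by thread#, group#;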

    4. Start instance RACDB3 on raclinux3

      [oracle@raclinux1 ~]$ srvctl start instance -d racdb -i RACDB3

      [oracle@raclinux1 ~]$

    5. Verify the instance addition

      [oracle@raclinux3 ~]$ srvctl status database -d racdb

      Instance RACDB1 is running on node raclinux1

      Instance RACDB2 is running on node raclinux2

      Instance RACDB3 is running on node raclinux3

      [oracle@raclinux3 ~]$

      [oracle@raclinux3 ~]$ srvctl config database -d racdb

      Database unique name: RACDB

      Database name: RACDB

      Oracle home: /u01/app/oracle/product/11.2.0/db_3

      Oracle user: oracle

      Spfile: +DATA/racdb/spfileracdb.ora

      Domain:

      Start options: open

      Stop options: immediate

      Database role: PRIMARY

      Management policy: AUTOMATIC

      Server pools: RACDB

      Database instances: RACDB1,RACDB2,RACDB3

      Disk Groups: DATA,DATADG

      Mount point paths:

      Services:

      Type: RAC

      Database is administrator managed

      [oracle@raclinux3 ~]$

      SQL> select * from v$active_instances;

      INST_NUMBER INST_NAME

      ———– ————————————————————

       1 raclinux1.gj.com:RACDB1

       2 raclinux2.gj.com:RACDB2

      3 raclinux3.gj.com:RACDB3

      SQL>

  4. Summary

In this article you looked at a practical use of clone.pl to clone the GI and RDBMS homes in a RAC environment in order to extend the cluster to a new node. You also had a glimpse at how to manually add an instance on the new RAC cluster node.

November 17, 2011 - Posted by | oracle
