Guenadi N Jilevski's Oracle BLOG

Oracle RAC, DG, EBS, DR and HA DBA BLOG

Upgrade Oracle RAC to 11.2.0.3 from 11.2.0.2 on Linux

In this article you will look at the steps to upgrade a two-node Oracle 11.2.0.2 RAC cluster on Linux to Oracle 11.2.0.3. You will check the prerequisites for the upgrade, upgrade the GI software, upgrade the RDBMS software and, last but not least, upgrade the database. The setup consists of a two-node Oracle cluster running 11.2.0.2 on Linux, as configured here.

I expected the Oracle 11.2.0.3 patch set, which is distributed as a complete installation (see here for a complete 11.2.0.3 installation), to perform an upgrade without asking for prerequisite patches, that is, to include all prerequisite patches and upgrade straight away from the previous 11.2.0.2 patch set. However, during the GI upgrade I was stopped by the OUI asking for Oracle patch 12539000.

Oracle patch:12539000 – This test ensures that Oracle patch “12539000” has been applied in home “/u01/app/11.2.0/grid”.

 Check Failed on Nodes: [raclinux2,  raclinux1]

Verification result of failed node: raclinux2

 Details:

 –

PRVG-1253 : Required Oracle patch is not found on node “raclinux2” in home “/u01/app/11.2.0/grid”.  – Cause:  Required Oracle patch is not applied.  – Action:  Apply the required Oracle patch.


Verification result of failed node: raclinux1

 Details:

 –

PRVG-1253 : Required Oracle patch is not found on node “raclinux1” in home “/u01/app/11.2.0/grid”.  – Cause:  Required Oracle patch is not applied.  – Action:  Apply the required Oracle patch.



The upgrade process involves the following steps:

  1. Meet the 11.2.0.3 prerequisites
  2. Patch GI to 11.2.0.3
  3. Patch RDBMS to 11.2.0.3
  4. Patch the database with dbua

You need to download the following patches:

  • Patch 10404530: Oracle patch set 11.2.0.3
  • Patch 12539000: prerequisite for 11.2.0.3
  • Patch 6880880: Latest OPatch version
  1. Meet the 11.2.0.3 prerequisites

The prerequisite checks involve the following steps:

  • Install patches 12539000 and 6880880.
  • Create GI_HOME_11.2.0.3 directories and set ownership and permissions.
  • Run the Cluster Verification Utility to check the prerequisites for the upgrade.

    Download patch 12539000 and patch 6880880 (the latest OPatch) from MOS. Unzip patch 12539000 into a staging area on each node. Unzip patch 6880880 into $GI_HOME and $RDBMS_HOME on each node to overwrite the existing OPatch directory there, for example as sketched below.
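    A minimal sketch of the staging steps, assuming the zip files were downloaded to /tmp and /u01/stage11203 is used as the staging area (the paths and zip file names are illustrative only):

    # stage the prerequisite patch (repeat on every node)
    mkdir -p /u01/stage11203
    cd /u01/stage11203
    unzip /tmp/p12539000_112020_Linux-x86-64.zip
    # refresh OPatch in each home from patch 6880880; run as the owner of the home (grid/oracle)
    cd /u01/app/11.2.0/grid && unzip -o /tmp/p6880880_112000_Linux-x86-64.zip
    cd /u01/app/oracle/product/11.2.0/db_1 && unzip -o /tmp/p6880880_112000_Linux-x86-64.zip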

    In order to install patch 12539000, open a terminal session as root and set the PATH variable to include $GI_HOME/OPatch, for example as shown below. Patch 12539000 can be applied in a rolling manner, whereby one node is brought down for patching while the others keep running. Since I have two nodes, I will apply the patch on the first node and, once it has been applied successfully and Oracle is back up, apply it on the second node.
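    A minimal sketch of the root session setup, using the GI home path from this configuration:

    # as root, put the GI home's OPatch first on the PATH
    export PATH=/u01/app/11.2.0/grid/OPatch:$PATH
    which opatch    # should resolve to /u01/app/11.2.0/grid/OPatch/opatch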

    I hit a problem even after upgrading to the latest OPatch version, 11.2.0.1.8.

    [root@raclinux1 12539000]# opatch auto /u01/stage11203/12539000

    Executing /usr/bin/perl /u01/app/11.2.0/grid/OPatch/crs/patch112.pl -patchdir /u01/stage11203 -patchn 12539000 -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params

    opatch auto log file location is /u01/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2011-11-07_22-33-31.log

    Detected Oracle Clusterware install

    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

    OPatch is bundled with OCM, Enter the absolute OCM response file path:

    /u01/app/11.2.0/grid/OPatch/ocm/ocm.zip

    Enter ‘yes’ if you have unzipped this patch to an empty directory to proceed (yes/no):yes

    The opatch minimum version check for patch /u01/stage11203/12539000/etc failed for /u01/app/oracle/product/11.2.0/db_1

    The opatch minimum version check for patch /u01/stage11203/12539000/files failed for /u01/app/oracle/product/11.2.0/db_1

    Opatch version check failed for oracle home /u01/app/oracle/product/11.2.0/db_1

    The opatch minimum version check for patch /u01/stage11203/12539000/etc failed for /u01/app/oracle/product/10.2.0/db_1

    The opatch minimum version check for patch /u01/stage11203/12539000/files failed for /u01/app/oracle/product/10.2.0/db_1

    Opatch version check failed for oracle home /u01/app/oracle/product/10.2.0/db_1

    Opatch version check failed

    update the opatch version for the failed homes and retry

    [root@raclinux1 12539000]#

    Looking at the README.txt, Oracle advises the following workaround:

    Workaround: Apply this patch to the 11202 GI home and Oracle database homes as follows:

    #opatch auto -oh <11202 GI_HOME_PATH>

    #opatch auto -oh <11202 ORACLE_HOME1_PATH>,<11202 ORACLE_HOME2_PATH>

    I split the patching process into two phases:

    Patching GI home with 12539000

    As this is a rolling patch, execute the following command on the first node and wait for Oracle Clusterware to be restarted on that node.

    [root@raclinux1 OPatch]# ./opatch auto /u01/stage11203 -oh /u01/app/11.2.0/grid

    After patching on the first node has succeeded, execute the following command on the second node and wait for it to complete.

    [root@raclinux2 OPatch]# ./opatch auto /u01/stage11203 -oh /u01/app/11.2.0/grid

    For detailed output see Annex 1A.

    Patching RDBMS_11.2 home with 12539000

    Proceed with a rolling application of the patch.

    On the first node execute

    [root@raclinux1 OPatch]# ./opatch auto /u01/stage11203 -oh /u01/app/oracle/product/11.2.0/db_1

    Once the first node completes execute on the second node

    [root@raclinux2 OPatch]# ./opatch auto /u01/stage11203 -oh /u01/app/oracle/product/11.2.0/db_1

    For detailed output see Annex 1B.

    Create GI_HOME 11.2.0.3 directories and set ownership and permissions on each cluster node.

    mkdir -p /u01/app/11.2.0.3/grid

    chown -R grid:oinstall /u01/app/11.2.0.3/grid

    chmod -R 775 /u01/app/11.2.0.3/grid

    Run the Cluster Verification Utility from the staging directory of the new 11.2.0.3 grid software.

    [grid@raclinux1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -n all -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/11.2.0.3 -dest_version 11.2.0.3.0

    For detailed output see Annex 1C.

  2. Patch GI to 11.2.0.3

    Download and unzip patch 10404530. Invoke the OUI from the grid software staging directory of patch 10404530, as sketched below.
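    For illustration, assuming the 11.2.0.3 grid software was unzipped under /u01/stage11203/grid (the path is an assumption), the installer is started as the grid user on the first node:

    # as the grid user
    cd /u01/stage11203/grid
    ./runInstaller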

    Select Skip software updates and press Next to continue.


    Select Upgrade GI and press Next.


    Select Next.


    Select the groups and press Next.


    Enter the new location for GI 11.2.0.3 and press Next to continue.


    Run a fixup script if necessary.


    Once the problem is resolved press Next.


    Press Install.


    Wait for the installation to complete.


    If you experience the above error, open a second terminal session and run the command below as the grid user.

    [grid@raclinux2 stage11203]$ /u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome2 CLUSTER_NODES=raclinux1,raclinux2 “INVENTORY_LOCATION=/u01/app/oraInventory” LOCAL_NODE=raclinux2

    You do not have sufficient permissions to access the inventory ‘/u01/app/oraInventory/oui’. Installation cannot continue. It is required that the primary group of the install user is same as the inventory owner group. Make sure that the install user is part of the inventory owner group and restart the installer.: Permission denied
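    The complaint is about the installing user's primary group not matching the inventory owner group. A quick way to check and, if needed, correct it as root is sketched below (the fix shown is an example only); once the permissions are consistent, rerun the attachHome command:

    # check the inventory ownership and the grid user's primary group
    ls -ld /u01/app/oraInventory
    id grid
    # as root, make the inventory group match the install user's primary group (example only)
    chgrp -R oinstall /u01/app/oraInventory
    chmod -R g+rw /u01/app/oraInventory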

    [grid@raclinux2 stage11203]$ /u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome2 CLUSTER_NODES=raclinux1,raclinux2 “INVENTORY_LOCATION=/u01/app/oraInventory” LOCAL_NODE=raclinux2

    Starting Oracle Universal Installer…

    Checking swap space: must be greater than 500 MB. Actual 9873 MB Passed

    The inventory pointer is located at /etc/oraInst.loc

    The inventory is located at /u01/app/oraInventory

    Please execute the ‘null’ script at the end of the session.

    ‘AttachHome’ was successful.

    [grid@raclinux2 stage11203]$


    Run the rootupgrade.sh script on each node in turn, starting with the first node.

    [root@raclinux1 OPatch]# /u01/app/11.2.0.3/grid/rootupgrade.sh

    Performing root user operation for Oracle 11g

    The following environment variables are set as:

    ORACLE_OWNER= grid

     ORACLE_HOME= /u01/app/11.2.0.3/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The contents of “dbhome” have not changed. No need to overwrite.

    The contents of “oraenv” have not changed. No need to overwrite.

    The contents of “coraenv” have not changed. No need to overwrite.

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root script.

    Now product-specific root actions will be performed.

    Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params

    Creating trace directory

    ASM upgrade has started on first node.

    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.crsd’ on ‘raclinux1’

    CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.LISTENER_SCAN1.lsnr’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.DATADG.dg’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.RAC10G.RAC10G1.inst’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.registry.acfs’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.LISTENER.lsnr’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.cvu’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.oc4j’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.RAC10G.db’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.cvu’ on ‘raclinux1’ succeeded

    CRS-2672: Attempting to start ‘ora.cvu’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.LISTENER.lsnr’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.raclinux1.vip’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.LISTENER_SCAN1.lsnr’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.scan1.vip’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.scan1.vip’ on ‘raclinux1’ succeeded

    CRS-2672: Attempting to start ‘ora.scan1.vip’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.raclinux1.vip’ on ‘raclinux1’ succeeded

    CRS-2672: Attempting to start ‘ora.raclinux1.vip’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.registry.acfs’ on ‘raclinux1’ succeeded

    CRS-2676: Start of ‘ora.scan1.vip’ on ‘raclinux2’ succeeded

    CRS-2672: Attempting to start ‘ora.LISTENER_SCAN1.lsnr’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.RAC10G.db’ on ‘raclinux1’ succeeded

    CRS-2672: Attempting to start ‘ora.RAC10G.db’ on ‘raclinux2’

    CRS-2676: Start of ‘ora.raclinux1.vip’ on ‘raclinux2’ succeeded

    CRS-2676: Start of ‘ora.cvu’ on ‘raclinux2’ succeeded

    CRS-2676: Start of ‘ora.LISTENER_SCAN1.lsnr’ on ‘raclinux2’ succeeded

    CRS-2676: Start of ‘ora.RAC10G.db’ on ‘raclinux2’ succeeded

    CRS-2677: Stop of ‘ora.RAC10G.RAC10G1.inst’ on ‘raclinux1’ succeeded

    CRS-2677: Stop of ‘ora.oc4j’ on ‘raclinux1’ succeeded

    CRS-2672: Attempting to start ‘ora.oc4j’ on ‘raclinux2’

    CRS-2676: Start of ‘ora.oc4j’ on ‘raclinux2’ succeeded

    CRS-2677: Stop of ‘ora.DATADG.dg’ on ‘raclinux1’ succeeded

    CRS-2677: Stop of ‘ora.DATA.dg’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.asm’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.asm’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.ons’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.ons’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.net1.network’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.net1.network’ on ‘raclinux1’ succeeded

    CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘raclinux1’ has completed

    CRS-2677: Stop of ‘ora.crsd’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.evmd’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.asm’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.asm’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.evmd’ on ‘raclinux1’ succeeded

    CRS-2677: Stop of ‘ora.mdnsd’ on ‘raclinux1’ succeeded

    CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘raclinux1’ succeeded

    CRS-2677: Stop of ‘ora.ctssd’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.cssd’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.cssd’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.diskmon’ on ‘raclinux1’

    CRS-2673: Attempting to stop ‘ora.crf’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.diskmon’ on ‘raclinux1’ succeeded

    CRS-2677: Stop of ‘ora.crf’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.gipcd’ on ‘raclinux1’ succeeded

    CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.gpnpd’ on ‘raclinux1’ succeeded

    CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘raclinux1’ succeeded

    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘raclinux1’ has completed

    CRS-4133: Oracle High Availability Services has been stopped.

    OLR initialization – successful

    Replacing Clusterware entries in inittab

    clscfg: EXISTING configuration version 5 detected.

    clscfg: version 5 is 11g Release 2.

    Successfully accumulated necessary OCR keys.

    Creating OCR keys for user ‘root’, privgrp ‘root’..

    Operation successful.

    Configure Oracle Grid Infrastructure for a Cluster … succeeded

    [root@raclinux1 OPatch]#

    [root@raclinux2 grid]# ./rootupgrade.sh

    Performing root user operation for Oracle 11g

    The following environment variables are set as:

    ORACLE_OWNER= grid

     ORACLE_HOME= /u01/app/11.2.0.3/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The contents of “dbhome” have not changed. No need to overwrite.

    The contents of “oraenv” have not changed. No need to overwrite.

    The contents of “coraenv” have not changed. No need to overwrite.

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root script.

    Now product-specific root actions will be performed.

    Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params

    Creating trace directory

    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.crsd’ on ‘raclinux2’

    CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.DATADG.dg’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.RAC10G.RAC10G2.inst’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.registry.acfs’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.LISTENER_SCAN1.lsnr’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.cvu’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.oc4j’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.RAC10G.db’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.LISTENER.lsnr’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.cvu’ on ‘raclinux2’ succeeded

    CRS-2672: Attempting to start ‘ora.cvu’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.LISTENER.lsnr’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.raclinux2.vip’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.LISTENER_SCAN1.lsnr’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.scan1.vip’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.scan1.vip’ on ‘raclinux2’ succeeded

    CRS-2672: Attempting to start ‘ora.scan1.vip’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.raclinux2.vip’ on ‘raclinux2’ succeeded

    CRS-2672: Attempting to start ‘ora.raclinux2.vip’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.registry.acfs’ on ‘raclinux2’ succeeded

    CRS-2676: Start of ‘ora.scan1.vip’ on ‘raclinux1’ succeeded

    CRS-2676: Start of ‘ora.raclinux2.vip’ on ‘raclinux1’ succeeded

    CRS-2672: Attempting to start ‘ora.LISTENER_SCAN1.lsnr’ on ‘raclinux1’

    CRS-2677: Stop of ‘ora.RAC10G.db’ on ‘raclinux2’ succeeded

    CRS-2672: Attempting to start ‘ora.RAC10G.db’ on ‘raclinux1’

    CRS-2676: Start of ‘ora.cvu’ on ‘raclinux1’ succeeded

    CRS-2676: Start of ‘ora.RAC10G.db’ on ‘raclinux1’ succeeded

    CRS-2676: Start of ‘ora.LISTENER_SCAN1.lsnr’ on ‘raclinux1’ succeeded

    CRS-2677: Stop of ‘ora.RAC10G.RAC10G2.inst’ on ‘raclinux2’ succeeded

    CRS-2677: Stop of ‘ora.oc4j’ on ‘raclinux2’ succeeded

    CRS-2672: Attempting to start ‘ora.oc4j’ on ‘raclinux1’

    CRS-2676: Start of ‘ora.oc4j’ on ‘raclinux1’ succeeded

    CRS-2677: Stop of ‘ora.DATA.dg’ on ‘raclinux2’ succeeded

    CRS-2677: Stop of ‘ora.DATADG.dg’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.asm’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.asm’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.ons’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.ons’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.net1.network’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.net1.network’ on ‘raclinux2’ succeeded

    CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘raclinux2’ has completed

    CRS-2677: Stop of ‘ora.crsd’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.evmd’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.asm’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.asm’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.evmd’ on ‘raclinux2’ succeeded

    CRS-2677: Stop of ‘ora.ctssd’ on ‘raclinux2’ succeeded

    CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.cssd’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.mdnsd’ on ‘raclinux2’ succeeded

    CRS-2677: Stop of ‘ora.cssd’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.diskmon’ on ‘raclinux2’

    CRS-2673: Attempting to stop ‘ora.crf’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.crf’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.diskmon’ on ‘raclinux2’ succeeded

    CRS-2677: Stop of ‘ora.gipcd’ on ‘raclinux2’ succeeded

    CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘raclinux2’

    CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘raclinux2’ succeeded

    CRS-2677: Stop of ‘ora.gpnpd’ on ‘raclinux2’ succeeded

    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘raclinux2’ has completed

    CRS-4133: Oracle High Availability Services has been stopped.

    OLR initialization – successful

    Replacing Clusterware entries in inittab

    clscfg: EXISTING configuration version 5 detected.

    clscfg: version 5 is 11g Release 2.

    Successfully accumulated necessary OCR keys.

    Creating OCR keys for user ‘root’, privgrp ‘root’..

    Operation successful.

    Started to upgrade the Oracle Clusterware. This operation may take a few minutes.

    Started to upgrade the CSS.

    The CSS was successfully upgraded.

    Started to upgrade the CRS.

    The CRS was successfully upgraded.

    Oracle Clusterware operating version was successfully set to 11.2.0.3.0

    ASM upgrade has finished on last node.

    PRKO-2116 : OC4J is already enabled

    Configure Oracle Grid Infrastructure for a Cluster … succeeded

    [root@raclinux2 grid]#

    The error is raised because the SCAN is configured in /etc/hosts rather than in DNS, so it can be ignored; a quick way to confirm this is shown below. Press Next to continue.
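    For example (the SCAN name rac-scan is illustrative; substitute the SCAN name used in your cluster):

    # the SCAN entry lives in /etc/hosts on this cluster
    grep -i scan /etc/hosts
    # nslookup fails for a SCAN that is not registered in DNS, which is what triggers the warning
    nslookup rac-scan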


    Press Close.


  3. Patch RDBMS to 11.2.0.3

    Run the OUI from the database software staging directory, as sketched below.
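    Assuming the database software from patch 10404530 was unzipped under /u01/stage11203/database (the path is an assumption), start the installer as the oracle user on the first node:

    # as the oracle user
    cd /u01/stage11203/database
    ./runInstaller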

    Press Next to continue.


    Select Skip software updates and press Next to continue.


    Select Install software only and press Next to continue.


    Select Oracle Real Application Clusters database installation, select all nodes and press Next to continue.


    Select language and press Next.


    Select Enterprise Edition and press Next to continue.


    Specify the ORACLE_BASE and the new ORACLE_HOME location and press Next to continue.


    Select the groups and press Next to continue.


    Ignore the SCAN message and press Next to continue.


    Press Install.


    Wait for the prompt to run the root scripts to pop up.


    Run the scripts. The output will be similar to the one shown below.

    [root@raclinux1 OPatch]# /u01/app/oracle/product/11.2.0/db_3/root.sh

    Performing root user operation for Oracle 11g

    The following environment variables are set as:

    ORACLE_OWNER= oracle

     ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_3

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The contents of “dbhome” have not changed. No need to overwrite.

    The contents of “oraenv” have not changed. No need to overwrite.

    The contents of “coraenv” have not changed. No need to overwrite.

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root script.

    Now product-specific root actions will be performed.

    Finished product-specific root actions.

    [root@raclinux1 OPatch]#

    [root@raclinux2 db_3]# ./root.sh

    Performing root user operation for Oracle 11g

    The following environment variables are set as:

    ORACLE_OWNER= oracle

     ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_3

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The contents of “dbhome” have not changed. No need to overwrite.

    The contents of “oraenv” have not changed. No need to overwrite.

    The contents of “coraenv” have not changed. No need to overwrite.

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root script.

    Now product-specific root actions will be performed.

    Finished product-specific root actions.

    [root@raclinux2 db_3]#

    Press Close.


  4. Patch the database with dbua

Stop the database to be patched on all nodes and invoke dbua from the new ORACLE_HOME, as sketched below. On the first screen press Next.
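A minimal sketch, assuming the database name RACDB used later in the wizard and the new 11.2.0.3 home created above (run as the oracle user):

# stop the database using the existing 11.2.0.2 home
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
$ORACLE_HOME/bin/srvctl stop database -d RACDB
# start DBUA from the new 11.2.0.3 home
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_3
$ORACLE_HOME/bin/dbua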


Select the database (RACDB) to be patched and press Next.


Press Yes to continue with the upgrade.


Press Next.


Select the Fast Recovery Area (FRA) location and size and press Next to continue.


Review the summary and press Finish if you are satisfied; otherwise press Back until you reach the screen where the setting can be fixed.


Wait for the upgrade to complete.


At the end you have an opportunity to review the results. Press Close.


Verify that the upgrade was successful.

[root@raclinux2 bin]# ./crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.2.0.3.0]

[root@raclinux2 bin]# ./crsctl query crs releaseversion

Oracle High Availability Services release version on the local node is [11.2.0.3.0]

[root@raclinux2 bin]#
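Optionally, also confirm the software version installed in the new GI home and the overall status of the cluster resources from the same directory:

./crsctl query crs softwareversion

./crsctl stat res -t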

SQL> select * from gv$version;

INST_ID BANNER

———- ——————————————————————————–

 1 Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

1 PL/SQL Release 11.2.0.3.0 – Production

1 CORE 11.2.0.3.0 Production

 1 TNS for Linux: Version 11.2.0.3.0 – Production

 1 NLSRTL Version 11.2.0.3.0 – Production

 2 Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

2 PL/SQL Release 11.2.0.3.0 – Production

2 CORE 11.2.0.3.0 Production

 2 TNS for Linux: Version 11.2.0.3.0 – Production

 2 NLSRTL Version 11.2.0.3.0 – Production

10 rows selected.

SQL>
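Optionally, the component registry in the upgraded database can also be checked. A minimal sketch, run as the oracle user from the new home (the instance name RACDB1 is assumed for illustration):

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_3
export ORACLE_SID=RACDB1    # instance name assumed for illustration
$ORACLE_HOME/bin/sqlplus -S / as sysdba <<'EOF'
SELECT comp_name, version, status FROM dba_registry;
EOF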

Annex 1A

[root@raclinux1 OPatch]# ./opatch auto /u01/stage11203 -oh /u01/app/11.2.0/grid

Executing /usr/bin/perl ./crs/patch112.pl -patchdir /u01 -patchn stage11203 -oh /u01/app/11.2.0/grid -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params

opatch auto log file location is /u01/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2011-11-07_22-51-51.log

Detected Oracle Clusterware install

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

OPatch is bundled with OCM, Enter the absolute OCM response file path:

/u01/app/11.2.0/grid/OPatch/ocm/ocm.zip

Enter ‘yes’ if you have unzipped this patch to an empty directory to proceed (yes/no):yes

The opatch minimum version check for patch /u01/stage11203/client failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/database failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/deinstall failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/examples failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/gateways failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/grid failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/OPatch failed for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

Successfully unlock /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

patch /u01/stage11203/12539000 apply successful for home /u01/app/11.2.0/grid

patch /u01/stage11203/client apply failed for home /u01/app/11.2.0/grid

ACFS-9300: ADVM/ACFS distribution files found.

ACFS-9312: Existing ADVM/ACFS installation detected.

ACFS-9314: Removing previous ADVM/ACFS installation.

ACFS-9315: Previous ADVM/ACFS components successfully removed.

ACFS-9307: Installing requested ADVM/ACFS software.

ACFS-9308: Loading installed ADVM/ACFS drivers.

ACFS-9321: Creating udev for ADVM/ACFS.

ACFS-9323: Creating module dependencies – this may take some time.

ACFS-9327: Verifying ADVM/ACFS devices.

ACFS-9309: ADVM/ACFS installation correctness verified.

CRS-4123: Oracle High Availability Services has been started.

[root@raclinux1 OPatch]#

[grid@raclinux1 OPatch]$ ./opatch lsinventory

Invoking OPatch 11.2.0.1.8

Oracle Interim Patch Installer version 11.2.0.1.8

Copyright (c) 2011, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/11.2.0/grid

Central Inventory : /u01/app/oraInventory

from : /etc/oraInst.loc

OPatch version : 11.2.0.1.8

OUI version : 11.2.0.2.0

Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2011-11-07_23-04-12PM.log

Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2011-11-07_23-04-12PM.txt

——————————————————————————–

Installed Top-level Products (1):

Oracle Grid Infrastructure 11.2.0.2.0

There are 1 products installed in this Oracle Home.

Interim patches (3) :

Patch 12539000 : applied on Mon Nov 07 22:58:12 EET 2011

Unique Patch ID: 13976979

 Created on 28 Jul 2011, 12:37:42 hrs PST8PDT

Bugs fixed:

12539000

Patch 11724916 : applied on Wed Jul 27 01:22:07 EEST 2011

Unique Patch ID: 13762085

 Created on 1 Apr 2011, 07:09:05 hrs PST8PDT

Bugs fixed:

10151017, 10158965, 11724916, 10190642, 10129643, 10018789, 9744252

10248523, 9956713, 10356513, 9715581, 9770451, 10170431, 10425676

10222719, 9591812, 10127360, 10094201, 11069199, 10245086, 10205230

10052141, 11818335, 11830776, 11830777, 9905049, 11830778, 10077191

10358019, 10219576, 10264680, 10209232, 10102506, 11067567, 9881076

10040531, 10218814, 9788588, 9735237, 10230571, 10079168, 10228151

10013431, 10217802, 10238786, 10052956, 11699057, 10080579, 10332111

10227288, 10329146, 10332589, 10110863, 10073683, 10019218, 10229719

10373381, 11724984, 9539440, 10411618, 10022980, 10187168, 6523037

9724970, 10084145, 10157402, 9651350, 10299224

Patch 12311357 : applied on Wed Jul 27 01:20:22 EEST 2011

Unique Patch ID: 13762085

 Created on 5 Apr 2011, 09:13:41 hrs PST8PDT

Bugs fixed:

12311357, 10425672, 10244210, 11655840, 10634513, 9891341, 11782423

11077756, 10376847, 10157506, 10178670, 9959110, 10314123, 10014392

10157622, 10089120, 10057296, 10053985, 9864003, 10044622, 9812970

10083789, 10073372, 9876201, 9963327, 10375649, 9336825, 10062301

10018215, 10105195, 10007185, 10071992, 10038791, 10048487, 9926027

10260251, 10052721, 10028235, 10027079, 10028343, 10231906, 10065216

10045436, 9907089, 10175855, 10284828, 10072474, 10036834, 10028637

10029900, 9974223, 9979706, 10016083, 10015460, 9918485, 9971646, 10040647

9978765, 10069541, 9915329, 10107380, 10110969, 10305361, 10029119

10233159, 10083009, 9812956, 10008467, 10036193, 10048027, 10040109

10015210, 9944978, 10033106, 9978195, 10042143, 10284693, 9679401

10111010, 10075643, 10057680, 10205290, 10124517, 10078086, 9944948

10146768, 10052529, 10011084, 10073075, 10248739, 10236074, 10128191

9975837, 10168006, 9949676, 10228079, 10015603, 10241696, 9861790

10069698, 10056808, 10087118, 10283058, 10252497, 10146744, 10326548

10157625, 10283167, 10019796, 9975343, 10216878, 9906432, 10045316

10029794, 10425675, 10061534, 10193581, 10070563, 10283549, 10311856

10150020, 10268642, 10283596

Rac system comprising of multiple nodes

 Local node = raclinux1

 Remote node = raclinux2

——————————————————————————–

OPatch succeeded.

[grid@raclinux1 OPatch]$

[root@raclinux2 OPatch]# ./opatch auto /u01/stage11203 -oh /u01/app/11.2.0/grid

Executing /usr/bin/perl ./crs/patch112.pl -patchdir /u01 -patchn stage11203 -oh /u01/app/11.2.0/grid -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params

opatch auto log file location is /u01/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2011-11-07_23-09-14.log

Detected Oracle Clusterware install

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

OPatch is bundled with OCM, Enter the absolute OCM response file path:

/u01/app/11.2.0/grid/OPatch/ocm/ocm.zip

Enter ‘yes’ if you have unzipped this patch to an empty directory to proceed (yes/no):yes

The opatch minimum version check for patch /u01/stage11203/client failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/database failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/deinstall failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/examples failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/gateways failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/grid failed for /u01/app/11.2.0/grid

Opatch version check failed for oracle home /u01/app/11.2.0/grid

Opatch version check failed

update the opatch version for the failed homes and retry

[root@raclinux2 OPatch]# ./opatch auto /u01/stage11203 -oh /u01/app/11.2.0/grid

Executing /usr/bin/perl ./crs/patch112.pl -patchdir /u01 -patchn stage11203 -oh /u01/app/11.2.0/grid -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params

opatch auto log file location is /u01/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2011-11-07_23-13-49.log

Detected Oracle Clusterware install

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

OPatch is bundled with OCM, Enter the absolute OCM response file path:

/u01/app/11.2.0/grid/OPatch/ocm/ocm.zip

Enter ‘yes’ if you have unzipped this patch to an empty directory to proceed (yes/no):yes

The opatch minimum version check for patch /u01/stage11203/client failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/database failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/deinstall failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/examples failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/gateways failed for /u01/app/11.2.0/grid

The opatch minimum version check for patch /u01/stage11203/grid failed for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0/grid

Successfully unlock /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

The opatch Applicable check failed for /u01/app/11.2.0/grid. The patch is not applicable for /u01/app/11.2.0/grid

patch /u01/stage11203/12539000 apply successful for home /u01/app/11.2.0/grid

patch /u01/stage11203/client apply failed for home /u01/app/11.2.0/grid

ACFS-9300: ADVM/ACFS distribution files found.

ACFS-9312: Existing ADVM/ACFS installation detected.

ACFS-9314: Removing previous ADVM/ACFS installation.

ACFS-9315: Previous ADVM/ACFS components successfully removed.

ACFS-9307: Installing requested ADVM/ACFS software.

ACFS-9308: Loading installed ADVM/ACFS drivers.

ACFS-9321: Creating udev for ADVM/ACFS.

ACFS-9323: Creating module dependencies – this may take some time.

ACFS-9327: Verifying ADVM/ACFS devices.

ACFS-9309: ADVM/ACFS installation correctness verified.

CRS-4123: Oracle High Availability Services has been started.

[root@raclinux2 OPatch]#

[grid@raclinux2 OPatch]$ ./opatch lsinventory

Invoking OPatch 11.2.0.1.8

Oracle Interim Patch Installer version 11.2.0.1.8

Copyright (c) 2011, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/11.2.0/grid

Central Inventory : /u01/app/oraInventory

from : /etc/oraInst.loc

OPatch version : 11.2.0.1.8

OUI version : 11.2.0.2.0

Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2011-11-07_23-25-38PM.log

Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2011-11-07_23-25-38PM.txt

——————————————————————————–

Installed Top-level Products (1):

Oracle Grid Infrastructure 11.2.0.2.0

There are 1 products installed in this Oracle Home.

Interim patches (3) :

Patch 12539000 : applied on Mon Nov 07 23:19:47 EET 2011

Unique Patch ID: 13976979

 Created on 28 Jul 2011, 12:37:42 hrs PST8PDT

Bugs fixed:

12539000

Patch 11724916 : applied on Wed Jul 27 01:46:01 EEST 2011

Unique Patch ID: 13762085

 Created on 1 Apr 2011, 07:09:05 hrs PST8PDT

Bugs fixed:

10151017, 10158965, 11724916, 10190642, 10129643, 10018789, 9744252

10248523, 9956713, 10356513, 9715581, 9770451, 10170431, 10425676

10222719, 9591812, 10127360, 10094201, 11069199, 10245086, 10205230

10052141, 11818335, 11830776, 11830777, 9905049, 11830778, 10077191

10358019, 10219576, 10264680, 10209232, 10102506, 11067567, 9881076

10040531, 10218814, 9788588, 9735237, 10230571, 10079168, 10228151

10013431, 10217802, 10238786, 10052956, 11699057, 10080579, 10332111

10227288, 10329146, 10332589, 10110863, 10073683, 10019218, 10229719

10373381, 11724984, 9539440, 10411618, 10022980, 10187168, 6523037

9724970, 10084145, 10157402, 9651350, 10299224

Patch 12311357 : applied on Wed Jul 27 01:44:15 EEST 2011

Unique Patch ID: 13762085

 Created on 5 Apr 2011, 09:13:41 hrs PST8PDT

Bugs fixed:

12311357, 10425672, 10244210, 11655840, 10634513, 9891341, 11782423

11077756, 10376847, 10157506, 10178670, 9959110, 10314123, 10014392

10157622, 10089120, 10057296, 10053985, 9864003, 10044622, 9812970

10083789, 10073372, 9876201, 9963327, 10375649, 9336825, 10062301

10018215, 10105195, 10007185, 10071992, 10038791, 10048487, 9926027

10260251, 10052721, 10028235, 10027079, 10028343, 10231906, 10065216

10045436, 9907089, 10175855, 10284828, 10072474, 10036834, 10028637

10029900, 9974223, 9979706, 10016083, 10015460, 9918485, 9971646, 10040647

9978765, 10069541, 9915329, 10107380, 10110969, 10305361, 10029119

10233159, 10083009, 9812956, 10008467, 10036193, 10048027, 10040109

10015210, 9944978, 10033106, 9978195, 10042143, 10284693, 9679401

10111010, 10075643, 10057680, 10205290, 10124517, 10078086, 9944948

10146768, 10052529, 10011084, 10073075, 10248739, 10236074, 10128191

9975837, 10168006, 9949676, 10228079, 10015603, 10241696, 9861790

10069698, 10056808, 10087118, 10283058, 10252497, 10146744, 10326548

10157625, 10283167, 10019796, 9975343, 10216878, 9906432, 10045316

10029794, 10425675, 10061534, 10193581, 10070563, 10283549, 10311856

10150020, 10268642, 10283596

Rac system comprising of multiple nodes

 Local node = raclinux2

 Remote node = raclinux1

——————————————————————————–

OPatch succeeded.

[grid@raclinux2 OPatch]$

Annex 1B

[root@raclinux1 OPatch]# ./opatch auto /u01/stage11203 -oh /u01/app/oracle/product/11.2.0/db_1

Executing /usr/bin/perl ./crs/patch112.pl -patchdir /u01 -patchn stage11203 -oh /u01/app/oracle/product/11.2.0/db_1 -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params

opatch auto log file location is /u01/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2011-11-07_23-31-52.log

Detected Oracle Clusterware install

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

OPatch is bundled with OCM, Enter the absolute OCM response file path:

/u01/app/oracle/product/11.2.0/db_1/OPatch/ocm/bin/ocm.rsp

Enter ‘yes’ if you have unzipped this patch to an empty directory to proceed (yes/no):yes

The opatch minimum version check for patch /u01/stage11203/client failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/database failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/deinstall failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/examples failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/gateways failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/grid failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/OPatch failed for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

patch /u01/stage11203/12539000 apply successful for home /u01/app/oracle/product/11.2.0/db_1

patch /u01/stage11203/client apply failed for home /u01/app/oracle/product/11.2.0/db_1

[root@raclinux1 OPatch]#

[oracle@raclinux1 OPatch]$ ./opatch lsinventory

Invoking OPatch 11.2.0.1.8

Oracle Interim Patch Installer version 11.2.0.1.8

Copyright (c) 2011, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/oracle/product/11.2.0/db_1

Central Inventory : /u01/app/oraInventory

from : /etc/oraInst.loc

OPatch version : 11.2.0.1.8

OUI version : 11.2.0.2.0

Log file location : /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/opatch2011-11-07_23-38-24PM.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/lsinv/lsinventory2011-11-07_23-38-24PM.txt

——————————————————————————–

Installed Top-level Products (2):

Oracle Database 11g 11.2.0.2.0

Oracle Database 11g Examples 11.2.0.2.0

There are 2 products installed in this Oracle Home.

Interim patches (1) :

Patch 12539000 : applied on Mon Nov 07 23:35:59 EET 2011

Unique Patch ID: 13976979

 Created on 28 Jul 2011, 12:37:42 hrs PST8PDT

Bugs fixed:

12539000

Rac system comprising of multiple nodes

 Local node = raclinux1

 Remote node = raclinux2

——————————————————————————–

OPatch succeeded.

[oracle@raclinux1 OPatch]$

[root@raclinux2 OPatch]# ./opatch auto /u01/stage11203 -oh /u01/app/oracle/product/11.2.0/db_1

Executing /usr/bin/perl ./crs/patch112.pl -patchdir /u01 -patchn stage11203 -oh /u01/app/oracle/product/11.2.0/db_1 -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params

opatch auto log file location is /u01/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2011-11-07_23-39-18.log

Detected Oracle Clusterware install

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

OPatch is bundled with OCM, Enter the absolute OCM response file path:

/u01/app/oracle/product/11.2.0/db_1/OPatch/ocm/bin/ocm.rsp

Enter ‘yes’ if you have unzipped this patch to an empty directory to proceed (yes/no):yes

The opatch minimum version check for patch /u01/stage11203/client failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/database failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/deinstall failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/examples failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/gateways failed for /u01/app/oracle/product/11.2.0/db_1

The opatch minimum version check for patch /u01/stage11203/grid failed for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Component check failed. This patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

The opatch Applicable check failed for /u01/app/oracle/product/11.2.0/db_1. The patch is not applicable for /u01/app/oracle/product/11.2.0/db_1

patch /u01/stage11203/12539000 apply successful for home /u01/app/oracle/product/11.2.0/db_1

patch /u01/stage11203/client apply failed for home /u01/app/oracle/product/11.2.0/db_1

[root@raclinux2 OPatch]#

[oracle@raclinux2 OPatch]$ ./opatch lsinventory

Invoking OPatch 11.2.0.1.8

Oracle Interim Patch Installer version 11.2.0.1.8

Copyright (c) 2011, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/oracle/product/11.2.0/db_1

Central Inventory : /u01/app/oraInventory

from : /etc/oraInst.loc

OPatch version : 11.2.0.1.8

OUI version : 11.2.0.2.0

Log file location : /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/opatch2011-11-07_23-45-03PM.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/lsinv/lsinventory2011-11-07_23-45-03PM.txt

——————————————————————————–

Installed Top-level Products (2):

Oracle Database 11g 11.2.0.2.0

Oracle Database 11g Examples 11.2.0.2.0

There are 2 products installed in this Oracle Home.

Interim patches (1) :

Patch 12539000 : applied on Mon Nov 07 23:41:19 EET 2011

Unique Patch ID: 13976979

 Created on 28 Jul 2011, 12:37:42 hrs PST8PDT

Bugs fixed:

12539000

Rac system comprising of multiple nodes

Local node = raclinux2

 Remote node = raclinux1

——————————————————————————–

OPatch succeeded.

[oracle@raclinux2 OPatch]$

Annex 1C

[grid@raclinux1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -n all -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/11.2.0.3 -dest_version 11.2.0.3.0

Performing pre-checks for cluster services setup

Checking node reachability…

Node reachability check passed from node “raclinux1”

Checking user equivalence…

User equivalence check passed for user “grid”

Checking CRS user consistency

CRS user consistency check successful

Checking node connectivity…

Checking hosts config file…

Verification of the hosts config file successful

Check: Node connectivity for interface “eth0”

Node connectivity passed for interface “eth0”

TCP connectivity check passed for subnet “192.168.20.0”

Check: Node connectivity for interface “eth1”

Node connectivity passed for interface “eth1”

TCP connectivity check passed for subnet “10.10.20.0”

Checking subnet mask consistency…

Subnet mask consistency check passed for subnet “192.168.20.0”.

Subnet mask consistency check passed for subnet “10.10.20.0”.

Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “192.168.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0”…

Check of subnet “10.10.20.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Checking OCR integrity…

OCR integrity check passed

Checking ASMLib configuration.

Check for ASMLib configuration passed.

Total memory check passed

Available memory check passed

Swap space check passed

Free disk space check passed for “raclinux2:/u01/app/11.2.0.3”

Free disk space check passed for “raclinux1:/u01/app/11.2.0.3”

Free disk space check passed for “raclinux2:/tmp”

Free disk space check passed for “raclinux1:/tmp”

Check for multiple users with UID value 1100 passed

User existence check passed for “grid”

Group existence check passed for “oinstall”

Membership check for user “grid” in group “oinstall” [as Primary] passed

Run level check passed

Hard limits check passed for “maximum open file descriptors”

Soft limits check passed for “maximum open file descriptors”

Hard limits check passed for “maximum user processes”

Soft limits check passed for “maximum user processes”

Check for Oracle patch “12539000” in home “/u01/app/11.2.0/grid” failed

Check failed on nodes:

raclinux2,raclinux1

There are no oracle patches required for home “/u01/app/11.2.0.3”.

System architecture check passed

Kernel version check passed

Kernel parameter check passed for “semmsl”

Kernel parameter check passed for “semmns”

Kernel parameter check passed for “semopm”

Kernel parameter check passed for “semmni”

Kernel parameter check failed for “shmmax”

Check failed on nodes:

raclinux2,raclinux1

Kernel parameter check passed for “shmmni”

Kernel parameter check passed for “shmall”

Kernel parameter check passed for “file-max”

Kernel parameter check passed for “ip_local_port_range”

Kernel parameter check passed for “rmem_default”

Kernel parameter check passed for “rmem_max”

Kernel parameter check passed for “wmem_default”

Kernel parameter check passed for “wmem_max”

Kernel parameter check passed for “aio-max-nr”

Package existence check passed for “make”

Package existence check passed for “binutils”

Package existence check passed for “gcc(x86_64)”

Package existence check passed for “libaio(x86_64)”

Package existence check passed for “glibc(x86_64)”

Package existence check passed for “compat-libstdc++-33(x86_64)”

Package existence check passed for “elfutils-libelf(x86_64)”

Package existence check passed for “elfutils-libelf-devel”

Package existence check passed for “glibc-common”

Package existence check passed for “glibc-devel(x86_64)”

Package existence check passed for “glibc-headers”

Package existence check passed for “gcc-c++(x86_64)”

Package existence check passed for “libaio-devel(x86_64)”

Package existence check passed for “libgcc(x86_64)”

Package existence check passed for “libstdc++(x86_64)”

Package existence check passed for “libstdc++-devel(x86_64)”

Package existence check passed for “sysstat”

Package existence check passed for “ksh”

Check for multiple users with UID value 0 passed

Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user’s primary group passed

Package existence check passed for “cvuqdisk”

Starting Clock synchronization checks using Network Time Protocol(NTP)…

NTP Configuration file check started…

NTP Configuration file check passed

Checking daemon liveness…

Liveness check passed for “ntpd”

Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon’s boot time configuration check for slewing option passed

NTP common Time Server Check started…

PRVF-5408 : NTP Time Server “78.47.24.68” is common only to the following nodes “raclinux1”

PRVF-5408 : NTP Time Server “192.43.244.18” is common only to the following nodes “raclinux2”

PRVF-5408 : NTP Time Server “80.190.97.205” is common only to the following nodes “raclinux2”

PRVF-5408 : NTP Time Server “130.133.1.10” is common only to the following nodes “raclinux1”

Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started…

PRVF-5413 : Node “raclinux2” has a time offset of 34653.0 that is beyond permissible limit of 1000.0 from NTP Time Server “188.138.107.156”

PRVF-5413 : Node “raclinux1” has a time offset of 48808.1 that is beyond permissible limit of 1000.0 from NTP Time Server “188.138.107.156”

Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User “grid” is not part of “root” group. Check passed

Default user file creation mask check passed

Checking consistency of file “/etc/resolv.conf” across nodes

File “/etc/resolv.conf” does not have both domain and search entries defined

domain entry in file “/etc/resolv.conf” is consistent across nodes

search entry in file “/etc/resolv.conf” is consistent across nodes

The DNS response time for an unreachable node is within acceptable limit on all nodes

File “/etc/resolv.conf” is consistent across nodes

UDev attributes check for OCR locations started…

UDev attributes check passed for OCR locations

UDev attributes check for Voting Disk locations started…

UDev attributes check passed for Voting Disk locations

Time zone consistency check passed

Checking VIP configuration.

Checking VIP Subnet configuration.

Check for VIP Subnet configuration passed.

Checking VIP reachability

Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration…

ASM Running check passed. ASM is running on all specified nodes

Oracle Cluster Voting Disk configuration check passed

Clusterware version consistency passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

[grid@raclinux1 grid]$

November 13, 2011 - Posted by | oracle

8 Comments »

  1. […] the database is admin policy managed. You can find information related to the upgrade to 11.2.0.3 here. The article will cover the following […]

    Pingback by Adding and Deleting a Node from Oracle RAC 11.2.0.3 « Guenadi N Jilevski's Oracle BLOG | November 17, 2011 | Reply

  2. […] In this article you will look at how to use clone.pl script to clone GI and RDBMS home of an existing Oracle RAC database installation to add a new node and extend the existing cluster. For additional approaches to add or delete a node to an Oracle RAC click here. The clone.pl can also be used to clone existing GI and RDBMS homes of one cluster to create a new Oracle clusters on another set of servers. You will have a look at how to manually add a RDBMS instance to the new node without using the dbca. The article refers to an existing Oracle cluster consisting of nodes raclinux1 and raclinux2 as described here and later upgraded to 11.2.0.3 as described here. […]

    Pingback by Clone GI and RDBMS homes in Oracle RAC 11.2.0.3 with clone.pl « Guenadi N Jilevski's Oracle BLOG | November 17, 2011 | Reply

  3. So to clarify, you didn’t need to install any other PSU’s to the 11.2.0.2 system prior to upgrading to 11.2.0.3? You only needed to install 12539000 and then go forward with the 11.2.0.3 upgrade?

    Thanks!

    Comment by Steve | December 12, 2011 | Reply

    • Hi,

      Correct.

      Regards,

      Comment by gjilevski | December 12, 2011 | Reply

  4. […] Oracle RAC database from Oracle 11.2.0.2 to Oracle 11.2.0.3. The cluster configuration is described here. The approach discussed in the article is available from Oracle 11gR1 onwards only and is similar […]

    Pingback by Using Physical Standby with transient Logical Standby (SQL Apply) for near zero downtime upgrade of two node Oracle RAC database from 11.2.0.2 to 11.2.0.3 « Guenadi N Jilevski's Oracle BLOG | July 3, 2012 | Reply

  5. […] Oracle RAC database from Oracle 11.2.0.2 to Oracle 11.2.0.3. The cluster configuration is described here. The approach discussed in the article is available from Oracle 11gR1 onwards only and is similar […]

    Pingback by Using Physical Standby with transient Logical Standby (SQL Apply) for near zero downtime upgrade of two node Oracle RAC database from 11.2.0.2 to 11.2.0.3 – Download « Guenadi N Jilevski's Oracle BLOG | July 3, 2012 | Reply

  6. […] running Oracle GI 11.2.0.3 and Oracle RAC RDBMS 11.2.0.3 and Oracle RAC RDBMS 10.2.0.5 described here and here. An ACFS mount point /u02 is created to host the OGG installation directory. The source […]

    Pingback by Installing Oracle GoldenGate (OGG) 11.2.1.0.1 in an Oracle cluster 11.2.0.3 (ACFS) and configuring a direct load and classic Change Data Capture (CDC) and Data Apply « Guenadi N Jilevski's Oracle BLOG | July 8, 2012 | Reply

  7. […] running Oracle GI 11.2.0.3 and Oracle RAC RDBMS 11.2.0.3 and Oracle RAC RDBMS 10.2.0.5 described here and here. An ACFS mount point /u02 is created to host the OGG installation directory. The source […]

    Pingback by Installing Oracle GoldenGate (OGG) 11.2.1.0.1 in an Oracle cluster 11.2.0.3 (ACFS) and configuring a direct load and classic Change Data Capture (CDC) and Data Apply – Download « Guenadi N Jilevski's Oracle BLOG | July 8, 2012 | Reply

