Upgrade to Oracle 11.2.0.2 from Oracle 11.2.0.1
Starting with the first patch set for Oracle Database 11g Release 2 (11.2.0.2), Oracle Database patch sets are full installations of the Oracle Database software. In past releases, Oracle Database patch sets consisted of a set of files that replaced files in an existing Oracle home. Beginning with Oracle Database 11g Release 2, patch sets are full installations that replace existing installations, as described in MOS note 'Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2' [ID 1189783.1].
In this article we will look at how to upgrade an existing Oracle GI home and install new Oracle RDBMS binaries. For information on installing a fresh Oracle 11.2.0.2 GI, see here.
A prerequisite for the Oracle 11gR2 11.2.0.2 upgrade is to apply patch 9655006 to the 11.2.0.1 GI home before upgrading from 11.2.0.1 to 11.2.0.2. See Bug 9413827 on MOS.
If the patch is not applied, running ./rootupgrade.sh will fail as shown below.
[root@raclinux1 grid]# ./rootupgrade.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0.2/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
The fixes for bug 9413827 are not present in the 11.2.0.1 crs home
Apply the patches for these bugs in the 11.2.0.1 crs home and then run rootupgrade.sh
/u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl execution failed
[root@raclinux1 grid]#
Patching the 11.2.0.1 GI home with patch 9655006
- Download and unzip the latest OPatch utility (patch 6880880) into $GI_HOME.
- Download and unzip patch 9655006 into a stage directory, in my case /u01/stage. This creates two directories: 9655006 and 9654983.
- Invoke OPatch from $GI_HOME/OPatch as root (the full sequence is sketched after this list):
./opatch auto /u01/stage -och /u01/app/11.2.0/grid
- Verify the patching with opatch lsinventory. See the Annex for sample output.
- Refer to MOS 'How to Manually Apply A Grid Infrastructure PSU Without Opatch Auto' [ID 1210964.1] in case of problems.
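Putting the pieces together, a minimal sketch of the patching sequence, assuming the GI home and stage locations used in this article:

# Run as root; opatch auto needs root privileges to patch the GI home.
export GI_HOME=/u01/app/11.2.0/grid

# Confirm the freshly unzipped OPatch version.
$GI_HOME/OPatch/opatch version

# Apply the patches staged under /u01/stage (9655006 and 9654983) to the 11.2.0.1 GI home.
cd $GI_HOME/OPatch
./opatch auto /u01/stage -och /u01/app/11.2.0/grid

# Verify afterwards as the GI software owner.
su - oracle -c "$GI_HOME/OPatch/opatch lsinventory"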
Patching Oracle GI to 11.2.0.2
Create a new directory /u01/app/11.2.0.2/grid for Oracle 11.2.0.2 GI on all nodes.
mkdir -p /u01/app/11.2.0.2/grid
chmod -R 775 /u01/app/11.2.0.2/
chown -R oracle:oinstall /u01/app/11.2.0.2/
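Since the directory must exist on every node, a small loop can save a step. This is only a convenience sketch and assumes passwordless SSH for root between the nodes used in this article:

# Create the new 11.2.0.2 GI home directory on all cluster nodes (assumes root SSH equivalence).
for node in raclinux1 raclinux2; do
  ssh root@${node} "mkdir -p /u01/app/11.2.0.2/grid && chown -R oracle:oinstall /u01/app/11.2.0.2 && chmod -R 775 /u01/app/11.2.0.2"
done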
Download and unzip patch 10098816 (the 11.2.0.2 patch set) into a stage directory, in my case /u01/stage/11.2.0.2.
Start the installer from /u01/stage/11.2.0.2/grid. Either enter MOS credentials to check for updates or select ‘Skip software updates’. Press Next to continue.
Select Upgrade Oracle Grid Infrastructure or Oracle ASM and press Next to continue.
Select the languages and press Next to continue.
Make sure that all nodes are selected and press Next to continue.
Keep the defaults and press Next to continue.
Enter the new location for Oracle GI 11.2.0.2 as /u01/app/11.2.0.2/grid and press Next to continue.
Wait for the prerequisite checks to complete.
Review the Summary and press Install to continue.
Wait for the installation to complete.
Run rootupgrade.sh script first on raclinux1 and after that on raclinux2. Look at the output of the script in the Annex.
Wait for the Oracle GI installation to complete.
Verify from the new GI home that the GI has been upgraded successfully:
[root@raclinux1 bin]# pwd
/u01/app/11.2.0.2/grid/bin
[root@raclinux1 bin]# ./crsctl check cluster -all
**************************************************************
raclinux1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
raclinux2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@raclinux1 bin]#
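To confirm that the cluster is now at 11.2.0.2, the clusterware version queries can also be run from the new home. A quick check (the output will reflect your configuration):

# Run from the new GI home.
/u01/app/11.2.0.2/grid/bin/crsctl query crs activeversion
/u01/app/11.2.0.2/grid/bin/crsctl query crs softwareversion
/u01/app/11.2.0.2/grid/bin/crsctl query crs releaseversion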
Install new Oracle 11.2.0.2 RDBMS binaries
Start the installer from /u01/stage/11.2.0.2/database. Press Next to continue.
Acknowledge that you do not want to provide MOS credentials at present and continue.
Either enter MOS credentials to check for updates or select ‘Skip software updates’. Press Next to continue.
Select ‘Install database software only’ and press Next to continue.
Select 'Oracle Real Application Clusters database installation', select all nodes of the cluster, and press Next to continue. We will show Oracle RAC and RAC One Node database creation using dbca in later articles. Here we are installing only the Oracle RDBMS binaries.
Optionally select the language and press Next to continue.
Select Enterprise Edition and press Next to continue.
Select the location for the new Oracle RDBMS home and press Next to continue.
Choose the defaults and press Next to continue.
Check any errors and fix them. In this case I select 'Ignore All' and press Next to continue.
Verify the Summary and press Install to continue.
Wait for the installation to complete.
Press Close upon successful completion.
In the next posts we will look into creating RAC and RAC One Node databases using dbca. If you have an existing database, you can upgrade it with dbua, run from the new 11.2.0.2 RDBMS home, as sketched below.
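A minimal sketch of launching dbua from the new home; the RDBMS home path below is only an example and should be replaced with the location you selected during the installation above:

# Launch DBUA (GUI mode) from the NEW 11.2.0.2 RDBMS home.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/db_1   # example path - adjust to your new home
export PATH=$ORACLE_HOME/bin:$PATH
dbua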
Annex
Sample opatch lsinventory output
[oracle@raclinux2 OPatch]$ ./opatch lsinventory
Invoking OPatch 11.2.0.1.3
Oracle Interim Patch Installer version 11.2.0.1.3
Copyright (c) 2010, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.3
OUI version : 11.2.0.1.0
OUI location : /u01/app/11.2.0/grid/oui
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2010-09-23_19-45-20PM.log
Patch history file: /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch_history.txt
Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2010-09-23_19-45-20PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Grid Infrastructure 11.2.0.1.0
There are 1 products installed in this Oracle Home.
Interim patches (2) :
Patch 9655006 : applied on Thu Sep 23 19:34:31 EDT 2010
Unique Patch ID: 12651761
Created on 6 Jul 2010, 12:00:17 hrs PST8PDT
Bugs fixed:
9655006, 9778840, 9343627, 9783609, 9262748, 9262722
Patch 9654983 : applied on Thu Sep 23 19:20:20 EDT 2010
Unique Patch ID: 12651761
Created on 18 Jun 2010, 00:16:02 hrs PST8PDT
Bugs fixed:
9068088, 9363384, 8865718, 8898852, 8801119, 9054253, 8725286, 8974548
9093300, 8909984, 8755082, 8780372, 8664189, 8769569, 7519406, 8822531
7705591, 8650719, 9637033, 8639114, 8723477, 8729793, 8919682, 8856478
9001453, 8733749, 8565708, 8735201, 8684517, 8870559, 8773383, 8981059
8812705, 9488887, 8813366, 9242411, 8822832, 8897784, 8760714, 8775569
8671349, 8898589, 9714832, 8642202, 9011088, 9170608, 9369797, 9165206
8834636, 8891037, 8431487, 8570322, 8685253, 8872096, 8718952, 8799099
9032717, 9399090, 9546223, 9713537, 8588519, 8783738, 8834425, 9454385
8856497, 8890026, 8721315, 8818175, 8674263, 9145541, 8720447, 9272086
9467635, 9010222, 9197917, 8991997, 8661168, 8803762, 8769239, 9654983
8706590, 8778277, 8815639, 9027691, 9454036, 9454037, 9454038, 9255542
8761974, 9275072, 8496830, 8702892, 8818983, 8475069, 8875671, 9328668
8798317, 8891929, 8774868, 8820324, 8544696, 8702535, 8268775, 9036013
9363145, 8933870, 8405205, 9467727, 8822365, 9676419, 8761260, 8790767
8795418, 8913269, 8717461, 8607693, 8861700, 8330783, 8780281, 8780711
8784929, 9341448, 9015983, 9119194, 8828328, 8665189, 8717031, 8832205
9676420, 8633358, 9321701, 9655013, 8796511, 9167285, 8782971, 8756598
8703064, 9066116, 9007102, 9461782, 9352237, 8505803, 8753903, 9216806
8918433, 9057443, 8790561, 8733225, 9067282, 8928276, 9210925, 8837736
Rac system comprising of multiple nodes
Local node = raclinux2
Remote node = raclinux1
--------------------------------------------------------------------------------
OPatch succeeded.
[oracle@raclinux2 OPatch]$

[oracle@raclinux1 OPatch]$ ./opatch lsinventory
Invoking OPatch 11.2.0.1.3
Oracle Interim Patch Installer version 11.2.0.1.3
Copyright (c) 2010, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.3
OUI version : 11.2.0.1.0
OUI location : /u01/app/11.2.0/grid/oui
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2010-09-23_19-41-01PM.log
Patch history file: /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch_history.txt
Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2010-09-23_19-41-01PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Grid Infrastructure 11.2.0.1.0
There are 1 products installed in this Oracle Home.
Interim patches (2) :
Patch 9655006 : applied on Thu Sep 23 19:39:38 EDT 2010
Unique Patch ID: 12651761
Created on 6 Jul 2010, 12:00:17 hrs PST8PDT
Bugs fixed:
9655006, 9778840, 9343627, 9783609, 9262748, 9262722
Patch 9654983 : applied on Thu Sep 23 19:20:43 EDT 2010
Unique Patch ID: 12651761
Created on 18 Jun 2010, 00:16:02 hrs PST8PDT
Bugs fixed:
9068088, 9363384, 8865718, 8898852, 8801119, 9054253, 8725286, 8974548
9093300, 8909984, 8755082, 8780372, 8664189, 8769569, 7519406, 8822531
7705591, 8650719, 9637033, 8639114, 8723477, 8729793, 8919682, 8856478
9001453, 8733749, 8565708, 8735201, 8684517, 8870559, 8773383, 8981059
8812705, 9488887, 8813366, 9242411, 8822832, 8897784, 8760714, 8775569
8671349, 8898589, 9714832, 8642202, 9011088, 9170608, 9369797, 9165206
8834636, 8891037, 8431487, 8570322, 8685253, 8872096, 8718952, 8799099
9032717, 9399090, 9546223, 9713537, 8588519, 8783738, 8834425, 9454385
8856497, 8890026, 8721315, 8818175, 8674263, 9145541, 8720447, 9272086
9467635, 9010222, 9197917, 8991997, 8661168, 8803762, 8769239, 9654983
8706590, 8778277, 8815639, 9027691, 9454036, 9454037, 9454038, 9255542
8761974, 9275072, 8496830, 8702892, 8818983, 8475069, 8875671, 9328668
8798317, 8891929, 8774868, 8820324, 8544696, 8702535, 8268775, 9036013
9363145, 8933870, 8405205, 9467727, 8822365, 9676419, 8761260, 8790767
8795418, 8913269, 8717461, 8607693, 8861700, 8330783, 8780281, 8780711
8784929, 9341448, 9015983, 9119194, 8828328, 8665189, 8717031, 8832205
9676420, 8633358, 9321701, 9655013, 8796511, 9167285, 8782971, 8756598
8703064, 9066116, 9007102, 9461782, 9352237, 8505803, 8753903, 9216806
8918433, 9057443, 8790561, 8733225, 9067282, 8928276, 9210925, 8837736
Rac system comprising of multiple nodes
Local node = raclinux1
Remote node = raclinux2
--------------------------------------------------------------------------------
OPatch succeeded.
[oracle@raclinux1 OPatch]$

Output of running rootupgrade.sh on raclinux1

[root@raclinux1 grid]# ./rootupgrade.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0.2/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
ASM upgrade has started on first node.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'raclinux1'
CRS-2673: Attempting to stop 'ora.crsd' on 'raclinux1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'raclinux1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'raclinux1'
CRS-2677: Stop of 'ora.DATA.dg' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'raclinux1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'raclinux1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.raclinux1.vip' on 'raclinux1'
CRS-2677: Stop of 'ora.scan1.vip' on 'raclinux1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'raclinux2'
CRS-2677: Stop of 'ora.asm' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.raclinux1.vip' on 'raclinux1' succeeded
CRS-2672: Attempting to start 'ora.raclinux1.vip' on 'raclinux2'
CRS-2676: Start of 'ora.scan1.vip' on 'raclinux2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'raclinux2'
CRS-2676: Start of 'ora.raclinux1.vip' on 'raclinux2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'raclinux2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.eons' on 'raclinux1'
CRS-2677: Stop of 'ora.ons' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'raclinux1'
CRS-2677: Stop of 'ora.net1.network' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.eons' on 'raclinux1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'raclinux1' has completed
CRS-2677: Stop of 'ora.crsd' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.evmd' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.asm' on 'raclinux1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.asm' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'raclinux1'
CRS-2677: Stop of 'ora.cssd' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'raclinux1'
CRS-2677: Stop of 'ora.gpnpd' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'raclinux1'
CRS-2677: Stop of 'ora.diskmon' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'raclinux1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'raclinux1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deleted 1 keys from OCR.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@raclinux1 grid]#

Output of running rootupgrade.sh on raclinux2

[root@raclinux2 grid]# ./rootupgrade.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0.2/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-1115: Oracle Clusterware has already been upgraded.
ASM upgrade has finished on last node.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@raclinux2 grid]#
Comments
Wonderful job. Thanks a bunch. After reading your blog, this upgrade became a cakewalk.
Hi Guenadi,
Do we need to run DBUA to upgrade the DB from 11.2.0.1 to 11.2.0.2?
Hi,
Surely, you need to upgrade the database as well. DBUA is one way.
Regards,
Thanks. So we have to follow all the steps mentioned above and lastly do an upgrade either manually or with DBUA. In my case we basically want to implement the 11.2.0.2 patch set on the 11.2.0.1 database.
Hi,
Correct. First get the 11.2.0.2 binaries installed and then use dbua to perform the database upgrade.
Regards,
Hi Guenadi,
Looking at the steps again I found that:
1. During the GI upgrade, don't we need to fill in any SCAN info (did not see any screenshot on that)?
Don't we need to check that multicasting and NTP are configured?
Also, don't we need to run the cluster verify utility separately?
2. While installing the Oracle binaries, do we need to create a new Oracle home beforehand, or does the installer create a new default home?
3. Do you have the steps/screenshots for the db upgrade?
Hi,
for 1. Follow the installer prompts for the upgrade (out of place, into a separate grid home) and run the root*.sh scripts; they will take care of it automatically. This is an upgrade, not a fresh install. NTP is checked by OUI. The cluvfy script can be run prior to the upgrade from the stage directory (a rough cluvfy sketch follows at the end of this reply); OUI will also run a verification for you. The SCAN is already configured from the original installation; OUI will detect the current configuration and go from there.
for 2. You had better do an out-of-place upgrade and install the binaries in a second Oracle home. The installer lets you specify a second home as well. If you decide to pre-create it, make sure that it is empty.
for 3. dbua is quite simple, just follow the screens. I can post detailed screenshots of the dbua screens.
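A rough sketch of the pre-upgrade check from the 11.2.0.2 stage area; the exact options can vary between versions, so check ./runcluvfy.sh -help first:

# Run as the GI software owner from the unzipped 11.2.0.2 grid stage directory.
cd /u01/stage/11.2.0.2/grid
./runcluvfy.sh stage -pre crsinst -n raclinux1,raclinux2 -verbose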
Regards,
Thanks. Planned to do an out-of-place upgrade.
Easier to roll back later.
You are also doing an out-of-place upgrade, since with GI it is mandatory.
When you install OPatch (patch 6880880), does it apply patch 9655006 automatically?
If you post the detailed screenshots of DBUA it will be helpful.
Regards,
Hi,
I strongly recommend carefully reading through each patch README note and the MOS links. Installing OPatch does not automatically apply any patch.
I will post the dbua screens for the update. https://gjilevski.wordpress.com/2011/02/09/using-dbua-for-upgrade-to-11-2-0-2/
Regards,
Hi,
Thanks for the post.
Have an ASM file system and no place to replicate on disk. So after the patch update to 11.2.0.2 using an out-of-place upgrade to a new home I will use DBUA.
Should I use the option in the 'Move Datafiles' section?
Will my old ASM diskgroups be upgraded by this option?
Just editing my earlier post:
Should I use the option in the 'Move Datafiles' section?
Hi,
Depends on your situation.
Regards,
Hi,
Thanks for the post. Got some errors in posting so re-sending;
Have an ASM file system and no space to replicate on the SAN. So after the patch update to 11.2.0.2 using an out-of-place upgrade to a new home I will use DBUA.
Should I use the option of ‘Do not move datafiles as part of upgrade’ in ‘Move Datafiles Section’ ?
Will my old ASM diskgroups be upgraded by this option?
Regards,
Hi,
Yes, you can specify 'Do not move datafiles as part of upgrade'.
ASM is upgraded during the Grid Infrastructure upgrade.
Regards,
Hi,
You can keep the files in the old location.
Regards,
Hi,
1. Is this 11.2.0.2 patch set a rolling upgrade patch?
2. How do we de-install the patch and downgrade the database, i.e. what is the fallback plan?
Regards,
Hi,
Please look at the patch README notes and the following MOS notes:
Oracle 11gR2 Upgrade Companion [ID 785351.1]
Oracle Clusterware (formerly CRS) Rolling Upgrades [ID 338706.1]
Inside you will find the answers.
1. You have two GI homes (11.2.0.1 and 11.2.0.2), thus you can do a rolling upgrade. You start on the first node and carry on with the next nodes. See the docs for detailed steps and prerequisites.
2. See the docs/MOS notes
Regards,
Hi,
We have to keep the Oracle home the same. Can we install the binaries into the OLD home (i.e. use an in-place upgrade for the Oracle binaries)?
But we do install GI into a new home (an out-of-place upgrade for GI), which is mandatory.
Regards,
Hi,
Oracle recommends an out-of-place install for the RDBMS $ORACLE_HOME, and you will get a warning to specify a new RDBMS $OH.
Why do you want to overwrite the old 11.2.0.1 $OH?
Regards,
The reason is that the applications will face problems in switching and connecting. Disk space is another problem.
So we intend to do in-place for the Oracle home.
For the GI home we do out-of-place.
The upgrade guide says it will give a warning but there are workarounds: detach and rename the Oracle home, etc.
But is it possible to do an in-place upgrade for the Oracle home and an out-of-place upgrade for GI?
Regards,
Hi,
I would stick with the recommendations. You can test it, though.
Regards,
Hi,
It is cleaner using a new $OH. dbua will take care of creating the new configuration files in the 11.2.0.2 $OH and of the OCR registration.
Regards,
Hi,
1. For the GI we can do a rolling upgrade, node by node. In your case above you did not mention the sequence of shutting down or starting the instances. Did you do all-node patching, since I saw your step 'Run rootupgrade.sh on both nodes'?
2. Before the GI upgrade do we need to unset the Oracle variables like ORA_CRS_HOME and ORACLE_HOME?
3. After the patching/DB upgrade does it show the update in the 5th digit of the Oracle release? It was said in the Oracle doc that it will do so starting from the 11.2 PSUs.
Regards,
Hi,
I wrote 'Run rootupgrade.sh script first on raclinux1 and after that on raclinux2. Look at the output of the script in the Annex.' I ran it first on raclinux1 and, after successful completion, on raclinux2.
I did not set ORA_CRS_HOME and did not unset any of the variables.
Check the upgrade status. Is it OK?
Read the patch README notes and the MOS notes very carefully.
Regards,
Hi,
Query DBA_REGISTRY and V$VERSION to determine the upgrade status after the upgrade, for example as sketched below.
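A minimal check, assuming the environment (ORACLE_HOME, ORACLE_SID) already points at the upgraded instance:

# Confirm component versions and the database banner after the upgrade.
sqlplus -s / as sysdba <<'EOF'
set linesize 150 pagesize 100
column comp_name format a45
select comp_name, version, status from dba_registry;
select banner from v$version;
exit
EOF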
Regards,
Hi,
I am trying to upgrade from 11.2.0.1 to 11.2.0.2 and got the 'Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256' error, and that's when I came across this note.
I have run rootupgrade.sh on both nodes (since it is a 2-node cluster) and after that the installer is stuck at 94% (Oracle Cluster Verification Utility). Is there a way we can check whether it is stuck or still running? The install log is not moving. Also, since we have not applied the patch required before the upgrade, would we have to remove the 11.2.0.2 install and re-install it?
Thanks a lot!
Hi,
Did you run rootupgrade.sh successfully on both nodes?
If not, for cleaning up a failed GI install refer to:
https://gjilevski.wordpress.com/2010/08/12/how-to-clean-up-after-a-failed-11g-crs-install-what-is-new-in-11g-r2-2/
The note above addresses how to remove the GI after a failed install. After that you install the patch and start the GI upgrade again.
If both rootupgrade.sh runs were successful, where is the problem? Is it only cluvfy?
Did you manually run cluvfy, and what was the outcome?
Regards,
Hi,
Thank you for the prompt response. Yes, we ran rootupgrade.sh on both nodes, and it gave the same (patch missing) error. After that I continued the installer and it is now stuck at 94%, where it runs the Cluster Verification Utility.
I did not run cluvfy manually since the OUI is still running and probably stuck at cluster verification. At the last step of the installer, 'Oracle Cluster Verification Utility', it is showing as 'In Progress'. All the steps before that succeeded. It has been stuck for the last 2 hours.
Would it be better to exit the installer, perform the cleanup, apply the missing patch on 11.2.0.1 and then start the upgrade again?
Also, do we need to bring down the existing cluster and database to apply the 11.2.0.1 patch?
Thanks a lot for your help.
Hi,
1. Make sure that you are running CRS from the 11.2.0.1 home. Also make sure that the 11.2.0.1 home utilities are used. If not, refer to MOS to roll back to 11.2.0.1.
2. Patch 11.2.0.1.
3. Clean up 11.2.0.2.
4. Start the upgrade again.
Regards,
If we remove the failed GI install, will it not remove the existing 11.2.0.1 as well? I just want to clean up the 11.2.0.2 home and keep the existing 11.2.0.1 cluster and database running.
Thanks!
Hi,
If rootupgrade.sh has failed, most likely you are still running CRS from the 11.2.0.1 home. Make sure that you have not switched to 11.2.0.2. If you have, roll back to 11.2.0.1 and proceed with the steps:
1. Make sure that you are running CRS from the 11.2.0.1 home. Also make sure that the 11.2.0.1 home utilities are used. If not, refer to MOS to roll back to 11.2.0.1.
2. Patch 11.2.0.1.
3. Clean up 11.2.0.2.
4. Start the upgrade again.
Regards,
Sorry, I didn't understand exactly how to deinstall the 11.2.0.2 binaries. Do I just need to exit out of OUI and then run the 'deinstall' script in the 11.2.0.2 GRID_HOME? Also, as part of the cleanup should we just remove the 11.2.0.2 home directory after the deinstall?
Can you please explain the steps required in my case, where the root scripts had errors due to missing patches and the OUI got stuck at the last step (Cluster Verification Utility)?
Sorry for asking the same thing again, but I just want to get a clear idea of how to perform the deinstall.
Thanks a lot for your help.
Hi,
Exit OUI and follow the steps
1. Make sure that you are running CRS from the 11.2.0.1 home. Also make sure that the 11.2.0.1 home utilities are used. If not, refer to MOS to roll back to 11.2.0.1 if you have switched to 11.2.0.2.
2. Patch 11.2.0.1.
3. Clean up 11.2.0.2, either with deinstall or manually.
4. Start the upgrade again.
You can deinstall 11.2.0.2 using deinstall or manually, but take care of the oraInventory (see MOS). Also clear all files in the 11.2.0.2 GI_HOME on all nodes.
Regards,
How do we make sure that the CRS is running from 11.2.0.1?
Also, just want to confirm the steps I will be following to deinstall 11.2.0.2:
1. Deconfigure Oracle Clusterware without removing the binaries:
- Log in as the root user on a node where you encountered an error. Change directory to $GRID_HOME/crs/install. For example:
# cd $GRID_HOME/crs/install
- Run rootcrs.pl with the -deconfig -force flags on all but the last node.
# perl rootcrs.pl -deconfig -force
- If you are deconfiguring Oracle Clusterware on all nodes in the cluster, then on the last node add the -lastnode flag that completes deconfiguration on the cluster including the OCR and the voting disks.
# perl rootcrs.pl -deconfig -force -lastnode
2. $ cd /u01/app/11.2.0/grid/deinstall/
$ ./deinstall
Is this correct?
Thanks a lot for your help.
Hi,
It is more than that; see OSS or MOS.
[root@oel61 bin]# ./crsctl query crs
Parse error:
Missing additional arguments
Usage:
crsctl query crs administrator
Display admin list
crsctl query crs activeversion
Lists the Oracle Clusterware operating version
crsctl query crs releaseversion
Lists the Oracle Clusterware release version
crsctl query crs softwareversion [<nodename>]
Lists the version of Oracle Clusterware software installed
where <nodename>:
Default - List software version of the local node
nodename - List software version of named node
[root@oel61 bin]#
Use the commands above to determine the version.
Exit OUI and follow the steps. There is no need to run rootcrs.pl since everything is still in 11.2.0.1.
1. Make sure that you are running CRS from the 11.2.0.1 home. Also make sure that the 11.2.0.1 home utilities are used. If not, refer to MOS to roll back to 11.2.0.1 if you have switched to 11.2.0.2.
2. Patch 11.2.0.1.
3. Clean up 11.2.0.2, either with deinstall or manually.
4. Start the upgrade again.
You can deinstall 11.2.0.2 using deinstall or manually, but take care of the oraInventory (see MOS). Also clear all files in the 11.2.0.2 GI_HOME on all nodes.
Regards,
Hi,
I would suggest familiarizing yourself with some RAC concepts and management and administration procedures.
Regards,
Hi,
Thank you so much for the quick reply. Using the commands you provided, I was able to find the version of CRS; it is 11.2.0.1.
>crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]
>crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.1.0]
>crsctl query crs softwareversion
Oracle Clusterware version on node [pcrf-o-dev-av] is [11.2.0.1.0]
So now I will proceed with applying the patch to 11.2.0.1. Is it OK to do the cleanup of 11.2.0.2 before applying the patch to 11.2.0.1?
Thanks a lot for your help.
Hi,
It is up to you. It will not do any harm to the 11.2.0.1 GI_HOME if you clean up the 11.2.0.2 GI_HOME.
Regards,
Thanks a lot for all your help. I will continue working on it and let you know how it goes.
Thanks again!
Sure.
Hello,
The 11.2.0.1 to 11.2.0.2 GI upgrade completed successfully. Here is what I did:
- Applied the missing patch on 11.2.0.1.
- Re-ran the rootupgrade.sh script and it completed successfully. I did not have to remove 11.2.0.2 and re-install.
Sorry about the late reply. Thanks a lot for all your help on this.
Hi,
I am glad that it worked. I was a bit conservative with my suggestion related to removing 11.2.0.2 in order for you to start afresh.
Regards,
Thanks a lot for your help, really appreciate it.
Hi Guenadi, do you mind sharing your email address with me? I had some questions about dbua. Thanks a lot!
Hi,
Post your questions. You should keep in mind that I am not Oracle Support.
Regards,
Hi,
It is mentioned that we have to create a new Oracle home for the 11.2.0.2 upgrade; however, it is not mentioned anywhere whether we have to create a new Oracle base as well. Can you please clarify this for me? I will be upgrading a test instance soon.
Thanks in advance,
Nikhil
Hi,
ORACLE_BASE is per OS (Linux/UNIX) user, so assuming that you use the existing $OB (/u01/app/oracle and/or /u01/app/grid), you do not have to create a new $OB.
All you need to do is have a new $OH and share the existing $OB, for example as sketched below.
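A minimal illustration; the new RDBMS home path is only an example and should match whatever you chose in OUI:

# Same ORACLE_BASE as before, new ORACLE_HOME for 11.2.0.2 (path is an example only).
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/db_1
export PATH=$ORACLE_HOME/bin:$PATH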
Regards,
Hi,
I am getting the below error while running the rootupgrade.sh script on the last node:
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
Start of resource "ora.asm" failed
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
CRS-2674: Start of 'ora.asm' on 'rac2' failed
CRS-2679: Attempting to clean 'ora.asm' on 'rac2'
CRS-2681: Clean of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Clusterware stack
Failed to start ASM at /u01/app/11.2.0.2/grid/crs/install/crsconfig_lib.pm line 1051.
/u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl execution failed
Please suggest what to do ?
Thanks in advanced
Hi,
Investigate further to determine what is going on with ASM. Look at the cluster alert log and the other logs under $GI_HOME/log/ for details.
Is 11.2.0.1 running on all nodes without problems? Are you using ASM for the OCR/voting disks? Do you have proper permissions and ownership on the ASM devices that are to be used for ASM disks?
What is in the ASM alert log? How many nodes? Which OS? Did the first node upgrade succeed? Did you compare the nodes with cluvfy (cluvfy comp peer ...)?
You need to get more detailed information, as this is quite a generic message; something like the sketch below is a starting point.
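A rough starting point for collecting diagnostics; the paths and node names below are assumptions based on this thread and the 11.2 log layout, so adjust them to your system:

# Clusterware alert log on the failing node (11.2 layout: $GI_HOME/log/<hostname>/).
tail -100 /u01/app/11.2.0.2/grid/log/rac2/alertrac2.log

# ASM alert log; the exact path depends on your diagnostic_dest, this is a common default.
tail -100 /u01/app/oracle/diag/asm/+asm/+ASM2/trace/alert_+ASM2.log

# Compare the nodes with cluvfy from the new GI home (substitute your node names).
/u01/app/11.2.0.2/grid/bin/cluvfy comp peer -n rac1,rac2 -verbose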
Regards,
Hi Guenadi,
Thanks for your kind response. I will provide the details soon regarding this.