Oracle 11g R2 (11.2.0.2) RAC One Node management
In Oracle 11gR2 (11.2.0.2), RAC One Node database creation and management are simplified. The traditional srvctl utility used to manage RAC databases can now natively manage RAC One Node databases as well. Another 11.2.0.2 enhancement is the ability to easily create a RAC One Node database using the dbca utility, as described here. Recall that prior to 11.2.0.2, patch 9004119 had to be applied to provide the utilities for Oracle RAC One Node management, as described here. Starting with 11.2.0.2, srvctl adds options to relocate a RAC One Node database to another server and to convert between an Oracle RAC database and an Oracle RAC One Node database.
In this article we will look at managing RAC One Node databases with the srvctl utility: relocating a database to another server and converting between Oracle RAC and Oracle RAC One Node.
Let’s set up a Transparent Application Failover (TAF) service.
Create an entry in tnsnames.ora as follows.
RONETAF =
  (DESCRIPTION =
    (ENABLE = BROKEN)
    (LOAD_BALANCE = OFF)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RONE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (BACKUP = RONE)
      )
    )
  )
Relocate a RAC One Node database instance to another server from the list of candidate servers.
In this example we will use the RONE RAC One Node database running on node raclinux2, with candidate servers defined as raclinux1 and raclinux2; that is, RONE can be relocated between raclinux1 and raclinux2.
[oracle@raclinux2 admin]$ srvctl status database -d rone
Instance RONE_1 is running on node raclinux2
Online relocation: INACTIVE
[oracle@raclinux2 admin]$ srvctl config database -d rone
Database unique name: RONE
Database name: RONE
Oracle home: /u01/app/oracle/product/11.2.0/db_20
Oracle user: oracle
Spfile: +DATA/RONE/spfileRONE.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RONE
Database instances:
Disk Groups: DATA
Mount point paths:
Services: RACONE
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: RONE
Candidate servers: raclinux2,raclinux1
Database is administrator managed
[oracle@raclinux2 admin]$
Connect to the database using the TAF tnsnames.ora entry and run a query in one terminal session; while the query is running, relocate the instance to the other candidate server from a second terminal session.
[oracle@raclinux2 admin]$ sqlplus system/sys1@ronetaf

SQL*Plus: Release 11.2.0.2.0 Production on Tue Oct 5 14:21:50 2010
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from gv$instance;

   INST_ID INSTANCE_NUMBER INSTANCE_NAME
---------- --------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
         1               1 RONE_1
raclinux2.gj.com
11.2.0.2.0        05-OCT-10 OPEN         YES          1 STOPPED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO

SQL> @/u03/testtaf.sql

 SID   SERIAL# FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
---- --------- ------------- --------------- -----------
  37         7 SELECT        BASIC           NO

INSTANCE_NAME
----------------
RONE_1

  COUNT(*)
----------
    628628

 SID   SERIAL# FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
---- --------- ------------- --------------- -----------
  53         3 SELECT        BASIC           YES

INSTANCE_NAME
----------------
RONE_2

  COUNT(*)
----------
    628628

SQL>
Once the query has started, relocate the instance to node raclinux1 from the other terminal session.
[oracle@raclinux1 ~]$ srvctl status database -d RONE
Instance RONE_1 is running on node raclinux2
Online relocation: INACTIVE
[oracle@raclinux1 ~]$ srvctl relocate database -d RONE -n raclinux1
[oracle@raclinux1 ~]$ srvctl status database -d RONE
Instance RONE_2 is running on node raclinux1
Online relocation: INACTIVE
[oracle@raclinux1 ~]$
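When scripting around a relocation it is handy to extract the hosting node from the srvctl output rather than eyeballing it. The helper below is a hypothetical sketch (not part of the article); `current_node` and the sample text are illustrative:

```shell
# Hypothetical helper: print the node currently hosting a RAC One Node instance,
# given 'srvctl status database' output on stdin.
current_node() {
  sed -n 's/^Instance .* is running on node \(.*\)$/\1/p'
}

# Sample output as captured above; in real use you would pipe:
#   srvctl status database -d RONE | current_node
sample='Instance RONE_1 is running on node raclinux2
Online relocation: INACTIVE'

printf '%s\n' "$sample" | current_node   # prints: raclinux2
```

Running the same pipeline again after `srvctl relocate database` would print the new node, raclinux1.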
Monitor the failover in the first terminal session where the SQL is executing. RONE gets relocated to node raclinux1 and the active instance becomes RONE_2. TAF works and the session fails over to RONE_2.
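The contents of /u03/testtaf.sql are not listed above; the sketch below is a hypothetical reconstruction consistent with the output shown (the session's TAF attributes, the current instance, and a count over a large table — `big_table` is a placeholder name):

```shell
# Write a hypothetical testtaf.sql to a scratch location; adjust the path and
# the table name before using it against a real database.
cat > /tmp/testtaf.sql <<'EOF'
select sid, serial#, failover_type, failover_method, failed_over
  from v$session
 where sid = (select sid from v$mystat where rownum = 1);

select instance_name from v$instance;

-- a query long enough to still be running while the database is relocated
select count(*) from big_table;
EOF
```

Running it before and after the relocation (SQL> @/tmp/testtaf.sql) should show FAILED_OVER flip from NO to YES and the instance name change, as in the output above.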
Converting RAC One Node database to RAC database
Here we will use the srvctl utility to convert a RAC One Node database to a RAC database (srvctl convert database -c RAC). Initially RONE is configured as a RAC One Node database with the active instance running on raclinux1.
[oracle@raclinux1 db_20]$ srvctl config database -d RONE
Database unique name: RONE
Database name: RONE
Oracle home: /u01/app/oracle/product/11.2.0/db_20
Oracle user: oracle
Spfile: +DATA/RONE/spfileRONE.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RONE
Database instances:
Disk Groups: DATA
Mount point paths:
Services: RACONE
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: RONE
Candidate servers: raclinux1
Database is administrator managed
[oracle@raclinux1 db_20]$
Use srvctl to convert the database to RAC, then create and start a second instance as shown below.
[oracle@raclinux1 db_20]$ srvctl convert database -d RONE -c RAC
[oracle@raclinux1 db_20]$ srvctl status database -d RONE
Instance RONE_1 is running on node raclinux1
[oracle@raclinux1 db_20]$ srvctl config database -d RONE
Database unique name: RONE
Database name: RONE
Oracle home: /u01/app/oracle/product/11.2.0/db_20
Oracle user: oracle
Spfile: +DATA/RONE/spfileRONE.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RONE
Database instances: RONE_1
Disk Groups: DATA
Mount point paths:
Services: RACONE
Type: RAC
Database is administrator managed
[oracle@raclinux1 db_20]$ srvctl add instance -d RONE -i RONE_2 -n raclinux2
[oracle@raclinux1 db_20]$ srvctl config database -d RONE
Database unique name: RONE
Database name: RONE
Oracle home: /u01/app/oracle/product/11.2.0/db_20
Oracle user: oracle
Spfile: +DATA/RONE/spfileRONE.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RONE
Database instances: RONE_1,RONE_2
Disk Groups: DATA
Mount point paths:
Services: RACONE
Type: RAC
Database is administrator managed
[oracle@raclinux1 db_20]$ srvctl start instance -d RONE -i RONE_2
[oracle@raclinux1 db_20]$ srvctl status database -d RONE
Instance RONE_1 is running on node raclinux1
Instance RONE_2 is running on node raclinux2
[oracle@raclinux1 db_20]$ sqlplus system/sys1@rone

SQL*Plus: Release 11.2.0.2.0 Production on Wed Oct 6 14:44:57 2010
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from gv$instance;

   INST_ID INSTANCE_NUMBER INSTANCE_NAME
---------- --------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
         1               1 RONE_1
raclinux1.gj.com
11.2.0.2.0        06-OCT-10 OPEN         YES          1 STOPPED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO

   INST_ID INSTANCE_NUMBER INSTANCE_NAME
---------- --------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
         2               2 RONE_2
raclinux2.gj.com
11.2.0.2.0        06-OCT-10 OPEN         YES          2 STOPPED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO

SQL>
Converting RAC database to RAC One Node database
In order to convert a RAC database to a RAC One Node database we need to:
- Remove the second instance
- Use srvctl to convert to RAC One Node (srvctl convert database -c RACONENODE)
[oracle@raclinux1 db_20]$ srvctl stop instance -d RONE -i RONE_2
[oracle@raclinux1 db_20]$ srvctl status database -d RONE
Instance RONE_1 is running on node raclinux1
Instance RONE_2 is not running on node raclinux2
[oracle@raclinux1 db_20]$ srvctl remove instance -d RONE -i RONE_2
Remove instance from the database RONE? (y/[n]) y
[oracle@raclinux1 db_20]$ srvctl status database -d RONE
Instance RONE_1 is running on node raclinux1
[oracle@raclinux1 db_20]$ srvctl config database -d RONE
Database unique name: RONE
Database name: RONE
Oracle home: /u01/app/oracle/product/11.2.0/db_20
Oracle user: oracle
Spfile: +DATA/RONE/spfileRONE.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RONE
Database instances: RONE_1
Disk Groups: DATA
Mount point paths:
Services: RACONE
Type: RAC
Database is administrator managed
[oracle@raclinux1 db_20]$ srvctl convert database -d RONE -c RACONENODE
PRKO-2159 : Option '-i' should be specified to convert an administrator-managed RAC database to its equivalent RAC One Node database configuration
[oracle@raclinux1 db_20]$ srvctl convert database -d RONE -c RACONENODE -i RONE_1
[oracle@raclinux1 db_20]$ srvctl config database -d RONE
Database unique name: RONE
Database name: RONE
Oracle home: /u01/app/oracle/product/11.2.0/db_20
Oracle user: oracle
Spfile: +DATA/RONE/spfileRONE.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RONE
Database instances:
Disk Groups: DATA
Mount point paths:
Services: RACONE
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: RONE
Candidate servers: raclinux1
Database is administrator managed
[oracle@raclinux1 db_20]$ srvctl status database -d RONE
Instance RONE_1 is running on node raclinux1
Online relocation: INACTIVE
[oracle@raclinux1 db_20]$ sqlplus system/sys1@rone

SQL*Plus: Release 11.2.0.2.0 Production on Wed Oct 6 15:13:24 2010
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from gv$instance;

   INST_ID INSTANCE_NUMBER INSTANCE_NAME
---------- --------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
         1               1 RONE_1
raclinux1.gj.com
11.2.0.2.0        06-OCT-10 OPEN         YES          1 STOPPED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO

SQL>
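After a conversion it is worth confirming programmatically that the configuration really changed. A hypothetical sanity check (not part of the article) is to parse the Type: line of the srvctl config output; `db_type` and the sample text are illustrative:

```shell
# Hypothetical check: extract the database type from 'srvctl config database'
# output on stdin ('RAC' or 'RACOneNode').
db_type() { sed -n 's/^Type: //p'; }

# sample lines taken from the srvctl config output above; in real use:
#   srvctl config database -d RONE | db_type
sample='Services: RACONE
Type: RACOneNode
Online relocation timeout: 30'

printf '%s\n' "$sample" | db_type    # prints: RACOneNode
```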
Summary
Starting with Oracle 11gR2 version 11.2.0.2, the srvctl utility is used to manage RAC One Node database activities such as relocation and conversion to and from a RAC database, to name a few.
Create Oracle RAC and Oracle RAC One Node Database using dbca in Oracle 11g R2 (11.2.0.2)
In this article we will look at creating Oracle RAC and RAC One Node databases using dbca. For install/upgrade to Oracle 11.2.0.2 look here.
Create Oracle RAC One Node database using dbca
Start dbca and select Oracle RAC One Node database and press Next to continue.
Select ‘Create a Database’ and press Next to continue.
Select Custom Database and press Next to continue.
Select Admin-Managed, enter the database global name, SID prefix and service name, and press Next to continue. DBCA will create one database with one instance that can be migrated across the selected nodes.
Enter and confirm the password(s) and press Next to continue.
Specify storage type and storage location and press Next to continue.
Specify the Fast recovery area (formerly flash recovery area) and press Next to continue.
Select the database components and component location and press Next to continue.
Select the init parameters. In this case AMM is selected and the memory size is specified. Press Next to continue.
Keep the defaults and press Next to continue.
Select the options for dbca to generate a template and generation scripts and press Next to continue.
Review the summary and press OK to continue.
Wait for the dbca to complete.
Optionally choose the password management to change the passwords and unlock the accounts and once done press Exit to exit from dbca.
Create Oracle RAC database using dbca
Start dbca and select Oracle RAC database and press Next to continue.
Select ‘Create a Database’ and press Next to continue.
Select General Purpose or Transaction Processing Database and press Next to continue.
Select Admin-Managed, enter the database global name and SID prefix, and press Next to continue. DBCA will create one database with two instances, one on each node.
Select Configure Enterprise Manager and select Automatic Maintenance Tasks Tab.
Select Enable automatic maintenance tasks and press Next to continue.
Enter and confirm the password(s) and press Next to continue.
Specify storage type and storage location and press Next to continue.
Specify the Fast recovery area (formerly flash recovery area) and press Next to continue.
Select the database components and component location and press Next to continue.
Select the init parameters. In this case AMM is selected and the memory size is specified. Press Next to continue.
Keep the defaults and press Next to continue.
Select the options for dbca to generate a template and generation scripts and press Next to continue.
Review the summary and press OK to continue.
Wait for the dbca to complete. Optionally choose the password management to change the passwords and unlock the accounts, and once done press Exit to exit from dbca.
Verify RACDB installation
[oracle@raclinux1 ~]$ srvctl status database -d RACDB
Instance RACDB1 is not running on node raclinux1
Instance RACDB2 is not running on node raclinux2
[oracle@raclinux1 ~]$ srvctl start database -d RACDB
[oracle@raclinux1 ~]$ srvctl status database -d RACDB
Instance RACDB1 is running on node raclinux1
Instance RACDB2 is running on node raclinux2
[oracle@raclinux1 ~]$ srvctl config database -d RACDB
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/product/11.2.0/db_10
Oracle user: oracle
Spfile: +DGDUP/RACDB/spfileRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RACDB
Database instances: RACDB1,RACDB2
Disk Groups: DGDUP,DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@raclinux1 ~]$ sqlplus system/sys1@racdb

SQL*Plus: Release 11.2.0.2.0 Production on Tue Oct 5 14:26:32 2010
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from gv$instance;

   INST_ID INSTANCE_NUMBER INSTANCE_NAME
---------- --------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
         1               1 RACDB1
raclinux1.gj.com
11.2.0.2.0        05-OCT-10 OPEN         YES          1 STOPPED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO

   INST_ID INSTANCE_NUMBER INSTANCE_NAME
---------- --------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
         2               2 RACDB2
raclinux2.gj.com
11.2.0.2.0        05-OCT-10 OPEN         YES          2 STOPPED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO

SQL>
Verify RONE installation
[oracle@raclinux2 admin]$ srvctl status database -d rone
Instance RONE_1 is running on node raclinux2
Online relocation: INACTIVE
[oracle@raclinux2 admin]$ srvctl config database -d rone
Database unique name: RONE
Database name: RONE
Oracle home: /u01/app/oracle/product/11.2.0/db_20
Oracle user: oracle
Spfile: +DATA/RONE/spfileRONE.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RONE
Database instances:
Disk Groups: DATA
Mount point paths:
Services: RACONE
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: RONE
Candidate servers: raclinux2,raclinux1
Database is administrator managed
[oracle@raclinux2 admin]$ sqlplus system/sys1@ronetaf

SQL*Plus: Release 11.2.0.2.0 Production on Tue Oct 5 14:21:50 2010
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from gv$instance;

   INST_ID INSTANCE_NUMBER INSTANCE_NAME
---------- --------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
         1               1 RONE_1
raclinux2.gj.com
11.2.0.2.0        05-OCT-10 OPEN         YES          1 STOPPED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO

SQL>
Summary
We looked at creating RAC and RAC One Node databases using dbca in Oracle 11gR2 11.2.0.2 and verified that the databases were successfully created.
Upgrade to Oracle 11.2.0.2 from Oracle 11.2.0.1
Starting with the first patch set for Oracle Database 11g Release 2 (11.2.0.2), Oracle Database patch sets are full installations of the Oracle Database software. In past releases, patch sets consisted of a set of files that replaced files in an existing Oracle home; beginning with 11.2.0.2, they are full installations that replace existing installations, as per the MOS note Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2 [ID 1189783.1].
In this article we will look at how to upgrade an existing Oracle GI installation and install the new Oracle RDBMS binaries. For information on installing a fresh Oracle 11.2.0.2 GI see here.
A prerequisite for the Oracle 11gR2 11.2.0.2 installation is to apply patch 9655006 to the 11.2.0.1 GI home before upgrading from 11.2.0.1 to 11.2.0.2. See Bug 9413827 on MOS.
If the patch is not applied, running ./rootupgrade.sh will fail as shown below.
[root@raclinux1 grid]# ./rootupgrade.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
The fixes for bug 9413827 are not present in the 11.2.0.1 crs home
Apply the patches for these bugs in the 11.2.0.1 crs home and then run rootupgrade.sh
/u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl execution failed
[root@raclinux1 grid]#
Applying patch 9655006 to 11.2.0.1
- Download and unzip the latest OPatch utility (patch 6880880) into $GI_HOME.
- Download and unzip patch 9655006 in a stage directory in my case in /u01/stage. This will create two directories 9655006 and 9654983.
- Invoke OPatch from $GI_HOME/OPatch as root:
./opatch auto /u01/stage -och /u01/app/11.2.0/grid
- Verify the patching with opatch lsinventory. See the Annex for sample output.
Refer to the MOS note ‘How to Manually Apply A Grid Infrastructure PSU Without Opatch Auto’ [ID 1210964.1] in case of problems.
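The verification step can also be scripted. The sketch below is hypothetical (function name and sample lines are illustrative): it counts how many of the two expected patches appear in the lsinventory listing.

```shell
# Hypothetical verification: count occurrences of the two expected patch IDs
# in 'opatch lsinventory' output on stdin. In real use:
#   $GI_HOME/OPatch/opatch lsinventory | count_expected_patches
count_expected_patches() { grep -Ec '^Patch (9655006|9654983) :'; }

# sample lines from the Annex output
sample='Patch 9655006 : applied on Thu Sep 23 19:34:31 EDT 2010
Patch 9654983 : applied on Thu Sep 23 19:20:20 EDT 2010'

printf '%s\n' "$sample" | count_expected_patches   # prints: 2
```

A result of anything other than 2 on either node would mean the prerequisite patching is incomplete.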
Patching Oracle GI to 11.2.0.2
Create a new directory /u01/app/11.2.0.2/grid for Oracle 11.2.0.2 GI on all nodes.
mkdir -p /u01/app/11.2.0.2/grid
chmod -R 775 /u01/app/11.2.0.2/
chown -R oracle:oinstall /u01/app/11.2.0.2/
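Since the preparation has to be repeated on every node, it can be wrapped in a small function. This is an illustrative sketch (`prep_gi_home` is a hypothetical helper; the chown, which requires root, is left commented out so the function can be dry-run as an ordinary user):

```shell
# Hypothetical helper: prepare a GI home directory tree with the permissions
# used above. Pass the base path, e.g. /u01/app/11.2.0.2.
prep_gi_home() {
  base="$1"
  mkdir -p "$base/grid"
  chmod -R 775 "$base"
  # chown -R oracle:oinstall "$base"   # run as root on a real node
}

# dry run against a scratch location instead of /u01/app/11.2.0.2:
prep_gi_home /tmp/gi_test
ls -d /tmp/gi_test/grid    # prints: /tmp/gi_test/grid
```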
Download and unzip patch 10098816 into a stage directory, in my case /u01/stage/11.2.0.2.
Start the installer from /u01/stage/11.2.0.2/grid. Either enter MOS credentials to check for updates or select ‘Skip software updates’. Press Next to continue.
Select Upgrade Oracle Grid Infrastructure or Oracle ASM and press Next to continue.
Select the languages and press Next to continue.
Make sure that all nodes are selected and press Next to continue.
Keep the defaults and press Next to continue.
Enter the new location for Oracle GI 11.2.0.2 as /u01/app/11.2.0.2/grid and press Next to continue.
Wait for the prerequisite checks to complete.
Review the Summary and press Install to continue.
Wait for the installation to complete.
Run rootupgrade.sh script first on raclinux1 and after that on raclinux2. Look at the output of the script in the Annex.
Wait for the Oracle GI installation to complete.
Verify that the GI has been upgraded successfully from the new GI home.
[root@raclinux1 bin]# pwd
/u01/app/11.2.0.2/grid/bin
[root@raclinux1 bin]# ./crsctl check cluster -all
**************************************************************
raclinux1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
raclinux2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@raclinux1 bin]#
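The same check can be scripted for monitoring. A hypothetical sketch (`online_count` and the sample text are illustrative): count the 'is online' messages from crsctl; a healthy two-node cluster reports three per node, six in total.

```shell
# Hypothetical health check: count CRS 'is online' messages from
# 'crsctl check cluster -all' output on stdin.
online_count() { grep -c 'is online$'; }

# sample from the output above; in real use:
#   $GI_HOME/bin/crsctl check cluster -all | online_count
sample='raclinux1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
raclinux2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'

printf '%s\n' "$sample" | online_count    # prints: 6
```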
Install new Oracle 11.2.0.2 RDBMS binaries
Start the installer from /u01/stage/11.2.0.2/database. Press Next to continue.
Acknowledge in the pop-up that you do not wish to provide MOS credentials at present and continue.
Either enter MOS credentials to check for updates or select ‘Skip software updates’. Press Next to continue.
Select ‘Install database software only’ and press Next to continue.
Select Oracle Real Application Clusters database installation, select all the nodes in the cluster, and press Next to continue. We will show Oracle RAC and RAC One Node database creation using dbca in later articles; here we are installing only the Oracle RDBMS binaries.
Optionally select the language and press Next to continue.
Select Enterprise Edition and press Next to continue.
Select the location for the New Oracle RDBMS home and press Next to continue.
Choose the defaults and press Next to continue.
Check the errors and fix them. In this case select Ignore All and press Next to continue.
Verify the Summary and press Install to continue.
Wait for the installation to complete.
Press Close upon successful completion.
In the next posts we will look into creating RAC and RAC One Node databases using dbca. If you have an existing database, you can upgrade it using dbua.
Annex
Sample opatch lsinventory output
[oracle@raclinux2 OPatch]$ ./opatch lsinventory Invoking OPatch 11.2.0.1.3 Oracle Interim Patch Installer version 11.2.0.1.3 Copyright (c) 2010, Oracle Corporation. All rights reserved. Oracle Home : /u01/app/11.2.0/grid Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.3 OUI version : 11.2.0.1.0 OUI location : /u01/app/11.2.0/grid/oui Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2010-09-23_19-45-20PM.log Patch history file: /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch_history.txt Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2010-09-23_19-45-20PM.txt -------------------------------------------------------------------------------- Installed Top-level Products (1): Oracle Grid Infrastructure 11.2.0.1.0 There are 1 products installed in this Oracle Home. Interim patches (2) : Patch 9655006 : applied on Thu Sep 23 19:34:31 EDT 2010 Unique Patch ID: 12651761 Created on 6 Jul 2010, 12:00:17 hrs PST8PDT Bugs fixed: 9655006, 9778840, 9343627, 9783609, 9262748, 9262722 Patch 9654983 : applied on Thu Sep 23 19:20:20 EDT 2010 Unique Patch ID: 12651761 Created on 18 Jun 2010, 00:16:02 hrs PST8PDT Bugs fixed: 9068088, 9363384, 8865718, 8898852, 8801119, 9054253, 8725286, 8974548 9093300, 8909984, 8755082, 8780372, 8664189, 8769569, 7519406, 8822531 7705591, 8650719, 9637033, 8639114, 8723477, 8729793, 8919682, 8856478 9001453, 8733749, 8565708, 8735201, 8684517, 8870559, 8773383, 8981059 8812705, 9488887, 8813366, 9242411, 8822832, 8897784, 8760714, 8775569 8671349, 8898589, 9714832, 8642202, 9011088, 9170608, 9369797, 9165206 8834636, 8891037, 8431487, 8570322, 8685253, 8872096, 8718952, 8799099 9032717, 9399090, 9546223, 9713537, 8588519, 8783738, 8834425, 9454385 8856497, 8890026, 8721315, 8818175, 8674263, 9145541, 8720447, 9272086 9467635, 9010222, 9197917, 8991997, 8661168, 8803762, 8769239, 9654983 8706590, 8778277, 8815639, 9027691, 9454036, 9454037, 9454038, 
9255542 8761974, 9275072, 8496830, 8702892, 8818983, 8475069, 8875671, 9328668 8798317, 8891929, 8774868, 8820324, 8544696, 8702535, 8268775, 9036013 9363145, 8933870, 8405205, 9467727, 8822365, 9676419, 8761260, 8790767 8795418, 8913269, 8717461, 8607693, 8861700, 8330783, 8780281, 8780711 8784929, 9341448, 9015983, 9119194, 8828328, 8665189, 8717031, 8832205 9676420, 8633358, 9321701, 9655013, 8796511, 9167285, 8782971, 8756598 8703064, 9066116, 9007102, 9461782, 9352237, 8505803, 8753903, 9216806 8918433, 9057443, 8790561, 8733225, 9067282, 8928276, 9210925, 8837736 Rac system comprising of multiple nodes Local node = raclinux2 Remote node = raclinux1 -------------------------------------------------------------------------------- OPatch succeeded. [oracle@raclinux2 OPatch]$ [oracle@raclinux1 OPatch]$ ./opatch lsinventory Invoking OPatch 11.2.0.1.3 Oracle Interim Patch Installer version 11.2.0.1.3 Copyright (c) 2010, Oracle Corporation. All rights reserved. Oracle Home : /u01/app/11.2.0/grid Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.3 OUI version : 11.2.0.1.0 OUI location : /u01/app/11.2.0/grid/oui Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2010-09-23_19-41-01PM.log Patch history file: /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch_history.txt Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2010-09-23_19-41-01PM.txt -------------------------------------------------------------------------------- Installed Top-level Products (1): Oracle Grid Infrastructure 11.2.0.1.0 There are 1 products installed in this Oracle Home. 
Interim patches (2) : Patch 9655006 : applied on Thu Sep 23 19:39:38 EDT 2010 Unique Patch ID: 12651761 Created on 6 Jul 2010, 12:00:17 hrs PST8PDT Bugs fixed: 9655006, 9778840, 9343627, 9783609, 9262748, 9262722 Patch 9654983 : applied on Thu Sep 23 19:20:43 EDT 2010 Unique Patch ID: 12651761 Created on 18 Jun 2010, 00:16:02 hrs PST8PDT Bugs fixed: 9068088, 9363384, 8865718, 8898852, 8801119, 9054253, 8725286, 8974548 9093300, 8909984, 8755082, 8780372, 8664189, 8769569, 7519406, 8822531 7705591, 8650719, 9637033, 8639114, 8723477, 8729793, 8919682, 8856478 9001453, 8733749, 8565708, 8735201, 8684517, 8870559, 8773383, 8981059 8812705, 9488887, 8813366, 9242411, 8822832, 8897784, 8760714, 8775569 8671349, 8898589, 9714832, 8642202, 9011088, 9170608, 9369797, 9165206 8834636, 8891037, 8431487, 8570322, 8685253, 8872096, 8718952, 8799099 9032717, 9399090, 9546223, 9713537, 8588519, 8783738, 8834425, 9454385 8856497, 8890026, 8721315, 8818175, 8674263, 9145541, 8720447, 9272086 9467635, 9010222, 9197917, 8991997, 8661168, 8803762, 8769239, 9654983 8706590, 8778277, 8815639, 9027691, 9454036, 9454037, 9454038, 9255542 8761974, 9275072, 8496830, 8702892, 8818983, 8475069, 8875671, 9328668 8798317, 8891929, 8774868, 8820324, 8544696, 8702535, 8268775, 9036013 9363145, 8933870, 8405205, 9467727, 8822365, 9676419, 8761260, 8790767 8795418, 8913269, 8717461, 8607693, 8861700, 8330783, 8780281, 8780711 8784929, 9341448, 9015983, 9119194, 8828328, 8665189, 8717031, 8832205 9676420, 8633358, 9321701, 9655013, 8796511, 9167285, 8782971, 8756598 8703064, 9066116, 9007102, 9461782, 9352237, 8505803, 8753903, 9216806 8918433, 9057443, 8790561, 8733225, 9067282, 8928276, 9210925, 8837736 Rac system comprising of multiple nodes Local node = raclinux1 Remote node = raclinux2 -------------------------------------------------------------------------------- OPatch succeeded. 
[oracle@raclinux1 OPatch]$

Output of running rootupgrade.sh on raclinux1:

[root@raclinux1 grid]# ./rootupgrade.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
ASM upgrade has started on first node.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'raclinux1'
CRS-2673: Attempting to stop 'ora.crsd' on 'raclinux1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'raclinux1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'raclinux1'
CRS-2677: Stop of 'ora.DATA.dg' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'raclinux1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'raclinux1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.raclinux1.vip' on 'raclinux1'
CRS-2677: Stop of 'ora.scan1.vip' on 'raclinux1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'raclinux2'
CRS-2677: Stop of 'ora.asm' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.raclinux1.vip' on 'raclinux1' succeeded
CRS-2672: Attempting to start 'ora.raclinux1.vip' on 'raclinux2'
CRS-2676: Start of 'ora.scan1.vip' on 'raclinux2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'raclinux2'
CRS-2676: Start of 'ora.raclinux1.vip' on 'raclinux2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'raclinux2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.eons' on 'raclinux1'
CRS-2677: Stop of 'ora.ons' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'raclinux1'
CRS-2677: Stop of 'ora.net1.network' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.eons' on 'raclinux1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'raclinux1' has completed
CRS-2677: Stop of 'ora.crsd' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.evmd' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.asm' on 'raclinux1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.asm' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'raclinux1'
CRS-2677: Stop of 'ora.cssd' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'raclinux1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'raclinux1'
CRS-2677: Stop of 'ora.gpnpd' on 'raclinux1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'raclinux1'
CRS-2677: Stop of 'ora.diskmon' on 'raclinux1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'raclinux1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'raclinux1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deleted 1 keys from OCR.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
clscfg: EXISTING configuration version 5 detected. clscfg: version 5 is 11g Release 2. Successfully accumulated necessary OCR keys. Creating OCR keys for user 'root', privgrp 'root'.. Operation successful. Configure Oracle Grid Infrastructure for a Cluster ... succeeded [root@raclinux1 grid]# Output of running rootupgrade.sh on raclinux2. [root@raclinux2 grid]# ./rootupgrade.sh Running Oracle 11g root script... The following environment variables are set as: ORACLE_OWNER= oracle ORACLE_HOME= /u01/app/11.2.0.2/grid Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of "dbhome" have not changed. No need to overwrite. The contents of "oraenv" have not changed. No need to overwrite. The contents of "coraenv" have not changed. No need to overwrite. Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created Finished running generic part of root script. Now product-specific root actions will be performed. Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params clscfg: EXISTING configuration version 5 detected. clscfg: version 5 is 11g Release 2. Successfully accumulated necessary OCR keys. Creating OCR keys for user 'root', privgrp 'root'.. Operation successful. CRS-1115: Oracle Clusterware has already been upgraded. ASM upgrade has finished on last node. Configure Oracle Grid Infrastructure for a Cluster ... succeeded [root@raclinux2 grid]#
Fresh Oracle 11.2.0.2 Grid Infrastructure Installation PRVF-5150 PRVF-5184
The Oracle 11gR2 Grid Infrastructure installation for version 11.2.0.1 is covered here. The Linux installation is covered here. The prerequisites for an Oracle 11gR2 installation are described here. Setting up a VMware cluster for an Oracle 11gR2 installation is covered here.
This article will not cover a detailed step-by-step installation of 11.2.0.2 GI, as it is similar to 11.2.0.1. Running runcluvfy confirmed that the prerequisites for Oracle GI and RAC were met, and the OUI from GI 11.2.0.1 also passed the installation prerequisite checks. The full runcluvfy output is in the Annex.
However, during a fresh installation of Oracle GI 11.2.0.2 following the 11.2.0.1 document, errors PRVF-5150 and PRVF-5184 were encountered while executing the prerequisite checks.
If I use /dev/oracleasm/DISK* instead of ORCL:DISK* as the disk discovery path, I get error PRVF-5184 instead.
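As a sanity check that the disks themselves are healthy regardless of which discovery string the OUI accepts, the same ASMLib disks can be listed through both naming schemes. A minimal sketch, assuming ASMLib is installed in its default locations and DISK1 stands in for one of your actual disk labels:

```shell
# List ASMLib-labeled disks by label (what the ORCL:DISK* pattern matches)
/usr/sbin/oracleasm listdisks

# The same disks as device nodes; ASMLib places them under /dev/oracleasm/disks/
ls -l /dev/oracleasm/disks/

# Confirm a given label (DISK1 is a placeholder) is a valid ASM disk
/usr/sbin/oracleasm querydisk DISK1
```

If both listings agree, the PRVF-5150/PRVF-5184 errors point at the checker rather than at the storage itself.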
Apparently something in the OUI/cluvfy handling of Udev-managed devices is either buggy or not documented.
The Oracle® Database Readme 11g Release 2 (11.2) points to bug 10044507.
I skipped the prerequisite checks by selecting the Ignore All check box and pressing the Install button. Following the same approach as the 11.2.0.1 install, the Oracle GI 11.2.0.2 installation succeeded.
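After ignoring the checks and completing the install, it is worth verifying that the stack actually came up cleanly. A sketch, assuming the GI home's bin directory is on the PATH:

```shell
# Verify the Clusterware stack on all nodes
crsctl check cluster -all

# Re-run the verification as a post-install stage check
cluvfy stage -post crsinst -n raclinux1,raclinux2 -verbose
```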
MOS sheds some light as well: PRVF-5449 : Check of Voting Disk location "ORCL:(ORCL:)" failed [ID 1267569.1]. See Johan Westerduin's comment.
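Since the MOS note concerns the voting disk location check, the voting files Clusterware is actually using can be confirmed directly once the stack is up (run as the GI software owner):

```shell
# Show the registered voting files and their backing disks
crsctl query css votedisk
```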
Annex
[oracle@raclinux1 grid]$ ./runcluvfy.sh stage -pre crsinst -n raclinux1,raclinux2 Performing pre-checks for cluster services setup Checking node reachability... Node reachability check passed from node "raclinux1" Checking user equivalence... User equivalence check passed for user "oracle" Checking node connectivity... Checking hosts config file... Verification of the hosts config file successful Node connectivity passed for subnet "10.0.2.0" with node(s) raclinux2,raclinux1 ERROR: PRVF-7617 : Node connectivity between "raclinux1 : 10.0.2.15" and "raclinux2 : 10.0.2.15" failed TCP connectivity check failed for subnet "10.0.2.0" Node connectivity passed for subnet "192.168.56.0" with node(s) raclinux2,raclinux1 TCP connectivity check passed for subnet "192.168.56.0" Node connectivity passed for subnet "10.10.20.0" with node(s) raclinux2,raclinux1 TCP connectivity check passed for subnet "10.10.20.0" Node connectivity passed for subnet "192.168.20.0" with node(s) raclinux2,raclinux1 TCP connectivity check passed for subnet "192.168.20.0" Interfaces found on subnet "10.0.2.0" that are likely candidates for VIP are: raclinux2 eth0:10.0.2.15 raclinux1 eth0:10.0.2.15 Interfaces found on subnet "192.168.56.0" that are likely candidates for a private interconnect are: raclinux2 eth1:192.168.56.102 raclinux1 eth1:192.168.56.101 Node connectivity check failed Checking ASMLib configuration. Check for ASMLib configuration passed. 
Total memory check passed Available memory check passed Swap space check passed Free disk space check passed for "raclinux2:/tmp" Free disk space check passed for "raclinux1:/tmp" Check for multiple users with UID value 1100 passed User existence check passed for "oracle" Group existence check passed for "oinstall" Group existence check passed for "dba" Membership check for user "oracle" in group "oinstall" [as Primary] passed Membership check for user "oracle" in group "dba" passed Run level check passed Hard limits check passed for "maximum open file descriptors" Soft limits check passed for "maximum open file descriptors" Hard limits check passed for "maximum user processes" Soft limits check passed for "maximum user processes" System architecture check passed Kernel version check passed Kernel parameter check passed for "semmsl" Kernel parameter check passed for "semmns" Kernel parameter check passed for "semopm" Kernel parameter check passed for "semmni" Kernel parameter check passed for "shmmax" Kernel parameter check passed for "shmmni" Kernel parameter check passed for "shmall" Kernel parameter check passed for "file-max" Kernel parameter check passed for "ip_local_port_range" Kernel parameter check passed for "rmem_default" Kernel parameter check passed for "rmem_max" Kernel parameter check passed for "wmem_default" Kernel parameter check passed for "wmem_max" Kernel parameter check passed for "aio-max-nr" Package existence check passed for "make-3.81( x86_64)" Package existence check passed for "binutils-2.17.50.0.6( x86_64)" Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)" Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)" Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)" Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)" Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)" WARNING: PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" 
found on node raclinux1: elfutils-libelf-devel-0.137-3.el5 (x86_64),elfutils-libelf-devel-0.137-3.el5 (i386) Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)" Package existence check passed for "glibc-common-2.5( x86_64)" Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)" Package existence check passed for "glibc-headers-2.5( x86_64)" Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)" Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)" Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)" Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)" Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)" Package existence check passed for "sysstat-7.0.2( x86_64)" Package existence check passed for "ksh-20060214( x86_64)" Check for multiple users with UID value 0 passed Current group ID check passed Starting Clock synchronization checks using Network Time Protocol(NTP)... NTP Configuration file check started... No NTP Daemons or Services were found to be running Clock synchronization check using Network Time Protocol(NTP) passed Core file name pattern consistency check passed. User "oracle" is not part of "root" group. Check passed Default user file creation mask check passed Checking consistency of file "/etc/resolv.conf" across nodes File "/etc/resolv.conf" does not have both domain and search entries defined domain entry in file "/etc/resolv.conf" is consistent across nodes search entry in file "/etc/resolv.conf" is consistent across nodes The DNS response time for an unreachable node is within acceptable limit on all nodes File "/etc/resolv.conf" is consistent across nodes Time zone consistency check passed Starting check for Huge Pages Existence ... Check for Huge Pages Existence passed Starting check for Hardware Clock synchronization at shutdown ... 
Check for Hardware Clock synchronization at shutdown passed Pre-check for cluster services setup was unsuccessful. Checks did not pass for the following node(s): raclinux1 : 10.0.2.15 [oracle@raclinux1 grid]$ [oracle@raclinux1 grid]$ ./runcluvfy.sh stage -pre crsinst -n raclinux1,raclinux2 -verbose Performing pre-checks for cluster services setup Checking node reachability... Check: Node reachability from node "raclinux1" Destination Node Reachable? ------------------------------------ ------------------------ raclinux1 yes raclinux2 yes Result: Node reachability check passed from node "raclinux1" Checking user equivalence... Check: User equivalence for user "oracle" Node Name Comment ------------------------------------ ------------------------ raclinux2 passed raclinux1 passed Result: User equivalence check passed for user "oracle" Checking node connectivity... Checking hosts config file... Node Name Status Comment ------------ ------------------------ ------------------------ raclinux2 passed raclinux1 passed Verification of the hosts config file successful Interface information for node "raclinux2" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 10.0.2.15 10.0.2.0 0.0.0.0 10.0.2.2 08:00:27:9C:41:1A 1500 eth1 192.168.56.102 192.168.56.0 0.0.0.0 10.0.2.2 08:00:27:3A:50:B2 1500 eth2 10.10.20.22 10.10.20.0 0.0.0.0 10.0.2.2 08:00:27:CA:35:14 1500 eth3 192.168.20.22 192.168.20.0 0.0.0.0 10.0.2.2 08:00:27:B1:72:31 1500 Interface information for node "raclinux1" Name IP Address Subnet Gateway Def. 
Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 10.0.2.15 10.0.2.0 0.0.0.0 10.0.2.2 08:00:27:97:73:42 1500 eth1 192.168.56.101 192.168.56.0 0.0.0.0 10.0.2.2 08:00:27:A8:70:2A 1500 eth2 192.168.20.21 192.168.20.0 0.0.0.0 10.0.2.2 08:00:27:BF:C3:12 1500 eth3 10.10.20.21 10.10.20.0 0.0.0.0 10.0.2.2 08:00:27:68:35:9F 1500 Check: Node connectivity of subnet "10.0.2.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- raclinux2[10.0.2.15] raclinux1[10.0.2.15] yes Result: Node connectivity passed for subnet "10.0.2.0" with node(s) raclinux2,raclinux1 Check: TCP connectivity of subnet "10.0.2.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- raclinux1:10.0.2.15 raclinux2:10.0.2.15 failed ERROR: PRVF-7617 : Node connectivity between "raclinux1 : 10.0.2.15" and "raclinux2 : 10.0.2.15" failed Result: TCP connectivity check failed for subnet "10.0.2.0" Check: Node connectivity of subnet "192.168.56.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- raclinux2[192.168.56.102] raclinux1[192.168.56.101] yes Result: Node connectivity passed for subnet "192.168.56.0" with node(s) raclinux2,raclinux1 Check: TCP connectivity of subnet "192.168.56.0" Source Destination Connected? 
------------------------------ ------------------------------ ---------------- raclinux1:192.168.56.101 raclinux2:192.168.56.102 passed Result: TCP connectivity check passed for subnet "192.168.56.0" Check: Node connectivity of subnet "10.10.20.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- raclinux2[10.10.20.22] raclinux1[10.10.20.21] yes Result: Node connectivity passed for subnet "10.10.20.0" with node(s) raclinux2,raclinux1 Check: TCP connectivity of subnet "10.10.20.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- raclinux1:10.10.20.21 raclinux2:10.10.20.22 passed Result: TCP connectivity check passed for subnet "10.10.20.0" Check: Node connectivity of subnet "192.168.20.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- raclinux2[192.168.20.22] raclinux1[192.168.20.21] yes Result: Node connectivity passed for subnet "192.168.20.0" with node(s) raclinux2,raclinux1 Check: TCP connectivity of subnet "192.168.20.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- raclinux1:192.168.20.21 raclinux2:192.168.20.22 passed Result: TCP connectivity check passed for subnet "192.168.20.0" Interfaces found on subnet "10.0.2.0" that are likely candidates for VIP are: raclinux2 eth0:10.0.2.15 raclinux1 eth0:10.0.2.15 Interfaces found on subnet "192.168.56.0" that are likely candidates for a private interconnect are: raclinux2 eth1:192.168.56.102 raclinux1 eth1:192.168.56.101 Result: Node connectivity check failed Checking ASMLib configuration. Node Name Comment ------------------------------------ ------------------------ raclinux2 passed raclinux1 passed Result: Check for ASMLib configuration passed. 
Check: Total memory Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 2.8773GB (3017016.0KB) 1.5GB (1572864.0KB) passed raclinux1 2.8773GB (3017016.0KB) 1.5GB (1572864.0KB) passed Result: Total memory check passed Check: Available memory Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 2.71GB (2841676.0KB) 50MB (51200.0KB) passed raclinux1 2.6299GB (2757680.0KB) 50MB (51200.0KB) passed Result: Available memory check passed Check: Swap space Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 4.8437GB (5079032.0KB) 2.8773GB (3017016.0KB) passed raclinux1 4.8437GB (5079032.0KB) 2.8773GB (3017016.0KB) passed Result: Swap space check passed Check: Free disk space for "raclinux2:/tmp" Path Node Name Mount point Available Required Comment ---------------- ------------ ------------ ------------ ------------ ------------ /tmp raclinux2 / 366.0488GB 1GB passed Result: Free disk space check passed for "raclinux2:/tmp" Check: Free disk space for "raclinux1:/tmp" Path Node Name Mount point Available Required Comment ---------------- ------------ ------------ ------------ ------------ ------------ /tmp raclinux1 / 356.672GB 1GB passed Result: Free disk space check passed for "raclinux1:/tmp" Check: User existence for "oracle" Node Name Status Comment ------------ ------------------------ ------------------------ raclinux2 exists(1100) passed raclinux1 exists(1100) passed Checking for multiple users with UID value 1100 Result: Check for multiple users with UID value 1100 passed Result: User existence check passed for "oracle" Check: Group existence for "oinstall" Node Name Status Comment ------------ ------------------------ ------------------------ raclinux2 exists passed raclinux1 exists passed Result: Group existence check passed for "oinstall" Check: Group existence 
for "dba" Node Name Status Comment ------------ ------------------------ ------------------------ raclinux2 exists passed raclinux1 exists passed Result: Group existence check passed for "dba" Check: Membership of user "oracle" in group "oinstall" [as Primary] Node Name User Exists Group Exists User in Group Primary Comment ---------------- ------------ ------------ ------------ ------------ ------------ raclinux2 yes yes yes yes passed raclinux1 yes yes yes yes passed Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed Check: Membership of user "oracle" in group "dba" Node Name User Exists Group Exists User in Group Comment ---------------- ------------ ------------ ------------ ---------------- raclinux2 yes yes yes passed raclinux1 yes yes yes passed Result: Membership check for user "oracle" in group "dba" passed Check: Run level Node Name run level Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 5 3,5 passed raclinux1 5 3,5 passed Result: Run level check passed Check: Hard limits for "maximum open file descriptors" Node Name Type Available Required Comment ---------------- ------------ ------------ ------------ ---------------- raclinux2 hard 65536 65536 passed raclinux1 hard 65536 65536 passed Result: Hard limits check passed for "maximum open file descriptors" Check: Soft limits for "maximum open file descriptors" Node Name Type Available Required Comment ---------------- ------------ ------------ ------------ ---------------- raclinux2 soft 65536 1024 passed raclinux1 soft 65536 1024 passed Result: Soft limits check passed for "maximum open file descriptors" Check: Hard limits for "maximum user processes" Node Name Type Available Required Comment ---------------- ------------ ------------ ------------ ---------------- raclinux2 hard 16384 16384 passed raclinux1 hard 16384 16384 passed Result: Hard limits check passed for "maximum user processes" Check: Soft limits for 
"maximum user processes" Node Name Type Available Required Comment ---------------- ------------ ------------ ------------ ---------------- raclinux2 soft 16384 2047 passed raclinux1 soft 16384 2047 passed Result: Soft limits check passed for "maximum user processes" Check: System architecture Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 x86_64 x86_64 passed raclinux1 x86_64 x86_64 passed Result: System architecture check passed Check: Kernel version Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 2.6.18-194.el5 2.6.18 passed raclinux1 2.6.18-194.el5 2.6.18 passed Result: Kernel version check passed Check: Kernel parameter for "semmsl" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 250 250 passed raclinux1 250 250 passed Result: Kernel parameter check passed for "semmsl" Check: Kernel parameter for "semmns" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 32000 32000 passed raclinux1 32000 32000 passed Result: Kernel parameter check passed for "semmns" Check: Kernel parameter for "semopm" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 100 100 passed raclinux1 100 100 passed Result: Kernel parameter check passed for "semopm" Check: Kernel parameter for "semmni" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 128 128 passed raclinux1 128 128 passed Result: Kernel parameter check passed for "semmni" Check: Kernel parameter for "shmmax" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 1544712192 1544712192 passed raclinux1 1544712192 1544712192 passed Result: 
Kernel parameter check passed for "shmmax" Check: Kernel parameter for "shmmni" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 4096 4096 passed raclinux1 4096 4096 passed Result: Kernel parameter check passed for "shmmni" Check: Kernel parameter for "shmall" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 2097152 2097152 passed raclinux1 2097152 2097152 passed Result: Kernel parameter check passed for "shmall" Check: Kernel parameter for "file-max" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 6815744 6815744 passed raclinux1 6815744 6815744 passed Result: Kernel parameter check passed for "file-max" Check: Kernel parameter for "ip_local_port_range" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 between 9000 & 65500 between 9000 & 65500 passed raclinux1 between 9000 & 65500 between 9000 & 65500 passed Result: Kernel parameter check passed for "ip_local_port_range" Check: Kernel parameter for "rmem_default" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 262144 262144 passed raclinux1 262144 262144 passed Result: Kernel parameter check passed for "rmem_default" Check: Kernel parameter for "rmem_max" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 4194304 4194304 passed raclinux1 4194304 4194304 passed Result: Kernel parameter check passed for "rmem_max" Check: Kernel parameter for "wmem_default" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 262144 262144 passed raclinux1 262144 262144 passed Result: Kernel parameter check passed for "wmem_default" 
Check: Kernel parameter for "wmem_max" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 1048586 1048576 passed raclinux1 1048586 1048576 passed Result: Kernel parameter check passed for "wmem_max" Check: Kernel parameter for "aio-max-nr" Node Name Configured Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 1048576 1048576 passed raclinux1 1048576 1048576 passed Result: Kernel parameter check passed for "aio-max-nr" Check: Package existence for "make-3.81( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 make-3.81-3.el5 make-3.81( x86_64) passed raclinux1 make-3.81-3.el5 make-3.81( x86_64) passed Result: Package existence check passed for "make-3.81( x86_64)" Check: Package existence for "binutils-2.17.50.0.6( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 binutils-2.17.50.0.6-14.el5 binutils-2.17.50.0.6( x86_64) passed raclinux1 binutils-2.17.50.0.6-14.el5 binutils-2.17.50.0.6( x86_64) passed Result: Package existence check passed for "binutils-2.17.50.0.6( x86_64)" Check: Package existence for "gcc-4.1.2 (x86_64)( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 gcc-4.1.2-48.el5 (x86_64) gcc-4.1.2 (x86_64)( x86_64) passed raclinux1 gcc-4.1.2-48.el5 (x86_64) gcc-4.1.2 (x86_64)( x86_64) passed Result: Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)" Check: Package existence for "libaio-0.3.106 (x86_64)( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 libaio-0.3.106-5 (x86_64) libaio-0.3.106 (x86_64)( x86_64) passed raclinux1 libaio-0.3.106-5 (x86_64) libaio-0.3.106 (x86_64)( x86_64) passed Result: Package 
existence check passed for "libaio-0.3.106 (x86_64)( x86_64)" Check: Package existence for "glibc-2.5-24 (x86_64)( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 glibc-2.5-49 (x86_64) glibc-2.5-24 (x86_64)( x86_64) passed raclinux1 glibc-2.5-49 (x86_64) glibc-2.5-24 (x86_64)( x86_64) passed Result: Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)" Check: Package existence for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 compat-libstdc++-33-3.2.3-61 (x86_64) compat-libstdc++-33-3.2.3 (x86_64)( x86_64) passed raclinux1 compat-libstdc++-33-3.2.3-61 (x86_64) compat-libstdc++-33-3.2.3 (x86_64)( x86_64) passed Result: Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)" Check: Package existence for "elfutils-libelf-0.125 (x86_64)( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 elfutils-libelf-0.137-3.el5 (x86_64) elfutils-libelf-0.125 (x86_64)( x86_64) passed raclinux1 elfutils-libelf-0.137-3.el5 (x86_64) elfutils-libelf-0.125 (x86_64)( x86_64) passed Result: Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)" Check: Package existence for "elfutils-libelf-devel-0.125( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125( x86_64) passed raclinux1 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125( x86_64) passed WARNING: PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" found on node raclinux1: elfutils-libelf-devel-0.137-3.el5 (x86_64),elfutils-libelf-devel-0.137-3.el5 (i386) Result: Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)" 
Check: Package existence for "glibc-common-2.5( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 glibc-common-2.5-49 glibc-common-2.5( x86_64) passed raclinux1 glibc-common-2.5-49 glibc-common-2.5( x86_64) passed Result: Package existence check passed for "glibc-common-2.5( x86_64)" Check: Package existence for "glibc-devel-2.5 (x86_64)( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 glibc-devel-2.5-49 (x86_64) glibc-devel-2.5 (x86_64)( x86_64) passed raclinux1 glibc-devel-2.5-49 (x86_64) glibc-devel-2.5 (x86_64)( x86_64) passed Result: Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)" Check: Package existence for "glibc-headers-2.5( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 glibc-headers-2.5-49 glibc-headers-2.5( x86_64) passed raclinux1 glibc-headers-2.5-49 glibc-headers-2.5( x86_64) passed Result: Package existence check passed for "glibc-headers-2.5( x86_64)" Check: Package existence for "gcc-c++-4.1.2 (x86_64)( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 gcc-c++-4.1.2-48.el5 (x86_64) gcc-c++-4.1.2 (x86_64)( x86_64) passed raclinux1 gcc-c++-4.1.2-48.el5 (x86_64) gcc-c++-4.1.2 (x86_64)( x86_64) passed Result: Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)" Check: Package existence for "libaio-devel-0.3.106 (x86_64)( x86_64)" Node Name Available Required Comment ------------ ------------------------ ------------------------ ---------- raclinux2 libaio-devel-0.3.106-5 (x86_64) libaio-devel-0.3.106 (x86_64)( x86_64) passed raclinux1 libaio-devel-0.3.106-5 (x86_64) libaio-devel-0.3.106 (x86_64)( x86_64) passed Result: Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)" 
Check: Package existence for "libgcc-4.1.2 (x86_64)"
  Node Name     Available                         Required                    Comment
  ------------  --------------------------------  --------------------------  ----------
  raclinux2     libgcc-4.1.2-48.el5 (x86_64)      libgcc-4.1.2 (x86_64)       passed
  raclinux1     libgcc-4.1.2-48.el5 (x86_64)      libgcc-4.1.2 (x86_64)       passed
Result: Package existence check passed for "libgcc-4.1.2 (x86_64)"

Check: Package existence for "libstdc++-4.1.2 (x86_64)"
  Node Name     Available                         Required                    Comment
  ------------  --------------------------------  --------------------------  ----------
  raclinux2     libstdc++-4.1.2-48.el5 (x86_64)   libstdc++-4.1.2 (x86_64)    passed
  raclinux1     libstdc++-4.1.2-48.el5 (x86_64)   libstdc++-4.1.2 (x86_64)    passed
Result: Package existence check passed for "libstdc++-4.1.2 (x86_64)"

Check: Package existence for "libstdc++-devel-4.1.2 (x86_64)"
  Node Name     Available                               Required                          Comment
  ------------  --------------------------------------  --------------------------------  ----------
  raclinux2     libstdc++-devel-4.1.2-48.el5 (x86_64)   libstdc++-devel-4.1.2 (x86_64)    passed
  raclinux1     libstdc++-devel-4.1.2-48.el5 (x86_64)   libstdc++-devel-4.1.2 (x86_64)    passed
Result: Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"

Check: Package existence for "sysstat-7.0.2 (x86_64)"
  Node Name     Available                Required                  Comment
  ------------  -----------------------  ------------------------  ----------
  raclinux2     sysstat-7.0.2-3.el5      sysstat-7.0.2 (x86_64)    passed
  raclinux1     sysstat-7.0.2-3.el5      sysstat-7.0.2 (x86_64)    passed
Result: Package existence check passed for "sysstat-7.0.2 (x86_64)"

Check: Package existence for "ksh-20060214 (x86_64)"
  Node Name     Available                Required                  Comment
  ------------  -----------------------  ------------------------  ----------
  raclinux2     ksh-20100202-1.el5       ksh-20060214 (x86_64)     passed
  raclinux1     ksh-20100202-1.el5       ksh-20060214 (x86_64)     passed
Result: Package existence check passed for "ksh-20060214 (x86_64)"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  raclinux2     does not exist            passed
  raclinux1     does not exist            passed
Result: User "oracle" is not part of "root" group. Check passed
Check default user file creation mask
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  raclinux2     0022                      0022                      passed
  raclinux1     0022                      0022                      passed
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  raclinux2                             passed
  raclinux1                             passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Check: Time zone consistency
Result: Time zone consistency check passed
Starting check for Huge Pages Existence ...
Check for Huge Pages Existence passed
Starting check for Hardware Clock synchronization at shutdown ...
Check for Hardware Clock synchronization at shutdown passed
Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following node(s):
        raclinux1 : 10.0.2.15
[oracle@raclinux1 grid]$
Oracle RAC and Oracle RAC One Node 11.2.0 and Transparent Application Failover (TAF)
In this article we will look at a Transparent Application Failover (TAF) setup with RAC. TAF is normally used with Oracle RAC; here we will test it with Oracle RAC One Node. We will set up a tnsnames.ora entry for TAF and verify the failover while migrating the instance to another node with Omotion. Managing Oracle RAC One Node was the subject of an earlier article. The tnsnames.ora entry RUPTAF is in the Annex. We will run a query, shown in the Annex in testtaf.sql, and monitor the failover while the RAC One Node instance is moved to another node of the cluster. Before and after the query execution we will check failover_method, failover_type and failed_over in v$session. We will connect to the RUP database (instance RUP_1 on raclinux2) and use Omotion to move it to the raclinux1 node.
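The before/after check of the TAF attributes boils down to the following query from testtaf.sql (the full script is in the Annex):

```sql
-- Run before and after the Omotion relocation in the TAF session;
-- FAILED_OVER flips from NO to YES once the session has failed over.
select sid, serial#, failover_type, failover_method, failed_over
  from v$session
 where username = 'SYSTEM';
```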
1. How RUPTAF is defined
RUPTAF =
  (DESCRIPTION =
    (ENABLE = BROKEN)
    (LOAD_BALANCE = OFF)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RUP)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (BACKUP = RUP)
      )
    )
  )
2. Set up RAC One Node
We looked at managing Oracle RAC One Node here. We will set up a RAC One Node database and verify it.
[oracle@raclinux2 ~]$ raconeinit
Candidate Databases on this cluster:
#    Database    RAC One Node    Fix Required
===  ========    ============    ============
[1]  RAC0        NO              N/A
[2]  RONE        NO              N/A
[3]  RUP         NO              N/A
Enter the database to initialize [1]: 3
Database RUP is now running on server raclinux2
Candidate servers that may be used for this DB: raclinux1
Enter the names of additional candidate servers where this DB may run (space delimited): raclinux1
Please wait, this may take a few minutes to finish.......
Database configuration modified.
[oracle@raclinux2 ~]$
[oracle@raclinux2 ~]$ raconestatus
RAC One Node databases on this cluster:
Database   UP    Fix Required    Current Server    Candidate Server Names
========   ==    ============    ==============    ======================
RUP        Y     N               raclinux2         raclinux2 raclinux1
Available Free Servers:
[oracle@raclinux2 ~]$
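The session above uses the patch 9004119 utilities (raconeinit/raconestatus). As a point of comparison — an assumption about an environment upgraded to 11.2.0.2, not part of the session shown here — the same initialization is done natively with srvctl:

```shell
# 11.2.0.2+: convert an admin-managed RAC database to RAC One Node
# (requires a running cluster; sketch only)
srvctl convert database -d RUP -c RACONENODE -i RUP

# Verify the configuration and the running instance
srvctl config database -d RUP
srvctl status database -d RUP
```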
3. Start a SQL statement and migrate the database with Omotion as in step 4
[oracle@raclinux2 admin]$ sqlplus system/sys1@ruptaf

SQL*Plus: Release 11.2.0.1.0 Production on Thu Sep 30 13:47:32 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @/u03/testtaf.sql

 SID   SERIAL# FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
---- --------- ------------- --------------- -----------
  49       685 SELECT        BASIC           NO

INSTANCE_NAME
----------------
RUP_1

  COUNT(*)
----------
    623183

 SID   SERIAL# FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
---- --------- ------------- --------------- -----------
  49         2 SELECT        BASIC           YES

INSTANCE_NAME
----------------
RUP_2

  COUNT(*)
----------
    623183

SQL>
4. Migrate with Omotion the RUP database while the SQL query in step 3 is running
[oracle@raclinux2 ~]$ Omotion -v
RAC One Node databases on this cluster:
#    Database    Server       Fix Required
===  ========    ==========   ============
[1]  RUP         raclinux2    N
Enter number of the database to migrate [1]: 1
Specify maximum time in minutes for migration to complete (max 30) [30]:
RUP Database is administrator managed.
RUP database is running in RUP server pool.
Current Running instance: RUP_1
Current Active Server : raclinux2
Available Target Server(s) :
#    Server       Available
===  ==========   =========
[1]  raclinux1    Y
Enter number of the target node [1]: 1
Omotion Started...
Starting target instance on raclinux1...
Migrating sessions...
Stopping source instance on raclinux2...
Omotion Completed...
=== Current Status ===
Database RUP is running on node raclinux1
[oracle@raclinux2 ~]$
[oracle@raclinux2 ~]$ Omotion
RAC One Node databases on this cluster:
#    Database    Server       Fix Required
===  ========    ==========   ============
[1]  RUP         raclinux1    N
Enter number of the database to migrate [1]: 1
Specify maximum time in minutes for migration to complete (max 30) [30]: 30
Available Target Server(s) :
#    Server       Available
===  ==========   =========
[1]  raclinux2    Y
Enter number of the target node [1]: 1
Omotion Started...
Starting target instance on raclinux2...
Migrating sessions...
Stopping source instance on raclinux1...
Omotion Completed...
=== Current Status ===
Database RUP is running on node raclinux2
[oracle@raclinux2 ~]$
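The Omotion migration has a native equivalent starting with 11.2.0.2; a hedged sketch assuming an upgraded environment rather than the 11.2.0.1 session shown here:

```shell
# 11.2.0.2+: online relocation of a RAC One Node database to a candidate server,
# with a 30-minute timeout and verbose output (requires a running cluster)
srvctl relocate database -d RUP -n raclinux1 -w 30 -v

# Confirm where the instance now runs
srvctl status database -d RUP
```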
Summary
We configured a TAF tnsnames.ora entry and used Omotion to move the RUP database to another node of the cluster. Using the RUPTAF connect string we confirmed that the session fails over, with the defined failover attributes, to the second instance started by Omotion.
Annex
The RUPTAF tnsnames.ora entry and the test script follow.
# in tnsnames.ora
RUPTAF =
  (DESCRIPTION =
    (ENABLE = BROKEN)
    (LOAD_BALANCE = OFF)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RUP)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (BACKUP = RUP)
      )
    )
  )

[oracle@raclinux2 admin]$ cat /u03/testtaf.sql
col sid format 999
col serial# format 99999999
col failover_type format a13
col failover_method format a15
col failed_over format a11
select sid, serial#, failover_type, failover_method, failed_over
  from v$session where username = 'SYSTEM';
select instance_name from v$instance;
select count(*) from (
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source
);
col sid format 999
col serial# format 99999999
col failover_type format a13
col failover_method format a15
col failed_over format a11
select sid, serial#, failover_type, failover_method, failed_over
  from v$session where username = 'SYSTEM';
select instance_name from v$instance;
select count(*) from (
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source union select * from dba_source union
  select * from dba_source
);
[oracle@raclinux2 admin]$
Multiple OCRs and vote disks on ASM in Oracle 11gR2
Edited: 3-August-2012
Prior to 11gR2, Oracle Clusterware provided an installation option to specify multiple locations for the OCR and vote disks. In Oracle 11gR2, ASM disk groups are used to store the OCR and vote disks. While we can rely on the mirroring capabilities of ASM disk groups, in Oracle 11gR2 we can still manually add OCR copies on additional ASM disk groups to provide redundancy and high availability. In this post we will look at adding OCR copies on new ASM disk groups and at moving the vote disks to a disk group with high redundancy.
Initially we have Oracle GI installed on a disk group DATA with external redundancy. We created a high-redundancy disk group dgdup1 with 5 disks and a disk group dgdup2 with external redundancy.
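A high-redundancy disk group with five failure groups, such as dgdup1, can be created from the ASM instance; a sketch assuming one disk per failure group with the /dev/oracleasm/disks paths used in this cluster, and hypothetical failure group names fg1–fg5:

```sql
-- Run as SYSASM in the ASM instance; failure group names are placeholders
CREATE DISKGROUP dgdup1 HIGH REDUNDANCY
  FAILGROUP fg1 DISK '/dev/oracleasm/disks/DISK11'
  FAILGROUP fg2 DISK '/dev/oracleasm/disks/DISK12'
  FAILGROUP fg3 DISK '/dev/oracleasm/disks/DISK13'
  FAILGROUP fg4 DISK '/dev/oracleasm/disks/DISK14'
  FAILGROUP fg5 DISK '/dev/oracleasm/disks/DISK15';
```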
Make sure that Oracle GI is running
[root@raclinux2 ~]# cd /u01/app/11.2.0.2/grid
[root@raclinux2 grid]# cd bin
[root@raclinux2 bin]# ./crsctl check cluster -all
**************************************************************
raclinux1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
raclinux2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@raclinux2 bin]#
Vote disks
Let’s move the vote disks to a high-redundancy disk group.
[root@raclinux2 bin]# ./crsctl query css votedisk
##  STATE    File Universal Id                 File Name                      Disk group
--  -----    -----------------                 ---------                      ---------
 1. ONLINE   7b7f9d9ae9484f2cbf7e62c164aa221b  (/dev/oracleasm/disks/DISK1)   [DATA]
Located 1 voting disk(s).
[root@raclinux2 bin]# ./crsctl replace votedisk +dgdup1
Successful addition of voting disk 786515d3e2cc4fa7bfc5fd0b6e87cfeb.
Successful addition of voting disk afe5c0e3da484f62bfcc36b1c0eb4aa4.
Successful addition of voting disk 1ec83edcf92b4f55bfdc9711c49e0ddd.
Successful addition of voting disk 97278bb3f36f4f84bf5965682efe87d2.
Successful addition of voting disk 46021fd53e394fb8bf0f476f1fa210dc.
Successful deletion of voting disk 7b7f9d9ae9484f2cbf7e62c164aa221b.
Successfully replaced voting disk group with +dgdup1.
CRS-4266: Voting file(s) successfully replaced
[root@raclinux2 bin]# ./crsctl query css votedisk
##  STATE    File Universal Id                 File Name                      Disk group
--  -----    -----------------                 ---------                      ---------
 1. ONLINE   786515d3e2cc4fa7bfc5fd0b6e87cfeb  (/dev/oracleasm/disks/DISK11)  [DGDUP1]
 2. ONLINE   afe5c0e3da484f62bfcc36b1c0eb4aa4  (/dev/oracleasm/disks/DISK12)  [DGDUP1]
 3. ONLINE   1ec83edcf92b4f55bfdc9711c49e0ddd  (/dev/oracleasm/disks/DISK13)  [DGDUP1]
 4. ONLINE   97278bb3f36f4f84bf5965682efe87d2  (/dev/oracleasm/disks/DISK14)  [DGDUP1]
 5. ONLINE   46021fd53e394fb8bf0f476f1fa210dc  (/dev/oracleasm/disks/DISK15)  [DGDUP1]
Located 5 voting disk(s).
[root@raclinux2 bin]#
[root@raclinux2 bin]# ./crsctl add css votedisk +data
CRS-4671: This command is not supported for ASM diskgroups.
CRS-4000: Command Add failed, or completed with errors.
[root@raclinux2 bin]# ./crsctl add css votedisk /u03/vote_acfs.dsk
CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM
[root@raclinux2 bin]#
As seen above, we can move the vote disks to the high-redundancy disk group dgdup1 and benefit from the high redundancy of the group. We cannot use 'crsctl add css votedisk' to add a vote disk on an ASM disk group or an ACFS file system.
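As a quick sanity check on a saved transcript, the number of ONLINE vote files can be counted with grep; a small sketch against a captured copy of the query output above (the file name votedisk.txt is hypothetical):

```shell
# Save a capture of "crsctl query css votedisk" and count the ONLINE vote files;
# a high-redundancy disk group should show five.
cat > votedisk.txt <<'EOF'
 1. ONLINE 786515d3e2cc4fa7bfc5fd0b6e87cfeb (/dev/oracleasm/disks/DISK11) [DGDUP1]
 2. ONLINE afe5c0e3da484f62bfcc36b1c0eb4aa4 (/dev/oracleasm/disks/DISK12) [DGDUP1]
 3. ONLINE 1ec83edcf92b4f55bfdc9711c49e0ddd (/dev/oracleasm/disks/DISK13) [DGDUP1]
 4. ONLINE 97278bb3f36f4f84bf5965682efe87d2 (/dev/oracleasm/disks/DISK14) [DGDUP1]
 5. ONLINE 46021fd53e394fb8bf0f476f1fa210dc (/dev/oracleasm/disks/DISK15) [DGDUP1]
EOF
grep -c ' ONLINE ' votedisk.txt   # prints 5, matching "Located 5 voting disk(s)."
```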
OCR disks
While we can benefit from the redundancy provided by the disk group storing the OCR, we can still manually add OCR copies in different disk groups.
[root@raclinux2 bin]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device +dgdup2
ocrconfig_loc=+DATA
local_only=false
[root@raclinux2 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3380
         Available space (kbytes) :     258740
         ID                       : 1332773503
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
[root@raclinux2 bin]#
[root@raclinux2 bin]# ./ocrconfig -h
Name:
        ocrconfig - Configuration tool for Oracle Cluster/Local Registry.
Synopsis:
        ocrconfig [option]
        option:
                [-local] -export <filename>         - Export OCR/OLR contents to a file
                [-local] -import <filename>         - Import OCR/OLR contents from a file
                [-local] -upgrade [<user> [<group>]]
                                                    - Upgrade OCR from previous version
                -downgrade [-version <version string>]
                                                    - Downgrade OCR to the specified version
                [-local] -backuploc <dirname>       - Configure OCR/OLR backup location
                [-local] -showbackup [auto|manual]  - Show OCR/OLR backup information
                [-local] -manualbackup              - Perform OCR/OLR backup
                [-local] -restore <filename>        - Restore OCR/OLR from physical backup
                -replace <filename1> -replacement <filename2>
                                                    - Replace a OCR device/file <filename1> with <filename2>
                -add <filename>                     - Add a new OCR device/file
                -delete <filename>                  - Remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair -add <filename> | -delete <filename> | -replace <filename1> -replacement <filename2>
                                                    - Repair OCR configuration on the local node
                -help                               - Print out this help information
Note:
        * A log file will be created in
          $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
          you have file creation privileges in the above directory before
          running this tool.
        * Only -local -showbackup [manual] is supported.
        * Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry.
[root@raclinux2 bin]#
[root@raclinux2 bin]# ./ocrconfig -add +dgdup2
[root@raclinux2 bin]# ./ocrconfig -add +dgdup1
[root@raclinux2 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3380
         Available space (kbytes) :     258740
         ID                       : 1332773503
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         :    +dgdup2
                                    Device/File integrity check succeeded
         Device/File Name         :    +dgdup1
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
[root@raclinux2 bin]#
[root@raclinux2 bin]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device +dgdup1
ocrconfig_loc=+DATA
ocrmirrorconfig_loc=+dgdup2
ocrconfig_loc3=+dgdup1
local_only=false
[root@raclinux2 bin]#
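If the extra OCR locations ever need to be backed out, ocrconfig -delete reverses -add; a sketch run as root from the same Grid home (not executed in the session above):

```shell
# Remove the manually added OCR locations and re-check the registry
# (requires a running cluster; sketch only)
./ocrconfig -delete +dgdup1
./ocrconfig -delete +dgdup2
./ocrcheck
```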
Summary:
While we benefit from the redundancy a disk group provides, we can also add OCR copies in different disk groups. We can move the vote disks to a disk group with high redundancy.
To be precise, vote disks in ASM have the following specifics.
No file in ASM spans more than one disk group, and that includes the vote file backing the vote disks. The difference is that a vote file is not mirrored and striped like other files (data, redo, control, OCR, etc.); instead it has a fixed assignment to a failure group within the disk group. A vote disk can also be accessed even when ASM is not up, whereas any other ASM file cannot be accessed unless ASM is up and running.
You cannot mix vote files on ASM with vote files on non-ASM storage. You can either dedicate one ASM disk group to all the cluster vote files or keep them on non-ASM storage, but never both, except during the initial migration to 11.2 when a previous version kept them on non-ASM storage.
For vote files you need a specific number of failure groups in the disk group, as indicated below:
- For an external redundancy disk group you need 1 failure group for 1 vote disk file
- For a normal redundancy disk group you need 3 failure groups for 3 vote disk files
- For a high redundancy disk group you need 5 failure groups for 5 vote disk files