Guenadi N Jilevski's Oracle BLOG

Oracle RAC, DG, EBS, DR and HA DBA BLOG

RAC enabling Oracle EBS R12 12.1.1

Having EBS operational on Oracle 11g RAC gives the database the ability to scale out as it grows, and provides the resilience and high availability at the database tier that are mandatory to support today's 24x7 business demands. Oracle Applications Release 12 has numerous configuration options that can be chosen to suit particular business scenarios, uptime requirements, hardware capability, and availability requirements. This section describes how to migrate Oracle Applications Release 12.1.1 running on a single database instance to an Oracle Real Application Clusters (Oracle RAC) environment running Oracle Database 11g Release 1 (11.1.0.7) with Automatic Storage Management (ASM). The most current version of this procedure can be obtained from My Oracle Support (formerly Oracle MetaLink) Knowledge Document 466649.1.

Cluster Terminology Overview

Let’s briefly refresh the key terminology used in a cluster environment.

  • Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional datafile when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.
  • Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The crs process manages designated cluster resources, such as databases, services, and listeners.
  • Parallel Concurrent Processing (PCP) is an extension of the Concurrent Processing architecture. PCP allows concurrent processing activities to be distributed across multiple nodes in an Oracle RAC environment, maximizing throughput and providing resilience to node failure.
  • Oracle Real Application Clusters (Oracle RAC) is an Oracle database technology that allows multiple machines to work on the same data in parallel, reducing processing time significantly. An Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.

The naming conventions used for Oracle Applications (EBS) in this section are as follows.

Convention Meaning
Application tier Machines (nodes) running Forms, Web, and other services (servers). Sometimes called middle tier.
Database tier Machines (nodes) running Oracle Applications database.
oracle User account that owns the database file system (database ORACLE_HOME and files).
applmgr User account that owns the application file system.
CONTEXT_NAME The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is <SID>_<hostname>.
CONTEXT_FILE Full path to the Applications context file on the application tier or database tier. The default locations are as follows.
Application tier context file:
<INST_TOP>/appl/admin/<CONTEXT_NAME>.xml
Database tier context file:
<RDBMS ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml
APPSpwd EBS database APPS user password.
Configuration Prerequisites

In order to install and RAC-enable EBS, we need to make sure that the prerequisites for installing EBS and for configuring RAC are met. The basic prerequisites for using Oracle RAC with Oracle Applications Release 12.1 are:

  • If you do not already have an existing single instance environment, perform an installation of Oracle Applications (EBS) with Rapid Install, or apply the Oracle E-Business Suite Release 12.1.1 Maintenance Pack (patch 7303030, also delivered by the Release 12.1.1 Rapid Install). Installation of EBS R12 12.1.1 was illustrated in the section Installing EBS 12.1.1. Upgrading to 12.1.1 is out of the scope of this book.
  • Set up the required cluster hardware and interconnect medium, and install Oracle 11g R1 11.1.0.7 CRS, ASM and RDBMS as per chapters 3 and 9, together with the required interoperability patches described in the section ‘Upgrading an EBS 12 with the latest release of Oracle 11gR1 RDBMS’ or My Oracle Support (MetaLink) Note 802875.1. We will have the following Oracle homes after completion of the task.
ORACLE_HOME Purpose Location
Rapid Install Database ORACLE_HOME Database ORACLE_HOME installed by the Oracle Applications Release 12 rapidwiz. /u01/oracle/VIS/db/tech_st/11.1.0
Database 11g ORACLE_HOME Database ORACLE_HOME installed for the Oracle 11g RAC database. /u01/app/oracle/product/11.1.0/db_2
Database 11g CRS ORACLE_HOME ORACLE_HOME installed for 11g Clusterware (formerly Cluster Ready Services). /u01/crs/oracle/product/11.1.0/crs_1
Database 11g ASM ORACLE_HOME ORACLE_HOME used for the ASM instances. /u01/app/oracle/product/11.1.0/db_1
OracleAS 10.1.2 ORACLE_HOME ORACLE_HOME installed on the application tier for Forms and Reports by rapidwiz. $INST_TOP/tech_st/10.1.2
OracleAS 10.1.3 ORACLE_HOME ORACLE_HOME installed on the application tier for the HTTP server by rapidwiz. $INST_TOP/tech_st/10.1.3

As a refresher from chapters 3 and 9, the Oracle homes for CRS, ASM and RDBMS must be installed with Oracle 11g R1 and patched to 11.1.0.7 (patch 6890831). For the ASM and RDBMS Oracle homes, install Oracle Database 11g Products from the 11g Examples CD after the Oracle 11g R1 installation, but prior to applying the 11.1.0.7 patch set (patch 6890831).

After the successful installation of Oracle 11g R1 CRS and ASM, the output of ./crs_stat -t -v should display the following.

Name           Type           Target    State     Host

————————————————————

ora….SM1.asm application    ONLINE    ONLINE    raclinux1

ora….X1.lsnr application    ONLINE    ONLINE    raclinux1

ora….ux1.gsd application    ONLINE    ONLINE    raclinux1

ora….ux1.ons application    ONLINE    ONLINE    raclinux1

ora….ux1.vip application    ONLINE    ONLINE    raclinux1

ora….SM2.asm application    ONLINE    ONLINE    raclinux2

ora….X2.lsnr application    ONLINE    ONLINE    raclinux2

ora….ux2.gsd application    ONLINE    ONLINE    raclinux2

ora….ux2.ons application    ONLINE    ONLINE    raclinux2

ora….ux2.vip application    ONLINE    ONLINE    raclinux2

The output of ./crsctl check crs should also confirm that all Clusterware daemons are healthy.
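As a rough reference (a representative sample, not captured from this system), a healthy Oracle Clusterware 11.1 stack typically reports output similar to:

Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy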

This confirms that CRS and ASM are installed and running on the two nodes of the cluster, raclinux1 and raclinux2. As we can see, there are two ASM instances and two ASM listeners, one of each per node. The resources are as follows.

  • ora.raclinux1.+ASM1.asm on node raclinux1
  • ora.raclinux2.+ASM2.asm on node raclinux2
  • ora.raclinux1.LISTENER_RACLINUX1.lsnr on node raclinux1
  • ora.raclinux2.LISTENER_RACLINUX2.lsnr on node raclinux2

There are two disk groups created and mounted in ASM: DATA with 400 GB and FLASH with 200 GB. Now we can start RAC-enabling EBS, as described in the following section.
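As a quick sanity check (a minimal sketch; the disk group names assume the DATA and FLASH groups described above), the disk groups can be verified from one of the ASM instances:

$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;

Both DATA and FLASH should be reported in the MOUNTED state.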

Running rconfig to move to ASM and RAC-enable the database

So far we have a single instance database based EBS 12.1.1. We will use rconfig to move the database to ASM and to RAC-enable the VIS EBS database. We will also set up a flash recovery area (FRA) for the VIS database, using the parameters and commands described below.

After logging into the server as the oracle Linux user, go to the $ORACLE_HOME/assistants/rconfig/sampleXMLs directory. Make two copies of the template file ConvertToRAC.xml, named convert.xml and convert1.xml. Modify the content of convert.xml and convert1.xml identically, with the exception of the verify attribute of the Convert element: in convert.xml specify Convert verify="ONLY", whereas in convert1.xml specify Convert verify="YES". The possible values are "ONLY", "YES" and "NO". The rconfig utility is used along with the XML file to perform the following activities.

  • Migrate the database to ASM storage (only if ASM is specified as storage option in the configuration XML file).
  • Create database instances on all nodes in the cluster.
  • Configure listener and Net Service entries.
  • Configure and register CRS resources.
  • Start the instances on all nodes in the cluster.

Please note that the value "ONLY" performs a validation of the parameters and identifies any problems that need to be corrected prior to the actual conversion, but does not perform the conversion after completing the prerequisite checks. With Convert verify="YES", rconfig performs checks to ensure that the prerequisites for single-instance to Oracle RAC conversion have been met before it starts the conversion. With Convert verify="NO", rconfig does not perform prerequisite checks and starts the conversion immediately. The content of convert.xml is displayed below; convert1.xml differs only in specifying Convert verify="YES". In both files we specify the following information.

  • source pre-conversion EBS RDBMS Oracle home of non-RAC database –  /u01/oracle/VIS/db/tech_st/11.1.0
  • destination post-conversion EBS RDBMS Oracle home of the RAC database – /u01/app/oracle/product/11.1.0/db_2
  • SID for non-RAC database and credentials – VIS
  • list of nodes that should have RAC instances running – raclinux1, raclinux2
  • instance prefix – VIS
  • storage type – ASM in our case.
  • ASM disk groups for Oracle data file and FRA – DATA and FLASH.

The exact content of convert.xml is as follows.

<?xml version="1.0" encoding="UTF-8" ?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
    <n:Convert verify="ONLY">
      <!-- Specify current OracleHome of non-rac database for SourceDBHome -->
      <n:SourceDBHome>/u01/oracle/VIS/db/tech_st/11.1.0</n:SourceDBHome>
      <!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->
      <n:TargetDBHome>/u01/app/oracle/product/11.1.0/db_2</n:TargetDBHome>
      <!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->
      <n:SourceDBInfo SID="VIS">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>sys1</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <!-- ASMInfo element is required only if the current non-rac database uses ASM Storage -->
      <n:ASMInfo SID="+ASM1">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>sys1</n:Password>
          <n:Role>sysasm</n:Role>
        </n:Credentials>
      </n:ASMInfo>
      <!-- Specify the list of nodes that should have rac instances running. LocalNode should be the first node in this nodelist. -->
      <n:NodeList>
        <n:Node name="raclinux1.gj.com" />
        <n:Node name="raclinux2.gj.com" />
      </n:NodeList>
      <!-- Specify prefix for rac instances. It can be same as the instance name for non-rac database or different. The instance number will be attached to this prefix. -->
      <n:InstancePrefix>VIS</n:InstancePrefix>
      <!-- Specify port for the listener to be configured for rac database. If port="", a listener existing on localhost will be used for rac database. The listener will be extended to all nodes in the nodelist. -->
      <n:Listener port="" />
      <!-- Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. -->
      <n:SharedStorage type="ASM">
        <!-- Specify Database Area Location to be configured for rac database. If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->
        <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
        <!-- Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->
        <n:TargetFlashRecoveryArea>+FLASH</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>

We first use the rconfig utility with the convert.xml file (verify="ONLY") to validate that the prerequisites for the conversion are met.
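As a hedged sketch (the path of the edited XML file is an assumption; we copied it into the sampleXMLs directory of the new RAC Oracle home), the verification run is invoked from the new Oracle home and, if all checks pass, typically ends with an "Operation Succeeded" result:

$ cd /u01/app/oracle/product/11.1.0/db_2/bin
$ ./rconfig /u01/app/oracle/product/11.1.0/db_2/assistants/rconfig/sampleXMLs/convert.xml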

If you wish to specify a NEW_ORACLE_HOME, as in our case for the freshly installed Oracle 11g 11.1.0.7 Oracle home, start the database from the new Oracle home using the command

SQL>startup pfile=<NEW_ORACLE_HOME>/dbs/init<SID>.ora

Shut down the database. Create a spfile from the pfile using the command

SQL>create spfile from pfile;

Move the $ORACLE_HOME/dbs/spfile<SID>.ora for this instance to the shared location. Take a backup of the existing $ORACLE_HOME/dbs/init<SID>.ora and create a new $ORACLE_HOME/dbs/init<SID>.ora containing only the parameter spfile=<shared location>/spfile<SID>.ora. Start up the instance. Using netca, create local and remote listener tnsnames.ora aliases for the database instances. Use LISTENER_VIS1 and LISTENER_VIS2 as the alias names for the local listeners, and LISTENERS_VIS for the remote listener alias. Execute netca from $ORACLE_HOME/bin.

  • Choose “Cluster Configuration” option in the netca assistant.
  • Choose the current nodename from the nodes list.
  • Choose “Local Net Service Name Configuration” option and click Next.
  • Select “Add” and in next screen enter the service name and click Next.
  • Enter the current node as Server Name and the port 1521 defined during the ASM listener creation.
  • Select “Do not perform Test” and click Next.
  • Enter the listener TNS alias name, such as LISTENER_VIS1 for the local listener.
  • Repeat the above steps for remote listener, with the server name as the secondary node and the listener name LISTENERS_VIS.

Note: Ensure that local and remote aliases are created on all nodes in the cluster.
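As an illustration (a hedged sketch; the VIP hostnames and port are assumptions based on the node names used in this chapter), the resulting aliases in $TNS_ADMIN/tnsnames.ora would look roughly like:

LISTENER_VIS1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip.gj.com)(PORT = 1521))

LISTENER_VIS2 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip.gj.com)(PORT = 1521))

LISTENERS_VIS =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip.gj.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip.gj.com)(PORT = 1521))
  )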

After making sure that the parameters are valid and that no problems were identified, we start the actual conversion using rconfig with the convert1.xml file.
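As with the verification run, the conversion itself is a single command (a hedged sketch; the XML file path is an assumption). rconfig migrates the datafiles into ASM, creates the VIS1 and VIS2 instances on the two nodes and registers them with CRS, which can take a considerable amount of time for a large database:

$ cd /u01/app/oracle/product/11.1.0/db_2/bin
$ ./rconfig /u01/app/oracle/product/11.1.0/db_2/assistants/rconfig/sampleXMLs/convert1.xml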

After the VIS database has been converted from a single instance database residing on a file system to a RAC database residing on ASM across the raclinux1 and raclinux2 cluster nodes, we can validate the conversion by looking at the running services shown in the ./crs_stat -t -v command output.

Running AutoConfig

EBS is a complex application that can be deployed in a single-node or multi-node configuration. An instrumental component of the EBS architecture is the context file, an XML repository of all parameters for the EBS configuration. By changing context file parameters we reconfigure EBS. Note that as of EBS 12.1.1, Oracle Applications Manager (OAM) is the only approved method for changing the context file. When AutoConfig is run, the configuration changes stored in the context file are applied to the application. Thus, the management framework based on the context file and the other ad utilities greatly facilitates the configuration and management of EBS: there is no need for the EBS DBA to change parameters in many locations; instead, the context file is changed and AutoConfig is run. There are context files for both the database tier and the application tier, so each tier of a multi-node configuration can be managed and configured independently. In the context of this chapter on RAC-enabling EBS, we will cover the creation of the context file in the new Oracle 11g 11.1.0.7 Oracle home and making EBS aware of the changes. We will also change the context file of the application tier so that the application tier can communicate with the new database tier. We will cover the detailed steps involved in updating the context files and propagating the changes across the application so that they take effect. The ad management utilities based on the context files generate many additional configuration files that are used by both the EBS database and application tiers.

As we have installed new Oracle 11g homes for ASM and RDBMS, we need to make EBS aware of them. This is achieved by running AutoConfig. Running AutoConfig on the database tier is required in the following scenarios.

  • After migrating a patch to the database tier, the Check Config utility reports any potential changes to the templates.
  • After customizations on the database tier
  • After a database or application tier upgrade
  • After restoration of the database or Oracle Home from a backup tape
  • After a JDK upgrade on the database tier
  • After the Net Services Topology Information is manually cleaned up using one of the supported procedures (e.g. fnd_conc_clone.setup_clean). Subsequently, AutoConfig must be run on the application tier nodes.
  • After registration of a RAC node.
  • After setting up the APPL_TOP on a shared file system.
  • All other cases where documentation says that AutoConfig should be run on the database tier.

Now we are going to enable AutoConfig on the database tier for the new Oracle 11g 11.1.0.7 home, as this is a freshly installed Oracle home. Complete the steps in this section (in the order listed) to migrate to AutoConfig on the database tier.

Copy AutoConfig to the new RDBMS ORACLE_HOME for Oracle 11g R1 11.1.0.7

Ensure that you have applied any patches listed in the pre-requisites section above. Update the RDBMS ORACLE_HOME file system with the AutoConfig files by performing the following steps:

  • On the Application Tier (as the applmgr user):

Log in to the APPL_TOP environment (source the environment file)

Create the appsutil.zip file:
perl <AD_TOP>/bin/admkappsutil.pl

This will create appsutil.zip in $INST_TOP/admin/out.

  • On the Database Tier (as the ORACLE user):

Copy or FTP the appsutil.zip file to the <RDBMS ORACLE_HOME>:

cd <RDBMS ORACLE_HOME>
unzip -o appsutil.zip

  • Copy the jre directory from <source RDBMS ORACLE_HOME>/appsutil to <11g NEW_ORACLE_HOME>/appsutil.
  • Create a <CONTEXT_NAME> directory under $ORACLE_HOME/network/admin. Use the new instance name while creating the context directory, appending the instance number to the instance prefix that you put in the rconfig XML file. For example, if your database name is VIS and you use "VIS" as the instance prefix, create the context directory as VIS1_<hostname> or VIS2_<hostname>, where hostname is raclinux1 or raclinux2 respectively.
  • Set the required Oracle environment variables (such as ORACLE_HOME, ORACLE_SID, LD_LIBRARY_PATH, PATH and TNS_ADMIN) in the oracle user's .bash_profile.
  • De-register the current configuration using the Apps schema package FND_CONC_CLONE.SETUP_CLEAN, by executing the command SQL>exec fnd_conc_clone.setup_clean; while logged into the database as the APPS user.
  • Copy the tnsnames.ora file from $ORACLE_HOME/network/admin to $TNS_ADMIN/tnsnames.ora and edit it to change the aliases for SID=<new Oracle RAC instance name>.
  • To preserve the ASM TNS aliases (LISTENERS_<SID> and LISTENER_<hostname>), create a file <CONTEXT_NAME>_ifile.ora under $TNS_ADMIN, and copy those entries to that file.
  • Create the listener.ora as per the sample file in Appendix 1. Change the instance name and Oracle home to match this environment.
  • Start the listener.
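As a minimal sketch of what the Appendix 1 sample conveys (the listener name, VIP hostname and port here are assumptions), an instance-specific listener definition on raclinux1 would look roughly like:

VIS1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip.gj.com)(PORT = 1521))
    )
  )

SID_LIST_VIS1 =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_2)
      (SID_NAME = VIS1)
    )
  )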
Generate your Database Context File

From the 11g ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the command

  • On UNIX
    cd <RDBMS ORACLE_HOME>
    . <CONTEXT_NAME>.env
    cd <RDBMS 11g ORACLE_HOME>/appsutil/bin
    perl adbldxml.pl tier=db appsuser=<APPS user>
Note that adbldxml.pl uses your current environment settings to generate the context file. Therefore ensure that your environment is correctly sourced.
Note that if you build the context file for an EBS instance that runs on RAC, all RAC instances have to be up and running while executing the adbldxml utility, as it connects to all RAC instances to gather information about the configuration.
Prepare for AutoConfig by completing the following AutoConfig steps.

  • Set the value of s_virtual_host_name to point to the virtual hostname (VIP alias) for the database host, by editing the database context file $ORACLE_HOME/appsutil/<SID>_<hostname>.xml.
  • Rename $ORACLE_HOME/dbs/init<SID>.ora to a new name (e.g. init<SID>.ora.old) in order to allow AutoConfig to regenerate the file using the Oracle RAC specific parameters.
  • Ensure that the following context variable parameters are correctly specified.
  • s_jdktop=<11g ORACLE_HOME_PATH>/appsutil/jre
    s_jretop=<11g ORACLE_HOME_PATH>/appsutil/jre
    s_adjvaprg=<11g ORACLE_HOME_PATH>/appsutil/jre/bin/java
  • Review Prior Manual Configuration Changes.
    The database context file may not include manual post-install configuration changes made after the Rapid Install completed. Before running the AutoConfig portion of this patch, review any modifications to specific configuration files and reconcile them with the database context file.
Note: Prior modifications include any changes made to configuration files as instructed in patch READMEs or other accompanying documents.
Generate and Apply AutoConfig Configuration files
Note that this step performs the conversion to AutoConfig. Once completed, the previous configuration will not be available.
Note that the database server and the database listener must remain available during the AutoConfig run. All the other database tier services should be shut down.

Execute the following commands on Linux/UNIX.
cd <RDBMS ORACLE_HOME>/appsutil/bin
perl adconfig.pl contextfile=<CONTEXT_FILE>

Warning: Running AutoConfig on the database node will update the RDBMS network listener file. Be sure to review the configuration changes from step ‘Prepare for AutoConfig by completing the following AutoConfig steps’. The new AutoConfig network listener file supports the use of IFILE to allow for values to be customized or added as needed.

Note: Running AutoConfig on the database tier will NOT overwrite any existing init.ora file in the <RDBMS ORACLE_HOME>/dbs directory. If no init.ora file exists in your instance, AutoConfig will generate an init.ora file in the <RDBMS ORACLE_HOME>/dbs directory for you.

Note: Running AutoConfig might change your existing environment files. After running AutoConfig, you should always set the environment before you run any Applications utilities in order to apply the changed environment variables.

Check the AutoConfig log file located in <11g ORACLE_HOME>/appsutil/log/<CONTEXT_NAME>/<MMDDhhmm>.

If ASM/OCFS is being used, note down the new location of the control file.

sqlplus / as sysdba;

SQL> show parameters control_files

Perform all of the above steps starting from sub section ‘Copy AutoConfig to the new RDBMS ORACLE_HOME for Oracle 11g R1 11.1.0.7’ on all other database nodes in the cluster.

Execute AutoConfig on all database nodes in the cluster

Execute AutoConfig on all database nodes in the cluster by running the command $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh. Then shut down the instances and listeners.

Init file, tnsnames and listener file activities

Edit the $ORACLE_HOME/dbs/<CONTEXT_NAME>_APPS_BASE.ora file on all nodes. If ASM is being used, change the control_files parameter to the new control file location noted above: control_files = <ASM control file location>

Create a spfile from the pfile on all nodes as follows:

  • Create an spfile from the pfile, and then create a pfile in a temporary location from the new spfile, with commands as shown in the following example:
  • SQL>create spfile='<shared location>/spfile<SID>.ora' from pfile;
  • SQL>create pfile='/tmp/init<SID>.ora' from spfile='<shared location>/spfile<SID>.ora';
  • Repeat this step on all nodes.
  • Combine the initialization parameter files for all instances into one init<dbname>.ora file by copying all existing shared contents. All shared parameters defined in your init<dbname>.ora file must be global, with the format *.parameter=value
  • Modify all instance-specific parameter definitions in init<SID>.ora files using the following syntax, where the variable <SID> is the system identifier of the instance: <SID>.parameter=value

Note: Ensure that the parameters LOCAL_LISTENER, diagnostic_dest, undo_tablespace, thread, instance_number and instance_name are in <SID>.parameter format; for example, <SID>.LOCAL_LISTENER=<local listener alias>. These parameters must have one entry per instance.
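A minimal sketch of the combined initVIS.ora layout (the undo tablespace names are assumptions; the listener aliases follow the naming used earlier in this section):

*.db_name='VIS'
*.remote_listener='LISTENERS_VIS'
VIS1.instance_name='VIS1'
VIS1.instance_number=1
VIS1.thread=1
VIS1.undo_tablespace='UNDOTBS1'
VIS1.local_listener='LISTENER_VIS1'
VIS2.instance_name='VIS2'
VIS2.instance_number=2
VIS2.thread=2
VIS2.undo_tablespace='UNDOTBS2'
VIS2.local_listener='LISTENER_VIS2'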

  • Create the spfile in the shared location where rconfig created it, from the combined init<dbname>.ora pfile above.

SQL>create spfile='<shared location>/spfile<dbname>.ora' from pfile;

Ensure that listener.ora and tnsnames.ora are generated as per the format shown in Appendix 1.

As AutoConfig creates the listener.ora and tnsnames.ora files in a context directory, and not in the $ORACLE_HOME/network/admin directory, the TNS_ADMIN path must be updated in CRS. Run the following command as the root user:

# srvctl setenv nodeapps -n <node> \
-t TNS_ADMIN=<RDBMS ORACLE_HOME>/network/admin/<CONTEXT_NAME>
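For example (a hedged illustration; the context directory names assume the VIS1/VIS2 instance prefix and the raclinux1/raclinux2 hostnames used in this chapter):

# srvctl setenv nodeapps -n raclinux1 \
-t TNS_ADMIN=/u01/app/oracle/product/11.1.0/db_2/network/admin/VIS1_raclinux1

# srvctl setenv nodeapps -n raclinux2 \
-t TNS_ADMIN=/u01/app/oracle/product/11.1.0/db_2/network/admin/VIS2_raclinux2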

Start up the database instances and listeners on all nodes.

Run AutoConfig on all nodes to ensure each instance registers with all remote listeners.

Shut down and restart the database instances and listeners on all nodes.

De-register any old listeners and register the new listeners with CRS using the commands:

# srvctl remove listener -n <node_name> -l <old listener name>

# srvctl add listener -n <node_name> -o <11g ORACLE_HOME> -l <new listener name>
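For example, on raclinux1 (a hedged illustration; it assumes a hypothetical old listener name LISTENER_VIS and that the AutoConfig-generated listener is named after the instance, VIS1):

# srvctl remove listener -n raclinux1 -l LISTENER_VIS
# srvctl add listener -n raclinux1 -o /u01/app/oracle/product/11.1.0/db_2 -l VIS1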

Establish Applications Environment for Oracle RAC
Preparatory Steps

The steps below are important to make the application tier aware of the new database tier. Carry out the following steps on all application tier nodes.

  • Source the Oracle Applications environment.
  • Edit SID=<Instance 1> and PORT=<New listener port> in the $TNS_ADMIN/tnsnames.ora file, to set up a connection to one of the instances in the Oracle RAC environment.
  • Confirm you are able to connect to one of the instances in the Oracle RAC environment.
  • Edit the context variable jdbc_url, adding the instance name to the connect_data parameter (see the example after this list).
  • Run AutoConfig using the command: $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<CONTEXT_NAME>.xml.
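A hedged example of the resulting JDBC connect descriptor in the applications context file (the VIP hostname is an assumption; the key point is the INSTANCE_NAME added to CONNECT_DATA):

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=raclinux1-vip.gj.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=VIS)(INSTANCE_NAME=VIS1)))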

For more information on AutoConfig, see My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12.

  • Check the AutoConfig log file under $INST_TOP/admin/log/<MMDDhhmm> for errors.
  • Source the environment by using the latest environment file generated.
  • Verify the tnsnames.ora and listener.ora files. Copies of both are located in the $INST_TOP/ora/10.1.2/network/admin directory and $INST_TOP/ora/10.1.3/network/admin directory. In these files, ensure that the correct TNS aliases have been generated for load balance and failover, and that all the aliases are defined using the virtual hostnames.
  • Verify the dbc file located at $FND_SECURE. Ensure that the parameter APPS_JDBC_URL is configured with all instances in the environment, and that load_balance is set to YES.
Set Up Load Balancing

Load balancing is important, as it enables you to distribute the load of the various EBS components across the RAC instances. Load balancing for the Oracle Applications database connections is implemented by following the steps outlined below.
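As an illustration of what AutoConfig generates (a hedged sketch; the VIP hostnames and port are assumptions), the <database_name>_balance alias in tnsnames.ora spreads connections across both instances:

VIS_BALANCE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = YES)
      (FAILOVER = YES)
      (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip.gj.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip.gj.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = VIS)
    )
  )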

  • Run the Context Editor (through the Oracle Applications Manager interface) and set the value of “Tools OH TWO_TASK” (s_tools_two_task), “iAS OH TWO_TASK” (s_weboh_twotask) and “Apps JDBC Connect Alias” (s_apps_jdbc_connect_alias).
  • To load balance the forms based applications database connections, set the value of “Tools OH TWO_TASK” to point to the <database_name>_balance alias generated in the tnsnames.ora file.
  • To load balance the self-service applications database connections, set the value of “iAS OH TWO_TASK” and “Apps JDBC Connect Alias” to point to the <database_name>_balance alias generated in the tnsnames.ora file.
  • Execute AutoConfig by running the command $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<CONTEXT_NAME>.xml.
  • Restart the Applications processes, using the new scripts generated by AutoConfig.
  • Ensure that the value of the profile option “Application Database ID” is set to the dbc file name generated in $FND_SECURE.

Note: If you are adding a new node to the application tier, repeat the above steps 1-6 for setting up load balancing on the new application tier node.

Configure Parallel Concurrent Processing

Parallel Concurrent Processing (PCP) is an extension of the Concurrent Processing architecture. PCP allows concurrent processing activities to be distributed across multiple nodes in an Oracle RAC environment, maximizing throughput and providing resilience and high availability in the event of node failure. User interactions with EBS data can be conducted via HTML-based applications or the more traditional forms-based applications. However, there are also reporting and data-updating programs that need to run either periodically or on an ad-hoc basis. These programs, which run in the background while users continue to work on other tasks, may require a large number of data-intensive computations and are run using the Concurrent Processing architecture. Concurrent Processing is an Oracle EBS feature that allows these non-interactive and potentially long-running functions to be executed efficiently alongside interactive operations. It uses operating system facilities to enable background scheduling of data- or resource-intensive jobs via a set of programs and forms. To ensure that resource-intensive concurrent processing operations do not interfere with interactive operations, they run on a specialized server, the Concurrent Processing server. It is worth evaluating whether to place it on the application or the database tier; in some cases running the Concurrent Processing server on the database tier improves performance.

Check prerequisites for setting up Parallel Concurrent Processing

To set up Parallel Concurrent Processing (PCP), you must have more than one Concurrent Processing node in your environment. If you do not have this, follow the appropriate instructions in My Oracle Support Knowledge Document 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone. A brief overview of the EBS cloning process is given in the next subsection.

Note: If you are planning to implement a shared application tier file system, refer to My Oracle Support Knowledge Document 384248.1, Sharing the Application Tier File System in Oracle E-Business Suite Release 12, for configuration steps. If you are adding a new Concurrent Processing node to the application tier, you will need to set up load balancing on the new node by repeating steps 1-6 in the section ‘Set Up Load Balancing’.

Cloning EBS concepts in brief

Cloning in EBS is a methodology that allows moving components of an existing EBS system to a different location, either on the same server or on a different server, without reinstalling EBS. Cloning is the process used to create a copy of an existing EBS system. There are various scenarios for cloning an EBS system.

  • Standard cloning – Making a copy of an existing Oracle Applications system, for example a copy of a production system to test updates.
  • System scale-up – Adding new machines to an Oracle Applications system to provide the capacity for processing an increased workload.
  • System transformations – Altering system data or file systems, including actions such as platform migration, data scrambling, and provisioning of high availability architectures.
  • Patching and upgrading – Delivering new versions of Applications components, and providing a mechanism for creating rolling environments to minimize downtime.

An important principle in EBS cloning is that the system is cloned, rather than the topology. Producing an exact copy of the patch level and data is much more important than creating an exact copy of the topology, as a cloned system must be able to provide the same output to the end user as the source system. However, while a cloned system need not have the full topology of its source, it must have available to it all the topology components that are available to the source. Cloning in EBS basically involves three steps: first, prepare the source system; second, copy the source system to the target system; and third, configure the target system. The cloning methodology also enables us to add a new node to an existing EBS system or to clone a RAC-enabled EBS.

Prepare the source system for cloning by executing the following commands while the database and applications are running.

  • Prepare the source system database tier for cloning
    Log on to the source system as the oracle user, and run the following commands.

$ cd <RDBMS ORACLE_HOME>/appsutil/scripts/<CONTEXT_NAME>
$ perl adpreclone.pl dbTier

  • Prepare the source system application tier for cloning
    Log on to the source system as the applmgr user, and run the following commands on each node that contains an APPL_TOP.

$ cd <INST_TOP>/admin/scripts
$ perl adpreclone.pl appsTier

Note: If new Rapid Clone or AutoConfig updates are applied to the system, adpreclone.pl must be executed again on the dbTier and on the appsTier in order to apply the new files into the clone directory structures that will be used during the cloning configuration stage.
Copy the application tier file system from the source EBS system to the target node by executing the following steps in the order listed. Ensure the application tier files copied to the target system are owned by the target applmgr user, and that the database node files are owned by the target oracle user.

Note: The tar command can be used to compress the directories into a temporary staging area. If you use this command, you may require the -h option to follow symbolic links, as following symbolic links is not the default behavior on all platforms. Consult the UNIX man page for the tar command.

  • Copy the application tier file system
    Log on to the source system application tier nodes as the applmgr user and shut down the application tier server processes. Copy the following application tier directories from the source node to the target application tier node:
  • <APPL_TOP>
  • <COMMON_TOP>
  • Applications Technology Stack  <OracleAS Tools ORACLE_HOME>
  • Applications Technology Stack  <OracleAS Web IAS_ORACLE_HOME>
  • Copy the database node file system
    Log on to the source system database node as the ORACLE user, and then:
  • Perform a normal shutdown of the source system database
  • Copy the database (.dbf) files from the source system to the target system
  • Copy the source database ORACLE_HOME to the target system
  • Start the source Applications system database and application tier processes

Configure the target system by running the following commands to configure the target system. You will be prompted for specific target system values such as SID, paths, and ports to name a few.

  • Configure the target system database server
    Log on to the target system as the oracle user and enter the following commands:

$ cd <RDBMS ORACLE_HOME>/appsutil/clone/bin
$ perl adcfgclone.pl dbTier

  • Configure the target system application tier server nodes
    Log on to the target system as the applmgr user and enter the following commands:

$ cd <COMMON_TOP>/clone/bin
$ perl adcfgclone.pl appsTier

Add a New Node to an Existing System. You can use Rapid Clone to clone a node and add it to the existing EBS system, a process also known as scale up or scale out. The new node can run the same services as the source node, or different services. Follow the instructions in the Application tier part of Cloning Tasks in Note 406982.1.

After adcfgclone.pl completes, source the EBS environment and run the following commands on the target system:

$ cd <COMMON_TOP>/clone/bin
$ perl adaddnode.pl

Note: After adding new nodes, refer to My Oracle Support Knowledge Document 380489.1 for details of how to set up load balancing.

Note: If SQL*Net Access security is enabled in the existing system, you first need to authorize the new node to access the database through SQL*Net. See the Oracle Applications Manager on line help for instructions on how to accomplish this.

Set Up PCP
  • Edit the applications context file via Oracle Applications Manager, and set the value of the variable APPLDCP to ON.
  • Execute AutoConfig by running the following command on all concurrent processing nodes: $INST_TOP/admin/scripts/adautocfg.sh.
  • Source the Applications environment.
  • Check the tnsnames.ora and listener.ora configuration files, located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent nodes.
  • Restart the Applications listener processes on each application tier node.
  • Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator Responsibility. Navigate to Install > Nodes screen, and ensure that each node in the cluster is registered.
  • Verify that the Internal Monitor for each node is defined properly, with correct primary and secondary node specification, and work shift details. For example, Internal Monitor: Host2 must have primary node as host2 and secondary node as host3. Also ensure that the Internal Monitor manager is activated: this can be done from Concurrent > Manager > Administrator.
  • Set the $APPLCSF environment variable on all the Concurrent Processing nodes to point to a log directory on a shared file system.
  • Set the $APPLPTMP environment variable on all the CP nodes to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. (This value should be pointing to a directory on a shared file system.)
  • Set profile option ‘Concurrent: PCP Instance Check’ to OFF if database instance-sensitive failover is not required. By setting it to ‘ON’, a concurrent manager will fail over to a secondary Application tier node if the database instance to which it is connected becomes unavailable for some reason.
Set Up Transaction Managers
  • Shut down the application services (servers) on all nodes.
  • Shut down all the database instances cleanly in the Oracle RAC environment, using the command SQL>shutdown immediate;.
  • Edit $ORACLE_HOME/dbs/<CONTEXT_NAME>_ifile.ora. Add the following parameters: _lm_global_posts=TRUE and _immediate_commit_propagation=TRUE.
  • Start the instances on all database nodes, one by one.
  • Start up the application services (servers) on all nodes.
  • Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option ‘Concurrent: TM Transport Type’ to ‘QUEUE’, and verify that the transaction manager works across the Oracle RAC instance.
  • Navigate to Concurrent > Manager > Define screen, and set up the primary and secondary node names for transaction managers.
  • Restart the concurrent managers.
  • If any of the transaction managers are in deactivated status, activate them from Concurrent > Manager > Administrator.
Set Up Load Balancing on Concurrent Processing Nodes
  • Edit the applications context file through the Oracle Applications Manager interface, and set the value of Concurrent Manager TWO_TASK (s_cp_twotask) to the load balancing alias (<service_name>_balance).
  • Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on all concurrent nodes.

December 13, 2009 - Posted by | oracle

5 Comments »

  1. Dear sir,

    Could you please help me with the cloning process? We have two R12.1.1 application nodes with shared file system and two 11g R2 db nodes with RAC.

    We have similar setup on our test instance as well. I have to clone from PROD to TEST now. Could you please help me with the steps to clone it?

    Regards,
    Mehmood

    Comment by Mehmood | May 27, 2010 | Reply

    • Hello,

      There are a few very useful notes from MOS (formerly MetaLink) that will help here.

      Please look at 559518.1 and 406982.1. If you have further questions do not hesitate to contact me.

      Wish you luck and please let me know how it goes.

      I am pasting the note as of 27-May-2010.

      Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone [ID 559518.1]

      ——————————————————————————–

      Modified 04-MAR-2010 Type WHITE PAPER Status PUBLISHED
      Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone
      Last Updated: Mar 4, 2010
      This document describes the process of using the Oracle Applications Rapid Clone utility to create a clone (copy) of an Oracle E-Business Suite Release 12 system that utilizes the Oracle Database 10g Real Application Clusters feature.

      The resulting duplicate Oracle Applications Release 12 RAC environment can then be used for purposes such as:

      •Patch testing
      •User Acceptance testing
      •Performance testing
      •Load testing
      •QA validation
      •Disaster recovery
      The most current version of this document can be obtained in OracleMetaLink Note 559518.1.

      There is a change log at the end of this document.

      In This Document
      •Section 1: Overview, Prerequisites and Restrictions
      •Section 2: Configuration Requirements for the Source RAC System
      •Section 3: Configuration requirements for the Target RAC System
      •Section 4: Prepare Source RAC System
      •Section 5: RAC-to-RAC Cloning
      •Section 6: Applications Tier Cloning for RAC
      •Section 7: Advanced RAC Cloning Scenarios
      •Appendix A: Configuring Oracle Clusterware on the Target System Database Nodes

      Note: At present, the procedures described in this document apply to UNIX and Linux platforms only, and are not suitable for Oracle Applications Release 12 RAC-enabled systems running on Windows.

      A number of conventions are used in describing the Oracle Applications architecture:

      Convention Meaning
      Application tier Machines (nodes) running Forms, Web, and other services (servers). Also called middle tier.
      Database tier Machines (nodes) running the Oracle Applications database.
      oracle User account that owns the database file system (database ORACLE_HOME and files).
      CONTEXT_NAME The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is [SID]_[hostname].
      CONTEXT_FILE Full path to the Applications context file on the application tier or database tier.
      APPSpwd Oracle Applications database user password.
      Source System Original Applications and database system that is to be duplicated.
      Target System New Applications and database system that is being created as a copy of the source system.
      ORACLE_HOME The top-level directory into which the database software has been installed.
      CRS_ORACLE_HOME The top-level directory into which the Cluster Ready Services (CRS) software has been installed.
      ASM_ORACLE_HOME The top-level directory into which the Automatic Storage Management (ASM) software has been installed.
      RMAN Oracle’s Recovery Manager utility, which ships with the 10g Database.
      Image The RMAN proprietary-format files from the source system backup.
      Monospace Text Represents command line text. Type such a command exactly as shown.
      [ ] Text enclosed in square brackets represents a variable. Substitute a value for the variable text. Do not type the square brackets.
      \ On UNIX, the backslash character is entered at the end of a command line to indicate continuation of the command on the next line.

      Section 1: Overview, Prerequisites and Restrictions
      1.1 Overview
      Converting Oracle E-Business Suite Release 12 from a single instance database to a multi-node Oracle Real Application Clusters (Oracle RAC) enabled database (described in OracleMetalink Note 388577.1) is a complex and time-consuming process. It is therefore common for many sites to maintain only a single system in which Oracle RAC is enabled with the E-Business Suite environment. Typically, this will be the main production system. In many large enterprises, however, there is often a need to maintain two or more Oracle RAC-enabled environments that are exact copies (or clones) of each other. This may be needed, for example, when undertaking specialized development, testing patches, working with Oracle Global Support Services, and other scenarios. It is not advisable to carry out such tasks on a live production system, even if it is the only environment enabled to use Oracle Real Application Clusters.

      The goal of this document (and the patches mentioned herein) is to provide a rapid, clear-cut, and easily achievable method of cloning an Oracle RAC enabled E-Business Suite Release 12 environment to a new set of machines on which a duplicate RAC enabled E-Business Suite system is to be deployed.

      This process will be referred to as RAC-To-RAC cloning from here on.

      1.1.2 Cluster Terminology
      You should understand the terminology used in a cluster environment. Key terms include the following.

      •Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional datafile when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.

      •Oracle Cluster File System (OCFS2) is a general purpose cluster file system which can, for example, be used to store Oracle database files on a shared disk.

      •Certified Network File Systems is an Oracle-certified network attached storage (NAS) filer: such products are available from EMC, HP, NetApp, and other vendors. See the Oracle Release 10g Real Application Clusters installation and user guides for details on supported NAS devices and certified cluster file systems.

      •Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The crs process manages designated cluster resources, such as databases, instances, services, and listeners.

      •Oracle Real Application Clusters (Oracle RAC) is a database feature that allows multiple machines to work on the same data in parallel, reducing processing time. Of equal or greater significance, depending on the specific need, an Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.
      1.3 Prerequisites
      •This document is only for use in RAC-To-RAC cloning of a source Oracle E-Business Suite Release 12 RAC System to a target Oracle E-Business Suite RAC System.
      •The steps described in this note are for use by accomplished Applications and Database Administrators, who should be:
      ◦Familiar with the principles of cloning an Oracle E-Business Suite system, as described in OracleMetaLink Note 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone.
      ◦Familiar with Oracle Database Server 10g, and have at least a basic knowledge of Oracle Real Application Clusters (Oracle RAC).
      ◦Experienced in the use of RapidClone, AutoConfig, and AD utilities, as well as the steps required to convert from a single instance Oracle E-Business Suite installation to a RAC-enabled one.
      •The source system must remain in a running and active state during database Image creation.
      •The addition of database RAC nodes (beyond the assumed secondary node) is, from the RapidClone perspective, easily handled. However, the Clusterware software stack and cluster-specific configuration must be in place first, to allow RapidClone to configure the database technology stack properly. The CRS specific steps required for the addition of database nodes are briefly covered further in Appendix A however the Oracle Clusterware product documentation should be referred to for greater detail and understanding.
      •Details such as operating system configuration of mount points, installation and configuration of ASM, OCFS2, NFS or other forms of cluster file systems are not covered in this document.
      •Oracle Clusterware installation and component service registration are not covered in this document.
      •Oracle Real Application Clusters Setup and Configuration Guide 10g Release 2 (10.2) is a useful reference when planning to set up Oracle Real Application Clusters and shared devices.
      1.4 Restrictions
      Before using RapidClone to create a clone of an Oracle E-Business Suite Release 12 RAC-enabled system, you should be aware of the following restrictions and limitations:

      •This RAC-To-RAC cloning procedure can be used on Oracle Database 10g and 11g RAC Systems.
      •The final cloned RAC environment will:
      ◦Use the Oracle Managed Files option for datafile names.
      ◦Contain the same number of redo log threads as the source system.
      ◦Have all datafiles located under a single “DATA_TOP” location.
      ◦Contain only a single control file, without any of the extra copies that the DBA typically expects.
      •During the cloning process, no allowance is made for the use of a Flash Recovery Area (FRA). If an FRA needs to be configured on the target system, it must be done manually.
      •At the conclusion of the cloning process, the final cloned Oracle RAC environment will use a pfile (parameter file) instead of an spfile. For proper CRS functionality, you should create an spfile and locate it in a shared storage location that is accessible from both Oracle RAC nodes.
      •Beside ASM and OCFS2, only NetApp branded devices (certified NFS clustered file systems) have been confirmed to work at present. While other certified clustered file systems should work for RAC-To-RAC cloning, shared storage combinations not specifically mentioned in this article are not guaranteed to work, and will therefore only be supported on a best-efforts basis.
      Section 2: Configuration Requirements for the Source Oracle RAC System
      2.1 Required Patches
      Please refer to OracleMetaLink Note 406982.1, “Cloning Oracle Applications Release 12 with Rapid Clone” to obtain the latest required RapidClone Consolidated Update patch number. Download and apply the latest required RapidClone Consolidated Update patch at this time.

      Warning: After applying any new Rapid Clone, AD or AutoConfig patch, the ORACLE_HOME(s) on the source system must be updated with the files included in those patches. To synchronize the Rapid Clone and AutoConfig files within the RDBMS ORACLE_HOME using the admkappsutil.pl utility, refer to OracleMetaLink Note 387859.1, Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12, and follow the instructions in section System Configuration and Maintenance, subsection Patching AutoConfig

      2.2 Supported Oracle RAC Migration
      The source Oracle E-Business Suite RAC environment must be created in accordance with OracleMetalink Note 388577.1, Using Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12. The RAC-To-RAC cloning process described here has only been validated for use on Oracle E-Business Suite Release 12 systems that have been converted to use Oracle RAC as per this note.

      2.3 AutoConfig Compliance on Oracle RAC Nodes
      Also in accordance with OracleMetalink Note 388577.1, AutoConfig must have been used during Oracle RAC configuration of the source system (following conversion).

      2.4 Supported Datafile Storage Methods
      The storage method used for the source system datafiles must be one of the following Oracle 10g RAC Certified types:

      •NFS Clustered File Systems (such as NetApp Filers)

      •ASM (Oracle Automatic Storage Management)

      •OCFS2 (Oracle Cluster File System V2)
      2.5 Archive Log Mode
      The source system database instances must be in archive log mode, and the archive log files must be located within the shared storage area where the datafiles are currently stored. This conforms to standard Oracle RAC best practices.

      Warning: If the source system was not previously in archive log mode, but it has recently been enabled, or if the source system parameter ARCHIVE_LOG_DEST was at some point set to any local disk directory location, you must ensure that RMAN has a properly maintained list of valid archive logs located exclusively in the shared storage area.

      To confirm RMAN knows only of your archive logs located on the shared disk storage area, do the following.

      First, use SQL*Plus or RMAN to show the locations of the archive logs. For example:

      SQL>archive log list

      If the output shows a local disk location, change this location appropriately, and back up or relocated any archive log files to the shared storage area. It will then be necessary to correct the RMAN archive log manifest, as follows:

      RMAN>crosscheck archivelog all;

      Review the output archive log file locations and, assuming you have relocated or removed any locally stored archive logs, you will need to correct the invalid or expired archive logs as follows:

      RMAN>delete expired archivelog all;

      It is essential to carry out the above steps (if applicable) before you continue with the Oracle E-Business Suite Release 12 RAC cloning procedure.

      2.6 Control File Location
      The database instance control files must be located in the shared storage area as well.

      Section 3: Configuration Requirements for the Target RAC System
      3.1 User Equivalence between Oracle RAC Nodes
      Set up ssh and rsh user equivalence (that is, without password prompting) between primary and secondary target Oracle RAC nodes. This is described in Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2), with the required steps being listed in Section 2.4.7, “Configuring SSH on All Cluster Nodes”.

      3.2 Install Cluster Manager
      Install Oracle Cluster Manager, and update the version to match that of the source system database. For example, if the original source system database is 10.2.0.3, Cluster Manager must also be patched to the 10.2.0.3 level.

      Note: For detailed instructions regarding the installation and usage of Oracle’s Clusterware software as it relates to Oracle Real Applications Clusters, see the following article: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide10g Release 2 (10.2).

      3.3 Verify Shared Mount Points or Disks
      Ensure that all shared disk sub-systems are fully and properly configured: they need to have adequate space, be writable by the future oracle software owner, and be accessible from both primary and secondary nodes.

      Note: For details on configuring ASM, OCFS, and NFS with NetApp Filer, see the following articles:

      •Oracle Database Administrator’s Guide 10g Release 2(10.2) contains details on creating the ASM instances. For ASM best practices, refer to Automatic Storage Management Technical Best Practices.
      •Oracle Cluster File System User’s Guide contains details on installing and configuring OCFS2. For OCFS best practices, refer to Linux OCFS – Best Practices.
      •Linux/NetApp RHEL/SUSE Setup Recommendations for NetApp Filer Storage contains details specific to Linux NFS mount options and please see Configuring Network Appliance’s NetApp To Work With Oracle for details on where to find NetApp co-authored articles related to using NetApp-branded devices with Oracle products.

      Note: For ASM target deployments, it is strongly recommended that a separate $ORACLE_HOME be installed for ASM management, whatever the location of your ASM listener configuration, and it is required to change the default listener configuration via the netca executable. The ASM default listener name (or service name) must not be of the form LISTENER_[HOSTNAME]. This listener name (LISTENER_[HOSTNAME]) will be specified and used later by AutoConfig for the RAC-enabled Oracle E-Business Suite database listener.

      3.4 Verify Network Layer Interconnects
      Ensure that the network layer is properly defined for private, public and VIP (Clusterware) Interconnects. This should not be a problem if runcluvfy.sh from the Oracle Clusterware software stage area was executed without error prior to CRS installation.

      Section 4: Preparing the Source Oracle RAC System for Cloning
      4.1 Update the File System with the latest Oracle RAC Patches
      The latest RapidClone Consolidate Update patch (with the post-patch steps in its README) and all pre-requisite patches should have already been applied above from Section 2 of this note. After patch application, adpreclone.pl must be re-executed on all the application tiers and database tiers. For example, on the database tier, the following command would be used:

      $ cd $ORACLE_HOME/appsutil/scripts/[context_name]
      $ perl adpreclone.pl dbTier

      After executing adpreclone.pl on all the application and database tiers, perform the steps below.

      4.2 Create Database Image
      Note: Do NOT shut down the source system database services to complete the steps on this section. The database must remain mounted and open for the imaging process to successfully complete. RapidClone for RAC-enabled Oracle E-Business Suite Release 12 systems operates differently from single instance cloning.

      Login to the primary Oracle RAC node, navigate to [ORACLE_HOME]/appsutil/clone/bin, and run the adclone.pl utility from a shell as follows:

      perl adclone.pl \
      java=[JDK 1.5 Location] \
      mode=stage \
      stage=[Stage Directory] \
      component=database \
      method=RMAN \
      dbctx=[RAC DB Context File] \
      showProgress

      Where:

      Parameter Usage
      stage Any directory or mount point location outside the current ORACLE_HOME location, with enough space to hold the existing database datafiles in an uncompressed form.
      dbctx Full Path to the existing Oracle RAC database context file.

      The above command will create a series of directories under the specified stage location.

      After the stage creation is completed, navigate to [stage]/data/stage. In this directory, you will find several 2GB RMAN backup/image files. These files will have names like “1jj9c44g_1_1”. The number of files present will depend on the source system configuration. The files, along with the “backup_controlfile.ctl”, will need to be transferred to the target system upon which you wish to create your new primary Oracle RAC node.

      These files should be placed into a temporary holding area, which will ultimately be removed later.

      4.3 Archive the ORACLE_HOME
      Note: The database may be left up and running during the ORACLE_HOME archive creation process.

      Create an archive of the source system ORACLE_HOME on the primary node:

      $ cd $ORACLE_HOME/..
      $ tar -cvzf rac_db_oh.tgz [DATABASE TOP LEVEL DIRECTORY]

      Note: Consider using data integrity utilities such as md5sum, sha1sum, or cksum to validate the file sum both before and after transfer to the target system.

      This source system ORACLE_HOME archive should now be transferred to the target system RAC nodes upon which you will be configuring the new system, and placed in the directory you wish to use as the new $ORACLE_HOME.
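As a hedged example of the checksum and transfer steps (the host name and staging directory below are placeholders only), the sequence might look like:

$ md5sum rac_db_oh.tgz > rac_db_oh.tgz.md5                        # on the source primary node
$ scp rac_db_oh.tgz rac_db_oh.tgz.md5 oracle@kawasaki:/s1/stage   # copy to the target node
$ md5sum -c rac_db_oh.tgz.md5                                     # on the target node, in /s1/stage

A mismatch reported by md5sum -c indicates the archive should be transferred again.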

      Section 5: RAC-to-RAC Cloning
      5.1 Target System Primary Node Configuration (Clone Initial Node)
      Follow the steps below to clone the primary node (i.e. Node 1) to the new target system.

      5.1.1 Uncompress ORACLE_HOME
      Uncompress the ORACLE_HOME archive that was transferred from the source system. Choose a suitable location, and rename the extracted top-level directory name to something meaningful on the new target system.

      $ tar -xvzf rac_db_oh.tgz

      5.1.2 Create pairsfile.txt File for Primary Node
      Create a [NEW_ORACLE_HOME]/appsutil/clone/pairsfile.txt text file with contents as shown below:

      s_undo_tablespace=[UNDOTBS1 for Initial Node]
      s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
      s_db_oh=[Location of new ORACLE_HOME]
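As an illustration only (all values below are hypothetical and must be adjusted to your own target system), a filled-in pairsfile.txt for a two-node cluster might look like:

s_undo_tablespace=UNDOTBS1
s_dbClusterInst=2
s_db_oh=/s1/atgrac/racdb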
      5.1.3 Create Context File for Primary Node
      Execute the following command to create a new context file, providing carefully determined answers to the prompts.

      Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run the adclonectx.pl utility with the following parameters:

      perl adclonectx.pl \
      contextfile=[PATH to OLD Source RAC contextfile.xml] \
      template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
      pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
      initialnode

      Where:

      Parameter Usage
      contextfile Full path to the old source RAC database context file.
      template Full path to the existing database context file template.
      pairsfile Full path to the pairsfile created in the last step.

      Note: A new and unique global database name (DB name) must be selected when creating the new target system context file. Do not use the source system global database name or SID name during any of the context file interview prompts as shown below.

      You will be presented with the following questions [sample answers provided]:

      Target System Hostname (virtual or normal) [kawasaki] [Enter appropriate value if not defaulted]

      Do you want the inputs to be validated (y/n) [n] ? : [Enter n]

      Target Instance is RAC (y/n) [y] : [Enter y]

      Target System Database Name : [Enter new desired global DB name, not a SID; motoGP global name was selected here]

      Do you want the target system to have the same port values as the source system (y/n) [y] ? : [Select yes or no]

      Provide information for the initial RAC node:

      Host name [ducati] : [Always need to change this value to the current public machine name, for example kawasaki]

      Virtual Host name [null] : [Enter the Clusterware VIP interconnect name, for example kawasaki-vip ]

      Instance number [1] : 1 [Enter 1, as this will always be the instance number when you are on the primary target node]

      Private interconnect name [kawasaki] [Always need to change this value; enter the private interconnect name, such as kawasaki-priv]

      Target System quorum disk location required for cluster manager and node monitor : /tmp [Legacy parameter; just enter /tmp]

      Target System cluster manager service port : 9998 [This is a default port used for CRS ]

      Target System Base Directory : [Enter the base directory that contains the new_oh_loc dir]

      Oracle OS User [oracle] : [Should default to correct current user; just hit enter]

      Oracle OS Group [dba] : [Should default to correct current group; just hit enter]

      Target System utl_file_dir Directory List : /usr/tmp [Specify an appropriate value for your requirements]

      Number of DATA_TOP’s on the Target System [2] : 1 [At present, you can only have one data_top with RAC-To-RAC cloning]

      Target System DATA_TOP Directory 1 : +APPS_RAC_DISK [The shared storage location; ASM diskgroup/NetApps NFS mount point/OCFS mount point]

      Do you want to preserve the Display [null] (y/n) ? : [Respond according to your requirements]

      New context path and file name [/s1/atgrac/racdb/appsutil/motoGP1_kawasaki.xml] : [Double-check proposed location, and amend if needed]

      Note: It is critical that the correct values are selected above: if you are uncertain, review the newly-written context file and compare it with values selected during source system migration to RAC (as per OracleMetalink Note 388577.1).

      When making comparisons, always ensure that any path differences between the source and target systems are understood and accounted for.
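One hedged way to perform this review is to compare the newly generated context file against the source system context file copied over with the ORACLE_HOME, bearing in mind that host names, instance names, and paths are expected to differ:

$ diff [PATH to OLD Source RAC contextfile.xml] [NEW ORACLE_HOME]/appsutil/[NEW CONTEXT_NAME].xml | more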

      5.1.4 Restore Database on Target System Primary Node
      Warning: It is NOT recommended to clone an E-Business Suite RAC-enabled environment to the same host. However, if the source and target systems must be on the same host, make certain the source system is cleanly shut down and the datafiles are moved to a temporarily inaccessible location prior to restoring/recovering the new target system.

      Failure to understand this warning could result in corrupt redo logs on the source system. Same host RAC cloning requires the source system to be down.

      Warning: In addition to same host RAC node cloning, it is also NOT recommended to attempt cloning E-Business Suite RAC-enabled environments to a target system which can directly access the source system dbf files (perhaps via an NFS shared mount). If the intended target file system has access to the source dbf files, corruption of redo log files can occur on the source system. It is also possible that corruption might occur if ANY dbf files exist on the new intended target file system which match the original source mount point [i.e. /foo/datafiles]. If existing datafiles on the target are in a file system location that is also present on the source server [i.e. /foo/datafiles], shut down the database which owns these datafiles.

      Failure to understand this warning could result in corrupt redo logs on the source system, or in any existing database on the target host whose mount point matches that of the original (and perhaps unrelated) source system. If unsure, shut down any database on the target that stores datafiles in a path which also existed on the source system and in which datafiles were stored.

      Restore the database after the new ORACLE_HOME is configured.

      5.1.4.1 Run adclone.pl to Restore and Rename Database on New Target System
      Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run Rapid Clone (adclone.pl utility) with the following parameters:

      perl adclone.pl \
      java=[JDK 1.5 Location] \
      component=dbTier \
      mode=apply \
      stage=[ORACLE_HOME]/appsutil/clone \
      method=CUSTOM \
      dbctxtg=[Full Path to the Target Context File] \
      rmanstage=[Location of the Source RMAN dump files… i.e. RMAN_STAGE/data/stage] \
      rmantgtloc=[Shared storage loc for datafiles…ASM diskgroup / NetApps NFS mount / OCFS mount point] \
      srcdbname=[Source RAC system GLOBAL name] \
      pwd=[APPS Password] \
      showProgress

      Where:

      Parameter Usage
      java Full path to the directory where JDK 1.5 is installed.
      stage This parameter is static and refers to the newly-unzipped [ORACLE_HOME]/appsutil/clone directory.
      dbctxtg Full path to the new context file created by adclonectx.pl under [ORACLE_HOME]/appsutil.
      rmanstage Temporary location where you have placed database “image” files transferred from the source system to the new target host.
      rmantgtloc Base directory or ASM diskgroup location into which you wish the database (dbf) files to be extracted. The recreation process will create subdirectories of [GLOBAL_DB_NAME]/data, into which the dbf files will be placed. Only the shared storage mount point top level location needs to be supplied.
      srcdbname Source system GLOBAL_DB_NAME (not the SID of a specific node). Refer to the source system context file parameter s_global_database_name. Note that no domain suffix should be added.
      pwd Password for the APPS user.

      Note: The directories and mount points selected for the rmanstage and rmantgtloc locations should not contain datafiles for any other databases. The presence of unrelated datafiles may result in very lengthy restore operations, and on some systems a potential hang of the adclone.pl restore command.

      Running the adclone.pl command may take several hours. From a terminal window, you can run:

      $ tail -f [ORACLE_HOME]/appsutil/log/$CONTEXT_NAME/ApplyDatabase_[time].log

      This will display and periodically refresh the last few lines of the main log file (mentioned when you run adclone.pl), where you will see references to additional log files that can help show the current actions being executed.

      5.1.4.2 Verify TNS Listener has been started
      After the above process exits, and it has been confirmed that no errors were encountered, you will have a running database and TNS listener, with the new SID name chosen earlier.

      Confirm that the TNS listener is running, and has the appropriate service name format as follows:

      $ ps -ef | grep tns | awk '{ print $9 }'

      The output from the above command should return a string of the form LISTENER_[hostname]. If it does not, verify the listener.ora file in the $TNS_ADMIN location before continuing with the next steps: the listener must be up and running before executing AutoConfig.
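Once the new environment file under the ORACLE_HOME has been sourced, the listener can also be checked directly with lsnrctl; this is a minimal sketch and the host name is hypothetical:

$ lsnrctl status LISTENER_kawasaki

The output should show the listener up and servicing the new database instance.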
      5.1.4.3 Run AutoConfig
      At this point, the new database is fully functional. However, to complete the configuration you must navigate to [ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME] and execute the following command to run AutoConfig:

      $ adautocfg.sh appspass=[APPS Password]

      5.2 Target System Secondary Node Configuration (Clone Additional Nodes)
      Follow the steps below to clone the secondary nodes (for example, Node 2) on to the target system.

      5.2.1 Uncompress the archived ORACLE_HOME transferred from the Source System
      Uncompress the source system ORACLE_HOME archive to a location matching that present on your target system primary node. The directory structure should match that present on the newly created target system primary node.

      $ tar -xvzf rac_db_oh.tgz

      5.2.2 Archive the [ORACLE_HOME]/appsutil directory structure from the new Primary Node
      Log in to the new target system primary node, and execute the following commands:

      $ cd [ORACLE_HOME]
      $ zip -r appsutil_node1.zip appsutil

      5.2.3 Copy appsutil_node1.zip to the Secondary Target Node
      Transfer and then expand the appsutil_node1.zip into the secondary target RAC node [NEW ORACLE_HOME].

      $ cd [NEW ORACLE_HOME]
      $ unzip -o appsutil_node1.zip

      5.2.4 Update pairsfile.txt for the Secondary Target Node
      Alter the existing pairsfile.txt (from the first target node) and change the s_undo_tablespace parameter. As this is the second node, the correct value would be UNDOTBS2. As an example, the [NEW_ORACLE_HOME]/appsutil/clone/pairsfile.txt would look like:
      s_undo_tablespace=[Or UNDOTBS(+1) for additional Nodes]
      s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
      s_db_oh=[Location of new ORACLE_HOME]

      5.2.5 Create a Context File for the Secondary Node
      Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run the adclonectx.pl utility as follows:

      perl adclonectx.pl \
      contextfile=[Path to Existing Context File from the First Node] \
      template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
      pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
      addnode

      Where:

      Parameter Usage
      contextfile Full path to the existing context file from the first (primary) node.
      template Full path to the existing database context file template.
      pairsfile Full path to the pairsfile updated in the last step.

      Several of the interview prompts are the same as on Node 1. However, there are some new questions which are specific to the “addnode” option used when on the second node.

      Note: When answering the questions below, review your responses carefully before entering them. The rest of the inputs (not shown) are the same as those encountered during the context file creation on the initial node (primary node).

      Host name of the live RAC node : kawasaki [enter appropriate value if not defaulted]

      Domain name of the live RAC node : yourdomain.com [enter appropriate value if not defaulted]

      Database SID of the live RAC node : motoGP1 [enter the individual SID, NOT the Global DB name]

      Listener port number of the live RAC node : 1548 [enter the port # of the Primary Target Node you just created]

      Provide information for the new Node:

      Host name : suzuki [enter appropriate value if not defaulted, like suzuki]

      Virtual Host name : suzuki-vip [enter the Clusterware VIP interconnect name, like suzuki-vip.yourdomain.com]

      Instance number : 2 [enter the instance # for this current node]

      Private interconnect name : suzuki-priv [enter the private interconnect name, like suzuki-priv]

      Current Node:

      Host Name : suzuki

      SID : motoGP2

      Instance Name : motoGP2

      Instance Number : 2

      Instance Thread : 2

      Undo Table Space: UNDOTBS2 [enter value earlier added to pairsfile.txt, if not defaulted]

      Listener Port : 1548

      Target System quorum disk location required for cluster manager and node monitor : [legacy parameter, enter /tmp]

      Note: At the conclusion of these “interview” questions related to context file creation, look carefully at the generated context file and ensure that the values contained therein correspond to the values entered during context file creation on Node 1. The values should be almost identical, a small but important exception being that the local instance name will have a number 2 instead of a 1.

      5.2.6 Configure NEW ORACLE_HOME
      Run the commands below to move to the correct directory and continue the cloning process:
      $ cd [NEW ORACLE_HOME]/appsutil/clone/bin
      $ perl adcfgclone.pl dbTechStack [Full path to the database context file created in previous step]

      Note: At the conclusion of this command, you will receive a console message indicating that the process exited with status 1 and that the addlnctl.sh script failed to start a listener named [SID]. That is expected, as this is not the proper service name. Start the proper listener by executing the following command:

      [NEW_ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME]/addlnctl.sh start LISTENER_[hostname].

      This command will start the correct (RAC-specific) listener with the proper service name.

      5.2.7 Source the new environment file in the ORACLE_HOME
      Run the commands below to move to the correct directory and source the environment:

      $ cd [NEW ORACLE_HOME]
      $ . ./[CONTEXT_NAME].env

      5.2.8 Modify [SID]_APPS_BASE.ora
      Edit the [SID]_APPS_BASE.ora file and change the control file parameter to reflect the correct control file location on the shared storage. This will be the same value as in the [SID]_APPS_BASE.ora on the target system primary node which was just created.
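For illustration only (the disk group, database name, and file name below are hypothetical and must match your own shared storage layout), the control file entry in [SID]_APPS_BASE.ora might resemble:

control_files = '+APPS_RAC_DISK/motoGP/data/cntrl01.dbf'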

      5.2.9 Start Oracle RAC Database
      Start the database using the following commands:

      $ sqlplus /nolog
      SQL> connect / as sysdba
      SQL> startup

      5.2.10 Execute AutoConfig
      Run AutoConfig to generate the proper listener.ora and tnsnames.ora files:

      $ cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME
      $ ./adautocfg.sh appspass=[APPS Password]

      5.3 Carry Out Target System (Primary Node) Final Oracle RAC Configuration Tasks
      5.3.1 Recreate TNSNAMES and LISTENER.ORA
      Login again to the target primary node (Node 1) and run AutoConfig to perform the final Oracle RAC configuration and create new listener.ora and tnsnames.ora (as the FND_NODES table did not contain the second node hostname until AutoConfig was run on the secondary target RAC node).

      $ cd $ORACLE_HOME/appsutil/scripts/[CONTEXT_NAME]
      $ ./adautocfg.sh appspass=[APPS Password]

      Note: This execution of AutoConfig on the primary target RAC Node 1 will add the second RAC node connection information to the first node’s tnsnames.ora, such that listener load balancing can occur. If you have more than two nodes in your new target system cluster, you must repeat the steps in Section 5.2 for each subsequent node.

      Section 6: Target System Applications Tier Cloning for RAC
      The target system Applications Tier may be located in any one of these locations:

      •Primary target database node
      •Secondary target database node
      •An independent machine that is not one of the target system RAC database nodes
      •Shared between two or more machines
      Because of the complexities which might arise, it is suggested that the applications tier should initially be configured to connect to a single database instance. After proper configuration with one of the two target system RAC nodes has been achieved, context variable changes can be made such that JDBC and TNS Listener load balancing are enabled.

      6.1 Clone the Applications Tier
      In order to clone the applications tier, follow the standard steps for the applications node described in Sections 2 and 3 of OracleMetalink Note 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone. This includes the adpreclone steps, copying the bits to the target, the configuration steps, and the finishing tasks.

      Note: On the applications tier, during the adcfgclone.pl execution, you will be asked for the database to which the applications tier services should connect. Enter the information specific to a single target system RAC node (such as the primary node). On successful completion of this step, the applications node services will be started, and you should be able to log in and use the new target system Applications system.
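As a hedged illustration of the standard flow in Note 406982.1 (directory names are generic placeholders), the source and target application tier commands typically look like:

$ cd $ADMIN_SCRIPTS_HOME            # on the source application tier, as the applmgr user
$ perl adpreclone.pl appsTier
$ cd [COMMON_TOP]/clone/bin         # on the target application tier, after copying the file system
$ perl adcfgclone.pl appsTier

During adcfgclone.pl, supply the connection details of the single target RAC node chosen above.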

      6.2 Configure Application Tier JDBC and Listener Load Balancing
      Reconfigure the applications node context variables such that database listener/instance load balancing can occur.

      Note: The following details have been extracted from OracleMetalink Note 388577.1 for your convenience. Consult this note for further information.

      Implement load balancing for the Applications database connections:

      a. Run the context editor (through Oracle Applications Manager) and set the values of “Tools OH TWO_TASK” (s_tools_two_task), “iAS OH TWO_TASK” (s_weboh_twotask), and “Apps JDBC Connect Alias” (s_apps_jdbc_connect_alias).

      b. To load-balance the Forms-based Applications database connections, set the value of “Tools OH TWO_TASK” to point to the [database_name]_balance alias generated in the tnsnames.ora file (an example alias is sketched after this list).

      c. To load-balance the self-service Applications database connections, set the values of “iAS OH TWO_TASK” and “Apps JDBC Connect Alias” to point to the [database_name]_balance alias generated in the tnsnames.ora file.

      Execute AutoConfig by running the command:
      cd $ADMIN_SCRIPTS_HOME; ./adautocfg.sh

      d. After successful completion of AutoConfig, restart the Applications tier processes via the scripts located in $ADMIN_SCRIPTS_HOME.

      e. Ensure that the value of the profile option “Application Database ID” is set to the dbc file name generated in $FND_SECURE.
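For reference, a hedged sketch of the [database_name]_balance alias that AutoConfig generates in tnsnames.ora is shown below; the host, port, and service names are the hypothetical values used earlier in this note:

motoGP_balance=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=YES)
      (FAILOVER=YES)
      (ADDRESS=(PROTOCOL=tcp)(HOST=kawasaki-vip.yourdomain.com)(PORT=1548))
      (ADDRESS=(PROTOCOL=tcp)(HOST=suzuki-vip.yourdomain.com)(PORT=1548))
    )
    (CONNECT_DATA=
      (SERVICE_NAME=motoGP)
    )
  )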
      Section 7: Advanced RAC Cloning Scenarios
      7.1 Cloning the Database Separately
      In certain cases, customers may require the RAC database to be recreated separately, without using the full lock-step mechanism employed during a regular E-Business Suite RAC RapidClone scenario.

      This section documents the steps needed to allow for manual creation of the target RAC database control files (or the reuse of existing control files) within the Rapid Clone process.

      Unless otherwise noted, all commands are specific to the primary target database instance.

      Follow ONLY steps 1 and 2 in Section 2: Cloning Tasks of OracleMetalink Note 406982.1, then continue with these steps below to complete Cloning the Database Separately.

      a. Log on to the primary target system host as the ORACLE UNIX user.

      b. Configure the [RDBMS ORACLE_HOME] as noted above in Section 5: RAC-to-RAC Cloning; execute ONLY steps 5.1.1, 5.1.2 and 5.1.3.

      c. Create the target database control files manually (if needed), or modify the existing control files as needed to define the datafile, redo log, and archive log locations, along with any other relevant and required settings.

      In this step, you copy and recreate the database using your preferred method, such as RMAN restore, Flash Copy, Snap View, or Mirror View.

      d. Start the new target RAC database in open mode.

      e. Run the library update script against the RAC database.

      $ cd [RDBMS ORACLE_HOME]/appsutil/install/[CONTEXT_NAME]
      $ sqlplus "/ as sysdba" @adupdlib.sql [libext]

      Where [libext] should be set to 'sl' for HP-UX, 'so' for any other UNIX platform, or 'dll' for Windows.

      f. Configure the primary target database.

      The database must be running and open before performing this step.

      $ cd [RDBMS ORACLE_HOME]/appsutil/clone/bin
      $ perl adcfgclone.pl dbconfig [Database target context file]

      Where the database target context file is: [RDBMS ORACLE_HOME]/appsutil/[Target CONTEXT_NAME].xml.

      Note: The dbconfig option will configure the database with the required settings for the new target, but it will not recreate the control files.

      g. When the above tasks (a-f) are completed on the primary target database instance, see “5.2 Target System Secondary Node Configuration (Clone Additional Nodes)” to configure any secondary database instances.

      7.2 Additional Advanced RAC Cloning Scenarios
      Rapid Clone is only certified for RAC-to-RAC Cloning. Addition or removal of RAC nodes during the cloning process is not currently supported.

      For complete details on the certified RAC scenarios for E-Business Suite Cloning, please refer to Document 783188.1 available in OracleMetaLink.

      Appendix A: Configuring Oracle Clusterware on the Target System Database Nodes
      Associating Target System Oracle RAC Database instances and listeners with Clusterware (CRS)

      Add target system database, instances and listeners to CRS by running the following commands as the owner of the CRS installation:

      $ srvctl add database -d [database_name] -o [oracle_home]
      $ srvctl add instance -d [database_name] -i [instance_name] -n [host_name]
      $ srvctl add service -d [database_name] -s [service_name]
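Using the hypothetical database, instance, and host names from the earlier sections of this note (substitute your own values and ORACLE_HOME), the registration might look like:

$ srvctl add database -d motoGP -o /s1/atgrac/racdb
$ srvctl add instance -d motoGP -i motoGP1 -n kawasaki
$ srvctl add instance -d motoGP -i motoGP2 -n suzuki
$ srvctl start database -d motoGP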

      Note: For detailed instructions regarding the installation and usage of Oracle Clusterware software as it relates to Oracle Real Application Clusters, see the following article: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide 10g Release 2 (10.2).

      Change Log
      Date Description
      Mar 4, 2010 •Added further warning details to section 5.1.4 regarding RMAN limitation

      Feb 15, 2010 •Numerous small updates and clarifications as suggested by members of support
      •Cosmetic changes to adjust formatting issues
      •Added “same host cloning” warning
      •Added Section 7.1 cloning Database separately

      Feb 14, 2010 •Removed required patch 7164226 to point to most current RC CUP

      May 12, 2009 •Removed references to 5767290 and replace with 7164226

      Mar 02, 2009 •Updates made for supported RAC scenarios and reference to Note 783188.1

      Jul 28, 2008 •Edited in readiness for publication

      Jul 22, 2008 •Formatting updates

      Jun 18, 2008 •Added RAC to Single Instance Scale-Down

      May 27, 2008 •Added ASM specific details

      May 16, 2008 •Initial internal release

      Apr 4, 2008 •Document creation

      Note 559518.1 by Oracle E-Business Suite Development
      Copyright 2008, Oracle


      Comment by gjilevski | May 27, 2010 | Reply

  2. Could you please help me with the cloning process? We have two R12.1.1 application nodes with shared file system and two 11g R2 db nodes with RAC.

    We have similar setup on our test instance as well. I have to clone from PROD to TEST now. Could you please help me with the steps to clone it?

    Regards,
    Mehmod

    Comment by Mehmod | May 27, 2010 | Reply

    • Hello,

      There are a few very useful notes on MOS (formerly MetaLink) that will help with this.

      Please look at 559518.1 and 406982.1. If you have further questions, do not hesitate to contact me.

      Wish you luck and please let me know how it goes.

      Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone [ID 559518.1]
      Section 3: Configuration Requirements for the Target RAC System
      3.1 User Equivalence between Oracle RAC Nodes
      Set up ssh and rsh user equivalence (that is, without password prompting) between primary and secondary target Oracle RAC nodes. This is described in Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2), with the required steps being listed in Section 2.4.7, “Configuring SSH on All Cluster Nodes”.
      3.2 Install Cluster Manager
      Install Oracle Cluster Manager, and update the version to match that of the source system database. For example, if the original source system database is 10.2.0.3, Cluster Manager must also be patched to the 10.2.0.3 level.
      Note: For detailed instructions regarding the installation and usage of Oracle’s Clusterware software as it relates to Oracle Real Applications Clusters, see the following article: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide10g Release 2 (10.2).
      3.3 Verify Shared Mount Points or Disks
      Ensure that all shared disk sub-systems are fully and properly configured: they need to have adequate space, be writable by the future oracle software owner, and be accessible from both primary and secondary nodes.
      Note: For details on configuring ASM, OCFS, and NFS with NetApp Filer, see the following articles:
      • Oracle Database Administrator’s Guide 10g Release 2(10.2) contains details on creating the ASM instances. For ASM best practices, refer to Automatic Storage Management Technical Best Practices.
      • Oracle Cluster File System User’s Guide contains details on installing and configuring OCFS2. For OCFS best practices, refer to Linux OCFS – Best Practices.
      • Linux/NetApp RHEL/SUSE Setup Recommendations for NetApp Filer Storage contains details specific to Linux NFS mount options and please see Configuring Network Appliance’s NetApp To Work With Oracle for details on where to find NetApp co-authored articles related to using NetApp-branded devices with Oracle products.
      Note: For ASM target deployments, it is strongly recommended that a separate $ORACLE_HOME be installed for ASM management, whatever the the location of your ASM listener configuration, and it is required to change the default listener configuration via the netca executable. The ASM default listener name (or service name) must not be of the form LISTENER_[HOSTNAME]. This listener name (LISTENER_[HOSTNAME]) will be specified and used later by AutoConfig for the RAC-enabled Oracle E-Business Suite database listener.
      3.4 Verify Network Layer Interconnects
      Ensure that the network layer is properly defined for private, public and VIP (Clusterware) Interconnects. This should not be a problem if runcluvfy.sh from the Oracle Clusterware software stage area was executed without error prior to CRS installation.
      Section 4: Preparing the Source Oracle RAC System for Cloning
      4.1 Update the File System with the latest Oracle RAC Patches
      The latest RapidClone Consolidate Update patch (with the post-patch steps in its README) and all pre-requisite patches should have already been applied above from Section 2 of this note. After patch application, adpreclone.pl must be re-executed on all the application tiers and database tiers. For example, on the database tier, the following command would be used:
      $ cd $ORACLE_HOME/appsutil/scripts/[context_name]
      $ perl adpreclone.pl dbTier
      After executing adpreclone.pl on all all the application and database tiers, perform the steps below.
      4.2 Create Database Image
      Note: Do NOT shut down the source system database services to complete the steps on this section. The database must remain mounted and open for the imaging process to successfully complete. RapidClone for RAC-enabled Oracle E-Business Suite Release 12 systems operates differently from single instance cloning.
      Login to the primary Oracle RAC node, navigate to [ORACLE_HOME]/appsutil/clone/bin, and run the adclone.pl utility from a shell as follows:
      perl adclone.pl \
      java=[JDK 1.5 Location] \
      mode=stage \
      stage=[Stage Directory] \
      component=database \
      method=RMAN \
      dbctx=[RAC DB Context File] \
      showProgress
      Where:
      Parameter Usage
      Stage Any directory or mount point location outside the current ORACLE_HOME location, with enough space to hold the existing database datafiles in an uncompressed form.
      Dbctx Full Path to the existing Oracle RAC database context file.
      The above command will create a series of directories under the specified stage location.
      After the stage creation is completed, navigate to [stage]/data/stage. In this directory, you will find several 2GB RMAN backup/image files. These files will have names like “1jj9c44g_1_1”. The number of files present will depend on the source system configuration. The files, along with the “backup_controlfile.ctl”, will need to be transferred to the target system upon which you wish to creation your new primary Oracle RAC node.

      These files should be placed into a temporary holding area, which will ultimately be removed later.
      4.3 Archive the ORACLE_HOME
      Note: The database may be left up and running during the ORACLE_HOME archive creation process.
      Create an archive of the source system ORACLE_HOME on the primary node:
      $ cd $ORACLE_HOME/..
      $ tar -cvzf rac_db_oh.tgz [DATABASE TOP LEVEL DIRECTORY]
      Note: Consider using data integrity utilities such as md5sum, sha1sum, or cksum to validate the file sum both before and after transfer to the target system.
      This source system ORACLE_HOME archive should now be transferred to the target system RAC nodes upon which you will be configuring the new system, and placed in the directory you wish to use as the new $ORACLE_HOME.
      Section 5: RAC-to-RAC Cloning
      5.1 Target System Primary Node Configuration (Clone Initial Node)
      Follow the steps below to clone the primary node (i.e. Node 1) to the new target system.
      5.1.1 Uncompress ORACLE_HOME
      Uncompress the ORACLE_HOME archive that was transferred from the source system. Choose a suitable location, and rename the extracted top-level directory name to something meaningful on the new target system.
      $ tar -xvzf rac_db_oh.tgz
      5.1.2 Create pairsfile.txt File for Primary Node
      Create a [NEW_ORACLE_HOME]/appsutils/clone/pairsfile.txt text file with contents as shown below:
      s_undo_tablespace=[UNDOTBS1 for Initial Node]
      s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
      s_db_oh=[Location of new ORACLE_HOME]
      5.1.3 Create Context File for Primary Node
      Execute the following command to create a new context file, providing carefully determined answers to the prompts.
      Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run the adclonectx.pl utility with the following parameters:
      perl adclonectx.pl \
      contextfile=[PATH to OLD Source RAC contextfile.xml] \
      template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
      pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
      initialnode
      Where:
      Parameter Usage
      contextfile Full path to the old source RAC database context file.
      Template Full path to the existing database context file template.
      Pairsfile Full path to the pairsfile created in the last step.
      Note: A new and unique global database name (DB name) must be selected when creating the new target system context file. Do not use the source system global database name or sid name uring any of the context file interview prompts as shown below.
      You will be present with the following questions [sample answers provided]:
      Target System Hostname (virtual or normal) [kawasaki] [Enter appropriate value if not defaulted]

      Do you want the inputs to be validated (y/n) [n] ? : [Enter n]

      Target Instance is RAC (y/n) [y] : [Enter y]

      Target System Database Name : [Enter new desired global DB name, not a SID; motoGP global name was selected here]

      Do you want the target system to have the same port values as the source system (y/n) [y] ? : [Select yes or no]

      Provide information for the initial RAC node:

      Host name [ducati] : [Always need to change this value to the current public machine name, for example kawasaki]

      Virtual Host name [null] : [Enter the Clusterware VIP interconnect name, for example kawasaki-vip ]

      Instance number [1] : 1 [Enter 1, as this will always be the instance number when you are on the primary target node]

      Private interconnect name [kawasaki] [Always need to change this value; enter the private interconnect name, such as kawasaki-priv]

      Target System quorum disk location required for cluster manager and node monitor : /tmp [Legacy parameter; just enter /tmp]

      Target System cluster manager service port : 9998 [This is a default port used for CRS ]

      Target System Base Directory : [Enter the base directory that contains the new_oh_loc dir]

      Oracle OS User [oracle] : [Should default to correct current user; just hit enter]

      Oracle OS Group [dba] : [Should default to correct current group; just hit enter]

      Target System utl_file_dir Directory List : /usr/tmp [Specify an appropriate value for your requirements]

      Number of DATA_TOP’s on the Target System [2] : 1 [At present, you can only have one data_top with RAC-To-RAC cloning]

      Target System DATA_TOP Directory 1 : +APPS_RAC_DISK [The shared storage location; ASM diskgroup/NetApps NFS mount point/OCFS mount point]

      Do you want to preserve the Display [null] (y/n) ? : [Respond according to your requirements]

      New context path and file name [/s1/atgrac/racdb/appsutil/motoGP1_kawasaki.xml] : [Double-check proposed location, and amend if needed]
      Note: It is critical that the correct values are selected above: if you are uncertain, review the newly-written context file and compare it with values selected during source system migration to RAC (as per OracleMetalink Note 388577.1).
      When making comparisons, always ensure that any path differences between the source and target systems are understood and accounted for.
      5.1.4 Restore Database on Target System Primary Node
      Warning: It is NOT recommended to clone an E-Business Suite RAC enabled environments to the same host however if the source and target systems must be the same host, make certain the source system is cleanly shutdown and the datafiles moved to a temporarily inaccessible location prior to restoring/recovering the new target system.
      Failure to understand this warning could result in corrupt redo logs on the source system. Same host RAC cloning requires the source system to be down.
      Warning: In addition to same host RAC node cloning, it is also NOT recommended to attempt cloning E-Business Suite RAC enabled environments to a target system which can directly access source system dbf files (perhaps via an nfs shared mount). If the intended target file system has access to the to the source dbf files, corruption of redo log files can occur on the source system. It is also possible that corruption might occur if ANY dbf files exist on the new intended target file system which match the original source mount point [i.e. /foo/datafiles]. If existing datafiles on the target are in a file system location as is present on the source server [i.e. /foo/datafiles], shutdown the database which owns these datafiles.
      Failure to understand this warning could result in corrupt redo logs on the source system or any existing database on the target host, having a mount point the same as the original and perhaps unrelated source system. If unsure, shutdown any database which stores datafiles in a path which existed on the source system and in which datafiles were stored.
      Restore the database after the new ORACLE_HOME is configured.
      5.1.4.1 Run adclone.pl to Restore and Rename Database on New Target System
      Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run Rapid Clone (adclone.pl utility) with the following parameters:
      perl adclone.pl \
      java=[JDK 1.5 Location] \
      component=dbTier \
      mode=apply \
      stage=[ORACLE_HOME]/appsutil/clone \
      method=CUSTOM \
      dbctxtg=[Full Path to the Target Context File] \
      rmanstage=[Location of the Source RMAN dump files… i.e. RMAN_STAGE/data/stage] \
      rmantgtloc=[Shared storage loc for datafiles…ASM diskgroup / NetApps NFS mount / OCFS mount point]
      srcdbname=[Source RAC system GLOBAL name] \
      pwd=[APPS Password] \
      showProgressode
      Where:
      Parameter Usage
      java Full path to the directory where JDK 1.5 is installed.
      stage This parameter is static and refers to the newly-unzipped [ORACLE_HOME]/appsutil/clone directory.
      dbctxtg Full path to the new context file created by adclonectx.pl under [ORACLE_HOME]/appsutil.
      rmanstage Temporary location where you have placed database “image” files transferred from the source system to the new target host.
      rmantgtloc Base directory or ASM diskgroup location into which you wish the database (dbf) files to be extracted. The recreation process will create subdirectories of [GLOBAL_DB_NAME]/data, into which the dbf files will be placed. Only the shared storage mount point top level location needs be supplied.
      srcdbname Source system GLOBAL_DB_NAME (not the SID of a specific node). Refer to the source system context file parameter s_global_database_name. Note that no domain suffix should be added.
      pwd Password for the APPS user.
      Note: The directories and mount points selected for the rmanstage and rmantgtloc locations should not contain datafiles for any other databases. The presence of unrelated datafiles may result in very lengthy restore operations, and on some systems a potential hang of the adclone.pl restore command .
      Running the adclone.pl command may take several hours. From a terminal window, you can run:
      $ tail -f [ORACLE_HOME]/appsutil/log/$CONTEXT_NAME/ ApplyDatabase_[time].log
      This will display and periodically refresh the last few lines of the main log file (mentioned when you run adclone.pl), where you will see references to additional log files that can help show the current actions being executed.
      5.1.4.2 Verify TNS Listener has been started
      After the above process exits, and it has been confirmed that no errors were encountered, you will have a running database and TNS listener, with the new SID name chosen earlier.
      Confirm that the TNS listener is running, and has the appropriate service name format as follows:
      $ ps -ef | grep tns | awk ‘{ print $9}’
      The output from the above command should return a string of the form LISTENER_[hostname]. If does not, verify the listener.ora file in the $TNS_ADMIN location before continuing with the next steps: the listener must be up and running before executing AutoConfig.
      5.1.4.3 Run AutoConfig
      At this point, the new database is fully functional. However, to complete the configuration you must navigate to [ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME] and execute the following command to run AutoConfig:
      $ adautocfg.sh appspass=[APPS Password]
      5.2 Target System Secondary Node Configuration (Clone Additional Nodes)
      Follow the steps below to clone the secondary nodes (for example, Node 2) on to the target system.
      5.2.1 Uncompress the archived ORACLE_HOME transferred from the Source System
      Uncompress the source system ORACLE_HOME archive to a location matching that present on your target system primary node. The directory structure should match that present on the newly created target system primary node.
      $ tar -xvzf rac_db_oh.tgz
      5.2.2 Archive the [ORACLE_HOME]/appsutil directory structure from the new Primary Node
      Log in to the new target system primary node, and execute the following commands:
      $ cd [ORACLE_HOME]
      $ zip -r appsutil_node1.zip appsutil
      5.2.3 Copy appsutil_node1.zip to the Secondary Target Node
Transfer appsutil_node1.zip to the secondary target RAC node and expand it into [NEW ORACLE_HOME]:
      $ cd [NEW ORACLE_HOME]
      $ unzip -o appsutil_node1.zip
      5.2.4 Update pairsfile.txt for the Secondary Target Node
Alter the existing pairsfile.txt (copied from the first target node) and change the s_undo_tablespace parameter. As this is the second node, the correct value is UNDOTBS2. As an example, the [NEW_ORACLE_HOME]/appsutil/clone/pairsfile.txt would look like:
      s_undo_tablespace=[Or UNDOTBS(+1) for additional Nodes]
      s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
      s_db_oh=[Location of new ORACLE_HOME]
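For illustration, a completed pairsfile.txt for the second node in this chapter's two-node example might look as follows (the ORACLE_HOME path is an assumed value and must match your own installation):
s_undo_tablespace=UNDOTBS2
s_dbClusterInst=2
s_db_oh=/u01/app/oracle/product/11.1.0/db_1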
      5.2.5 Create a Context File for the Secondary Node
      Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run the adclonectx.pl utility as follows:
      perl adclonectx.pl \
      contextfile=[Path to Existing Context File from the First Node] \
      template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
      pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
      addnode
      Where:
      Parameter Usage
      contextfile Full path to the existing context file from the first (primary) node.
      template Full path to the existing database context file template.
pairsfile Full path to the pairsfile created in the previous step.
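For example, using the hostnames and SIDs from this chapter, an assumed ORACLE_HOME of /u01/app/oracle/product/11.1.0/db_1, and a node 1 context file assumed to follow the [SID]_[hostname].xml convention, the invocation might look like:
$ cd /u01/app/oracle/product/11.1.0/db_1/appsutil/clone/bin
$ perl adclonectx.pl \
  contextfile=/u01/app/oracle/product/11.1.0/db_1/appsutil/motoGP1_kawasaki.xml \
  template=/u01/app/oracle/product/11.1.0/db_1/appsutil/template/adxdbctx.tmp \
  pairsfile=/u01/app/oracle/product/11.1.0/db_1/appsutil/clone/pairsfile.txt \
  addnode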
Several of the interview prompts are the same as on Node 1. However, there are some new questions specific to the "addnode" option used on the second node.
      Note: When answering the questions below, review your responses carefully before entering them. The rest of the inputs (not shown) are the same as those encountered during the context file creation on the initial node (primary node).
      Host name of the live RAC node : kawasaki [enter appropriate value if not defaulted]

      Domain name of the live RAC node : yourdomain.com [enter appropriate value if not defaulted]

      Database SID of the live RAC node : motoGP1 [enter the individual SID, NOT the Global DB name]

      Listener port number of the live RAC node : 1548 [enter the port # of the Primary Target Node you just created]

      Provide information for the new Node:

      Host name : suzuki [enter appropriate value if not defaulted, like suzuki]

Virtual Host name : suzuki-vip [enter the Clusterware virtual hostname (VIP), like suzuki-vip.yourdomain.com]

      Instance number : 2 [enter the instance # for this current node]

      Private interconnect name : suzuki-priv [enter the private interconnect name, like suzuki-priv]

      Current Node:

      Host Name : suzuki

      SID : motoGP2

      Instance Name : motoGP2

      Instance Number : 2

      Instance Thread : 2

      Undo Table Space: UNDOTBS2 [enter value earlier added to pairsfile.txt, if not defaulted]

      Listener Port : 1548

      Target System quorum disk location required for cluster manager and node monitor : [legacy parameter, enter /tmp]
Note: At the conclusion of these "interview" questions, look carefully at the generated context file and ensure that its values are consistent with those entered during context file creation on Node 1. The values should be almost identical; a small but important exception is that the local instance name ends in 2 rather than 1.
      5.2.6 Configure NEW ORACLE_HOME
      Run the commands below to move to the correct directory and continue the cloning process:
      $ cd [NEW ORACLE_HOME]/appsutil/clone/bin
      $ perl adcfgclone.pl dbTechStack [Full path to the database context file created in previous step]
Note: At the conclusion of this command, you will receive a console message indicating that the process exited with status 1 and that the addlnctl.sh script failed to start a listener named [SID]. This is expected, because [SID] is not the correct listener name in this RAC configuration. Start the proper listener by executing the following command:

      [NEW_ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME]/addlnctl.sh start LISTENER_[hostname].

      This command will start the correct (RAC-specific) listener with the proper service name.
      5.2.7 Source the new environment file in the ORACLE_HOME
      Run the commands below to move to the correct directory and source the environment:
      $ cd [NEW ORACLE_HOME]
$ . ./[CONTEXT_NAME].env
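For example, assuming the new context name is motoGP2_suzuki (a hypothetical value following the [SID]_[hostname] convention), you would source and then verify the environment as follows:
$ . ./motoGP2_suzuki.env
$ echo $ORACLE_SID
motoGP2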
      5.2.8 Modify [SID]_APPS_BASE.ora
      Edit the [SID]_APPS_BASE.ora file and change the control file parameter to reflect the correct control file location on the shared storage. This will be the same value as in the [SID]_APPS_BASE.ora on the target system primary node which was just created.
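As an illustration only (the ASM diskgroup and control file names below are assumptions; use the actual values from the primary node's [SID]_APPS_BASE.ora), the entry might look similar to:
control_files = ('+DATA/motoGP/controlfile/control01.ctl', '+DATA/motoGP/controlfile/control02.ctl')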
      5.2.9 Start Oracle RAC Database
      Start the database using the following commands:
      $ sqlplus /nolog
      SQL> connect / as sysdba
      SQL> startup
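Optionally, once the instance is up, you can confirm that both RAC instances have joined the cluster by querying gv$instance (the instance and host names referred to here are the example values used in this chapter):
SQL> SELECT instance_name, host_name, status FROM gv$instance;
The query should return one OPEN row per RAC instance, for example motoGP1 on kawasaki and motoGP2 on suzuki.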
      5.2.10 Execute AutoConfig
      Run AutoConfig to generate the proper listener.ora and tnsnames.ora files:
      $ cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME
      $ ./adautocfg.sh appspass=[APPS Password]
      5.3 Carry Out Target System (Primary Node) Final Oracle RAC Configuration Tasks
      5.3.1 Recreate TNSNAMES and LISTENER.ORA
Log in again to the target primary node (Node 1) and run AutoConfig to perform the final Oracle RAC configuration and regenerate the listener.ora and tnsnames.ora files. (The FND_NODES table did not contain the second node's hostname until AutoConfig was run on the secondary target RAC node.)
      $ cd $ORACLE_HOME/appsutil/scripts/[CONTEXT_NAME]
      $ ./adautocfg.sh appspass=[APPS Password]
Note: This execution of AutoConfig on the primary target RAC Node 1 adds the second RAC node's connection information to the first node's tnsnames.ora, so that listener load balancing can occur. If you have more than two nodes in your new target system cluster, repeat Sections 5.2 and 5.3 for each subsequent node.
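As an optional sanity check after this final AutoConfig run, you can connect to the database as the APPS user and confirm that both database nodes are now registered in FND_NODES (the column list below is a representative subset):
SQL> SELECT node_name, status, support_db FROM fnd_nodes;
In this chapter's example, both kawasaki and suzuki should appear, each flagged as a database node.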
      Section 6: Target System Applications Tier Cloning for RAC
      The target system Applications Tier may be located in any one of these locations:
      • Primary target database node
      • Secondary target database node
• An independent machine that is not one of the target system RAC database nodes
      • Shared between two or more machines
Because of the complexities that might otherwise arise, configure the applications tier to connect to a single database instance first. After it has been verified against one of the two target system RAC nodes, the context variables can be changed to enable JDBC and TNS listener load balancing (see Section 6.2).
      6.1 Clone the Applications Tier
In order to clone the applications tier, follow the standard application-node steps in Sections 2 and 3 of OracleMetalink Note 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone. This includes the adpreclone steps, copying the files to the target, the configuration steps, and the finishing tasks.
Note: On the applications tier, during the adcfgclone.pl execution, you will be asked for a database to which the applications tier services should connect. Enter the information specific to a single target system RAC node (such as the primary node). On successful completion of this step, the applications node services will be started, and you should be able to log in and use the new target system Applications environment.
      6.2 Configure Application Tier JDBC and Listener Load Balancing
      Reconfigure the applications node context variables such that database listener/instance load balancing can occur.
      Note: The following details have been extracted from OracleMetalink Note 388577.1 for your convenience. Consult this note for further information.
      Implement load balancing for the Applications database connections:
a. Run the context editor (through Oracle Applications Manager) and set the values of "Tools OH TWO_TASK" (s_tools_two_task), "iAS OH TWO_TASK" (s_weboh_twotask), and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias).
b. To load-balance the Forms-based Applications database connections, set the value of "Tools OH TWO_TASK" to point to the [database_name]_balance alias generated in the tnsnames.ora file (an example of this alias is shown after this list).
c. To load-balance the self-service Applications database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC Connect Alias" to point to the same [database_name]_balance alias.
      Execute AutoConfig by running the command:
      cd $ADMIN_SCRIPTS_HOME; ./adautocfg.sh
      d. After successful completion of AutoConfig, restart the Applications tier processes via the scripts located in $ADMIN_SCRIPTS_HOME.
e. Ensure that the value of the profile option "Application Database ID" is set to the .dbc file name generated in $FND_SECURE.
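For reference, the [database_name]_balance alias generated by AutoConfig in tnsnames.ora has roughly the following form (the hostnames, port, and service name below are the example values used in this chapter, not values to copy literally):
motoGP_balance=
    (DESCRIPTION=
        (ADDRESS_LIST=
            (LOAD_BALANCE=YES)
            (FAILOVER=YES)
            (ADDRESS=(PROTOCOL=tcp)(HOST=kawasaki-vip.yourdomain.com)(PORT=1548))
            (ADDRESS=(PROTOCOL=tcp)(HOST=suzuki-vip.yourdomain.com)(PORT=1548))
        )
        (CONNECT_DATA=
            (SERVICE_NAME=motoGP)
        )
    )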
      Section 7: Advanced RAC Cloning Scenarios
      7.1 Cloning the Database Separately
      In certain cases, customers may require the RAC database to be recreated separately, without using the full lock-step mechanism employed during a regular E-Business Suite RAC RapidClone scenario.
      This section documents the steps needed to allow for manual creation of the target RAC database control files (or the reuse of existing control files) within the Rapid Clone process.
Unless otherwise noted, all commands are specific to the primary target database instance.
Follow ONLY steps 1 and 2 in Section 2: Cloning Tasks of OracleMetalink Note 406982.1, then continue with the steps below to complete cloning the database separately.
a. Log on to the primary target system host as the oracle UNIX user.
b. Configure the [RDBMS ORACLE_HOME] as described above in Section 5: RAC-to-RAC Cloning; execute ONLY steps 5.1.1, 5.1.2 and 5.1.3.
c. Create the target database control files manually (if needed), or modify the existing control files as needed to define the datafile, redo log, and archive log locations, along with any other relevant and required settings.
      In this step, you copy and recreate the database using your preferred method, such as RMAN restore, Flash Copy, Snap View, or Mirror View.
      d. Start the new target RAC database in open mode.
      e. Run the library update script against the RAC database.
      $ cd [RDBMS ORACLE_HOME]/appsutil/install/[CONTEXT_NAME]
      $ sqlplus “/ as sysdba” @adupdlib.sql [libext]
      Where [libext] should be set to ‘sl’ for HP-UX, ‘so’ for any other UNIX platform, or ‘dll’ for Windows.
f. Configure the primary target database.
      The database must be running and open before performing this step.
      $ cd [RDBMS ORACLE_HOME]/appsutil/clone/bin
      $ perl adcfgclone.pl dbconfig [Database target context file]
      Where Database target context file is: [RDBMS ORACLE_HOME]/appsutil/[Target CONTEXT_NAME].xml.
      Note: The dbconfig option will configure the database with the required settings for the new target, but it will not recreate the control files.
g. When the above tasks (a-f) are completed on the primary target database instance, see "5.2 Target System Secondary Node Configuration (Clone Additional Nodes)" to configure any secondary database instances.
      7.2 Additional Advanced RAC Cloning Scenarios
      Rapid Clone is only certified for RAC-to-RAC Cloning. Addition or removal of RAC nodes during the cloning process is not currently supported.

      For complete details on the certified RAC scenarios for E-Business Suite Cloning, please refer to Document 783188.1 available in OracleMetaLink.
      Appendix A: Configuring Oracle Clusterware on the Target System Database Nodes
      Associating Target System Oracle RAC Database instances and listeners with Clusterware (CRS)
      Add target system database, instances and listeners to CRS by running the following commands as the owner of the CRS installation:
      $ srvctl add database -d [database_name] -o [oracle_home]
      $ srvctl add instance -d [database_name] -i [instance_name] -n [host_name]
$ srvctl add service -d [database_name] -s [service_name]
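For example, using the database and hostnames from this chapter and an assumed ORACLE_HOME of /u01/app/oracle/product/11.1.0/db_1, the registration and startup might look like:
$ srvctl add database -d motoGP -o /u01/app/oracle/product/11.1.0/db_1
$ srvctl add instance -d motoGP -i motoGP1 -n kawasaki
$ srvctl add instance -d motoGP -i motoGP2 -n suzuki
$ srvctl start database -d motoGP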
Note: For detailed instructions regarding the installation and usage of Oracle Clusterware software as it relates to Real Application Clusters, see the following article:

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide, 10g Release 2 (10.2)
      Change Log
      Date Description
      Mar 4, 2010 • Added further warning details to section 5.1.4 regarding RMAN limitation
      Feb 15, 2010 • Numerous small updates and clarifications as suggested by members of support
      • Cosmetic changes to adjust formatting issues
      • Added “same host cloning” warning
      • Added Section 7.1 cloning Database separately
      Feb 14, 2010 • Removed required patch 7164226 to point to most current RC CUP
May 12, 2009 • Removed references to 5767290 and replaced them with 7164226
      Mar 02, 2009 • Updates made for supported RAC scenarios and reference to Note 783188.1
      Jul 28, 2008 • Edited in readiness for publication
      Jul 22, 2008 • Formatting updates
      Jun 18, 2008 • Added RAC to Single Instance Scale-Down
      May 27, 2008 • Added ASM specific details
      May 16, 2008 • Initial internal release
      Apr 4, 2008 • Document creation
      Note 559518.1 by Oracle E-Business Suite Development
      Copyright 2008, Oracle

