Guenadi N Jilevski's Oracle BLOG

Oracle RAC, DG, EBS, DR and HA DBA BLOG

Oracle RAC 11g services

An Oracle 11g RAC cluster consists of multiple nodes, each running the operating system, the RDBMS, and Oracle Clusterware. Since Oracle 11g R2, ASM and Oracle Clusterware are installed together in a single Grid Infrastructure home, and the OCR can be stored in ASM.

Services

When a user connects to the database, it is preferable to connect through the service layer by specifying the service in the connection string. We can create additional services to logically separate the needs of different clients and applications without tying them to specific nodes. Let's look at the main aspects of service-level deployment:

1.      Using Oracle Services

2.      Default Service Connections

3.      Connection Load Balancing

Oracle Services: This is a perfect way to manage applications or subsets of applications. For example, OLTP users and DWH/batch operations can each have their own service. Service-level requirements should be the same for all users and applications assigned to the same service. When defining a service, you can specify which instances will support it: these become the PREFERRED instances, while the ones that provide failover support are known as the AVAILABLE instances.

When you specify PREFERRED instances, a particular set of instances is brought in to support that service and its subset of applications. Should one or more of these PREFERRED instances fail, failover takes place and the service is moved to the AVAILABLE instances. Should the failed instances come back online, the service does not automatically fall back to the PREFERRED instances, simply because the AVAILABLE instances have met the service-level requirement and are providing high availability; there is no need to incur another outage just to move the service back. Do note, however, that you can automate failback to the original PREFERRED instances by using FAN callouts, or perform it manually with srvctl relocate service.
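As a minimal sketch of defining such a service with srvctl (the database name RACDB, instances RACDB1/RACDB2 and service name OLTP are hypothetical):

srvctl add service -d RACDB -s OLTP -r RACDB1 -a RACDB2
srvctl start service -d RACDB -s OLTP
# After a failover, manually relocate the service back to the preferred instance
srvctl relocate service -d RACDB -s OLTP -i RACDB2 -t RACDB1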

Also, note that resource profiles are automatically created when you define a service. A resource profile takes care of the manageability of the service and defines its dependencies on the instance and the database. Because of these dependencies, stopping the database also stops the instances and services that depend on it, in the correct order, so use caution when you bring a database down.

Services are integrated with the Resource Manager, enabling us to restrict the resources used by a service. Consumer groups are mapped to services, so users connecting through a service automatically become members of the corresponding consumer group. AWR (Automatic Workload Repository) lets us monitor performance per service.
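As a hedged illustration of this mapping (the service OLTP and consumer group OLTP_GROUP are hypothetical names and must already exist), the DBMS_RESOURCE_MANAGER package maps a service to a consumer group:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- Sessions connecting through the OLTP service join the OLTP_GROUP consumer group
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.SERVICE_NAME,
    value          => 'OLTP',
    consumer_group => 'OLTP_GROUP');
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/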

Oracle Net Services provides connection load balancing; it is configured by setting the connection load balancing goal, CLB_GOAL, on the service. It is also possible to specify a single TAF policy for all users of a specific service by defining attributes such as FAILOVER_METHOD and FAILOVER_TYPE.
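A minimal sketch of setting these attributes with the DBMS_SERVICE package (the service name and the retry/delay values are illustrative):

BEGIN
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name     => 'OLTP',
    clb_goal         => DBMS_SERVICE.CLB_GOAL_SHORT,          -- connection load balancing goal
    failover_method  => DBMS_SERVICE.FAILOVER_METHOD_BASIC,   -- TAF policy
    failover_type    => DBMS_SERVICE.FAILOVER_TYPE_SELECT,
    failover_retries => 180,
    failover_delay   => 5);
END;
/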

December 13, 2009 Posted by | oracle | Leave a comment

Oracle 11g cache fusion

A RAC database system has two important services: the Global Cache Service (GCS) and the Global Enqueue Service (GES). Each is implemented as a collection of background processes. Together, these two services manage the entire Cache Fusion process, resource transfers, and resource escalations among the instances.

Global Resource Directory

GES and GCS together maintain a Global Resource Directory (GRD) to record information about the resources and the enqueues. The GRD resides in memory and is distributed across all instances, each of which manages a portion of the directory. This distributed nature is a key factor in the fault tolerance of RAC.

The Global Resource Directory (GRD) is the internal database that records and stores the current status of the data blocks. Whenever a block is transferred out of a local cache to another instance's cache, the GRD is updated. The following resource information is available in the GRD:

* Data Block Identifiers (DBI)

* Location of most current version

* Modes of the data blocks: (N)Null, (S)Shared, (X)Exclusive

* The Roles of the data blocks (local or global) held by each instance

* Buffer caches on multiple nodes in the cluster

Global Cache Service (LMSx)

The GCS tracks the location, status (mode and role) of data blocks, and the access privileges of all instances. Oracle uses the GCS for cache coherency when the current version of a data block is in one instance’s buffer cache, and another instance requests that block for modification. It is also used for reading blocks from other instances.

Following the first read of exclusive resources, multiple transactions running in a single RAC instance can share access to a set of data blocks without involvement of the GCS as long as the block is not transferred out of the local cache. This is similar to non-RAC Oracle. If the block has to be transferred out of the local cache, then the GCS updates the Global Resource Directory in the shared pool.

GCS and Cache Coherency

The GCS manages all types of data blocks. Cache coherency is maintained through the GCS by requiring that instances acquire a resource (lock or enqueue on a block) cluster-wide before modifying or reading a database block. The GCS is used to synchronize global cache access, allowing only one instance to modify a block at any single point in time. The GCS, through the RAC wide Global Resource Directory, ensures that the status of data blocks cached in any mode in the cluster is globally visible and maintained.

Oracle’s RAC has multi-versioning architecture. This multi-versioning architecture distinguishes between current data blocks and one or more consistent read (CR) versions of a block. A current block contains changes for all committed and yet-to-be-committed transactions. A consistent read (CR) version of a block represents a consistent snapshot of the data at a previous point in time. A data block can reside in many buffer caches under the auspices of shared resources.

In Oracle RAC, applying rollback segment information to current blocks produces consistent read versions of a block. Both the current and consistent read blocks are managed by the GCS.

To transfer data blocks among database caches, buffers are shipped by means of the high speed IPC interconnect. Disk writes are only required for cache replacement. A past image (PI) of a block is kept in memory before the block is sent if it is a dirty (modified) block. In the event of failure, Oracle reconstructs the current version of the block by reading the PI blocks.

GCS Resource Modes and Roles

The GCS global resource directory tracks resource blocks transferred throughout the RAC system. The same block can exist in multiple caches as a result of block transfers. A block is held in different modes depending on whether a resource holder (instance) intends to modify the data or merely read it.

It is important to understand that a RAC resource is identified by two factors:

Mode – The modes are null, shared, and exclusive.

Role – The roles are local and global

Resource modes are generally set by the holder, as part of a request for a resource. The resource modes determine whether the holder can modify the block. The modes of a RAC resource are defined as:

Null – Identified with an N. Holding a resource at this level conveys no access rights.

Shared – Identified with an S. This signifies a protected read. When a resource is held at this level, a process cannot modify it and multiple processes can read the same resource.

Exclusive – Identified with an X. This grants the holding process exclusive access. Other processes cannot write to the resource, but consistent reads of older block versions are still possible from past images (PIs).

Resource roles describe how widely a block is cached: local or global. A resource has the local role when only one instance holds it, for example after a block is first read into a single buffer cache; once a dirty block has been transferred to another instance's cache, its role becomes global.

Some Performance metrics specific to the RAC environment

Cache Coherency

What exactly do we mean by cache coherency? An Oracle RAC environment needs an additional set of metrics compared to a regular single-instance Oracle installation, which I sometimes refer to as a "Single-Node RAC" because someday that Oracle application will also grow and need a ticket to "RACdom". A typical production DBA, responsible for the uptime and upkeep of a RAC database, needs more than just some AWR runs: he will need to measure the health of the high-speed interconnect (HSI) network interfaces, and he will have to monitor and diagnose the traffic volume across the nodes and the response times. A typical high-intensity OLTP environment can keep you pretty busy. To measure the traffic we will concentrate on two categories:

GCS (Global Cache Services)

GES (Global Enqueue Services)

So what are they?

Global Cache Service

The set of processes that implements Cache Fusion. It maintains the block mode for blocks in the global role and is responsible for block transfers between instances. The Global Cache Service employs various background processes, such as the Global Cache Service Processes (LMSn) and the Global Enqueue Service Daemon (LMD).

It behaves more or less like your buffer cache, but it acts globally across the nodes, and it is an integral part of the Cache Fusion concept. A buffer cache obviously holds data blocks. Simply put, coherency in the global buffer cache is maintained by ensuring that whenever an instance attempts to modify a database block, a global lock is acquired. The asking instance will then have both a past copy of the block (for recovery purposes) and the current version of the block containing both committed and uncommitted transactions. Should another node come asking for that block, it is the GCS's responsibility to do a "block version lookup" on the node currently holding the global lock on the block. The LMSn processes are crucial to the successful operation of the GCS, performing the block version lookups, block mode management, and so on.

Global Enqueue Service

A service that coordinates enqueues that are shared globally.

The blocks in your RAC environment do most of the work themselves, but there is a crucial area where GES, the Global Enqueue Service, comes in. Seamless coordination across the nodes is crucial for RAC's operation. The GES is primarily responsible for maintaining coherency in the dictionary and library caches. The dictionary cache holds data dictionary information in each node's SGA (System Global Area) for quick lookup and access. Any dictionary change committed on one node needs to be synchronized across the dictionary caches of all nodes in the RAC environment. The GES makes sure that the changes remain consistent across the nodes and that there are no discrepancies. With the same directive, locks must be created and maintained across the nodes, and GES must ensure that there are no deadlocks between requesting nodes contending for the same objects. The LMON, LCK, and LMD processes work in tandem to make the GES operate smoothly and seamlessly.

GV$ Views

Obviously, the meat of the whole equation is: "Where are my RAC views?" A RAC environment has additional views known as global views. A typical single-instance view is a V$ view; for RAC you also have the corresponding GV$ views. These views have an additional column, INST_ID, to identify the instance within the RAC environment, so querying a GV$ view on a typical 4-node RAC returns each of the four instances' own data. You can, of course, query data for individual instances from any node. To get started, try this:

SQL> select * from gv$sysstat where name like '%gcs %';

This returns the GCS message statistics for every instance. If a value is inconsistent across nodes, or if huge differences are apparent, it might be time to investigate.
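For a per-instance comparison, a hedged example using two statistics that exist in gv$sysstat (check the exact statistic names in your version):

select inst_id, name, value
  from gv$sysstat
 where name in ('gcs messages sent', 'ges messages sent')
 order by inst_id, name;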

The Oracle RAC processes and their identifiers are as follows:

ACMS: Atomic Controlfile to Memory Service

In an Oracle RAC environment, the ACMS per-instance process is an agent that contributes to ensuring a distributed SGA memory update is either globally committed on success or globally aborted if a failure occurs.

GTX0-j: Global Transaction Process

The GTX0-j process provides transparent support for XA global transactions in a RAC environment. The database autotunes the number of these processes based on the workload of XA global transactions.

LMON: Global Enqueue Service Monitor

The LMON process monitors global enqueues and resources across the cluster and performs global enqueue recovery operations.

LMD: Global Enqueue Service Daemon

The LMD process manages incoming remote resource requests within each instance.

LMS: Global Cache Service Process

The LMS process maintains records of the data file statuses and each cached block by recording information in a Global Resource Directory (GRD). The LMS process also controls the flow of messages to remote instances and manages global data block access and transmits block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature.

LCK0: Instance Enqueue Process

The LCK0 process manages non-Cache Fusion resource requests such as library and row cache requests.

RMSn: Oracle RAC Management Processes

The RMSn processes perform manageability tasks for Oracle RAC. Tasks accomplished by an RMSn process include creation of resources related to Oracle RAC when new instances are added to the clusters.

RSMN: Remote Slave Monitor manages background slave process creation and communication on remote instances. These background slave processes perform tasks on behalf of a coordinating process running in another instance.
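Most of these processes are visible from the operating system; a quick, hedged check (process-name patterns can vary slightly by version):

# List the RAC-specific background processes of the local instances
ps -ef | grep -E 'ora_(lmon|lmd|lms|lck|acms|gtx|rms|rsmn)' | grep -v grep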

December 13, 2009 | oracle

Oracle Cluster Registry (OCR) considerations

The purpose of the Oracle Cluster Registry (OCR) is to hold cluster and database configuration information for RAC and Cluster Ready Services (CRS), such as the cluster node list, the mapping of cluster database instances to nodes, and CRS application resource profiles. As such, the OCR contains all configuration information for the Oracle 11g Clusterware, including the network communication settings on which the Clusterware daemons (background processes) listen, the cluster interconnect information for the 11g RAC network configuration, and the location of the 11g voting disk.
In an Oracle 11g RAC environment, the OCR must be stored on shared storage: raw devices, OCFS/OCFS2 (Oracle Cluster File System), approved NFS, or another supported cluster file system. One quick way to check the status and details of the OCR with the 11g Clusterware is to issue the ocrcheck command, as shown below:

[oracle@raclinux1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          2
Total space (kbytes)     :     297084
Used space (kbytes)      :       3852
Available space (kbytes) :     293232
ID                       : 2007457116
Device/File Name         : /dev/raw/raw5
Device/File integrity check succeeded
Device/File Name         : /dev/raw/raw6
Device/File integrity check succeeded

Cluster registry integrity check succeeded
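Beyond ocrcheck, the ocrconfig utility can list the automatic OCR backups and take a logical export; a quick sketch (the export path is illustrative, and the export must be run as root):

# As root
ocrconfig -showbackup
ocrconfig -export /u01/backup/ocr_export.dmp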

December 13, 2009 | oracle

Prerequisites for CRS install for Oracle 11g R2

Once the basic installation is complete, install the following packages whilst logged in as the root user. This includes the 64-bit and 32-bit versions of some packages.
# From Enterprise Linux 5 DVD
cd /media/cdrom/Server
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-0.*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

Oracle Installation Prerequisites

Perform the following steps whilst logged into the raclinux1 machine as the root user.

Make sure the shared memory file system is big enough for Automatic Memory Management (AMM) to work.
# umount tmpfs
# mount -t tmpfs shmfs -o size=1500m /dev/shm
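To make this size persistent across reboots, the corresponding "/etc/fstab" entry can be amended; a minimal sketch assuming the same 1500m size:

shmfs  /dev/shm  tmpfs  size=1500m  0 0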
The “/etc/hosts” file must contain the following information.
127.0.0.1       localhost.localdomain   localhost
# Public
92.148.2.101   raclinux1.gj.com        raclinux1
92.148.2.102   raclinux2.gj.com        raclinux2
# Private
10.10.10.101   raclinux1-priv.gj.com   raclinux1-priv
10.10.10.102   raclinux2-priv.gj.com   raclinux2-priv
# Virtual
92.148.2.111   raclinux1-vip.gj.com    raclinux1-vip
92.148.2.112   raclinux2-vip.gj.com    raclinux2-vip
# SCAN
92.148.2.201   rac-cluster.gj.com rac-cluster
Note: the SCAN address should not really be defined in the hosts file. Instead, it should be defined in DNS to round-robin between three addresses on the same subnet as the public IPs. For this installation, we will compromise and use the hosts file.

Add or amend the following lines to the “/etc/sysctl.conf” file.

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
Run the following command to change the current kernel parameters so that they take effect without rebooting the system.
/sbin/sysctl -p
Add the following lines to the “/etc/security/limits.conf” file to ensure that the hard and soft shell limits for the oracle Linux user are met.
oracle               soft    nproc   2047
oracle               hard    nproc   16384
oracle               soft    nofile  1024
oracle               hard    nofile  65536
Add the following line to the "/etc/pam.d/login" file, if it does not already exist, so that the limits specified in the /etc/security/limits.conf file are enforced by PAM at login.

session    required     pam_limits.so
Disable secure linux by editing the “/etc/selinux/config” file, making sure the SELINUX flag is set as follows.
SELINUX=disabled
Alternatively, this alteration can be done using the GUI tool (System > Administration > Security Level and Firewall). Click on the SELinux tab and disable the feature.

SELINUX has problems with ASMlib, Oracle 11g RAC with iSCSI, OEM Grid Control, CRS and some Oracle 11g R2 libraries.

Either configure NTP to slew the clock, or make sure it is deconfigured so that the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the clocks of the RAC nodes. In this case we will deconfigure NTP.
# service ntpd stop
Shutting down ntpd:                                        [  OK  ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.org
# rm /var/run/ntpd.pid
If you are using NTP, you must add the "-x" option to the following line in the "/etc/sysconfig/ntpd" file.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
Then restart NTP.
# service ntpd restart
Create the new groups and users.
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle
Create the directories in which the Oracle software will be installed.
mkdir -p  /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01/
Login as the oracle user and add the following lines at the end of the .bash_profile file.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=raclinux1.gj.com; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=RACDB1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
Add the following commands to the /etc/rc.local file for all the disks that will be used by CRS and Oracle RAC.
chown oracle:oinstall /dev/sdb1
chown oracle:oinstall /dev/sdc1
chown oracle:oinstall /dev/sdd1
chown oracle:oinstall /dev/sde1
chown oracle:oinstall /dev/sdf1
chmod 650 /dev/sdb1
chmod 650 /dev/sdc1
chmod 650 /dev/sdd1
chmod 650 /dev/sde1
chmod 650 /dev/sdf1

Edit the /home/oracle/.bash_profile file on the raclinux2 node to correct the ORACLE_SID and ORACLE_HOSTNAME values.
ORACLE_SID=RACDB2; export ORACLE_SID
ORACLE_HOSTNAME=raclinux2.gj.com; export ORACLE_HOSTNAME
Start the raclinux2 machine and restart the raclinux1 machine. When both nodes have started, check that they can both ping all the public and private IP addresses using the following commands.
ping -c 3 raclinux1
ping -c 3 raclinux1-priv
ping -c 3 raclinux2
ping -c 3 raclinux2-priv
At this point the virtual IP addresses defined in the /etc/hosts file will not work, so don’t bother testing them.

Configure SSH on each node in the cluster. Log in as the “oracle” user and perform the following tasks on each node.
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa # Accept the default settings.
The RSA public key is written to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file.

Log in as the “oracle” user on raclinux1, generate an “authorized_keys” file on raclinux1 and copy it to raclinux2 using the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys raclinux2:/home/oracle/.ssh/
Next, log in as the “oracle” user on raclinux2 and perform the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys raclinux1:/home/oracle/.ssh/
The “authorized_keys” file on both servers now contains the public keys generated on all RAC nodes.

To enable SSH user equivalency on the cluster member nodes issue the following commands on each node.
ssh raclinux1 date
ssh raclinux2 date
ssh raclinux1.gj.com date
ssh raclinux2.gj.com date
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
You should now be able to SSH and SCP between servers without entering passwords.

December 13, 2009 | oracle

Prerequisites for CRS install for Oracle 11gR1 11.1.0.6

Once the basic installation is complete, install the following packages while logged in as the root user.
# From Enterprise Linux 5 DVD
cd /media/cdrom/Server
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh glibc-headers*
rpm -Uvh glibc-devel-2.*
rpm -Uvh libgomp*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
rpm -Uvh sysstat-7.*
cd /
eject
Oracle Installation Prerequisites
Perform the following steps whilst logged into the raclinux1 virtual machine as the root user.

The /etc/hosts file must contain the following information.
127.0.0.1       localhost.localdomain   localhost
# Public
92.148.2.101   raclinux1.gj.com        raclinux1
92.148.2.102   raclinux2.gj.com        raclinux2
#Private
10.10.10.101   raclinux1-priv.gj.com   raclinux1-priv
10.10.10.102   raclinux2-priv.gj.com   raclinux2-priv
#Virtual
92.148.2.111   raclinux1-vip.gj.com    raclinux1-vip
92.148.2.112   raclinux2-vip.gj.com    raclinux2-vip
Add the following lines to the /etc/sysctl.conf file.

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
Run the following command to change the current kernel parameters in order for them to take effect without rebooting the system.
/sbin/sysctl -p
Add the following lines to the /etc/security/limits.conf file to ensure hard and soft shell limits for the oracle Linux user are met.
oracle               soft    nproc   2047
oracle               hard    nproc   16384
oracle               soft    nofile  1024
oracle               hard    nofile  65536
Add the following line to the /etc/pam.d/login file, if it does not already exist, so that the limits specified in the /etc/security/limits.conf file are enforced by PAM at login.
session    required     pam_limits.so
Disable secure linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows.
SELINUX=disabled
Alternatively, this alteration can be done using the GUI tool (System > Administration > Security Level and Firewall). Click on the SELinux tab and disable the feature. SELinux has problems with ASMLib, Oracle 11g RAC with iSCSI, OEM Grid Control, CRS, and some Oracle 11g R1 libraries.

Create the new groups and users.
groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin

useradd -u 500 -g oinstall -G dba,oper,asmadmin oracle
passwd oracle
Create the directories in which the Oracle software will be installed. Note that the CRS home should not be located under $ORACLE_BASE, because its directories end up owned by the root user: the root.sh script run at the end of the CRS installation changes the ownership of /u01/crs to root.
mkdir -p /u01/crs/oracle/product/11.1.0/crs
mkdir -p /u01/app/oracle/product/11.1.0/db_1
chown -R oracle:oinstall /u01
Login as the oracle user and add the following lines at the end of the .bash_profile file.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=raclinux1.gj.com; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1; export ORACLE_HOME
ORACLE_SID=RACDB1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

Add the following commands to the /etc/rc.local file for all the disks that will be used by CRS and Oracle RAC.
chown oracle:oinstall /dev/sdb1
chown oracle:oinstall /dev/sdc1
chown oracle:oinstall /dev/sdd1
chown oracle:oinstall /dev/sde1
chown oracle:oinstall /dev/sdf1
chmod 650 /dev/sdb1
chmod 650 /dev/sdc1
chmod 650 /dev/sdd1
chmod 650 /dev/sde1
chmod 650 /dev/sdf1

Edit the /home/oracle/.bash_profile file on the raclinux2 node to correct the ORACLE_SID and ORACLE_HOSTNAME values.
ORACLE_SID=RACDB2; export ORACLE_SID
ORACLE_HOSTNAME=raclinux2.gj.com; export ORACLE_HOSTNAME
Start the raclinux2 machine and restart the raclinux1 machine. When both nodes have started, check that they can both ping all the public and private IP addresses using the following commands.
ping -c 3 raclinux1
ping -c 3 raclinux1-priv
ping -c 3 raclinux2
ping -c 3 raclinux2-priv
At this point the virtual IP addresses defined in the /etc/hosts file will not work, so don’t bother testing them.

Configure SSH on each node in the cluster. Log in as the “oracle” user and perform the following tasks on each node.
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa # Accept the default settings.
The RSA public key is written to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file.

Log in as the “oracle” user on raclinux1, generate an “authorized_keys” file on raclinux1 and copy it to raclinux2 using the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys raclinux2:/home/oracle/.ssh/
Next, log in as the “oracle” user on raclinux2 and perform the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys raclinux1:/home/oracle/.ssh/
The “authorized_keys” file on both servers now contains the public keys generated on all RAC nodes.

To enable SSH user equivalency on the cluster member nodes issue the following commands on each node.
ssh raclinux1 date
ssh raclinux2 date
ssh raclinux1.gj.com date
ssh raclinux2.gj.com date
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
You should now be able to SSH and SCP between servers without entering passwords.

December 13, 2009 | oracle

Review and Resolve Manual Configuration Changes in Oracle EBS 12R 12.1.1 on Oracle RDBMS 11g

The adchkcfg utility verifies the context file. A detailed explanation of the adchkcfg utility's functionality follows.

  • The Check Config utility (adchkcfg) is located in the following locations:
    Application tier: <AD_TOP>/bin
    Database tier: <RDBMS ORACLE_HOME>/appsutil/bin
  • Check the AutoConfig configuration files by executing the following command:
  • On UNIX
    adchkcfg.sh contextfile=<CONTEXT_FILE>
  • This script will generate both html and text reports that provide information about all file changes, profile option changes and other important database updates that will be done during the next normal execution of AutoConfig. The report consists of two tabs:
  • File System Changes
    This report provides information about all the files that will be changed during the next normal execution of AutoConfig. The report is divided into the following sections:
  • AutoConfig Context File Changes
    Displays information about the location of the context file, the content of the currently active context file, the content of the context file that will be generated in the next AutoConfig run. In addition it also displays an html report highlighting the differences between the current and the new context file.
  • Changed Configuration Files
    Displays a list of all the files that will be changed during an AutoConfig execution. For each file, information is displayed about the location of the runtime file, the content of the currently active file, the content of the file that will be generated in the next AutoConfig run, an html report highlighting the differences between the current and the new configuration file and the location of the AutoConfig template file.
  • New Configuration Files
    Displays a list of all the new files that will be created during an AutoConfig execution. For each file, information is displayed about the location of the runtime file, the content of the new file and the location of the AutoConfig template file.
  • Database Changes
    This report provides information about all the profile options that get changed during the next normal execution of AutoConfig. This report is divided into three sections.
  • Profile Value Changes
    Displays the details only for profiles whose value would be changed in the next AutoConfig run. For each such profile, the current value in the Database, the new AutoConfig value that would be set for it, the Profile Level and the name of the AutoConfig script that changes the profile value is displayed.
  • Profile Values
    Displays the details as in previous section for all Apps Database profiles managed by AutoConfig.
  • Other Database updates
    Displays the details for important database updates that will be done in the next run of AutoConfig. The table name, column name, the current column value in the Database and the new AutoConfig value is displayed along with the name of the updating AutoConfig script and a brief description.

Resolve any reported differences between your existing files and the files AutoConfig will create.

The script also creates a zip file report, "ADXcfgcheck.zip", containing all the files and reports mentioned above, so that it can be copied to a local desktop PC and the html report viewed there without breaking the hyperlinks in the report.

December 13, 2009 | oracle

RAC enabling Oracle EBS R12 12.1.1

Having EBS operational on Oracle 11g RAC gives the database the ability to scale out as it grows, along with resilience and high availability at the database tier, which are mandatory requirements for today's 24x7 business demands. Oracle Applications Release 12 has numerous configuration options that can be chosen to suit particular business scenarios, uptime requirements, hardware capability, and availability requirements. This section describes how to migrate Oracle Applications Release 12.1.1 running on a single database instance to an Oracle Real Application Clusters (Oracle RAC) environment running Oracle Database 11g Release 1 (11.1.0.7) with Automatic Storage Management (ASM). The most current version of this document can be obtained from My Oracle Support (formerly Oracle MetaLink) Knowledge Document 466649.1.

Cluster Terminology Overview

Let’s briefly refresh the key terminology used in a cluster environment.

  • Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional datafile when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.
  • Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The crs process manages designated cluster resources, such as databases, services, and listeners.
  • Parallel Concurrent Processing (PCP) is an extension of the Concurrent Processing architecture. PCP allows concurrent processing activities to be distributed across multiple nodes in an Oracle RAC environment, maximizing throughput and providing resilience to node failure.
  • Oracle Real Application Clusters (Oracle RAC) is an Oracle database technology that allows multiple machines to work on the same data in parallel, reducing processing time significantly. An Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.

A legend of the Oracle Applications EBS naming conventions follows.

Application tier: Machines (nodes) running Forms, Web, and other services (servers). Sometimes called the middle tier.
Database tier: Machines (nodes) running the Oracle Applications database.
oracle: User account that owns the database file system (database ORACLE_HOME and files).
applmgr: User account that owns the application file system.
CONTEXT_NAME: The name of the Applications context used by AutoConfig. The default is <SID>_<hostname>.
CONTEXT_FILE: Full path to the Applications context file on the application tier or database tier. The default locations are:
Application tier context file: <APPL_TOP>/admin/<CONTEXT_NAME>.xml
Database tier context file: <RDBMS ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml
APPS pwd: EBS database APPS user password.
Configuration Prerequisites

In order to install and RAC enable EBS we need to make sure that the prerequisites to install EBS and to configure RAC are met. The basic prerequisites for using Oracle RAC with Oracle Applications Release 12.1 are:

  • If you do not already have an existing single-instance environment, perform an installation of Oracle Applications (EBS) with Rapid Install, or apply the Oracle E-Business Suite Release 12.1.1 Maintenance Pack (patch 7303030, also delivered by the Release 12.1.1 Rapid Install). Installation of EBS R12 12.1.1 was illustrated in the section Installing EBS 12.1.1. Upgrading to 12.1.1 is out of scope for the book.
  • Set up the required cluster hardware and interconnect medium, and install Oracle 11g R1 11.1.0.7 CRS, ASM and RDBMS as per chapters 3 and 9, plus the required interoperability patches described in the section 'Upgrading an EBS 12 with the latest release of Oracle 11gR1 RDBMS' or My Oracle Support (MetaLink) Note 802875.1. We will have the following Oracle homes after completion of the task.
The Oracle homes and their purposes are as follows.

Rapid Install Database ORACLE_HOME: installed by the Oracle Applications Release 12 rapidwiz (/u01/oracle/VIS/db/tech_st/11.1.0).
Database 11g ORACLE_HOME: installed for the Oracle 11g RAC database (/u01/app/oracle/product/11.1.0/db_2).
Database 11g CRS ORACLE_HOME: installed for 11g Clusterware, formerly Cluster Ready Services (/u01/crs/oracle/product/11.1.0/crs_1).
Database 11g ASM ORACLE_HOME: used for the creation of the ASM instances (/u01/app/oracle/product/11.1.0/db_1).
OracleAS 10.1.2 ORACLE_HOME: installed on the application tier for Forms and Reports by rapidwiz ($INST_TOP/tech_st/10.1.2).
OracleAS 10.1.3 ORACLE_HOME: installed on the application tier for the HTTP server by rapidwiz ($INST_TOP/tech_st/10.1.3).

As a refresher from chapters 3 and 9, the Oracle homes for CRS, ASM and RDBMS must be installed with Oracle 11g R1 and patched to Oracle 11.1.0.7 (patch number 6890831). For the ASM and RDBMS Oracle homes, install the Oracle Database 11g Products from the 11g Examples CD after the Oracle 11g R1 install but prior to applying the 11.1.0.7 patch.

After a successful install of the Oracle 11g R1 CRS and ASM, the output of ./crs_stat -t -v should display:

Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    raclinux1
ora....X1.lsnr application    ONLINE    ONLINE    raclinux1
ora....ux1.gsd application    ONLINE    ONLINE    raclinux1
ora....ux1.ons application    ONLINE    ONLINE    raclinux1
ora....ux1.vip application    ONLINE    ONLINE    raclinux1
ora....SM2.asm application    ONLINE    ONLINE    raclinux2
ora....X2.lsnr application    ONLINE    ONLINE    raclinux2
ora....ux2.gsd application    ONLINE    ONLINE    raclinux2
ora....ux2.ons application    ONLINE    ONLINE    raclinux2
ora....ux2.vip application    ONLINE    ONLINE    raclinux2

Also, ./crsctl check crs should report the Cluster Synchronization Services, Cluster Ready Services and Event Manager daemons as healthy.

This confirms that CRS and ASM are installed and running on the two nodes of the cluster, raclinux1 and raclinux2. As we can see, there are two ASM instances and two ASM listeners, one of each per node. The resources are as follows.

  • ora.raclinux1.+ASM1.asm on node raclinux1
  • ora.raclinux2.+ASM2.asm on node raclinux2
  • ora.raclinux1.LISTENER_RACLINUX1.lsnr on node raclinux1
  • ora.raclinux2.LISTENER_RACLINUX2.lsnr on node raclinux2

There are two disk groups created and mounted in ASM: DATA with 400GB and FLASH with 200GB. Now we can start RAC-enabling the EBS as described in the following section.
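The disk groups can be confirmed with a quick query from one of the ASM instances (the column names come from the standard v$asm_diskgroup view):

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;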

Running rconfig for move to ASM and RAC enabling the database

So far we have a single-instance database backing EBS 12.1.1. We will use rconfig to move the database to ASM and RAC-enable the VIS EBS database. We will set up a flash recovery area (FRA) for the EBS database VIS using the parameters and commands described below.

After logging into the server using the oracle Linux user account, go to the $ORACLE_HOME/assistants/rconfig/sampleXMLs directory. Make two copies of the template file ConvertToRAC.xml, named convert.xml and convert1.xml. Modify the content of convert.xml and convert1.xml identically, with the exception of the Convert verify attribute: in convert.xml set verify="ONLY", whereas in convert1.xml set verify="YES". The possible values are "ONLY", "YES" and "NO". The rconfig utility is used along with the xml file to perform the following activities.

  • Migrate the database to ASM storage (only if ASM is specified as storage option in the configuration XML file).
  • Create database instances on all nodes in the cluster.
  • Configure listener and Net Service entries.
  • Configure and register CRS resources.
  • Start the instances on all nodes in the cluster.

Please note that the value "ONLY" performs a validation of the parameters and identifies any problems that need to be corrected prior to the actual conversion, but does not perform the conversion after completing the prerequisite checks. With Convert verify="YES", rconfig performs checks to ensure that the prerequisites for single-instance to Oracle RAC conversion have been met before it starts the conversion. With Convert verify="NO", rconfig does not perform prerequisite checks and starts the conversion immediately. In both files, convert.xml and convert1.xml, we specify the following information.

  • source pre-conversion EBS RDBMS Oracle home of non-RAC database –  /u01/oracle/VIS/db/tech_st/11.1.0
  • destination post-conversion EBS RDBMS Oracle home of the RAC database – /u01/app/oracle/product/11.1.0/db_2
  • SID for non-RAC database and credentials – VIS
  • list of nodes that should have RAC instances running – raclinux1, raclinux2
  • instance prefix – VIS
  • storage type – please note that storage type is ASM.
  • ASM disk groups for Oracle data file and FRA – DATA and FLASH.

The exact content of convert.xml is as follows.

<?xml version="1.0" encoding="UTF-8" ?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <!-- Verify does a precheck to ensure all pre-requisites are met, before the
         conversion is attempted. Allowable values are: YES|NO|ONLY -->
    <n:Convert verify="ONLY">
      <!-- Specify current OracleHome of non-rac database for SourceDBHome -->
      <n:SourceDBHome>/u01/oracle/VIS/db/tech_st/11.1.0</n:SourceDBHome>
      <!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->
      <n:TargetDBHome>/u01/app/oracle/product/11.1.0/db_2</n:TargetDBHome>
      <!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->
      <n:SourceDBInfo SID="VIS">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>sys1</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <!-- ASMInfo element is required only if the current non-rac database uses ASM Storage -->
      <n:ASMInfo SID="+ASM1">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>sys1</n:Password>
          <n:Role>sysasm</n:Role>
        </n:Credentials>
      </n:ASMInfo>
      <!-- Specify the list of nodes that should have rac instances running.
           LocalNode should be the first node in this nodelist. -->
      <n:NodeList>
        <n:Node name="raclinux1.gj.com" />
        <n:Node name="raclinux2.gj.com" />
      </n:NodeList>
      <!-- Specify prefix for rac instances. It can be same as the instance name for
           non-rac database or different. The instance number will be attached to this prefix. -->
      <n:InstancePrefix>VIS</n:InstancePrefix>
      <!-- Specify port for the listener to be configured for rac database. If port="",
           a listener existing on localhost will be used for rac database. The listener
           will be extended to all nodes in the nodelist. -->
      <n:Listener port="" />
      <!-- Specify the type of storage to be used by rac database. Allowable values are
           CFS|ASM. The non-rac database should have same storage type. -->
      <n:SharedStorage type="ASM">
        <!-- Specify Database Area Location to be configured for rac database. If this
             field is left empty, current storage will be used for rac database. For CFS,
             this field will have directory path. -->
        <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
        <!-- Specify Flash Recovery Area to be configured for rac database. If this field
             is left empty, current recovery area of non-rac database will be configured
             for rac database. If current database is not using recovery area, the
             resulting rac database will not have a recovery area. -->
        <n:TargetFlashRecoveryArea>+FLASH</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>

We use the rconfig utility with the convert.xml file to verify the conversion prerequisites, as sketched below.
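A minimal sketch of how rconfig is invoked (it ships in the RDBMS Oracle home's bin directory):

cd $ORACLE_HOME/assistants/rconfig/sampleXMLs
# Verify-only pass; validates the prerequisites without performing the conversion
$ORACLE_HOME/bin/rconfig convert.xml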

If you wish to specify a NEW_ORACLE_HOME, as in our case for the Oracle home of the freshly installed Oracle 11g release 11.1.0.7, start the database from the new Oracle home using the command:

SQL> startup pfile=<NEW_ORACLE_HOME>/dbs/init<SID>.ora

Shut down the database. Create an spfile from the pfile using the command:

SQL> create spfile from pfile;

Move the $ORACLE_HOME/dbs/spfile<SID>.ora for this instance to the shared location. Take a backup of the existing $ORACLE_HOME/dbs/init<SID>.ora and create a new $ORACLE_HOME/dbs/init<SID>.ora containing only the parameter spfile=<shared_location>/spfile<SID>.ora. Start up the instance. Using netca, create local and remote listener tnsnames.ora aliases for the database instances. Use listener_VIS1 and listener_VIS2 as the alias names for the local listeners, and listeners_VIS for the remote listener alias. Execute netca from $ORACLE_HOME/bin.

  • Choose “Cluster Configuration” option in the netca assistant.
  • Choose the current nodename from the nodes list.
  • Choose “Local Net Service Name Configuration” option and click Next.
  • Select “Add” and in next screen enter the service name and click Next.
  • Enter the current node as Server Name and the port 1521 defined during the ASM listener creation.
  • Select “Do not perform Test” and click Next.
  • Enter a listener TNS alias name like LISTENER_VIS1 for the local listener.
  • Repeat the above steps for remote listener, with the server name as the secondary node and the listener name LISTENERS_VIS.

Note: Ensure that local and remote aliases are created on all nodes in the cluster.
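For illustration, the resulting aliases in tnsnames.ora would look roughly like the following (hostnames and port are taken from this environment; the exact entries generated by netca may differ):

LISTENER_VIS1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip.gj.com)(PORT = 1521))

LISTENERS_VIS =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip.gj.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip.gj.com)(PORT = 1521)))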

After making sure that the parameters are valid and no errors were identified that could be a problem, we start the real conversion using rconfig and the convert1.xml file (verify="YES").

After completion of the VIS database conversion from a single-instance database residing on a file system to a RAC database residing on ASM across the raclinux1 and raclinux2 cluster nodes, we can validate the conversion by looking at the running services in the ./crs_stat -t -v command output.

Running AutoConfig

EBS is a complex application that can be deployed in a single-node or multi-node configuration. An instrumental component of the EBS architecture is the context file, an xml repository of all parameters of the EBS configuration. By changing context file parameters, we reconfigure EBS. Note that as of EBS 12.1.1, Oracle Applications Manager (OAM) is the only approved method for changing the context file. When AutoConfig is run, the configuration changes stored in the context file are applied to the application. Thus, the management framework based on the context file and the other AD utilities greatly facilitates the configuration and management of EBS: there is no need for the EBS DBA to change parameters in many locations; instead, the context file is changed and AutoConfig is run. There are context files for both the database tier and the application tier, so a multi-node configuration can be managed and configured independently. In the context of this chapter on RAC-enabling EBS, we will cover creating the context file in the new Oracle 11g 11.1.0.7 Oracle home and making EBS aware of the changes. We will also change the application tier context file so that the application tier can communicate with the new database tier, covering the detailed steps involved in updating the context files and propagating the changes across the application so that they take effect. The AD management utilities based on the context files generate many additional configuration files that are used by both the EBS database and application tiers.

As we have installed new Oracle 11g homes for ASM and RDBMS we need to make EBS aware of them. This is achieved by running Auto Config. Running AutoConfig on the database tier is required in the following scenarios.

  • After migrating a patch to the database tier, the Check Config utility reports any potential changes to the templates.
  • After customizations on the database tier
  • After a database or application tier upgrade
  • After restoration of the database or Oracle Home from a backup tape
  • After a JDK upgrade on the database tier
  • After the Net Services Topology Information is manually cleaned up using one of the supported procedures (e.g. fnd_conc_clone.setup_clean). Subsequently, AutoConfig must be run on all application tier nodes.
  • After registration of a RAC node.
  • After setting up the APPL_TOP on a shared file system.
  • All other cases where documentation says that AutoConfig should be run on the database tier.

Now we are going to Enable Autoconfig on the new Oracle 11g 11.1.0.7 home database tier as this is freshly installed Oracle home. Complete the steps in this section (in the order listed) to migrate to AutoConfig on the Database Tier.

Copy AutoConfig to the new RDBMS ORACLE_HOME for Oracle 11g R1 11.1.0.7

Ensure that you have applied any patches listed in the pre-requisites section above. Update the RDBMS ORACLE_HOME file system with the AutoConfig files by performing the following steps:

  • On the Application Tier (as the applmgr user):

Log in to the APPL_TOP environment (source the environment file)

Create appsutil.zip file
perl <AD_TOP>/bin/admkappsutil.pl

This will create appsutil.zip in $INST_TOP/admin/out.

  • On the Database Tier (as the ORACLE user):

Copy or FTP the appsutil.zip file to the <RDBMS ORACLE_HOME>.

cd <RDBMS ORACLE_HOME>
unzip -o appsutil.zip

  • Copy the jre directory from <OLD_ORACLE_HOME>/appsutil to the 11g <NEW_ORACLE_HOME>/appsutil.
  • Create a <CONTEXT_NAME> directory under $ORACLE_HOME/network/admin. Use the new instance name when creating the context directory: append the instance number to the instance prefix that you put in the rconfig XML file. For example, if your database name is VIS and you use "VIS" as the instance prefix, create the context directory as VIS1_<hostname> or VIS2_<hostname>, where hostname is raclinux1 or raclinux2 respectively.
  • Set the required environment variables (such as ORACLE_HOME, ORACLE_SID and TNS_ADMIN) in the .bash_profile.
  • De-register the current configuration using the APPS schema package FND_CONC_CLONE.SETUP_CLEAN, executing the command SQL> exec fnd_conc_clone.setup_clean; while logged into the database as the apps user.
  • Copy the tnsnames.ora file from $ORACLE_HOME/network/admin to $TNS_ADMIN/tnsnames.ora and edit it to change the aliases for SID=<new Oracle RAC instance name>.
  • To preserve the ASM TNS aliases (LISTENERS_<SID> and LISTENER_<hostname>), create a file <CONTEXT_NAME>_ifile.ora under $TNS_ADMIN and copy those entries into it.
  • Create the listener.ora as per the sample file in Appendix 1. Change the instance name and Oracle home to match this environment.
  • Start the listener.
Generate your Database Context File

From the 11g ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the following commands:

  • On UNIX
    cd <RDBMS ORACLE_HOME>
    . <CONTEXT_NAME>.env
    cd <RDBMS 11g ORACLE_HOME>/appsutil/bin
    perl adbldxml.pl tier=db appsuser=<APPS user> appspasswd=<APPS password>
Note that adbldxml.pl uses your current environment settings to generate the context file; therefore, ensure that your environment is correctly sourced.
Note that if you build the context file for an EBS instance that runs on RAC, all RAC instances have to be up and running while executing the adbldxml utility. The utility connects to all RAC instances to gather information about the configuration.
Prepare for AutoConfig by completing the following AutoConfig steps.

  • Set the value of s_virtual_host_name to point to the virtual hostname (VIP alias) for the database host, by editing the database context file $ORACLE_HOME/appsutil/<SID>_<hostname>.xml.
  • Rename $ORACLE_HOME/dbs/init<SID>.ora to a new name (e.g. init<SID>.ora.old), in order to allow AutoConfig to regenerate the file using the Oracle RAC specific parameters.
  • Ensure that the following context variable parameters are correctly specified.
  • s_jdktop=<11g ORACLE_HOME_PATH>/appsutil/jre
    s_jretop=<11g ORACLE_HOME_PATH>/appsutil/jre
    s_adjvaprg=<11g ORACLE_HOME_PATH>/appsutil/jre/bin/java
  • Review Prior Manual Configuration Changes.
    The database context file may not include manual post-install configuration changes made after the Rapid Install completed. Before running the AutoConfig portion of this patch, review any modifications to specific configuration files and reconcile them with the database context file.
Note: Prior modifications include any changes made to configuration files as instructed in patch READMEs or other accompanying documents.
Generate and Apply AutoConfig Configuration files
Note that this step performs the conversion to AutoConfig. Once completed, the previous configuration will not be available.
Note that the database server and the database listener must remain available during the AutoConfig run. All the other database tier services should be shut down.

Execute the following commands on Linux/UNIX:
cd <RDBMS ORACLE_HOME>/appsutil/bin
perl adconfig.pl contextfile=<CONTEXT_FILE>

Warning: Running AutoConfig on the database node will update the RDBMS network listener file. Be sure to review the configuration changes from step ‘Prepare for AutoConfig by completing the following AutoConfig steps’. The new AutoConfig network listener file supports the use of IFILE to allow for values to be customized or added as needed.

Note: Running AutoConfig on the database tier will NOT overwrite any existing init.ora file in the <RDBMS ORACLE_HOME>/dbs directory. If no init.ora file exists in your instance, AutoConfig will generate one in the <RDBMS ORACLE_HOME>/dbs directory for you.

Note: Running AutoConfig might change your existing environment files. After running AutoConfig, you should always set the environment before you run any Applications utilities in order to apply the changed environment variables.

Check the AutoConfig log file located in <11g ORACLE_HOME>/appsutil/log/<CONTEXT_NAME>/<MMDDhhmm>/.

If ASM/OCFS is being used, note down the new location of the control file.

sqlplus / as sysdba

SQL> show parameter control_files

Perform all of the above steps starting from sub section ‘Copy AutoConfig to the new RDBMS ORACLE_HOME for Oracle 11g R1 11.1.0.7’ on all other database nodes in the cluster.

Execute AutoConfig on all database nodes in the cluster

Execute AutoConfig on all database nodes in the cluster by running the command $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh. Then shut down the instances and listeners.

Init file, tnsnames and listener file activities

Edit the $ORACLE_HOME/dbs/<CONTEXT_NAME>_APPS_BASE.ora file on all nodes. If ASM is being used, change the control_files parameter to point to the control file location in the ASM disk group.

Create a spfile from the pfile on all nodes as follows:

  • Create an spfile from the pfile, and then create a pfile in a temporary location from the new spfile, with commands as shown in the following example:
    SQL> create spfile='<shared_location>/spfile<SID>.ora' from pfile;
    SQL> create pfile='/tmp/init<SID>.ora' from spfile='<shared_location>/spfile<SID>.ora';
  • Repeat this step on all nodes.
  • Combine the initialization parameter files for all instances into one init<dbname>.ora file by copying all existing shared contents. All shared parameters defined in your init<dbname>.ora file must be global, with the format *.parameter=value
  • Modify all instance-specific parameter definitions in the init<SID>.ora files using the following syntax, where the variable <SID> is the system identifier of the instance: <SID>.parameter=value

Note: Ensure that the parameters LOCAL_LISTENER, diagnostic_dest, undo_tablespace, thread, instance_number and instance_name are in <SID>.parameter format; for example, <SID>.LOCAL_LISTENER=<value>. These parameters must have one entry per instance.
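For illustration, a combined parameter file for the VIS database with instances VIS1 and VIS2 might contain entries like the following (the values are placeholders for this environment):

*.db_name='VIS'
*.cluster_database=true
VIS1.instance_number=1
VIS2.instance_number=2
VIS1.thread=1
VIS2.thread=2
VIS1.undo_tablespace='UNDOTBS1'
VIS2.undo_tablespace='UNDOTBS2'
VIS1.local_listener='LISTENER_VIS1'
VIS2.local_listener='LISTENER_VIS2'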

  • Create the spfile in the shared location (where rconfig created the spfile from the pfile earlier) from the combined init<dbname>.ora:

SQL> create spfile='<shared_location>/spfile<dbname>.ora' from pfile;

Ensure that listener.ora and tnsnames.ora are generated as per the format shown in Appendix 1.

As AutoConfig creates the listener.ora and tnsnames.ora files in a context directory, and not in the $ORACLE_HOME/network/admin directory itself, the TNS_ADMIN path must be updated in CRS. Run the following command as the root user:

# srvctl setenv nodeapps -n <node_name> \
    -t TNS_ADMIN=$ORACLE_HOME/network/admin/<CONTEXT_NAME>

Start up the database instances and listeners on all nodes.

Run AutoConfig on all nodes to ensure that each instance registers with all remote listeners.

Shut down and restart the database instances and listeners on all nodes.

De-register any old listeners and register the new listeners with CRS using the commands:

# srvctl remove listener -n <node> -l <old_listener_name>

# srvctl add listener -n <node> -o <new ORACLE_HOME> -l <new_listener_name>
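With hypothetical node, listener and Oracle home names, this might look like:

# srvctl remove listener -n raclinux1 -l LISTENER_raclinux1
# srvctl add listener -n raclinux1 -o /u01/app/oracle/product/11.1.0/db_1 -l VIS1_raclinux1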

Establish Applications Environment for Oracle RAC
Preparatory Steps

The steps below are important to make the application tier aware of the new database tier. Carry out the following steps on all application tier nodes.

  • Source the Oracle Applications environment.
  • Edit SID=<Instance 1> and PORT=<New listener port> in the $TNS_ADMIN/tnsnames.ora file, to set up a connection to one of the instances in the Oracle RAC environment.
  • Confirm you are able to connect to one of the instances in the Oracle RAC environment.
  • Edit the context variable jdbc_url, adding the instance name to the connect_data parameter.
  • Run AutoConfig using the command: $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<context_file>.xml

For more information on AutoConfig, see My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12.

  • Check the $INST_TOP/admin/log/<timestamp> AutoConfig log file for errors.
  • Source the environment by using the latest environment file generated.
  • Verify the tnsnames.ora and listener.ora files. Copies of both are located in the $INST_TOP/ora/10.1.2/network/admin and $INST_TOP/ora/10.1.3/network/admin directories. In these files, ensure that the correct TNS aliases have been generated for load balancing and failover, and that all the aliases are defined using the virtual hostnames (see the example after this list).
  • Verify the dbc file located at $FND_SECURE. Ensure that the parameter APPS_JDBC_URL is configured with all instances in the environment, and that load_balance is set to YES.
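As a sketch of what a generated load-balancing alias should resemble (the database name VIS and the virtual hostnames below are hypothetical; the authoritative format is in Appendix 1):

VIS_BALANCE=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=YES)
      (FAILOVER=YES)
      (ADDRESS=(PROTOCOL=tcp)(HOST=raclinux1-vip.mydomain.com)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcp)(HOST=raclinux2-vip.mydomain.com)(PORT=1521))
    )
    (CONNECT_DATA=
      (SERVICE_NAME=VIS)
    )
  )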
Set Up Load Balancing

Load balancing is important as it enables you to distribute the load of the various EBS components to the less heavily loaded instances. Implementing load balancing for the Oracle Applications database connections is achieved by following the steps outlined below.

  • Run the Context Editor (through the Oracle Applications Manager interface) and set the value of “Tools OH TWO_TASK” (s_tools_two_task), “iAS OH TWO_TASK” (s_weboh_twotask) and “Apps JDBC Connect Alias” (s_apps_jdbc_connect_alias).
  • To load balance the forms based applications database connections, set the value of “Tools OH TWO_TASK” to point to the <database_name>_balance alias generated in the tnsnames.ora file.
  • To load balance the self-service applications database connections, set the value of “iAS OH TWO_TASK” and “Apps JDBC Connect Alias” to point to the <database_name>_balance alias generated in the tnsnames.ora file.
  • Execute AutoConfig by running the command $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<context_file>.xml
  • Restart the Applications processes, using the new scripts generated by AutoConfig.
  • Ensure that the value of the profile option “Application Database ID” is set to the dbc file name generated in $FND_SECURE. A quick connection test is sketched below.
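A quick way to confirm that the balancing alias distributes connections is to connect through it several times and watch the instance name change; the alias name and password here are placeholders:

$ tnsping VIS_balance
$ sqlplus apps/<apps_password>@VIS_balance
SQL> select instance_name from v$instance;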

Note: If you are adding a new node to the application tier, repeat the above steps for setting up load balancing on the new application tier node.

Configure Parallel Concurrent Processing

Parallel Concurrent Processing (PCP) is an extension of the Concurrent Processing architecture. PCP allows concurrent processing activities to be distributed across multiple nodes in an Oracle RAC environment, maximizing throughput and providing resilience and high availability in the event of node failure. User interactions with EBS data can be conducted via HTML-based applications or the more traditional forms-based applications. However, there are also reporting and data-updating programs that need to run either periodically or on an ad-hoc basis. These programs, which run in the background while users continue to work on other tasks, may require a large number of data-intensive computations, and are run using the Concurrent Processing architecture.

Concurrent processing is an Oracle EBS feature that allows these non-interactive and potentially long-running functions to be executed efficiently alongside interactive operations. It uses operating system facilities to schedule data- or resource-intensive jobs in the background, via a set of programs and forms. To ensure that resource-intensive concurrent processing operations do not interfere with interactive operations, they run on a specialized server, the Concurrent Processing server. It is worth evaluating whether to place it on the application tier or the database tier; in some cases, running the Concurrent Processing server on the database tier improves performance.

Check prerequisites for setting up Parallel Concurrent Processing

To set up Parallel Concurrent Processing (PCP), you must have more than one Concurrent Processing node in your environment. If you do not have this, follow the appropriate instructions in My Oracle Support Knowledge Document 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone. A brief overview of the EBS cloning process is given in the next subsection.

Note: If you are planning to implement a shared Application tier file system, refer to My Oracle Support Knowledge Document 384248.1, Sharing the Application Tier File System in Oracle E-Business Suite Release 12, for configuration steps. If you are adding a new Concurrent Processing node to the application tier, you will need to set up load balancing on the new node by repeating the steps in the ‘Set Up Load Balancing’ section above.

Cloning EBS concepts in brief

Cloning is the process used to create a copy of an existing EBS system: a methodology that allows components of an existing EBS system to be moved to a different location, on the same server or on a different one, without reinstalling EBS. There are various scenarios for cloning an EBS system.

  • Standard cloning – Making a copy of an existing Oracle Applications system, for example a copy of a production system to test updates.
  • System scale-up – Adding new machines to an Oracle Applications system to provide the capacity for processing an increased workload.
  • System transformations – Altering system data or file systems, including actions such as platform migration, data scrambling, and provisioning of high availability architectures.
  • Patching and upgrading – Delivering new versions of Applications components, and providing a mechanism for creating rolling environments to minimize downtime.

An important principle in EBS cloning is that the system is cloned, rather than the topology. Producing an exact copy of the patch level and data is much more important than creating an exact copy of the topology, as a cloned system must be able to provide the same output to the end user as the source system. However, while a cloned system need not have the full topology of its source, it must have available to it all the topology components that are available to the source. Cloning in EBS basically involves three steps: first, prepare the source system; second, copy the source system to the target system; third, configure the target system. The cloning methodology also enables us to add a new node to an existing EBS system, or to clone a RAC-enabled EBS system.

Prepare the source system for cloning by executing the following commands while the database and applications are running.

  • Prepare the source system database tier for cloning
    Log on to the source system as the oracle user, and run the following commands.

$ cd <RDBMS ORACLE_HOME>/appsutil/scripts/<context_name>
$ perl adpreclone.pl dbTier

  • Prepare the source system application tier for cloning
    Log on to the source system as the applmgr user, and run the following commands on each node that contains an APPL_TOP.

$ cd <INST_TOP>/admin/scripts
$ perl adpreclone.pl appsTier

Note: If new Rapid Clone or AutoConfig updates are applied to the system, adpreclone.pl must be executed again on the dbTier and on the appsTier in order to apply the new files into the clone directory structures that will be used during the cloning configuration stage.
Copy the application tier file system from the source EBS system to the target node by executing the following steps in the order listed. Ensure the application tier files copied to the target system are owned by the target applmgr user, and that the database node files are owned by the target oracle user.

Note: The tar command can be used to compress the directories into a temporary staging area. If you use this command, you may require the -h option to follow symbolic links, as following symbolic links is not the default behavior on all platforms. Consult the UNIX man page for the tar command.
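For instance, to stage an APPL_TOP while following symbolic links (the paths are illustrative; -h requests link dereferencing on GNU tar):

$ cd /u01/oracle/VIS/apps/apps_st
$ tar -chf - appl | gzip > /staging/appl_top.tar.gz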

  • Copy the application tier file system
    Log on to the source system application tier nodes as the applmgr user and shut down the application tier server processes. Copy the following application tier directories from the source node to the target application tier node:
  • <APPL_TOP>
  • <COMMON_TOP>
  • Applications Technology Stack  <OracleAS Tools ORACLE_HOME>
  • Applications Technology Stack  <OracleAS Web IAS_ORACLE_HOME>
  • Copy the database node file system
    Log on to the source system database node as the oracle user, and then:
  • Perform a normal shutdown of the source system database
  • Copy the database (.dbf) files from the source system to the target system
  • Copy the source database ORACLE_HOME to the target system
  • Start the source Applications system database and application tier processes

Configure the target system by running the following commands. You will be prompted for specific target system values, such as SID, paths, and ports.

  • Configure the target system database server
    Log on to the target system as the oracle user and enter the following commands:

$ cd <RDBMS ORACLE_HOME>/appsutil/clone/bin
$ perl adcfgclone.pl dbTier

  • Configure the target system application tier server nodes
    Log on to the target system as the applmgr user and enter the following commands:

$ cd <COMMON_TOP>/clone/bin
$ perl adcfgclone.pl appsTier

Add a New Node to an Existing System. You can use Rapid Clone to clone a node and add it to the existing EBS system, a process also known as scale-up or scale-out. The new node can run the same services as the source node, or different services. Follow the instructions in the Application tier part of the Cloning Tasks section in Note 406982.1.

After adcfgclone.pl completes, source the EBS environment and run the following commands on the target system:

$ cd <COMMON_TOP>/clone/bin
$ perl adaddnode.pl

Note: After adding new nodes, refer to My Oracle Support Knowledge Document 380489.1 for details of how to set up load balancing.

Note: If SQL*Net Access security is enabled in the existing system, you first need to authorize the new node to access the database through SQL*Net. See the Oracle Applications Manager online help for instructions on how to accomplish this.

Set Up PCP
  • Edit the applications context file via Oracle Applications Manager, and set the value of the variable APPLDCP to ON.
  • Execute AutoConfig by running the following command on all concurrent processing nodes: $INST_TOP/admin/scripts/adautocfg.sh
  • Source the Applications environment.
  • Check the tnsnames.ora and listener.ora configuration files, located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent nodes.
  • Restart the Applications listener processes on each application tier node.
  • Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator Responsibility. Navigate to Install > Nodes screen, and ensure that each node in the cluster is registered.
  • Verify that the Internal Monitor for each node is defined properly, with correct primary and secondary node specification, and work shift details. For example, Internal Monitor: Host2 must have primary node as host2 and secondary node as host3. Also ensure that the Internal Monitor manager is activated: this can be done from Concurrent > Manager > Administrator.
  • Set the $APPLCSF environment variable on all the Concurrent Processing nodes to point to a log directory on a shared file system.
  • Set the $APPLPTMP environment variable on all the CP nodes to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. (This value should point to a directory on a shared file system; see the sketch after this list.)
  • Set the profile option ‘Concurrent: PCP Instance Check’ to OFF if database instance-sensitive failover is not required. Setting it to ON means that a concurrent manager will fail over to a secondary Application tier node if the database instance to which it is connected becomes unavailable.
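As an illustration of the two shared-file-system settings above, assuming hypothetical shared mount points:

$ export APPLCSF=/shared/VIS/applcsf/log
$ export APPLPTMP=/shared/VIS/applptmp     # must match a UTL_FILE_DIR entry in init.ora on the database nodes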
Set Up Transaction Managers
  • Shut down the application services (servers) on all nodes.
  • Shut down all the database instances cleanly in the Oracle RAC environment, using the command SQL>shutdown immediate;.
  • Edit $ORACLE_HOME/dbs/<context_name>_ifile.ora on all nodes, adding the parameters _lm_global_posts=TRUE and _immediate_commit_propagation=TRUE (see the sketch after this list).
  • Start the instances on all database nodes, one by one.
  • Start up the application services (servers) on all nodes.
  • Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option ‘Concurrent: TM Transport Type’ to ‘QUEUE’, and verify that the transaction manager works across the Oracle RAC instance.
  • Navigate to Concurrent > Manager > Define screen, and set up the primary and secondary node names for transaction managers.
  • Restart the concurrent managers.
  • If any of the transaction managers are in deactivated status, activate them from Concurrent > Manager > Administrator.
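A minimal sketch of the ifile referred to in the third step above, assuming a hypothetical context name of VIS1_raclinux1:

# $ORACLE_HOME/dbs/VIS1_raclinux1_ifile.ora
_lm_global_posts=TRUE
_immediate_commit_propagation=TRUE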
Set Up Load Balancing on Concurrent Processing Nodes
  • Edit the applications context file through the Oracle Applications Manager interface, and set the value of Concurrent Manager TWO_TASK (s_cp_twotask) to the load balancing alias (<service_name>_balance).
  • Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on all concurrent nodes.

December 13, 2009 Posted by | oracle | 5 Comments

Upgrading EBS 12.1.1 to the latest Oracle 11g

Upgrading Oracle Applications EBS R12 12.1.1 to the latest Oracle RDBMS 11g

Keeping the Oracle database at the latest version certified with EBS is of paramount importance, to ensure that all new performance, manageability, administrative and security features, to name a few, are available. Upgrading EBS R12 to the latest Oracle release conceptually involves the following steps.

  • Create a new Oracle home for the latest Oracle 11g release and install the Oracle 11gR1 software. After that, install the Oracle 11gR1 database examples from the Oracle Examples media, then patch to the latest 11gR1 release. Note that, as of the time of writing, EBS 12.1.1 was not certified with Oracle 11gR2; only EBS 11i was certified with Oracle 11gR2.
  • Apply inter-operability patches for both EBS application and DB tier.
  • Shut down the application tier and all the listeners, and perform the database upgrade: modify init parameters, and carry out all the patch-specific instructions and post-install tasks. Optionally, you can choose to run EBS PL/SQL database objects in natively compiled mode with Oracle 11g.
  • After the database upgrade, start the new listener in the new Oracle home. Run adgrants.sql from SQL*Plus while logged in as sysdba. Run the command $ sqlplus apps/<apps_password> @adctxprv.sql <system_password> CTXSYS, where adctxprv.sql is from $AD_TOP/patch/115/sql/adctxprv.sql. Execute sqlplus apps/<apps_password> @wfaqupfix.sql <applsys_user> <apps_user>, where wfaqupfix.sql is from $FND_TOP/patch/115/sql/wfaqupfix.sql.
  • Implement and run AutoConfig in the new Oracle home on the database server node. If the database listener of the new Oracle home is defined differently than the old Oracle home, you must also run AutoConfig on each application tier server node to update the system with the new listener. Please see Note 387859.1 Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12. Shut down all processes, including the database and the listener, and restart them to load the new environment settings.
  • Gather statistics for the sys schema. Connect as sysdba using sqlplus and run the script $APPL_TOP/admin/adstats.sql in restricted mode.

$ sqlplus “/ as sysdba”

SQL> shutdown normal;

SQL> startup restrict;

SQL> @adstats.sql

SQL> shutdown normal;

SQL> startup;

SQL> exit;

  • Re-create Demantra privileges (see My Oracle Support Knowledge Document 730883.1).
  • Re-create custom database links
  • Re-create grants and synonyms. Oracle 11g contains new functionality for grants and synonyms compared to previous database releases. As a result, you must re-create the grants and the synonyms in the APPS schema. On the server node, as owner of the application file system, run AD administration and select ‘Recreate grants and synonyms for APPS schema’ from the Maintain Applications Database Objects menu.
  • Restart application server processes. Restart all the Application tier server processes that you shut down previously. Remember that the Oracle Net listener for the database instance, as well as the database instance itself, need to be started in the 11.1 Oracle home. Users may return to the system.
  • Synchronize Workflow views. Log on to Oracle EBS with the “System Administrator” responsibility. Click Requests > Run > Single Request, then click OK. Enter the following parameters:
  • Request Name = Workflow Directory Services User/Role Validation
  • p_BatchSize = 10000
  • p_Check_Dangling = Yes
  • Add missing user/role assignments = Yes
  • Update WHO columns in WF tables = No

Click “OK” and “Submit”.

December 13, 2009 Posted by | oracle | Leave a comment