Guenadi N Jilevski's Oracle BLOG

Oracle RAC, DG, EBS, DR and HA DBA BLOG

FLASHBACK ON while database open in Oracle 11.2 (11gR2)

 Starting with Oracle 11gR2, high availability is further enhanced: Flashback Database can now be enabled and disabled while the database is open. The database no longer needs to be restarted and mounted to enable Flashback Database.

 

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

SQL> select status from v$instance;

STATUS
------------
OPEN

SQL> alter database flashback on;

Database altered.

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SQL>

August 29, 2010 Posted by | oracle | 2 Comments

ASM Cluster File System (ACFS) in Oracle 11gR2

In 11gR2 Oracle extends the functionality of ASM and introduces the ASM Cluster File System (ACFS), built on top of ASM. ACFS is a general purpose cluster file system that can store all types of files in much the same way as any Linux file system, for example ext3. Oracle recommends keeping database files in ASM itself, as has been the case since ASM was first introduced in release 10g. ACFS cannot be used to install the Oracle 11gR2 Grid Infrastructure; Oracle 11gR2 also uses ASM to store the Grid Infrastructure OCR and voting disks if a shared file system option is not chosen while installing Grid Infrastructure. ACFS can, however, be used as a file system for Oracle homes holding the Oracle binaries.

In Oracle 11gR2, ASM is part of the Grid Infrastructure and is installed with it, whereas in earlier releases ASM was part of the RDBMS installation. The 11gR2 features let us create, mount and manage ACFS with familiar Linux commands. ACFS supports snapshots and dynamic online resizing of existing file systems, using the ASM Dynamic Volume Manager (ADVM) internally. An ACFS file system is based on a volume created in a disk group; once created, the volume is exposed through a volume device that Linux uses to address it, and that device is used to create the ACFS. In this article we will look at how to create volumes and ACFS file systems based on them, using several tools:

  • ASMCA
  • OEM
  • ASMCMD
  • sqlplus

 

To use and deploy ACFS, the disk group compatibility attributes for ASM, RDBMS and ADVM must be at least 11.2. The article is based on a two-node RAC cluster on OEL 5.5. All utilities and commands are executed while logged in as the Linux user that owns the Oracle Grid Infrastructure. In order to create and manage ACFS, Oracle Grid Infrastructure must be running on the cluster so that the ADVM and ACFS drivers and libraries are loaded. A similar approach can be used with Oracle Restart to manage ACFS, although there are some limitations related to tasks that are otherwise performed automatically, such as:

  • Loading ACFS drivers
  • Mounting ACFS systems registered with acfsutil utility.

 

In Oracle Restart the libraries and ACFS drivers need to be loaded by running acfsload start -s as the root user.

The latter does not need to be done if using RAC.
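Before creating volumes it is worth confirming the compatibility attributes mentioned above. The statements below are a minimal sketch, run from sqlplus as SYSASM and assuming a disk group named DATA:

SQL> alter diskgroup data set attribute 'compatible.asm'   = '11.2';
SQL> alter diskgroup data set attribute 'compatible.rdbms' = '11.2';
SQL> alter diskgroup data set attribute 'compatible.advm'  = '11.2';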

  1. Using ASMCA for ACFS creation.

Start asmca from the Linux command line. Once it loads, all tabs will be available provided the ADVM drivers are loaded on all nodes of the cluster; otherwise the ACFS-related tabs will be dimmed and inaccessible.

The Disk Groups tab shows the available disk groups along with their properties and state. Here we can create new disk groups or mount and dismount existing ones.

The Volumes tab shows existing volumes with their properties and state. Here we can create new volumes and enable or disable existing ones. Press the Create button to create a new volume.

Enter the volume name, specify the disk group where the volume will be created and specify the size of the volume.

Press Show Advanced Options if you want to modify the defaults.

Here we will keep the defaults and will press OK to continue. Wait for the volume to be created.

Acknowledge the message for the successful completion.

The newly created volume ASM_VOL1 is listed in the Volumes tab.

Select the ASM Cluster File Systems tab. Here we see the existing ACFS file systems. We will use the newly created volume ASM_VOL1 to create a new ACFS. Press the Create button.

As the volume is already created, we select ASM_VOL1, specify General Purpose File System, and leave the default mount point.

Press Show Command to see the steps involved to create and register the ACFS.

Press OK to exit and OK to start the ACFS creation. Wait for the process to complete.

The new ACFS based on ASM_VOL1 is created and listed in the ASM Cluster File System tab.

Note that acfsutil registers and mounts the ACFS. Thus, after restart of the Oracle Grid Infrastructure the ACFS will be mounted.

  2. Using sqlplus to create an ACFS volume.

Invoke sqlplus while logged in as the Linux user that owns the Grid Infrastructure. Create a 2 GB volume named ACFS_VOL2 as shown below. Query V$ASM_VOLUME for the volume name and volume device; the output confirms that the volume is created successfully.
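The original screen capture is not reproduced here; as a sketch, the statements involved look like this (assuming the DATA disk group):

SQL> alter diskgroup data add volume acfs_vol2 size 2G;
SQL> select volume_name, volume_device from v$asm_volume;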

  3. Using ASMCMD to create an ACFS volume.

 

Invoke asmcmd while logged in as the Linux user that owns the Oracle Grid Infrastructure. Use the asmcmd volcreate command to create a 1 GB volume named ACFS_VOL1 in disk group DATA, as shown in the example. Use the asmcmd volinfo command to obtain information about the newly created volume ACFS_VOL1.

In a similar fashion, use the volinfo command to obtain information about the ACFS_VOL2 volume created with sqlplus.
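A sketch of the asmcmd commands described above (the DATA disk group is the one used throughout the article):

ASMCMD> volcreate -G DATA -s 1G ACFS_VOL1
ASMCMD> volinfo -G DATA ACFS_VOL1
ASMCMD> volinfo -G DATA ACFS_VOL2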

Use the mkfs command to make the file systems based on the ACFS volumes that we created with sqlplus and asmcmd. The -t acfs option specifies that the file system is of ACFS type.
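A sketch of the mkfs commands; the volume device names under /dev/asm carry a generated numeric suffix, so the ones shown here are placeholders:

$ mkfs -t acfs /dev/asm/acfs_vol1-123
$ mkfs -t acfs /dev/asm/acfs_vol2-123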

Create the /u01/acfs1 and /u01/acfs2 directories. Use the acfsutil utility to register the created ACFS file systems in the ACFS mount registry. The registry serves a similar purpose to /etc/fstab in that the registered file systems are mounted automatically when Oracle Grid Infrastructure restarts. acfsutil needs the ACFS volume device and the mount point. Note that when registering an ACFS file system, acfsutil mounts it as well.
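A sketch of the registration step, again with placeholder device names; acfsutil registry -a takes the volume device and the mount point:

# mkdir -p /u01/acfs1 /u01/acfs2
# /sbin/acfsutil registry -a /dev/asm/acfs_vol1-123 /u01/acfs1
# /sbin/acfsutil registry -a /dev/asm/acfs_vol2-123 /u01/acfs2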

The new ACFS file systems /u01/acfs1 and /u01/acfs2 are now available.

We can dynamically resize the ACFS on the fly as shown below.

  4. Resize ACFS file systems

 

The acfsutil utility can be used to resize an ACFS file system online. Look at the example below.
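As a sketch, an online resize to an arbitrary 4 GB looks like this:

$ /sbin/acfsutil size 4G /u01/acfs1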

  5. Creating Snapshots

     

The acfsutil utility can be used to create a snapshot of an ACFS file system. When an ACFS file system is created, a .ACFS directory is created as a subdirectory of the file system. The .ACFS directory contains two directories, snaps and repl. The snaps directory is used to store the snapshots. In the example below a snapshot named snap1 is created for the ACFS mounted on /u01/acfs1. The information for the snap1 snapshot is stored in /u01/acfs1/.ACFS/snaps/snap1. The test confirms that the ACFS snapshot behaves as expected: the snapshot cannot be deleted while the current directory is the directory dedicated to storing the snapshot data.

Here we create a snapshot named snap1 for the ACFS file system /u01/acfs1. With the creation of the snapshot, the existing files become visible under the snapshot directory /u01/acfs1/.ACFS/snaps/snap1. If a file is then deleted, it can be restored from the snapshot. In the example we have an existing rpm file in the ACFS file system /u01/acfs1. The snapshot snap1 is created and the rpm becomes accessible under the /u01/acfs1/.ACFS/snaps/snap1 directory. If we delete the rpm from /u01/acfs1 we can always restore the file from the snapshot snap1.
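A sketch of the snapshot commands; sample.rpm stands in for the rpm file used in the original example:

$ /sbin/acfsutil snap create snap1 /u01/acfs1
$ rm /u01/acfs1/sample.rpm
$ cp /u01/acfs1/.ACFS/snaps/snap1/sample.rpm /u01/acfs1/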

The acfsutil utility can also be used to gather information about a specific ACFS file system, as shown in the example below.

The acfsutil utility can likewise query the contents of the ACFS mount registry. Note that the purpose of the registry is similar to /etc/fstab on Linux file systems, that is, to ensure that ACFS file systems are mounted on node reboot or when Oracle Grid Infrastructure restarts.
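As a sketch, both commands take the mount point (or no argument at all for the full registry listing):

$ /sbin/acfsutil info fs /u01/acfs1
$ /sbin/acfsutil registry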

  6. Using OEM Grid Control to manage ACFS

OEM Grid control can also be used to create and manage ACFS volumes and file systems.

From the ASM page on OEM Grid Control navigate to Disk Groups tab.

We can see that the DATA disk group is mounted. Navigate to the ASM Cluster File System tab.

Here we have different options to manage ACFS. Press the Create button.

Press Create ASM Volume.

We will create a volume named ASM_VOLOEM with size 2GB within the DATA disk group.

The Show Command button shows the SQL statement to create the volume. Press Return to return to ACFS creation.

Upon successful volume creation we continue with ACFS creation. We specify the directory to serve as a mount point (pre-created prior to ACFS creation), the volume device and the volume label.

Show Command displays the steps to make the ACFS and to register it.

Return to the Create ASM Cluster File System page and press OK. Enter the Grid Infrastructure Linux credentials for the creation and registration of the ACFS.

The ACFS is created. If it takes time to get mounted we can opt to mount it explicitly. Select the newly created ACFS and press the Mount button.

Specify the mount point. Press Generate Command to see the command.

Press Return.

The /u01/acfs3 ACFS based on ASM_VOLOEM volume is mounted as shown below.

We can use OEM Grid Control to resize the ACFS /u01/acfs3 from 2GB to 6GB. Select the /u01/acfs3 ACFS and in the Actions choose Resize and press Go. On the page that appears enter 6GB size as shown. Press Show Command.

This is the Linux command that will be executed, as shown below. Press Return and then press OK to continue.

Enter the Oracle Grid Infrastructure owner credentials and press Continue.

OEM Grid Control resizes the ACFS as requested and we can see the result below. The ACFS file system /u01/acfs3 is now 6 GB, up from the initial size of 2 GB.

This concludes how to use the OEM Grid Control for managing ACFS.

Summary

We looked at ASM volume creation using asmca, asmcmd, OEM Grid Control and sqlplus. We created ACFS file systems on top of the ASM volumes using mkfs. We explored the acfsutil utility for ACFS management and registration. Finally, we resized an ACFS file system online and created a snapshot to restore files residing on ACFS, all with the acfsutil utility.

August 28, 2010 Posted by | oracle | 7 Comments

Deferred Segment Creation (Segment Creation On-Demand) in Oracle 11gR2

In the post we will look at the Deferred Segment Creation feature in Oracle 11g R2. In Oracle Database 11g Release 2, when creating a non-partitioned heap-organized table in a locally managed tablespace, table segment creation is deferred by default until the first row is inserted. In addition creation of segments is deferred for any LOB columns of the table, any indexes created implicitly as part of table creation, and any indexes subsequently explicitly created on the table. This functionality is enabled by default with the initialization parameter DEFERRED_SEGMENT_CREATION set to TRUE. This parameter can be set via ALTER SYSTEM or ALTER SESSION commands at instance or session level respectively. To enable deferred segment creation, compatibility must be set to ‘11.2.0’ or higher. You can disable deferred segment creation by setting the initialization parameter DEFERRED_SEGMENT_CREATION to FALSE. The new clauses SEGMENT CREATION DEFERRED and SEGMENT CREATION IMMEDIATE are available for the CREATE TABLE statement. These clauses override the setting of the DEFERRED_SEGMENT_CREATION initialization parameter. The advantages of this new space allocation method are:

  • A significant amount of disk space can be saved for applications that create hundreds or thousands of tables upon installation, many of which might never be populated.
  • The application installation time is reduced, because the creation of a table is a data dictionary operation only.

When you insert the first row into the table, the segments are created for the base table, its LOB columns, and its indexes. During segment creation, cursors on the table are invalidated. These operations have a small additional impact on performance. With this new allocation method, it is essential that you do proper capacity planning so that the database has enough disk space to handle segment creation when tables are populated. For more details, see the Oracle Database Administrator’s Guide.

Create a demo user for the testing.

SQL> create user demo identified by demo default tablespace users temporary tablespace temp;

User created.

SQL> grant resource to demo;

Grant succeeded.

SQL>

 

 

Deferred Segment creation enabled

Connect as the demo user and check the default settings.

SQL> connect demo/demo

Connected.

SQL> show parameter def

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     TRUE

SQL>

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      11.2.0.0.0

SQL>

 

Create a table using the default settings enabling deferred segment creation.

SQL> create table t1
( no number
     constraint t1_pk primary key,
  id number
     constraint t1_id unique,
  cust_name varchar2(32),
  l_o clob
)
lob( l_O )
store as t1_l_o_lob
(index t1_l_o_lobidx);

Table created.

SQL>

SQL>

 

Check for any created segments.

 

SQL> select segment_name, extent_id,bytes from user_extents order by segment_name;

no rows selected

SQL>
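Another quick check, not part of the original transcript, is the SEGMENT_CREATED column introduced in 11gR2; it shows NO for T1 at this point:

SQL> select table_name, segment_created from user_tables;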

 

Insert a row in the table.

SQL> insert into t1 values (1,1,'ERIC CLAPTON','LAYLA, COCAINE');

1 row created.

SQL> commit;

Commit complete.

SQL>

 

Check for any created segments. As expected, segments are created for the table, for the indexes supporting the primary key and unique constraints, and for the LOB and the LOB index.

SQL> column segment_name format a20

SQL> select segment_name, extent_id,bytes from user_extents order by segment_name;

 

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T1                            0      65536
T1_ID                         0      65536
T1_L_O_LOB                    0      65536
T1_L_O_LOBIDX                 0      65536
T1_PK                         0      65536

SQL>

 

Create a table explicitly enabling segment creation.

SQL> create table t1_1
( no number
     constraint t1_1_pk primary key,
  id number
     constraint t1_1_id unique,
  cust_name varchar2(32),
  l_o clob
) SEGMENT CREATION IMMEDIATE
lob( l_O )
store as t1_1_l_o_lob
(index t1_1_l_o_lobidx);

Table created.

SQL>

 

We see that specifying explicit segment creation overrides the DEFERRED_SEGMENT_CREATION init parameter.

SQL> select segment_name, extent_id,bytes from user_extents order by segment_name;

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T1                            0      65536
T1_1                          0      65536
T1_1_ID                       0      65536
T1_1_L_O_LOB                  0      65536
T1_1_L_O_LOBIDX               0      65536
T1_1_PK                       0      65536
T1_ID                         0      65536
T1_L_O_LOB                    0      65536
T1_L_O_LOBIDX                 0      65536
T1_PK                         0      65536

10 rows selected.

SQL>

 

Deferred Segment creation disabled

SQL> show parameter def

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     TRUE

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      11.2.0.0.0

SQL>

SQL> alter session set deferred_segment_creation=false;

Session altered.

SQL> show parameter def

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     FALSE

SQL>

 

Create a table

SQL> create table t1
( no number
     constraint t1_pk primary key,
  id number
     constraint t1_id unique,
  cust_name varchar2(32),
  l_o clob
)
lob( l_O )
store as t1_l_o_lob
(index t1_l_o_lobidx);

Table created.

SQL>

 

Check for any created segments. Segments are created immediately, which is what we were used to seeing in pre-11gR2 releases.

 

 

SQL> select segment_name, extent_id,bytes from user_extents order by segment_name;

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T1                            0      65536
T1_ID                         0      65536
T1_L_O_LOB                    0      65536
T1_L_O_LOBIDX                 0      65536
T1_PK                         0      65536

SQL>

 

Create a table specifying the deferred segment creation option.

 

SQL> create table t1_1
( no number
     constraint t1_1_pk primary key,
  id number
     constraint t1_1_id unique,
  cust_name varchar2(32),
  l_o clob
) SEGMENT CREATION DEFERRED
lob( l_O )
store as t1_1_l_o_lob
(index t1_1_l_o_lobidx);

Table created.

SQL>

 

Check for any created segments. No segments exist for T1_1; the SEGMENT CREATION DEFERRED clause overrides the DEFERRED_SEGMENT_CREATION init parameter.

SQL> select segment_name, extent_id,bytes from user_extents order by segment_name;

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T1                            0      65536
T1_ID                         0      65536
T1_L_O_LOB                    0      65536
T1_L_O_LOBIDX                 0      65536
T1_PK                         0      65536

SQL>

 

Insert a row in the table.

 

SQL> insert into t1_1 values (1,1,'MARILLION','LAVENDER,NEVERLAND,KAYLEIGH,INCOMUNICADO,ASSASSING, SHE CHAMELEON');

1 row created.

SQL> commit;

Commit complete.

SQL>

 

We see that inserting a row creates the segments.

SQL> select segment_name, extent_id,bytes from user_extents order by segment_name;

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T1                            0      65536
T1_1                          0      65536
T1_1_ID                       0      65536
T1_1_L_O_LOB                  0      65536
T1_1_L_O_LOBIDX               0      65536
T1_1_PK                       0      65536
T1_ID                         0      65536
T1_L_O_LOB                    0      65536
T1_L_O_LOBIDX                 0      65536
T1_PK                         0      65536

10 rows selected.

SQL>

 

Source:

Oracle Database Administrator’s Guide.

MOS Note 887962.1

August 14, 2010 Posted by | oracle | 1 Comment

How to clean up after a failed 11g CRS install. What is new in 11g R2?

Although the title sounds much the same as a previous post, How to Clean Up After a Failed 11g CRS Install in Linux, here we look at the specifics of Oracle 11gR2.

In Oracle 11gR1, if the Oracle Clusterware installation failed, a manual cleanup was required. The steps to re-install Oracle Clusterware were:

  1. Manually cleanup after the failed Oracle Clusterware install
  2. Fix the problem
  3. Restart the Oracle Clusterware installation.

In Oracle 11gR2 the installation and configuration are more flexible and there is a clearer line between installation and configuration. A new script, rootcrs.pl, is available to de-configure and clean the Grid Infrastructure installation without removing the binaries. This script also cleans the OCR and voting disks created on ASM. It allows you to clean up the Grid Infrastructure without removing the binaries, fix the problems and re-run root.sh.

Manual Cleanup for RAC 11gR1

The Oracle provided scripts rootdelete.sh and rootdeinstall.sh remove Oracle Clusterware from your system. After running these scripts, run Oracle Universal Installer to remove the Oracle Clusterware home.
The rootdelete.sh script should be run from the Oracle Clusterware home on each node. It stops the Oracle Clusterware stack, removes inittab entries, and deletes some of the Oracle Clusterware files. The rootdeinstall.sh script should be run on the local node only, after rootdelete.sh has been run on all nodes of the cluster. Use this command either to remove the Oracle Clusterware OCR file, or to downgrade your existing installation. If for some reason there is no access to the scripts, see the post How to Clean Up After a Failed 11g CRS Install in Linux for how to remove the inittab entries and Oracle Clusterware manually. Although the method described there also applies to Oracle 11gR2, the new tools suffice most of the time.

What is new in Oracle 11g R2?

In Oracle 11gR2 the rootcrs.pl tool is provided, allowing de-configuration without deinstallation. After Oracle Clusterware is de-configured we can fix the problem and re-run root.sh to restart the Oracle Clusterware configuration.

The new deinstall utility removes the binaries from the server, in a similar way to the OUI in previous Oracle versions.

So if root.sh fails while installing the Oracle Grid Infrastructure, we can gather the error messages from the logs, de-configure Oracle Clusterware and troubleshoot the reason for the failure. After de-configuring Oracle Clusterware and troubleshooting successfully, we can continue the configuration by re-running root.sh if the problem can be fixed. Thus, in the case of errors such as wrong permissions, we can save the time of re-installing the Oracle Clusterware binaries. If necessary, we still have the option to completely remove the Oracle Clusterware binaries using the deinstall utility. Prior to running deinstall, Oracle Clusterware must be de-configured using the rootcrs.pl tool. A successful deinstall preceded by a successful de-configuration gives a pristine environment in which to restart the Grid Infrastructure installation, once the failed install has been troubleshot using the information gathered from the logs.

Deconfigure Oracle Clusterware without removing the binaries:

  • Log in as the root user on a node where you encountered an error. Change directory to $GRID_HOME/crs/install. For example:

    # cd $GRID_HOME/crs/install

  • Run rootcrs.pl with the -deconfig -force flags on all but the last node.

    # perl rootcrs.pl -deconfig -force

  • If you are deconfiguring Oracle Clusterware on all nodes in the cluster, then on the last node add the -lastnode flag, which completes the deconfiguration of the cluster, including the OCR and the voting disks.

    # perl rootcrs.pl -deconfig -force -lastnode

 

Deinstall Command for Oracle Clusterware and ASM

 

In Oracle 11gR2 the binaries cannot be removed using the OUI. Instead Oracle provides the deinstall utility, which removes Oracle Clusterware and ASM from the server. The deinstallation tool (deinstall) stops the Oracle software and removes the Oracle software and configuration files from the operating system. It is available on the installation media before installation and in the Oracle home directories after installation, located in $ORACLE_HOME/deinstall. You can use it to remove failed or incomplete installations, and it is also available as a separate download from the Oracle Technology Network (OTN) web site.

As the deinstall command runs, you are prompted to provide the home directory of the Oracle software that you want to remove from your system. Provide additional information as prompted. To run the deinstall command from an Oracle grid infrastructure for a cluster home, enter the following command.

$ cd /u01/app/11.2.0/grid/deinstall/ 
$ ./deinstall 

You can generate a deinstall parameter file by running the deinstall command using the -checkonly flag before you run the command to deinstall the home, or you can use the response file template and manually edit it to create the parameter file to use with the deinstall command.
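As a sketch under the paths used above (the response file location is a placeholder):

$ cd /u01/app/11.2.0/grid/deinstall/
$ ./deinstall -checkonly
$ ./deinstall -paramfile /path/to/deinstall_response.rsp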


August 12, 2010 Posted by | oracle | 8 Comments

MEMORY_TARGET not supported on this system ORA-00845

Starting with Oracle 11g, Automatic Memory Management (AMM) is configured with the MEMORY_TARGET and MEMORY_MAX_TARGET parameters. AMM manages both the SGA and the PGA. The user specifies a value for MEMORY_TARGET, and the maximum value to which it can grow is limited by MEMORY_MAX_TARGET. Once MEMORY_TARGET is set, Oracle determines the values for SGA_TARGET and PGA_AGGREGATE_TARGET; the user does not need to set these parameters and can rely on Oracle to control them. Prior to Oracle 11g the user was responsible for setting SGA_TARGET and PGA_AGGREGATE_TARGET. On Linux based systems the shared memory file system has to be mounted on /dev/shm, and MEMORY_MAX_TARGET needs to be less than the size of the shared memory mounted there. Make sure the shared memory file system is big enough for Automatic Memory Management to work.

How do you mount the shared memory file system? Execute the following as root.

# umount tmpfs
# mount -t tmpfs shmfs -o size=4500m /dev/shm

How do you make the shared memory file system size persistent across reboots? Add an entry to /etc/fstab.

shmfs /dev/shm tmpfs size=4500m 0 0

The ORA-00845 error usually appears if MEMORY_MAX_TARGET is set to a value greater than or equal to the amount of memory allocated to /dev/shm.

Make sure that /dev/shm is mounted and that there is sufficient memory in the shared memory file system to support AMM in Oracle.

Make sure that the Oracle parameters MEMORY_MAX_TARGET and MEMORY_TARGET are less than the memory allocated to the shared memory file system on /dev/shm.
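A quick sanity check before adjusting the parameters; the sizes below match the 4500m mount example above and are otherwise arbitrary, and MEMORY_MAX_TARGET is static so the change requires an instance restart:

$ df -h /dev/shm
SQL> alter system set memory_max_target=4G scope=spfile;
SQL> alter system set memory_target=3500M scope=spfile;
SQL> shutdown immediate
SQL> startup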

August 12, 2010 Posted by | oracle | 1 Comment

Oracle 11gR2 imp utility

Since Oracle introduced Data Pump export/import in 10g, the conventional imp/exp utilities have received little new functionality. I discovered recently that a DATA_ONLY option exists in version 11gR2, allowing you to import only table data. In older releases the same import used to be done with the IGNORE=Y and ROWS=Y options instead of DATA_ONLY.

[oracle@oel55 ~]$ imp -help

 

Import: Release 11.2.0.1.0 – Production on Tue Aug 10 10:03:38 2010

 

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

 

 

 

You can let Import prompt you for parameters by entering the IMP

command followed by your username/password:

 

Example: IMP SCOTT/TIGER

 

Or, you can control how Import runs by entering the IMP command followed

by various arguments. To specify parameters, you use keywords:

 

Format: IMP KEYWORD=value or KEYWORD=(value1,value2,…,valueN)

Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N

or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

 

USERID must be the first parameter on the command line.

 

Keyword    Description (Default)            Keyword       Description (Default)
--------------------------------------------------------------------------

USERID username/password FULL import entire file (N)

BUFFER size of data buffer FROMUSER list of owner usernames

FILE input files (EXPDAT.DMP) TOUSER list of usernames

SHOW just list file contents (N) TABLES list of table names

IGNORE ignore create errors (N) RECORDLENGTH length of IO record

GRANTS import grants (Y) INCTYPE incremental import type

INDEXES import indexes (Y) COMMIT commit array insert (N)

ROWS import data rows (Y) PARFILE parameter filename

LOG log file of screen output CONSTRAINTS import constraints (Y)

DESTROY overwrite tablespace data file (N)

INDEXFILE write table/index info to specified file

SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)

FEEDBACK display progress every x rows(0)

TOID_NOVALIDATE skip validation of specified type ids

FILESIZE maximum size of each dump file

STATISTICS import precomputed statistics (always)

RESUMABLE suspend when a space related error is encountered(N)

RESUMABLE_NAME text string used to identify resumable statement

RESUMABLE_TIMEOUT wait time for RESUMABLE

COMPILE compile procedures, packages, and functions (Y)

STREAMS_CONFIGURATION import streams general metadata (Y)

STREAMS_INSTANTIATION import streams instantiation metadata (N)

DATA_ONLY import only data (N)

VOLSIZE number of bytes in file on each volume of a file on tape

 

The following keywords only apply to transportable tablespaces

TRANSPORT_TABLESPACE import transportable tablespace metadata (N)

TABLESPACES tablespaces to be transported into database

DATAFILES datafiles to be transported into database

TTS_OWNERS users that own data in the transportable tablespace set

 

Import terminated successfully without warnings.

[oracle@oel55 ~]$
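As a hedged example of the new option, with a placeholder dump file name, a DATA_ONLY import of the T1 table would look like this, compared with the pre-11gR2 equivalent:

[oracle@oel55 ~]$ imp demo/demo file=exp_t1.dmp tables=t1 data_only=y
[oracle@oel55 ~]$ imp demo/demo file=exp_t1.dmp tables=t1 ignore=y rows=y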

August 11, 2010 Posted by | oracle | Leave a comment

Oracle 11g R1 / R2 Real Application Clusters Handbook

The book Oracle 11g R1 / R2 Real Application Clusters Handbook, which I coauthored, has been published and is available from Packt Publishing. A synopsis of the book's content and a preview chapter, 'High Availability', are available for download here. The table of contents can be browsed here.

 

August 3, 2010 Posted by | oracle | 1 Comment