Guenadi N Jilevski's Oracle BLOG

Oracle RAC, DG, EBS, DR and HA DBA BLOG

Updating a table based on another table


Quite often we need to update one or more columns of a table based on the data in another table's columns. There are several ways to do this.

Let's set up an example by creating two tables and inserting some rows.

create table table_1(id number, code varchar2(20));
insert into table_1 values(1,'First Row');
insert into table_1 values(2,'Rows to be updated');
create table table_2(id number, code varchar2(20));
insert into table_2 values(2,'Second Row');

After running the statements above, let's look at the data in the tables.


SQL> select * from table_1;

        ID CODE
---------- --------------------
         1 First Row
         2 Rows to be updated

SQL> select * from table_2;

        ID CODE
---------- --------------------
         2 Second Row


Now the requirement is to update table_1 based on the data in table_2's id column: if a corresponding id exists in table_1, that row's code will be updated.

Method 01:


SQL> update table_1 t1 set t1.code=
(select t2.code from table_2 t2 where t2.id=t1.id)
where t1.id in (select id from table_2);

1 row updated.

SQL> select * from table_1;

        ID CODE
---------- --------------------
         1 First Row
         2 Second Row


Method 02:


SQL> update table_1 t1 set t1.code=
(select t2.code from table_2 t2 where t2.id=t1.id)
where exists
(select 1 from table_2 t2 where t2.id=t1.id);

1 row updated.

SQL> select * from table_1;

        ID CODE
---------- --------------------
         1 First Row
         2 Second Row



Method 03:

To apply method 03 you need a primary key or unique constraint on the join column of the source table, i.e. the table from which we are fetching the data for the update. Without that constraint the source table is not key-preserved: the join could return multiple rows per target row, and Oracle rejects the update as ambiguous.
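As a quick illustration (a sketch only, assuming the constraint has not yet been created), the join-view update is rejected because table_2 is not key-preserved:

SQL> update
     (select t1.code col1, t2.code col2 from table_1 t1
      JOIN table_2 t2 ON t1.id=t2.id)
     set col1=col2;

ORA-01779: cannot modify a column which maps to a non key-preserved table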

So, I am adding a unique constraint on table_2.


SQL> alter table table_2 add constraint table_2_UK UNIQUE (id);

Table altered.

SQL> update
(select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id)
set col1=col2;

1 row updated.

SQL> select * from table_1;

        ID CODE
---------- --------------------
         1 First Row
         2 Second Row

In most cases method 03 will perform better than the other methods.

January 24, 2008 | oracle

Optimizer Hints in Oracle 11g


Optimizer hints are an interesting feature of Oracle. On one hand, the documentation goes out of its way to tell you to use them sparingly, but on the other hand, you can choose from more than 60 hints in 10g and more than 70 in 11g (64 and 71, to be precise). That’s a lot to choose from, and even with a decent working knowledge of them, you probably cannot accurately describe more than 15 to 20 of them.

You would be correct to assume that the hints are categorized, but even with that assumption, can you name the categories? If not by the category names Oracle uses, then, when asked about different tuning or usage scenarios, a moderately experienced DBA can still hit upon the purpose or function of a category. These questions aren't meant to be "Trivial Pursuit, the Oracle edition," but rather a means to identify and clarify some aspects of optimizer hints.

The optimizer goals are also values for the OPTIMIZER_MODE initialization parameter, so there is a minor bit of crossover between a statement-level hint and a system-level setting.
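As a minimal sketch of that crossover (the table and column names here are purely illustrative), the same first-rows goal can be expressed at session level or at statement level:

-- session level: an optimizer goal set as the OPTIMIZER_MODE
alter session set optimizer_mode = FIRST_ROWS_10;

-- statement level: the same goal as a hint
select /*+ FIRST_ROWS(10) */ order_id, order_date
  from orders
 where customer_id = 42;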

The list below shows the hints (from 11g) by category, with the count for each category. Hints marked with an asterisk are deprecated and are not counted in the 71 mentioned earlier; the hints new in 11g are included in the counts, and RULE is no longer supported.

Optimization Goals and Approaches (2): ALL_ROWS, FIRST_ROWS, RULE (no longer supported)

Access Path Hints (17): CLUSTER, FULL, HASH, INDEX, NO_INDEX, INDEX_ASC, INDEX_DESC, INDEX_COMBINE, INDEX_JOIN, INDEX_FFS, INDEX_SS, INDEX_SS_ASC, INDEX_SS_DESC, NATIVE_FULL_OUTER_JOIN, NO_NATIVE_FULL_OUTER_JOIN, NO_INDEX_FFS, NO_INDEX_SS

Other (20): APPEND, NOAPPEND, CACHE, NOCACHE, CURSOR_SHARING_EXACT, DRIVING_SITE, DYNAMIC_SAMPLING, MODEL_MIN_ANALYSIS, MONITOR, NO_MONITOR, OPT_PARAM, PUSH_PRED, NO_PUSH_PRED, PUSH_SUBQ, NO_PUSH_SUBQ, PX_JOIN_FILTER, NO_PX_JOIN_FILTER, QB_NAME, RESULT_CACHE, NO_RESULT_CACHE

Join Operation (7): USE_HASH, NO_USE_HASH, USE_MERGE, NO_USE_MERGE, USE_NL, USE_NL_WITH_INDEX, NO_USE_NL

Join Order (2): ORDERED, LEADING

Query Transformation (13): FACT, NO_FACT, MERGE, NO_MERGE, NO_EXPAND, USE_CONCAT, REWRITE, NO_REWRITE, NOREWRITE*, UNNEST, NO_UNNEST, STAR_TRANSFORMATION, NO_STAR_TRANSFORMATION, NO_QUERY_TRANSFORMATION

XML (2): NO_XMLINDEX_REWRITE, NO_XML_QUERY_REWRITE

Parallel Execution (5): PARALLEL, NOPARALLEL*, NO_PARALLEL, PARALLEL_INDEX, NO_PARALLEL_INDEX, NOPARALLEL_INDEX*, PQ_DISTRIBUTE

The deprecated hints are just older names that were renamed to match the NO_<hint> naming format.

Several changes are found in the installation footprint for 11g. One major addition is that of Oracle Warehouse Builder. Another is the inclusion of SQL Developer. After starting SQL Developer, look over on the top right part of the window and click the “Snippets” button. If the button is not present, make it visible via the View > Snippets menu. Change the drop-down selector to Optimizer Hints. The default list contains 56 hints. Several index hints, MONITOR, the NATIVE ones, OPT_PARAM, to name a few, are not listed. In other words, none of the new hints in 11g are included in SQL Developer. That doesn’t mean they can’t be used – the point is that the syntax is not automatically included for you. However, you do have the option of adding more to the library.

Hint Overkill

Let's come back to the admonition about using hints sparingly. Are you a better coder than what Oracle comes up with via an execution plan? I would say there are plenty of developers and DBAs (i.e., whoever is coding SQL) who can match what Oracle comes up with without ever having to use a hint. And why should they have to? With recent, valid statistics and SQL crafted along best practices and recommendations, the optimizer more often than not needs no pushing or nudging.

Here is an extract from an AWR report covering a 24-hour period. It’s more of a big picture across the day as opposed to something more granular like an hour or so. The top five timed events are pretty typical (as in not obscure), but how much of them is good (or bad)?


There is an order of magnitude difference between db file sequential read (index usage) and its counterpart, db file scattered read (table scan). Is that good or bad? Let's see what's up with background wait events. Why would a background wait event be of interest here?


This system has millions of waits related to log files. What’s happening with those? Using Toad (or a query where you can get the same information), how frequently are the log files switching?


During business hours, this database is being slammed with log file switches. What generates input into the redo logs? Not only DML on tables, but also what takes place with indexes, that is, index maintenance.

The investigative path, so far, started with a report. In actuality, it could have started with complaints from users about an application being slow, or appearing to have become slower lately. There was a big number for a common event. The big number by itself doesn't necessarily mean anything until it is put into context with related events, items, or statistics. Heavy sequential reads combined with a high redo log switch frequency suggest there could be a problem with too many indexes, in the sense that redundant indexes are being updated along with the essential or set-covering ones. To confirm this hypothesis, we need to look at the SQL statements, and this is where we come back to the use of hints.

In SQL ordered by Gets, the top lines bear investigating.


And further down there is a lot of the same SQL text (a lot more than what the picture shows, but you get the idea).


Drilling down to the SQL ID, the jobs all point back to the same package and subprograms. Crack open the code and it becomes almost a foregone conclusion where part of the problem lay: lots and lots of hints, and bad ones at that (cursor name is partially obscured for privacy).


Using the INDEX (table name or alias, plus an optional indexspec) hint, now that we formally know it is an access path hint, is based on what knowledge? That you want to tell Oracle to do what it wants to do in the first place (find the best execution plan for you), or to tell Oracle that it must use an index (if one exists) when, in fact, a full table scan would be more efficient? Or consider this: some developer is coding index hints that are essentially useless. Case in point: explain plans with and without an index hint are shown below.

The base table is MY_OBJECTS, which is a CTAS from ALL_OBJECTS (the table is analyzed after creating an index named IDX_MY_OBJ on OBJECT_TYPE).

SQL> explain plan for
  2  select /*+ index (a) */
  3         object_name
  4  from   my_objects a
  5  where  object_type = 'INDEXTYPE';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------
Plan hash value: 28651213

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |    18 |   648 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| MY_OBJECTS |    18 |   648 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_MY_OBJ |    18 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_TYPE"='INDEXTYPE')

14 rows selected.

SQL> explain plan for
  2  select object_name
  3  from   my_objects
  4  where  object_type = 'INDEXTYPE';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------
Plan hash value: 28651213

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |    18 |   648 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| MY_OBJECTS |    18 |   648 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_MY_OBJ |    18 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_TYPE"='INDEXTYPE')

14 rows selected.

No surprise, Oracle wanted to use that index anyway (it’s very selective). Let’s drop the index and run it again.

SQL> drop index idx_my_obj;

Index dropped.

SQL> explain plan for
  2  select object_name
  3  from   my_objects
  4  where  object_type = 'INDEXTYPE';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------
Plan hash value: 880823944

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |    18 |   648 |   118   (3)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| MY_OBJECTS |    18 |   648 |   118   (3)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_TYPE"='INDEXTYPE')

We incurred a full table scan, but it wasn't too painful as the table is not that big. Now, create an index that is of no use for this predicate (OBJECT_TYPE is not among its columns), and force Oracle to consider all indexes on that table (there is only one, but one is enough).

SQL> create index idx_big on my_objects(owner, object_id, data_object_id, timestamp, object_name);

Index created.

SQL> exec dbms_stats.gather_table_stats('SCOTT','MY_OBJECTS',cascade => true);

PL/SQL procedure successfully completed.

SQL> explain plan for
  2  select /*+ index (a) */
  3         object_name
  4  from   my_objects a
  5  where  object_type = 'INDEXTYPE';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
Plan hash value: 85280455

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |     1 |    36 |  1487   (1)| 00:00:18 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| MY_OBJECTS |     1 |    36 |  1487   (1)| 00:00:18 |
|   2 |   INDEX FULL SCAN           | IDX_BIG    | 40859 |       |   392   (1)| 00:00:05 |
------------------------------------------------------------------------------------------

Although cost isn’t always an accurate discriminator between plans, when you consider rows evaluated, bytes, cost and time, you can plainly see that forcing a relatively bad index on a statement, especially by being lazy (more on that in a moment) is, well, dumb.

Using INDEX by itself means someone probably doesn’t know what they’re doing. In fact, this person may be doing more damage than good by using an inappropriate hint. Damage in this case refers to poor performance and unnecessary resource usage. Those archived redo logs take up space too.
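If a hint truly is warranted, at least name the index (or supply an indexspec) so the intent is explicit. A minimal sketch reusing the earlier MY_OBJECTS example (assuming IDX_MY_OBJ still exists):

select /*+ INDEX(a idx_my_obj) */ object_name
  from my_objects a
 where object_type = 'INDEXTYPE';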

What’s even worse about this database is that tables are over-indexed. There are tables with an index based on columns A, B, and C, an index on C, B, A, an index on B, C, and yet another on A, C and B. What does this represent? A DBA who has no idea how to tune and thinks that matching an index to every WHERE clause contained in the code ready to be put into production will make things better. The truth is, the DBA made a significant contribution to the poor performance of this database.
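One way to spot that kind of redundancy (a sketch; substitute your own owner and table name) is simply to list each index's columns in order and eyeball the overlap:

select index_name, column_position, column_name
  from dba_ind_columns
 where table_owner = 'SCOTT'
   and table_name  = 'MY_OBJECTS'
 order by index_name, column_position;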

Conclusion

Hints, as Oracle recommends, should be used sparingly. When and where is that? Sorry to be ambiguous, but the answer is: it depends. For whatever reason, using hint X may make a difference (for the better, obviously) in an execution plan. How do you discover this? Under some narrow conditions, and also by trial and error. One such situation is when the plan is based on bad statistics that cannot be refreshed, so you try something hint-wise. It just depends.

The take-away here is this: Oracle has lots of hints to choose from. Know what they are and how they differ from one another before tossing them into production (and don't forget to stay abreast of changes in what's available in the first place). With good statistics, you normally need not include them in DML and SELECT statements at all.

January 18, 2008 | oracle

Looking at ORA-4031


I will share my approach for resolving the ORA-4031 error. First, let's see what ORA-4031 actually means.

04031, 00000, "unable to allocate %s bytes of shared memory (\"%s\",\"%s\",\"%s\",\"%s\")"
// *Cause: More shared memory is needed than was allocated in the shared pool.
// *Action: If the shared pool is out of memory, either use the
//          dbms_shared_pool package to pin large packages,
//          reduce your use of shared memory, or increase the amount of
//          available shared memory by increasing the value of the
//          INIT.ORA parameters "shared_pool_reserved_size" and
//          "shared_pool_size".
//          If the large pool is out of memory, increase the INIT.ORA
//          parameter "large_pool_size".

The ORA-4031 error is encountered when there is not sufficient memory available in the shared pool or large pool to service a memory request. In fact, ORA-4031 can be encountered in any of these areas:

1) Shared pool
2) Large pool
3) Java pool
4) Streams pool (new in 10g)

This brings us to the first step in our pursuit of the cause of ORA-4031.

Step 1: Identify the pool associated with the error

Like any other Oracle error, we first need to check the database alert log file and any trace files generated at that time in user_dump_dest or background_dump_dest. There are cases, though, when the ORA-4031 error is not recorded in alert.log. Starting from 9.2.0.5, you should be able to see trace files generated in the udump/bdump location (depending on whether a background process or a user process encountered the error).

ORA-4031 basically has three arguments:

1) Size requested
2) Area
3) Comment

ORA-4031: unable to allocate <size> bytes of shared memory ("<area>","<comment>",...)

e.g. ORA-4031: unable to allocate 2196 bytes of shared memory

("shared pool","JOB$","KGLS heap","KGLS MEM BLOCK")

So we see from the above that the error occurred in the shared pool. This is a very important step because, for the other pools, ORA-4031 errors are usually resolved simply by increasing JAVA_POOL_SIZE or STREAMS_POOL_SIZE.

In this article I will mostly discuss errors encountered in the shared pool, with a small section on the large pool.

Step 2: What is the value of SHARED_POOL_SIZE?

Current settings for shared-pool-related parameters can be found using the query below:

SQL> col name for a50
SQL> col value for a10
SQL> select nam.ksppinm NAME, val.KSPPSTVL VALUE
     from x$ksppi nam, x$ksppsv val
     where nam.indx = val.indx and nam.ksppinm like '%shared_pool%'
     order by 1;

NAME                                               VALUE
-------------------------------------------------- ----------
__shared_pool_size                                 654311424
_dm_max_shared_pool_pct                            1
_enable_shared_pool_durations                      TRUE
_io_shared_pool_size                               4194304
_shared_pool_max_size                              0
_shared_pool_minsize_on                            FALSE
_shared_pool_reserved_min_alloc                    4400
_shared_pool_reserved_pct                          5
shared_pool_reserved_size                          19293798
shared_pool_size                                   0

You can use the following note to check the minimum shared pool size:

Note 105813.1 – SCRIPT TO SUGGEST MINIMUM SHARED POOL SIZE

In 10g, you can use the SGA_TARGET parameter to manage the sizes of the shared pool, large pool, streams pool, Java pool and buffer cache (DB_CACHE_SIZE). The following note can be used for 10g:

Note 270935.1 – Shared pool sizing in 10g

Even with SGA_TARGET in use, it is recommended to set a lower limit for the SHARED_POOL_SIZE parameter.
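For example (a sketch; 300M is only an illustrative figure), with automatic SGA management in place this sets a floor below which the shared pool will not be shrunk:

alter system set shared_pool_size = 300M scope=both;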

You can also use the V$LIBRARYCACHE view (the AWR/Statspack report also has this section) and check whether there are a lot of reloads happening for the SQL AREA and TABLE/PROCEDURE namespaces. That is an indication that the shared pool is not appropriately sized. If you see a high value for invalidations, it could be due to executing DDL against objects, gathering statistics (DBMS_STATS), or granting/revoking privileges.
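A simple way to check this (a sketch):

select namespace, gets, pins, reloads, invalidations
  from v$librarycache
 order by reloads desc;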

A high value for hard parses in the AWR/Statspack report can also be caused by shared pool sizing issues, but it cannot be used as the sole criterion, since heavy hard parsing can also be caused by the use of literals and by the presence of high version counts/child cursors. This is discussed in the sections Using Literals Instead of Bind Variables and Multiple Child Cursors/High Version Count.

Some more key points related to shared pool sizing:

- Shared pool memory consumption varies from release to release.

- A 10g instance might fail with a 300 MB shared pool even though 8i was working fine with it.

- Some memory is allocated to fixed structures. Parameters like db_files, open_cursors and processes contribute to this overhead. When you use the SHOW SGA command, you will see that "Variable Size" is more than the sum of shared pool + large pool + Java pool; the difference is attributable to these parameters.

Please note that if you specify a low value for SGA_MAX_SIZE, Oracle will bump it up to a higher value to accommodate the overhead memory.

Starting from 10g, the overhead memory is accommodated within SHARED_POOL_SIZE.

e.g. if you specify SHARED_POOL_SIZE as 200 MB and your internal overhead is 100 MB, then the shared pool memory actually available to the instance is only 100 MB.
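To see what is actually sitting inside the shared pool, including the startup overhead allocations, something like the following can be used (a sketch; component names vary slightly between releases):

SQL> show sga

SQL> select name, round(bytes/1024/1024) mb
       from v$sgastat
      where pool = 'shared pool'
      order by bytes desc;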

You can read Note:351018.1 – Minimum for SHARED_POOL_SIZE Parameter in 10.2 Version for more information.

Shared Pool Fragmentation

Shared pool fragmentation can also cause ORA-4031. It occurs when your queries are not being shared and you see a lot of reloads and hard parses in the Statspack report. In this case, check the request failure size:

ORA-4031: unable to allocate 16400 bytes of shared memory

We see that the failure size is 16K. In this case, check whether you are using the SHARED_POOL_RESERVED_SIZE parameter to define a shared pool reserved area. The memory allocation algorithm first tries to get memory from the shared pool; if the requested size is greater than _shared_pool_reserved_min_alloc, it gets the memory from the shared pool reserved area instead. By default this threshold is 4400 bytes. If the failure size is, say, 4200, you can try reducing the value of this parameter to reduce the occurrences, though this is not a complete solution. You can also identify shared pool fragmentation by querying X$KSMSP:

select 'sga heap('||KSMCHIDX||',0)' sga_heap, ksmchcom ChunkComment,
       decode(round(ksmchsiz/1000),0,'0-1K',1,'1-2K',2,'2-3K',3,'3-4K',
              4,'4-5K',5,'5-6k',6,'6-7k',7,'7-8k',8,'8-9k',9,'9-10k','> 10K') "Size",
       count(*), ksmchcls "Status", sum(ksmchsiz) "Bytes"
from   x$ksmsp
where  KSMCHCOM = 'free memory'
group by 'sga heap('||KSMCHIDX||',0)', ksmchcom, ksmchcls,
       decode(round(ksmchsiz/1000),0,'0-1K',1,'1-2K',2,'2-3K',3,'3-4K',
              4,'4-5K',5,'5-6k',6,'6-7k',7,'7-8k',8,'8-9k',9,'9-10k','> 10K');

SGA_HEAP       CHUNKCOMMENT     Size    COUNT(*) Status          Bytes
-------------- ---------------- ----- ---------- ---------- ----------
sga heap(1,0)  free memory      > 10K        393 free         11296600
sga heap(1,0)  free memory      3-4K         256 free           781928
sga heap(1,0)  free memory      8-9k          63 free           510656
sga heap(1,0)  free memory      6-7k          60 free           367076
sga heap(1,0)  free memory      2-3K         555 free          1071448
sga heap(1,0)  free memory      1-2K        1818 free          1397244
sga heap(1,0)  free memory      0-1K        3418 free           348344
sga heap(1,0)  free memory      9-10k         30 free           269820
sga heap(1,0)  free memory      4-5K         154 free           640332
sga heap(1,0)  free memory      5-6k          75 free           381920
sga heap(1,0)  free memory      > 10K         39 R-free        8302632
sga heap(1,0)  free memory      7-8k          22 free           152328

If you see a lot of memory chunks in the 1-4K buckets and very few in the buckets above 5K, that indicates shared pool fragmentation. In this case you also need to look at hard parses (Statspack/AWR report). This is discussed in the sections Using Literals Instead of Bind Variables and Multiple Child Cursors/High Version Count.

Note: It is not recommended to run queries on X$KSMSP frequently, as this can lead to latching issues (I have seen people scheduling them as hourly Oracle jobs; this should be avoided).
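A lighter-weight alternative (a sketch) is to look at V$SHARED_POOL_RESERVED, where a non-zero REQUEST_FAILURES together with a small LAST_FAILURE_SIZE also points to fragmentation:

select free_space, avg_free_size, request_failures, last_failure_size
  from v$shared_pool_reserved;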

Step 3: Is it MTS? If yes, are you using LARGE_POOL_SIZE?

LARGE_POOL_SIZE is recommended for many Oracle features that are designed to use large shared memory chunks, such as:

- Recovery Manager (RMAN)

- Parallel processing/IO slave processing (e.g. the px msg pool consuming a lot of memory)

- Shared server configuration

In shared server mode (MTS), the UGA is allocated from the shared pool if the large pool is not configured, which can itself cause ORA-4031. Ensure that you are using the LARGE_POOL_SIZE parameter or SGA_TARGET.
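A quick check of the current large pool configuration and usage (a sketch):

SQL> show parameter large_pool_size

SQL> select pool, name, bytes
       from v$sgastat
      where pool = 'large pool'
      order by bytes desc;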

Step 4: Do you have multiple subpools?

The subpool concept was introduced in 9i R2: instead of one big shared pool, memory is divided into several subpools. To determine the number of subpools, you can use the query below:

SQL> select nam.ksppinm NAME, val.KSPPSTVL VALUE
     from x$ksppi nam, x$ksppsv val
     where nam.indx = val.indx and nam.ksppinm like '%kghdsidx%'
     order by 1;

NAME                           VALUE
------------------------------ --------------------
_kghdsidx_count                4

The above query indicates that there are 4 subpools.

If you get ORA-4031 and a trace file is generated, the trace file can also be used to find the number of subpools configured. To do this, search for "Memory Utilization of Subpool", e.g.

Memory Utilization of Subpool 1
========================
free memory 10485760
Memory Utilization of Subpool 2
========================
free memory 20971520

This means that two subpools are configured for your database.

Oracle suggests 500 MB as a minimum subpool size. I would say that if you are not facing serious shared pool latch contention, 2 subpools should be sufficient (though I believe most contention issues can be solved by tuning the application). To change the number of subpools, set the parameter _kghdsidx_count in the pfile or spfile and restart the database.

In case of Spfile

alter system set "_kghdsidx_count"=1 scope=spfile;

A restart of the database is required because this is a static parameter. Please note that the large pool has the same number of subpools as the shared pool, so you might also need to change the number of subpools if you are observing ORA-4031 in the large pool.

Step 5: Is the SQL area consuming a lot of memory?

This could also be categorized as "bad application design", since most of these cases are caused by the way the application has been designed. A high value for the SQL area in V$SGASTAT (or in the AWR/Statspack report) can be attributed to the following causes.

Using Literals Instead of Bind Variables

This is the most common cause of ORA-4031. Tom Kyte explains the consequences of not using bind variables in one of his posts:

If you do not use bind variables and you flood the server with
hundreds/thousands of unique queries you will
- run dog slow
- consume a ton of RAM (and maybe run out)
- not scale beyond a handful of users, if that
among other really bad side effects.

The above statement is true, and you can find a lot of cases where not using bind variables caused excessive parsing (leading to CPU contention) and ORA-4031 errors. One way to locate such statements is to run the following query:

SELECT substr(sql_text,1,90) "SQL", count(*) "SQL Copies",
       sum(executions) "TotExecs", sum(sharable_mem) "TotMemory"
FROM   v$sqlarea
WHERE  executions < 5
GROUP BY substr(sql_text,1,90)
HAVING count(*) > 30
ORDER BY 2;

I personally prefer to use the script from the AskTom website to find these statements:

create table t1 as select sql_text from v$sqlarea;

alter table t1 add sql_text_wo_constants varchar2(1000);

create or replace function
remove_constants( p_query in varchar2 ) return varchar2
as
    l_query     long;
    l_char      varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    for i in 1 .. length( p_query )
    loop
        l_char := substr(p_query,i,1);
        if ( l_char = '''' and l_in_quotes )
        then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes )
        then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes ) then
            l_query := l_query || l_char;
        end if;
    end loop;
    l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8 loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper(l_query);
end;
/

update t1 set sql_text_wo_constants = remove_constants(sql_text);

select sql_text_wo_constants, count(*)
from   t1
group by sql_text_wo_constants
having count(*) > 100
order by 2
/

The above query gives you the queries that are using literals and should be modified to use bind variables. Sometimes it is not possible to modify the application; in that case you can use CURSOR_SHARING=SIMILAR/FORCE to force the application to use bind variables. Please note that this can cause issues (especially CURSOR_SHARING=SIMILAR), so it is recommended to test the application in a test environment before implementing it in production. Applications like Oracle Apps do not certify the use of this parameter, so also check with your application vendor whether it can be used.
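For reference (a sketch only; test before using in production, as noted above):

alter system set cursor_sharing = FORCE;

-- and to revert to the default behaviour
alter system set cursor_sharing = EXACT;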

Multiple Child Cursors/High Version Count

This is also one of the causes of high memory usage in the SQL area region. Child cursors are generated in the shared pool when the SQL text is the same but Oracle cannot share the cursor because the underlying objects are different, the optimizer settings differ, etc. To learn more about child cursors, refer to the following Metalink note:

Note 296377.1 – Handling and resolving unshared cursors/large version_counts

In Oracle 10g you can use the Statspack/AWR report to find the child cursors under the category "SQL ordered by Version Counts". The following statements can also be run to identify whether child cursors are being generated in your database.


For 10g:

SQL> select sa.sql_text, sa.version_count, ss.*
     from v$sqlarea sa, v$sql_shared_cursor ss
     where sa.address = ss.address and sa.version_count > 50
     order by sa.version_count;

For 8i/9i:

select sa.sql_text, sa.version_count, ss.*
from v$sqlarea sa, v$sql_shared_cursor ss
where sa.address = ss.KGLHDPAR and sa.version_count > 50
order by sa.version_count;

The results returned by the above query report SQL that is not being shared for some reason. Look for a column with the value Y to find the cause. Many of these issues are encountered while using CURSOR_SHARING=SIMILAR; if you are using this parameter with columns that have histograms, then it is expected behavior. Read more about cursor issues related to histograms in Note:261020.1 – High Version Count with CURSOR_SHARING = SIMILAR or FORCE.

There are cases where none of the columns returns a Y value. In most such cases you need to work with Oracle Support to find the cause, as this could be a bug.

Child cursors are problematic because they increase shared pool memory consumption and parsing, and as the number of child cursors grows, Oracle takes more time to scan through all of them looking for one it can reuse; if none matches, it spawns a new child cursor. This results in high parse times and CPU contention.

High Sharable Memory per SQL

One more cause of a high value for the SQL area in V$SGASTAT is high memory consumption by individual SQL statements. This can be due to poorly written SQL or to Oracle bugs.

In Oracle 10g you can use the Statspack/AWR report to find the statements with a high value of sharable memory. You can also use the SHARABLE_MEM column in V$SQLAREA to find these queries.
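A sketch of such a query for 10g (the 1 MB threshold is arbitrary; adjust it to your system):

select sql_id, sharable_mem, version_count, substr(sql_text,1,60) sql_text
  from v$sqlarea
 where sharable_mem > 1048576
 order by sharable_mem desc;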

Step 6: What next?

You have followed all of the above steps and everything looks fine. What do we check next?

We can look for any trace file generated at the time of the error and see which component was taking the most memory. You can try searching Metalink for that component. Otherwise, you can take a heapdump at the time of the error and upload the file to Oracle Support.

Heapdump event
The Heapdump event is used to dump memory from the different subheaps. The ORA-4030 error is associated with problems in the PGA, UGA or CGA heaps, while ORA-4031 relates only to problems in the shared pool/large pool/Java pool/streams pool.

command  -> alter system set events '4031 trace name heapdump level 2';

init.ora -> events='4031 trace name heapdump, level 2'

SQL>oradebug setmypid

SQL>oradebug dump heapdump 2

SQL>oradebug tracefile_name

Starting from 9.2.0.5, level 536870914 can be used for generating a heapdump, which gathers more diagnostic information for support to diagnose the cause.

Also, it is not recommended to set the Heapdump event in init.ora or the spfile, since that forces multiple dumps whenever shared pool memory issues occur. Oracle requires the shared pool latch to dump a heapdump, so this can worsen the latching situation. You can set the Errorstack event instead to generate a trace file at the time of the ORA-4031 error:

alter system set events '4031 trace name errorstack level 3';

Use the immediate trace option or the oradebug command at the time of the error:

SQL> connect / as sysdba

SQL> alter session set events 'immediate trace name heapdump level 536870914';

OR

sqlplus "/ as sysdba"

oradebug setmypid

oradebug unlimit

oradebug dump heapdump 536870914

oradebug tracefile_name

exit

Upload the tracefile to Oracle support.

Using the above approach will help you resolve ORA-4031 errors in the shared pool.

Large Pool

While working on ORA-4031 in the large pool, follow the approach below:

1) Check the value of LARGE_POOL_SIZE and, if possible, increase it.

2) Check the number of subpools. Ensure that you have sufficient memory in each subpool. _kghdsidx_count also controls the number of subpools in the large pool, so you would have to either increase the memory available in each subpool or decrease the count.

3) In the case of MTS, check whether any session is consuming a lot of memory. It can happen that instead of getting ORA-4030 you get ORA-4031 in the large pool, because in MTS the UGA is part of the large pool. (A query sketch for finding such sessions follows after this list.)

4) If all of the above suggestions have been tried, capture a heapdump and upload the file to Oracle Support. You can use level 32 or 536870944, i.e.

SQL> connect / as sysdba

SQL> alter session set events 'immediate trace name heapdump level 32';

or

SQL> alter session set events 'immediate trace name heapdump level 536870944';
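The sketch mentioned in point 3 above, for finding sessions with a large UGA (the statistic name is standard; the ordering and any threshold are up to you):

select s.sid, s.username, round(st.value/1024/1024) uga_mb
  from v$session s, v$sesstat st, v$statname n
 where st.sid = s.sid
   and st.statistic# = n.statistic#
   and n.name = 'session uga memory'
 order by st.value desc;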

I hope this article helps you follow a methodology for resolving ORA-4031. It is not an exhaustive treatment of this error, and it will be most useful as an approach to apply after you have gone through the Metalink notes below.

Note:62143.1 – Understanding and Tuning the Shared Pool

Note:396940.1 – Troubleshooting and Diagnosing ORA-4031 Error

Note:146599.1 – Diagnosing and Resolving Error ORA-04031

January 8, 2008 | oracle

How to get Oracle Error Message from Database or OS


If you are on a UNIX platform, you can easily get an error's description, cause and action from the machine itself using a UNIX command; there is no need to go searching the Internet for it. Suppose you got the error ORA-04043 and want to know its cause and the recommended action. On a UNIX machine (Linux, Solaris, etc.) you can get this with the oerr command. For example, to get the cause and action of ORA-7300, "ora" is the facility and "7300" is the error, so you type "oerr ora 7300".

If you get LCD-111, type "oerr lcd 111", and so on.

Below is an example.

SQL> desc t;
ERROR:
ORA-04043: object t does not exist


SQL> !oerr ora 04043

04043, 00000, "object %s does not exist"
// *Cause: An object name was specified that was not recognized by the system.
//         There are several possible causes:
//         - An invalid name for a table, view, sequence, procedure, function,
//           package, or package body was entered. Since the system could not
//           recognize the invalid name, it responded with the message that the
//           named object does not exist.
//         - An attempt was made to rename an index or a cluster, or some
//           other object that cannot be renamed.
// *Action: Check the spelling of the named object and rerun the code. (Valid
//          names of tables, views, functions, etc. can be listed by querying
//          the data dictionary.)

Similarly you can check other error message too.

Like,
$ oerr ora 600
00600, 00000, "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
// *Cause: This is the generic internal error number for Oracle program
//         exceptions. This indicates that a process has encountered an
//         exceptional condition.
// *Action: Report as a bug - the first argument is the internal error number

However, if you are not on a UNIX platform, you can still get the error message description.
In that case you can use the function SQLERRM.
The error number passed to SQLERRM should be negative. Passing zero to SQLERRM always returns the "ORA-0000: normal, successful completion" message. Passing a positive number always returns the "User-Defined Exception" message, unless you pass +100, in which case SQLERRM returns the "ORA-01403: no data found" message.

An example:
------------------

SET SERVEROUT ON
prompt Please enter error numbers as negatives. E.g. -1
prompt
exec dbms_output.put_line('==> '||sqlerrm( &errno ) );
/

SQL> Enter value for errno: -7445
==> ORA-07445: exception encountered: core dump [] [] [] [] [] []

January 4, 2008 | oracle