Saturday, November 20, 2010

Removing orphan DataPump jobs and filtering included tables with a query

If you define a Data Pump job through the API (DBMS_DATAPUMP), you may end up with jobs in status NOT RUNNING until you get the definition right. Verify whether you have such a job with:

select job_name,state
from user_datapump_jobs;

Then you may try to remove it by attaching to the job and stopping it:

declare
  l_h number;
begin
  l_h := dbms_datapump.attach(job_name => 'YOUR_JOB', job_owner => user);
  dbms_datapump.stop_job(l_h, immediate => 1, keep_master => 0);
end;
/

If the query above still reports the job with the same status, it can be removed by dropping the master table (which has the same name as the job), according to MOS note 336014.1:

drop table YOUR_JOB;

If the database uses a recycle bin, the table has been renamed to a name starting with BIN$; if you rerun the query above the job shows up with this new name, and when you purge the recycle bin the job is finally removed:

purge recyclebin;
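To see whether the master table has actually landed in the recycle bin, you can query it directly (a quick check, assuming the job ran in your own schema):

```sql
-- The renamed master table shows up with a system-generated
-- name starting with BIN$; ORIGINAL_NAME is the job name.
select object_name, original_name, type
from user_recyclebin;
```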

If your PL/SQL code fails during job definition before you have done a detach, you cannot do a second attach in the same session. Since you have no reference to the job, the only solution I'm aware of is to reconnect (i.e. create another database session). Better yet, define an exception on ORA-39001, retrieve the error message with dbms_datapump.get_status, and call dbms_datapump.detach(l_h) in the exception handler:

invalid_arg exception;
pragma exception_init(invalid_arg, -39001);
...
exception
  when invalid_arg then
    dbms_datapump.detach(l_h);

If you drop the master table of a job that is in the defining state, you will not be able to attach to the job, but the attempt to attach removes the job from the user_datapump_jobs table.

If you add a metadata filter of type NAME_EXPR with a subquery in the expression, make sure that any table name in the subquery is prefixed with the table owner, or else you may receive ORA-00942 in the logfile:

dbms_datapump.metadata_filter(handle => l_h, name => 'NAME_EXPR', value =>'IN (SELECT TABLE_NAME FROM SCOTT.CHANGE_TABLE)' );
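For context, a complete minimal export job using such an owner-qualified filter might be sketched like this (the job name, dump file and directory are made up for illustration):

```sql
declare
  l_h number;
begin
  l_h := dbms_datapump.open(operation => 'EXPORT',
                            job_mode  => 'TABLE',
                            job_name  => 'EXP_CHANGED_TABLES');
  dbms_datapump.add_file(handle    => l_h,
                         filename  => 'changed_tables.dmp',
                         directory => 'DATA_PUMP_DIR');
  -- Owner-qualified subquery avoids ORA-00942 in the worker process
  dbms_datapump.metadata_filter(handle => l_h,
                                name   => 'NAME_EXPR',
                                value  => 'IN (SELECT TABLE_NAME FROM SCOTT.CHANGE_TABLE)');
  dbms_datapump.start_job(l_h);
  dbms_datapump.detach(l_h);
end;
/
```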

Typical error in the logfile when the schema prefix is missing:

Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-00942: table or view does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6307
----- PL/SQL Call Stack -----
object line object
handle number name
0x7bfb80b0 15032 package body SYS.KUPW$WORKER

Tuesday, November 16, 2010

Slow performance with TDE and CHAR data type

There is a bug in the database when using Transparent Data Encryption (TDE) on a column of the CHAR datatype. In the following test a table is created where the primary key is defined as CHAR(11) and then encrypted with NO SALT. A simple lookup on this PK at first works as expected by using the index on the column, but after the database is patched, a trace on event 10053 shows that an INTERNAL_FUNCTION is wrapped around the PK column and therefore impedes use of the index.

The following test was performed on an OVM guest created from a template downloaded from Oracle. It contains Oracle Server EE running on Oracle Enterprise Linux 5, 64-bit. Configuration of TDE and the wallet is not shown.

create table foo (
  id  char(11) encrypt no salt primary key,
  msg varchar2(1000)
);

insert into foo
select * from (
  with generator as (
    select rownum rn
    from dual
    connect by rownum <= 100
  )
  select dbms_random.string('x', 11)   id,
         dbms_random.string('x', 1000) msg
  from generator g1, generator g2
  where rownum <= 10000
);

exec dbms_stats.gather_table_stats(user, 'FOO', cascade=>true, method_opt=>'for all indexed columns size 1', estimate_percent=>100)

variable lid char(11)
exec :lid := 'abc123def45';

alter session set events='10053 trace name context forever, level 1';

select msg from foo where id = :lid;

The query returns immediately since the index is used and the trace file shows the good plan (a simple explain plan will show the same):

| Id | Operation | Name | Rows | Bytes | Cost | Time |
| 0 | SELECT STATEMENT | | | | 2 | |
| 1 | TABLE ACCESS BY INDEX ROWID | FOO | 1 | 514 | 2 | 00:00:01 |
| 2 | INDEX UNIQUE SCAN | SYS_C007236| 1 | | 1 | 00:00:01 |
Predicate Information:
2 - access("ID"=:LID)

Now the database is patched (patch 8202632). There are no invalid objects in the database after the upgrade. Repeating the last query from above, together with the trace on event 10053, the query takes a long time and the trace file shows this plan:

| Id | Operation | Name | Rows | Bytes | Cost | Time |
| 0 | SELECT STATEMENT | | | | 388 | |
| 1 | TABLE ACCESS FULL | FOO | 100 | 50K | 388 | 00:00:05 |
Predicate Information:

INTERNAL_FUNCTION is used, among other situations, when Oracle has to perform an implicit data type conversion. This may happen when the data type of a bind variable differs from the data type of the table column.

This kind of mismatch may happen during upgrades of JDBC drivers where the mapping may change; Java had support for Timestamp before it was introduced in the Oracle database, and when Oracle later added the TIMESTAMP data type the mapping changed, leading to this kind of error. There are other reasons for such mismatches; a trace on event 10053 will reveal the data type of the bind variable:

Peeked values of the binds in SQL statement
oacdty=96 mxl=32(11) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000000 frm=01 csi=31 siz=32 off=0
kxsbbbfp=2b19acc082a0 bln=32 avl=11 flg=05

The part "oacdty=96" shows the type is CHAR or NCHAR. This did not change between the tests, so there has to be another reason why the INTERNAL_FUNCTION is used. The only change here is the patch; even the parameter compatible remained the same. The tests were performed from SQL*Plus on the database server.

This is bug 10296606 on MOS.

Update: It was discovered that if you use a bind variable of VARCHAR2 type, the INTERNAL_FUNCTION is not applied. This suggests that columns of CHAR type are treated as VARCHAR2 after encryption, while USER_TAB_COLUMNS continues to report CHAR for the column.
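Based on the observation above, a possible workaround until the patch for bug 10296606 is available could be to bind the value as VARCHAR2 instead of CHAR (a sketch using the test table from this post; verify in your own environment):

```sql
variable lid varchar2(11)
exec :lid := 'abc123def45';

-- With a VARCHAR2 bind no INTERNAL_FUNCTION is wrapped around
-- the encrypted CHAR column, so the index can be used again.
select msg from foo where id = :lid;
```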

Wednesday, October 20, 2010

A few notes on Oracle VM

This is an old post I kept as draft 8 months before I published it...

I've been playing around with Oracle VM (OVM) version 2.2 lately. This is a collection of a few notes, not all directly related to OVM. The project started with building the server from parts I hoped would play along (they do): an Intel i7-930 CPU, 12GB of RAM and two Western Digital 1.5TB disks (WD15EARS-00Z5B1). The current version of OVM (2.2) uses update 3 of OEL 5. The kernel lags behind a bit and support for certain wireless cards is not included; the AR5008 wireless chip from Atheros (a D-Link card) was not supported. In order to avoid more cables in the house I created a bridge between the server and another PC nearby.

The installation of OVM went smoothly, but the disks are a disgrace, with horrible I/O performance until you get the partitioning right (each partition should start on a 4096-byte boundary). This meant I had to choose a custom layout of the partitions and file systems during installation.

The OVM Manager was installed in a guest based on an OVM template downloaded from Oracle.

Configuration of OVM Manager fails because host names cannot be resolved:

The server resides on a private network (192.168.x.x) without any DNS. If you get error messages indicating connection problems to the OVM Server, make sure the hostname is defined in /etc/hosts, and also make sure the hostname does not resolve to localhost (, which was the case on a fresh installation of OVM Server. This should be done both on the OVM Server and in the guest hosting OVM Manager. (If the initial configuration of OVM Manager fails, connect with ssh and fix /etc/hosts before restarting.)
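A minimal /etc/hosts on both the OVM Server and the Manager guest could look like this (the host names and addresses here are examples, not the actual ones):

```
127.0.0.1      localhost.localdomain localhost
192.168.1.10   ovmserver.localdomain ovmserver
192.168.1.11   ovmmgr.localdomain    ovmmgr
```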

Connecting to the Manager database from afar:
The listener binds to the standard port 1521 on localhost. If you want to connect to the database over SQL*Net, you need to change listener.ora and replace localhost with a public IP address/name of the Manager's host. Then you need to add a line to /etc/sysconfig/iptables allowing remote access to the listener:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 1521 -j ACCEPT

Configuring yum:
I added the following lines to the file /etc/yum.repos.d/public-yum-el5.repo

name=Enterprise Linux $releasever U3 - $basearch - base

name=OracleVM 2 - 2.2.1 - $basearch - base

Then gcc and others can be installed with

yum install gcc

Tuning network
After the disk performance was improved, I started to get annoyed by the slow refresh of VNC. The following command reported an increasing queue length when I interacted with a GUI through VNC:

netstat -o -t -c

This Guide was helpful.

Using physical devices for storage
I could not find any way to add a physical device to a guest in the Manager, but the documentation makes it clear how to do it outside the Manager. The disadvantage is that the added physical device does not show up in the Manager afterwards; anyway, the performance gain outweighs that. I had one disk that I gave entirely to a volume group, and then created a logical volume with pvcreate, vgcreate and lvcreate.
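The LVM part can be sketched like this (the device name /dev/sdb and the volume size are examples; vg00 and oradb1_asm1 match the names used below):

```
pvcreate /dev/sdb                       # mark the whole disk as an LVM physical volume
vgcreate vg00 /dev/sdb                  # create the volume group
lvcreate -L 200G -n oradb1_asm1 vg00    # carve out a logical volume for the guest
```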

To attach the logical volume to the guest as a physical device:

xm block-attach 24 phy:/dev/vg00/oradb1_asm1 xvdc 'w'

Here 24 is the id of the guest as reported by xm list, xvdc is the device name in the guest (/dev/xvdc), and 'w' means read/write. In the guest this disk device needs to be partitioned with fdisk or similar, as with any other physical device.

To make the change permanent, the following second line was added to the disk configuration in vm.cfg:

disk = ['file:/var/ovs/mount/3FEB9655D88E4177B8571AFCF81C5B8A/running_pool/50_oel2/System.img,xvda,w',
        'phy:/dev/vg00/oradb1_asm1,xvdc,w',
       ]

Monitoring with sar -d showed that the service time was substantially lower (about a tenth) with the physical device, even when it was running close to maximum utilization on writes.

VNC and maximum resolution
Any changes made to the screen resolution were quietly ignored, and the connection to the VNC server started from dom0 was unreliable. After a while I gave up and started a new VNC server from inside the guest.

Update: Using OVM for my own lab was not optimal, so I installed Fedora 15, which among other things supports the wireless card without any hassle. I plan to install VirtualBox on top of it for my lab needs.

Sunday, October 10, 2010

Permission denied when adding ASM disk

This is a simple one, a reminder for the next time I forget, since the stuff a Google search turned up led me astray...

Just in case you try to add disks to an existing ASM disk group using syntax like this:

alter diskgroup data
add disk '/dev/xvdd1'
add disk '/dev/xvde1';

You may get error messages like these:

ORA-15032: not all alterations performed
ORA-15031: disk specification '/dev/xvde1' matches no disks
ORA-15025: could not open disk '/dev/xvde1'
ORA-15056: additional error message
Linux-x86_64 Error: 13: Permission denied
Additional information: 42
Additional information: -1614995373
Additional information: 1730318408
ORA-15031: disk specification '/dev/xvdd1' matches no disks
ORA-15025: could not open disk '/dev/xvdd1'
ORA-15056: additional error message
Linux-x86_64 Error: 13: Permission denied
Additional information: 42
Additional information: 87328232
Additional information: 8192

Solution: Use the ASM disk names as given when creating them with oracleasm:

alter diskgroup data
add disk 'ORCL:VOL3'
add disk 'ORCL:VOL4';

Here VOL3 and VOL4 are the names given when creating the ASM-disk with oracleasm:

oracleasm createdisk VOL3 /dev/xvdd1
oracleasm createdisk VOL4 /dev/xvde1

The initialization parameter asm_diskstring is empty, which is the default.

The error messages from the OS (Linux in this case) may lead you to think that there is something wrong with the owner or the permissions of the devices you've just created, even though they have the same permissions as the devices previously allocated to ASM... If you then, against reason, set the owner and group to 'oracle:dba', the error message from Linux goes away, but you will still get error messages like this:

ORA-15032: not all alterations performed
ORA-15031: disk specification '/dev/xvde1' matches no disks
ORA-15014: location '/dev/xvde1' is not in the discovery set
ORA-15031: disk specification '/dev/xvdd1' matches no disks
ORA-15014: location '/dev/xvdd1' is not in the discovery set

A note on MOS with id 603210.1 is one example of this confusion; it contains this command, indicating that it is OK to use the physical device name (which it may be in certain environments for all I know):

create diskgroup DATADG external redundancy disk '/dev/sda14';

I think this is one of several reasons why certain un*x guys used to old and mature volume managers and file systems do not easily fall in love with ASM... Once one has passed these initial obstacles I find ASM OK to work with, even for non-RAC databases. It usually shortens the path from getting a LUN allocated on the SAN or whatever until more free space is available to the tablespace in the database.

[This testing was done on an Oracle VM guest created from an OVM template downloaded from Oracle. It comes with OEL 5 64-bit and the database preinstalled. Oracle calls this OS JeOS (Just Enough OS), but for some reason it does not come with the 'man' command, so for me it is JneOS.]

Tuesday, June 29, 2010

Error when using 11g export client on a 10g database

The only reason to read this post is if you have googled the error "ORA-00904: "POLTYP": invalid identifier". This error occurs if you run the old export command (exp) from an 11g client against a database on version 10g or lower. The export command runs a query against a table called EXU9RLS in the SYS schema; in 11g this table was expanded with the column POLTYP, and the 11g exp expects to find that column. This should not be much of a problem since Data Pump export can be used instead.
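You can verify the version dependency from the dictionary; on 11g the column is there, on 10g the query returns no rows (EXU9RLS lives in the SYS schema):

```sql
select column_name
from dba_tab_columns
where owner = 'SYS'
  and table_name = 'EXU9RLS'
  and column_name = 'POLTYP';
```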

Saturday, May 8, 2010

Someone trying to outperform Oracle's cache

The following code is from a migration project I'm working on. It is part of a package named PERFORMANCE and runs every 15 minutes:

declare
  test number;
  cursor c_worklist is
    select *
    from v_worklist
    where id = v_id;
begin
  for rec in c_worklist loop
    test := 0; -- does nothing
  end loop;
end;

A nice little wtf snippet. The purpose? According to "the vendor", the cursor that loops through every row in the view is supposed to keep the data in Oracle's cache; without it, some web service will time out. I believe we have better tools for keeping stuff in the cache.
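For instance, if the goal really is to pin a small, hot table in the buffer cache, Oracle has a supported mechanism for exactly that; a sketch (the table name is assumed here, and db_keep_cache_size must be sized to hold the table):

```sql
-- Assign the table to the KEEP buffer pool instead of
-- looping over every row each 15 minutes.
alter table worklist storage (buffer_pool keep);
```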

Saturday, March 27, 2010

Collatz conjecture in PL/SQL

A simple implementation of Collatz conjecture in PL/SQL:

create or replace type int_tab_typ is table of integer;
/

create or replace function collatz(p_n in integer)
return int_tab_typ pipelined as
  n integer := p_n;
begin
  if p_n < 1 or mod(p_n, 1) > 0 then
    raise_application_error(-20000, 'Argument must be a positive integer');
  end if;
  while n > 1
  loop
    pipe row (n);
    if mod(n, 2) = 0 then
      n := n / 2;
    else
      n := 3 * n + 1;
    end if;
  end loop;
  pipe row (n);
  return;
end;
/

select * from table(collatz(101));

More on Collatz Conjecture (Wikipedia).

The point? Just for fun. I showed it to my 11-year-old kid and asked him to figure out the 17-step sequence of collatz(7) in his head. "Is this going to stop?", he asked halfway through. "That is what we are going to find out, nobody really knows", I told him.

If you have absolutely nothing useful to do, but have CPU and storage to waste, you may create a table containing n and the number of steps for collatz(n), with values of n ranging from 1 to 2^31-1 or until storage is exhausted, whichever comes first. Before doing so you may consider changing the function above to just count the steps and to look up previous calculations from the table.
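The counting variant could be sketched like this (a plain function instead of a pipelined one; the table lookup for previous results is left out):

```sql
create or replace function collatz_steps(p_n in integer)
return integer as
  n     integer := p_n;
  steps integer := 0;
begin
  while n > 1
  loop
    if mod(n, 2) = 0 then
      n := n / 2;
    else
      n := 3 * n + 1;
    end if;
    steps := steps + 1;
  end loop;
  return steps;
end;
/
```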

Saturday, March 6, 2010

Using trace to find wallet location

Not talking about a lost wallet with money, but the wallet used for encryption in the Oracle database and with encrypted backup.

A wallet is used for storing the keys used to encrypt/decrypt data going to/from the database, as with Transparent Data Encryption. The documentation on Advanced Security says that Oracle looks for the parameter ENCRYPTION_WALLET_LOCATION in sqlnet.ora, and if it is not found, for WALLET_LOCATION in the same file. If neither is given, Oracle looks in the default wallet location for the database. I found different references to where this default location is, depending on the underlying OS and version. After a few rounds of trial and error (always receiving ORA-28368) I gave up and resorted to tracing my process:

select p.spid
from v$session s join v$process p on(s.paddr=p.addr)
where s.sid in (select sid from v$mystat);

Then run strace against that process, using the SPID found above:

strace -p 42

Among the output from strace something like this can be found:

access("/u01/app/oracle/admin/FOO/wallet/ewallet.p12", F_OK) = -1 ENOENT (No such file or directory)
open("/u01/app/oracle/admin/FOO/wallet/ewallet.p12", O_RDWR|O_CREAT|O_TRUNC, 0666) = -1 ENOENT (No such file or directory)

In this case the directory did not exist. It needs to be created with proper permissions for the OS user running the instance (usually oracle).
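To avoid the guesswork entirely, you can set an explicit wallet location in sqlnet.ora; a sketch (the directory here is the one that showed up in the strace output above):

```
ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/FOO/wallet)))
```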

On Windows I imagine Process Monitor can be used to track down the location.

Morgan's Library has a nice reference on wallets.

Friday, January 29, 2010

Database triggers are evil

If you have a lousy database model, you can get around it with triggers. If you need to hide some business logic from everybody else, you can do it with triggers. If you need a mechanism that bypasses the consistency checks in the database, you can do it with triggers. They live a life of their own, seemingly autonomous from everything else, and one easily forgets they exist. That is why I hate them. What does it matter if you do a database export in consistent mode, if some active triggers make the data inconsistent during import? Triggers are like a secret police: a threat to democracy if they take over.

Oh, just a rant.

Note to myself: next time import fails with ORA-1 or ORA-2298, check for evil triggers.

Monday, January 25, 2010

Changing character set in database

If your database is using anything other than UTF8 as the database character set, you may consider migrating away from it. Oracle states in the Globalization Guide (10gR2):
At the top of the list of character sets Oracle recommends for all new system deployment is the Unicode character set AL32UTF8.
Depending on what characters you actually have in your database you have three options on how to do this:
  1. Changing the character set with the package CSALTER. Only data dictionary is migrated.
  2. Using CSALTER and convert application data using export/import for all users
  3. Using CSALTER and convert a subset of application data.
Option 1 migrates the data dictionary, that is, the metadata describing your application data. Nothing else is changed, and after successfully executing CSALTER, Oracle behaves as if your database was created with the new character set. That works OK if the characters used are actually encoded the same way in the new character set as in the old one; if not, funny letters will show up when querying the database, meaning you have to go for option 2 (or possibly 3).

Character expansion happens when a symbol needs more bytes for storage in the new character set than in the old one. This is the case for the euro sign (€): in WE8MSWIN1252 it requires one byte, in AL32UTF8 it takes three bytes. This requires a conversion of application data using export/import.
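You can see the expansion directly in an AL32UTF8 database, since LENGTHB returns bytes while LENGTH returns characters:

```sql
select length('€') chars, lengthb('€') bytes from dual;
-- In an AL32UTF8 database: chars = 1, bytes = 3
```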

Oracle has made a utility for this, csscan. It checks the data dictionary and the application data, and gives you an analysis telling you what to do. The following procedure was executed on an Oracle database.

First, make sure these two directories are defined in the database:
  • log_file_dir
  • data_file_dir
I don't think they are really necessary, as they don't seem to be used, but the install script grants access to them for the CSMIG user. Directories are listed in DBA_DIRECTORIES, and you may create them anywhere you like with:

create directory log_file_dir as '/tmp/csscan';
create directory data_file_dir as '/tmp/csscan';

Run the following script as SYS in sqlplus to install the tool (the path assumes a standard installation):

@?/rdbms/admin/csminst.sql

It will ask for a password which you'll use later (on 9iR2 it uses the standard password CSMIG). Then start the scanning with:

csscan csmig/your_password full=y tochar=al32utf8 process=3 array=1024000

The csscan command is documented in chapter 12 of the Globalization Guide. The command above analyzes the entire database; alternatively, an owner, tables or columns may be specified. The process parameter sets the number of concurrent scanning processes.

When the scan is complete, csscan creates a database scan report and individual exception reports, as scan.txt and scan.err respectively. The file scan.out contains the output from the command itself. These files are created in the directory that was current when the command was executed.