Friday, October 24, 2014

Restore XE database from an AWS instance

Of course there is nothing special about restoring a database from the cloud onto your own down-to-earth server, but it helps make this post cool and cloudy.

When I deployed XE to an AWS instance I made a small script to back up the database once per day (scheduled in crontab), and another to fetch the backupset to my server here on earth. The backup script looks like this:



rman target / << EOF > $HOME/script/rman.$$
backup database plus archivelog delete input;
delete noprompt obsolete;
list backup of database summary;
EOF
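The crontab entry that schedules it might look like this; the wrapper script path and time of day are assumptions, not taken from the post:

```shell
# run the RMAN backup script once per day at 02:30
30 2 * * * /home/oracle/script/backup_xe.sh >> /home/oracle/script/backup_xe.log 2>&1
```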

By default XE is installed without archive logging, so you need to change that when you set up your production XE database. I also turned on automatic backup of the controlfile. The RMAN configuration for the XE database looks like this:

CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/xe/dbs/snapcf_XE.f'; # default

It is mostly default values.
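A sketch of those two changes, in the same heredoc style as the backup script (run as the oracle OS user with the environment pointing at the XE instance):

```shell
# enable archive log mode (requires a clean restart into mount state)
sqlplus / as sysdba << EOF
shutdown immediate
startup mount
alter database archivelog;
alter database open;
EOF

# turn on automatic backup of the controlfile and spfile
rman target / << EOF
CONFIGURE CONTROLFILE AUTOBACKUP ON;
EOF
```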

A script that runs on my own box simply does a recursive secure copy (scp); the EC2 host and the remote backup location are shown as placeholders here:

scp -r -i ssh_key.pem ec2-user@<ec2-host>:<fra-path>/backupset/`date +%Y_%m_%d` /home/alone/RMAN/backupset
scp -r -i ssh_key.pem ec2-user@<ec2-host>:<fra-path>/autobackup/`date +%Y_%m_%d` /home/alone/RMAN/autobackup

This fetches today's backup and stores it locally.

I installed XE on my own box and executed the following in RMAN to restore the database (if the instance is up, take it down with shutdown abort first):

rman target /
startup nomount
restore controlfile from '/home/alone/RMAN/autobackup/2014_10_21/o1_mf_s_861554053_b4dvq5pl_.bkp';
alter database mount;
catalog start with '/home/alone/RMAN/autobackup/2014_10_21';
restore database;
recover database;
alter database open resetlogs;

That's it. 

But don't trust me on this procedure without trying it yourself. Remember, a DBA is allowed to make mistakes, except when it comes to backup AND recovery.

Wednesday, October 22, 2014

Oracle VM: Templates and Assemblies

One of the cool things with Oracle VM is that Oracle offers many templates you can download and import into your repository with Oracle VM Manager (OVMM). Well, since I made the same mistake you may have made, that is, not reading the entire README file, I confused templates with assemblies.

Say you want to download a template for Oracle Linux 6: you have to select the product pack Oracle VM Templates. The downloaded zip contains one file with the extension .ova (which does not stand for Oracle Virtual Assembly, but Open Virtualization Format Archive). Since "VM Templates" is written in several places on the web site, it is easy to think the file should be imported under VM Templates, but you need to select Assemblies and do the import from there:

Trying to import the .ova file under VM Templates will fail and throw an error message at you similar to:
No VM configure file found
This is obvious if you pay attention to section 2 in the README file, which states:

2. Creating an Oracle VM Virtual Machine from the Template
The OVF based Oracle VM Templates should be imported as Assemblies.
Please review the Assemblies section of Oracle VM User's Guide for details
about importing an assembly, and create a virtual machine template from the assembly. 

Assemblies are explained in section 7.5.4 of the User Guide (for version 3.2, at least).
You can create one or more VM Templates from an assembly, and from there your new and shiny VMs.

By the way, one easy way to make the file available for import is to use the simple HTTP server that comes with Python.
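A sketch of that approach, assuming Python 2 (as shipped with these distributions in 2014); the directory and file name are illustrative:

```shell
# serve the directory containing the downloaded .ova over HTTP on port 8000
cd ~/Downloads
python -m SimpleHTTPServer 8000
# then point the OVMM assembly import at
# http://<your-host>:8000/OVM_OL6U5_x86_64_PVM.ova
```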

Saturday, October 18, 2014

Oracle database XE and APEX on Amazon Web Services

In our user group OUGN we used to collect abstracts for our annual conference with a simple Google Form. That worked, sort of:

  • Easy to set up
  • No way for users to reconnect and correct what they had already submitted
  • Lots of manual work
  • Difficult to work with later in the process (selection, agenda, late submissions, special guests)
  • Not really cool or cloudy
Google Forms is really just a shared spreadsheet with a form interface wrapped around it. Like any other spreadsheet it is easy to work with in the beginning, but as the number of people involved increases and the data grows, it soon becomes cumbersome.

This year we moved it to Oracle Application Express (APEX). We should have done that a long time ago, especially since we are an Oracle User Group...

I figured the Express Edition (XE) would be sufficient; even 300 abstracts with speaker info won't take all of the 11GB of space available. The next issue was where to host it. I looked around and decided that a micro instance at Amazon Web Services (AWS) should work. It is actually located in Ireland; hi, Brendan!

The rest of this post is about my experiences running APEX on XE in AWS.

OS One of the default Red Hat Linux 6.5 images from AWS works well. You can then download the rpm file for XE from OTN. The rpm file is quite complete; there is no need to install a lot of extra packages or the oracle-rdbms-server-11gR2-preinstall package.

Swap space During installation of XE the installer may complain about insufficient or missing swap space. The default image I used did not come with much swap space, and since you don't want your database server to start swapping, you shouldn't need any either. But I like to get rid of warning messages, so I created temporary swap space like this to stop the complaints from the installer:

dd if=/dev/zero of=/tmp/swap bs=1024 count=1300000 
mkswap /tmp/swap 
swapon /tmp/swap
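For reference, the dd command above allocates 1,300,000 blocks of 1,024 bytes each; a quick check of the resulting size in MiB:

```shell
BLOCKS=1300000
BS=1024
# integer division; prints 1269, i.e. roughly 1.3 GB of swap
echo $(( BLOCKS * BS / 1024 / 1024 ))
```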

Upgrade APEX in db The version of APEX that comes with XE 11g is old, but it is easy to upgrade. Download a new version from OTN and start with section 3.5.2 in the Installation Guide.

Installing full version of APEX Though strictly not necessary, I find it easier to have the full version of APEX installed, not only the run-time version. Running the script apexins.sql took 3.5 hours on this micro instance, so plan ahead for some other work to do during the installation.

URL not found If you start getting messages like "The requested URL /apex/f was not found on this server", check the alert log; you may find ORA-04031 errors there. See the next bullet.

Shared pool size too small The XE instance is configured with a small shared pool during installation. Try to increase it in small steps (to avoid swapping); our instance now runs with a 250MB shared pool. We'll see how it holds up as people connect. You may also reduce the java pool to a minimum; set it to 0 and Oracle will allocate one granule for it, which is 4MB on this instance. As always, make a pfile copy of the spfile before any change to the spfile that requires an instance restart. Certain combinations of memory parameters are not supported, and I had to revert the changes a couple of times to get the instance up.
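The steps described can be sketched as follows; the pfile path and the restart at the end are assumptions, and the sizes are just this instance's values:

```shell
sqlplus / as sysdba << EOF
-- keep a fallback copy before touching memory parameters
create pfile='/u01/app/oracle/init_xe_backup.ora' from spfile;
-- grow the shared pool in small steps; reduce the java pool to a minimum
alter system set shared_pool_size=250M scope=spfile;
alter system set java_pool_size=0 scope=spfile;
shutdown immediate
startup
EOF
```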

Deploying new versions in APEX I developed the application on a local database, and for each round of fixes I exported the application to a file and uploaded it to the AWS instance with build status Run Application Only. With a good separation of data and application it was easy to deploy new versions as feedback from the user group board members came in. I also learned that in order to deploy a new version you have to change the build status to Run and Build Application before you can delete or replace the old one. That is obvious once you've done it...

Disable access to workspace login With the full installation the database will by default allow anyone to log in to the APEX workspace (e.g. at /apex on your host). This is typically not something you want in this case, but it can easily be turned off with the following executed as SYS or another privileged user:

alter session set current_schema=APEX_040200;
When you need to redeploy, you turn workspace login back on by replacing Y with N.
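For the record, the toggle statement itself is not shown above; one plausible way to do it in APEX 4.2, assuming the instance parameter API rather than the author's exact statement, is:

```shell
sqlplus / as sysdba << EOF
alter session set current_schema=APEX_040200;
begin
  apex_instance_admin.set_parameter('DISABLE_WORKSPACE_LOGIN', 'Y');
end;
/
EOF
```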

Using standard http port 80 You need to change the ownership of the listener to root and start it as root:

tnslsnr LISTENER -user 0 -group 0 

You will also need to run something similar to this as SYSTEM:


The IP address is the internal IP address for the network interface. Check with ifconfig eth0 to see what address is being used.
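The SYSTEM statements did not survive above; on XE the embedded PL/SQL gateway serves APEX through the database listener, so a sketch could look like this (the address is illustrative, use the one reported by ifconfig eth0):

```shell
sqlplus system << EOF
exec dbms_xdb.sethttpport(80);
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=172.31.10.20)(PORT=1521))';
EOF
```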

Not using SSL / HTTPS Of course we could have bought an SSL certificate to protect the usernames and passwords from being sent in clear text. But there is really not much sensitive material here; if you disagree strongly, I would like to hear your opinion. Most of the work was actually spent on login and account administration. I can say that the passwords are MD5-hashed before being stored in the database.

Start early with a good data model This has nothing to do with APEX, XE or AWS; it is just a reminder that everything goes smoother when you model the whole thing in SQL Developer Data Modeler (SDDM) before you start developing code and the APEX application. Keep future features in mind without actually including them in the current data model; imagine, say, how it could later expand into a membership administration tool. Not being too experienced with modelling, I think a good understanding of the entities and the relationships between them takes you a long way in the right direction. When Heli has finished her book we will all become experts.

Wednesday, October 8, 2014

Include roles and synonyms in DataPump export

For schema-level exports it may be useful to include roles and public synonyms relevant to the exported schemas. Instead of creating them manually, they can be included in the DataPump export. The following example of a parameter file shows how this can be done:

INCLUDE=PUBLIC_SYNONYM/SYNONYM:"IN (SELECT synonym_name FROM dba_synonyms WHERE owner = 'PUBLIC' AND table_owner in ('APEX','SCOTT'))"
INCLUDE=ROLE:"IN (select role from dba_roles where role in (select grantee from dba_tab_privs where grantor in ('APEX','SCOTT')))"  
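The parameter file would then be referenced from a schema-level export; the file and directory names here are illustrative, not from the original post:

```shell
expdp system schemas=APEX,SCOTT parfile=include.par \
      directory=DATA_PUMP_DIR dumpfile=schemas.dmp logfile=schemas.log
```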

When deciding what to include in an export you may want to check the view DATABASE_EXPORT_OBJECTS to find the correct object path, as in this example:

select *
from database_export_objects
where object_path like '%ROLE%';

Wednesday, September 24, 2014

OOW 2014

There is one night every year when I can stay up as long as I manage without fearing the next day. Because on that particular day I can sleep as much as I want on the plane, trying to fool my body into UTC-7, better known as PDT or Pacific Daylight Time. Yes, it is Oracle Open World again, in the second nicest city in the world, only beaten by Rio de Janeiro.

It is also an opportunity to blog at an unusual hour (at least for me). It is not like many have inquired about my travel plans, but it feels good to put in writing that I am happy to see Oracle friends again, some of whom I only see in SFO.

Since last OOW I became an Oracle ACE Associate, and more than ever I am looking forward to meeting friends in the OTN Lounge, and that party, you know.

What else... OUGN is paying for most of my expenses and my employer is letting me go, so I'll try to give them both some satisfaction in return. Mostly I will try to find and convince speakers to come to our conference next year, increasingly known as the Boat Conference. If you are reading this and are curious, please interrupt me in whatever I'm doing (mostly).

I've learned a few lessons and have so far not overloaded my agenda. This week I will go to the global user group leaders meeting (IOUC) on Thursday and Friday. Wednesday morning until 2 pm I'll be at the EMEA user group stand. Beyond that I will go to sessions at Oaktable World, hang out at various social events, and of course attend a bunch of sessions.

Somewhat related, Tim Hall, aka Oracle-Base, blogged this today. Surely he will feel reinvigorated after the Bloggers' Meetup and the small talk with Oracle nerds from all over the world looking for freebies in the OTN Lounge.

The easiest way to get in touch during OOW next week, is probably on Twitter.


PS. I did not know how to spell reinvigorated, the spell checker helped me.
PS2. Come to think about it, there are probably a few more late nights without anxiety the following day. But I can't find a better opening right now.

Wednesday, August 27, 2014

Revoke privileges from PUBLIC may cause ORA-600

Oracle Enterprise Manager (EM) reports on many security policies when they are violated. One of them is about execute privileges on standard PL/SQL packages granted to PUBLIC. One day this summer, with nothing better to do, I went on to revoke these privileges. The reward for my efforts was several ORA-600 errors:
ORA-00600: internal error code, arguments: [qmxqtmChkXQAtomMapSQL:2], [], [], [], [], [], [], [], [], [], [], []
00600. 00000 -  "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
*Cause:    This is the generic internal error number for Oracle program
           exceptions.     This indicates that a process has encountered an
           exceptional condition.
*Action:   Report as a bug - the first argument is the internal error number
A quick search on My Oracle Support indicated that some standard packages depend on these privileges being granted to PUBLIC. The solution is to grant them back to PUBLIC if you are in a hurry, or to grant the privileges to the owners of the individual packages (like XDB). The errors continued after the grants were done, but stopped after I bounced the database instance.
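The quick fix can be sketched like this; the package name is an example only, not the exact list from the support note:

```shell
sqlplus / as sysdba << EOF
-- either grant execute back to PUBLIC...
grant execute on sys.utl_file to public;
-- ...or, more restrictively, to the individual package owner:
grant execute on sys.utl_file to xdb;
EOF
```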

Just because EM reports something and the solution seems obvious, it is not always wise to apply the remedy immediately, especially in production. It surprised me a little that EM suggests a security policy that Oracle's own packages do not adhere to.

Sunday, August 24, 2014

Correlation between restore time and used space in a datafile

Recently, during a restore of a large database (over 10 TB), we monitored the alert log looking for messages like «Full restore complete of datafile 42 to datafile copy … Elapsed time: 0:39:32».

We saw that the time to restore a 32GB file varied a lot, from around 20 minutes to around 75 minutes, which made me wonder why. We noticed that files belonging to the UNDO tablespace took unusually little time, and then I found a couple of other data files that also restored quickly. Maybe the UNDO tablespace had many unused blocks at the time of the backup (because there weren't many active transactions then). So I checked the other two quick files and found that they, too, had lots of free space in them.

So I decided to take the file id and the restore time from all the messages in the alert log and load them into Oracle, in the source database (this all happened while we were making a clone for a RAT session). Then I converted the elapsed time to seconds, found the file size from DBA_DATA_FILES and the free space for each file from DBA_FREE_SPACE. The correlation between restore time and (file size - free space) came out at 0.96, while the correlation between restore time and file size alone was only 0.38. (I used the CORR function.)
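The comparison can be expressed in one query; the table holding the parsed alert-log data and its column names are assumptions, not the author's actual names:

```shell
sqlplus / as sysdba << EOF
select corr(r.seconds, f.bytes - nvl(s.free_bytes, 0)) as corr_used_space,
       corr(r.seconds, f.bytes)                        as corr_file_size
from   restore_times r
join   dba_data_files f on f.file_id = r.file_id
left join (select file_id, sum(bytes) as free_bytes
           from dba_free_space group by file_id) s
       on s.file_id = r.file_id;
EOF
```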

By the way, the backup was stored on a file system created on a logical volume from a group whose LUNs were stored on an EMC VMAX 20, while the data files were restored to ASM on LUNs on an EMC XtremIO.

As pointed out on Twitter, it could be caused by RMAN's compression of unused blocks. The next step would be a test: create a database with several files having different amounts of free space, take two RMAN backups, one with and one without compression, and compare the restore times.

This may sound like a lot of fuss about nothing, but what we had in mind was the deduplication that the XtremIO does, wondering whether deduplicated empty blocks could take less time to restore than blocks with real data in them.