Tuesday, April 29, 2008

Create a standby database with RMAN



After the incident with missing forced logging in the primary database, the logical standby had to be created again. Compared to what is normal these days the database is not huge, but the standby database lives on a server in a different city, and the link between them isn't exactly a digital super highway. I also had to restrict the bandwidth consumed during the transfer with secure copy (scp -l 2500 limits the rate to 2500 Kbit/s), or else connections to the primary database might have suffered.

RMAN provides the command duplicate target database for standby, but you don't have to use it in order to create a physical standby. Actually, some prefer not to use the duplicate command at all: the target database RMAN must be connected to while the duplicate runs is the source database, which may well be your main production database. If the duplicate takes a while and you forget that your target is not the new database (the clone) but the primary, and you then do a shutdown immediate... well, you probably make that mistake only once.
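
For completeness, a rough sketch of what the duplicate approach could look like; the connect identifiers primary and stby are made up, and the standby instance must already be started in nomount with the backups reachable from its host:

rman target sys/***@primary auxiliary sys/***@stby

duplicate target database for standby
dorecover
nofilenamecheck;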

Anyway, you need a backup of the database and of the controlfile in standby format. In RMAN:

run {
  allocate channel c1 device type disk
    format '/s01/backup/foo/%U' maxpiecesize 10G;
  backup current controlfile for standby
    format '/s01/backup/foo/standby.ctl';
  backup database;
}


I use the MAXPIECESIZE option to limit the size of each backup piece; it makes the transfer more manageable. Another useful option is RATE, which limits how much is read per second from the database files. I did not use it since RMAN did not put a severe load on our SAN while the backup was running. The next three days were spent copying the files to the other server.
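
Had the read load been a problem, the RATE option could have been added to the channel; a sketch with an arbitrary limit of 10 MB per second:

run {
  allocate channel c1 device type disk
    format '/s01/backup/foo/%U' maxpiecesize 10G rate 10M;
  backup database;
}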

The instance for the physical standby was started in nomount, and with RMAN connected to the soon-to-be physical standby as target, the restore was done like this:

restore standby controlfile
from '/s01/backup/bar/standby.ctl';
alter database mount;
catalog start with '/s01/backup/bar/ld';
crosscheck backup;
restore database;


The path in the catalog command is just a common prefix for the backup pieces; the file names started with ld. The restore actually finished several days after the backup was started on the primary, so the physical standby would have had to apply many archived logs to catch up with the primary. Here is where another useful feature of RMAN comes to the rescue: incremental backup from a specific SCN. With the database at least mounted, run the following query on the physical standby to find its SCN:

select current_scn from v$database;


Then on the primary start an incremental backup in RMAN:

backup incremental from scn 42 database
format '/s01/backup/foo/incr%U';


42 might not be your SCN (it wasn't ours either); I used the SCN from the query minus one (a small overlap, I didn't bother to check the manual). The incremental backup was of course much smaller and took far less time than a full backup to complete and transfer. After the backup was transferred to the other server I executed an incomplete recovery:

catalog start with '/s01/backup/bar/incr';
recover database noredo;


This does not take much time to complete, and afterwards the physical standby is not far behind the primary. Normal apply can now be started with:

alter database recover managed standby database disconnect;
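
To verify that redo apply actually picks up where the incremental recovery left off, a quick look at the apply processes can help; a simple check, not from the original post:

-- MRP0 is the managed recovery process; a status like APPLYING_LOG
-- or WAIT_FOR_LOG means redo apply is up and running.
select process, status, sequence#
from v$managed_standby;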

Sunday, April 20, 2008

Understanding a legacy database


Any database that has gone into production is a legacy database, according to Tom Kyte. I agree, but some databases give me stronger associations to legacy than others. In a post at the OraStory blog Dom complains about companies that hire contractors for a maximum of one year; since it takes half a year to understand the business and the data, you hardly get to show what you can deliver.

Understanding the business is one thing, but how it is projected into the database is another matter. With a crash introduction to SQL any developer can create a few tables and start dumping data. Everyone working with the code knows the connections, but lack of documentation, no constraints, and apparent inconsistency in the data make it difficult for the newly hired DBA to understand what is going on, or what is supposed to be going on. The agile movement doesn't make it easier; documentation is not exactly given high priority ("...the primary goal of software development is to create software, not documents" - Ambler, Agile Database Techniques, 2003, p. 8). In places where more attention has been given to the applications than to the database, the database can easily become a hard-to-manage legacy system. The typical characteristics are:
  • tables with few or no constraints (besides the primary key on an id column with values from a sequence, of course)
  • tables with similar column names, but with different content and meaning for the same column name
  • the same data stored twice or more (and not in sync)
  • the same column reused in another table, but with a different type
  • missing data (not null constraint missing), and so on.
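
One way to get a first impression of how constrained a schema really is, is to count declared constraints per table in the data dictionary. A rough sketch, not from any particular system (run as the schema owner; tables showing zero or one constraint, typically just the primary key, deserve a closer look):

select t.table_name,
       count(c.constraint_name) as constraint_count
  from user_tables t
  left join user_constraints c
         on c.table_name = t.table_name
 group by t.table_name
 order by constraint_count, t.table_name;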


If you have a tuning problem, adding constraints can help, but how can you add a constraint if you don't know the data model or the connections in the data? Adding a unique index can certainly speed things up, but what if a column has 99% unique data; is that an error in the data or just a coincidence? I have found the following techniques and strategies useful for understanding the data model:
  • Ask questions - some like to talk about the code they wrote, but there is a limit to all interruptions.
  • Talk to staff likely to need reports and find out if they use reports from the database, then trace the reports back to the database, i.e. the actual SQL used. Sometimes there is an intermediate data dump that is later imported into Excel for further processing. Looking at each step and asking for an interpretation of the report (if it is not obvious), you may see some new connections.
  • Views in the database are useful for storing complex queries, hiding access to columns, creating a standard interface to a set of tables, and more. But I have seen views defined as select col1 newname1, col2 newname2, ... from foo; i.e. the only change from the underlying table is new column names that relate to reports or a term used outside the database.
  • Stored procedures and packages are more likely to have comments than tables and views. If some strange logic is applied in the source, it motivates a question and may reveal something about the data model.
  • Anticipate dirty data (if constraints are few). Say the quality of the applications is reasonably good; most of the data may be fine, but there will be some orphan rows, illegal values, etc. Digging with simple select statements, counting and looking for seemingly missing parent values (as per an imagined business rule, since the constraint is not defined), may reveal a set of dirty data that should be looked into (see the sketch after this list). I have used this as a pretext to ask about the tables ("Is it not so that ... is supposed to have ... 'cause I have seen some ... that does not have a corresponding ..."). This may be an opportunity to suggest cleaning up the data and adding constraints (because management understands the value of the data and the danger of not being able to trust it). Invalid data are more likely to have entered early in the history of the database ("well, some testing was done on the production database") or because the data model evolved, with new columns added or business rules changed at an early stage.
  • Besides finding corrupt data, researching the statistical distribution is useful for tuning purposes and for identifying other constraints. When a column with a discrete set of distinct values has relatively few nulls, I suspect a candidate for a not null constraint.
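
A sketch of the kind of digging mentioned in the list above; the tables parent and child and the columns parent_id and status are made up for the example:

-- Orphan rows: child values with no matching parent
-- (the foreign key that "should" exist, but was never declared).
select count(*)
  from child c
 where not exists (select null
                     from parent p
                    where p.id = c.parent_id);

-- Distribution of a single column: how unique is it, and how many nulls?
select count(*)                 as total_rows,
       count(distinct status)   as distinct_values,
       count(*) - count(status) as null_rows
  from child;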

I wish the Oracle blogosphere would discuss database refactoring more often. The biggest challenge is to identify what can break elsewhere when you try to fix the wrongdoings (impact analysis is what they call it). It is not trivial. Do people undertake this, or is it sometimes a relief when the contract expires after twelve months so you can move on?

Monday, April 14, 2008

Data Guard and forced logging

When you start with Oracle Data Guard you have three sources of useful information: the Data Guard Concepts and Administration manual, which is quite good, other people's experience, and after some time your own. The manual does not cover all the mistakes one can make, nor what many really don't use (like the Data Guard broker; not everybody would leave switchover, or worse, failover, to a robot).

After a planned switchover to the physical standby the following ORA-600 error message was received on our logical standby:

ORA-00600: internal error code, arguments: [krvxbpx20], [1],
[34735], [60], [16], [], [],[]

SQL Apply terminated after this error, and neither restarting SQL Apply nor bouncing the instance helped. On Metalink the only relevant information I found was Bug 6022014, which has status "Closed, not a bug". An ORA-600 error that is not a bug... well, if it's my fault a decent error message would help. An SR was opened with Oracle Support, and they quite rightly suspected that supplemental logging was not enabled on the primary and asked me to run the following query on the new primary:

select FORCE_LOGGING, SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI, SUPPLEMENTAL_LOG_DATA_FK,
SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_ALL
from v$database;

Which returned:

FOR SUP SUP SUP SUPPLEME SUP
--- --- --- --- -------- ---
NO NO NO NO NO NO

Suddenly our logical standby was not my biggest concern. How come the primary does not run with forced logging? Following the manual, this is one of the first steps when preparing for Data Guard; it is mentioned in section 3.1.1 (10g) and from table 3-1 it looks like a requirement. The manual doesn't tell you to execute it again for the new physical standby, neither before nor after the first switchover. It surprised me that it is possible to successfully execute a switchover without having the new primary running with forced logging. Though I have not tested it, I imagine that the clone for the first physical standby was made before forced logging was enabled in the production database; if I had enabled it in the order given in the manual, the setting would have followed the copy of the controlfile (in standby format) used to create the physical standby. At least that is the only explanation I have.

The solution was simple, just turn on forced logging with:

alter database force logging;

This is not recommended in an OLTP database with many ongoing transactions, as it can take some time to complete, but it finishes quickly when activity is low.

Just in case you don't know, forced logging makes sure nobody can create a table, index or other object without everything being logged to the redo log, i.e. the NOLOGGING option in create statements is ignored. Checking that every node runs with forced logging should be part of a regular review of your Data Guard configuration.
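
Such a review can be as simple as running the query below on every database in the configuration and confirming that FORCE_LOGGING is YES everywhere; just a sketch of the obvious check:

select name, db_unique_name, force_logging
  from v$database;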

Our logical standby must be recreated since many transactions have been lost, but that's OK now that a potentially bigger problem has been avoided. Why do we have both a physical and a logical standby? The former provides the fastest and safest switchover, or failover if necessary. The latter is used for off-site recovery options and as a source for our data warehouse. It is remarkably stable and easy to set up once you have Data Guard with a physical standby in place.

Sunday, April 13, 2008

First post...

A Sunday morning is a perfect moment for starting your blog. Not sure where this is going, or if it's going anywhere. The other day I found a solution to a problem I thought others may have encountered, and I figured that my own blog would be a place to write it down. At least I can find the solution again if I make the same mistake twice.

This blog will mostly concern stuff related to Oracle, the database, that is, not the company. Posts for fellow DBAs and maybe developers (I'm a hybrid now, working as a semi-DBA among developers...).

Until more, take care.