I wrote about my problems with Informix. Now we are facing a similar problem on Oracle, and it results in a deadlock. The SQLs are more complex, but the core problem is the same: a full table scan on small data locks the whole table.
We need to find a configuration setting that forces Oracle to always use indexes. So far we know only a session parameter that increases the chance of index use – ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ = 1.
Advice 1: don’t use micro benchmarks.
Advice 2: don’t despair over deadlocks; you should be able to live with them. It’s the compromise for the gained performance.
Solution: If an index exists on the foreign key column of the child table, no DML locks, other than a lock over the rows being modified, are required.
If the index is not created, a share lock is taken out on the child table for the duration of the transaction.
The referential integrity validation could take several minutes or even hours to resolve. The share lock over the child table will allow other users to simultaneously read from the table, while restricting certain types of modification. The share lock over the table can actually block other normal, everyday modification of other rows in that table.
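The fix above boils down to a plain CREATE INDEX on the foreign-key column of the child table. A minimal runnable sketch follows – it uses sqlite3 only so that it runs anywhere, and the `parent`/`child` table names and the index name are made up, but the same CREATE INDEX statement applies on Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
    -- the index on the foreign-key column; without it, Oracle takes a
    -- share lock on the whole child table while the parent is modified
    CREATE INDEX child_parent_fk_idx ON child (parent_id);
""")

# verify the index is in place
names = [row[1] for row in conn.execute("PRAGMA index_list('child')")]
print(names)
```

The only part that matters for the locking behaviour is the last statement of the script; everything else is scaffolding to make the sketch self-contained.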
I wrote about my problems with Informix IDS several days ago.
That problem is solved now:
“The number of rows in the table is small enough to cause the database server to perform a sequential scan, and an index scan on a table with lock mode configured to row.”
You can set up IDS to use indexes all the time, which works around my problem. See the Informix support page for further info.
The information about locking the previous and next records in an index was outdated; Informix has not done that since version 7.
I will look more closely at locking problems with the other databases we use.
We still have problems with deleting data from tables using SQL.
Solution: OPTCOMPIND parameter
OPTCOMPIND is an environment variable or a parameter in the Informix configuration file. The optimizer uses its value to determine its choice of data access method. It takes one of three values (0, 1 and 2), which mean the following:
- If the value is set to 0, then the optimizer chooses index scans over table scans if appropriate indexes exist, without even considering the estimated costs.
- If the value is set to 1, then the optimizer behaves as it does for value 0 if the transaction isolation mode is not Repeatable Read. If the transaction isolation mode is Repeatable Read, then the optimizer would base its choice purely on the estimated costs.
- If the value is set to 2, the optimizer uses estimated costs to determine an execution plan regardless of the transaction isolation mode.
You may set OPTCOMPIND either as an environment variable or as a parameter in the configuration file; the environment variable takes precedence at execution.
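As a sketch, the configuration-file form might look like this (value 2 is just one possible choice; the same value can alternatively be exported as the OPTCOMPIND environment variable before starting the client):

```
# in the ONCONFIG file (value chosen for illustration; adjust to your workload)
OPTCOMPIND 2
```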
Link: Tuning Informix SQL
While trying to run our application with the Informix IDS server I had to solve two problems:
- Cannot delete many rows from a single table because of a long-running transaction error.
- Locking errors during parallel batch processing on distinct data (i.e. one table, distinct rows).
So I’ve read some manuals about Informix: Administrator’s Guide, Database Design and Implementation Guide, Guide to SQL – Syntax, Installation Guide for Microsoft Windows, JDBC Driver Programmer’s Guide and the Informix Unleashed book. It took me 4 days to understand the Informix-specific issues. The database is quite complex and has many interesting features (e.g. table inheritance, virtual processors, …).
The result is that there is no good solution.
The “solution” is the following:
- Drop all indexes, primary and foreign keys on such a table, change the table type to raw (without transaction log), delete the data, then recreate the indexes, primary and foreign keys (or delete the data in chunks, or use the XPS server and the operational table type). That’s quite clumsy, isn’t it?
- Change the default locking granularity for all tables to row mode (the default is page mode) and set some timeout for waiting for exclusive locks (the default is no wait). That doesn’t get rid of all locking errors, because Informix also locks the previous and following records in all indexes on the table…
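For reference, the second workaround boils down to statements like these (a sketch – `tableA` is a made-up name, and the exact syntax should be checked against your IDS version):

```sql
-- per table; DEF_TABLE_LOCKMODE in the ONCONFIG file changes the server default
ALTER TABLE tableA LOCK MODE (ROW);
-- wait up to 10 seconds for a lock instead of failing immediately
SET LOCK MODE TO WAIT 10;
```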
I wrote a simple test to reproduce the locking problem on the other databases we use (DB2 AS400, DB2 9.1, PostgreSQL, Oracle 10g, Firebird and MS SQL Server 2005).
The test simulates parallel batch processing:
- create connection 1 (session – we use Hibernate)
- start transaction1
- update some rows in tableA
- create connection2
- start transaction2
- update some other rows in tableA
- commit transaction2
- commit transaction1
The results are:
- Informix IDS – fails with default settings: the second update tries to lock index records that are already locked by the first update, so failure is more or less probable. With my settings it waits until transaction1 commits.
- MS SQL Server 2005 – waits until transaction1 commits: the second update tries to lock a page that is already locked by the first update. The documentation says the server chooses an appropriate lock granularity (database, table, page, row), but as far as I could see it does not. You can force MS SQL Server 2005 to use row locks, but only on a per-statement basis, and that’s not comfortable.
- DB2 AS400, DB2 9.1, PostgreSQL, Oracle 10g and Firebird passed my test without waiting.
Now I’m a little frustrated about which database to use for the combined OLTP and batch processing that our application requires.