I have run into a strange problem lately while programming against an Oracle database: inside a serializable transaction, I do a mass insert (INSERT ... SELECT) and, immediately after, open a cursor with a SELECT on the same table. I assumed this cursor would see the newly inserted rows but, to my surprise, its contents are erratic: sometimes it includes all the newly inserted rows, sometimes only a subset.
I have solved the problem by committing before opening the cursor, but the behaviour puzzles me. Can a SELECT after an INSERT inside the same transaction, without an intervening commit, actually be trusted? Or is this behaviour somehow related to the transaction being serializable?
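A stripped-down sketch of the pattern (target_table, source_table and the content column are illustrative names, not my actual schema):

DECLARE
  CURSOR c IS
    SELECT content FROM target_table;
BEGIN
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
  -- Mass insert
  INSERT INTO target_table
    SELECT content FROM source_table;
  -- COMMIT;  -- uncommenting this commit is the workaround that makes the cursor reliable
  FOR r IN c LOOP
    NULL;  -- process r.content; without the commit, only a subset of the new rows shows up
  END LOOP;
  COMMIT;
END;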
Follow-up: while trying to build a reproducible test case, I could only trigger this behaviour after adding an index (here a primary key index; in the actual code it was a regular index). Perhaps the problem lies in the time spent updating the index, so that the SELECT actually reads through an incomplete index? Anyway, here goes a reproducible test case:
-- Create empty source table
CREATE TABLE TEST_CASE_1 AS
(SELECT 'CONTENT' AS CONTENT
FROM DUAL
WHERE 1 = 2);
-- Add primary key
ALTER TABLE TEST_CASE_1
ADD CONSTRAINT TEST_CASE_1_PK PRIMARY KEY (CONTENT);
-- Create empty destination table
CREATE TABLE TEST_CASE_2 AS
(SELECT 'CONTENT' AS CONTENT
FROM DUAL
WHERE 1 = 2);
-- Example of faulty code
BEGIN
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
  -- Populate with 100,000 rows (I used ALL_OBJECTS, but any source of 100,000 rows will do)
  INSERT INTO TEST_CASE_1
    (SELECT ROWNUM
       FROM ALL_OBJECTS
      WHERE ROWNUM <= 100000);
  INSERT INTO TEST_CASE_2
    (SELECT *
       FROM TEST_CASE_1
      WHERE CONTENT > 0);
  COMMIT;
END;
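After running the block, the result can be checked with two plain counts (the figures in the comments are the ones I observed):

SELECT COUNT(*) FROM TEST_CASE_1;  -- 100,000, as expected
SELECT COUNT(*) FROM TEST_CASE_2;  -- only about 400-500 with the serializable setting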
In this example I would expect TEST_CASE_2 to end up with 100,000 rows as well. Running the test case (on a load-free database), I got only about 400-500 rows inserted. After removing the statement that sets the transaction serializable, I got the correct count of 100,000 rows.
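If the incomplete-index theory holds, forcing a full table scan should sidestep it. A sketch of how one might probe this inside the serializable transaction, right after the first INSERT (I have not verified this diagnosis; the FULL hint just forces a full table scan, while a bare COUNT(*) on a table with a primary key is typically answered from the index):

SELECT COUNT(*) FROM TEST_CASE_1;                   -- typically answered via the PK index
SELECT /*+ FULL(t) */ COUNT(*) FROM TEST_CASE_1 t;  -- forced full table scan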