I'm experiencing some interesting behaviour when using user-defined functions within a SELECT statement.
I have a couple of stored procedures that read and purge data from a single table. These stored procedures are used by multiple sources.
From my observations, it appears that the user-defined functions are sometimes evaluated at arbitrary points in time, not always immediately after or during the execution of the SELECT statement they are used in.
For example, in a stored procedure, I have a SELECT statement that looks something like this:
SELECT Something, MyFunction(Something) FROM Somewhere;
This is followed by a call to another stored procedure, which purges data from the table. The amount of data purged is governed by another table, which stores the maximum ID read; this is so that a purge does not delete any data that has not yet been read by another executing instance of the stored procedure.
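To make the setup concrete, here is a minimal sketch of what that purge might look like, written in T-SQL; the names PurgeSomewhere, SomewhereId, ReadProgress and MaxReadId are hypothetical placeholders, not my actual schema:

CREATE PROCEDURE PurgeSomewhere
AS
BEGIN
    -- Hypothetical sketch: ReadProgress records the maximum ID each reader
    -- has consumed, so only delete rows that every reader has already seen.
    DELETE FROM Somewhere
    WHERE SomewhereId <= (SELECT MIN(MaxReadId) FROM ReadProgress);
END;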
In my test code, MyFunction just returns the number of rows in the table Somewhere (a sketch of the function follows the results below). Thus, I would expect its result to always equal the number of rows that the SELECT statement returns. However, when I run two instances of this stored procedure, I get results something like this:
First query instance:
Something MyFunction(Something)
--------- ---------------------
A 3
B 3
C 3
Second query instance:
Something MyFunction(Something)
--------- ---------------------
A 0
B 0
C 0
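For completeness, MyFunction is essentially the following; this is a sketch, and the parameter name and type are assumptions:

CREATE FUNCTION MyFunction (@Something VARCHAR(10))
RETURNS INT
AS
BEGIN
    -- Ignores its argument and simply counts the rows currently in Somewhere.
    RETURN (SELECT COUNT(*) FROM Somewhere);
END;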
Why is it that the second query instance returns all rows, but the user-defined function that operates on the same table reports that there are no rows left in the table?
Is there any way I can ensure that the second query instance is consistent, so that the user-defined functions still see the same data that the parent stored procedure is seeing?