I wouldn't approach the problem that way in PostgreSQL.
From a software engineering point of view, I'd separate three concerns: generating a random integer between x and y, generating n of those integers, and guaranteeing the result is a set (that is, free of duplicates).
-- Returns a random integer in the interval [n, m].
-- Not rigorously tested. For rigorous testing, see Knuth, TAOCP vol 2.
CREATE OR REPLACE FUNCTION random_integer(integer, integer)
RETURNS integer AS
$BODY$
select cast(floor(random() * ($2 - $1 + 1)) + $1 as integer);
$BODY$
LANGUAGE sql VOLATILE;
Then to select a single random integer between 1 and 1000,
select random_integer(1, 1000);
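Since random() returns a double precision value in [0, 1), floor(random() * ($2 - $1 + 1)) falls in 0 .. $2 - $1, so both endpoints of the interval are reachable. A quick smoke test (a sketch, not a substitute for the rigorous testing mentioned above):

select min(n), max(n), count(distinct n) as distinct_count
from (select random_integer(1, 10) as n
      from generate_series(1, 100000)) t;
-- Expect min = 1, max = 10, distinct_count = 10.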
To select 100 random integers between 1 and 1000,
select random_integer(1, 1000)
from generate_series(1, 100);
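Nothing in that query guarantees the 100 values are distinct, though. You can see how many typically survive deduplication (a sketch):

select count(distinct n)
from (select random_integer(1, 1000) as n
      from generate_series(1, 100)) t;
-- Typically a bit under 100; some draws collide.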
You can guarantee uniqueness either in application code or in the database. Ruby implements a Set class; other languages have similar structures under various names.
One way to do this in the database uses a local temporary table. Erwin's right that you have to generate more integers than you need, to compensate for the duplicates that get dropped. This code generates 20 candidates and selects the first 8 in the order they were inserted.
create local temp table unique_integers (
id serial primary key,
n integer unique
);
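-- Oversample: insert 20 candidates for the 8 values we need.
-- "on conflict (n) do nothing" silently skips duplicate draws
-- (PostgreSQL 9.5+).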
insert into unique_integers (n)
select random_integer(1, 1000) n
from generate_series(1, 20)
on conflict (n) do nothing;
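-- Keep the first 8 distinct values, in insertion order.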
select n
from unique_integers
order by id
fetch first 8 rows only;
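With only 20 candidates there's still a (small) chance that fewer than 8 distinct values survive the conflict handling. If you want a hard guarantee without guessing at the oversampling factor, a loop works. This is a sketch: the name random_integers and its signature are my invention, and it assumes cnt <= hi - lo + 1 (otherwise it would loop forever).

CREATE OR REPLACE FUNCTION random_integers(lo integer, hi integer, cnt integer)
RETURNS SETOF integer AS
$BODY$
DECLARE
  acc integer[] := '{}';
  v   integer;
BEGIN
  -- Draw until we have exactly cnt distinct values.
  WHILE coalesce(array_length(acc, 1), 0) < cnt LOOP
    v := random_integer(lo, hi);
    IF NOT (v = ANY (acc)) THEN
      acc := acc || v;
    END IF;
  END LOOP;
  RETURN QUERY SELECT unnest(acc);
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

select * from random_integers(1, 1000, 8);

The array membership test is linear in the number of values collected so far, so this only makes sense for small cnt; for large sets, the temporary table approach above scales better.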