Using Postgres 9.5, I have a table addresses:
CREATE TABLE addresses (
id integer PRIMARY KEY,
address text
);
In that table I have 7.5 million rows. Example:
1, "1600 Pennsylvania Avenue NW, Washington, DC, 20500"
I'm using this table for an autosuggest search in my application, so I need to run this type of query:
SELECT * FROM addresses WHERE address LIKE '123 Main St%';
I created this index:
CREATE INDEX address_idx ON addresses (address);
The problem is that the query takes around 1 second, which is too slow for autosuggest.
Here's the query plan:
EXPLAIN SELECT * FROM addresses WHERE address LIKE '123 Main St%';
----
Seq Scan on addresses (cost=0.00..161309.76 rows=740 width=41)
Filter: (address ~~ '123 Main St%'::text)
As the plan shows, the index isn't being used for the LIKE query at all. I also tried creating a few types of GIN indexes, but they either had no effect or made the query slower. I'm not sure I was using them correctly, though.
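For reference, the usual GIN approach here seems to be a trigram index via the pg_trgm extension; what I tried was roughly along these lines (the index name is my own):

-- enable trigram support (ships with Postgres contrib)
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- trigram GIN index; can serve LIKE '...%' and even LIKE '%...%' queries
CREATE INDEX address_trgm_idx ON addresses USING gin (address gin_trgm_ops);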
Any ideas on how to create an index that's optimized for this kind of query?
EDIT
The best solution found so far is to use a text range scan, which the existing btree index can serve (plain comparison operators don't carry LIKE's locale restriction):
SELECT *
FROM addresses
WHERE address >= '123 Main St'
  AND address <= concat('123 Main St', 'z');
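Another suggestion I haven't verified yet: apparently a plain btree index can't serve LIKE prefix matches when the database locale isn't C, but one built with the text_pattern_ops operator class can. A minimal sketch (the index name is my own):

-- btree index whose operator class supports left-anchored LIKE
-- patterns regardless of the database locale
CREATE INDEX address_pattern_idx ON addresses (address text_pattern_ops);

With that in place, the original LIKE '123 Main St%' query should be able to use an index scan directly.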