PostgREST / PostgreSQL Cannot enlarge string buffer message

I run into a "Cannot enlarge string buffer" message on my running PostgREST API. I guess some tables are too large to work successfully with the API.

I am using the docker postgrest/postgrest container from https://hub.docker.com/r/postgrest/postgrest with PostgREST version 5.1.0.

Everything works as expected, but once a table gets too large, I get the following error message:

hint    null
details "Cannot enlarge string buffer containing 1073741822 bytes by 1 more bytes."
code    "54000"
message "out of memory"

I can't determine the threshold at which it stops working. Is there a way to enlarge the string buffer in some config file, or is this hardcoded?

Are there any limits on table size when working with the API? So far I couldn't find any information in the docs.

=========== Update

The Postgres logs show the following SQL query:

WITH pg_source AS (
  SELECT "public"."n_osm_bawue_line".* 
  FROM "public"."n_osm_bawue_line"
) 
SELECT null AS total_result_set, 
       pg_catalog.count(_postgrest_t) AS page_total, 
       array[]::text[] AS header, 
       coalesce(json_agg(_postgrest_t), '[]')::character varying AS body 
FROM ( 
  SELECT * 
  FROM pg_source
) _postgrest_t

I use the following Postgres version:

"PostgreSQL 11.1 (Debian 11.1-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit"
Neogene answered 22/6, 2019 at 9:28 Comment(1)
Sounds as if the total result of your query is converted into a single JSON string on the Postgres (server) side, and that single value exceeds the maximum length of 1 GB. – Sybyl

Unless you recompile PostgreSQL, it is not possible to raise the limit (the MaxAllocSize constant in src/include/utils/memutils.h, which caps a single allocation at 1 GB).

My suggestion would be to reduce the size of the payload (are you sure you need all the data?) or to fetch the payload in multiple requests.
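
To see how close you are to the limit, you can estimate the size of the JSON body that query builds. A rough sketch (connection parameters are placeholders; the table name is taken from the question; json_agg adds separators and brackets on top, so treat this as a lower bound):

# Estimate the size of the JSON payload a full-table request produces
psql -h localhost -U postgres -d mydb -c \
  "SELECT pg_size_pretty(sum(octet_length(to_json(t)::text))) FROM public.n_osm_bawue_line t;"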

With PostgREST you can do vertical filtering (select just the columns you need) or paginate to reduce the number of rows you get in one request, as shown below.
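
For example, with curl, assuming the API is served at http://localhost:3000 (the column names are placeholders):

# Vertical filtering: fetch only the columns you actually need
curl "http://localhost:3000/n_osm_bawue_line?select=id,name"

# Pagination: fetch 1000 rows at a time (limit/offset query parameters in
# recent PostgREST versions; the Range header works as well)
curl "http://localhost:3000/n_osm_bawue_line?limit=1000&offset=0"
curl -H "Range-Unit: items" -H "Range: 0-999" "http://localhost:3000/n_osm_bawue_line"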

Bounds answered 26/6, 2019 at 0:3 Comment(0)

The error message comes from PostgreSQL. PostgREST just wraps the message in JSON and sends the HTTP response.

As a first step towards finding the problem, look at the exact HTTP request you send to trigger the error.

Then enable PostgreSQL statement logging and repeat the request; the logs will show you the SQL query that causes this error. Run that query through pgAdmin or psql to make sure you have found the problematic query.
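
One way to enable statement logging, assuming superuser access (connection parameters are placeholders):

# Log every statement, then reload the configuration without a restart
psql -U postgres -c "ALTER SYSTEM SET log_statement = 'all';"
psql -U postgres -c "SELECT pg_reload_conf();"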

Update your question with your findings; the SQL query is what is needed to continue.

After that you could add a postgresql tag to your question.

Bounds answered 23/6, 2019 at 5:41 Comment(1)
Thanks for the very useful information. I updated my post. – Neogene

There is always the possibility that the file being imported is corrupted or malformed for any number of reasons.

In my case, I happened to discover that my file had something like incorrect line endings (long story, unnecessary here), which caused the whole file to appear as one line, with the obvious result. You may have something similar that requires a find-and-replace kind of solution.
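
A couple of quick checks for that symptom, assuming standard Unix tools (bad_file.csv is the file from the command below):

# A file with broken line endings often reports 0 or 1 lines
wc -l bad_file.csv
# 'file' flags unusual line terminators or binary content
file bad_file.csv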

For whatever benefit to anyone else, I used this to resolve it:

# Delete stray NUL bytes; the cleaned copy is written to bad_file.csv.fixed
tr -d '\0' < bad_file.csv > bad_file.csv.fixed

Deuteranope answered 21/5, 2021 at 3:22 Comment(0)
