Best way to configure storage retention with Loki + S3

I am using Loki v2.4.2 and have configured S3 as the storage backend for both the index and chunks.

I want to ensure that all logs older than 90 days are deleted without risk of corruption. The documentation on retention is confusing and the steps are not clear. Should I just set a TTL on the object storage root prefix (i.e., /), or should I configure something like this? I don't want to run the compactor.

table_manager:
  retention_deletes_enabled: true
  retention_period: 2160h

Here is my Loki configuration. Please suggest what changes should be made to it and what the corresponding S3 TTL should be.

config:
  # existingSecret:
  auth_enabled: false
  ingester:
    chunk_idle_period: 3m
    chunk_block_size: 262144
    chunk_retain_period: 1m
    max_transfer_retries: 0
    wal:
      dir: /data/loki/wal
    lifecycler:
      ring:
        kvstore:
          store: inmemory
        replication_factor: 1

      ## Different ring configs can be used. E.g. Consul
      # ring:
      #   store: consul
      #   replication_factor: 1
      #   consul:
      #     host: "consul:8500"
      #     prefix: ""
      #     http_client_timeout: "20s"
      #     consistent_reads: true
  limits_config:
    max_query_series: 5000
    enforce_metric_name: false
    reject_old_samples: true
    reject_old_samples_max_age: 168h
  schema_config:
    configs:
    - from: 2021-09-27
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
  server:
    http_listen_port: 3100
  storage_config:
    aws:
      s3: s3://ap-southeast-1/loki-s3-bucket
    boltdb_shipper:
      active_index_directory: /data/loki/boltdb-shipper-active
      cache_location: /data/loki/boltdb-shipper-cache
      cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
      shared_store: s3
    filesystem:
      directory: /data/loki/chunks
  chunk_store_config:
    max_look_back_period: 0s
  table_manager:
    retention_deletes_enabled: false
    retention_period: 0s
  compactor:
    working_directory: /data/loki/boltdb-shipper-compactor
    shared_store: filesystem
Arras answered 19/8, 2022 at 6:7 Comment(2)
"I don't want to be in a position where I cannot retrieve logs that are not older than 90 days." -- This wording (double negative) is confusing. Do you mean that you want to be sure you can access stuff older than 90 days? That's how I interpret it (after trying to untangle it...) but then it seems like you wouldn't need retention at all? – Nora
@TravisBear sorry for the confusion, I deleted that line. Please read the question again. – Arras

Deleting old log and index data seems to be the responsibility of S3, not Loki. You'll need to add one or more lifecycle rules to your buckets to handle this.

https://grafana.com/docs/loki/latest/operations/storage/retention/#table-manager

"When using S3 or GCS, the bucket storing the chunks needs to have the expiry policy set correctly. For more details check S3’s documentation or GCS’s documentation."

IMO the Loki documentation is very weak on this topic; I'd like it if they covered it in more detail.
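For reference, a minimal S3 lifecycle configuration that expires objects after 90 days might look like the sketch below. The `fake/` prefix is an assumption: with `auth_enabled: false`, Loki writes chunks under the synthetic tenant ID `fake`, and scoping the rule to that prefix avoids expiring unrelated objects at the bucket root (such as `loki_cluster_seed.json`). Verify the actual key layout in your bucket before applying anything like this.

```json
{
  "Rules": [
    {
      "ID": "loki-chunk-retention-90d",
      "Status": "Enabled",
      "Filter": { "Prefix": "fake/" },
      "Expiration": { "Days": 90 }
    }
  ]
}
```

This could then be applied with `aws s3api put-bucket-lifecycle-configuration --bucket loki-s3-bucket --lifecycle-configuration file://lifecycle.json` (bucket name taken from the question's config).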

Nora answered 2/11, 2022 at 17:35 Comment(2)
Their documentation is absolutely horrendous in most regards. – Downhill
Is there any documentation on what the lifecycle rule would look like? For instance, there's a loki_cluster_seed.json file at the root of the bucket that never changes, and thus would get deleted by a generic lifecycle policy. – Cheder