How to handle field collisions from different logging sources in Elasticsearch?

We send logs from a variety of services running in a Kubernetes cluster to Elasticsearch via Filebeat. Some of these services we develop ourselves; others are third-party. We use dynamic mapping in our indices. We've hit an issue where a field in the logs of one service sometimes shares its name with a field in the logs of a different service, but the two carry different types of data. For example, in logs from one service the url field might be a string, while in another it might be a structured object. We then get errors ingesting the logs, such as:

{
  "type": "mapper_parsing_exception",
  "reason": "object mapping for [url] tried to parse field [url] as object, but found a concrete value"
}
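To make the collision concrete, here is a minimal pair of log events (the field values are made up for illustration) that cannot coexist in one index under dynamic mapping, because the first maps url as a string while the second treats it as an object:

{ "service": "service-a", "message": "request handled", "url": "https://example.com/health" }

{ "service": "service-b", "message": "request handled", "url": { "domain": "example.com", "path": "/health" } }

Whichever shape arrives first wins: dynamic mapping fixes the type of url for the whole index, and documents with the other shape are rejected. The error above is the object-mapped-first case; the string-mapped-first case fails with a similar parsing error.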

What strategies might we use to get around these collisions?

Erigena asked 20/7, 2021 at 13:53

Comment from Erigena: Turns out in this particular case the collision is with the default index template set up by Filebeat, but the broader question still stands.
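To confirm where a conflicting definition comes from, the field mapping and template APIs show what the index and the Filebeat template currently define for url. A minimal sketch, assuming the default filebeat-* index pattern (adjust to your own indices; depending on the stack version, Filebeat installs either a composable or a legacy index template):

# Which mapping did the index actually end up with for "url"?
GET filebeat-*/_mapping/field/url

# What does the index template define? (composable templates)
GET _index_template/filebeat-*

# Older Filebeat versions install a legacy template instead
GET _template/filebeat-*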
