How to correctly use OpenTelemetry exporter with OpenTelemetry collector in client and server?

I am trying to get the OpenTelemetry exporter to work with the OpenTelemetry collector.

I found this OpenTelemetry collector demo.

So I copied these four config files

  • docker-compose.yml (in my app, I removed the generators part and Prometheus, which I am currently having trouble running)
  • otel-agent-config.yaml
  • otel-collector-config.yaml
  • .env

to my app.

Also, based on these two demos in the open-telemetry/opentelemetry-js repo, I came up with my version (sorry it is a bit long; it was really hard to set up a minimal working version due to the lack of docs):

.env

OTELCOL_IMG=otel/opentelemetry-collector-dev:latest
OTELCOL_ARGS=

docker-compose.yml

version: '3.7'
services:
  # Jaeger
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"

  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"

  # Collector
  otel-collector:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "1888:1888"   # pprof extension
      - "8888:8888"   # Prometheus metrics exposed by the collector
      - "8889:8889"   # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "55678"       # OpenCensus receiver
      - "55680:55679" # zpages extension
    depends_on:
      - jaeger-all-in-one
      - zipkin-all-in-one

  # Agent
  otel-agent:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-agent-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-agent-config.yaml:/etc/otel-agent-config.yaml
    ports:
      - "1777:1777"   # pprof extension
      - "8887:8888"   # Prometheus metrics exposed by the agent
      - "14268"       # Jaeger receiver
      - "55678"       # OpenCensus receiver
      - "55679:55679" # zpages extension
      - "13133"       # health_check
    depends_on:
      - otel-collector

otel-agent-config.yaml

receivers:
  opencensus:
  zipkin:
    endpoint: :9411
  jaeger:
    protocols:
      thrift_http:

exporters:
  opencensus:
    endpoint: "otel-collector:55678"
    insecure: true
  logging:
    loglevel: debug

processors:
  batch:
  queued_retry:

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [opencensus, jaeger, zipkin]
      processors: [batch, queued_retry]
      exporters: [opencensus, logging]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging, opencensus]

otel-collector-config.yaml

receivers:
  opencensus:

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1
  logging:

  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto

  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true

processors:
  batch:
  queued_retry:

extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679

service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [opencensus]
      processors: [batch, queued_retry]
      exporters: [logging, zipkin, jaeger]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging]

After running docker-compose up -d, I can open the Jaeger UI (http://localhost:16686) and the Zipkin UI (http://localhost:9411).

My ConsoleSpanExporter also works in both the web client and the Express.js server.

However, when I tried this OpenTelemetry exporter code in both the client and the server, I still could not connect to the OpenTelemetry collector.

Please see my comments about the URL inside the code:

import { SimpleSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/tracing';
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

// ...
tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
tracerProvider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({
      serviceName: 'my-service',
      // url: 'http://localhost:55680/v1/trace', // Return error 404.
      // url: 'http://localhost:55681/v1/trace', // No response, not exists.
      // url: 'http://localhost:14268/v1/trace', // No response, not exists.
    })
  )
);

Any idea? Thanks

Wapiti answered 19/8, 2020 at 11:8 Comment(0)

The demo you tried uses an older configuration and the opencensus receiver, which should be replaced with the otlp receiver. That said, there is a working example at https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node/docker, so I'm copying the files from there:

docker-compose.yaml

version: "3"
services:
  # Collector
  collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
    volumes:
      - ./collector-config.yaml:/conf/collector-config.yaml
    ports:
      - "9464:9464"
      - "55680:55680"
      - "55681:55681"
    depends_on:
      - zipkin-all-in-one

  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"

  # Prometheus
  prometheus:
    container_name: prometheus
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

collector-config.yaml

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"

processors:
  batch:
  queued_retry:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]

prometheus.yaml

global:
  scrape_interval: 15s # Default is every 1 minute.

scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']

This should work fine with opentelemetry-js version 0.10.2.

The default port for traces is 55680, and for metrics it is 55681.
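
For reference, a minimal Node.js setup against those defaults could look like the sketch below. It is not part of the original answer: the provider and processor wiring, the package names, and the service name are assumptions based on the ~0.10.x examples.

// Sketch only: Node.js tracing wired to the collector from the docker-compose above
// (assumes @opentelemetry/node, @opentelemetry/tracing and
// @opentelemetry/exporter-collector at ~0.10.x).
const { NodeTracerProvider } = require('@opentelemetry/node');
const { SimpleSpanProcessor, ConsoleSpanExporter } = require('@opentelemetry/tracing');
const { CollectorTraceExporter } = require('@opentelemetry/exporter-collector');

const provider = new NodeTracerProvider();

// Print spans to the console and also send them to the collector.
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({
      serviceName: 'my-service', // illustrative name
      // No url option: the exporter falls back to its default collector endpoint
      // (port 55680 for traces, as noted above).
    })
  )
);

provider.register();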

At the link I posted previously you will always find the latest up-to-date working example: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node. For a web example you can use the same docker setup and see all the working examples here: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/tracer-web/

Forced answered 19/8, 2020 at 12:18 Comment(3)
Thanks @Forced! For a web client app, is it possible to send from the OpenTelemetry exporter to the OpenTelemetry collector and then forward to Jaeger instead of Zipkin? That is actually where I am stuck.Wapiti
Oh, I just figured it out and posted the other answer!Wapiti
Thanks for the answer, I just wanted to mention that the default port for OTLP HTTP traces is 4318 now.Graaf

Thank you so much for @BObecny's help! This is a complement to @BObecny's answer.

Since I am more interested in integrating with Jaeger, here is the config to set it up with Jaeger, Zipkin, and Prometheus all together. It now works on both the front end and the back end.

First, both the front end and the back end use the same exporter code:

import { SimpleSpanProcessor } from '@opentelemetry/tracing';
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

new SimpleSpanProcessor(
  new CollectorTraceExporter({
    serviceName: 'my-service',
  })
)
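
For the front end, that exporter can be plugged into a web tracer provider. The sketch below is not from the original answer; it assumes the ~0.10.x @opentelemetry/web and @opentelemetry/tracing packages:

// Sketch only: front-end wiring for the exporter shown above.
import { WebTracerProvider } from '@opentelemetry/web';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

const provider = new WebTracerProvider();

// Send spans from the browser to the collector, which forwards them to the
// backends configured in collector-config.yaml below.
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({
      serviceName: 'my-service',
    })
  )
);

provider.register();

The back end wires the same exporter into a NodeTracerProvider from @opentelemetry/node instead.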

docker-compose.yaml

version: "3"
services:
  # Collector
  collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
    volumes:
      - ./collector-config.yaml:/conf/collector-config.yaml
    ports:
      - "9464:9464"
      - "55680:55680"
      - "55681:55681"
    depends_on:
      - jaeger-all-in-one
      - zipkin-all-in-one

  # Jaeger
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"

  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"

  # Prometheus
  prometheus:
    container_name: prometheus
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

collector-config.yaml

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"

processors:
  batch:
  queued_retry:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]

prometheus.yaml

global:
  scrape_interval: 15s # Default is every 1 minute.

scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
Wapiti answered 19/8, 2020 at 14:27 Comment(2)
Why use zipkin AND jaeger? And jaeger is not part of the pipelines?Piccaninny
@Piccaninny Oh, you don't have to; I wrote in my answer that I was only interested in Jaeger. You can simply remove the Zipkin part if you are only interested in Jaeger too. Also, note that I am using jaeger-all-in-one in this demo. For a full setup using jaeger-collector, jaeger-agent, jaeger-query, and elasticsearch, you can refer to the docker-compose.development.yml file in my GitHub repo.Wapiti
