Modern platforms usually treat observability as three core signals:
- Metrics
- Logs
- Traces
OpenTelemetry (OTel) is the standard way to produce and ship all three signals. This article explains a practical setup for InterSystems IRIS running in Docker Compose, with a full local observability stack:
- Prometheus (metrics)
- Loki (logs)
- Tempo (traces, for Grafana Traces Drilldown)
- Jaeger (optional, alternate trace viewer)
- Grafana (unified UI)
- OpenTelemetry Collector
What you’ll build
- IRIS emits OTLP/HTTP telemetry to the OpenTelemetry Collector
- The Collector routes:
  - traces -> Tempo (for Grafana) and Jaeger (alternate trace UI)
  - metrics -> Prometheus
  - logs -> Loki
- Grafana reads from Prometheus, Tempo, Jaeger, and Loki and shows everything in one place

You can download the full working Docker Compose example here: https://github.com/isc-jyin/iris-otel-example
Prerequisites
- Docker
Step-by-Step Guide
1. Create a folder structure
otel-minimal-config/
  iris/
    cpf-merge.cpf
  configs/
    otel-collector-config.yaml
    prometheus.yaml
    tempo.yaml
    grafana/
      provisioning/
        datasources/
          datasources.yaml
  docker-compose.yaml
  README.md
  README.pdf
2. Configure the OpenTelemetry Collector
The Collector is the router that receives OTLP from IRIS, then exports to each backend in the correct format.
Create otel-collector-config.yaml:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  # expose metrics on an HTTP endpoint for Prometheus to scrape
  prometheus:
    endpoint: "0.0.0.0:8889"
  # export traces to Jaeger for viewing
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
  # export logs to Loki via its native OTLP ingestion endpoint
  otlphttp/loki:
    endpoint: "http://loki:3100/otlp"
    tls:
      insecure: true
  # export traces to Tempo for Grafana Traces Drilldown
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true
  debug:
    verbosity: detailed

processors:
  batch:

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/loki, debug]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger, otlp/tempo, debug]
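Once the Collector is running, you can smoke-test the logs pipeline by hand-posting an OTLP/HTTP payload at it. The sketch below is not part of the stack: it assumes you publish port 4318 of the otel-collector service to the host (the compose file in this article keeps it internal to the Docker network), and the payload shape follows the OTLP JSON encoding.

```python
import json
import time
import urllib.request

def build_otlp_log(body: str, service: str = "smoke-test") -> dict:
    """Build a minimal OTLP/HTTP JSON payload containing one log record."""
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}}
            ]},
            "scopeLogs": [{
                "logRecords": [{
                    "timeUnixNano": str(time.time_ns()),
                    "severityText": "INFO",
                    "body": {"stringValue": body},
                }]
            }]
        }]
    }

def send(payload: dict, endpoint: str = "http://localhost:4318/v1/logs") -> int:
    """POST the payload to the Collector's OTLP/HTTP logs endpoint."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# With 4318 published to the host, the record should land in Loki:
#   send(build_otlp_log("hello from the smoke test"))
```

If the debug exporter were added to the logs pipeline's exporter list, the record would also appear verbatim in the Collector's own log output, which is handy when checking whether Loki or the Collector is the problem.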
3. Configure Prometheus scraping
Prometheus scrapes two HTTP endpoints on the Collector: port 8889 (the metrics exported from IRIS) and port 8888 (the Collector's own internal telemetry). It also stores the metrics that Tempo's metrics generator pushes via remote write.
Create prometheus.yaml:
scrape_configs:
  - job_name: 'otel-collector'
    scrape_interval: 10s
    static_configs:
      - targets: ['otel-collector:8889']
      - targets: ['otel-collector:8888']
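With the stack running, you can also hit Prometheus's HTTP API directly instead of going through Grafana. A small sketch; the example PromQL just checks that the scrape target is up, so substitute whichever IRIS metric names you see arriving:

```python
import urllib.parse

def instant_query_url(base: str, promql: str) -> str:
    """Build a URL for Prometheus's /api/v1/query instant-query endpoint."""
    return base.rstrip("/") + "/api/v1/query?" + urllib.parse.urlencode({"query": promql})

# Fetch this with curl or a browser once docker compose is up:
print(instant_query_url("http://localhost:9090", 'up{job="otel-collector"}'))
```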
4. Configure Tempo
Tempo is the trace backend that Grafana's Traces Drilldown needs. We also enable Tempo's metrics generator so Grafana can build metrics (rate/errors/duration) from traces.
Create tempo.yaml:
server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: "tempo:4317"

storage:
  trace:
    backend: local
    local:
      path: /tmp/tempo/traces
    wal:
      path: /tmp/tempo/wal

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true
  traces_storage:
    path: /var/tempo/generator/traces

overrides:
  defaults:
    metrics_generator:
      processors: [service-graphs, span-metrics, local-blocks] # enables the metrics generator
      generate_native_histograms: both
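Tempo also exposes a search API you can query directly with TraceQL once port 3200 is reachable from the host (the compose file publishes it on an ephemeral host port; `docker compose port tempo 3200` shows which one). A sketch:

```python
import urllib.parse

def tempo_search_url(base: str, traceql: str, limit: int = 20) -> str:
    """Build a URL for Tempo's /api/search endpoint with a TraceQL query."""
    params = urllib.parse.urlencode({"q": traceql, "limit": limit})
    return base.rstrip("/") + "/api/search?" + params

# Find recent traces emitted by the iris service:
print(tempo_search_url("http://localhost:3200", '{resource.service.name = "iris"}'))
```

The same TraceQL expression works in Grafana's Tempo datasource query editor, so this is a quick way to debug whether Tempo has data before blaming Grafana.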
5. Configure Grafana datasources
Grafana connects to all four backends (Prometheus, Tempo, Jaeger, and Loki) through provisioned datasources, so everything shows up in one UI.
Create datasources.yaml (under configs/grafana/provisioning/datasources/):
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
  - name: Jaeger
    type: jaeger
    access: proxy
    url: http://jaeger:16686
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3200
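After Grafana starts, you can verify that provisioning took effect through its HTTP API; with anonymous admin access enabled (as in the docker-compose.yaml in step 7), /api/datasources needs no credentials. A sketch with a small pure helper:

```python
import json
import urllib.request

EXPECTED = {"Loki", "Prometheus", "Jaeger", "Tempo"}

def missing_datasources(provisioned_names, expected=EXPECTED):
    """Return the expected datasource names that Grafana did not report."""
    return set(expected) - set(provisioned_names)

def fetch_datasource_names(base="http://localhost:3000"):
    """List provisioned datasource names via Grafana's HTTP API."""
    with urllib.request.urlopen(base + "/api/datasources") as resp:
        return [ds["name"] for ds in json.loads(resp.read())]

# With the stack running:
#   missing_datasources(fetch_datasource_names())  # empty set when all four are provisioned
```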
6. Enable OTel metrics and logs in IRIS
Create iris/cpf-merge.cpf (the name must match the ISC_CPF_MERGE_FILE path in docker-compose.yaml):
[Monitor]
OTELMetrics=1
OTELLogs=1
OTELLogLevel=INFO
7. Create docker-compose.yaml
Compose starts everything on the same network, so service names like tempo, loki, and otel-collector resolve via Docker's internal DNS.
services:
  iris:
    image: containers.intersystems.com/intersystems/iris-community:2025.3
    environment:
      - ISC_CPF_MERGE_FILE=/iris/cpf-merge.cpf
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318
      - OTEL_EXPORTER_OTLP_TIMEOUT=500
      - OTEL_SERVICE_NAME=iris
    depends_on:
      - otel-collector
    volumes:
      - ./iris:/iris
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.143.0
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./configs/otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
  tempo:
    image: grafana/tempo:latest
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./configs/tempo.yaml:/etc/tempo.yaml:ro
    ports:
      - 3200
      - 4317
  loki:
    image: grafana/loki:latest
    ports:
      - 3100
  prometheus:
    image: prom/prometheus:v3.9.1
    # the remote-write receiver must be enabled so Tempo's metrics
    # generator can push to /api/v1/write
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --web.enable-remote-write-receiver
    volumes:
      - ./configs/prometheus.yaml:/etc/prometheus/prometheus.yml:ro
    ports:
      - "9090:9090"
  # Jaeger (optional alternate trace viewer)
  jaeger:
    image: jaegertracing/all-in-one:1.76.0
    ports:
      - "16686:16686"
      - 4317
    environment:
      - LOG_LEVEL=debug
  grafana:
    image: grafana/grafana:12.3.1
    environment:
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_BASIC_ENABLED=false
      - GF_FEATURE_TOGGLES_ENABLE=traceqlEditor
    volumes:
      - ./configs/grafana/provisioning:/etc/grafana/provisioning:ro
    ports:
      - 3000:3000/tcp
    depends_on:
      - prometheus
      - loki
      - jaeger
8. Start the stack
Bring the environment up:
docker compose up -d
Open a terminal session in the IRIS container:
docker compose exec iris iris session IRIS
Then run the built-in trace demo:
Do ##class(%Trace.Tracer).Test()
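Export can lag by a few seconds, so the demo trace may not appear in the UIs immediately. One way to check without opening a browser is Jaeger's HTTP API; a sketch using only the standard library:

```python
import json
import urllib.parse
import urllib.request

def traces_url(service: str, base: str = "http://localhost:16686") -> str:
    """Jaeger HTTP API endpoint listing recent traces for a service."""
    return f"{base}/api/traces?service={urllib.parse.quote(service)}&limit=5"

def trace_count(service: str) -> int:
    """Number of recent traces Jaeger has stored for the service."""
    with urllib.request.urlopen(traces_url(service)) as resp:
        return len(json.loads(resp.read()).get("data", []))

# With the stack running and the demo executed:
#   trace_count("iris")  # > 0 once the trace has been exported
```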
9. Viewing Results
Grafana - Traces + Metrics + Logs
Open http://localhost:3000/drilldown
Metrics: (screenshot: Grafana Metrics Drilldown)
Logs: (screenshot: Grafana Logs Drilldown)
Traces: (screenshots: Grafana Traces Drilldown)
Jaeger - Traces
Open http://localhost:16686 (screenshot: traces for the iris service in Jaeger)
Prometheus - Metrics
Open http://localhost:9090 (screenshot: the Prometheus query UI)