Description
Component(s)
exporter/prometheusremotewrite
What happened?
Description
When converting OTLP metrics to Prometheus format, the "service.name" resource attribute is mapped to the job label and the "service.instance.id" resource attribute is mapped to the instance label.
However, when resource_to_telemetry_conversion is set to true, these two resource attributes are also added as ordinary Prometheus labels, duplicating the values already carried by job and instance.
I think this should be fixed so that service.name and service.instance.id are excluded from the label conversion even when resource_to_telemetry_conversion is true.
If this looks acceptable, I'd like to contribute.
Steps to Reproduce
Set resource_to_telemetry_conversion to true in the prometheusremotewrite exporter and send metrics through the pipeline.
Expected Result
node_cpu_seconds_total{cluster="ellie-test", instance="worker-nrm4", job="node-exporter", mode="idle", node_ip="10.202.42.48", zone="dev"}
Actual Result
node_cpu_seconds_total{cluster="ellie-test", instance="worker-nrm4", job="node-exporter", mode="idle", node_ip="10.202.42.48", service_instance_id="worker-nrm4", service_name="node-exporter", zone="dev"}
Collector version
0.136.0
Environment information
OpenTelemetry Collector configuration
scrape_configs_file: "test_scrape_config.yaml"
exporters:
  prometheusremotewrite:
    endpoint: "${remotewrite_endpoint}"
    headers:
      Authorization: "${credential}"
    timeout: 10s
    target_info:
      enabled: false
    max_batch_request_parallelism: 4
    remote_write_queue:
      enabled: true
      num_consumers: 4
      queue_size: 10000
    resource_to_telemetry_conversion:
      enabled: true
receivers:
  kubeletstats:
    auth_type: serviceAccount
    collection_interval: 60s
    endpoint: https://${OTEL_K8S_NODE_IP}:10250
    extra_metadata_labels:
      - k8s.volume.type
    insecure_skip_verify: true
    k8s_api_config:
      auth_type: serviceAccount
    metric_groups:
      - pod
      - volume
      - container
processors:
  k8sattributes:
    extract:
      labels:
        - from: pod
          key: app.kubernetes.io/name
          tag_name: service.name
        - from: pod
          key: k8s-app
          tag_name: service.name
        - from: node
          key: topology.kubernetes.io/region
          tag_name: k8s.node.region
      metadata:
        - k8s.node.name
    passthrough: false
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
      - sources:
          - from: resource_attribute
            name: k8s.node.name
      - sources:
          - from: connection
service:
  pipelines:
    metrics:
      receivers: [kubeletstats, otlp, prometheus]
      processors: [k8sattributes]
      exporters: [prometheusremotewrite]
  telemetry:
    logs:
      level: debug
    metrics:
      level: normal
Log output
Additional context
No response