Component(s)
receiver/prometheus
What happened?
Description
After upgrading the OpenTelemetry Collector from v0.126.0 to v0.127.0, all my prometheusreceiver scrapes that use basic_auth started failing. Targets that do not use basic_auth continue to work normally. Only scrapes with basic_auth are affected.
There were no infrastructure or configuration changes, only the version bump of the Collector. After rolling back to version v0.126.0, scraping with basic_auth resumed functioning as expected.
The environment variables used for the passwords are correctly set. The same targets respond with 200 OK when accessed using curl and the same credentials.
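The manual credential check can be sketched as follows: basic_auth amounts to sending an `Authorization: Basic base64(user:password)` header, so building that header by hand shows exactly what the scraper puts on the wire. The username and target host are taken from the config in this report; the password here is a dummy placeholder, not the real secret.

```shell
# Build the Basic auth header the scraper would send.
# 'otelcol' comes from the config below; 'secret' is a placeholder --
# the real value comes from $KONG_METRICS_PRD_PASSWORD.
user=otelcol
password=secret
auth=$(printf '%s:%s' "$user" "$password" | base64)
echo "Authorization: Basic $auth"
# The manual verification against the target would then be:
#   curl -s -o /dev/null -w '%{http_code}\n' \
#     -H "Authorization: Basic $auth" http://api-gateway.ctbz.prd/metrics
```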
Steps to Reproduce
- Deploy the OpenTelemetry Collector v0.127.0 using the prometheusreceiver with a scrape_config that includes basic_auth credentials.
- Make sure the target endpoint is available and responds with HTTP 200 when accessed manually using the same credentials (e.g., via curl).
- Observe that the target is not scraped successfully and warnings appear in the Collector logs.
- Roll back the Collector to version v0.126.0, keeping the same configuration.
- Observe that scraping resumes successfully and no warnings appear in the logs.
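One variation worth trying while reproducing (my suggestion, not part of the original report): the collector also documents the explicit `${env:VAR}` form for environment-variable substitution. Rewriting one job with it would isolate whether the bare `${VAR}` expansion is what changed between the two versions:

```yaml
basic_auth:
  username: 'otelcol'
  password: '${env:KONG_METRICS_PRD_PASSWORD}'
```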
Expected Result
Targets that use basic_auth should be scraped successfully, just like they were in version v0.126.0.
Actual Result
All targets configured with basic_auth fail to be scraped after upgrading to version v0.127.0. The logs show repeated Failed to scrape Prometheus endpoint warnings. Targets without basic_auth continue to work correctly.
Collector version
v0.127.0
Environment
OpenTelemetry Collector version (before upgrade): v0.126.0
Deployment environment: Kubernetes Cluster
Kubernetes version: 1.31.8-gke.1045000
Collector deployed as: StatefulSet
OpenTelemetry Collector configuration
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'kong-production'
          scrape_interval: 30s
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets: ['api-gateway.ctbz.prd']
          basic_auth:
            username: 'otelcol'
            password: '${KONG_METRICS_PRD_PASSWORD}'
        - job_name: 'kong-staging'
          scrape_interval: 30s
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets: ['api-gateway.ctbz.stg']
          basic_auth:
            username: 'otelcol'
            password: '${KONG_METRICS_STG_PASSWORD}'
        - job_name: 'kong-development'
          scrape_interval: 30s
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets: ['api-gateway.ctbz.dev']
          basic_auth:
            username: 'otelcol'
            password: '${KONG_METRICS_DEV_PASSWORD}'
Log output
2025-07-03T20:24:08.520Z info prometheusreceiver@v0.127.0/metrics_receiver.go:227 Starting scrape manager {"resource": {}, "otelcol.component.id": "prometheus", "otelcol.component.kind": "receiver", "otelcol.signal": "metrics"}
2025-07-03T20:24:15.419Z warn internal/transaction.go:150 Failed to scrape Prometheus endpoint {"resource": {}, "otelcol.component.id": "prometheus", "otelcol.component.kind": "receiver", "otelcol.signal": "metrics", "scrape_timestamp": 1751574255405, "target_labels": "{__name__=\"up\", instance=\"api-gateway.ctbz.dev\", job=\"kong-development\"}"}
2025-07-03T20:24:21.110Z warn internal/transaction.go:150 Failed to scrape Prometheus endpoint {"resource": {}, "otelcol.component.id": "prometheus", "otelcol.component.kind": "receiver", "otelcol.signal": "metrics", "scrape_timestamp": 1751574261092, "target_labels": "{__name__=\"up\", instance=\"api-gateway.ctbz.prd\", job=\"kong-production\"}"}
2025-07-03T20:24:27.529Z warn internal/transaction.go:150 Failed to scrape Prometheus endpoint {"resource": {}, "otelcol.component.id": "prometheus", "otelcol.component.kind": "receiver", "otelcol.signal": "metrics", "scrape_timestamp": 1751574267514, "target_labels": "{__name__=\"up\", instance=\"api-gateway.ctbz.stg\", job=\"kong-staging\"}"}
Additional context
No response