Fix k8s resolver parsing so loadbalancing exporter works with service FQDNs #44519
ChrsMark merged 7 commits into open-telemetry:main from
Conversation
… FQDNs Signed-off-by: Israel Blancas <iblancasa@gmail.com>
```go
name, namespace := nAddr[0], "default"
if len(nAddr) > 1 {
	namespace = nAddr[1]
}
parts := strings.Split(service, ".")
```
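For context, the parsing the diff points at can be sketched as a small standalone function. This is a hypothetical reconstruction (the function name `parseServiceFQDN` and the standalone shape are mine, not the PR's): it splits the configured service string on `.` so that a bare name, a `<svc>.<namespace>` pair, or a full FQDN like `<svc>.<namespace>.svc.cluster.local` all resolve to the same service name and namespace.

```go
package main

import (
	"fmt"
	"strings"
)

// parseServiceFQDN is a sketch of the fixed parsing logic: it accepts
// "svc", "svc.ns", or a full FQDN such as "svc.ns.svc.cluster.local"
// and returns the service name and namespace, defaulting the
// namespace to "default" when none is given.
func parseServiceFQDN(service string) (name, namespace string) {
	parts := strings.Split(service, ".")
	name, namespace = parts[0], "default"
	if len(parts) > 1 {
		namespace = parts[1]
	}
	return name, namespace
}

func main() {
	for _, s := range []string{
		"collector-backend",
		"collector-backend.default",
		"collector-backend.default.svc.cluster.local",
	} {
		n, ns := parseServiceFQDN(s)
		fmt.Printf("%s -> name=%s namespace=%s\n", s, n, ns)
	}
}
```

All three spellings map to the same `(name, namespace)` pair, which is what makes the FQDN form work with the k8s resolver.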
Thanks for the fix. According to the changes you made, the loadbalancing exporter should work correctly (even without your fix) when using the standard `<svcName>.<namespace>` domain name format of the headless service. However, I am still experiencing the same issue even when using `collector-backend.default` in the config mentioned in the issue. Am I missing something here? Thanks!
Let me retest with that and see if I missed something during the fix.
I just reran with the provided config against a local kind cluster using a collector build that includes the FQDN parsing fix. With telemetrygen driving 50 traces through a port-forward, both backend pods received traffic, and `kubectl logs deploy/lb-collector` never produced the `couldn't find the exporter for the endpoint ""` error. So I think `<svc>.<namespace>` now works as expected.
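For reference, the configuration shape exercised in that retest looks roughly like the sketch below. This is an illustrative fragment, not the exact config from the issue; the endpoint names (`collector-backend.default`) and the `routing_key` choice are assumptions:

```yaml
exporters:
  loadbalancing:
    routing_key: traceID
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      k8s:
        # With the fix, a bare name, <svc>.<namespace>, or a full
        # FQDN should all resolve to the same headless service.
        service: collector-backend.default
```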
One more note for anyone hitting this: the front collector's service account must also be allowed to list/watch EndpointSlices in the target namespace. Without the RBAC role binding, the informer never populates and the hash ring stays empty, which yields the same error message even with a correct service string.
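A minimal RBAC sketch for that requirement might look like the following. The resource names (`lb-collector` service account, `otelcol-endpointslices` Role) are placeholders, and whether your resolver also needs `endpoints` permissions depends on the collector version:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: otelcol-endpointslices
  namespace: default
rules:
  - apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: otelcol-endpointslices
  namespace: default
subjects:
  - kind: ServiceAccount
    name: lb-collector
    namespace: default
roleRef:
  kind: Role
  name: otelcol-endpointslices
  apiGroup: rbac.authorization.k8s.io
```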
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
@rlankfo please review as codeowner
…contrib into 44472
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
…tor-contrib into 44472
Is there anything else pending here?
@open-telemetry/collector-contrib-approvers anything else here? |
Link to tracking issue
Fixes #44472
Testing
Added some tests and manual testing.