
feat(kafka): add OidcManaged enum value to OAuthBearerMethod #3385

Open
aaaristo wants to merge 1 commit into Azure:main from aaaristo:feat/oidc-managed-attribute

Conversation

@aaaristo

Summary

Adds OAuthBearerMethod.OidcManaged to the Kafka extension's OAuthBearerMethod enum so the isolated-worker [KafkaTrigger] / [KafkaOutput] attributes can carry the new value through to the host extension. The existing Default and Oidc values are unchanged.

This is the worker-side companion to the host extension change in Azure/azure-functions-kafka-extension#635. The host extension is where the runtime behavior lives.

Motivation

On some Linux images (notably Azure Functions Flex Consumption), librdkafka's built-in OIDC path uses libcurl with a hardcoded CA-bundle path (/etc/pki/tls/certs/ca-bundle.crt) that doesn't exist, producing:

error setting certificate file: /etc/pki/tls/certs/ca-bundle.crt (-1)

The natural fix would be https.ca.location in ProducerConfig / ConsumerConfig, but that property was only added in librdkafka 2.11. Confluent.Kafka 2.4.0 (the host extension's pinned version) ships an older librdkafka that doesn't expose it. Bumping that dependency is a non-trivial cascade through Confluent.SchemaRegistry and serializers.

The host extension solves this without changing the librdkafka pin: it adds OAuthBearerMethod.OidcManaged, which performs OIDC client-credentials token acquisition in managed .NET code (HttpClient) and pushes the token onto librdkafka via SetOAuthBearerTokenRefreshHandler plus a synchronous OAuthBearerSetToken call right after Build(). HttpClient uses the OS trust store on every supported platform, so the CA-bundle problem disappears.
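As a rough illustration of that mechanism, the refresh-handler wiring looks something like the sketch below. This is a hedged sketch, not the host extension's actual code: `FetchToken`, the config values, and the principal name are placeholders, while `SetOAuthBearerTokenRefreshHandler`, `OAuthBearerSetToken`, and `OAuthBearerSetTokenFailure` are real Confluent.Kafka APIs.

```csharp
using System;
using Confluent.Kafka;

// Illustrative placeholder: in the real flow, HttpClient POSTs
// grant_type=client_credentials to the OIDC token endpoint and parses
// access_token / expires_in. HttpClient uses the OS trust store, so no
// CA-bundle path is involved.
static (string AccessToken, long LifetimeMs) FetchToken() =>
    throw new NotImplementedException();

var config = new ConsumerConfig
{
    BootstrapServers = "...",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.OAuthBearer,
};

using var consumer = new ConsumerBuilder<Ignore, byte[]>(config)
    // librdkafka invokes this callback whenever the current token is
    // missing or close to expiry; the host extension also sets a token
    // synchronously right after Build() so the first poll is authenticated.
    .SetOAuthBearerTokenRefreshHandler((client, oauthConfig) =>
    {
        try
        {
            var (token, lifetimeMs) = FetchToken();
            // lifetime is absolute: milliseconds since the Unix epoch
            client.OAuthBearerSetToken(
                token,
                DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() + lifetimeMs,
                principalName: "my-client-id"); // placeholder principal
        }
        catch (Exception ex)
        {
            // Reports the failure to librdkafka instead of leaving it hung
            client.OAuthBearerSetTokenFailure(ex.Message);
        }
    })
    .Build();
```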

This PR is the worker-side part: it adds the new enum value to the attribute surface so users can write:

[Function("Heartbeat")]
public Task Run([KafkaTrigger(
    brokerList: "...", topic: "...",
    AuthenticationMode = BrokerAuthenticationMode.OAuthBearer,
    OAuthBearerMethod = OAuthBearerMethod.OidcManaged,                 // <-- new
    OAuthBearerClientId = "%...%", OAuthBearerClientSecret = "%...%",
    OAuthBearerTokenEndpointUrl = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token",
    OAuthBearerScope = "...",
    OAuthBearerExtensions = "logicalCluster=lkc-...,identityPoolId=pool-...")] byte[][] events,
    FunctionContext context) { /* ... */ }

The trigger metadata serializes the enum value into function.json, and the host extension picks it up at binding time. There is no runtime logic in this package — it is purely the attribute surface.
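For illustration, the serialized binding metadata might look roughly like the following. This is a hand-written sketch: the property names and casing mirror the attribute properties, and the exact shape emitted by the worker's metadata generator may differ.

```json
{
  "bindings": [
    {
      "type": "kafkaTrigger",
      "name": "events",
      "direction": "in",
      "brokerList": "...",
      "topic": "...",
      "authenticationMode": "OAuthBearer",
      "oAuthBearerMethod": "OidcManaged"
    }
  ]
}
```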

Operational note for Flex Consumption

The host PR documents this in detail, but for cross-reference: end-to-end auto-scaling on Flex Consumption with OidcManaged currently requires alwaysReady to be set on the Kafka function group(s).

Per Architecture.md § Scale Controller Integration, Azure's Functions Scale Controller embeds a pinned reference to Microsoft.Azure.WebJobs.Extensions.Kafka and uses reflection-based delegation for scaling decisions. Until the Scale Controller's pinned version contains the host change that ships alongside this PR, it can't deserialize OAuthBearerMethod: "OidcManaged" from function metadata, so it can't compute lag for these triggers and won't scale them out from zero.

The listener still works correctly with alwaysReady=1. Once the Scale Controller's pinned reference is bumped, dynamic scale-from-zero will work without alwaysReady.

Backwards compatibility

  • Purely additive: Default = 0, Oidc = 1, OidcManaged = 2. No reordering.
  • Existing code using Default or Oidc is unaffected.
  • Older host extension versions that don't recognise OidcManaged will silently fall back to the library's default behavior when the metadata value is cast, so this enum value is only meaningful when paired with the matching host extension release. Recommended: gate user-facing code on the host extension version that contains the corresponding host change.
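Sketched in C#, the additive shape the bullets describe would be as follows. The explicit `= 0/1/2` assignments restate the ordering above; whether the source file spells them out explicitly is not shown in this diff (the review below suggests adding them).

```csharp
public enum OAuthBearerMethod
{
    Default = 0,      // librdkafka's default OAUTHBEARER callback behavior
    Oidc = 1,         // librdkafka's built-in OIDC (libcurl-based) path
    OidcManaged = 2,  // new: OIDC client-credentials flow in managed .NET code
}
```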

Test plan

  • Build clean (the existing nullable-reference-types warning in Worker.Extensions.Shared is pre-existing on main)
  • No tests added or modified — this is a one-line enum addition with no runtime behavior
  • Reviewer to confirm CI passes against the worker SDK version pinned in global.json

Companion PR

Host-side runtime behavior: Azure/azure-functions-kafka-extension#635 — adds the OidcManaged runtime path, OidcTokenProvider, eager PrimeToken, and Schema Registry auth-provider integration.

Both PRs need to ship for users to opt in. Recommended merge order: host PR first, this one after.

Files

extensions/Worker.Extensions.Kafka/src/OAuthBearerMethod.cs   +10/-1   (new enum value + XML doc)

Adds OAuthBearerMethod.OidcManaged for use with the Kafka trigger/
output binding attribute on isolated workers.

This is the worker-side companion to the host extension change in
Azure/azure-functions-kafka-extension. The host extension performs
OIDC client-credentials token acquisition in managed .NET code
(HttpClient + OAuthBearerSetToken) instead of librdkafka's libcurl-
based path, sidestepping a hardcoded CA-bundle path that does not
exist on some Linux images (notably Azure Functions Flex Consumption).

Existing values (Default, Oidc) are unchanged; this is purely an
additive enum value. Requires a host extension version that
recognises OidcManaged at runtime.

Usage:

  [KafkaTrigger(
      brokerList: "...", topic: "...",
      AuthenticationMode = BrokerAuthenticationMode.OAuthBearer,
      OAuthBearerMethod = OAuthBearerMethod.OidcManaged,
      OAuthBearerClientId = "%...%",
      OAuthBearerClientSecret = "%...%",
      OAuthBearerTokenEndpointUrl = "...",
      OAuthBearerScope = "...",
      OAuthBearerExtensions = "k=v,...")]
Copilot AI review requested due to automatic review settings April 27, 2026 15:07
Contributor

Copilot AI left a comment


Pull request overview

Adds a new OAuthBearerMethod.OidcManaged enum value to the Kafka isolated-worker extension so [KafkaTrigger] / [KafkaOutput] metadata can carry the new mode through to the host Kafka extension (where the runtime behavior is implemented).

Changes:

  • Add OAuthBearerMethod.OidcManaged enum value.
  • Document the intent and host-extension requirement via XML doc comment on the enum member.


/// mode; avoids the platform-specific CA-bundle issue that affects
/// librdkafka's OIDC path on some Linux images (e.g. Azure Functions Flex).
/// </summary>
OidcManaged

Copilot AI Apr 27, 2026


Consider assigning explicit numeric values to the enum members (e.g., Default = 0, Oidc = 1, OidcManaged = 2) to lock in the on-wire/serialized representation and prevent accidental renumbering if the enum is ever reordered or new values are inserted in the middle.

Comment on lines +18 to +24
/// <summary>
/// OIDC client-credentials flow performed in managed .NET code rather than
/// delegated to librdkafka's libcurl-based token fetch. Requires a host
/// extension (Microsoft.Azure.WebJobs.Extensions.Kafka) that supports this
/// mode; avoids the platform-specific CA-bundle issue that affects
/// librdkafka's OIDC path on some Linux images (e.g. Azure Functions Flex).
/// </summary>

Copilot AI Apr 27, 2026


This new enum value is expected to flow into generated functions.metadata via JsonStringEnumConverter. Please add/extend a generator test to assert that setting OAuthBearerMethod = OidcManaged on [KafkaTrigger]/[KafkaOutput] emits the correct string value, so regressions in metadata serialization are caught.

satvu requested a review from TsuyoshiUshio April 28, 2026 23:29