Replies: 7 comments 3 replies
-
hey @mlevkov, it always warms my heart when I see someone talking positively about iggy. I went through the full design doc and the codebase. about the http sink: it looks well researched, and since you will be using it, it's even better that you will implement it. one thing: I have a bit more to add about the runtime issues (both confirmed, I checked in code):
your proposed fix for the retry logic is somewhat OK, but it does not resolve the core issue: if the process crashes mid-consume, those messages are gone regardless of what happens inside the callback. I think I have a plan for a proper resolution. step 1 - visibility:
this gives operators immediate visibility into failures that are currently invisible, without changing the FFI contract or auto-commit strategy. all existing sinks benefit. step 2 - proper at-least-once delivery
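To make the step 2 idea concrete, here is a minimal sketch of the difference between the current auto-commit behavior and commit-after-success. Everything here (`Runtime`, `Delivery`, the `consume_*` methods) is invented for illustration and is not the actual iggy connector runtime API:

```rust
// Illustrative sketch only: shows why committing the offset *after* the
// sink callback succeeds gives at-least-once delivery, while committing
// before (or regardless of) the callback result loses failed messages.

#[derive(Debug, PartialEq)]
enum Delivery {
    Ok,
    TransientError,
}

struct Runtime {
    committed_offset: u64,
}

impl Runtime {
    // Current behavior (at-most-once): the offset advances even when the
    // callback fails, so a crash or error after this point drops the message.
    fn consume_auto_commit(&mut self, offset: u64, sink: impl Fn(u64) -> Delivery) -> Delivery {
        self.committed_offset = offset;
        sink(offset)
    }

    // Proposed behavior (at-least-once): only advance the committed offset
    // when the callback reports success; failures are redelivered on restart.
    fn consume_commit_after(&mut self, offset: u64, sink: impl Fn(u64) -> Delivery) -> Delivery {
        let result = sink(offset);
        if result == Delivery::Ok {
            self.committed_offset = offset;
        }
        result
    }
}

fn main() {
    let failing = |_off: u64| Delivery::TransientError;

    let mut rt = Runtime { committed_offset: 0 };
    rt.consume_auto_commit(42, failing);
    // Offset advanced despite the failure: the message is lost on restart.
    assert_eq!(rt.committed_offset, 42);

    let mut rt = Runtime { committed_offset: 0 };
    rt.consume_commit_after(42, failing);
    // Offset did not advance: the message will be re-consumed.
    assert_eq!(rt.committed_offset, 0);
}
```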
step 1 is a standalone PR. step 2 probably splits into SDK changes and runtime changes - we can finalize the details in the tracking issues. finally, answering your questions:
looking forward to seeing the issues and the PR!
-
PR and Issues Filed

The HTTP sink connector implementation is ready for review:
While analyzing the runtime to inform the design, we found two issues affecting all existing sinks. Filed separately:
Integration tests

The PR includes 7 integration tests in
Test counts
-
I really like the idea of a generic HTTP connector; given the existing transforms etc., this could be used to quickly build integrations for lots of APIs out there. Actually, at some point it could even be possible to have a custom HTTP client + message processor and reuse it across other sinks/sources directly via the HTTP API, so that all these plugins could rely on a single, configurable, well-tested HTTP component to do all the processing.
-
Thank you @spetz — that's a great perspective on where this can go. We designed the HTTP layer with that kind of reuse in mind, even if it's currently packaged as a single sink plugin. The internals are already separated into composable pieces:
Extracting the HTTP client + retry + serialization into a shared

Happy to explore that direction once the base sink lands and stabilizes.
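As a rough sketch of that composable split (the trait and struct names and the ndjson example below are illustrative assumptions, not the actual iggy plugin API), each sink would supply only a payload serializer, while one shared, well-tested component owns the HTTP client and retry logic:

```rust
// Illustrative sketch: a serializer trait that individual sinks implement,
// consumed by a single shared HTTP delivery component.

trait PayloadSerializer {
    fn content_type(&self) -> &'static str;
    fn serialize(&self, messages: &[String]) -> String;
}

// One possible serializer: newline-delimited JSON (one of the batch modes
// mentioned in the design doc).
struct NdjsonSerializer;

impl PayloadSerializer for NdjsonSerializer {
    fn content_type(&self) -> &'static str {
        "application/x-ndjson"
    }
    fn serialize(&self, messages: &[String]) -> String {
        messages.join("\n")
    }
}

// The shared component owns the endpoint, client, and retries; sinks only
// pick a serializer. (Retry/client plumbing elided for brevity.)
struct HttpDelivery<S: PayloadSerializer> {
    serializer: S,
    url: String,
}

impl<S: PayloadSerializer> HttpDelivery<S> {
    fn build_body(&self, messages: &[String]) -> (String, &'static str) {
        (self.serializer.serialize(messages), self.serializer.content_type())
    }
}

fn main() {
    let delivery = HttpDelivery {
        serializer: NdjsonSerializer,
        url: "https://example.com/ingest".to_string(),
    };
    let (body, ct) = delivery.build_body(&["{\"a\":1}".into(), "{\"b\":2}".into()]);
    println!("POST {} ({ct}):\n{body}", delivery.url);
}
```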
-
Feature idea: Per-message HTTP header forwarding

While working on the HTTP sink PR (#2925), a question came up about dynamic per-message headers. Today, all HTTP headers are static (config-derived, the same for every request). But there are real use cases for per-message headers.

Use cases:
Proposed design — additive layering:

```rust
// Static config headers (pre-built once in open(), cloned per-request)
let mut request = build_request(self.method, client, &self.url)
    .headers(self.request_headers.clone())
    .header("content-type", content_type);

// Per-message dynamic headers (opt-in via config)
if self.include_metadata_headers {
    request = request
        .header("x-iggy-offset", offset.to_string())
        .header("x-iggy-topic", &topic_metadata.topic)
        .header("x-iggy-partition", partition_id.to_string());
}

// Forward Iggy user headers as HTTP headers (opt-in)
if self.forward_iggy_headers {
    if let Some(headers) = &message.headers {
        for (key, value) in headers {
            request = request.header(
                format!("x-iggy-{}", key.to_string_value()),
                value.to_string_value(),
            );
        }
    }
}
```

Config surface:

```toml
[plugin_config]
# Include iggy metadata (offset, topic, partition) as HTTP headers
include_metadata_headers = false
# Forward iggy user-defined message headers as HTTP headers (prefixed with x-iggy-)
forward_iggy_headers = false
```

This preserves the current pre-built

Thoughts? This could be a follow-up PR if there is interest.
-
@mlevkov can we make it a separate discussion?
-
Done — moved to a separate discussion: #3029. Added some extra design considerations around batch mode implications and header name conflicts.
-
Hi team,
We've been running Iggy in production for our ad mediation platform since January 2026 (migrated from Fluvio) and would like to contribute a generic HTTP sink connector back to the ecosystem.
The connector framework has 6 sinks today, but no generic HTTP sink for arbitrary endpoints — webhooks, Lambda functions, REST APIs, SaaS integrations. Every HTTP-using sink (Quickwit, Elasticsearch, Iceberg) re-implements its own client logic, error handling, and retry strategy.
We've put together a detailed design document (attached) covering the full proposal: four batch modes (individual, ndjson, json_array, raw), configurable metadata envelopes with proper u128/binary serialization, exponential backoff retry with transient error classification, and an opt-in health check. The design follows patterns from the existing sinks — reqwest from Quickwit, retry logic from PostgreSQL/MongoDB, AtomicU64 counters from MongoDB. No new workspace dependencies required.
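As a rough illustration of the retry piece described above (function names, status-code thresholds, and delay values here are assumptions for the sketch, not the values in the design doc), exponential backoff applies only to errors classified as transient:

```rust
use std::time::Duration;

// Sketch: retry with exponential backoff, retrying only transient errors.

fn is_transient(status: u16) -> bool {
    // 429 and 5xx are worth retrying; other 4xx client errors are permanent.
    status == 429 || (500..=599).contains(&status)
}

fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    // base * 2^attempt, capped at max.
    base.saturating_mul(1u32 << attempt.min(16)).min(max)
}

fn deliver_with_retry(mut send: impl FnMut() -> u16, max_attempts: u32) -> Result<u16, u16> {
    let base = Duration::from_millis(100);
    let max = Duration::from_secs(10);
    for attempt in 0..max_attempts {
        let status = send();
        if (200..300).contains(&status) {
            return Ok(status);
        }
        if !is_transient(status) {
            return Err(status); // permanent failure: do not retry
        }
        let _delay = backoff_delay(attempt, base, max);
        // A real async sink would sleep for `_delay` here before retrying.
    }
    Err(0) // sentinel: retries exhausted
}

fn main() {
    // Succeeds on the third try after two transient 503s.
    let mut responses = vec![503, 503, 200].into_iter();
    assert_eq!(deliver_with_retry(|| responses.next().unwrap(), 5), Ok(200));

    // A 404 is permanent: fail immediately, no retries.
    assert_eq!(deliver_with_retry(|| 404, 5), Err(404));

    // Delays double each attempt: 100ms, 200ms, 400ms, 800ms, ...
    assert_eq!(
        backoff_delay(3, Duration::from_millis(100), Duration::from_secs(10)),
        Duration::from_millis(800)
    );
}
```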
During our analysis of the runtime and existing sinks, we also identified two issues in the connector runtime that affect all sinks — details in the attached document under "Runtime Issues Discovered During Analysis." Happy to file these separately and contribute fixes if the team agrees they should be addressed.
We'd love your feedback on the design before we start implementation. A few specific questions:
Looking forward to your feedback.
IDEA-010-iggy-http-sink-github-discussion.md