# link-sh

link-sh is a Bun + TypeScript URL shortener system with a separate click-aggregation pipeline.
It currently includes:
- Short link creation
- Redirect handling
- Redis caching and negative caching
- Redirect rate limiting
- Kafka-based click event ingestion
- Background analytics aggregation into Postgres
- OpenTelemetry metrics, traces, and log shipping
- Docker-based local development and full-stack runs
## Table of Contents

- Project Summary
- What Is Implemented
- Architecture
- Repository Layout
- How To Run
- API Endpoints
- Configuration
- Database Schema
- Observability
- Operational Notes
- How To Extend This README
## Project Summary

This repository is split into two application services:

- `link-redirect`: handles link creation, short-code resolution, caching, rate limiting, health checks, and Kafka event publishing
- `aggregation`: consumes click events from Kafka and writes aggregated analytics into Postgres
Supporting infrastructure is provided through Docker Compose:
- Postgres
- Redis
- Kafka
- OpenTelemetry Collector
- Prometheus
- Loki
- Tempo
- Grafana
## What Is Implemented

### link-redirect service

- `POST /links` creates a new short link from a valid `longUrl`
- `GET /:shortCode` resolves and redirects to the original URL
- `GET /health` checks Postgres and Redis connectivity
- Short codes are generated with `nanoid` using a 7-character alphabet
- Link creation retries on unique short-code collisions
- Newly created links are written into Redis immediately to warm the cache
- Redirects use Redis as the first lookup layer
- Missing short codes are stored in Redis with a negative-cache sentinel to reduce repeated DB misses
- Cache stampede protection is implemented with a per-key Redis lock
- Redirect requests are rate limited by client IP
- Click events are pushed to Kafka for asynchronous analytics processing
- Redirect and create request metrics are recorded with OpenTelemetry
### aggregation service

- Kafka topic: `link.clicks`
- Click events are consumed in batches
- Aggregation writes are stored in Postgres for:
- total clicks per link
- hourly clicks per link
- clicks by country
- clicks by device type
- Country is derived with `geoip-lite`
- Device type is derived from the user agent with `ua-parser-js`
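A minimal sketch of this enrichment step, with the `geoip-lite` and `ua-parser-js` calls injected as callbacks so the fallback logic stands alone. The event shape and all names here are illustrative, not the service's actual identifiers:

```typescript
// Sketch of click-event enrichment as done by the aggregation consumer.
// The real service uses geoip-lite and ua-parser-js; the callbacks below
// stand in for geoip.lookup(ip)?.country and new UAParser(ua).getDevice().type.
interface ClickEvent { ip: string; userAgent: string }
interface EnrichedClick { country: string; deviceType: string }

function enrichClick(
  event: ClickEvent,
  countryOf: (ip: string) => string | undefined,
  deviceOf: (ua: string) => string | undefined,
): EnrichedClick {
  return {
    country: countryOf(event.ip) ?? "unknown",
    // ua-parser-js leaves device.type undefined for ordinary desktop
    // browsers, so treat the absent case as "desktop".
    deviceType: deviceOf(event.userAgent) ?? "desktop",
  };
}
```

Injecting the lookups also keeps the grouping logic unit-testable without GeoIP data files.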
### Observability

- OpenTelemetry traces exported to Tempo through the collector
- OpenTelemetry metrics exported to Prometheus through the collector
- Container logs shipped to Loki through the collector
- Grafana is included in the Docker stack for visualization
### Known gaps

- `links.expires_at` exists in the database schema
- Expiry enforcement is not currently applied in the redirect flow
- There is no public analytics read API yet; analytics are written to database tables only
## Architecture

### Link creation flow

1. Client sends `POST /links`
2. Service validates `longUrl`
3. Service generates a short code and inserts it into Postgres
4. Service warms Redis with `link:{shortCode} -> longUrl`
5. Service returns the final short URL
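The collision-retry part of this flow can be sketched generically. The generator and insert step are injected; in the real service the generator would come from `nanoid`'s `customAlphabet`, and all names here are illustrative:

```typescript
// Sketch of collision-tolerant short-code creation. The insert callback
// returns false when the code already exists (a unique-constraint
// violation in Postgres) so the loop can retry with a fresh code.
type Insert = (shortCode: string) => Promise<boolean>;

async function createShortCode(
  generate: () => string, // e.g. customAlphabet(ALPHABET, 7) from nanoid
  insert: Insert,
  maxAttempts = 5,
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const code = generate();
    if (await insert(code)) return code; // inserted without collision
  }
  throw new Error(`could not allocate a unique short code after ${maxAttempts} attempts`);
}
```

Bounding the retries keeps a pathological collision streak from looping forever.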
### Redirect flow

1. Client requests `GET /:shortCode`
2. Service applies IP-based rate limiting
3. Service checks Redis
4. On a cache miss, the service takes a per-key Redis lock to avoid a stampede on the same key
5. Service reads Postgres when needed
6. Service stores either the real URL or a negative-cache sentinel in Redis
7. Service publishes a click event to Kafka
8. Service responds with an HTTP redirect
### Analytics pipeline

1. Redirect service publishes click events to Kafka
2. Aggregation service consumes Kafka batches
3. Events are grouped in memory by total, hour, country, and device
4. Aggregated counters are flushed to Postgres in a transaction
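The in-memory grouping step can be sketched as a pure function over a batch; the event shape and the composite-key encodings here are illustrative:

```typescript
// Sketch of batch aggregation: one counter map per dimension, keyed on
// composite strings, matching the four tables the flush step writes.
interface Click { shortCode: string; ts: Date; country: string; deviceType: string }

function aggregateBatch(events: Click[]) {
  const totals = new Map<string, number>();    // shortCode
  const hourly = new Map<string, number>();    // shortCode|YYYY-MM-DD|hour
  const byCountry = new Map<string, number>(); // shortCode|country
  const byDevice = new Map<string, number>();  // shortCode|deviceType
  const bump = (m: Map<string, number>, k: string) => m.set(k, (m.get(k) ?? 0) + 1);

  for (const e of events) {
    const date = e.ts.toISOString().slice(0, 10); // UTC calendar date
    bump(totals, e.shortCode);
    bump(hourly, `${e.shortCode}|${date}|${e.ts.getUTCHours()}`);
    bump(byCountry, `${e.shortCode}|${e.country}`);
    bump(byDevice, `${e.shortCode}|${e.deviceType}`);
  }
  return { totals, hourly, byCountry, byDevice };
}
```

Collapsing a batch to counters first means the transactional flush issues one upsert per distinct key rather than one write per event.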
## Repository Layout

```
.
|-- services/
|   |-- link-redirect/   # Fastify redirect + creation service
|   |-- aggregation/     # Kafka consumer and analytics writer
|   `-- shared/          # Shared types
|-- infra/docker/        # Dockerfiles, compose files, OTEL/Prometheus config
|-- migrations/          # Postgres schema migrations
|-- package.json         # Workspace scripts
`-- README.md
```
## How To Run

### Prerequisites

- Bun
- Docker and Docker Compose

Install dependencies:

```
bun install
```

### Full development stack

This starts infra plus both application services in watch mode:

```
docker compose -f infra/docker/docker-compose.dev.yml -f infra/docker/docker-compose.dev.dev2.yml up -d --force-recreate
```

Useful endpoints after startup:

- Redirect service: http://localhost:3000
- Grafana: http://localhost:3001
- Prometheus: http://localhost:9090
- Loki: http://localhost:3100
- Tempo: http://localhost:3200

To stop it:

```
docker compose -f infra/docker/docker-compose.dev.yml -f infra/docker/docker-compose.dev.dev2.yml down
```

### Running the services locally

Start shared dependencies:

```
docker compose -f infra/docker/docker-compose.dev.yml up -d
```

Run migrations:

```
bun run migrate:up
```

Set environment variables for the redirect service (PowerShell):
```powershell
$env:NODE_ENV="development"
$env:PORT="3000"
$env:BASE_URL="http://localhost:3000"
$env:DATABASE_URL="postgres://postgres:postgres@localhost:5432/links"
$env:REDIS_URL="redis://localhost:6379"
$env:KAFKA_BROKERS="localhost:9092"
$env:OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
$env:LOG_LEVEL="info"
```

Start the redirect service:
```
bun run dev:redirect
```

In another terminal, set environment variables for the aggregation service (PowerShell):

```powershell
$env:NODE_ENV="development"
$env:DATABASE_URL="postgres://postgres:postgres@localhost:5432/links"
$env:KAFKA_BROKERS="localhost:9092"
$env:OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
$env:LOG_LEVEL="info"
$env:TOPIC="link.clicks"
```

Start the aggregation service:

```
bun run dev:aggregator
```

To stop infra:

```
docker compose -f infra/docker/docker-compose.dev.yml down
```

### Full-stack Docker run

```
docker compose -f infra/docker/docker-compose.yml up -d --build
```

To stop it:
```
docker compose -f infra/docker/docker-compose.yml down
```

### Migrations

Create a new migration:

```
bun run migrate:create <migration_name>
```

Apply migrations:

```
bun run migrate:up
```

Roll back the latest migration:

```
bun run migrate:down
```

## API Endpoints

### POST /links
Request:
```json
{
  "longUrl": "https://example.com/some/very/long/path"
}
```

Success response:

```json
{
  "shortUrl": "http://localhost:3000/abc123X"
}
```

Possible responses:

- `201 Created`
- `400 Bad Request` for invalid URLs
- `500 Internal Server Error`

Example:

```
curl -X POST http://localhost:3000/links -H "Content-Type: application/json" -d "{\"longUrl\":\"https://example.com\"}"
```

### GET /:shortCode
Possible responses:
- `302` or framework redirect response to the original URL
- `404 Not Found` when the short code does not exist
- `429 Too Many Requests` when the IP rate limit is exceeded
- `500 Internal Server Error`

Example:

```
curl -i http://localhost:3000/abc123X
```

### GET /health
Checks:
- Postgres connectivity
- Redis connectivity
Example:
```
curl http://localhost:3000/health
```

### GET /otel-test
This route exists for manual trace validation.
## Configuration

### link-redirect

Required:

- `DATABASE_URL`
- `REDIS_URL`
- `BASE_URL`
- `KAFKA_BROKERS`

Optional with defaults:

- `NODE_ENV=development`
- `PORT=3000`
- `CACHE_TTL_SECONDS=3600`
- `NEGATIVE_CACHE_TTL_SECONDS=30`
- `CACHE_LOCK_TTL_SECONDS=5`
- `CACHE_WAIT_MS=50`
- `CACHE_WAIT_RETRIES=20`
- `KAFKA_BATCH_SIZE=100`
- `KAFKA_BATCH_MAX_WAIT_MS=25`
- `KAFKA_MAX_BUFFERED_MESSAGES=10000`
- `KAFKA_PRODUCER_RETRIES=8`
- `KAFKA_PRODUCER_RETRY_INITIAL_MS=300`
- `KAFKA_PRODUCER_RETRY_MAX_MS=30000`
- `KAFKA_PRODUCER_ACK_TIMEOUT_MS=30000`
- `LOG_LEVEL=info`
- `OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318`
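As a sketch, optional numeric settings like these can be read with a small defaulting helper; the helper name is illustrative, while the variable names and defaults match the list above:

```typescript
// Read a non-negative integer from the environment, falling back to a
// default when the variable is unset, and failing fast when it is
// malformed rather than silently running with a wrong value.
function intFromEnv(name: string, fallback: number): number {
  const raw = process.env[name];
  if (raw === undefined || raw === "") return fallback;
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 0) throw new Error(`invalid ${name}: ${raw}`);
  return n;
}

// Example usage with the cache-related settings listed above.
const cacheConfig = {
  cacheTtlSeconds: intFromEnv("CACHE_TTL_SECONDS", 3600),
  negativeCacheTtlSeconds: intFromEnv("NEGATIVE_CACHE_TTL_SECONDS", 30),
  cacheLockTtlSeconds: intFromEnv("CACHE_LOCK_TTL_SECONDS", 5),
};
```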
### aggregation

Required:

- `DATABASE_URL`
- `KAFKA_BROKERS`

Optional with defaults:

- `NODE_ENV=development`
- `TOPIC=link.clicks`
- `LOG_LEVEL=info`
- `OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318`
## Database Schema

### links

Stores the source mapping for each short code.

Columns currently added by migrations:

- `id`
- `short_code`
- `long_url`
- `created_at`
- `expires_at`
- `click_count`
### Hourly clicks

Aggregated hourly click counts per short code.

Key: `(short_code, date, hour)`
### Clicks by country

Aggregated click counts per country.

Key: `(short_code, country)`
### Clicks by device type

Aggregated click counts by device type.

Key: `(short_code, device_type)`
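The transactional flush can target these keys with conflict-target upserts. A sketch for the hourly table follows; the table name `link_clicks_hourly` and the `clicks` column are assumptions for illustration (the README does not list the actual names), while the conflict target matches the `(short_code, date, hour)` key above:

```typescript
// Build a parameterized Postgres upsert that adds a batch's counter to
// the existing row, or inserts a new row when the key is unseen.
function hourlyUpsert(shortCode: string, date: string, hour: number, clicks: number) {
  const text = `
    INSERT INTO link_clicks_hourly (short_code, date, hour, clicks)
    VALUES ($1, $2, $3, $4)
    ON CONFLICT (short_code, date, hour)
    DO UPDATE SET clicks = link_clicks_hourly.clicks + EXCLUDED.clicks`;
  return { text, values: [shortCode, date, hour, clicks] as const };
}
```

Returning `{ text, values }` matches the query shape most Postgres clients accept, and keeps the SQL itself testable without a database.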
## Observability

The Docker stack includes:

- OpenTelemetry Collector
- Prometheus
- Loki
- Tempo
- Grafana
Implemented in the redirect service:
- `redirect_requests_total`: final outcomes `redirect`, `not_found`, `rate_limited`, `error`
- `create_requests_total`: final outcomes `created`, `invalid_url`, `error`
- `rate_limit_checks_total`: final results `allowed`, `blocked`
- `redis_lookups_total`: final results `hit`, `miss`, `negative_hit`
- `request_duration_ms`: labels include route, method, and outcome
OTel counters commonly appear in Prometheus with an additional `_total` suffix.
Examples:
- `redirect_requests_total_total`
- `create_requests_total_total`
- `rate_limit_checks_total_total`
- `redis_lookups_total_total`
- `request_duration_ms_bucket`
- `request_duration_ms_sum`
- `request_duration_ms_count`
Useful PromQL queries:

```
sum by (outcome) (redirect_requests_total_total)
sum by (outcome) (create_requests_total_total)
sum by (result) (rate_limit_checks_total_total)
sum by (result) (redis_lookups_total_total)
histogram_quantile(0.95, sum(rate(request_duration_ms_bucket[5m])) by (le, route))
```
## Operational Notes

- `link.clicks` is created automatically by the `kafka-init` container
- In Docker development mode, `workspace-install` installs workspace dependencies once before app containers start
- In Docker development mode, source code is bind-mounted, so code changes do not require image rebuilds
- Use `--build` when Dockerfiles or copied image content changes

Verify that the Kafka topic exists:

```
docker compose -f infra/docker/docker-compose.dev.yml -f infra/docker/docker-compose.dev.dev2.yml exec -T kafka bash -lc "/opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --list"
```

## How To Extend This README

When new work is added:
- Add new product capabilities under **What Is Implemented**
- Add new request flows under **Architecture**
- Add new commands under **How To Run**
- Add new endpoints under **API Endpoints**
- Add new env vars under **Configuration**
- Add new tables under **Database Schema**
- Add new telemetry or dashboards under **Observability**