Summary
Creating a proxy host (via UI or REST API) succeeds in the database and shows green "Online" status in the UI, but no [Nginx] Building proxy host #N for: ... log entry is emitted and no .conf file is written to /data/nginx/proxy_host/. The proxy therefore never routes traffic and clients receive tlsv1 unrecognized name SNI alerts.
Only a [Nginx] Reloading Nginx log entry follows the save — the config-generation step in between is silently skipped, without any error.
Same behavior on :latest and :2.14.0 with completely fresh data directories. Cert uploads and Access List generation on the same instance work normally — only proxy host generation is affected.
Strongly suspected root cause: Docker Engine 29 with the new default containerd image store. Details under "Environment / Hypothesis" below.
Nginx Proxy Manager Version
Reproduced on both:
- jc21/nginx-proxy-manager:latest (image built 2026-02-17)
- jc21/nginx-proxy-manager:2.14.0 (image built 2026-02-17)
No newer release exists at the time of writing.
To Reproduce
Minimal compose:
services:
  npm:
    image: jc21/nginx-proxy-manager:2.14.0
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - "8080:80"
      - "8443:443"
      - "81:81"
    volumes:
      - /opt/docker/npm/data:/data
      - /opt/docker/npm/letsencrypt:/etc/letsencrypt
    environment:
      DISABLE_IPV6: 'true'
    extra_hosts:
      - "host.docker.internal:host-gateway"
Steps:
- Deploy with empty /opt/docker/npm/data and /opt/docker/npm/letsencrypt
- Log in with admin@example.com / changeme, set a new admin user
- Hosts → Proxy Hosts → Add Proxy Host with the minimum:
  - Domain Names: test.example.com
  - Scheme: http
  - Forward Hostname/IP: host.docker.internal
  - Forward Port: 9000 (or any reachable upstream)
  - All checkboxes off, no SSL cert, no Access List, no Advanced config
- Save
Same result via the REST API:
TOKEN=$(curl -s -X POST http://localhost:81/api/tokens \
  -H "Content-Type: application/json" \
  -d '{"identity":"admin@example.com","secret":"NEW_PASSWORD"}' \
  | jq -r .token)

curl -X POST http://localhost:81/api/nginx/proxy-hosts \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "domain_names": ["test.example.com"],
    "forward_scheme": "http",
    "forward_host": "host.docker.internal",
    "forward_port": 9000,
    "access_list_id": 0, "certificate_id": 0,
    "ssl_forced": false, "caching_enabled": false,
    "block_exploits": false, "advanced_config": "",
    "meta": {}, "allow_websocket_upgrade": false,
    "http2_support": false, "hsts_enabled": false,
    "hsts_subdomains": false, "locations": []
  }'
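For completeness, the orphaned record can be read back over the same API while the config directory stays empty. A quick check sketch (assumes `$TOKEN` from the login call above, `jq` on the host, and the container name from the compose file):

```shell
# List the saved proxy hosts: the record created above comes back,
# confirming the DB write succeeded (assumes $TOKEN from the login call).
curl -s http://localhost:81/api/nginx/proxy-hosts \
  -H "Authorization: Bearer $TOKEN" | jq '.[] | {id, domain_names}'

# ...while the generated-config directory stays empty:
docker exec nginx-proxy-manager sh -c 'ls /data/nginx/proxy_host/ 2>/dev/null'
```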
API returns HTTP 201 with the full proxy host object. UI lists it as Online. But no config file is generated.
Expected behavior
Container logs should show:
[Nginx] Building proxy host #1 for: test.example.com
[Nginx] Reloading Nginx
/data/nginx/proxy_host/1.conf should exist.
Actual behavior
Container logs only show:
[Nginx] info Reloading Nginx
The Building proxy host line that normally precedes the reload is missing entirely. /data/nginx/proxy_host/ remains empty:
$ docker exec nginx-proxy-manager ls -la /data/nginx/proxy_host/
total 8
drwxr-xr-x 2 root root 4096 May 12 16:52 .
drwxr-xr-x 9 root root 4096 May 12 16:43 ..
nginx -T inside the container confirms only NPM's built-in defaults are loaded — no user-created proxy host configs.
No errors are logged anywhere:
$ docker logs nginx-proxy-manager 2>&1 | grep -iE "error|fail|denied|exception"
(no output)
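The client-facing symptom from the summary can be reproduced directly against the broken instance. A sketch, assuming the compose port mapping above (HTTPS on host port 8443):

```shell
# With no server block generated for the domain, the TLS handshake for its
# SNI name fails; per the summary, clients see an "unrecognized name" alert.
openssl s_client -connect localhost:8443 \
  -servername test.example.com </dev/null 2>&1 | grep -i "alert"
```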
Diagnostic log sequence
Container startup is clean — all migrations succeed, including the recent trust_forwarded_proto:
[Migrate] info [initial-schema] Migrating Up...
... (all migrations OK) ...
[Migrate] info [trust_forwarded_proto] proxy_host Table altered
[Setup] info Default settings added
[Global] info Backend PID 182 listening on port 3000 ...
Custom Certificate upload works (writes to disk, log entry present):
[SSL] info Writing Custom Certificate: { id: 1, ... }
Access List creation works (writes file, log entry present, reload follows):
[Access] info Building Access file #1 for: intern-only
[Nginx] info Reloading Nginx
Proxy host save fails silently (only the reload, no Building entry):
[Nginx] info Reloading Nginx
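This ordering can be checked mechanically: every reload triggered by a host save should be directly preceded by a Building entry. Below is a minimal sketch that flags any reload without one; it runs against inline sample lines mirroring the logs above (in practice, feed it the output of `docker logs nginx-proxy-manager` instead):

```shell
#!/bin/sh
# Sketch: flag any "Reloading Nginx" log entry that is not directly
# preceded by a "Building" entry -- the pattern observed for proxy host
# saves. Sample lines below mirror the diagnostic sequence above.
cat > /tmp/npm-sample.log <<'EOF'
[Access] info Building Access file #1 for: intern-only
[Nginx] info Reloading Nginx
[Nginx] info Reloading Nginx
EOF

suspects=0
prev=""
while IFS= read -r line; do
  case "$line" in
    *"Reloading Nginx"*)
      case "$prev" in
        *Building*) echo "OK:      $line" ;;
        *) suspects=$((suspects + 1))
           echo "SUSPECT: $line (no Building entry before it)" ;;
      esac ;;
  esac
  prev="$line"
done < /tmp/npm-sample.log
echo "reloads without a preceding build step: $suspects"
# -> reloads without a preceding build step: 1
```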
What I ruled out
| Hypothesis | Test | Result |
|---|---|---|
| File permissions | docker exec npm touch /data/nginx/proxy_host/test.conf | Succeeds |
| Template corruption | cat /app/templates/proxy_host.conf | Intact, 1235 bytes |
| Container user mismatch | docker exec npm id | uid=0(root) |
| UI-only bug | API call via curl | Same silent failure |
| Custom Locations / Advanced config | Minimal host with everything off | Same silent failure |
| SSL cert / Access List interaction | Minimal host with no SSL, no AL | Same silent failure |
| NPM version | Tested :latest and :2.14.0 on fresh data | Same on both |
| Disk space | df -h /data | Plenty free |
| Parallel host nginx | (still to verify by stopping host nginx) | – |
Environment / Hypothesis
- Docker Engine: 29.4.2 (April 2026), API 1.54
- Docker Compose: v5.1.3
- OS: Ubuntu Server 24
- Architecture: x86_64
- Deployed via Portainer Stack
- Storage driver: default containerd snapshotter (Docker 29's new default)
Hypothesis: Docker Engine 29.0 (released March 2026) made the containerd image store the default for new installations. The NPM 2.14.0 image was built on 2026-02-17, i.e. before Docker 29 GA, so it has never been tested against the new default snapshotter.
The symptom — DB writes succeed, file writes to /data succeed for certificates and access lists, but the specific code path that generates /data/nginx/proxy_host/N.conf silently completes without writing or logging — is consistent with a subtle filesystem-semantics difference between the legacy graphdriver and the containerd snapshotter when running NPM's Node.js template-rendering code.
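As a preliminary, which image store the daemon actually uses can be confirmed from `docker info`; on containerd-store installs, recent Docker versions report the storage driver with a `driver-type` line of `io.containerd.snapshotter.v1` beneath it:

```shell
# Show the active storage driver; with the containerd image store enabled,
# a driver-type of io.containerd.snapshotter.v1 appears under it.
docker info --format '{{ .Driver }}'
docker info 2>/dev/null | grep -A 1 'Storage Driver'
```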
Proposed test for maintainers to confirm:
Set /etc/docker/daemon.json to:
{
  "features": {
    "containerd-snapshotter": false
  }
}
Then sudo systemctl restart docker and re-run the reproducer. If proxy host generation works under the legacy graphdriver but fails under the containerd snapshotter, the cause is confirmed.
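A sketch of the full toggle, assuming daemon.json holds no other settings (merge manually if it does). Note that switching image stores hides previously pulled images and containers, so the stack must be re-pulled and recreated afterwards:

```shell
#!/bin/sh
# A/B test sketch: disable the containerd snapshotter, restart the daemon,
# and re-run the reproducer. Back up any existing daemon.json first.
[ -f /etc/docker/daemon.json ] && \
  sudo cp /etc/docker/daemon.json /etc/docker/daemon.json.bak
printf '{\n  "features": {\n    "containerd-snapshotter": false\n  }\n}\n' \
  | sudo tee /etc/docker/daemon.json >/dev/null
sudo systemctl restart docker

# NOTE: images pulled under the containerd store are not visible under the
# legacy graphdriver; recreate the stack with fresh data dirs, save a
# proxy host again, then check:
docker exec nginx-proxy-manager ls -la /data/nginx/proxy_host/
```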
Additional notes
- The "Online" status badge in the UI is misleading: it only checks upstream reachability, not whether the proxy itself is routing traffic.
- The database insert succeeds (host appears in UI and is returned by GET /api/nginx/proxy-hosts).
- The certificate writing step succeeds (/data/custom_ssl/... is populated).
- The access list file generation succeeds (/data/access/... is populated).
- Only the proxy host config generation step (internalNginx.configure(model, 'proxy_host'), or whatever the equivalent in the current codebase is) appears to be silently skipped without an error.
If helpful, I can also provide:
- Full container startup log
- Output of nginx -T inside the container
- SQLite DB dump showing the orphaned proxy host record
- docker info / docker system info output