
[2.7] Fix RxTask self-deadlock on stream error cleanup #4204

Merged
pcnudde merged 3 commits into NVIDIA:2.7 from pcnudde:codex/fix-rxtask-map-lock-deadlock on Feb 20, 2026

Conversation

@pcnudde (Collaborator) commented Feb 19, 2026

Summary

  • Avoid calling `RxTask.stop()` while holding `RxTask.map_lock` in `find_or_create_task`
  • Stop the existing task only after leaving the map lock to prevent self-deadlock
  • Add a regression unit test for the existing-task error path
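The deferred-stop pattern described above can be sketched as follows. This is a minimal illustrative model, not the actual `byte_receiver.py` code: the class shape, the task map, and the method signatures are simplified assumptions. The key point is that `stop()` re-acquires the same non-reentrant lock, so it must only be called after the critical section exits.

```python
import threading


class RxTask:
    """Minimal sketch of the deferred-stop pattern (illustrative names,
    not the actual NVFlare implementation)."""

    map_lock = threading.Lock()  # non-reentrant: same-thread re-acquire hangs
    task_map = {}

    def __init__(self, sid):
        self.sid = sid

    def stop(self):
        # stop() re-acquires map_lock to remove itself from the map,
        # which is why it must never be called while the lock is held.
        with RxTask.map_lock:
            RxTask.task_map.pop(self.sid, None)

    @classmethod
    def find_or_create_task(cls, sid, is_error=False):
        task_to_stop = None
        with cls.map_lock:
            task = cls.task_map.get(sid)
            if task is None:
                if is_error:
                    return None
                task = cls(sid)
                cls.task_map[sid] = task
                return task
            if not is_error:
                return task
            # The fix: only remember the task while holding the lock.
            task_to_stop = task
        # Outside the critical section, stop() can safely take map_lock.
        task_to_stop.stop()
        return None
```

Calling `task.stop()` directly inside the `with cls.map_lock:` block, as the pre-fix code effectively did, would block forever on the second `acquire` of the same `threading.Lock`.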

greptile-apps bot (Contributor) commented Feb 19, 2026

Greptile Summary

Fixes a critical self-deadlock bug in RxTask.find_or_create_task() where calling task.stop() while holding map_lock caused the thread to deadlock when stop() tried to re-acquire the same lock.

Changes:

  • Defers task.stop() call until after releasing map_lock by storing task reference and calling it outside the critical section
  • Adds regression test with custom _DeadlockDetectingLock that raises on same-thread re-acquire
  • Verifies fix by confirming stop() is called with map_lock unlocked
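A lock wrapper in the spirit of the test's `_DeadlockDetectingLock` could look like the following. This is a hypothetical sketch, not the test's actual code: it turns the would-be hang into an immediate exception by raising when the owning thread tries to re-acquire.

```python
import threading


class DeadlockDetectingLock:
    """Hypothetical sketch of a lock that raises on same-thread
    re-acquire instead of deadlocking, so a test can assert the
    buggy pattern fails fast."""

    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None  # thread ident of the current holder

    def acquire(self):
        if self._owner == threading.get_ident():
            raise RuntimeError("same-thread re-acquire: would self-deadlock")
        self._lock.acquire()
        self._owner = threading.get_ident()

    def release(self):
        self._owner = None
        self._lock.release()

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()

    def locked(self):
        return self._lock.locked()
```

Substituting such a lock for `map_lock` makes the regression test deterministic: the pre-fix code path raises instead of hanging, and the fixed path runs to completion with the lock released before `stop()` is called.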

Impact:
This prevents production hangs when stream errors occur on existing tasks, particularly in the error cleanup path for duplicate or failed streams.

Confidence Score: 5/5

  • Safe to merge - clean deadlock fix with excellent test coverage
  • The fix is minimal, correct, and follows proper locking discipline. Tests explicitly verify the deadlock scenario and confirm the fix works.
  • No files require special attention

Important Files Changed

| Filename | Overview |
|---|---|
| nvflare/fuel/f3/streaming/byte_receiver.py | Correctly fixes self-deadlock by deferring stop() call until after releasing map_lock |
| tests/unit_test/fuel/f3/streaming/byte_receiver_test.py | Comprehensive regression tests validating deadlock fix with custom lock detector |

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[find_or_create_task called] --> B[Acquire map_lock]
    B --> C{Task exists?}
    C -->|No| D{Error message?}
    D -->|Yes| E[Return None]
    D -->|No| F[Create new RxTask<br/>Add to map]
    C -->|Yes| G{Error message?}
    G -->|No| H[Release map_lock<br/>Return task]
    G -->|Yes| I[Store task_to_stop]
    I --> J[Release map_lock]
    J --> K[Call task_to_stop.stop]
    K --> L[stop acquires map_lock<br/>Removes from map<br/>✓ No deadlock]
    L --> M[Return None]
    F --> H
    E --> N[End]
    H --> N
    M --> N
    
    style K fill:#90EE90
    style L fill:#90EE90
    style I fill:#90EE90

Last reviewed commit: 635575e

greptile-apps bot (Contributor) left a comment

2 files reviewed, no comments

@pcnudde (Collaborator, Author) commented Feb 19, 2026

/build

@chesterxgchen (Collaborator) left a comment:

Suggest updating the tests.

greptile-apps bot (Contributor) left a comment

2 files reviewed, no comments

@pcnudde (Collaborator, Author) commented Feb 20, 2026

/build

greptile-apps bot (Contributor) left a comment

2 files reviewed, no comments

@chesterxgchen (Collaborator) commented

/build

@pcnudde pcnudde enabled auto-merge (squash) February 20, 2026 05:18
@pcnudde pcnudde merged commit ce04e38 into NVIDIA:2.7 Feb 20, 2026
19 checks passed
chesterxgchen pushed a commit to chesterxgchen/NVFlare that referenced this pull request Feb 20, 2026
## Summary
- avoid calling `RxTask.stop()` while holding `RxTask.map_lock` in
`find_or_create_task`
- stop the existing task only after leaving the map lock to prevent
self-deadlock
- add a regression unit test for the existing-task error path
chesterxgchen added a commit to chesterxgchen/NVFlare that referenced this pull request Feb 22, 2026
…ent, hierarchical startup stability

Add three major new sections to flare_272.rst covering work merged after the
initial 2.7.2 draft:

Memory Management (restructured):
- Zero Tensor Copy at CJ process via LazyDownloadRef pass-through (PR NVIDIA#4210)
- Client-side memory management: malloc_trim, jemalloc, torch.cuda.empty_cache
  injected after flare.send() without training script changes (PR NVIDIA#4211)
- Retain existing TensorDownloader and server-side cleanup content

F3 Streaming Reliability and Performance (new section):
- HOL stall mitigation: bounded send_frame() timeout, ACK watchdog, stall
  detection/recovery with recommended env-var settings (PR NVIDIA#4206)
- Stream pool starvation fix: blob callbacks dispatched to dedicated thread
  pool, preventing stream worker exhaustion (PR NVIDIA#4171/NVIDIA#4172)
- Streaming download retry with exponential backoff on timeout (PR NVIDIA#4167)
- RxTask self-deadlock fix: stop() deferred until after map_lock released (PR NVIDIA#4204)
- Lock contention reduction in produce_item() for concurrent model downloads (PR NVIDIA#4174)

Hierarchical FL Startup Stability (new section):
- Deployment timeout correctly classified as failure; min_sites check applied
  at deployment phase (PR NVIDIA#4209)
- Startup grace period for dead-client detection (debounce default=true) (PR NVIDIA#4209)
- Selective client exclusion on start-job timeout instead of full abort (PR NVIDIA#4209)
- Hardened job metadata parsing: TypeError replaced with descriptive RuntimeError (PR NVIDIA#4209)
- Recommended config snippets for HPC/Lustre environments (Frontier/ORNL scale)

Bug Fixes section updated with all streaming and hierarchical startup fixes.
Intro paragraph updated to reflect system hardening scope.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
pcnudde added a commit that referenced this pull request Feb 22, 2026
…eanup (#4213)

## Summary
- avoid calling `RxTask.stop()` while holding `RxTask.map_lock` in
`find_or_create_task`
- stop the existing task only after leaving the map lock to prevent
self-deadlock
- include regression coverage for both the pre-fix deadlock path and the
fixed path


Related: #4204 (2.7 branch)
chesterxgchen pushed a commit to chesterxgchen/NVFlare that referenced this pull request Feb 24, 2026
chesterxgchen added a commit that referenced this pull request Feb 24, 2026
…ctive client exclusion, and dead-detection debounce (#4209)

## Problem

Large-scale hierarchical FL jobs (e.g. BERT NER, 144 clients, 6 relays
on Frontier) abort in
Round 0 due to a cascading startup failure chain.  The root sequence is:

1. F3 streaming HOL stall (PR #4206) delays deployment ACKs from
relay-connected clients
2. **`_deploy_job()`** treats `reply=None` (timeout) as `"unknown"` —
not a failure — so
   timed-out clients silently appear to have been deployed
3. **`_start_run()`** tries to start those clients; they again time out,
and
   `check_client_replies()` ignores the `None` reply
4. **`_sync_client_jobs()`** fires dead-job notification on the very
first heartbeat with
   no startup grace period
5. FedAvg requires 144/144 — one or two missing clients → abort
6. A late-starting CJ crashes with `TypeError: 'NoneType' object is not
iterable` when
`get_job_clients()` receives `None` metadata from an already-aborted job

PRs #4206, #4204, #4174, #4172, #4186, #4211, #4210 (all merged in
2.7.2) address the
transport layer.  This PR addresses the remaining job lifecycle layer.

---

## Fixes Included

### 1 — `_deploy_job()`: Treat deployment timeout as failure (`job_runner.py`)

**Root bug**: `reply=None` was logged as `"unknown"` and excluded from
`failed_clients`,
so timed-out clients counted as "successfully deployed" for the
`min_sites` check.

**Fix**: Add timed-out clients to `failed_clients` with a `"deployment
timeout"` label.
The existing `min_sites` / `required_sites` logic then correctly decides
whether to abort.

### 2 — `check_client_replies()`: Return timed-out clients instead of raising (`admin.py`)

**Root bug**: In strict mode, any timeout raised immediately, aborting
the whole job even
when the remaining active clients satisfied `min_sites`.

**Fix**: In strict mode, collect timed-out clients into a return list
rather than raising.
Explicit errors (non-OK return code or error body) still raise. Also
fixes the non-strict
mode to use name-keyed dict lookup instead of fragile positional
`zip()`.

New signature: `check_client_replies(...) -> List[str]` (timed-out
client names; empty = none).
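The return-instead-of-raise behaviour can be sketched as below. The function name matches the description, but the reply shape (a name-keyed dict with a `"return_code"` field) and the parameters are simplified assumptions, not the actual `admin.py` signature.

```python
from typing import Dict, List, Optional


def check_client_replies(
    replies: Dict[str, Optional[dict]], client_names: List[str]
) -> List[str]:
    """Return names of timed-out clients (reply is None); raise only on
    explicit errors. An empty list means every client answered OK."""
    timed_out = []
    # Name-keyed lookup instead of fragile positional zip() pairing.
    by_name = {name: replies.get(name) for name in client_names}
    for name, reply in by_name.items():
        if reply is None:
            timed_out.append(name)  # collect, do not raise
        elif reply.get("return_code") != "OK":
            raise RuntimeError(f"client {name} returned error: {reply}")
    return timed_out
```

The caller can then decide, against `min_sites`, whether the timed-out set is tolerable rather than having the whole job aborted by an exception.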

### 3 — `_start_run()`: Selective exclusion with min_sites re-evaluation (`job_runner.py`)

**Root bug**: A start-job timeout under strict mode aborted the entire
job with no
tolerance for stragglers within `min_sites` bounds.

**Fix**: Use the returned timed-out list from `check_client_replies()`.
If remaining
active clients >= `min_sites`, log a warning and proceed. Only abort
when below tolerance.

### 4 — `_sync_client_jobs()`: Require-prior-report default changed to `True` (`fed_server.py`)

**Root bug**: `SYNC_CLIENT_JOBS_REQUIRE_PREVIOUS_REPORT` defaulted to
`False`, meaning
the bug fix was opt-in and the unsafe behaviour remained the default.

**Fix**: Default changed to `True`. Operators who want the aggressive
legacy detection
can set it to `False` explicitly.

### 5 — `_sync_client_jobs()`: Move `_reported_clients` out of `job_info` dict (`fed_server.py`)

**Root bug**: Positive-observation tracking was stored as
`job_info["_reported_clients"]`,
injecting algorithm state into a data dict with no corresponding
`RunProcessKey` constant.

**Fix**: Tracking moved to `self._job_reported_clients: Dict[str, set]`
on `FederatedServer`.
Stale entries are purged whenever a job is no longer in `run_processes`.
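The tracking-plus-purge behaviour can be modelled like this. The class below is a hypothetical stand-in; the PR keeps the dict directly as `self._job_reported_clients` on `FederatedServer` rather than in a helper:

```python
class ReportedClientTracker:
    """Hypothetical sketch of per-job positive-observation tracking
    with stale-entry cleanup."""

    def __init__(self):
        # job_id -> set of client names observed at least once
        self.job_reported_clients = {}

    def record(self, job_id: str, client_name: str) -> None:
        self.job_reported_clients.setdefault(job_id, set()).add(client_name)

    def has_reported(self, job_id: str, client_name: str) -> bool:
        return client_name in self.job_reported_clients.get(job_id, set())

    def purge_stale(self, run_processes: dict) -> None:
        # Drop tracking for any job no longer in run_processes.
        for job_id in list(self.job_reported_clients):
            if job_id not in run_processes:
                del self.job_reported_clients[job_id]
```

Keeping this state on the server object rather than inside `job_info` avoids mixing algorithm state into a data dict that has no `RunProcessKey` constant for it.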

### 6 — `ClientRunManager.get_job_clients()`: Explicit meta validation (`client_run_manager.py`)

Raises `RuntimeError` with a descriptive message instead of an opaque
`TypeError` when
`JOB_CLIENTS` is absent or the wrong type.
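A minimal sketch of that validation, assuming the meta is a plain dict and `JOB_CLIENTS` maps to a list (both assumptions for illustration):

```python
from typing import List, Optional


def get_job_clients(meta: Optional[dict]) -> List[str]:
    """Raise a descriptive RuntimeError instead of the opaque TypeError
    that iterating None metadata used to produce."""
    clients = None if meta is None else meta.get("JOB_CLIENTS")
    if not isinstance(clients, list):
        raise RuntimeError(
            "job meta is missing a valid JOB_CLIENTS entry "
            f"(got {type(clients).__name__}); the job may already be aborted"
        )
    return clients
```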

---

## Configuration Recommendations (No Code Change Needed)

| Setting | Recommended value | Effect |
|---|---|---|
| `FedAvg(min_clients=...)` | 96-98% of `num_clients` | Tolerates a few startup stragglers |
| `runner_sync_timeout` | `120` s | Allows Lustre-backed deployments time to complete |
| `strict_start_job_reply_check` | `true` | Start-job timeouts surfaced, straggler clients excluded |
| `sync_client_jobs_require_previous_report` | `true` (now the default) | Prevents premature dead-job from startup delay |
| `SFM_CLOSE_STALLED_CONNECTION` (PR #4206) | `true` after staging | Disconnects stalled relay connections |

---

## Files Changed

- `nvflare/private/fed/server/job_runner.py` — `_deploy_job()` timeout
as failure; `_start_run()` selective exclusion
- `nvflare/private/fed/server/admin.py` — `check_client_replies()`
returns timed-out list; dict-keyed non-strict path
- `nvflare/private/fed/server/fed_server.py` — `_sync_client_jobs()`
default `True`; `_job_reported_clients` attr; stale cleanup
- `nvflare/private/fed/client/client_run_manager.py` — explicit meta
validation in `get_job_clients()`

---

## Test Coverage

New and updated unit tests with both positive and negative cases:

| File | Tests | What they cover |
|---|---|---|
| `admin_test.py` | 8 | Timeout returned not raised; dict lookup; error still raises; reorder OK |
| `job_runner_test.py` | 4 | strict flag wiring; timeout within tolerance → warn; timeout below tolerance → raise |
| `job_runner_deploy_test.py` | 9 (new file) | Timeout counted as failure; OK reply not failed; mixed outcomes; detail label; min_sites with timeouts; integration sequence |
| `fed_server_test.py` | 5 | Default requires-prior-report; legacy explicit-False still fires; tracking in server attr not job_info; stale cleanup |

All 29 targeted unit tests pass.

## Test Plan

- [x] Unit tests for each changed function (positive + negative)
- [x] New `job_runner_deploy_test.py` covering deployment timeout
classification end-to-end
- [x] All 29 targeted unit tests pass
- [ ] Hierarchical staging run with all flags at default
- [ ] Hierarchical staging run with `strict_start_job_reply_check=true`
and reduced `min_clients`
- [ ] Verify no regression on standard (non-hierarchical) FL jobs

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
chesterxgchen added a commit to chesterxgchen/NVFlare that referenced this pull request Feb 24, 2026
…ctive client exclusion, and dead-detection debounce (NVIDIA#4209)

chesterxgchen added a commit to chesterxgchen/NVFlare that referenced this pull request Feb 24, 2026
…ent, hierarchical startup stability

chesterxgchen added a commit to chesterxgchen/NVFlare that referenced this pull request Feb 25, 2026
…ent, hierarchical startup stability

chesterxgchen added a commit that referenced this pull request Feb 25, 2026
…ent, hierarchical startup stability [skip ci] (#4218)

## Merge Dependency

> ⚠️ **Depends on #4209** — The *Hierarchical FL Startup Stability*
section documents changes introduced by PR #4209 (currently open).
**Merge PR #4209 into `2.7` before merging this PR.** All other sections
cover already-merged PRs.

---

## Summary

This PR updates `docs/release_notes/flare_272.rst` to reflect all major
changes merged into the 2.7.x line after the initial 2.7.2 draft,
covering three new areas:

- **Memory Management** — restructured and expanded with Zero Tensor
Copy at CJ (PR #4210) and client-side memory lifecycle management (PR
#4211)
- **F3 Streaming Reliability and Performance** — new section covering
HOL stall mitigation (PR #4206), stream pool starvation fix (PR
#4171/#4172), streaming download retry (PR #4167), RxTask self-deadlock
fix (PR #4204), and lock contention reduction (PR #4174)
- **Hierarchical FL Startup Stability** — new section covering
deployment timeout classification, startup grace period, selective
client exclusion, and metadata hardening (PR #4209 — pending merge),
with recommended config snippets for HPC/Lustre environments

The Bug Fixes section and intro paragraph are also updated accordingly.

A source-level RST comment has been added above the Hierarchical FL
section in the file to alert future maintainers to the merge dependency.

## Merged PRs Documented

| PR | Area | Status |
|---|---|---|
| #4171 / #4172 | Stream pool starvation fix | Merged |
| #4174 | Lock contention reduction | Merged |
| #4167 | Streaming download retry | Merged |
| #4204 | RxTask self-deadlock fix | Merged |
| #4206 | HOL stall mitigation | Merged |
| #4210 | Zero tensor copy at CJ | Merged |
| #4211 | Client-side memory management | Merged |
| #4209 | Hierarchical FL startup stability | **Open — merge before this PR** |

## Changes

### Memory Management (restructured)

- **Zero Tensor Copy at CJ** (`ClientAPILauncherExecutor`): CJ now holds
`LazyDownloadRef` placeholders instead of materializing full tensors,
eliminating the CJ as a memory bottleneck for LLM-scale models.
- **Client-Side Memory Management**: `gc.collect()` + `malloc_trim(0)` /
jemalloc purge / `torch.cuda.empty_cache()` injected after every
`flare.send()`, configurable via `client_memory_gc_rounds`.
- Existing TensorDownloader and server-side cleanup content retained.

### F3 Streaming Reliability and Performance (new section)

- **HOL Stall Mitigation**: Bounded `send_frame()` timeout, ACK-progress
watchdog, and stall detection/recovery. Includes recommended environment
variable settings for large hierarchical deployments.
- **Stream Pool Starvation Fix**: Blob callbacks dispatched to a
dedicated `callback_thread_pool`, keeping stream workers free for
concurrent downloads.
- **Streaming Download Retry**: Exponential-backoff retry (up to 3
attempts, capped at 60 s) on `TIMEOUT` errors; abort-signal aware.
- **RxTask Self-Deadlock Fix**: `stop()` deferred until after `map_lock`
released, eliminating stream-error-triggered deadlock.
- **Lock Contention Reduction**: `produce_item()` runs outside
`self.lock`; compare-and-store for cache write. Reduces model-download
latency under high client concurrency.

### Hierarchical FL Startup Stability (new section — pending PR #4209)

- **Deployment Timeout as Failure**: `reply=None` correctly counted
against `min_sites`; timed-out clients excluded before
`start_client_job`.
- **Startup Grace Period**: Dead-client detection debounced — client
must be observed once before absence triggers dead-job notification.
Default changed to `True`.
- **Selective Client Exclusion**: Stragglers at start-job excluded
rather than causing full abort, if remaining count ≥ `min_clients`.
- **Metadata Hardening**: `TypeError` on absent job metadata replaced
with descriptive `RuntimeError`.
- Recommended `config_fed_server.json` / `config_fed_client.json`
snippets for HPC (Frontier/ORNL) scale.

## Test plan

- [ ] Sphinx build (`make html`) passes without RST warnings on the
updated file
- [ ] All new cross-references (`.. code-block::`, `.. note::`) render
correctly in the docs build
- [ ] Verify section hierarchy (underline characters) is consistent
throughout the file
- [ ] Confirm PR #4209 is merged before this PR is merged

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
@pcnudde pcnudde deleted the codex/fix-rxtask-map-lock-deadlock branch March 11, 2026 02:33
chesterxgchen added a commit that referenced this pull request Mar 12, 2026
…meouts, selective client exclusion, and dead-detection debounce (#4209) (#4288)
