Having filesystem mounted can break zfs receive without any explicit modification #12580

@rincebrain

Description

System information

Type                 | Version/Name
Distribution Name    | Debian
Distribution Version | 10.10
Kernel Version       | 4.19.0-17-amd64
Architecture         | x86_64
OpenZFS Version      | 2.0.3-8~bpo10-1

Describe the problem you're observing

(I suppose this might be known, and a reason people use zfs recv -u, but it was certainly surprising to me; I can't recall ever encountering it, and I don't see any existing bug report or note about it, though the search terms are pretty generic...)
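For context, the `-u` workaround alluded to above keeps the destination unmounted so nothing can dirty it between incremental receives. A minimal sketch (the pool/dataset names here are hypothetical, not from this report):

```shell
# Receive without mounting the destination (-u), so no process can
# touch the received filesystem between incremental streams:
zfs send -RLe -I tank/src@snap1 tank/src@snap2 | zfs recv -u tank/dst

# Mount explicitly only once all increments have been received:
zfs mount tank/dst
```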

While I was shuffling around a large dataset to recompress it, I had the following odd experience:

# zfs send -RLe -I monolith/sorted/backups/urbackup@zfs-auto-snap_daily-2021-09-18-1554 monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-21-0317 | mbuffer -m 2G | zfs recv -o compress=zstd -o checksum=skein -o com.sun:auto-snapshot=false workspace/urbackup_zstd
in @  0.0 kiB/s, out @  176 MiB/s, 22.0 GiB total, buffer   1% full
summary: 22.0 GiByte in 31min 49.9sec - average of 11.8 MiB/s
cannot receive incremental stream: destination workspace/urbackup_zstd has been modified
since most recent snapshot
#

...huh, that's curious; nothing should be touching it, and the auto-snapshotting should be ignoring it...

# zfs list -o name,used,lused,refer,lrefer,guid -t all -r workspace/urbackup_zstd | tail -n 5; zfs list -o name,used,lused,refer,lrefer,guid -t all -r monolith/sorted/backups/urbackup | grep -A20 zfs-auto-snap_daily-2021-09-20-1632;
workspace/urbackup_zstd@zfs-auto-snap_daily-2021-09-20-1632       0B      -     2.79T   3.39T  10176713998951144728
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-1717   38.6M      -     2.79T   3.39T  5443784001748964352
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-1818   15.0M      -     2.79T   3.39T  10358248322821333959
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-1918    183M      -     2.78T   3.37T  11396114093498370378
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-2017    432K      -     2.08T   2.64T  15065966855315306501
monolith/sorted/backups/urbackup@zfs-auto-snap_daily-2021-09-20-1632       0B      -     2.91T   3.39T  10176713998951144728
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-20-1717   25.4M      -     2.91T   3.39T  5443784001748964352
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-20-1818   6.22M      -     2.91T   3.39T  10358248322821333959
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-20-1918    173M      -     2.90T   3.37T  11396114093498370378
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-20-2017      0B      -     2.19T   2.64T  15065966855315306501
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-20-2117      0B      -     2.19T   2.64T  8328899955974102268
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-20-2217      0B      -     2.19T   2.64T  1657122139991726938
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-20-2317      0B      -     2.19T   2.64T  11471628937756454123
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-21-0017      0B      -     2.19T   2.64T  11846511501685479153
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-21-0117      0B      -     2.19T   2.64T  2973001046682453498
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-21-0217      0B      -     2.19T   2.64T  4463158981212865079
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-21-0317      0B      -     2.19T   2.64T  15126046486804738953
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-21-0507      0B      -     2.19T   2.64T  10243722194683015519
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-21-0541      0B      -     2.19T   2.64T  17223118383772674624
monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-21-0627      0B      -     2.19T   2.64T  14809183698294611028

...huh, the latest snapshot shows 432K of changed data since it was taken...

# zfs diff workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-2017
#

...but no diff.
Okay, let's roll back...

# zfs rollback workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-2017
# zfs list -o name,used,lused,refer,lrefer,guid -t all -r workspace/urbackup_zstd | tail -n 5; zfs list -o name,used,lused,refer,lrefer,guid -t all -r monolith/sorted/backups/urbackup | grep -A20 zfs-auto-snap_daily-2021-09-20-1632;
workspace/urbackup_zstd@zfs-auto-snap_daily-2021-09-20-1632       0B      -     2.79T   3.39T  10176713998951144728
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-1717   38.6M      -     2.79T   3.39T  5443784001748964352
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-1818   15.0M      -     2.79T   3.39T  10358248322821333959
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-1918    183M      -     2.78T   3.37T  11396114093498370378
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-2017    432K      -     2.08T   2.64T  15065966855315306501

...what.
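One way to probe whether the "modification" is mere metadata churn (a guess on my part, not confirmed in this report) is to look at the written property and the atime settings, since atime updates would dirty the dataset without producing any zfs diff output:

```shell
# 'written' reports space consumed by data written since the latest
# snapshot; a small nonzero value alongside an empty `zfs diff`
# would be consistent with metadata-only changes such as atime updates.
# (Dataset name taken from the transcript above.)
zfs get written workspace/urbackup_zstd
zfs get atime,relatime workspace/urbackup_zstd
```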

Then I sighed, and wildly guessed that something queued up was being processed whenever the dataset was mounted.

# zfs umount workspace/urbackup_zstd
# zfs rollback workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-2017
# zfs list -o name,used,lused,refer,lrefer,guid -t all -r workspace/urbackup_zstd | tail -n 5;
workspace/urbackup_zstd@zfs-auto-snap_daily-2021-09-20-1632       0B      -     2.79T   3.39T  10176713998951144728
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-1717   38.6M      -     2.79T   3.39T  5443784001748964352
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-1818   15.0M      -     2.79T   3.39T  10358248322821333959
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-1918    183M      -     2.78T   3.37T  11396114093498370378
workspace/urbackup_zstd@zfs-auto-snap_hourly-2021-09-20-2017      0B      -     2.08T   2.64T  15065966855315306501
# 

...okay...let's try again.

# zfs send -RLe -I monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-20-2017 monolith/sorted/backups/urbackup@zfs-auto-snap_hourly-2021-09-21-0317 | mbuffer -m 2G | zfs recv -o compress=zstd -o checksum=skein -o com.sun:auto-snapshot=false workspace/urbackup_zstd
summary: 17.0 kiByte in  1.5sec - average of 11.2 kiB/s
# 

Describe how to reproduce the problem

My speculation is inline above.
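Along the lines of that speculation, a minimal reproduction sketch might look like the following (pool name testpool and all paths are hypothetical; whether it reproduces may depend on kernel and OpenZFS version, and on whatever touches the mounted destination):

```shell
# Receive an initial stream; the destination is mounted by default.
zfs create testpool/src
zfs snapshot testpool/src@a
zfs send testpool/src@a | zfs recv testpool/dst

# Change the source and send an increment while testpool/dst is mounted.
touch /testpool/src/file
zfs snapshot testpool/src@b
zfs send -I testpool/src@a testpool/src@b | zfs recv testpool/dst

# If merely being mounted lets something dirty testpool/dst (e.g. atime
# updates, or queued changes applied at mount time), the incremental
# receive fails with "destination ... has been modified since most
# recent snapshot", and rollback only sticks after `zfs umount`.
```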

Include any warning/errors/backtraces from the system logs

...

Metadata

Assignees: no one assigned
Labels: Bot: Not Stale (override for the stale bot); Type: Defect (incorrect behavior, e.g. crash, hang)
