A pluggable SFTP server with S3 and custom backend support, written in Rust.
- SFTP server using russh
- Pluggable backend trait for custom storage implementations
- Built-in backends:
  - Memory - In-memory storage for testing/development
  - S3 - Amazon S3 or S3-compatible storage (LocalStack, MinIO)
- Password authentication
- Async/await with Tokio
Quick start with the in-memory backend:

```rust
use sftp_s3::{Server, ServerConfig, MemoryBackend};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let backend = MemoryBackend::new();

    let config = ServerConfig::new()
        .port(2222)
        .with_generated_key();

    Server::new(backend)
        .config(config)
        .with_users(vec![("user".into(), "pass".into())])
        .run()
        .await
}
```

To use Amazon S3 (or an S3-compatible service) as the storage backend instead:

```rust
use sftp_s3::{Server, ServerConfig, S3Backend, S3Config};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let s3_config = S3Config::new("my-bucket")
        .with_prefix("sftp/");

    let backend = S3Backend::from_env(s3_config).await;

    Server::new(backend)
        .config(ServerConfig::new().with_generated_key())
        .with_users(vec![("user".into(), "pass".into())])
        .run()
        .await
}
```

Configure AWS credentials via environment variables:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_REGION` (or `AWS_DEFAULT_REGION`)
- `AWS_ENDPOINT_URL` (for LocalStack/MinIO)
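For example, when pointing the server at LocalStack, the exports might look like this (a sketch; the `test` credentials are LocalStack's conventional dummy values and are assumptions, adjust for your environment):

```bash
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_REGION=us-east-1
export AWS_ENDPOINT_URL=http://localhost:4566   # LocalStack edge endpoint
```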
Implement the `Backend` trait for custom storage:
```rust
use sftp_s3::backend::{Backend, BackendResult, DirEntry, FileInfo};
use async_trait::async_trait;

struct MyBackend;

#[async_trait]
impl Backend for MyBackend {
    async fn list_dir(&self, path: &str) -> BackendResult<Vec<DirEntry>> {
        todo!() // your implementation
    }

    async fn file_info(&self, path: &str) -> BackendResult<FileInfo> {
        todo!() // your implementation
    }

    async fn make_dir(&self, path: &str) -> BackendResult<()> {
        todo!() // your implementation
    }

    async fn del_dir(&self, path: &str) -> BackendResult<()> {
        todo!() // your implementation
    }

    async fn delete(&self, path: &str) -> BackendResult<()> {
        todo!() // your implementation
    }

    async fn rename(&self, src: &str, dst: &str) -> BackendResult<()> {
        todo!() // your implementation
    }

    async fn read_file(&self, path: &str) -> BackendResult<Vec<u8>> {
        todo!() // your implementation
    }

    async fn write_file(&self, path: &str, content: Vec<u8>) -> BackendResult<()> {
        todo!() // your implementation
    }
}
```

Run the memory backend example:
```bash
cargo run --example memory_server
```

Run the S3 backend example:
```bash
SFTP_BUCKET=my-bucket cargo run --example s3_server
```

Connect with an SFTP client:
```bash
sftp -P 2222 user@localhost
```
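Once connected, standard SFTP commands operate on the configured backend. A typical interactive session might look like this (illustrative only):

```
sftp> mkdir uploads
sftp> put report.txt uploads/report.txt
sftp> ls -l uploads
sftp> get uploads/report.txt report-copy.txt
sftp> bye
```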
To run the server with Docker Compose instead, start with the memory backend:

```bash
# Set up directories and host key
./scripts/docker-setup.sh
# Start the memory backend (uses default credentials from docker-compose.yml)
docker-compose up -d sftp-memory
# Connect with default credentials
sftp -P 2222 user@localhost
# password: changeme
```

To use custom credentials, either:
- Edit `docker-compose.yml` and set the `SFTP_USERS` variable for `sftp-memory`, or
- Use environment variables:
```bash
SFTP_USERS=myuser:mypass docker-compose up -d sftp-memory
```
To run the local filesystem backend:

```bash
# Set up directories and host key (if not already done)
./scripts/docker-setup.sh
docker-compose up -d sftp-local
# Files are stored in the 'sftp-data' Docker volume
sftp -P 2223 user@localhost  # password: changeme
```

To run the S3 backend against AWS:

```bash
# Set up directories and host key (if not already done)
./scripts/docker-setup.sh
# Edit .env with your AWS credentials and SFTP_USERS
nano .env
# Start the S3 backend
docker-compose up -d sftp-s3
sftp -P 2224 user@localhost  # uses password from .env SFTP_USERS
```

To try the S3 backend locally against LocalStack:

```bash
# Set up directories and host key (if not already done)
./scripts/docker-setup.sh
# Start LocalStack and SFTP
docker-compose up -d localstack sftp-s3-local
# Initialize LocalStack bucket
./scripts/localstack-init.sh
# Connect via SFTP
sftp -P 2225 user@localhost # password: localstacktest
# Verify files in LocalStack S3
aws s3 ls s3://test-bucket/sftp/ --endpoint-url="http://localhost:4566"
```
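If you prefer not to use `localstack-init.sh`, creating the bucket by hand should be roughly equivalent (a sketch; the script may do additional setup):

```bash
aws s3 mb s3://test-bucket --endpoint-url="http://localhost:4566"
```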
The Docker services are configured through environment variables:

| Variable | Default | Backend | Purpose |
|---|---|---|---|
| `BACKEND` | `memory` | All | Storage backend: `memory`, `local`, or `s3` |
| `PORT` | `2222` | All | SFTP listening port |
| `SFTP_USERS` | - | All | Comma-separated `user:password` pairs (required unless using `authorized_keys`) |
| `RUST_LOG` | `sftp_s3=info` | All | Logging level |
| `HOST_KEY_FILE` | `/keys/ssh_host_ed25519_key` | All | Path to SSH host key |
| `AUTHORIZED_KEYS_FILE` | `/config/authorized_keys` | All | Path to authorized public keys |
| `LOCAL_ROOT` | `.` (optional) | local | Root directory for the local filesystem backend |
| `S3_BUCKET` | - | s3 | AWS S3 bucket name |
| `S3_PREFIX` | (empty) | s3 | Prefix for objects in S3 |
| `S3_ENDPOINT` | - | s3 | Custom S3-compatible endpoint (LocalStack, MinIO) |
| `AWS_REGION` | `us-east-1` | s3 | AWS region |
| `AWS_ACCESS_KEY_ID` | - | s3 | AWS access key |
| `AWS_SECRET_ACCESS_KEY` | - | s3 | AWS secret key |
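As an illustration, a minimal `.env` for the `sftp-s3` service might look like the following (all values are placeholders; substitute your own users, bucket, region, and credentials):

```bash
SFTP_USERS=myuser:strong-password
S3_BUCKET=my-bucket
S3_PREFIX=sftp/
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
```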
The containers use the following volume mounts:

| Path | Purpose | Mode |
|---|---|---|
| `/data` | Local backend storage | Read-Write |
| `/keys` | SSH host keys | Read-Only |
| `/config` | Config files (`authorized_keys`) | Read-Only |
Build the images with:

```bash
docker build -t sftp-s3:latest .
docker build -f Dockerfile.alpine -t sftp-s3:alpine .
```

Approximate image sizes:

- Standard (Debian): ~20MB
- Alpine: ~8MB
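To run a built image directly, without docker-compose, the invocation would look roughly like this (a sketch assuming the memory backend, the default port, and a `./keys` directory created by the setup script described below; other backends need their environment variables and mounts from the tables above):

```bash
docker run -d -p 2222:2222 \
  -e SFTP_USERS=myuser:strong-password \
  -v "$(pwd)/keys:/keys:ro" \
  sftp-s3:latest
```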
Generate a host key with the setup script:

```bash
./scripts/docker-setup.sh
# Generates key at: ./keys/ssh_host_ed25519_key
```

Or generate only the host key:

```bash
./scripts/generate-host-key.sh
```

If you have an existing SSH host key (not your personal key), you can use it:

```bash
# Copy an existing HOST key (not your personal ~/.ssh/id_ed25519!)
cp /path/to/existing/host_key ./keys/ssh_host_ed25519_key
chmod 600 ./keys/ssh_host_ed25519_key
```

**Warning:** Never use your personal SSH key as a host key. Always generate a dedicated host key using `./scripts/generate-host-key.sh`.
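If the helper scripts are unavailable, generating a dedicated ed25519 host key manually should be roughly equivalent (a sketch; the scripts may apply additional options):

```bash
ssh-keygen -t ed25519 -N "" -f ./keys/ssh_host_ed25519_key
chmod 600 ./keys/ssh_host_ed25519_key
```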
Configure SFTP user credentials in `docker-compose.yml` or `.env`:
```yaml
environment:
  SFTP_USERS: "myuser:strong-password"
```
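Multiple accounts can be supplied as comma-separated `user:password` pairs, for example (names and passwords illustrative):

```bash
SFTP_USERS=alice:password1,bob:password2 docker-compose up -d sftp-memory
```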
Then connect:

```bash
sftp -P 2222 myuser@localhost
```

To use public key authentication instead:

- Add your public key to `./config/authorized_keys`:

  ```bash
  cat ~/.ssh/id_ed25519.pub >> ./config/authorized_keys
  chmod 600 ./config/authorized_keys
  ```

- Connect without a password:

  ```bash
  sftp -P 2222 user@localhost
  ```

For production S3 credentials, Docker secrets can be used:

```bash
# Create secrets
echo "my-bucket" | docker secret create s3_bucket -
echo "us-east-1" | docker secret create aws_region -
echo "AKIA..." | docker secret create aws_access_key -
echo "..." | docker secret create aws_secret_key -
# Update docker-compose.yml to use secrets
```

All services include TCP health checks on the SFTP port. Verify status:
```bash
docker-compose ps
# Status should show "healthy"
```

View logs from a specific service:
```bash
docker-compose logs -f sftp-s3

# Set log level
docker-compose run -e RUST_LOG=sftp_s3=debug sftp-s3
```

Add resource limits to the `docker-compose.yml` service:
```yaml
services:
  sftp-s3:
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
```

Security notes:

- Credentials: The example `docker-compose.yml` includes test credentials (`user:changeme`) for development. Change these before production use, or use public key authentication.
- Public Key Auth: When using `authorized_keys`, `SFTP_USERS` can be omitted entirely for password-less authentication.
- Persistent Host Keys: Use `./scripts/docker-setup.sh` to generate host keys. This ensures a consistent server identity.
- Non-root Execution: All containers run as UID 1000 for a reduced attack surface.
- Read-only Config: Host keys and `authorized_keys` are mounted read-only to prevent tampering.
If something isn't working, check the basics first:

```bash
# Check if service is running and healthy
docker-compose ps
# Check logs
docker-compose logs sftp-s3
# Test TCP connection
nc -zv localhost 2224
```

If the server refuses to start because no users are configured:

```bash
# Error: SFTP_USERS must be explicitly set
# Solution: Set SFTP_USERS in docker-compose.yml before starting
docker-compose up -d
```

If SSH host key problems occur:

- Verify the host key file exists: `ls -la ./keys/ssh_host_ed25519_key`
- Check permissions: should be `600`
- Verify directory permissions: `ls -la ./keys/`

For S3 connection problems:

- Verify credentials in the `.env` file
- Check that the AWS region matches your bucket
- For LocalStack, verify it's healthy: `docker-compose ps localstack`
- Check that the S3 endpoint is accessible from the container
To stop and clean up:

```bash
# Stop all services
docker-compose down
# Remove volumes (including persistent data)
docker-compose down -v
# Remove images
docker rmi sftp-s3:latest
```

License: Apache 2.0