What is Rclone?
Rclone is an open-source command-line tool for managing files on over 70 cloud storage services. Often referred to as "rsync for the cloud," it offers advanced features for synchronization, transfer, encryption, and mounting of remote file systems.
Internal Architecture
Rclone operates on a modular architecture composed of several layers. At its core, the rclone engine handles the basic file operations. Above it, three main layers interact: the VFS layer for mounting and caching, the Crypt layer for client-side encryption, and the Chunker layer for splitting large files.
These layers communicate with a Backend Abstraction that standardizes access to various providers: S3/R2, Google Drive, SFTP, Backblaze B2, WebDAV, and many others.
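As an illustrative sketch of this stacking (the remote names below are hypothetical), one remote can wrap another in the configuration file: a crypt remote on top of a chunker remote, which in turn wraps a Google Drive backend.
# Hypothetical layered remotes in ~/.config/rclone/rclone.conf
[gdrive-raw]
type = drive
scope = drive

[gdrive-chunked]
type = chunker
remote = gdrive-raw:chunked
chunk_size = 2G

[gdrive-secure]
type = crypt
remote = gdrive-chunked:secure
filename_encryption = standard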
System Requirements
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 512 MB | 2 GB+ |
| CPU | 1 vCPU | 2+ vCPU |
| Storage | 100 MB | 1 GB (cache) |
| Kernel | 3.10+ | 5.4+ (FUSE3) |
| OS | Ubuntu 18.04+ | Ubuntu 22.04/24.04 |
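A quick way to check a host against these requirements (assuming a Debian/Ubuntu system; adjust as needed):
# Kernel version (3.10+ minimum, 5.4+ recommended for FUSE3)
uname -r
# Total RAM and number of CPUs
free -h | awk '/^Mem:/ {print $2}'
nproc
# Is fuse3 installed?
dpkg -l fuse3 2>/dev/null | grep -q '^ii' && echo "fuse3 present" || echo "fuse3 missing"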
Advanced Installation
Method 1: Installation via Official Script (Recommended)
# Install the latest stable release via the official script
curl -fsSL https://rclone.org/install.sh | sudo bash
# Verify the installation
rclone version
Method 2: Manual Installation (Full Control)
# Set the desired version
RCLONE_VERSION="v1.68.2"
ARCH="amd64" # or arm64 for ARM
# Download and extract
cd /tmp
curl -LO "https://downloads.rclone.org/${RCLONE_VERSION}/rclone-${RCLONE_VERSION}-linux-${ARCH}.zip"
unzip rclone-${RCLONE_VERSION}-linux-${ARCH}.zip
cd rclone-${RCLONE_VERSION}-linux-${ARCH}
# Binary installation
sudo cp rclone /usr/local/bin/
sudo chmod 755 /usr/local/bin/rclone
# Man pages installation
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb
# Bash autocompletion installation
rclone completion bash | sudo tee /etc/bash_completion.d/rclone > /dev/null
# Zsh autocompletion installation
sudo mkdir -p /usr/local/share/zsh/site-functions
rclone completion zsh | sudo tee /usr/local/share/zsh/site-functions/_rclone > /dev/null
Method 3: Compilation from Sources
# Prerequisites: Go 1.21+ and git (check with: go version)
sudo apt update && sudo apt install -y golang-go git
# Clone and compile
git clone https://github.com/rclone/rclone.git
cd rclone
go build -ldflags "-s -w" -o rclone
# Installation
sudo mv rclone /usr/local/bin/
FUSE Dependencies Installation
# Ubuntu/Debian
sudo apt update
sudo apt install -y fuse3 libfuse3-dev
# Enable FUSE for non-root users
sudo sed -i 's/#user_allow_other/user_allow_other/' /etc/fuse.conf
Multi-Provider Configuration
Configuration File Structure
The default configuration file is located at ~/.config/rclone/rclone.conf. Its INI structure supports multiple remotes.
# ~/.config/rclone/rclone.conf
[gdrive]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
scope = drive
token = {"access_token":"...","token_type":"Bearer","refresh_token":"...","expiry":"..."}
team_drive =
[s3-aws]
type = s3
provider = AWS
access_key_id = AKIAIOSFODNN7EXAMPLE
secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = eu-west-1
acl = private
storage_class = STANDARD_IA
[s3-r2]
type = s3
provider = Cloudflare
access_key_id = YOUR_R2_ACCESS_KEY
secret_access_key = YOUR_R2_SECRET_KEY
endpoint = https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com
acl = private
[b2]
type = b2
account = YOUR_APPLICATION_KEY_ID
key = YOUR_APPLICATION_KEY
hard_delete = true
[sftp-backup]
type = sftp
host = backup.example.com
user = backupuser
port = 22
key_file = ~/.ssh/backup_key
shell_type = unix
md5sum_command = md5sum
sha1sum_command = sha1sum
Interactive Configuration
# Launch the configuration wizard
rclone config
# Interactive menu commands
# n) New remote
# d) Delete remote
# r) Rename remote
# c) Copy remote
# s) Set configuration password
# q) Quit config
Google Drive Configuration (Headless VPS)
To configure Google Drive on a VPS without a graphical interface:
# On your local machine with a browser
rclone authorize "drive"
# Copy the generated token, then on the VPS
rclone config
# Select: n (new remote)
# Name: gdrive
# Storage: drive
# client_id: (leave empty or use yours)
# client_secret: (leave empty or use yours)
# scope: drive
# service_account_file: (leave empty)
# Edit advanced config: n
# Use auto config: n
# Paste the token obtained on the local machine
Configuration with Service Account (Google Workspace)
# Create the directory for credentials
mkdir -p ~/.config/rclone/sa
# Place the service account JSON file
# ~/.config/rclone/sa/service-account.json
[gdrive-sa]
type = drive
scope = drive
service_account_file = /home/user/.config/rclone/sa/service-account.json
team_drive = 0ABCdefGHIjklMNOpqr
S3 Compatible Configuration (MinIO, Wasabi, etc.)
[minio]
type = s3
provider = Minio
access_key_id = minioadmin
secret_access_key = minioadmin
endpoint = http://localhost:9000
acl = private
[wasabi]
type = s3
provider = Wasabi
access_key_id = YOUR_WASABI_KEY
secret_access_key = YOUR_WASABI_SECRET
region = eu-central-1
endpoint = s3.eu-central-1.wasabisys.com
Essential Commands
General Syntax
rclone subcommand <source> [<destination>] [flags]
Basic Commands
Listing and Navigation
# List configured remotes
rclone listremotes
# List the contents of a remote
rclone ls gdrive:
rclone ls gdrive:Documents/
# Detailed listing with size and date
rclone lsl gdrive:
# JSON format listing (for scripting)
rclone lsjson gdrive: --recursive
# Display directories only
rclone lsd gdrive:
# Full tree view
rclone tree gdrive:Projects/ --level 3
# Search for files (listing commands recurse by default)
rclone ls gdrive: --include "*.pdf"
Copy and Move
# Copy from local to remote
rclone copy /local/path gdrive:backup/ -P
# Copy from remote to local
rclone copy gdrive:Documents/ /local/documents/ -P
# Copy between two remotes (server-side if supported)
rclone copy gdrive:source/ s3-aws:destination/ -P
# Move (copy + delete source)
rclone move /local/temp/ gdrive:archive/ -P
# Copy a single file
rclone copyto /local/file.txt gdrive:backup/file.txt
Synchronization
# One-way synchronization (source → destination)
# CAUTION: deletes files in destination not present in source
rclone sync /local/data/ gdrive:data/ -P
# Synchronization with dry-run (simulation)
rclone sync /local/data/ gdrive:data/ --dry-run
# Bidirectional synchronization (experimental)
rclone bisync /local/data/ gdrive:data/ --resync
Deletion
# Delete a file
rclone deletefile gdrive:path/to/file.txt
# Delete a directory and its contents
rclone purge gdrive:old-backup/
# Delete only files (keep structure)
rclone delete gdrive:temp/
# Delete empty directories
rclone rmdirs gdrive: --leave-root
Essential Flags
| Flag | Description |
|---|---|
| -P, --progress | Display progress |
| -v, --verbose | Verbose mode |
| -n, --dry-run | Simulate without modifying anything |
| --transfers N | Number of parallel transfers |
| --checkers N | Number of parallel checkers |
| --bwlimit RATE | Bandwidth limit |
| --exclude PATTERN | Exclude matching files |
| --include PATTERN | Include only matching files |
| --min-size SIZE | Skip files smaller than SIZE |
| --max-size SIZE | Skip files larger than SIZE |
| --min-age DURATION | Only include files older than DURATION |
| --max-age DURATION | Only include files younger than DURATION |
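For example, a copy combining several of these flags might look like this (paths are illustrative): only PDFs under 500 MiB modified in the last 7 days, 8 parallel transfers, capped at 10 MiB/s.
rclone copy /data/docs/ gdrive:docs/ \
  --include "*.pdf" \
  --max-size 500M \
  --max-age 7d \
  --transfers 8 \
  --bwlimit 10M \
  -P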
Advanced Operations
Advanced Filtering
# Filter file (writing under /etc requires root)
sudo mkdir -p /etc/rclone
sudo tee /etc/rclone/filters.txt > /dev/null << 'EOF'
# Inclusions
+ **.pdf
+ **.docx
+ Documents/**
# Exclusions
- *.tmp
- *.log
- .git/**
- node_modules/**
- __pycache__/**
- .DS_Store
- Thumbs.db
# Exclude everything else
- *
EOF
# Use filter file
rclone sync /data/ gdrive:backup/ --filter-from /etc/rclone/filters.txt -P
Server-Side Operations
# Server-side copy (avoids local download; the flag is needed when copying between two different Drive configs)
rclone copy gdrive:source/ gdrive:destination/ --drive-server-side-across-configs
# Check if server-side is available
rclone backend features gdrive:
# Deduplication (Google Drive)
rclone dedupe gdrive:folder/ --dedupe-mode newest
Large File Handling
# Chunked upload for files > 5GB (S3)
rclone copy largefile.tar s3-aws:backup/ \
  --s3-chunk-size 100M \
  --s3-upload-concurrency 4
# Integrity check
rclone check /local/data/ gdrive:data/ --one-way
rclone hashsum MD5 gdrive:file.zip
Metadata Operations
# Copy with extra metadata preservation (permissions, ownership on supported backends)
rclone copy /local/ gdrive:backup/ --metadata
# Display metadata
rclone lsjson gdrive:file.txt --metadata
# Change storage tier/class (supported backends such as S3, Azure Blob, GCS)
rclone settier GLACIER s3-aws:backup/file.txt
FUSE Mount and Cache
Basic Mount
# Create mount point
sudo mkdir -p /mnt/gdrive
sudo chown $USER:$USER /mnt/gdrive
# Simple mount
rclone mount gdrive: /mnt/gdrive --daemon
# Mount with advanced options
rclone mount gdrive: /mnt/gdrive \
--daemon \
--allow-other \
--vfs-cache-mode full \
--vfs-cache-max-size 10G \
--vfs-cache-max-age 24h \
--vfs-read-chunk-size 64M \
--vfs-read-chunk-size-limit 1G \
--buffer-size 256M \
--dir-cache-time 72h \
--poll-interval 15s \
--log-file /var/log/rclone/gdrive.log \
--log-level INFO
VFS Cache Modes
| Mode | Description | Usage |
|---|---|---|
| off | No cache | Streaming only |
| minimal | Write cache only | Uploads |
| writes | Cache writes until upload | File editing |
| full | Full read/write cache | Intensive use, applications |
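As a sketch, the cache mode can be matched to the workload (mount points are illustrative):
# Streaming only: no local cache
rclone mount gdrive:Media /mnt/media --vfs-cache-mode off --daemon
# Editing files in place: writes are cached locally until uploaded
rclone mount gdrive:Documents /mnt/docs --vfs-cache-mode writes --daemon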
Advanced Cache Configuration
# Cache with dedicated backend
rclone mount gdrive: /mnt/gdrive \
--vfs-cache-mode full \
--cache-dir /var/cache/rclone \
--vfs-cache-max-size 50G \
--vfs-cache-max-age 168h \
--vfs-write-back 5s \
--vfs-read-ahead 128M \
--attr-timeout 1h \
--daemon
Systemd Service for Persistent Mount
# Create service file
sudo tee /etc/systemd/system/rclone-gdrive.service << 'EOF'
[Unit]
Description=Rclone Mount - Google Drive
Documentation=https://rclone.org/docs/
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
User=rclone
Group=rclone
# '+' runs this step with full privileges so the mount point can be created
ExecStartPre=+/bin/mkdir -p /mnt/gdrive
ExecStart=/usr/local/bin/rclone mount gdrive: /mnt/gdrive \
--config /home/rclone/.config/rclone/rclone.conf \
--allow-other \
--vfs-cache-mode full \
--vfs-cache-max-size 20G \
--vfs-cache-max-age 72h \
--vfs-read-chunk-size 32M \
--vfs-read-chunk-size-limit 256M \
--buffer-size 128M \
--dir-cache-time 48h \
--poll-interval 30s \
--log-level INFO \
--log-file /var/log/rclone/gdrive.log \
--cache-dir /var/cache/rclone/gdrive
ExecStop=/bin/fusermount -uz /mnt/gdrive
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
# Create dedicated user (with a home directory to hold the rclone config)
sudo useradd -r -m -d /home/rclone -s /usr/sbin/nologin rclone
sudo mkdir -p /var/log/rclone /var/cache/rclone
sudo chown rclone:rclone /var/log/rclone /var/cache/rclone
# Enable and start service
sudo systemctl daemon-reload
sudo systemctl enable rclone-gdrive.service
sudo systemctl start rclone-gdrive.service
# Check status
sudo systemctl status rclone-gdrive.service
Unmounting
# Clean unmount
fusermount -u /mnt/gdrive
# Force unmount (if stuck)
fusermount -uz /mnt/gdrive
# Via systemd
sudo systemctl stop rclone-gdrive.service
Encryption and Security
Setting up an Encrypted Remote
The crypt backend provides transparent client-side encryption.
# Configuration via wizard
rclone config
# Select: n (new remote)
# Name: gdrive-crypt
# Storage: crypt
# remote: gdrive:encrypted/ (underlying remote)
# filename_encryption: standard
# directory_name_encryption: true
# password: (generated or chosen)
# password2: (salt, optional but recommended)
Resulting configuration:
[gdrive-crypt]
type = crypt
remote = gdrive:encrypted/
password = ENCRYPTED_PASSWORD_HASH
password2 = ENCRYPTED_SALT_HASH
filename_encryption = standard
directory_name_encryption = true
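The same remote can also be created non-interactively. The sketch below assumes the underlying gdrive remote already exists; the passwords are placeholders, and --obscure tells rclone to obscure them before storing.
rclone config create gdrive-crypt crypt \
  remote gdrive:encrypted/ \
  filename_encryption standard \
  directory_name_encryption true \
  password "CHANGE_ME_PASSWORD" \
  password2 "CHANGE_ME_SALT" \
  --obscure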
Encryption Modes for Names
| Mode | Description | Example |
|---|---|---|
| off | No name encryption | document.pdf |
| standard | Full encryption | q4kp2q8fj3m2... |
| obfuscate | Simple obfuscation | yqkpzqFfj3m2... |
Using the Encrypted Remote
# Operations are transparent
rclone copy /sensitive/data/ gdrive-crypt: -P
# Files appear encrypted on the cloud side
rclone ls gdrive:encrypted/
# Output: encrypted and unreadable
# But readable via the crypt remote
rclone ls gdrive-crypt:
# Output: original names
Securing the Configuration
# Encrypt the configuration file
rclone config password
# Configuration now uses a password
# Set via environment variable
export RCLONE_CONFIG_PASS="your_strong_password"
# Or use a secret manager
export RCLONE_CONFIG_PASS=$(cat /run/secrets/rclone_pass)
Sensitive Environment Variables
# Configuration via variables (avoids config file)
export RCLONE_CONFIG_GDRIVE_TYPE=drive
export RCLONE_CONFIG_GDRIVE_TOKEN='{"access_token":"..."}'
# For S3
export RCLONE_CONFIG_S3_TYPE=s3
export RCLONE_CONFIG_S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export RCLONE_CONFIG_S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG
Automation and Scripting
Full Backup Script
#!/usr/bin/env bash
#
# rclone-backup.sh - Automated backup script
# Usage: ./rclone-backup.sh [daily|weekly|monthly]
#
set -euo pipefail
IFS=$'\n\t'
# Configuration
readonly SCRIPT_NAME=$(basename "$0")
readonly LOG_DIR="/var/log/rclone"
readonly LOG_FILE="${LOG_DIR}/backup-$(date +%Y%m%d-%H%M%S).log"
readonly LOCK_FILE="/var/run/rclone-backup.lock"
readonly CONFIG_FILE="${HOME}/.config/rclone/rclone.conf"
# Parameters
readonly LOCAL_SOURCE="/data"
readonly REMOTE_DEST="gdrive-crypt:backups"
readonly RETENTION_DAYS=30
readonly BANDWIDTH_LIMIT="50M"
readonly MAX_TRANSFERS=4
# Colors
readonly RED='\033[0;31m'
readonly GREEN='\033[0;32m'
readonly YELLOW='\033[1;33m'
readonly NC='\033[0m'
# Functions
log() {
local level="$1"
shift
local message="$*"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
echo -e "${timestamp} [${level}] ${message}" | tee -a "${LOG_FILE}"
}
cleanup() {
rm -f "${LOCK_FILE}"
log "INFO" "Cleanup done, lock released"
}
check_dependencies() {
local deps=("rclone" "flock")
for dep in "${deps[@]}"; do
if ! command -v "$dep" &> /dev/null; then
log "ERROR" "Missing dependency: $dep"
exit 1
fi
done
}
acquire_lock() {
exec 200>"${LOCK_FILE}"
if ! flock -n 200; then
log "ERROR" "Another instance is already running"
exit 1
fi
}
perform_backup() {
local backup_type="${1:-daily}"
local timestamp=$(date +%Y%m%d-%H%M%S)
local dest_path="${REMOTE_DEST}/${backup_type}/${timestamp}"
log "INFO" "Starting backup ${backup_type} to ${dest_path}"
rclone sync "${LOCAL_SOURCE}" "${dest_path}" \
--config "${CONFIG_FILE}" \
--transfers "${MAX_TRANSFERS}" \
--checkers 8 \
--bwlimit "${BANDWIDTH_LIMIT}" \
--log-file "${LOG_FILE}" \
--log-level INFO \
--stats 1m \
--stats-log-level NOTICE \
--exclude-from /etc/rclone/exclude.txt \
--backup-dir "${REMOTE_DEST}/.versions/${timestamp}" \
--suffix ".bak" \
--fast-list \
--retries 3 \
--retries-sleep 10s \
--ignore-errors
local exit_code=$?
if [ $exit_code -eq 0 ]; then
log "INFO" "${GREEN}Backup ${backup_type} completed successfully${NC}"
else
log "ERROR" "${RED}Backup ${backup_type} failed (code: ${exit_code})${NC}"
fi
return $exit_code
}
cleanup_old_backups() {
log "INFO" "Cleaning backups older than ${RETENTION_DAYS} days"
rclone delete "${REMOTE_DEST}" \
--config "${CONFIG_FILE}" \
--min-age "${RETENTION_DAYS}d" \
--rmdirs \
--log-file "${LOG_FILE}" \
--log-level INFO
log "INFO" "Cleanup completed"
}
send_notification() {
local status="$1"
local message="$2"
# Example with Discord/Slack webhook
if [ -n "${WEBHOOK_URL:-}" ]; then
curl -s -X POST "${WEBHOOK_URL}" \
-H "Content-Type: application/json" \
-d "{\"content\": \"[Rclone Backup] ${status}: ${message}\"}"
fi
}
main() {
local backup_type="${1:-daily}"
# Initialization
mkdir -p "${LOG_DIR}"
trap cleanup EXIT
log "INFO" "=== Starting ${SCRIPT_NAME} (${backup_type}) ==="
check_dependencies
acquire_lock
# Execution
if perform_backup "${backup_type}"; then
cleanup_old_backups
send_notification "SUCCESS" "Backup ${backup_type} completed"
else
send_notification "FAILURE" "Backup ${backup_type} failed"
exit 1
fi
log "INFO" "=== End ${SCRIPT_NAME} ==="
}
main "$@"
Cron Configuration
# Edit crontab
crontab -e
# Automated backups
# Daily at 2:00
0 2 * * * /usr/local/bin/rclone-backup.sh daily >> /var/log/rclone/cron.log 2>&1
# Weekly on Sunday at 3:00
0 3 * * 0 /usr/local/bin/rclone-backup.sh weekly >> /var/log/rclone/cron.log 2>&1
# Monthly on the 1st of the month at 4:00
0 4 1 * * /usr/local/bin/rclone-backup.sh monthly >> /var/log/rclone/cron.log 2>&1
# Cache cleanup daily at 5:00
0 5 * * * /usr/local/bin/rclone cleanup gdrive: --config /home/user/.config/rclone/rclone.conf
Systemd Timer (Cron Alternative)
# Service
sudo tee /etc/systemd/system/rclone-backup.service << 'EOF'
[Unit]
Description=Rclone Backup Service
After=network-online.target
[Service]
Type=oneshot
User=rclone
ExecStart=/usr/local/bin/rclone-backup.sh daily
StandardOutput=journal
StandardError=journal
EOF
# Timer
sudo tee /etc/systemd/system/rclone-backup.timer << 'EOF'
[Unit]
Description=Run Rclone Backup Daily
[Timer]
OnCalendar=*-*-* 02:00:00
RandomizedDelaySec=300
Persistent=true
[Install]
WantedBy=timers.target
EOF
# Activation
sudo systemctl daemon-reload
sudo systemctl enable --now rclone-backup.timer
# Verification
systemctl list-timers | grep rclone
Performance Optimization
Transfer Tuning
# High-performance configuration
rclone sync /source/ remote:dest/ \
--transfers 16 \
--checkers 32 \
--buffer-size 512M \
--drive-chunk-size 256M \
--fast-list \
--tpslimit 10 \
--tpslimit-burst 20 \
-P
Provider-Specific Settings
Google Drive
# Google Drive optimization
rclone copy /data/ gdrive:backup/ \
--drive-chunk-size 256M \
--drive-upload-cutoff 256M \
--drive-acknowledge-abuse \
--drive-keep-revision-forever=false \
--fast-list \
-P
Amazon S3
# S3 optimization
rclone copy /data/ s3:bucket/path/ \
--s3-chunk-size 100M \
--s3-upload-cutoff 200M \
--s3-upload-concurrency 8 \
--s3-copy-cutoff 4G \
--fast-list \
-P
Backblaze B2
# B2 optimization
rclone copy /data/ b2:bucket/path/ \
--b2-chunk-size 96M \
--b2-upload-cutoff 200M \
--b2-hard-delete \
--fast-list \
-P
Benchmarking
# Generate a 1 GiB local test file
rclone test makefile 1G /tmp/rclone_test_file
# Listing/memory benchmark (loads the remote's object listing into memory)
rclone test memory remote:path/
# Measure transfer throughput
time rclone copy /tmp/rclone_test_file remote:test/ -P --stats 1s
Bandwidth Management
# Fixed limit
--bwlimit 10M
# Variable limit based on time
--bwlimit "08:00,512K 18:00,10M 23:00,off"
# Separate upload/download limits
--bwlimit "10M:5M" # 10M up, 5M down
Monitoring and Logging
Log Configuration
# Log Levels
--log-level DEBUG # Very verbose
--log-level INFO # Standard
--log-level NOTICE # Important only
--log-level ERROR # Errors only
# Output file
--log-file /var/log/rclone/operation.log
# JSON format (for parsing)
--use-json-log
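For instance, a sync writing structured logs to a file could combine these options (path is illustrative):
rclone sync /data/ gdrive:backup/ \
  --log-level INFO \
  --log-file /var/log/rclone/sync.log \
  --use-json-log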
Detailed Statistics
# Periodic statistics
rclone sync /source/ remote:dest/ \
--stats 30s \
--stats-file-name-length 0 \
--stats-log-level NOTICE \
--stats-one-line \
-P
Monitoring with Prometheus
# Enable the remote control API with Prometheus metrics
rclone rcd --rc-addr :5572 --rc-enable-metrics
# The same flags (--rc --rc-addr :5572 --rc-enable-metrics) can be added
# to any long-running command such as mount or sync
# Prometheus endpoint
curl http://localhost:5572/metrics
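On the Prometheus side, a minimal scrape job pointing at the endpoint above could look like this (fragment to place under scrape_configs: in prometheus.yml; assumes a default Prometheus setup):
  - job_name: 'rclone'
    static_configs:
      - targets: ['localhost:5572']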
Monitoring Script
#!/usr/bin/env bash
# rclone-health-check.sh
REMOTE="gdrive:"
LOG_FILE="/var/log/rclone/health.log"
check_remote() {
if rclone lsd "${REMOTE}" --max-depth 1 &> /dev/null; then
echo "$(date '+%Y-%m-%d %H:%M:%S') [OK] ${REMOTE} accessible" >> "${LOG_FILE}"
return 0
else
echo "$(date '+%Y-%m-%d %H:%M:%S') [FAIL] ${REMOTE} inaccessible" >> "${LOG_FILE}"
return 1
fi
}
check_mount() {
local mount_point="/mnt/gdrive"
if mountpoint -q "${mount_point}"; then
echo "$(date '+%Y-%m-%d %H:%M:%S') [OK] ${mount_point} monté" >> "${LOG_FILE}"
return 0
else
echo "$(date '+%Y-%m-%d %H:%M:%S') [FAIL] ${mount_point} non monté" >> "${LOG_FILE}"
return 1
fi
}
check_remote
check_mount
Logrotate Configuration
sudo tee /etc/logrotate.d/rclone << 'EOF'
/var/log/rclone/*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 0640 rclone rclone
sharedscripts
postrotate
systemctl reload rclone-gdrive.service > /dev/null 2>&1 || true
endscript
}
EOF
Expert Troubleshooting
Common Issues and Solutions
Error 403 - Rate Limit (Google Drive)
# Symptom
ERROR : Failed to copy: googleapi: Error 403: User Rate Limit Exceeded
# Solutions
# 1. Reduce parallelism
--transfers 2 --checkers 4
# 2. Add delays
--tpslimit 2 --tpslimit-burst 0
# 3. Use your own Client ID
# In rclone config, set custom client_id and client_secret
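Combined, a rate-limit-friendly transfer might look like this (paths are illustrative):
rclone copy /data/ gdrive:backup/ \
  --transfers 2 \
  --checkers 4 \
  --tpslimit 2 \
  -P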
Expired Token Error
# Regenerate the token
rclone config reconnect gdrive:
# Or manually
rclone authorize drive
FUSE Mount - Permission Denied
# Check /etc/fuse.conf
grep user_allow_other /etc/fuse.conf
# If commented, uncomment
sudo sed -i 's/#user_allow_other/user_allow_other/' /etc/fuse.conf
# Use --allow-other
rclone mount remote: /mnt/point --allow-other
Missing Files after Sync
# Always dry-run first
rclone sync source: dest: --dry-run
# Check exclusions
rclone ls source: --include "*.ext" -v
# Use check to compare
rclone check source: dest: --one-way --combined report.txt
Advanced Diagnostics
# Full debug mode
rclone copy source: dest: -vv --dump headers,bodies
# Test connection
rclone lsd remote: --timeout 30s
# Check configuration
rclone config show remote
# List backends and features
rclone backend features remote:
# Rough network performance test: time the upload of a generated test file
rclone test makefile 100M /tmp/rclone_net_test
time rclone copy /tmp/rclone_net_test remote:test/ -P
Error Recovery
# Resume interrupted transfer
rclone copy source: dest: \
--retries 10 \
--retries-sleep 30s \
--low-level-retries 20 \
--ignore-errors
# Handle files with errors separately
rclone copy source: dest: \
--error-on-no-transfer \
2>&1 | tee transfer.log
# Replay only the missing files: list what is absent from the destination,
# then copy just those paths
rclone check source: dest: --one-way --missing-on-dst missing.txt
rclone copy source: dest: --files-from missing.txt -P
Production Use Cases
Full Web Server Backup
#!/usr/bin/env bash
# backup-webserver.sh
# Variables
MYSQL_USER="backup"
MYSQL_PASS="secret"
BACKUP_DIR="/var/backups/web"
REMOTE="b2:webserver-backups"
DATE=$(date +%Y%m%d)
# Create backup directory
mkdir -p "${BACKUP_DIR}/${DATE}"
# 1. Backup MySQL
mysqldump -u${MYSQL_USER} -p${MYSQL_PASS} --all-databases \
| gzip > "${BACKUP_DIR}/${DATE}/mysql-all.sql.gz"
# 2. Backup web files
tar -czf "${BACKUP_DIR}/${DATE}/www.tar.gz" -C /var/www .
# 3. Backup configuration
tar -czf "${BACKUP_DIR}/${DATE}/etc.tar.gz" \
/etc/nginx /etc/php /etc/mysql /etc/letsencrypt
# 4. Upload to cloud
rclone copy "${BACKUP_DIR}/${DATE}" "${REMOTE}/${DATE}/" \
--transfers 4 \
--b2-hard-delete \
-P
# 5. Local cleanup (keep 7 days)
find "${BACKUP_DIR}" -type d -mtime +7 -exec rm -rf {} +
# 6. Remote cleanup (keep 30 days)
rclone delete "${REMOTE}" --min-age 30d --rmdirs
Multi-Cloud Synchronization
#!/usr/bin/env bash
# multi-cloud-sync.sh - Replication across multiple providers
SOURCES=("gdrive:important" "s3:bucket/important")
DESTINATIONS=("b2:redundancy" "wasabi:redundancy")
for src in "${SOURCES[@]}"; do
for dst in "${DESTINATIONS[@]}"; do
echo "Syncing ${src} -> ${dst}"
rclone sync "${src}" "${dst}" \
--transfers 8 \
--fast-list \
--checksum \
-P
done
done
Media Server with Rclone Mount
# Configuration for Plex/Jellyfin
rclone mount gdrive:Media /mnt/media \
--daemon \
--allow-other \
--uid $(id -u plex) \
--gid $(id -g plex) \
--umask 002 \
--vfs-cache-mode full \
--vfs-cache-max-size 100G \
--vfs-cache-max-age 168h \
--vfs-read-chunk-size 128M \
--vfs-read-chunk-size-limit 1G \
--buffer-size 512M \
--dir-cache-time 168h \
--poll-interval 1m \
--log-file /var/log/rclone/media.log \
--log-level NOTICE \
--cache-dir /var/cache/rclone/media
Archiving with Versioning
# Sync with versioning
rclone sync /data/ remote:current/ \
--backup-dir remote:versions/$(date +%Y%m%d) \
--suffix .bak \
--suffix-keep-extension \
-P
# Restore a version
rclone copy remote:versions/20240115/file.txt.bak /restore/file.txt
Resources and References
Official Documentation
- Official website: https://rclone.org/
- Documentation: https://rclone.org/docs/
- Forum: https://forum.rclone.org/
- GitHub: https://github.com/rclone/rclone
Quick Useful Commands
# Check used space
rclone size remote:path/
# Clean up temporary files
rclone cleanup remote:
# Launch the web GUI (remote control daemon)
rclone rcd --rc-web-gui --rc-addr :5572
# Export config (without secrets)
rclone config dump | jq 'del(..|.token?, .password?, .secret_access_key?)'

