Universal task runner with multi-environment deploy support.
Write your deploy logic once in Taskfile.yml, run it from your terminal, GitLab CI, GitHub Actions, Gitea, Jenkins — or any other pipeline. Zero CI/CD lock-in.
- The Problem This Solves
- Install
- Quick Start
- Taskfile.yml Reference
- CLI Commands
- Multi-Environment Deploy
- Multi-Platform Deploy
- Environment Groups & Fleet Management
- Parallel Tasks & Error Handling
- Registry Authentication
- Multi-Registry Publishing
- Quadlet Generator
- VPS Setup (One-Command)
- Release Pipeline
- CI/CD Integration
- Scaffold Templates
- Diagnostics & Validation
- Examples
- Development
- Troubleshooting & Debugging
You have one project with multiple deployment stages:
local → Docker Compose + Traefik (dev on laptop)
staging → Docker Compose over SSH (test server)
prod → Podman Quadlet + Traefik (512MB VPS)
fleet → 20× Raspberry Pi kiosks (edge deploy)
Without taskfile, you maintain separate scripts, CI configs, and fleet tools for each. With taskfile:
┌──────────────────────────────────────────────────┐
│ Taskfile.yml │
│ (environments + tasks + groups = one file) │
│ │
│ taskfile --env local run dev │
│ taskfile --env prod run deploy │
│ taskfile -G kiosks run deploy-kiosk │
│ taskfile fleet status │
│ taskfile auth setup │
└──────────────────────────────────────────────────┘
One YAML file. All environments, platforms, device groups, and deploy tasks in one place.
pip install taskfile

Short alias tf is also available after install:

tf --env prod run deploy

# 1. Create project from template
taskfile init --template full
# 2. See available tasks
taskfile list
# 3. Local development
taskfile --env local run dev
# 4. Deploy to production
taskfile --env prod run deploy
# 5. Dry run — see what would happen
taskfile --env prod --dry-run run deploy

A complete Taskfile.yml can contain these top-level sections:
version: "1"
name: my-project
description: Project description
default_env: local
default_platform: web
# ─── Global variables ────────────────────────────
variables:
APP_NAME: my-project
IMAGE: ghcr.io/myorg/my-project
TAG: latest
# ─── Hosts (compact environment declaration) ─────
# Extra keys (region, role) become uppercase variables.
hosts:
_defaults:
user: deploy
key: ~/.ssh/id_ed25519
runtime: podman
prod-eu: { host: eu.example.com, region: eu-west-1 }
prod-us: { host: us.example.com, region: us-east-1 }
_groups:
all-prod: { members: [prod-eu, prod-us], strategy: canary }
# ─── Environments (WHERE to deploy) ──────────────
# Smart defaults: ssh_host → podman/quadlet/~/.ssh/id_ed25519
# env_file defaults to .env.{env_name}
environments:
local: {} # docker/compose auto-detected
staging:
ssh_host: staging.example.com # → podman, quadlet, .env.staging
variables:
DOMAIN: staging.example.com
# ─── Deploy recipe (auto-generates tasks) ────────
deploy:
strategy: quadlet # compose | quadlet | ssh-push
images:
api: services/api/Dockerfile
web: services/web/Dockerfile
registry: ${REGISTRY}
health_check: /health
# ─── Addons (pluggable task generators) ──────────
addons:
- postgres: { db_name: myapp }
- monitoring: { grafana: http://grafana:3000 }
- redis: { url: redis://localhost:6379 }
# ─── Platforms (WHAT to deploy) ───────────────────
platforms:
web:
desc: Web application
variables:
BUILD_DIR: dist/web
desktop:
desc: Electron desktop app
variables:
BUILD_DIR: dist/desktop
# ─── Tasks ────────────────────────────────────────
tasks:
build:
desc: Build the application
cmds:
- docker build -t ${IMAGE}:${TAG} .
deploy:
desc: Deploy to target environment
deps: [build, push] # run dependencies first
parallel: true # run deps concurrently
env: [prod] # only run on prod
platform: [web] # only run for web platform
condition: "test -f Dockerfile" # skip if condition fails
continue_on_error: true # don't stop on failure
retries: 3 # retry on failure (Ansible-inspired)
retry_delay: 10 # seconds between retries
timeout: 300 # abort after 300 seconds
tags: [deploy, ci] # selective execution with --tags
register: DEPLOY_OUTPUT # capture stdout into variable
cmds:
- "@remote podman pull ${IMAGE}:${TAG}"
- "@remote systemctl --user restart ${APP_NAME}"
- "@fn notify Deployed ${APP_NAME}" # call embedded function
- "@python print('done')" # inline Python
# ─── Embedded Functions ───────────────────────────────
functions:
notify:
lang: python # shell | python | node | binary
code: | # inline code
import os; print(f"Deployed {os.environ['APP_NAME']}")
health-check:
lang: shell
file: scripts/health.sh # external file

- hosts — compact environment + group declaration with _defaults and _groups
- environments — WHERE to deploy (local machine, remote server via SSH)
- Smart defaults — ssh_host present → podman/quadlet/~/.ssh/id_ed25519; absent → docker/compose; env_file → .env.{name}
- environment_defaults — shared SSH/runtime config applied to all environments
- deploy — recipe that auto-generates build/push/deploy/rollback/health tasks
- addons — pluggable task generators (postgres, monitoring, redis)
- platforms — WHAT to deploy (web, desktop, mobile)
- environment_groups — batch of environments for fleet/group deploy
- tasks — commands to execute, with deps, filters, conditions
- variables — cascade: global → environment → platform → --var CLI overrides
- functions — embed Python/shell/Node/binary as callable @fn from tasks
- @remote prefix — command runs via SSH on the target environment's host
- @fn prefix — call an embedded function: @fn notify arg1
- @python prefix — run inline Python: @python print('hello')
- retries / timeout / tags / register — Ansible-inspired robustness
- include — split Taskfile.yml into multiple files for better organization
- pipeline — define CI/CD stages for automated generation
- compose — Docker Compose integration with override support
When ssh_host is present, taskfile auto-detects remote deploy settings — no need to repeat boilerplate:
# Before (verbose):
environments:
prod:
ssh_host: prod.example.com
ssh_user: deploy
ssh_key: ~/.ssh/id_ed25519
container_runtime: podman
service_manager: quadlet
env_file: .env.prod
quadlet_dir: deploy/quadlet
quadlet_remote_dir: ~/.config/containers/systemd
# After (smart defaults):
environments:
prod:
ssh_host: prod.example.com

| Condition | Default |
|---|---|
| ssh_host present | container_runtime: podman, service_manager: quadlet, ssh_key: ~/.ssh/id_ed25519 |
| ssh_host absent | container_runtime: docker, compose_command: docker compose |
| Always | env_file: .env.{env_name}, ssh_user: deploy |
Explicit values always override defaults.
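The resolution order above can be sketched in a few lines of Python. Note that `resolve_env` is a hypothetical helper used for illustration, not taskfile's actual internal API:

```python
# Sketch of the smart-default resolution described above.
# resolve_env() is a hypothetical helper, not taskfile's real code.

def resolve_env(name: str, config: dict) -> dict:
    """Apply smart defaults for one environment; explicit values win."""
    if "ssh_host" in config:
        defaults = {
            "container_runtime": "podman",
            "service_manager": "quadlet",
            "ssh_key": "~/.ssh/id_ed25519",
        }
    else:
        defaults = {
            "container_runtime": "docker",
            "compose_command": "docker compose",
        }
    defaults.update({"env_file": f".env.{name}", "ssh_user": "deploy"})
    return {**defaults, **config}  # explicit config overrides every default

prod = resolve_env("prod", {"ssh_host": "prod.example.com"})
# prod["container_runtime"] is "podman", prod["env_file"] is ".env.prod"
```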
Declare fleets and multi-region deploys in a fraction of the YAML:
# Before (50+ lines):
environments:
prod-eu:
ssh_host: eu.example.com
ssh_user: deploy
ssh_key: ~/.ssh/id_ed25519
container_runtime: podman
service_manager: quadlet
variables: { REGION: eu-west-1 }
prod-us:
ssh_host: us.example.com
ssh_user: deploy
ssh_key: ~/.ssh/id_ed25519
container_runtime: podman
service_manager: quadlet
variables: { REGION: us-east-1 }
environment_groups:
all-prod:
members: [prod-eu, prod-us]
strategy: canary
# After (10 lines):
hosts:
_defaults: { user: deploy, runtime: podman }
prod-eu: { host: eu.example.com, region: eu-west-1 }
prod-us: { host: us.example.com, region: us-east-1 }
_groups:
all-prod: { members: [prod-eu, prod-us], strategy: canary }

- _defaults — shared config for all hosts (short aliases: host, user, key, port, runtime, manager)
- Extra keys (like region, role) automatically become uppercase variables (REGION, ROLE)
- _groups — same format as environment_groups
- Works alongside environments: — both are merged
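The expansion is easy to picture in code. The sketch below (a hypothetical `expand_hosts` helper, not taskfile's internals) maps the short aliases onto full environment keys and turns extra keys into uppercase variables:

```python
# Sketch of the hosts: shorthand expansion described above.
# expand_hosts() is illustrative only, not taskfile's real implementation.

ALIASES = {
    "host": "ssh_host", "user": "ssh_user", "key": "ssh_key",
    "port": "ssh_port", "runtime": "container_runtime",
    "manager": "service_manager",
}

def expand_hosts(hosts: dict) -> dict:
    defaults = hosts.get("_defaults", {})
    envs = {}
    for name, spec in hosts.items():
        if name.startswith("_"):          # skip _defaults and _groups
            continue
        merged = {**defaults, **spec}     # per-host keys override _defaults
        env = {"variables": {}}
        for key, value in merged.items():
            if key in ALIASES:
                env[ALIASES[key]] = value
            else:                         # extra keys → uppercase variables
                env["variables"][key.upper()] = value
        envs[name] = env
    return envs

envs = expand_hosts({
    "_defaults": {"user": "deploy", "runtime": "podman"},
    "prod-eu": {"host": "eu.example.com", "region": "eu-west-1"},
})
# envs["prod-eu"]["variables"]["REGION"] is "eu-west-1"
```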
Auto-generate build, push, deploy, rollback, and health tasks from a recipe:
deploy:
strategy: quadlet # compose | quadlet | ssh-push
images:
api: services/api/Dockerfile
web: services/web/Dockerfile
registry: ${REGISTRY}
health_check: /health
health_retries: 5
rollback: auto

This generates: build-api, build-web, build-all, push-api, push-web, push-all, deploy, health, rollback. User-defined tasks with the same names override generated ones.
Add common operations in one line instead of writing 20+ tasks manually:
addons:
- postgres: { db_name: myapp, backup_dir: /tmp/bak }
- monitoring: { grafana: http://grafana:3000 }
- redis: { url: redis://redis:6379 }

| Addon | Generated tasks |
|---|---|
| postgres | db-status, db-size, db-migrate, db-backup, db-restore, db-vacuum, db-prune-backups |
| monitoring | mon-status, mon-alerts, mon-metrics, mon-dashboard-export, mon-setup |
| redis | redis-status, redis-info, redis-flush, redis-monitor |
String shorthand also works: addons: ["postgres"] (uses all defaults).
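Supporting both spellings is a small normalization step. The sketch below (a hypothetical `normalize_addons` helper, not taskfile's actual code) shows one way the string and mapping forms could be reduced to a single shape:

```python
# Sketch: normalize the two addon spellings shown above.
# normalize_addons() is illustrative, not taskfile's real API.

def normalize_addons(addons: list) -> dict:
    out = {}
    for entry in addons:
        if isinstance(entry, str):            # "postgres" → all defaults
            out[entry] = {}
        elif isinstance(entry, dict):          # {redis: {url: ...}} → options
            name, opts = next(iter(entry.items()))
            out[name] = opts or {}
    return out

normalized = normalize_addons(
    ["postgres", {"redis": {"url": "redis://localhost:6379"}}]
)
# {"postgres": {}, "redis": {"url": "redis://localhost:6379"}}
```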
See exactly what a task will do without running it:
$ taskfile --env prod-eu explain deploy
📋 deploy (env: prod-eu)
Deploy via Podman Quadlet
Steps:
1. 💻 docker build -t ghcr.io/org/api:latest ...
2. 💻 docker push ghcr.io/org/api:latest
3. 🌐 @remote systemctl --user daemon-reload
4. 🌐 @remote podman pull ghcr.io/org/api:latest
...
Variables: APP_NAME=myapp REGION=eu-west-1 TAG=latest
Requires: Docker, SSH to eu.example.com

Define reusable functions in Python, Shell, Node.js, or binary executables:
functions:
notify:
lang: python
desc: Send Slack notification
code: |
import os, json, urllib.request
webhook = os.environ.get("SLACK_WEBHOOK")
msg = os.environ.get("FN_ARGS", "Done")
# ... send notification
health-check:
lang: shell
file: scripts/health.sh # External file
tasks:
deploy:
cmds:
- "@fn notify Deployment started"
- "@fn health-check"

New Ansible-inspired features for robust automation:
tasks:
deploy:
desc: Deploy with retry logic
retries: 3 # Retry on failure
retry_delay: 10 # Seconds between retries
timeout: 300 # Abort after 5 minutes
tags: [deploy, ci] # Selective execution
register: DEPLOY_ID # Capture output
continue_on_error: true # Don't stop on failure
cmds:
- "@fn deploy-service"
- echo "Deploy ID: {{DEPLOY_ID}}"

Run with tags:

taskfile run --tags deploy # Only run tasks with 'deploy' tag

Organize large Taskfiles by splitting them:
# Taskfile.yml
include:
- path: ./tasks/build.yml
- path: ./tasks/deploy.yml
prefix: deploy # Tasks become: deploy-local, deploy-prod
- ./tasks/test.yml # String shorthand
variables:
APP: myapp
tasks:
all:
deps: [lint, test, build, deploy-prod]

# tasks/deploy.yml
environments:
prod:
ssh_host: prod.example.com
tasks:
prod:
cmds: ["@remote systemctl restart myapp"]

Define CI/CD stages that generate GitHub Actions, GitLab CI, and other pipeline configs:
pipeline:
python_version: "3.12"
docker_in_docker: true
secrets: [GHCR_TOKEN, DEPLOY_KEY]
cache: [~/.cache/pip, node_modules]
stages:
- name: test
tasks: [lint, test]
cache: [~/.cache/pip]
- name: build
tasks: [build, push]
docker_in_docker: true
- name: deploy
tasks: [deploy]
env: prod
when: manual # or "branch:main"

Generate CI configs:
taskfile ci generate --target github # GitHub Actions
taskfile ci generate --target gitlab # GitLab CI
taskfile ci run --stage test # Run locally

Enhanced Docker Compose support with overrides:
compose:
file: docker-compose.yml
override_files:
- docker-compose.override.yml
- docker-compose.prod.yml
network: proxy
auto_update: true
environments:
prod:
compose_file: docker-compose.prod.yml
env_file: .env.prod

taskfile [OPTIONS] COMMAND [ARGS...]
Options:
--version Show version
-f, --file PATH Path to Taskfile.yml
-e, --env ENV Target environment (default: local)
-G, --env-group GROUP Target environment group (fleet deploy)
-p, --platform PLATFORM Target platform (e.g. desktop, web)
--var KEY=VALUE Override variable (repeatable)
--dry-run Show commands without executing
-v, --verbose Verbose output
| Command | Description |
|---|---|
| taskfile <tasks...> | Run one or more tasks |
| taskfile <tasks...> --tags ci | Run only tasks matching tags |
| taskfile list | List tasks, environments, groups, platforms, variables |
| taskfile info <task> | Show detailed info about a task (incl. tags, retries, timeout) |
| taskfile validate | Check Taskfile.yml for errors |
| taskfile explain <task> | Show detailed execution plan without running |
| taskfile init [--template T] | Create Taskfile.yml from template |
| taskfile import <file> | Import CI/CD config, Makefile, or script INTO Taskfile.yml |
| taskfile export <format> | Export Taskfile.yml to other formats (GitHub Actions, GitLab CI) |
| Command | Description |
|---|---|
| taskfile deploy | Smart deploy — auto-detects strategy per environment |
| taskfile release [--tag v1.0] | Full release pipeline: tag → build → deploy → health |
| taskfile rollback [--target TAG] | Rollback to previous version |
| taskfile setup <IP> | One-command VPS provisioning + deploy |
| taskfile version bump | Bump version (patch/minor/major) |
| taskfile version show | Show current version |
| taskfile version set <version> | Set specific version |
| Command | Description |
|---|---|
| taskfile fleet status | SSH health check on all remote environments |
| taskfile fleet status --group kiosks | Check only devices in a group |
| taskfile fleet list | List remote environments and groups |
| taskfile fleet repair <env> | 8-point diagnostics + auto-fix |
| taskfile -G kiosks run deploy | Deploy to all devices in a group |
| Command | Description |
|---|---|
| taskfile auth setup | Interactive token setup for registries |
| taskfile auth setup --registry pypi | Setup for one registry only |
| taskfile auth verify | Test all configured credentials |
| Command | Description |
|---|---|
| taskfile quadlet generate | Generate Podman Quadlet from docker-compose.yml |
| taskfile quadlet upload | Upload Quadlet files to server via SSH |
| taskfile ci generate | Generate CI/CD config (GitHub Actions, GitLab, etc.) |
| taskfile health | Check health of deployed services |
| Command | Description |
|---|---|
| taskfile docker ps | Show running Docker containers |
| taskfile docker stop-port <port> | Stop containers using a specific port |
| taskfile docker stop-all | Stop all running containers |
| taskfile docker compose-down | Run docker compose down in directory |
| Command | Description |
|---|---|
| taskfile doctor | Full 5-layer diagnostics (preflight → validation → checks → fix → AI) |
| taskfile doctor --fix | Auto-fix issues where possible (Layer 4) |
| taskfile doctor --llm | Ask AI for help on unresolved issues (Layer 5, requires pip install taskfile[llm]) |
| taskfile doctor --category config | Filter by category: config, env, infra, runtime, or all |
| taskfile doctor --report | JSON output for CI pipelines |
| taskfile doctor --examples | Validate all examples/ directories |
| taskfile doctor -v | Verbose — also check task commands and SSH connectivity |
| Layer | Name | What it does |
|---|---|---|
| 1 | Preflight | Check if tools exist (docker, ssh, git, python3) |
| 2 | Validation | Check if Taskfile.yml is correct YAML with valid references |
| 3 | Diagnostics | Check environment health (ports, SSH keys, .env files, Docker) |
| 4 | Algorithmic fix | Auto-fix deterministic issues (copy .env.example, init git, rename PORT) |
| 5 | LLM assist | Escalate unresolved issues to AI via litellm (optional) |
| Category | Meaning | Example |
|---|---|---|
| taskfile_bug | Bug in taskfile itself | Parser crash, internal error |
| config_error | User misconfiguration | Missing task, broken dep, script not found, empty .env |
| dep_missing | Missing tool/dependency | Docker not installed, command not found |
| runtime_error | App/command execution failure | Exit code 1, process crash |
| external_error | Network/infra problem | SSH refused, VPS offline, OOM kill |
Each issue has a fix strategy indicating how it can be resolved:
| Strategy | Behavior |
|---|---|
| auto | Fixed automatically without asking |
| confirm | Ask user before applying fix |
| manual | Print instructions — user must act |
| llm | Escalate to AI for suggestion (--llm flag) |
# Full diagnostics
taskfile doctor
# Auto-fix + AI suggestions
taskfile doctor --fix --llm
# JSON for CI (non-zero exit on errors)
taskfile doctor --report

Before executing tasks, taskfile validates the configuration and stops early with clear messages:
✗ [config_error] Missing env file for 'prod': .env.prod (copy from .env.prod.example)
Fix your configuration — check Taskfile.yml and .env files.
Pre-run validation failed. Run taskfile doctor --fix to resolve.
When a command fails, taskfile classifies the exit code:
| Exit Code | Category | Hint |
|---|---|---|
| 1 | runtime | Command error — check logs above |
| 2 | config | Invalid arguments — check command syntax |
| 126 | config | Permission denied — check script permissions |
| 127 | config | Command not found — check PATH |
| 124 | infra | Timeout — increase timeout or check network |
| 137 | infra | Process killed (OOM?) — check resources |
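The table above is essentially a lookup from exit code to (category, hint). A minimal sketch, assuming a plain dictionary (taskfile's real classifier may differ):

```python
# Sketch of the exit-code classification table above, as a plain mapping.
# The fallback category for unknown codes is an assumption.

EXIT_CODES = {
    1:   ("runtime", "Command error — check logs above"),
    2:   ("config",  "Invalid arguments — check command syntax"),
    126: ("config",  "Permission denied — check script permissions"),
    127: ("config",  "Command not found — check PATH"),
    124: ("infra",   "Timeout — increase timeout or check network"),
    137: ("infra",   "Process killed (OOM?) — check resources"),
}

def classify(code: int) -> tuple:
    """Return (category, hint) for a command's exit code."""
    return EXIT_CODES.get(code, ("runtime", f"Unhandled exit code {code}"))
```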
For complex failures, use taskfile doctor --llm for AI-assisted troubleshooting.
Taskfile works great as an orchestration layer for AI coding tools. See examples/ai-*/ for complete Taskfile.yml configs:
| Tool | Example | Key Tasks |
|---|---|---|
| Aider | examples/ai-aider/ | feature, tdd, review-diff, lint-fix, type-fix |
| Claude Code | examples/ai-claude-code/ | implement, review-staged, generate-tests, ai-commit |
| OpenAI Codex | examples/ai-codex/ | implement, implement-auto, sandbox, fix-tests |
| GitHub Copilot | examples/ai-copilot/ | suggest, explain, init-instructions, review-pr |
| Cursor | examples/ai-cursor/ | init-rules, init-context, composer-feature |
| Windsurf | examples/ai-windsurf/ | init-rules, init-workflows, doctor-fix |
| Gemini CLI | examples/ai-gemini-cli/ | implement, review-screenshot (multimodal!), review-staged |
# Example: AI-assisted TDD with Aider
cd examples/ai-aider/
taskfile run tdd --var SPEC="User login returns JWT token"
# Example: Claude Code review of staged changes
cd examples/ai-claude-code/
taskfile run review-staged
# Example: Generate IDE rules for Windsurf
cd examples/ai-windsurf/
taskfile run init-rules # → .windsurfrules
taskfile run init-workflows # → .windsurf/workflows/ (4 templates)
# Example: Pipe taskfile doctor output to AI
taskfile doctor --report | claude "Fix these issues"

Define environments in Taskfile.yml, then target them with --env:
# Local development
taskfile --env local run dev
# Staging deploy
taskfile --env staging run deploy
# Production deploy
taskfile --env prod run deploy
# Override variables per-run
taskfile --env prod run deploy --var TAG=v1.2.3 --var DOMAIN=new.example.com

Any command prefixed with @remote runs on the environment's SSH host:
tasks:
restart:
env: [prod]
cmds:
- "@remote systemctl --user restart ${APP_NAME}"
- "@remote podman ps --filter name=${APP_NAME}"

This translates to: ssh -i ~/.ssh/id_ed25519 deploy@prod.example.com 'systemctl --user restart my-app'
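The rewrite from @remote prefix to ssh invocation can be sketched as below. `to_ssh` is a hypothetical helper for illustration, not taskfile's actual implementation:

```python
# Sketch: rewrite an @remote-prefixed command into an ssh invocation
# using the environment's SSH settings (illustrative only).
import shlex

def to_ssh(cmd: str, env: dict) -> str:
    assert cmd.startswith("@remote ")
    remote_cmd = cmd[len("@remote "):]
    return (f"ssh -i {env['ssh_key']} "
            f"{env['ssh_user']}@{env['ssh_host']} {shlex.quote(remote_cmd)}")

env = {"ssh_host": "prod.example.com", "ssh_user": "deploy",
       "ssh_key": "~/.ssh/id_ed25519"}
print(to_ssh("@remote systemctl --user restart my-app", env))
# → ssh -i ~/.ssh/id_ed25519 deploy@prod.example.com 'systemctl --user restart my-app'
```

Quoting the remote command as a single argument (via shlex.quote) keeps variables and spaces intact on the far side.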
taskfile deploy auto-detects the right strategy:
taskfile --env local deploy # → docker compose up -d
taskfile --env prod deploy # → generate Quadlet → scp → systemctl restart

Deploy to desktop and web platforms across environments:
┌──────────┬───────────────────────┬──────────────────────────┐
│ │ local │ prod │
├──────────┼───────────────────────┼──────────────────────────┤
│ desktop │ npm run dev:electron │ electron-builder publish │
│ web │ docker compose up │ podman pull + restart │
└──────────┴───────────────────────┴──────────────────────────┘
taskfile --env local --platform desktop run deploy
taskfile --env prod --platform web run deploy
taskfile release # all platforms at once

Variables cascade: global → environment → platform → CLI overrides.
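The cascade is a simple layered merge where later layers win. A minimal sketch (the variable values are examples, not required names):

```python
# Sketch of the variable cascade: each later layer overrides earlier ones.

def cascade(global_vars, env_vars, platform_vars, cli_vars):
    merged = {}
    for layer in (global_vars, env_vars, platform_vars, cli_vars):
        merged.update(layer)  # later layers win on conflict
    return merged

resolved = cascade(
    {"TAG": "latest", "APP_NAME": "my-project"},   # global variables
    {"DOMAIN": "staging.example.com"},             # environment variables
    {"BUILD_DIR": "dist/web"},                     # platform variables
    {"TAG": "v1.2.3"},                             # --var TAG=v1.2.3 wins
)
# resolved["TAG"] is "v1.2.3"
```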
Generate a multiplatform scaffold:
taskfile init --template multiplatform

Manage fleets of devices (Raspberry Pi, edge nodes, kiosks) using environment_groups in Taskfile.yml. Each device is an environment with ssh_host; groups define batch-deploy strategies.
Using hosts: shorthand (recommended for fleets):
hosts:
_defaults: { user: pi, runtime: podman }
kiosk-lobby: { host: 192.168.1.10, kiosk_id: lobby }
kiosk-cafe: { host: 192.168.1.11, kiosk_id: cafe }
kiosk-entrance: { host: 192.168.1.12, kiosk_id: entrance }
_groups:
kiosks:
members: [kiosk-lobby, kiosk-cafe, kiosk-entrance]
strategy: rolling # rolling | canary | parallel
max_parallel: 2 # for rolling: how many at a time

Or using classic environments + environment_groups:
environment_defaults:
ssh_user: pi
container_runtime: podman
environments:
kiosk-lobby: { ssh_host: 192.168.1.10 }
kiosk-cafe: { ssh_host: 192.168.1.11 }
kiosk-entrance: { ssh_host: 192.168.1.12 }
environment_groups:
kiosks:
members: [kiosk-lobby, kiosk-cafe, kiosk-entrance]
strategy: rolling
max_parallel: 2

- rolling — deploy to max_parallel devices at a time, wait for success, then next batch
- canary — deploy to canary_count devices first, confirm, then deploy to rest
- parallel — deploy to all devices simultaneously
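The rolling strategy is batching at its core. A minimal sketch of how a fleet could be chunked into max_parallel-sized waves (illustrative only, not taskfile's internals):

```python
# Sketch of the rolling strategy: chunk group members into batches of
# max_parallel; a real deployer would wait for each batch to succeed
# before starting the next, and stop on failure.

def rolling_batches(members, max_parallel):
    for i in range(0, len(members), max_parallel):
        yield members[i:i + max_parallel]

batches = list(rolling_batches(
    ["kiosk-lobby", "kiosk-cafe", "kiosk-entrance"], max_parallel=2))
# → [['kiosk-lobby', 'kiosk-cafe'], ['kiosk-entrance']]
```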
# Deploy to all kiosks with rolling strategy
taskfile -G kiosks run deploy-kiosk --var TAG=v2.0
# Deploy to a single device
taskfile --env kiosk-lobby run deploy-kiosk --var TAG=v2.0

# Check all remote devices (parallel SSH: temp, RAM, disk, containers, uptime)
taskfile fleet status
# Check only devices in a group
taskfile fleet status --group kiosks
# List all remote environments and groups
taskfile fleet list

Example output:
┌─────────────────┬──────────────┬────────┬──────┬─────┬──────┬────────────┬─────────┐
│ Name │ IP │ Status │ Temp │ RAM │ Disk │ Containers │ Uptime │
├─────────────────┼──────────────┼────────┼──────┼─────┼──────┼────────────┼─────────┤
│ kiosk-cafe │ 192.168.1.11 │ ✅ UP │ 52°C │ 41% │ 23% │ 3 │ up 14d │
│ kiosk-entrance │ 192.168.1.12 │ ✅ UP │ 48°C │ 38% │ 19% │ 3 │ up 14d │
│ kiosk-lobby │ 192.168.1.10 │ ✅ UP │ 55°C │ 45% │ 27% │ 3 │ up 14d │
└─────────────────┴──────────────┴────────┴──────┴─────┴──────┴────────────┴─────────┘
Diagnose and auto-fix issues on a device with 8-point check: ping, SSH, disk, RAM, temperature, Podman, containers, NTP.
# Interactive repair
taskfile fleet repair kiosk-lobby
# Auto-fix without prompts
taskfile fleet repair kiosk-lobby --auto-fix

Run task dependencies concurrently for faster builds:
tasks:
deploy:
deps: [test, lint, build]
parallel: true # test, lint, build run at the same time
cmds:
- echo "All deps done, deploying..."

Allow tasks to continue even if a command fails:
tasks:
lint:
cmds:
- ruff check .
continue_on_error: true # alias for ignore_errors: true
deploy:
deps: [lint, test]
parallel: true
continue_on_error: true # failed deps won't stop the deploy
cmds:
- "@remote systemctl --user restart ${APP_NAME}"

Skip tasks when conditions aren't met:
tasks:
migrate:
condition: "test -f migrations/pending.sql"
cmds:
- "@remote psql < migrations/pending.sql"

Interactively configure API tokens for package registries. Tokens are saved to .env (auto-gitignored).
# Setup all registries
taskfile auth setup
# Setup one registry
taskfile auth setup --registry pypi
# Verify all configured tokens
taskfile auth verify

Supported registries:
| Registry | Token variable | How to get |
|---|---|---|
| PyPI | PYPI_TOKEN | https://pypi.org/manage/account/token/ |
| npm | NPM_TOKEN | npm token create |
| Docker Hub | DOCKER_TOKEN | https://hub.docker.com/settings/security |
| GitHub | GITHUB_TOKEN | https://github.com/settings/tokens |
| crates.io | CARGO_TOKEN | https://crates.io/settings/tokens |
Generate a publish scaffold for releasing to multiple registries:
taskfile init --template publish

This creates a Taskfile.yml with tasks for:
- PyPI — twine upload
- npm — npm publish
- Docker Hub / GHCR — docker push
- GitHub Releases — gh release create
- Landing page — build & deploy
# Publish to all registries
taskfile run publish-all --var TAG=v1.0.0
# Publish to single registry
taskfile run publish-pypi --var TAG=v1.0.0
taskfile run publish-docker --var TAG=v1.0.0

Automatically generate Podman Quadlet .container files from your existing docker-compose.yml.
taskfile quadlet generate --env-file .env.prod -o deploy/quadlet

Reads docker-compose.yml, resolves ${VAR:-default} with .env.prod values, generates Quadlet units with:
- [Container] — image, env, volumes, labels, ports, resource limits
- [Unit] — After= / Requires= from depends_on
- AutoUpdate=registry for automatic updates
- Traefik labels preserved
- Named volumes → .volume units
- Networks → .network units
No podlet binary needed — pure Python.
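The core of the conversion is string templating over the compose service mapping. A greatly simplified sketch (hypothetical helper, far less capable than the real generator):

```python
# Sketch: render a minimal Quadlet .container unit from a compose-style
# service mapping. Illustrative only — the real generator also handles
# volumes, labels, networks, and resource limits.

def to_container_unit(name: str, svc: dict) -> tuple:
    """Return (filename, unit text) for one compose service."""
    lines = ["[Container]", f"Image={svc['image']}"]
    for port in svc.get("ports", []):
        lines.append(f"PublishPort={port}")
    for key, value in svc.get("environment", {}).items():
        lines.append(f"Environment={key}={value}")
    lines += ["", "[Service]", "Restart=always",
              "", "[Install]", "WantedBy=default.target"]
    return f"{name}.container", "\n".join(lines)

fname, unit = to_container_unit("api", {
    "image": "ghcr.io/myorg/api:latest",
    "ports": ["8080:8080"],
    "environment": {"DOMAIN": "example.com"},
})
# fname is "api.container"; unit starts with "[Container]\nImage=..."
```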
# Generate + upload to server
taskfile quadlet generate --env-file .env.prod
taskfile --env prod quadlet upload

Provision a fresh VPS and deploy your app in one command:

taskfile setup 123.45.67.89 --domain app.example.com

This runs:
- SSH key provisioning
- System update + Podman install
- Firewall configuration
- Deploy user creation
- Application deployment
Options:
taskfile setup 123.45.67.89 \
--domain app.example.com \
--ssh-key ~/.ssh/custom_key \
--user admin \
--ports 80,443,8080
# Dry run
taskfile setup 123.45.67.89 --dry-run
# Skip steps
taskfile setup 123.45.67.89 --skip-provision
taskfile setup 123.45.67.89 --skip-deploy

Full release orchestration: tag → build → deploy → health check.
# Full release
taskfile release --tag v1.0.0
# Skip desktop build
taskfile release --tag v1.0.0 --skip-desktop
# Dry run
taskfile release --tag v1.0.0 --dry-run

Steps:
- Create git tag
- Build desktop applications
- Build and deploy web (SaaS)
- Upload desktop binaries
- Build and deploy landing page
- Run health checks
Use shorthand commands for version management:
# Bump version (creates git tag automatically)
taskfile version bump # 0.1.0 → 0.1.1 (patch)
taskfile version bump minor # 0.1.0 → 0.2.0
taskfile version bump major # 0.1.0 → 1.0.0
taskfile version bump --dry-run # Preview changes
# Show current version
taskfile version show
# Set specific version
taskfile version set 1.0.0
taskfile version set 2.0.0-rc1

Rollback:
taskfile rollback # rollback to previous tag
taskfile rollback --target v0.9.0 # rollback to specific tag
taskfile rollback --dry-run

Same commands work everywhere — terminal, GitLab CI, GitHub Actions, Jenkins:
# Terminal
taskfile --env prod run deploy --var TAG=v1.2.3
# GitLab CI
script: taskfile --env prod run deploy --var TAG=$CI_COMMIT_SHORT_SHA
# GitHub Actions
run: taskfile --env prod run deploy --var TAG=${{ github.sha }}
# Jenkins
sh 'taskfile --env prod run deploy --var TAG=${BUILD_NUMBER}'

Generate CI configs automatically:
# Generate GitHub Actions workflow
taskfile ci generate --provider github
# Generated workflows support:
# - Tag-triggered releases (v*)
# - Secrets injection from GitHub Secrets
# - Multi-job pipelines

Generate a Taskfile.yml from built-in templates:
taskfile init --template <name>

| Template | Description |
|---|---|
| minimal | Basic build/deploy, 2 environments |
| web | Web app with Docker + Traefik, 3 environments |
| podman | Podman Quadlet + Traefik, optimized for low-RAM |
| full | All features: multi-env, release, cleanup, quadlet |
| codereview | 3-stage: local(Docker) → staging → prod(Podman Quadlet) |
| multiplatform | Desktop + Web × Local + Prod deployment matrix |
| publish | Multi-registry publishing: PyPI, npm, Docker, GitHub |
| saas | SaaS app with hosts:, deploy:, addons:, smart defaults |
| kubernetes | Kubernetes + Helm multi-cluster deployment |
| terraform | Terraform IaC with multi-environment state management |
| iot | IoT/edge fleet with rolling, canary, and parallel strategies |
Templates are stored as plain YAML files and can be customized.
For faster, connection-pooled SSH execution without subprocess overhead:
pip install taskfile[ssh]

When paramiko is installed, @remote commands use native Python SSH with connection pooling. Falls back to subprocess ssh automatically if paramiko is not available.
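This optional-dependency pattern is a simple try/except around the import. A sketch of the selection logic (illustrative, not taskfile's actual code):

```python
# Sketch of the optional-dependency fallback described above: prefer
# paramiko when importable, otherwise shell out to the ssh binary.

def pick_ssh_backend() -> str:
    try:
        import paramiko  # noqa: F401 — provided by the taskfile[ssh] extra
        return "paramiko"
    except ImportError:
        return "subprocess"

backend = pick_ssh_backend()  # "paramiko" if installed, else "subprocess"
```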
# Taskfile.yml
include:
- path: ./tasks/build.yml
- path: ./tasks/deploy.yml
prefix: deploy # tasks become: deploy-local, deploy-prod
- ./tasks/test.yml # string shorthand

Tasks, variables, and environments from included files are merged. Local definitions take precedence.
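The merge order matters: includes are applied first, then the root file overwrites on conflict. A minimal sketch with a hypothetical `merge_taskfiles` helper (not taskfile's real API), shown for the tasks section:

```python
# Sketch of include merging: included files are merged first (with an
# optional name prefix), then the root Taskfile's own definitions win.

def merge_taskfiles(root: dict, includes: list, section: str = "tasks") -> dict:
    merged = {}
    for inc in includes:
        prefix = inc.get("prefix")
        for name, task in inc.get(section, {}).items():
            merged[f"{prefix}-{name}" if prefix else name] = task
    merged.update(root.get(section, {}))  # local definitions take precedence
    return merged

tasks = merge_taskfiles(
    {"tasks": {"build": {"cmds": ["cargo build"]}}},
    [{"prefix": "deploy", "tasks": {"prod": {"cmds": ["..."]}}}],
)
# keys: "deploy-prod" (prefixed from the include) and "build" (local)
```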
my-project/
├── Taskfile.yml # tasks, environments, groups
├── docker-compose.yml # container definitions (source of truth)
├── .env.local # local variables
├── .env.prod # production variables (gitignored)
├── deploy/
│ └── quadlet/ # auto-generated .container files
└── Dockerfile
| Example | Complexity | Features |
|---|---|---|
| minimal | ⭐ | test, build, run — no environments |
| saas-app | ⭐⭐ | local/staging/prod with pipeline |
| multiplatform | ⭐⭐⭐ | Web + Desktop, CI/CD generation |
| codereview.pl | ⭐⭐⭐⭐ | 6 CI platforms, Quadlet, docker-compose |
| Example | Registry | Language |
|---|---|---|
| publish-pypi | PyPI | Python |
| publish-npm | npm | Node.js / TypeScript |
| publish-cargo | crates.io | Rust |
| publish-docker | GHCR + Docker Hub | any (multi-arch) |
| publish-github | GitHub Releases | Go (binaries + checksums) |
| multi-artifact | 5 registries | Python + Rust + Node.js + Docker |
| Example | Features |
|---|---|
| fleet-rpi | 6 RPi, hosts: shorthand, rolling/canary groups |
| edge-iot | IoT gateways, hosts:, ssh_port: 2200, all 3 group strategies, condition |
| Example | Features |
|---|---|
| ci-pipeline | pipeline section, stage field, taskfile ci generate/run/preview, condition, silent |
| kubernetes-deploy | Helm, multi-cluster (staging + prod-eu + prod-us), canary groups |
| iac-terraform | dir (working_dir), env_file, Terraform plan/apply/destroy, condition |
| cloud-aws | Lambda + ECS + S3, multi-region, env_file, environment_groups |
| quadlet-podman | service_manager: quadlet, compose section, ssh_port: 2222, taskfile deploy/setup |
| Example | Features |
|---|---|
| script-extraction | Split Taskfile into shell/Python scripts, mixed inline + script tasks |
| ci-generation | pipeline section → 6 CI platforms, stage triggers (when), docker_in_docker |
| include-split | include — import tasks/vars/envs from other YAML files, prefix support |
| functions-embed | functions section, @fn/@python prefix, retries, timeout, tags, register |
| import-cicd | taskfile import — GitHub Actions, GitLab CI, Makefile, shell → Taskfile.yml |
| Example | Features |
|---|---|
| monorepo-microservices | platforms, build_cmd/deploy_cmd, condition, dir, stage, platform filter |
| fullstack-deploy | ALL CLI commands: deploy, setup, release, init, validate, info, ci, --dry-run |
| mega-saas-v2 | hosts:, deploy:, addons:, smart defaults — 70% less YAML vs mega-saas |
# CI pipeline — generate + run locally
cd examples/ci-pipeline
taskfile ci generate --target github
taskfile ci run --stage test
# Kubernetes — multi-cluster canary
cd examples/kubernetes-deploy
taskfile -G all-prod run helm-deploy --var TAG=v1.0.0
# Terraform — multi-env IaC
cd examples/iac-terraform
taskfile --env staging run plan
taskfile --env staging run apply
# IoT fleet — all 3 strategies
cd examples/edge-iot
taskfile -G warehouse run deploy --var TAG=v2.0 # canary
taskfile -G factory run deploy --var TAG=v2.0 # parallel
# AWS — Lambda + ECS multi-region
cd examples/cloud-aws
taskfile --env prod-eu run ecs-deploy lambda-deploy --var TAG=v1.0.0

Taskfile is designed to complement existing tools, not replace them all. Here's how to integrate with popular alternatives:
Use Makefile as a thin wrapper for teams that expect make:
# Makefile — delegates to taskfile
deploy:
taskfile --env prod run deploy
test:
taskfile run test
.PHONY: deploy test

Or use taskfile alongside Make — each handles what it does best:
- Make — C/C++ compilation, file-based dependency graphs
- Taskfile — multi-environment deploys, fleet management, registry auth
Similar philosophy (command runner), different strengths:
- Just — simple per-project recipes, no environments
- Taskfile — environments, groups, fleet, @remote, registry auth
Migration: each Just recipe maps to a Taskfile task. Add environments for multi-host.
Both use YAML, but Taskfile adds:
- environments / environment_groups / @remote SSH
- taskfile fleet, taskfile auth, taskfile quadlet
- Publishing pipelines with registry integration
They can coexist — use Taskfile.yml for deploy, Taskfile.dist.yml for go-task.
Complementary:
- Dagger — containerized CI pipelines (build graph in code)
- Taskfile — orchestration layer that calls Dagger
tasks:
build:
cmds:
- dagger call build --source=.
deploy:
deps: [build]
env: [prod]
cmds:
- "@remote podman pull ${IMAGE}:${TAG}"
- "@remote systemctl --user restart ${APP}"

For fleet management at scale:
- Ansible — 100+ hosts, complex inventories, roles, idempotent modules
- Taskfile — small fleets (<50), simple SSH commands, `environment_groups`
For hybrid: use Ansible for provisioning, Taskfile for daily operations:
tasks:
provision:
cmds:
- ansible-playbook -i inventory.yml setup.yml
deploy:
cmds:
- "@remote podman pull ${IMAGE}:${TAG}"
- "@remote systemctl --user restart ${APP}"Taskfile.yml is configuration — it should declare what to do, not how:
# ✅ Good — declarative, short commands
tasks:
build:
deps: [test]
cmds:
- cargo build --release
deploy:
env: [prod]
cmds:
- "@remote podman pull ${IMAGE}:${TAG}"
- "@remote systemctl --user restart ${APP}"# ❌ Bad — logic embedded in YAML
tasks:
deploy:
cmds:
- |
if [ "$ENV" = "prod" ]; then
ssh deploy@prod "podman pull $IMAGE"
ssh deploy@prod "systemctl restart $APP"
elif [ "$ENV" = "staging" ]; then
...
fi

When a task needs conditionals, loops, or error handling — put it in scripts/:
# ✅ Taskfile calls script
tasks:
validate:
cmds:
- ./scripts/validate-deploy.sh ${APP_NAME} ${TAG}

# scripts/validate-deploy.sh — testable, lintable, reusable
#!/usr/bin/env bash
set -euo pipefail
docker build -t "$1-validate:$2" .
docker run -d --name "$1-validate" -p 9999:3000 "$1-validate:$2"
curl -sf http://localhost:9999/health || exit 1

# ✅ Best — hosts: shorthand (for fleets)
hosts:
_defaults: { user: pi, key: ~/.ssh/fleet_ed25519, runtime: podman }
node-1: { host: 192.168.1.10 }
node-2: { host: 192.168.1.11 }

# ✅ Good — smart defaults (for 2-3 environments)
environments:
prod:
ssh_host: prod.example.com  # → auto: podman, quadlet, .env.prod

# ✅ Also good — environment_defaults (explicit shared config)
environment_defaults:
ssh_user: pi
ssh_key: ~/.ssh/fleet_ed25519
container_runtime: podman
environments:
node-1: { ssh_host: 192.168.1.10 }
node-2: { ssh_host: 192.168.1.11 }

# ❌ WET — repeated on every environment
environments:
node-1:
ssh_host: 192.168.1.10
ssh_user: pi
ssh_key: ~/.ssh/fleet_ed25519
container_runtime: podman
node-2:
ssh_host: 192.168.1.11
ssh_user: pi
ssh_key: ~/.ssh/fleet_ed25519
container_runtime: podman

# ✅ No environments needed for a simple publish pipeline
version: "1"
name: my-lib
variables:
VERSION: "1.0.0"
tasks:
test:
cmds: [cargo test]
publish:
deps: [test]
cmds: [cargo publish]

# ❌ Unnecessary boilerplate
environments:
local:
container_runtime: docker
compose_command: docker compose
# ^ Never used — the tasks don't reference Docker Compose

# ✅ Compose via deps
test-all:
deps: [py-test, rs-test, js-test]
parallel: true
# ❌ Duplicating commands from other tasks
test-all:
cmds:
- cd packages/python && pytest
- cd packages/rust && cargo test
- cd packages/node && npm test

Working with the taskfile project itself:
# Clone repository
git clone https://github.com/pyfunc/taskfile.git
cd taskfile
# Create virtual environment
python3 -m venv venv
source venv/bin/activate # or: venv\Scripts\activate on Windows
# Install in editable mode with dev dependencies
pip install -e ".[dev]"
# Verify installation
taskfile --version

# Run all tests
pytest tests/ -v
# Run specific test file
pytest tests/test_parser.py -v
# Run with coverage
pytest tests/ --cov=taskfile --cov-report=html --cov-report=term
# Run only e2e tests
pytest tests/test_e2e_examples.py -v
# Run DSL command tests (all command types: @local, @remote, @fn, @python, globs, etc.)
pytest tests/test_dsl_commands.py -v

# Lint with ruff
ruff check src/taskfile/
# Format code
ruff format src/taskfile/
# Check types (optional, requires mypy)
mypy src/taskfile/ --ignore-missing-imports

taskfile/
├── src/taskfile/ # Main source code
│ ├── cli/ # CLI commands
│ ├── runner/ # Task execution engine
│ ├── diagnostics/ # Diagnostics & doctor
│ ├── scaffold/ # Template generation
│ └── cigen/ # CI/CD generators
├── tests/ # Test suite
├── examples/ # Example configurations (24 examples)
├── docs/ # Documentation
└── Taskfile.yml # Project's own tasks
# 1. Create a branch
git checkout -b fix/my-feature
# 2. Make changes and run tests
pytest tests/ -v
# 3. Run project taskfile for validation
taskfile validate
# 4. Test against examples
taskfile doctor --examples
# 5. Build and install locally
pip install -e .

# Verbose output
taskfile -v run <task>
# Very verbose (internal debug)
taskfile -vv run <task>
# Dry run to see commands without executing
taskfile --dry-run run <task>

Every command shows where in your Taskfile.yml it comes from:
## 🚀 Running: `deploy`
- Config: Taskfile.yml
- Environment: prod
▶ build — Build Docker images [prod] (Taskfile.yml:25)
### Step 1/1 — 💻 local `Taskfile.yml:28`
→ docker compose build
▶ deploy — Deploy to target [prod] (Taskfile.yml:30)
### Step 1/4 — 💻 local `Taskfile.yml:35`
### Step 2/4 — 🌐 remote `Taskfile.yml:36`
Use -v for full YAML snippet context at each step:
taskfile -v --env prod run deploy

Before executing scp/rsync commands, taskfile checks that local files exist:
### Step 3/4 — 🌐 remote `Taskfile.yml:37`
⚠️ **No files match** `deploy/quadlet/*.container` — generate them first
(e.g. `taskfile quadlet generate`)
💡 Tip: Generate Quadlet files first
Run `taskfile quadlet generate --env-file .env.prod -o deploy/quadlet`
### ❌ Pre-run validation failed for task `deploy`
**Fix:** Create the missing files, then re-run.
**Diagnose:** `taskfile doctor --fix`
This catches missing deploy artifacts before SSH/SCP fails with cryptic errors.
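The same check can be sketched in plain shell — a minimal illustration of the idea, not taskfile's actual implementation:

```shell
# In POSIX shells an unmatched glob stays literal, so testing "-e" on
# the first expansion result tells us whether anything matched.
check_glob() {
  set -- $1   # intentionally unquoted: let the shell expand the glob
  if [ -e "$1" ]; then echo "ok: $# file(s)"; else echo "missing: $1"; fi
}
dir=$(mktemp -d)
check_glob "$dir/*.container"   # nothing matches yet
touch "$dir/app.container"
check_glob "$dir/*.container"   # now one file matches
```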
Taskfile shows contextual tips as you work, helping you learn best practices:
| Trigger | Tip |
|---|---|
| `scp` in command | Use `rsync -avz` instead (handles globs, resume) |
| `quadlet` in command | Generate `.container` files first |
| `@remote` prefix | Test SSH with `taskfile fleet status` |
| `docker compose` | Validate with `docker compose config` |
| `systemctl` | Quadlet auto-generates systemd units |
| `.env` reference | Keep `.env.prod` gitignored, use `.example` templates |
Tips also appear on failures with exit-code-specific advice (SSH errors, permission denied, command not found).
# Full system diagnostics
taskfile doctor
# Auto-fix common issues
taskfile doctor --fix
# Get AI help on unresolved issues
taskfile doctor --fix --llm
# Check specific category
taskfile doctor --category config
taskfile doctor --category runtime

# Test SSH connectivity
taskfile fleet status
# Check SSH key permissions
chmod 600 ~/.ssh/id_ed25519
# Test manual SSH
ssh -i ~/.ssh/id_ed25519 user@host "echo OK"

# Generate Quadlet files from docker-compose.yml
taskfile quadlet generate --env-file .env.prod -o deploy/quadlet
# Verify files were created
ls -la deploy/quadlet/
# Add as dependency in Taskfile.yml:
# deploy:
#   deps: [build, quadlet-generate]

# Validate configuration
taskfile validate
# Check specific task
taskfile info <task-name>
# List all tasks with their environments
taskfile list

# Check loaded variables
taskfile list --vars
# Override for testing
taskfile run <task> --var KEY=VALUE --var DEBUG=1

# Check Docker/Podman
taskfile doctor --category runtime
# Test container runtime manually
docker ps # or: podman ps
# Check registry authentication
taskfile auth verify

| Flag | Output |
|---|---|
| `-v` | Verbose — step-by-step tracing with YAML snippets and learning tips |
| `-vv` | Very verbose — internal debug info |
| `--dry-run` | Show commands without executing |
| `--report` | JSON output for CI/debugging |
# Command help
taskfile --help
taskfile <command> --help
# Task help
taskfile info <task-name>
# AI-assisted debugging (requires LLM extras)
pip install taskfile[llm]
taskfile doctor --llm

- Run diagnostics: `taskfile doctor --report > debug.json`
- Run with verbose: `taskfile -v run <task> 2>&1 | tee debug.log`
- Check version: `taskfile --version`
- Include: `debug.json` and `debug.log` output
- Your `Taskfile.yml` (redact secrets)
- Python version: `python --version`
- OS: `uname -a`
Apache License 2.0 - see LICENSE for details.
Created by Tom Sapletta - tom@sapletta.com