hero_proc
A lightweight process supervisor with dependency management, similar to systemd but simpler.
Quick Install
Get started in one command:
```shell
curl -fsSL https://raw.githubusercontent.com/threefoldtech/hero_proc/main/scripts/install.sh | bash
```
Or download and run the installer script directly:
```shell
cd /tmp && curl -O https://raw.githubusercontent.com/threefoldtech/hero_proc/main/scripts/install.sh
chmod +x install.sh
./install.sh
```
This will:
- Detect your OS and architecture
- Download pre-built binaries (Linux amd64, macOS arm64)
- Install to `$HOME/hero/bin`
- Configure your shell PATH automatically
- Start `hero_proc_server` in the background (macOS/Windows only)
Then use: `hero_proc list`, `hero_proc status`, `hero_proc start <service>`, etc.
For more options, see the Quick Start section below.
Documentation
Getting Started:
- CLI User Guide: Complete tutorials, recipes, and usage patterns (4000+ words)
- CLI Command Reference: All commands and options reference (3000+ words)
- Example Configurations: Ready-to-use examples and patterns
Reference:
- Service Specification: Detailed reference for all TOML fields.
- SDK Documentation: Rust SDK for programmatic service management.
- Path Configuration: Platform-specific default paths and environment variables.
Features
- Dependency Graph: Services declare dependencies (`requires`, `after`, `wants`, `conflicts`)
- State Machine: 8 explicit states (Inactive, Blocked, Starting, Running, Stopping, Success, Exited, Failed)
- Process Groups: Signals are sent to process groups, handling `sh -c` child processes correctly
- Health Checks: TCP, HTTP, and exec-based health checks with retries
- Ordered Shutdown: Dependents stop before their dependencies
- Hot Reload: Reload configuration without full restart
- Multi-Environment: Works in containers, VMs, and bare-metal
- Web Admin Dashboard: Real-time service management UI with charts, logs, events, and bulk operations
- Fully Embedded UI: All assets (Bootstrap, Chart.js, icons) compiled into the binary — no CDN or network required
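The retry semantics behind a TCP health check can be sketched with the standard library alone. This is an illustrative sketch, not hero_proc's implementation; the function name and parameters are invented for the example:

```rust
use std::net::{SocketAddr, TcpStream};
use std::thread;
use std::time::Duration;

/// Illustrative TCP health check with retries (not hero_proc's actual code):
/// the check passes as soon as one connect attempt succeeds within `timeout`,
/// and fails once all `retries` attempts are exhausted.
fn tcp_health_check(addr: SocketAddr, retries: u32, timeout: Duration, backoff: Duration) -> bool {
    for attempt in 0..retries {
        if TcpStream::connect_timeout(&addr, timeout).is_ok() {
            return true; // service answered on its port
        }
        if attempt + 1 < retries {
            thread::sleep(backoff); // wait before the next attempt
        }
    }
    false // all attempts failed -> mark the service unhealthy
}
```

An HTTP check would additionally send a request and inspect the status line; an exec-based check would spawn the configured command and test its exit code.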
Deployment Modes
hero_proc adapts its behavior based on deployment environment:
Container Mode
Use hero_proc_pid1 as your container entrypoint:
```dockerfile
ENTRYPOINT ["/usr/bin/hero_proc_pid1", "--container"]
```
Or set the environment variable:
```shell
HERO_PROC_CONTAINER=1 hero_proc_pid1
```
Behavior:
- Loads services from `/etc/hero_proc/services/`
- Clean exit on shutdown (no reboot syscall)
- No system services directory
VM / Bare-Metal Mode
Use hero_proc_pid1 as your init system (PID 1):
```shell
# In /sbin/init or kernel cmdline: init=/usr/bin/hero_proc_pid1
```
Behavior:
- Loads system services from `/etc/hero_proc/system/` first (auto-assigned `class=system`)
- Loads user services from `/etc/hero_proc/services/` second
- Handles reboot/poweroff via syscalls (SIGINT=reboot, SIGTERM=poweroff)
- Never exits (kernel panic prevention)
Standalone Mode
Run hero_proc_server directly (not as PID 1):
```shell
hero_proc_server --config-dir ~/hero/cfg/hero_proc
```
Important: In standalone mode, TOML config files placed in the config directory are not automatically loaded at startup. You must run hero_proc reload after placing or modifying config files to import them into the service database:
```shell
# Place service configs
cp my-service.toml ~/hero/cfg/hero_proc/

# Load configs into hero_proc
hero_proc reload

# Now start
hero_proc start my-service
```
Optionally enable the system services directory:
```shell
hero_proc_server --config-dir /etc/hero_proc/services --pid1-mode
```
Quick Start
Installation (Recommended)
Download and install pre-built binaries:
```shell
./scripts/install.sh
```
This script:
- Detects your OS and architecture
- Downloads binaries from Forgejo registry
- Installs to `$HOME/hero/bin`
- Configures your PATH automatically
- On macOS/Windows, automatically starts the server in the background
Building from Source
```shell
# Full build with Makefile
make build

# Or manual build
cargo build --release --workspace

# Run the server + admin UI
make run

# Use the CLI
hero_proc list
hero_proc status my-service
hero_proc start my-service
hero_proc stop my-service
```
See scripts/README.md for detailed information about installation scripts and the Makefile for build targets.
Architecture
```
hero_proc_pid1 (PID 1 shim)
    | spawns/monitors
    v
hero_proc_server (daemon)
    | unix socket (IPC + OpenRPC)
    v
hero_proc (CLI/TUI)        hero_proc_ui (web admin dashboard)
    | unix socket + TCP :9999
```
Crate Structure
hero_proc is organized as a Cargo workspace with 7 separate crates:
```
crates/
  hero_proc_sdk/               # Shared service SDK types and client library
  hero_proc_server/            # Process supervisor daemon (IPC + OpenRPC)
  hero_proc/                   # Command-line interface
  hero_proc_ui/                # Web dashboard UI
  hero_proc_lib/               # SQLite persistence layer with factory pattern
  hero_proc_pid1/              # Init shim (PID 1 mode)
  hero_proc_integration_test/  # Integration test suite
```
Dependency Graph
```
hero_proc_sdk (no internal deps)
   ^    ^    ^    ^    ^
   |    |    |    |    |
server CLI   UI  lib  pid1
```
All crates depend on hero_proc_sdk. No cross-dependencies between server, CLI, UI, lib, or pid1. The lib crate provides the factory pattern for persistent storage (SQLite) and job tracking.
Ports and Sockets
| Component | Binding | Default |
|---|---|---|
| hero_proc_server | Unix socket (IPC) | ~/hero/var/sockets/hero_proc_server.sock |
| hero_proc_ui | Unix socket (local tool access) | ~/hero/var/sockets/hero_proc_ui.sock |
| hero_proc_ui | TCP (HTTP dashboard) | 9999 |
Core Concepts
Service
A service is a named container for one or more executable tasks with metadata. Services are defined in TOML configuration files and represent applications, daemons, or system components you want to manage.
Example:
```toml
[service]
name = "my-app"
exec = "/usr/bin/my-app --daemon"
```
Job
A job (previously called an "action") is a single executable task within a service. Jobs have triggers that determine when they run:
- start: Runs when the service starts
- stop: Runs when the service stops
- check: Health check that runs periodically
- manual: Runs only on explicit command
Run
A run is a persistent execution record of a job. Each time a job executes, a run is created with:
- Execution status (pending, running, success, failed)
- Exit code and error messages
- Execution timestamp and duration
- Complete output logs (stdout/stderr)
Runs are stored in SQLite and enable historical tracking, auditing, and debugging of job executions.
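In code, a run record of this shape could look roughly like the following. The field names are illustrative, not hero_proc_lib's actual schema:

```rust
/// Illustrative shape of a run record (not hero_proc_lib's real schema).
#[derive(Debug)]
struct Run {
    id: u64,
    service: String,
    job: String,
    status: String,          // "pending" | "running" | "success" | "failed"
    exit_code: Option<i32>,  // set once the job finishes
    started_at_ms: u64,      // execution timestamp
    duration_ms: Option<u64>,
    stdout: Vec<String>,     // captured output logs
    stderr: Vec<String>,
}

impl Run {
    /// A new run starts in the "pending" state with no outcome yet.
    fn new(id: u64, service: &str, job: &str, started_at_ms: u64) -> Self {
        Run {
            id,
            service: service.to_string(),
            job: job.to_string(),
            status: "pending".to_string(),
            exit_code: None,
            started_at_ms,
            duration_ms: None,
            stdout: Vec::new(),
            stderr: Vec::new(),
        }
    }
}
```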
Factory Pattern
The factory pattern (implemented in hero_proc_lib) provides a unified entry point (HeroProcDb) for accessing all persistence and configuration APIs:
```rust
let hero_procdb = HeroProcDb::with_defaults()?;

// Access APIs via namespaced methods
hero_procdb.jobs.list()?;         // Job CRUD operations
hero_procdb.runs.insert(...)?;    // Create execution records
hero_procdb.services.parse(...)?; // Load service configs
hero_procdb.logging.append(...)?; // Store job logs
```
This pattern isolates all database complexity and provides clean, type-safe access to:
- JobsApi: Job lifecycle management
- RunsApi: Execution tracking
- ServicesApi: Configuration file handling
- LoggingApi: Persistent log storage
For detailed API reference, see crates/hero_proc_lib.
Configuration
Service configs are TOML files in the config directory (default: ~/hero/cfg/hero_proc/ on macOS/Linux standalone, /etc/hero_proc/services/ in container/PID1 mode).
Important: After placing or modifying TOML config files, run hero_proc reload to import them into hero_proc's database.
For detailed service configuration defaults and specifications, see docs/SERVICE_SPECS.md
Currently Parsed TOML Sections
The legacy TOML loader (hero_proc reload) parses these sections:
```toml
[service]
name = "my-app"
exec = "/usr/bin/my-app --daemon"
description = "My application"  # optional
oneshot = false                 # optional (default: false)
kill_others = false             # optional (default: false)

[service.env]
RUST_LOG = "info"
DATABASE_URL = "postgres://localhost/mydb"

[dependencies]
requires = ["database"]         # must be running
after = ["logger"]              # start order only
```
Planned (Not Yet Parsed)
The following sections are defined in the spec but are not yet implemented in the TOML config loader. They will be silently ignored if present:
```toml
# NOT YET PARSED — use SDK builders or CLI flags instead
[lifecycle] # restart policy, signals, timeouts
[health]    # health check configuration
[logging]   # log buffer settings
```
These features are available through the SDK builder API and CLI hero_proc add commands. See docs/SDK.md for programmatic configuration.
Environment Variables in TOML
Environment variables are set under [service.env]:
```toml
[service.env]
DATABASE_URL = "postgres://localhost/mydb"
DEBUG = "true"
```
Targets
Virtual services for grouping:
```toml
[target]
name = "multi-user"

[dependencies]
requires = ["network", "logger", "database"]
```
Service Status
The status field controls supervisor behavior:
- `start` (default): Automatically start and keep running
- `stop`: Keep stopped (won't auto-start)
- `ignore`: Supervisor ignores this service
Service Class
The class field protects critical services from bulk operations:
- `user` (default): Normal service, affected by `*_all` commands
- `system`: Protected service, skipped by bulk operations
System-class services are immune to start_all, stop_all, and delete_all commands.
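The protection rule amounts to a simple filter over the service list. The types below are invented for illustration, not hero_proc internals:

```rust
/// Illustrative service classes (mirrors the `class` field; not real internals).
#[derive(PartialEq)]
enum Class {
    User,
    System,
}

struct Service {
    name: String,
    class: Class,
}

/// A bulk operation such as stop_all only touches user-class services;
/// system-class services are skipped entirely.
fn bulk_targets(services: &[Service]) -> Vec<&str> {
    services
        .iter()
        .filter(|s| s.class == Class::User)
        .map(|s| s.name.as_str())
        .collect()
}
```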
CLI Commands
```shell
hero_proc list                     # List all services
hero_proc status <name>            # Show service status
hero_proc start <name>             # Start a service
  [--tree]                         # Also start required dependencies
hero_proc stop <name>              # Stop (cascades to dependents)
hero_proc restart <name>           # Restart a service
hero_proc kill <name> [signal]     # Send signal to service
hero_proc logs <name> [-n N]       # View service logs
hero_proc why <name>               # Show why service is blocked
hero_proc tree                     # Show dependency tree
hero_proc reload                   # Reload configuration

hero_proc add service <name>       # Add service at runtime
  [--description <text>]           # Service description
  [--class user|system]            # Service class (default: user)
  [--after <svc>]                  # Start order dependency
  [--requires <svc>]               # Hard dependency
  [--wants <svc>]                  # Soft dependency
  [--conflicts <svc>]              # Mutual exclusion
  [--persist]                      # Save to config directory

hero_proc add job <service> <job>  # Add job to service
  --exec <cmd>                     # Command to execute
  [--trigger start|stop|check]     # Job trigger (manual if omitted)
  [--restart on-failure]           # Restart policy
  [--interval-ms <ms>]             # Check interval
  [--timeout-ms <ms>]              # Job timeout

hero_proc delete <service> [job]   # Delete service or job
hero_proc reset                    # Stop all services, delete all configs (with confirmation)
  [--force]                        # Skip confirmation prompt
hero_proc shutdown                 # Stop all services, exit daemon
hero_proc poweroff                 # Power off system (signals init)
hero_proc reboot                   # Reboot system (signals init)

# Xinet socket activation proxy commands
hero_proc xinet set <name>         # Create or update xinet proxy (replaces existing)
  --listen <addr>                  # Listen address: host:port or unix:/path (repeatable)
  --backend <addr>                 # Backend address: host:port or unix:/path
  --service <name>                 # HeroProc service to start on connection
  [--connect-timeout <secs>]       # Timeout for backend connect (default: 30)
  [--idle-timeout <secs>]          # Stop service after idle seconds (default: 0=never)
  [--single-connection]            # Allow only one connection at a time
hero_proc xinet delete <name>      # Delete xinet proxy
hero_proc xinet list               # List all xinet proxies
hero_proc xinet status [name]      # Show proxy status (all if no name given)

# Debug commands
hero_proc debug-state              # Full graph state dump
hero_proc debug-procs <name>       # Process tree for a service

# Demo & Testing
hero_proc demo                     # Create demo service configs and reload
```
Web Admin Dashboard
The hero_proc_ui crate provides a real-time web admin dashboard at http://localhost:9999 with six main tabs:
- Actions: Display registered actions with interpreter, timeout, and tags
- Jobs: View job instances with phase, status, and logs; includes statistics bar
- Runs: Track execution runs with status and job counts
- Services: Manage services, dependencies, and action mappings
- Secrets: Store and manage encrypted configuration values
- Logs: Query and filter system logs by source, level, and timestamp
The dashboard also includes:
- Theme toggle: Dark/Light mode in the navbar
- Refresh button: Manual refresh of all data
- Search and filtering: Each tab has search/filter controls
- Bulk operations: Service demo loading and job purging
All UI assets (Bootstrap 5.3.3, Bootstrap Icons) are embedded in the binary via rust-embed — no internet connection needed.
The UI connects to hero_proc_server via the SDK (AsyncHeroProcClient) over Unix socket.
```shell
# Start server + UI
make run

# Or start separately
hero_proc_server --config-dir ~/hero/cfg/hero_proc &
hero_proc_ui --port 9999
```
Xinet Socket Activation Proxy
Xinet enables on-demand service startup through socket activation. When a client connects to the proxy's listening socket, the proxy starts the backend service and forwards traffic.
Use Cases
- Databases: Start postgres on first query
- Development Servers: Start on HTTP request
- Backup Services: Start on trigger
- Rarely-Used Services: Reduce memory footprint
Example: PostgreSQL with Socket Activation
Create the backend service:
```toml
# /etc/hero_proc/services/postgres.toml
[service]
name = "postgres"
exec = "/usr/bin/postgres -D /var/lib/postgres"
status = "stop"  # Don't autostart

[lifecycle]
start_timeout_ms = 5000
stop_timeout_ms = 10000
```
Register the proxy (starts postgres on first connection):
```shell
hero_proc xinet set postgres-proxy \
  --listen tcp:localhost:5432 \
  --backend unix:/tmp/postgres.sock \
  --service postgres \
  --connect-timeout 10 \
  --idle-timeout 300   # Stop after 5 minutes idle
```
Now clients connect to localhost:5432 and postgres starts automatically.
Example: Multiple Listen Addresses
```shell
hero_proc xinet set postgres-proxy \
  --listen tcp:0.0.0.0:5432 \
  --listen unix:/run/postgres.sock \
  --backend unix:/tmp/postgres.sock \
  --service postgres
```
Proxy Features
- Bidirectional Forwarding: TCP ↔ TCP, Unix ↔ Unix, TCP ↔ Unix
- Auto-Start Backend: Starts service on first connection
- Idle Timeout: Automatically stops service after inactivity
- Connection Limits: Optional single-connection mode
- Replace Mode: `xinet set` replaces an existing proxy (stops the old one first)
- Connection Stats: Track active connections and total traffic
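The accept-then-start-then-forward loop at the heart of socket activation can be sketched with the standard library alone. This is a minimal illustration of the idea, not the xinet implementation (which also handles Unix sockets, idle timeouts, and connection stats); `run_proxy` and its parameters are invented for the example:

```rust
use std::io;
use std::net::{Shutdown, TcpListener, TcpStream};
use std::thread;

/// Minimal socket-activation sketch (not the real xinet code): accept a
/// client, lazily start the backend service, then copy bytes both ways
/// until each side closes.
fn run_proxy(listener: TcpListener, backend_addr: String, start_backend: impl Fn()) -> io::Result<()> {
    for client in listener.incoming() {
        let client = client?;
        start_backend(); // hero_proc would start the configured service here
        let backend = TcpStream::connect(&backend_addr)?;

        let (mut c_read, mut c_write) = (client.try_clone()?, client);
        let (mut b_read, mut b_write) = (backend.try_clone()?, backend);

        // client -> backend; propagate the client's EOF to the backend
        thread::spawn(move || {
            let _ = io::copy(&mut c_read, &mut b_write);
            let _ = b_write.shutdown(Shutdown::Write);
        });
        // backend -> client; propagate the backend's EOF to the client
        let _ = io::copy(&mut b_read, &mut c_write);
        let _ = c_write.shutdown(Shutdown::Write);
    }
    Ok(())
}
```

As written, the accept loop serves one connection at a time (roughly the `--single-connection` behavior); a full proxy would run both copy directions off the accept thread.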
Path Configuration
hero_proc uses platform-specific default paths:
Linux (System/PID1 mode)
- Config directory: `/etc/hero_proc/services`
- System services: `/etc/hero_proc/system` (PID1 mode only)
- Socket: `/run/hero_proc.sock`
macOS / Windows (Standalone mode)
- Config directory: `$HOME/hero/cfg/hero_proc`
- Socket: `$HOME/hero/var/sockets/hero_proc_server.sock`
You can override these with environment variables (see below).
See docs/PATHS.md for detailed path configuration documentation.
Environment Variables
| Variable | Default | Description |
|---|---|---|
| HERO_PROC_LOG_LEVEL | info | Log level: trace, debug, info, warn, error |
| HERO_PROC_CONFIG_DIR | Platform-specific (see above) | Service config directory |
| HERO_PROC_SOCKET | Platform-specific (see above) | Unix socket path |
| HERO_PROC_CONTAINER | unset | If set, hero_proc_pid1 runs in container mode |
Example: Custom Paths
```shell
# Use custom config and socket directories
export HERO_PROC_CONFIG_DIR=/opt/my-services
export HERO_PROC_SOCKET=/tmp/my-hero_proc.sock

# Start server
hero_proc_server

# Connect with CLI
hero_proc list
```
Library Usage
SDK Client
Use hero_proc_sdk for IPC communication with the running server. The client is auto-generated from the OpenRPC specification:
```rust
use hero_proc_sdk::HeroProcRPCAPIClient;

// Connect via Unix socket (async)
let client = HeroProcRPCAPIClient::connect_socket("/path/to/hero_proc_server.sock").await?;

// List services
let list = client.service_list(hero_proc_sdk::ServiceListInput {
    context: None,
}).await?;

// Get service status
let status = client.service_status(hero_proc_sdk::ServiceStatusInput {
    name: "my-service".into(),
    context: None,
}).await?;
```
For ergonomic service construction, use the builder API:
```rust
use hero_proc_sdk::{ServiceBuilder, ActionBuilder};

let service = ServiceBuilder::new("myapp")
    .description("My application")
    .exec("./myapp --server")
    .requires(&["database"])
    .build();
```
Persistence Layer (Factory Pattern)
For direct database access and offline service configuration management, use hero_proc_lib and its factory pattern:
```rust
use hero_proc_lib::HeroProcDb;

let db = HeroProcDb::with_defaults()?;

// Access jobs, runs, services, and logs via namespaced APIs
db.jobs.list()?;
db.runs.insert(service, job, trigger, command)?;
db.services.load_from_file(path)?;
db.logging.append(run_id, "stdout", message)?;
```
The factory pattern provides a unified entry point to SQLite persistence with clean, type-safe APIs. See crates/hero_proc_lib/src/db/README.md for complete API documentation.
Docker Usage
```shell
# Build test image
docker build -t hero_proc-test -f docker/Dockerfile .

# Run (uses container mode automatically)
docker run -it --rm hero_proc-test

# With debug logging
docker run -it --rm -e HERO_PROC_LOG_LEVEL=debug hero_proc-test

# Explicit container mode
docker run -it --rm -e HERO_PROC_CONTAINER=1 hero_proc-test
```
Shutdown Ordering
Services are stopped in reverse dependency order:
```
Example: database <- app <- worker

Startup order:  database -> app -> worker
Shutdown order: worker -> app -> database
```
When stopping a single service, dependents are stopped first:
- `hero_proc stop database` stops worker, then app, then database
- Dependencies are NOT auto-stopped (other services may need them)
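The ordering rule amounts to a topological sort of the `requires` graph, with shutdown being the reverse of startup. A minimal sketch using Kahn's algorithm over invented, simplified types (not hero_proc's actual scheduler):

```rust
use std::collections::HashMap;

/// Illustrative startup ordering via Kahn's algorithm (not hero_proc's
/// scheduler). `requires` maps each service to the services it depends on;
/// the returned order starts dependencies before their dependents, and
/// shutdown order is simply the reverse.
fn start_order(requires: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    let mut indegree: HashMap<&str, usize> = HashMap::new();
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (&svc, deps) in requires {
        *indegree.entry(svc).or_insert(0) += deps.len();
        for &dep in deps {
            indegree.entry(dep).or_insert(0);
            dependents.entry(dep).or_default().push(svc); // reverse edge
        }
    }
    // Services with no unstarted dependencies may start immediately.
    let mut ready: Vec<&str> = indegree
        .iter()
        .filter(|&(_, &n)| n == 0)
        .map(|(&s, _)| s)
        .collect();
    let mut order = Vec::new();
    while let Some(svc) = ready.pop() {
        order.push(svc.to_string());
        // Starting `svc` unblocks each service that requires it.
        for &dependent in dependents.get(svc).map(Vec::as_slice).unwrap_or(&[]) {
            let n = indegree.get_mut(dependent).unwrap();
            *n -= 1;
            if *n == 0 {
                ready.push(dependent);
            }
        }
    }
    order
}
```

For the README's chain (`database <- app <- worker`) this yields the documented startup order, and reversing it gives the shutdown order.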
Development
```shell
make check     # Verify workspace builds
make test      # Run unit tests
make build     # Build all release binaries
make install   # Build release and install to ~/hero/bin/
make lint      # Run clippy linter
make test-all  # Run all tests (unit + bash + rhai)

# Run server + UI
make run       # Release build, install to ~/hero/bin/, start on port 9999
make rundev    # Debug build, install to ~/hero/bin/, start on port 9999
make stop      # Graceful shutdown

# Run specific integration tests
make test-bash # Bash-based integration tests
make test-rhai # Rhai-based integration tests

# Playground
make play-tui  # Launch TUI with sample services for manual testing
make play-web  # Launch web UI with sample services
```
License
See LICENSE file.