forked from herocode/horus

Compare commits: `1208ef00a1...main_cicdp`

5 Commits:

- aa30c54742
- 8fdfdd1746
- dffb17ffbc
- 8c33c73b3c
- 7675dc2150
@@ -1,881 +0,0 @@
# Gitea Actions Workflows - Line-by-Line Explanation

This document provides a detailed explanation of every line in the CI/CD workflows, explaining what each line does, why it's needed, and how it works.

---

## Table of Contents

1. [CI Workflow (ci.yml)](#ci-workflow-ciyml)
2. [Release Workflow (release.yml)](#release-workflow-releaseyml)
3. [Testing Guide](#testing-guide)
4. [Common Patterns Explained](#common-patterns-explained)

---

## CI Workflow (ci.yml)

### Header Section

```yaml
name: CI
```

**What**: Defines the workflow name displayed in the Gitea Actions UI

**Why**: Helps identify this workflow among multiple workflows

**How**: Gitea reads this and displays "CI" in the Actions tab

---

### Trigger Configuration

```yaml
on:
```

**What**: Starts the trigger configuration section

**Why**: Tells Gitea when to run this workflow

**How**: Gitea monitors repository events and matches them against these triggers

```yaml
push:
  branches:
    - '**'
```

**What**: Triggers the workflow on a push to any branch

**Why**: We want to test every commit on every branch

**How**: `'**'` is a glob pattern matching all branch names (main, develop, feature/*, etc.)

**Alternative**: Use `- main` to only run on the main branch

```yaml
pull_request:
  types: [opened, synchronize, reopened]
```

**What**: Triggers the workflow on pull request events

**Why**: Test code before merging into the main branch

**How**:

- `opened` - When the PR is first created
- `synchronize` - When new commits are pushed to the PR
- `reopened` - When a closed PR is reopened

**Note**: Does not run on `closed` to save resources
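
Taken together, the trigger section of `ci.yml` therefore looks roughly like the following sketch, assembled from the fragments above (indentation is reconstructed, not copied from the file):

```yaml
on:
  push:
    branches:
      - '**'
  pull_request:
    types: [opened, synchronize, reopened]
```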

---

### Job Definition

```yaml
jobs:
```

**What**: Starts the jobs section

**Why**: Workflows contain one or more jobs that run tasks

**How**: Each job runs in a fresh virtual environment

```yaml
build-and-test:
```

**What**: Job identifier (internal name)

**Why**: Unique ID for this job, used in logs and dependencies

**How**: Must be unique within the workflow; use kebab-case

```yaml
name: Build & Test
```

**What**: Human-readable job name

**Why**: Displayed in the Gitea UI for better readability

**How**: Shows in the Actions tab instead of "build-and-test"

```yaml
runs-on: ubuntu-latest
```

**What**: Specifies which runner to use

**Why**: Determines the OS and environment for the job

**How**: Gitea matches this label with available runners

**Note**: Your runner must have the `ubuntu-latest` label configured

---

### Steps Section

```yaml
steps:
```

**What**: Starts the list of steps to execute

**Why**: Steps are the individual tasks that make up a job

**How**: Steps run sequentially in order

---

#### Step 1: Checkout Code

```yaml
- name: Checkout code
```

**What**: Human-readable step name

**Why**: Helps identify this step in logs

**How**: Displayed in the workflow run UI

```yaml
uses: actions/checkout@v4
```

**What**: Uses a pre-built action to check out the code

**Why**: Clones your repository into the runner's workspace

**How**:

- `actions/checkout` - GitHub's official checkout action (Gitea compatible)
- `@v4` - Pins to version 4 for stability

**What it does**:

1. Clones the repository
2. Checks out the commit that triggered the workflow
3. Sets up git configuration

**Why needed**: Without this, the runner has no access to your code

---

#### Step 2: Setup Rust

```yaml
- name: Setup Rust toolchain
  uses: actions-rust-lang/setup-rust-toolchain@v1
```

**What**: Installs the Rust toolchain on the runner

**Why**: Needed to compile Rust code

**How**: Downloads and installs rustc, cargo, and related tools

**What it does**:

1. Installs the stable Rust version
2. Adds cargo to PATH
3. Configures the cargo home directory

```yaml
with:
  toolchain: stable
```

**What**: Specifies which Rust version to install

**Why**: `stable` ensures we use the latest stable Rust release

**How**: Downloads from rust-lang.org

**Alternatives**: `nightly`, `beta`, or a specific version like `1.75.0`

```yaml
components: rustfmt, clippy
```

**What**: Installs additional Rust components

**Why**:

- `rustfmt` - Code formatter (needed for the formatting check)
- `clippy` - Linter (needed for the lint check)

**How**: Installed via rustup alongside the toolchain
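
Putting the fragments above together, the full setup step looks roughly like this sketch (your ci.yml may differ in details such as caching options offered by the action):

```yaml
- name: Setup Rust toolchain
  uses: actions-rust-lang/setup-rust-toolchain@v1
  with:
    toolchain: stable
    components: rustfmt, clippy
```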

---

#### Step 3-5: Caching

**Why caching is needed**:

- Rust compilation is slow
- Dependencies rarely change
- Caching speeds up builds from ~15 minutes to ~2 minutes

```yaml
- name: Cache cargo registry
  uses: actions/cache@v4
```

**What**: Caches the Cargo registry

**Why**: Stores downloaded crate metadata to avoid re-downloading

**How**: Uses GitHub's cache action (Gitea compatible)

```yaml
with:
  path: ~/.cargo/registry
```

**What**: Directory to cache

**Why**: This is where Cargo stores downloaded crate files

**How**: The entire directory is compressed and stored

```yaml
key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
```

**What**: Unique cache key

**Why**: Identifies this specific cache

**How**:

- `${{ runner.os }}` - OS name (e.g., "Linux")
- `cargo-registry` - Cache identifier
- `${{ hashFiles('**/Cargo.lock') }}` - Hash of Cargo.lock

**Result**: A key like `Linux-cargo-registry-a1b2c3d4`

**When it changes**: When Cargo.lock changes (dependencies updated)

```yaml
restore-keys: |
  ${{ runner.os }}-cargo-registry-
```

**What**: Fallback cache keys

**Why**: If no exact match is found, use a partial match

**How**: Tries to find any cache starting with `Linux-cargo-registry-`

**Benefit**: Even with dependency changes, most of the cache is still valid

**Same pattern repeats for**:

- Cargo index cache (`~/.cargo/git`)
- Build cache (`target/`)
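
A sketch of all three cache steps side by side. The registry and build-target key prefixes match what is documented above and in the later diff; the exact key name used for the git index cache is an assumption for illustration:

```yaml
- name: Cache cargo registry
  uses: actions/cache@v4
  with:
    path: ~/.cargo/registry
    key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: |
      ${{ runner.os }}-cargo-registry-

- name: Cache cargo index
  uses: actions/cache@v4
  with:
    path: ~/.cargo/git
    key: ${{ runner.os }}-cargo-index-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: |
      ${{ runner.os }}-cargo-index-

- name: Cache cargo build
  uses: actions/cache@v4
  with:
    path: target
    key: ${{ runner.os }}-cargo-build-target-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: |
      ${{ runner.os }}-cargo-build-target-
```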

---

#### Step 6: Check Code

```yaml
- name: Check code
  run: cargo check --workspace --verbose
```

**What**: Runs the `cargo check` command

**Why**: Fast compilation check without producing binaries

**How**:

- `cargo check` - Compiles code but doesn't generate executables
- `--workspace` - Check all packages in the workspace
- `--verbose` - Show detailed output

**Benefit**: Catches compilation errors quickly (~2x faster than a full build)

**Exit code**: Non-zero if compilation fails, which fails the workflow

---

#### Step 7: Run Tests

```yaml
- name: Run tests
  run: cargo test --workspace --verbose
```

**What**: Runs all tests in the workspace

**Why**: Ensures code changes don't break functionality

**How**:

- Compiles test code
- Runs all `#[test]` functions
- Runs integration tests in the `tests/` directory

**Exit code**: Non-zero if any test fails

---

#### Step 8: Run Clippy

```yaml
- name: Run clippy
  run: cargo clippy --workspace -- -D warnings
```

**What**: Runs the Rust linter

**Why**: Catches common mistakes and enforces best practices

**How**:

- `cargo clippy` - Runs the clippy linter
- `--workspace` - Lint all packages
- `--` - Separator between cargo args and clippy args
- `-D warnings` - Treat warnings as errors

**Result**: Fails if any clippy warnings are found

**Examples of what clippy catches**:

- Unused variables
- Inefficient code patterns
- Potential bugs
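
For instance, a hypothetical snippet like the one below compiles fine under `cargo check` but fails this step because of `-D warnings` (the function is purely illustrative, not code from this repository):

```rust
fn total(prices: &Vec<u32>) -> u32 {
    // clippy::ptr_arg: prefer &[u32] over &Vec<u32> as a parameter
    let mut sum = 0;
    for i in 0..prices.len() {
        // clippy::needless_range_loop: iterate over the slice directly
        sum += prices[i];
    }
    sum
}

fn main() {
    let unused = 1; // unused variable: a warning, denied by `-D warnings`
    println!("{}", total(&vec![1, 2, 3]));
}
```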

---

#### Step 9: Check Formatting

```yaml
- name: Check formatting
  run: cargo fmt --all -- --check
```

**What**: Checks if code is properly formatted

**Why**: Enforces consistent code style

**How**:

- `cargo fmt` - Rust formatter
- `--all` - Check all packages
- `--` - Separator
- `--check` - Don't modify files, just check

**Exit code**: Non-zero if any file is not formatted

**To fix locally**: Run `cargo fmt --all` without `--check`

---

#### Step 10: Build Release Binaries

```yaml
- name: Build release binaries
  run: cargo build --workspace --release --verbose
```

**What**: Builds all binaries with optimizations

**Why**: Ensures release builds work and produces artifacts

**How**:

- `cargo build` - Compile code
- `--workspace` - Build all packages
- `--release` - Enable optimizations (from the Cargo.toml `profile.release` section)
- `--verbose` - Show detailed output

**Result**: Binaries in the `target/release/` directory

**Time**: ~5-10 minutes (first run), ~2-5 minutes (cached)

---

#### Step 11: List Binaries

```yaml
- name: List built binaries
  run: |
    echo "Built binaries:"
    ls -lh target/release/ | grep -E '^-.*x.*'
```

**What**: Lists built executable files

**Why**: Helps verify all binaries were built successfully

**How**:

- `ls -lh` - List files with human-readable sizes
- `grep -E '^-.*x.*'` - Keep regular files whose permission string contains an execute bit

**Output**: Shows binary names and sizes in the logs

---

#### Step 12: Upload Artifacts

```yaml
- name: Upload build artifacts
  uses: actions/upload-artifact@v4
```

**What**: Uploads files to Gitea for download

**Why**: Makes binaries available for testing without creating a release

**How**: Uses GitHub's upload action (Gitea compatible)

```yaml
with:
  name: binaries-${{ github.sha }}
```

**What**: Artifact name

**Why**: Unique name for this set of binaries

**How**: `${{ github.sha }}` is the Git commit SHA, giving a name like `binaries-a1b2c3d4`

```yaml
path: |
  target/release/supervisor
  target/release/coordinator
  target/release/horus
  target/release/osiris
  target/release/herorunner
  target/release/runner_osiris
  target/release/runner_sal
```

**What**: Files to upload

**Why**: These are the 7 binaries we want to preserve

**How**: Each line is a file path; `|` allows a multi-line list

```yaml
retention-days: 7
```

**What**: How long to keep artifacts

**Why**: Saves storage space by auto-deleting old artifacts

**How**: Gitea automatically deletes them after 7 days

```yaml
if-no-files-found: warn
```

**What**: What to do if the files don't exist

**Why**: `warn` logs a warning but doesn't fail the workflow

**How**: Useful if some binaries fail to build

**Alternatives**: `error` (fail the workflow), `ignore` (silent)
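
To consume these artifacts from a later job, the matching download action can be used. A minimal sketch - the `smoke-test` job itself is hypothetical and not part of the current ci.yml:

```yaml
smoke-test:
  name: Smoke test binaries
  needs: build-and-test
  runs-on: ubuntu-latest
  steps:
    - name: Download build artifacts
      uses: actions/download-artifact@v4
      with:
        name: binaries-${{ github.sha }}
        path: binaries
    - name: Quick sanity check
      run: |
        chmod +x binaries/supervisor
        ./binaries/supervisor --help
```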

---

## Release Workflow (release.yml)

### Header and Triggers

```yaml
name: Release
```

**What**: Workflow name

**Why**: Identifies this as the release workflow

```yaml
on:
  push:
    tags:
      - 'v*.*.*'
```

**What**: Triggers on version tags

**Why**: Only create releases for version tags

**How**:

- `tags:` - Watches for tag pushes
- `'v*.*.*'` - Glob pattern matching semantic versions

**Matches**: `v1.0.0`, `v2.1.3`, `v0.1.0`

**Doesn't match**: `v1.0`, `1.0.0`, `release-1.0.0`

---

### Job Setup

```yaml
jobs:
  build-release:
    name: Build Release Binaries
    runs-on: ubuntu-latest
```

**Same as the CI workflow** - See above for the explanation

---

### Steps 1-2: Checkout and Setup

**Same as the CI workflow** - See above for the explanation

---

#### Step 3: Extract Version

```yaml
- name: Extract version from tag
  id: version
```

**What**: Names this step and gives it an ID

**Why**: `id` allows other steps to reference this step's outputs

**How**: Use `${{ steps.version.outputs.VERSION }}` in later steps

```yaml
run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
```

**What**: Extracts the version from the tag name

**Why**: Needed for naming release files

**How**:

- `GITHUB_REF` - Full ref like `refs/tags/v1.0.0`
- `${GITHUB_REF#refs/tags/}` - Bash parameter expansion; removes the `refs/tags/` prefix
- Result: `v1.0.0`
- `>> $GITHUB_OUTPUT` - Sets the output variable

**Usage**: Later steps access it via `${{ steps.version.outputs.VERSION }}`
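
You can reproduce the extraction in a throwaway shell to see exactly what the step writes:

```bash
GITHUB_REF="refs/tags/v1.0.0"        # what the runner sets when tag v1.0.0 is pushed
echo "${GITHUB_REF#refs/tags/}"      # prints: v1.0.0 (prefix removed)

# In the workflow the value is appended to the step's output file, roughly:
# echo "VERSION=v1.0.0" >> "$GITHUB_OUTPUT"
```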

---

#### Step 4: Build Release Binaries

```yaml
- name: Build release binaries
  run: cargo build --workspace --release --verbose
```

**Same as the CI workflow** - Builds optimized binaries

**Why here**: Need fresh release builds for distribution

---

#### Step 5: Strip Binaries

```yaml
- name: Strip binaries
  run: |
    strip target/release/supervisor || true
    strip target/release/coordinator || true
    # ... etc for all 7 binaries
```

**What**: Removes debug symbols from the binaries

**Why**: Reduces binary size by 50-90%

**How**:

- `strip` - Linux command that removes debugging symbols
- `|| true` - Don't fail if strip fails (some binaries might not exist)

**Result**: Smaller binaries, faster downloads

**Example**: 50MB binary → 5MB binary

---

#### Step 6: Create Release Directory

```yaml
- name: Create release directory
  run: mkdir -p release-artifacts
```

**What**: Creates a directory for the release files

**Why**: Organize artifacts before uploading

**How**: `mkdir -p` creates the directory (and doesn't fail if it already exists)

---

#### Step 7: Package Binaries

```yaml
- name: Package binaries
  run: |
    # Package each binary as a tarball
    for binary in supervisor coordinator horus osiris herorunner runner_osiris runner_sal; do
```

**What**: Loops through all binary names

**Why**: Package each binary separately

**How**: Bash for loop

```yaml
if [ -f "target/release/$binary" ]; then
```

**What**: Checks if the binary file exists

**Why**: Skip it if the binary wasn't built

**How**: `-f` tests whether the file exists

```yaml
tar -czf "release-artifacts/${binary}-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz" \
  -C target/release "$binary"
```

**What**: Creates a compressed tarball

**Why**: Standard distribution format for Linux binaries

**How**:

- `tar` - Archive tool
- `-c` - Create archive
- `-z` - Compress with gzip
- `-f` - Output filename
- `-C target/release` - Change to this directory first
- `"$binary"` - File to archive

**Result**: A file like `supervisor-v1.0.0-linux-x86_64.tar.gz`

**Naming convention**: `{name}-{version}-{platform}-{arch}.tar.gz`

```yaml
      echo "Packaged $binary"
    else
      echo "Warning: $binary not found, skipping"
    fi
  done
```

**What**: Logs success or a warning

**Why**: Helps debug if binaries are missing

**How**: Simple echo statements

---

#### Step 8: Generate Checksums

```yaml
- name: Generate checksums
  run: |
    cd release-artifacts
    sha256sum *.tar.gz > checksums.txt
    cat checksums.txt
```

**What**: Creates SHA256 checksums for all tarballs

**Why**: Allows users to verify download integrity

**How**:

- `cd release-artifacts` - Change to the artifact directory
- `sha256sum *.tar.gz` - Calculate the SHA256 hash of each tarball
- `> checksums.txt` - Save to a file
- `cat checksums.txt` - Display it in the logs

**Result**: A file with lines like:

```
a1b2c3d4...  supervisor-v1.0.0-linux-x86_64.tar.gz
e5f6a7b8...  coordinator-v1.0.0-linux-x86_64.tar.gz
```

---

#### Step 9: Create Release

```yaml
- name: Create Release
  uses: actions/gitea-release@v1
```

**What**: Uses Gitea's release action

**Why**: Creates a release with attached files

**How**: Calls the Gitea API to create the release

```yaml
with:
  files: release-artifacts/*
```

**What**: Files to attach to the release

**Why**: Makes the binaries downloadable

**How**: The glob pattern uploads all files in the directory

```yaml
token: ${{ secrets.GITHUB_TOKEN }}
```

**What**: Authentication token

**Why**: Needed to create releases via the API

**How**: Gitea automatically provides this secret

**Security**: The token is scoped to this repository only

```yaml
tag_name: ${{ steps.version.outputs.VERSION }}
```

**What**: Tag to create the release for

**Why**: Associates the release with the tag

**How**: Uses the version extracted in step 3

```yaml
name: Release ${{ steps.version.outputs.VERSION }}
```

**What**: Release title

**Why**: Displayed on the releases page

**How**: Combines "Release" + version (e.g., "Release v1.0.0")

```yaml
body: |
  ## Horus Release ${{ steps.version.outputs.VERSION }}

  ### Binaries
  This release includes the following binaries for Linux x86_64:
  - `supervisor` - Hero Supervisor service
  # ... etc
```

**What**: Release description (markdown)

**Why**: Provides context and instructions

**How**: Multi-line string with markdown formatting

**Result**: Rendered as formatted text on the release page

```yaml
draft: false
```

**What**: Publish immediately

**Why**: Make the release public right away

**How**: `false` = published, `true` = draft

**Alternative**: Set to `true` to review before publishing

```yaml
prerelease: false
```

**What**: Mark as a stable release

**Why**: Indicates this is production-ready

**How**: `false` = stable, `true` = pre-release (beta, alpha)

**When to use `true`**: For tags like `v1.0.0-beta.1`
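
For example, a pre-release could be cut like this (illustrative commands; the tag name is an example):

```bash
git tag v1.0.0-beta.1
git push origin v1.0.0-beta.1
# and set prerelease: true in release.yml (or detect the "-" suffix) for such tags
```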

---

## Testing Guide

### Testing CI Workflow Locally

Before pushing, test locally:

```bash
# 1. Check compilation
cargo check --workspace --verbose

# 2. Run tests
cargo test --workspace --verbose

# 3. Run clippy
cargo clippy --workspace -- -D warnings

# 4. Check formatting
cargo fmt --all -- --check

# 5. Build release
cargo build --workspace --release --verbose

# 6. Verify binaries exist
ls -lh target/release/ | grep -E '^-.*x.*'
```

**Expected result**: All commands should succeed with exit code 0

---

### Testing CI Workflow in Gitea

```bash
# 1. Create a test branch
git checkout -b test-ci

# 2. Make a small change (e.g., add a comment)
echo "// Test CI" >> bin/supervisor/src/main.rs

# 3. Commit and push
git add .
git commit -m "test: Trigger CI workflow"
git push origin test-ci

# 4. Check Gitea Actions
# Navigate to: https://git.ourworld.tf/peternashaat/horus/actions
```

**Expected result**:

- The workflow appears in the Actions tab
- All steps complete successfully (green checkmarks)
- Artifacts are uploaded

**To download artifacts**:

1. Click on the workflow run
2. Scroll to the "Artifacts" section
3. Click to download

---

### Testing Release Workflow Locally

Simulate a release build:

```bash
# 1. Build release binaries
cargo build --workspace --release --verbose

# 2. Strip binaries
strip target/release/supervisor || true
strip target/release/coordinator || true
# ... etc

# 3. Create test directory
mkdir -p test-release

# 4. Package binaries
for binary in supervisor coordinator horus osiris herorunner runner_osiris runner_sal; do
  if [ -f "target/release/$binary" ]; then
    tar -czf "test-release/${binary}-v0.0.1-test-linux-x86_64.tar.gz" \
      -C target/release "$binary"
    echo "Packaged $binary"
  fi
done

# 5. Generate checksums
cd test-release
sha256sum *.tar.gz > checksums.txt
cat checksums.txt
cd ..

# 6. Test extraction
cd test-release
tar -xzf supervisor-v0.0.1-test-linux-x86_64.tar.gz
./supervisor --help
cd ..
```

**Expected result**:

- All binaries package successfully
- Checksums are generated
- The binary extracts and runs

---

### Testing Release Workflow in Gitea

```bash
# 1. Ensure code is ready
git checkout main
git pull

# 2. Create a test tag
git tag v0.1.0-test

# 3. Push the tag
git push origin v0.1.0-test

# 4. Check Gitea
# Navigate to: https://git.ourworld.tf/peternashaat/horus/releases
```

**Expected result**:

- The release appears in the Releases tab
- All 7 binaries are attached as `.tar.gz` files
- `checksums.txt` is attached
- The release notes are properly formatted

**To test download**:

```bash
# Download a binary
wget https://git.ourworld.tf/peternashaat/horus/releases/download/v0.1.0-test/supervisor-v0.1.0-test-linux-x86_64.tar.gz

# Extract
tar -xzf supervisor-v0.1.0-test-linux-x86_64.tar.gz

# Test
chmod +x supervisor
./supervisor --help
```

**Cleanup test release**:

1. Go to the Releases page
2. Click on the test release
3. Click the "Delete" button

---

## Common Patterns Explained

### Why `|| true`?

```yaml
strip target/release/supervisor || true
```

**What**: Bash OR operator

**Why**: Prevents the step from failing if the command fails

**How**:

- If `strip` succeeds (exit 0), continue
- If `strip` fails (exit non-zero), `|| true` runs and returns 0

**Use case**: Some binaries might not exist; don't fail the whole workflow
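
A quick way to see the effect in a local shell (illustrative):

```bash
strip ./does-not-exist; echo "exit: $?"            # exit: 1  -> would fail the step
strip ./does-not-exist || true; echo "exit: $?"    # exit: 0  -> step continues
```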

---

### Why `${{ }}` Syntax?

```yaml
${{ github.sha }}
${{ steps.version.outputs.VERSION }}
```

**What**: GitHub Actions expression syntax

**Why**: Access variables and context

**How**: Gitea evaluates these at runtime

**Types**:

- `github.*` - Workflow context (sha, ref, actor, etc.)
- `secrets.*` - Secret variables
- `steps.*` - Output from previous steps
- `runner.*` - Runner information (os, arch, etc.)
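
A debugging sketch: a step like the one below can be dropped into a job to print the most commonly used contexts (the values printed depend on your runner and repository):

```yaml
- name: Show common contexts
  run: |
    echo "commit: ${{ github.sha }}"
    echo "ref:    ${{ github.ref }}"
    echo "actor:  ${{ github.actor }}"
    echo "os:     ${{ runner.os }}"
    # only meaningful after a step with id 'version' has run:
    echo "version: ${{ steps.version.outputs.VERSION }}"
```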

---

### Why `--workspace`?

```yaml
cargo build --workspace
```

**What**: Cargo flag to include all workspace members

**Why**: Your project is a workspace with multiple packages

**How**: Cargo reads the `[workspace]` section of `Cargo.toml`

**Without it**: Only the root package is built

**With it**: All 7 binaries are built
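
A minimal sketch of what such a `[workspace]` section can look like; the member paths here are assumptions for illustration, not the repository's actual layout:

```toml
[workspace]
members = [
    "bin/supervisor",
    "bin/coordinator",
    "lib/clients/supervisor",
]
resolver = "2"
```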

---

### Why Separate CI and Release Workflows?

**CI Workflow**:

- Runs frequently (every push)
- Fast feedback
- Doesn't create releases

**Release Workflow**:

- Runs rarely (only on tags)
- Slower (includes packaging)
- Creates permanent artifacts

**Benefit**: Fast CI doesn't slow down development, and releases are deliberate

---

### Why Cache Three Directories?

1. **`~/.cargo/registry`** - Downloaded crate files
   - Changes when: Dependencies are added/updated
   - Size: ~500MB - 2GB

2. **`~/.cargo/git`** - Git dependencies
   - Changes when: Git dependencies are updated
   - Size: ~100MB - 500MB

3. **`target/`** - Compiled artifacts
   - Changes when: Code or dependencies change
   - Size: ~2GB - 10GB

**Together**: They reduce build time from about 15 minutes to about 2 minutes

---

### Why `hashFiles('**/Cargo.lock')`?

**What**: Generates a hash of Cargo.lock

**Why**: The cache key changes when dependencies change

**How**:

- `**/Cargo.lock` - Find all Cargo.lock files
- `hashFiles()` - Generate a SHA256 hash

**Result**: Different hash = different cache = rebuild dependencies

**Benefit**: The cache is invalidated when dependencies change

---

## Summary

Both workflows follow best practices:

✅ **Fast feedback** - CI runs on every push

✅ **Comprehensive testing** - Check, test, lint, format

✅ **Optimized builds** - Caching reduces build time

✅ **Automated releases** - Tag-based release creation

✅ **Secure** - Uses scoped tokens, no manual secrets

✅ **Reproducible** - Pinned action versions

✅ **User-friendly** - Clear release notes and instructions

For more information, see [README.md](./README.md).
@@ -46,20 +46,49 @@ jobs:
restore-keys: |
${{ runner.os }}-cargo-build-target-
- name: Check code
# 👇 Don't fail CI on hero-runner's warnings
run: cargo check --workspace --verbose
- name: Check code (all crates under bin/)
env:
RUSTFLAGS: "--cap-lints=warn"
run: |
set -euo pipefail
find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
echo "=== cargo check --manifest-path $manifest ==="
cargo check --manifest-path "$manifest" --verbose
done
- name: Run tests
# 👇 Same trick for tests, otherwise they'd fail for the same reason
run: cargo test --workspace --verbose
- name: Run tests (all crates under bin/)
env:
RUSTFLAGS: "--cap-lints=warn"
run: |
set -euo pipefail
find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
echo "=== cargo test --manifest-path $manifest ==="
cargo test --manifest-path "$manifest" --verbose
done
- name: Run clippy
# Clippy stays strict (still uses -D warnings for clippy lints).
run: cargo clippy --workspace -- -D warnings
# If this later fails because of hero-runner, we can also add RUSTFLAGS here.
- name: Run clippy (all crates under bin/)
run: |
set -euo pipefail
find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
echo "=== cargo clippy --manifest-path $manifest ==="
cargo clippy --manifest-path "$manifest" -- -D warnings
done
- name: Check formatting
run: cargo fmt --all -- --check
- name: Build release binaries
# Build was already succeeding; leaving it without cap-lints is fine.
run: cargo build --workspace --release --verbose
- name: Build release binaries (all crates under bin/)
run: |
set -euo pipefail
find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
echo "=== cargo build --manifest-path $manifest --release ==="
cargo build --manifest-path "$manifest" --release --verbose
done
- name: List built binaries
run: |
@@ -79,4 +108,4 @@ jobs:
target/release/runner_osiris
target/release/runner_sal
retention-days: 7
if-no-files-found: warn
@@ -7,94 +7,109 @@ on:
jobs:
build-release:
name: Build Release Binaries
name: Build Release Binaries via hero-git installer
runs-on: ubuntu-latest
env:
IMAGE_NAME: hero-git:latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: stable
- name: Extract version from tag
id: version
run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> "$GITHUB_OUTPUT"
- name: Build release binaries
- name: Ensure hero-git image exists
run: cargo build --workspace --release --verbose
- name: Strip binaries
run: |
strip target/release/supervisor || true
if ! docker image inspect "$IMAGE_NAME" >/dev/null 2>&1; then
strip target/release/coordinator || true
echo "ERROR: Docker image '$IMAGE_NAME' not found on runner."
strip target/release/horus || true
echo "Build it manually on the runner with:"
strip target/release/osiris || true
echo " docker build -t $IMAGE_NAME -f Dockerfile ."
strip target/release/herorunner || true
exit 1
strip target/release/runner_osiris || true
fi
strip target/release/runner_sal || true
- name: Prepare bin output dir
run: mkdir -p hero-bin
- name: Run horus_full_install installer in container
run: |
set -euxo pipefail
docker run --rm \
-v "$PWD/hero-bin:/root/hero/bin" \
-e HEROLIB_REF=development_nile_installers \
"$IMAGE_NAME" \
bash -lc '
set -euxo pipefail
cd /opt/herolib
./examples/installers/horus/horus_full_install.vsh
echo "===== AFTER INSTALL, ls -R /root/hero ====="
ls -R /root/hero || true
'
- name: List built binaries
run: ls -al hero-bin
- name: Create release directory
run: mkdir -p release-artifacts
- name: Package binaries
run: |
# Package each binary as a tarball
for binary in supervisor coordinator horus osiris herorunner runner_osiris runner_sal; do
if [ -f "target/release/$binary" ]; then
if [ -f "hero-bin/$binary" ]; then
tar -czf "release-artifacts/${binary}-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz" \
-C target/release "$binary"
-C hero-bin "$binary"
echo "Packaged $binary"
else
echo "Warning: $binary not found, skipping"
fi
done
- name: Generate checksums
run: |
cd release-artifacts
sha256sum *.tar.gz > checksums.txt
if ls *.tar.gz >/dev/null 2>&1; then
cat checksums.txt
sha256sum *.tar.gz > checksums.txt
cat checksums.txt
else
echo "ERROR: no .tar.gz artifacts were produced; check previous steps (likely the installer didn't build any binaries)."
exit 1
fi
- name: Create Release
uses: actions/gitea-release@v1
uses: akkuman/gitea-release-action@v1
with:
files: release-artifacts/*
token: ${{ secrets.GITHUB_TOKEN }}
token: ${{ secrets.TOKEN_GITEA }}
tag_name: ${{ steps.version.outputs.VERSION }}
name: Release ${{ steps.version.outputs.VERSION }}
body: |
## Horus Release ${{ steps.version.outputs.VERSION }}
### Binaries
This release includes the following binaries for Linux x86_64:
Built via the herolib V installer (`horus_full_install.vsh`) inside
- `supervisor` - Hero Supervisor service
the `hero-git:latest` image and packaged for Linux x86_64:
- `coordinator` - Hero Coordinator service
- `supervisor`
- `horus` - Horus main binary
- `coordinator`
- `osiris` - Osiris server
- `horus`
- `herorunner` - Hero runner
- `osiris`
- `runner_osiris` - Osiris runner
- `herorunner`
- `runner_sal` - SAL runner
- `runner_osiris`
- `runner_sal`
### Installation
Download the appropriate binary for your system:
### Installation (example)
```bash
# Example: Download and install supervisor
wget https://git.ourworld.tf/herocode/horus/releases/download/${{ steps.version.outputs.VERSION }}/supervisor-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz
wget https://git.ourworld.tf/peternashaat/horus/releases/download/${{ steps.version.outputs.VERSION }}/supervisor-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz
tar -xzf supervisor-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz
chmod +x supervisor
sudo mv supervisor /usr/local/bin/
```
### Verification
Verify the integrity of downloaded files using the checksums:
```bash
sha256sum -c checksums.txt
```
### Changes
See commit history for detailed changes in this release.
draft: false
prerelease: false
@@ -29,6 +29,7 @@ repository.workspace = true
[dependencies]
# Integration test dependencies - no library dependencies, tests spawn binaries
hero-supervisor-openrpc-client = { path = "lib/clients/supervisor" }
hero-coordinator-client = { path = "lib/clients/coordinator" }
hero-job = { path = "lib/models/job" }
hero-job-client = { path = "lib/clients/job" }
tokio = { workspace = true }
@@ -70,6 +70,21 @@ struct Cli {
help = "Bind port for WebSocket JSON-RPC server (default: 9653)"
)]
api_ws_port: u16,
#[arg(
long = "supervisor-transport",
env = "SUPERVISOR_TRANSPORT",
default_value = "mycelium",
help = "Transport to use for supervisor communication: 'mycelium' or 'http' (default: mycelium)"
)]
supervisor_transport: String,
#[arg(
long = "supervisor-http-url",
env = "SUPERVISOR_HTTP_URL",
help = "HTTP URL for supervisor when using HTTP transport (e.g., http://127.0.0.1:3031)"
)]
supervisor_http_url: Option<String>,
}
#[tokio::main]
@@ -99,8 +114,9 @@ async fn main() {
// Shared application state
let state = Arc::new(herocoordinator::rpc::AppState::new(service));
// Start router workers (auto-discovered contexts) using a single global SupervisorHub (no separate inbound listener)
// Start router workers (auto-discovered contexts) using a single global SupervisorHub
{
// Skip router if using HTTP transport (no mycelium needed)
if cli.supervisor_transport == "mycelium" {
let base_url = format!("http://{}:{}", cli.mycelium_ip, cli.mycelium_port);
let hub = herocoordinator::clients::SupervisorHub::new(
base_url.clone(),
@@ -118,6 +134,9 @@ async fn main() {
};
// Per-context outbound delivery loops (replies handled by SupervisorHub)
let _auto_handle = herocoordinator::router::start_router_auto(service_for_router, cfg);
info!("Router started with mycelium transport");
} else {
info!("Skipping router - using HTTP transport for supervisor communication");
}
// Build RPC modules for both servers
@@ -70,6 +70,21 @@ struct Cli {
help = "Bind port for WebSocket JSON-RPC server (default: 9653)"
)]
api_ws_port: u16,
#[arg(
long = "supervisor-transport",
env = "SUPERVISOR_TRANSPORT",
default_value = "mycelium",
help = "Transport to use for supervisor communication: 'mycelium' or 'http' (default: mycelium)"
)]
supervisor_transport: String,
#[arg(
long = "supervisor-http-url",
env = "SUPERVISOR_HTTP_URL",
help = "HTTP URL for supervisor when using HTTP transport (e.g., http://127.0.0.1:3031)"
)]
supervisor_http_url: Option<String>,
}
#[tokio::main]
@@ -99,8 +114,9 @@ async fn main() {
// Shared application state
let state = Arc::new(hero_coordinator::rpc::AppState::new(service));
// Start router workers (auto-discovered contexts) using a single global SupervisorHub (no separate inbound listener)
// Start router workers (auto-discovered contexts) using a single global SupervisorHub
{
// Skip router if using HTTP transport (no mycelium needed)
if cli.supervisor_transport == "mycelium" {
let base_url = format!("http://{}:{}", cli.mycelium_ip, cli.mycelium_port);
let mycelium = Arc::new(
hero_supervisor_openrpc_client::transports::MyceliumClient::new(&base_url)
@@ -121,6 +137,9 @@ async fn main() {
};
// Per-context outbound delivery loops (replies handled by SupervisorHub)
let _auto_handle = hero_coordinator::router::start_router_auto(service_for_router, cfg);
info!("Router started with mycelium transport");
} else {
info!("Skipping router - using HTTP transport for supervisor communication");
}
// Build RPC modules for both servers
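
For context, the new flags added above would be used roughly like this (an illustrative invocation; the binary name and port are taken from the help text, not from a documented command line):

```bash
# Default behaviour: mycelium transport, router workers are started
coordinator

# Skip the mycelium router and talk to the supervisor over HTTP instead
coordinator \
  --supervisor-transport http \
  --supervisor-http-url http://127.0.0.1:3031

# The same via environment variables
SUPERVISOR_TRANSPORT=http SUPERVISOR_HTTP_URL=http://127.0.0.1:3031 coordinator
```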
docker/Dockerfile (new file, 43 lines)
@@ -0,0 +1,43 @@
FROM ubuntu:24.04 AS base

# Includes a bunch of stuff hero will try to install later, so we don't have to
# do that on each new container launch
RUN apt update && apt install -y ssh wget unzip build-essential git redis-server sudo autoconf libtool iputils-ping net-tools rsync curl mc tmux libsqlite3-dev xz-utils git-lfs ufw libpq-dev

RUN wget https://github.com/vlang/v/releases/latest/download/v_linux.zip && \
    unzip v_linux.zip && \
    cd v && \
    ./v symlink

# Install bun.sh for docusaurus
RUN curl -fsSL https://bun.sh/install | bash

# Make a new stage so we can invalidate the cache by passing --no-cache-filter
FROM base AS clone

RUN git clone https://github.com/Incubaid/herolib /opt/herolib

RUN git clone https://github.com/incubaid/docusaurus_template /root/code/github/incubaid/docusaurus_template

# Warm the bun node modules cache
RUN cd /root/code/github/incubaid/docusaurus_template/template && /root/.bun/bin/bun install

RUN mkdir -p /root/.vmodules/incubaid

# Make a place for users to mount their ssh key file. We will copy to .ssh and
# change permissions in entrypoint script
RUN mkdir -p /root/ssh

COPY entrypoint.sh /bin/entrypoint.sh

RUN chmod +x /bin/entrypoint.sh

# Make a new stage so we can invalidate the cache by passing --no-cache-filter
FROM clone AS fetch

# Fetch to freshen the repos with minimal work
RUN cd /opt/herolib && git fetch
RUN cd /root/code/github/incubaid/docusaurus_template && git fetch

ENTRYPOINT ["/bin/entrypoint.sh"]
docker/Readme.md (new file, 268 lines)
@@ -0,0 +1,268 @@
|
# `hero-git` Docker Image
|
||||||
|
|
||||||
|
This directory contains a reusable Docker image and entrypoint script used to build Hero / Horus binaries via the V-based **herolib** installers.
|
||||||
|
|
||||||
|
It is designed for two main use cases:
|
||||||
|
|
||||||
|
1. **Local development / testing** – run the herolib CLI (`hero`) and Horus installers in a clean, reproducible environment.
|
||||||
|
2. **CI / Release builds** – build Horus binaries inside this image (using `horus_full_install.vsh`) and export them as release artifacts.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Files
|
||||||
|
|
||||||
|
### `Dockerfile`
|
||||||
|
|
||||||
|
The `Dockerfile` builds an Ubuntu-based image with all dependencies needed for `herolib` and its installers.
|
||||||
|
|
||||||
|
Key points:
|
||||||
|
|
||||||
|
* Base: `ubuntu:24.04`
|
||||||
|
* Installs system tooling:
|
||||||
|
|
||||||
|
* `ssh`, `wget`, `curl`, `unzip`, `build-essential`, `git`, `git-lfs`
|
||||||
|
* `redis-server`, `libsqlite3-dev`, `libpq-dev`, `autoconf`, `libtool`, `net-tools`, `iputils-ping`, `rsync`, `mc`, `tmux`, `ufw`, `xz-utils`
|
||||||
|
* Installs **V** compiler:
|
||||||
|
|
||||||
|
* Downloads `v_linux.zip` from the official V GitHub releases
|
||||||
|
* Unzips it and runs `./v symlink` so `v` is on the `PATH`
|
||||||
|
* Installs **Bun** (used for the docusaurus docs template):
|
||||||
|
|
||||||
|
* `curl -fsSL https://bun.sh/install | bash`
|
||||||
|
* Clones:
|
||||||
|
|
||||||
|
* `https://github.com/Incubaid/herolib` into `/opt/herolib`
|
||||||
|
* `https://github.com/incubaid/docusaurus_template` into `/root/code/github/incubaid/docusaurus_template`
|
||||||
|
* Warms Bun dependencies for the docusaurus template (for faster re-use):
|
||||||
|
|
||||||
|
* `cd /root/code/github/incubaid/docusaurus_template/template && bun install`
|
||||||
|
* Prepares directories:
|
||||||
|
|
||||||
|
* `/root/.vmodules/incubaid` – used by V module resolution
|
||||||
|
* `/root/ssh` – optional mount point for SSH keys
|
||||||
|
* Copies `entrypoint.sh` into the image and marks it executable.
|
||||||
|
|
||||||
|
The container’s entrypoint is set to the custom script `/bin/entrypoint.sh`.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### `entrypoint.sh`
|
||||||
|
|
||||||
|
The entrypoint script is responsible for:
|
||||||
|
|
||||||
|
1. Starting **Redis** in the background.
|
||||||
|
2. Optionally configuring **SSH** (if keys are provided).
|
||||||
|
3. Selecting the correct `herolib` source (mounted or cloned).
|
||||||
|
4. Checking out the desired `herolib` branch.
|
||||||
|
5. Building the `hero` CLI with V and making it available as `/bin/hero`.
|
||||||
|
6. Finally, executing any command passed to `docker run`.
|
||||||
|
|
||||||
|
#### Behavior in detail
|
||||||
|
|
||||||
|
```bash
|
||||||
|
#!/bin/bash
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
# Start Redis in the background
|
||||||
|
redis-server --daemonize yes
|
||||||
|
|
||||||
|
# Optional SSH setup: only if /root/ssh has keys
|
||||||
|
if [ -d /root/ssh ] && compgen -G "/root/ssh/*" > /dev/null; then
|
||||||
|
mkdir -p /root/.ssh
|
||||||
|
cp -r /root/ssh/* /root/.ssh/
|
||||||
|
chmod 600 /root/.ssh/*
|
||||||
|
eval "$(ssh-agent)"
|
||||||
|
ssh-add /root/.ssh/*
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Support optionally bind-mounting a local herolib into the container.
|
||||||
|
# If /opt/herolib_mount exists, we use that; otherwise we use the cloned /opt/herolib
|
||||||
|
rm -f /root/.vmodules/freeflowuniverse/herolib
|
||||||
|
if [ -d "/opt/herolib_mount" ]; then
|
||||||
|
ln -s /opt/herolib_mount/lib /root/.vmodules/incubaid/herolib
|
||||||
|
cd /opt/herolib_mount
|
||||||
|
else
|
||||||
|
ln -s /opt/herolib/lib /root/.vmodules/incubaid/herolib
|
||||||
|
cd /opt/herolib
|
||||||
|
git fetch
|
||||||
|
git checkout "${HEROLIB_REF:-development}"
|
||||||
|
git pull
|
||||||
|
fi
|
||||||
|
|
||||||
|
cd cli
|
||||||
|
echo "Building hero..."
|
||||||
|
v -enable-globals hero.v > build.log 2>&1 || (cat build.log && exit 1)
|
||||||
|
ln -s "$(realpath hero)" /bin/hero
|
||||||
|
|
||||||
|
cd /root
|
||||||
|
|
||||||
|
# If a command was provided to `docker run`, execute it as-is.
|
||||||
|
# Example: docker run hero-git:latest bash -lc 'cd /opt/herolib && ./examples/installers/horus/horus_full_install.vsh'
|
||||||
|
if [ "$#" -gt 0 ]; then
|
||||||
|
exec "$@"
|
||||||
|
else
|
||||||
|
# No command passed → give an interactive shell
|
||||||
|
exec bash
|
||||||
|
fi
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Important environment variables
|
||||||
|
|
||||||
|
* `HEROLIB_REF`
|
||||||
|
|
||||||
|
* Git ref (branch, tag, or commit) used when checking out `/opt/herolib`.
|
||||||
|
* Default: `development`.
|
||||||
|
* Example for CI: `HEROLIB_REF=development_nile_installers`.
|
||||||
|
|
||||||
|
#### Optional mounts
|
||||||
|
|
||||||
|
* `/opt/herolib_mount`
|
||||||
|
|
||||||
|
* If you bind-mount your local `herolib` repo here, the container will use it instead of the cloned `/opt/herolib`.
|
||||||
|
* Useful for local development when you want to test uncommitted changes.
|
||||||
|
* `/root/ssh`
|
||||||
|
|
||||||
|
* Optional directory containing SSH keys (e.g. `id_ed25519`) if you need to access private repositories.
|
||||||
|
* If present and non-empty, keys are copied to `/root/.ssh` and added to an SSH agent.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## What this image is used for
|
||||||
|
|
||||||
|
### 1. Local development / experimentation
|
||||||
|
|
||||||
|
You can drop into the container and manually run installers or commands against `herolib`:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Interactive shell with default branch
|
||||||
|
docker run -it --rm hero-git:latest
|
||||||
|
|
||||||
|
# Use a specific herolib ref
|
||||||
|
docker run -it --rm -e HEROLIB_REF=development_nile_installers hero-git:latest
|
||||||
|
|
||||||
|
# Mount a local herolib repo and use that instead of the cloned one
|
||||||
|
docker run -it --rm \
|
||||||
|
-v "$PWD/herolib:/opt/herolib_mount" \
|
||||||
|
hero-git:latest
|
||||||
|
```
|
||||||
|
|
||||||
|
Inside the container you can then run:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd /opt/herolib
|
||||||
|
./examples/installers/horus/horus_full_install.vsh
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. CI / Release builds for Horus
|
||||||
|
|
||||||
|
In CI (e.g. the `release.yml` workflow), this image is used to build Horus binaries via the V-based installer script:
|
||||||
|
|
||||||
|
* The workflow:
|
||||||
|
|
||||||
|
1. Ensures `hero-git:latest` is available on the self-hosted runner.
|
||||||
|
2. Creates a local directory (e.g. `hero-bin/`) on the runner.
|
||||||
|
3. Runs the container, mounting `hero-bin/` into `/root/hero/bin`.
|
||||||
|
4. Inside the container, executes `./examples/installers/horus/horus_full_install.vsh` from `/opt/herolib`.
|
||||||
|
5. The installer writes the compiled Horus binaries into `/root/hero/bin`, which appear on the host inside `hero-bin/`.
|
||||||
|
6. The workflow then packages those binaries into `.tar.gz` archives and uploads them as release artifacts.
|
||||||
|
|
||||||
|
Example CI step:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
- name: Run horus_full_install installer in container
|
||||||
|
run: |
|
||||||
|
set -euxo pipefail
|
||||||
|
|
||||||
|
docker run --rm \
|
||||||
|
-v "$PWD/hero-bin:/root/hero/bin" \
|
||||||
|
-e HEROLIB_REF=development_nile_installers \
|
||||||
|
hero-git:latest \
|
||||||
|
bash -lc '
|
||||||
|
set -euxo pipefail
|
||||||
|
cd /opt/herolib
|
||||||
|
./examples/installers/horus/horus_full_install.vsh
|
||||||
|
echo "===== AFTER INSTALL, ls -R /root/hero ====="
|
||||||
|
ls -R /root/hero || true
|
||||||
|
'
|
||||||
|
```
|
||||||
|
|
||||||
|
After this step, `hero-bin/` on the host should contain binaries like:
|
||||||
|
|
||||||
|
* `supervisor`
|
||||||
|
* `coordinator`
|
||||||
|
* `horus`
|
||||||
|
* `osiris`
|
||||||
|
* `herorunner`
|
||||||
|
* `runner_osiris`
|
||||||
|
* `runner_sal`
|
||||||
|
|
||||||
|
These are then packaged and attached to the Gitea release.
|
||||||
|
|
||||||
|
---

## How to build the image

From the directory containing the `Dockerfile` and `entrypoint.sh`:

```bash
# Build the image (force a clean rebuild)
docker build --no-cache -t hero-git:latest .

# Verify entrypoint contents
docker run --rm --entrypoint cat hero-git:latest /bin/entrypoint.sh | tail -n 20
```

You should see the `exec "$@"` block at the end of the script, which ensures that commands passed to `docker run` are executed as-is inside the container.
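As a quick sanity check of that passthrough (a sketch, assuming the `hero` CLI linked to `/bin/hero` by the entrypoint accepts `--help`):

```bash
# The argument after the image name is executed by the entrypoint via exec "$@"
docker run --rm hero-git:latest hero --help
```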
---

## How to run the image (examples)

### Basic interactive usage

```bash
# Default branch (HEROLIB_REF=development)
docker run -it --rm hero-git:latest

# Specific herolib ref
export HEROLIB_REF=development_nile_installers

docker run -it --rm \
  -e HEROLIB_REF=$HEROLIB_REF \
  hero-git:latest
```

### Build Horus binaries into a host directory

```bash
mkdir -p hero-bin

docker run --rm \
  -v "$PWD/hero-bin:/root/hero/bin" \
  -e HEROLIB_REF=development_nile_installers \
  hero-git:latest \
  bash -lc '
    set -euxo pipefail
    cd /opt/herolib
    ./examples/installers/horus/horus_full_install.vsh
    ls -al /root/hero/bin
  '

ls -al hero-bin
```

You should now see the built Horus binaries on the host in `hero-bin/`.

---
## Summary

* The `hero-git:latest` image encapsulates all dependencies required to build Horus via `herolib`.
* `entrypoint.sh`:

  * Starts Redis
  * Optionally configures SSH
  * Selects and updates the `herolib` checkout
  * Builds the `hero` CLI once
  * Runs any command passed to the container (e.g. Horus installers)

* CI uses this image to run `horus_full_install.vsh` and collect Horus binaries from `/root/hero/bin` for releases.
44 docker/entrypoint.sh (new file)

@@ -0,0 +1,44 @@
#!/bin/bash
set -euo pipefail

# Start Redis in the background
redis-server --daemonize yes

# Optional SSH setup: only if /root/ssh has keys
if [ -d /root/ssh ] && compgen -G "/root/ssh/*" > /dev/null; then
    mkdir -p /root/.ssh
    cp -r /root/ssh/* /root/.ssh/
    chmod 600 /root/.ssh/*
    eval "$(ssh-agent)"
    ssh-add /root/.ssh/*
fi

# Support optionally bind-mounting a local herolib into the container.
# If /opt/herolib_mount exists, we use that; otherwise we use the cloned /opt/herolib
rm -f /root/.vmodules/freeflowuniverse/herolib
if [ -d "/opt/herolib_mount" ]; then
    ln -s /opt/herolib_mount/lib /root/.vmodules/incubaid/herolib
    cd /opt/herolib_mount
else
    ln -s /opt/herolib/lib /root/.vmodules/incubaid/herolib
    cd /opt/herolib
    git fetch
    git checkout "${HEROLIB_REF:-development}"
    git pull
fi

# Build hero CLI once so it's available as /bin/hero
cd cli
echo "Building hero..."
v -enable-globals hero.v > build.log 2>&1 || (cat build.log && exit 1)
ln -s "$(realpath hero)" /bin/hero

cd /root

# If a command was provided to `docker run`, execute it as-is.
if [ "$#" -gt 0 ]; then
    exec "$@"
else
    # No command passed → give an interactive shell
    exec bash
fi
@@ -290,13 +290,19 @@ impl CoordinatorClient

     async fn call<T: serde::de::DeserializeOwned>(&self, method: &str, params: Value) -> Result<T> {
         use jsonrpsee::core::client::ClientT;
-        use jsonrpsee::core::params::ArrayParams;
+        use jsonrpsee::core::params::ObjectParams;

-        let mut array_params = ArrayParams::new();
-        array_params.insert(params).map_err(|e| CoordinatorError::Rpc(e.to_string()))?;
+        // Coordinator expects params as named parameters (object), not positional (array)
+        // Convert the Value object to ObjectParams
+        let mut object_params = ObjectParams::new();
+        if let Value::Object(map) = params {
+            for (key, value) in map {
+                object_params.insert(&key, value).map_err(|e| CoordinatorError::Rpc(e.to_string()))?;
+            }
+        }

-        self.client
-            .request(method, array_params)
+        let result: T = self.client
+            .request(method, object_params)
             .await
             .map_err(|e| {
                 let err_str = e.to_string();
@@ -311,7 +317,9 @@ impl CoordinatorClient
             } else {
                 CoordinatorError::Rpc(err_str)
             }
-        })
+        })?;
+
+        Ok(result)
     }
 }
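The practical effect of the `ArrayParams` → `ObjectParams` switch is the shape of the `params` field on the wire. A schematic sketch of the two request bodies (the method name and fields are illustrative, not the coordinator's actual API; only the `params` shape matters):

```rust
use serde_json::json;

fn main() {
    // Old behavior: ArrayParams with the whole object inserted once
    // -> positional params (an array with a single object element).
    let positional = json!({
        "jsonrpc": "2.0", "id": 1, "method": "some.method",
        "params": [ { "context_id": 2, "id": 13001 } ]
    });

    // New behavior: ObjectParams built from the object's keys
    // -> named params (the object itself).
    let named = json!({
        "jsonrpc": "2.0", "id": 1, "method": "some.method",
        "params": { "context_id": 2, "id": 13001 }
    });

    println!("{positional}\n{named}");
}
```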
@@ -200,3 +200,213 @@ fn is_offsetdatetime_type(ty: &Type) -> bool {
|
|||||||
}
|
}
|
||||||
false
|
false
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Derive macro for generating CRUD client methods for Osiris models
|
||||||
|
///
|
||||||
|
/// This macro generates async CRUD methods (create, get, update, delete, list) for a model,
|
||||||
|
/// plus any custom methods defined on the model.
|
||||||
|
///
|
||||||
|
/// # Example
|
||||||
|
///
|
||||||
|
/// ```rust
|
||||||
|
/// #[derive(OsirisModel)]
|
||||||
|
/// #[osiris(
|
||||||
|
/// collection = "calendar_events",
|
||||||
|
/// id_field = "event_id",
|
||||||
|
/// methods = ["reschedule", "cancel"]
|
||||||
|
/// )]
|
||||||
|
/// pub struct CalendarEvent {
|
||||||
|
/// pub event_id: String,
|
||||||
|
/// pub title: String,
|
||||||
|
/// pub start_time: i64,
|
||||||
|
/// // ...
|
||||||
|
/// }
|
||||||
|
/// ```
|
||||||
|
///
|
||||||
|
/// This generates methods on OsirisClient:
|
||||||
|
/// - `create_calendar_event(&self, event: CalendarEvent) -> Result<CalendarEvent>`
|
||||||
|
/// - `get_calendar_event(&self, event_id: &str) -> Result<CalendarEvent>`
|
||||||
|
/// - `update_calendar_event(&self, event_id: &str, event: CalendarEvent) -> Result<CalendarEvent>`
|
||||||
|
/// - `delete_calendar_event(&self, event_id: &str) -> Result<()>`
|
||||||
|
/// - `list_calendar_events(&self) -> Result<Vec<CalendarEvent>>`
|
||||||
|
/// - `reschedule_calendar_event(&self, event_id: &str, new_time: i64) -> Result<CalendarEvent>`
|
||||||
|
/// - `cancel_calendar_event(&self, event_id: &str) -> Result<CalendarEvent>`
|
||||||
|
#[proc_macro_derive(OsirisModel, attributes(osiris))]
|
||||||
|
pub fn derive_osiris_model(input: TokenStream) -> TokenStream {
|
||||||
|
let input = parse_macro_input!(input as DeriveInput);
|
||||||
|
|
||||||
|
let model_name = &input.ident;
|
||||||
|
let model_name_snake = to_snake_case(&model_name.to_string());
|
||||||
|
|
||||||
|
// Parse attributes
|
||||||
|
let mut collection = model_name_snake.clone();
|
||||||
|
let mut id_field = "id".to_string();
|
||||||
|
let mut custom_methods: Vec<String> = Vec::new();
|
||||||
|
|
||||||
|
for attr in &input.attrs {
|
||||||
|
if attr.path().is_ident("osiris") {
|
||||||
|
if let Ok(meta_list) = attr.parse_args::<syn::MetaList>() {
|
||||||
|
// Parse nested attributes
|
||||||
|
for nested in meta_list.tokens.clone() {
|
||||||
|
let nested_str = nested.to_string();
|
||||||
|
if nested_str.starts_with("collection") {
|
||||||
|
if let Some(val) = extract_string_value(&nested_str) {
|
||||||
|
collection = val;
|
||||||
|
}
|
||||||
|
} else if nested_str.starts_with("id_field") {
|
||||||
|
if let Some(val) = extract_string_value(&nested_str) {
|
||||||
|
id_field = val;
|
||||||
|
}
|
||||||
|
} else if nested_str.starts_with("methods") {
|
||||||
|
custom_methods = extract_array_values(&nested_str);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Generate method names
|
||||||
|
let create_method = syn::Ident::new(&format!("create_{}", model_name_snake), model_name.span());
|
||||||
|
let get_method = syn::Ident::new(&format!("get_{}", model_name_snake), model_name.span());
|
||||||
|
let update_method = syn::Ident::new(&format!("update_{}", model_name_snake), model_name.span());
|
||||||
|
let delete_method = syn::Ident::new(&format!("delete_{}", model_name_snake), model_name.span());
|
||||||
|
let list_method = syn::Ident::new(&format!("list_{}s", model_name_snake), model_name.span());
|
||||||
|
|
||||||
|
// Generate custom method implementations
|
||||||
|
let custom_method_impls: Vec<_> = custom_methods.iter().map(|method_name| {
|
||||||
|
let method_ident = syn::Ident::new(&format!("{}_{}", method_name, model_name_snake), model_name.span());
|
||||||
|
let rhai_call = format!("{}_{}", model_name_snake, method_name);
|
||||||
|
|
||||||
|
quote! {
|
||||||
|
pub async fn #method_ident(&self, id: &str, params: serde_json::Value) -> Result<#model_name, OsirisClientError> {
|
||||||
|
let script = format!(
|
||||||
|
r#"
|
||||||
|
let obj = {}::get("{}");
|
||||||
|
obj.{}(params);
|
||||||
|
obj.save();
|
||||||
|
obj
|
||||||
|
"#,
|
||||||
|
#collection, id, #method_name
|
||||||
|
);
|
||||||
|
|
||||||
|
let response = self.execute_script(&script).await?;
|
||||||
|
// Parse response and return model
|
||||||
|
// This is a simplified version - actual implementation would parse the job result
|
||||||
|
Err(OsirisClientError::CommandFailed("Not yet implemented".to_string()))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}).collect();
|
||||||
|
|
||||||
|
let expanded = quote! {
|
||||||
|
impl OsirisClient {
|
||||||
|
/// Create a new instance of #model_name
|
||||||
|
pub async fn #create_method(&self, model: &#model_name) -> Result<#model_name, OsirisClientError> {
|
||||||
|
let json = serde_json::to_string(model)
|
||||||
|
.map_err(|e| OsirisClientError::SerializationFailed(e.to_string()))?;
|
||||||
|
|
||||||
|
let script = format!(
|
||||||
|
r#"
|
||||||
|
let data = {};
|
||||||
|
let obj = {}::new(data);
|
||||||
|
obj.save();
|
||||||
|
obj
|
||||||
|
"#,
|
||||||
|
json, #collection
|
||||||
|
);
|
||||||
|
|
||||||
|
let response = self.execute_script(&script).await?;
|
||||||
|
// Parse response - simplified for now
|
||||||
|
Err(OsirisClientError::CommandFailed("Not yet implemented".to_string()))
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Get an instance of #model_name by ID
|
||||||
|
pub async fn #get_method(&self, id: &str) -> Result<#model_name, OsirisClientError> {
|
||||||
|
let query = format!(r#"{{ "{}": "{}" }}"#, #id_field, id);
|
||||||
|
self.query::<#model_name>(#collection, &query).await
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Update an existing #model_name
|
||||||
|
pub async fn #update_method(&self, id: &str, model: &#model_name) -> Result<#model_name, OsirisClientError> {
|
||||||
|
let json = serde_json::to_string(model)
|
||||||
|
.map_err(|e| OsirisClientError::SerializationFailed(e.to_string()))?;
|
||||||
|
|
||||||
|
let script = format!(
|
||||||
|
r#"
|
||||||
|
let obj = {}::get("{}");
|
||||||
|
let data = {};
|
||||||
|
obj.update(data);
|
||||||
|
obj.save();
|
||||||
|
obj
|
||||||
|
"#,
|
||||||
|
#collection, id, json
|
||||||
|
);
|
||||||
|
|
||||||
|
let response = self.execute_script(&script).await?;
|
||||||
|
Err(OsirisClientError::CommandFailed("Not yet implemented".to_string()))
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Delete an instance of #model_name
|
||||||
|
pub async fn #delete_method(&self, id: &str) -> Result<(), OsirisClientError> {
|
||||||
|
let script = format!(
|
||||||
|
r#"
|
||||||
|
let obj = {}::get("{}");
|
||||||
|
obj.delete();
|
||||||
|
"#,
|
||||||
|
#collection, id
|
||||||
|
);
|
||||||
|
|
||||||
|
self.execute_script(&script).await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
/// List all instances of #model_name
|
||||||
|
pub async fn #list_method(&self) -> Result<Vec<#model_name>, OsirisClientError> {
|
||||||
|
self.query_all::<#model_name>(#collection).await
|
||||||
|
}
|
||||||
|
|
||||||
|
#(#custom_method_impls)*
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
TokenStream::from(expanded)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn to_snake_case(s: &str) -> String {
|
||||||
|
let mut result = String::new();
|
||||||
|
for (i, ch) in s.chars().enumerate() {
|
||||||
|
if ch.is_uppercase() {
|
||||||
|
if i > 0 {
|
||||||
|
result.push('_');
|
||||||
|
}
|
||||||
|
result.push(ch.to_lowercase().next().unwrap());
|
||||||
|
} else {
|
||||||
|
result.push(ch);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
result
|
||||||
|
}
|
||||||
|
|
||||||
|
fn extract_string_value(s: &str) -> Option<String> {
|
||||||
|
// Extract value from "key = \"value\"" format
|
||||||
|
if let Some(eq_pos) = s.find('=') {
|
||||||
|
let value_part = &s[eq_pos + 1..].trim();
|
||||||
|
let cleaned = value_part.trim_matches(|c| c == '"' || c == ' ');
|
||||||
|
return Some(cleaned.to_string());
|
||||||
|
}
|
||||||
|
None
|
||||||
|
}
|
||||||
|
|
||||||
|
fn extract_array_values(s: &str) -> Vec<String> {
|
||||||
|
// Extract values from "methods = [\"method1\", \"method2\"]" format
|
||||||
|
if let Some(start) = s.find('[') {
|
||||||
|
if let Some(end) = s.find(']') {
|
||||||
|
let array_content = &s[start + 1..end];
|
||||||
|
return array_content
|
||||||
|
.split(',')
|
||||||
|
.map(|item| item.trim().trim_matches('"').to_string())
|
||||||
|
.filter(|item| !item.is_empty())
|
||||||
|
.collect();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Vec::new()
|
||||||
|
}
|
||||||
1 priv_key.bin (new file)

@@ -0,0 +1 @@

(binary key material; contents not reproducible as text)
@@ -1,6 +1,5 @@
-!!include configure.md
+# Horus Installation

-// Install all components
 !!coordinator.install name:'development'
 !!supervisor.install name:'development'
 !!herorunner.install name:'development'
@@ -2,11 +2,9 @@

 Starts all horus binaries

-!!include install.md
-
 // Start all services
-!!herocoordinator.start name:'default'
+!!herocoordinator.start name:'development'
-!!supervisor.start name:'default'
+!!supervisor.start name:'development'
-!!herorunner.start name:'default'
+!!herorunner.start name:'development'
-!!osirisrunner.start name:'default'
+!!osirisrunner.start name:'development'
-!!salrunner.start name:'default'
+!!salrunner.start name:'development'
170 tests/README.md (new file)

@@ -0,0 +1,170 @@

# End-to-End Integration Tests

This directory contains end-to-end integration tests for the Horus system components. Each test file spawns the actual binary and tests it via its client library.

## Test Files

### `coordinator.rs`
End-to-end tests for the Hero Coordinator service.

**Tests:**
- Actor creation and loading
- Context creation and management
- Runner registration and configuration
- Job creation with dependencies
- Flow creation and DAG generation
- Flow execution (start)

**Prerequisites:**
- Redis server running on `127.0.0.1:6379`
- Ports `9652` (HTTP API) and `9653` (WebSocket API) available

**Run:**
```bash
cargo test --test coordinator -- --test-threads=1
```

### `supervisor.rs`
End-to-end tests for the Hero Supervisor service.

**Tests:**
- OpenRPC discovery
- Runner registration and management
- Job creation and execution
- Job status tracking
- API key generation and management
- Authentication verification
- Complete workflow integration

**Prerequisites:**
- Redis server running on `127.0.0.1:6379`
- Port `3031` available

**Run:**
```bash
cargo test --test supervisor -- --test-threads=1
```
### `runner_hero.rs`
End-to-end tests for the Hero (Python) runner.

**Prerequisites:**
- Python 3 installed
- Redis server running

**Run:**
```bash
cargo test --test runner_hero -- --test-threads=1
```

### `runner_osiris.rs`
End-to-end tests for the Osiris (V language) runner.

**Prerequisites:**
- V language compiler installed
- Redis server running

**Run:**
```bash
cargo test --test runner_osiris -- --test-threads=1
```

### `runner_sal.rs`
End-to-end tests for the Sal (Rhai scripting) runner.

**Prerequisites:**
- Redis server running

**Run:**
```bash
cargo test --test runner_sal -- --test-threads=1
```

## Running All Tests

To run all end-to-end tests sequentially:

```bash
cargo test --tests -- --test-threads=1
```

## Important Notes

### Sequential Execution Required

All tests **must** be run with `--test-threads=1` because:
1. Each test spawns a server process that binds to specific ports
2. Tests share Redis databases and may conflict if run in parallel
3. Process cleanup needs to happen sequentially

### Redis Requirement

All tests require a Redis server running on `127.0.0.1:6379`. You can start Redis with:

```bash
redis-server
```

Or using Docker:

```bash
docker run -d -p 6379:6379 redis:latest
```

### Port Conflicts

If tests fail to start, check that the required ports are not in use:

- **Coordinator**: 9652 (HTTP), 9653 (WebSocket)
- **Supervisor**: 3031
- **Runners**: various ports depending on configuration

You can check port usage with:

```bash
lsof -i :9652
lsof -i :3031
```

### Test Isolation

Each test file:
1. Builds the binary using `escargot`
2. Starts the process with test-specific configuration
3. Runs tests against the running instance
4. Cleans up the process at the end

Tests within a file may share state through Redis, so they are designed to be idempotent and handle existing data (see the sketch below).
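A minimal sketch of what "idempotent" means in practice, based on the `coordinator.rs` tests shown later in this diff (the field values are illustrative):

```rust
// Creating the same job twice must not fail: job_create_or_load either
// creates the job or loads the existing one from Redis.
for _ in 0..2 {
    let job = JobCreate {
        id: 20000,
        caller_id: 11001,
        context_id: 2,
        script: "print('job 0')".to_string(),
        script_type: ScriptType::Python,
        timeout: 60,
        retries: 0,
        env_vars: HashMap::new(),
        prerequisites: vec![],
        depends: vec![],
    };
    let result = client.job_create_or_load(2, job).await;
    assert!(result.is_ok(), "job_create_or_load should be idempotent");
}
```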
### Debugging

To see detailed logs during test execution:

```bash
RUST_LOG=debug cargo test --test coordinator -- --test-threads=1 --nocapture
```

To run a specific test:

```bash
cargo test --test coordinator test_01_actor_create -- --test-threads=1 --nocapture
```

## Test Architecture

Each test file follows this pattern:

1. **Global Process Management**: Uses `lazy_static` and `Once` to ensure the server process starts only once
2. **Setup Helper**: Common setup code (e.g., `setup_prerequisites()`) to reduce duplication
3. **Sequential Tests**: Tests are numbered (e.g., `test_01_`, `test_02_`) to indicate execution order
4. **Cleanup Test**: A final `test_zz_cleanup()` ensures the process is terminated and ports are freed (a condensed sketch of the pattern follows below)
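Condensed from `tests/coordinator.rs` later in this diff, the skeleton of that pattern looks roughly like this (error handling and the wait-for-port loop are trimmed; the generic names `SERVER_PROCESS`, `init_server`, and `cleanup_server` are placeholders):

```rust
use std::process::Child;
use std::sync::{Mutex, Once};
use lazy_static::lazy_static;

lazy_static! {
    // Holds the spawned server so any test (and the cleanup test) can reach it.
    static ref SERVER_PROCESS: Mutex<Option<Child>> = Mutex::new(None);
}

static INIT: Once = Once::new();

fn init_server() {
    INIT.call_once(|| {
        // Build the binary with escargot, spawn it, then poll its port
        // until it accepts TCP connections (see coordinator.rs for details).
        let binary = escargot::CargoBuild::new()
            .bin("coordinator")
            .run()
            .expect("Failed to build binary");
        let child = binary.command().spawn().expect("Failed to start server");
        *SERVER_PROCESS.lock().unwrap() = Some(child);
    });
}

fn cleanup_server() {
    // Called by test_zz_cleanup(): kill and reap the process so ports are freed.
    if let Some(mut child) = SERVER_PROCESS.lock().unwrap().take() {
        let _ = child.kill();
        let _ = child.wait();
    }
}
```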
## Contributing

When adding new tests:

1. Follow the existing naming convention (`test_NN_description`)
2. Use the setup helpers to avoid duplication
3. Make tests idempotent (handle existing data gracefully)
4. Add cleanup in the `test_zz_cleanup()` function
5. Update this README with any new prerequisites or test descriptions
392 tests/coordinator.rs (new file)

@@ -0,0 +1,392 @@
//! End-to-End Integration Tests for Hero Coordinator
|
||||||
|
//!
|
||||||
|
//! Tests coordinator flow management functionality against a running coordinator instance.
|
||||||
|
//! The coordinator binary is automatically started and stopped for each test run.
|
||||||
|
//!
|
||||||
|
//! **IMPORTANT**: Run with `--test-threads=1` to ensure tests run sequentially:
|
||||||
|
//! ```
|
||||||
|
//! cargo test --test coordinator -- --test-threads=1
|
||||||
|
//! ```
|
||||||
|
|
||||||
|
use hero_coordinator_client::{CoordinatorClient, models::*};
|
||||||
|
use std::collections::HashMap;
|
||||||
|
use std::sync::Once;
|
||||||
|
use std::process::Child;
|
||||||
|
|
||||||
|
/// Test configuration
|
||||||
|
const COORDINATOR_URL: &str = "http://127.0.0.1:9652";
|
||||||
|
const TEST_CONTEXT_ID: u32 = 2;
|
||||||
|
const TEST_CALLER_ID: u32 = 11001;
|
||||||
|
const TEST_FLOW_ID: u32 = 13001;
|
||||||
|
const BASE_JOB_ID: u32 = 20000;
|
||||||
|
|
||||||
|
use std::sync::Mutex;
|
||||||
|
use lazy_static::lazy_static;
|
||||||
|
|
||||||
|
lazy_static! {
|
||||||
|
static ref COORDINATOR_PROCESS: Mutex<Option<Child>> = Mutex::new(None);
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Global initialization flag
|
||||||
|
static INIT: Once = Once::new();
|
||||||
|
|
||||||
|
/// Initialize and start the coordinator binary (called once)
|
||||||
|
async fn init_coordinator() {
|
||||||
|
INIT.call_once(|| {
|
||||||
|
// Register cleanup handler
|
||||||
|
let _ = std::panic::catch_unwind(|| {
|
||||||
|
ctrlc::set_handler(move || {
|
||||||
|
cleanup_coordinator();
|
||||||
|
std::process::exit(0);
|
||||||
|
}).ok();
|
||||||
|
});
|
||||||
|
|
||||||
|
// Use escargot to build and get the binary path
|
||||||
|
let binary = escargot::CargoBuild::new()
|
||||||
|
.bin("coordinator")
|
||||||
|
.package("hero-coordinator")
|
||||||
|
.run()
|
||||||
|
.expect("Failed to build coordinator binary");
|
||||||
|
|
||||||
|
// Start the coordinator binary with HTTP transport (no mycelium needed)
|
||||||
|
let child = binary
|
||||||
|
.command()
|
||||||
|
.env("RUST_LOG", "info")
|
||||||
|
.args(&[
|
||||||
|
"--api-http-port",
|
||||||
|
"9652",
|
||||||
|
"--api-ws-port",
|
||||||
|
"9653",
|
||||||
|
"--redis-addr",
|
||||||
|
"127.0.0.1:6379",
|
||||||
|
"--supervisor-transport",
|
||||||
|
"http",
|
||||||
|
])
|
||||||
|
.spawn()
|
||||||
|
.expect("Failed to start coordinator");
|
||||||
|
|
||||||
|
*COORDINATOR_PROCESS.lock().unwrap() = Some(child);
|
||||||
|
|
||||||
|
// Wait for server to be ready with simple TCP check
|
||||||
|
use std::net::TcpStream;
|
||||||
|
use std::time::Duration;
|
||||||
|
|
||||||
|
println!("⏳ Waiting for coordinator to start...");
|
||||||
|
|
||||||
|
for i in 0..30 {
|
||||||
|
std::thread::sleep(Duration::from_millis(500));
|
||||||
|
|
||||||
|
// Try to connect to the port
|
||||||
|
if TcpStream::connect_timeout(
|
||||||
|
&"127.0.0.1:9652".parse().unwrap(),
|
||||||
|
Duration::from_millis(100)
|
||||||
|
).is_ok() {
|
||||||
|
// Give it more time to fully initialize
|
||||||
|
std::thread::sleep(Duration::from_secs(2));
|
||||||
|
println!("✅ Coordinator ready after ~{}ms", (i * 500) + 2000);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
panic!("Coordinator failed to start within 15 seconds");
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Cleanup coordinator process
|
||||||
|
fn cleanup_coordinator() {
|
||||||
|
if let Ok(mut guard) = COORDINATOR_PROCESS.lock() {
|
||||||
|
if let Some(mut child) = guard.take() {
|
||||||
|
println!("🧹 Cleaning up coordinator process...");
|
||||||
|
let _ = child.kill();
|
||||||
|
let _ = child.wait();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Helper to create a test client
|
||||||
|
async fn create_client() -> CoordinatorClient {
|
||||||
|
// Ensure coordinator is running
|
||||||
|
init_coordinator().await;
|
||||||
|
|
||||||
|
CoordinatorClient::new(COORDINATOR_URL)
|
||||||
|
.expect("Failed to create coordinator client")
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_01_flow_create_simple() {
|
||||||
|
println!("\n🧪 Test: flow.create (simple flow)");
|
||||||
|
|
||||||
|
let client = create_client().await;
|
||||||
|
|
||||||
|
// Create jobs for the flow
|
||||||
|
let job_ids = vec![BASE_JOB_ID, BASE_JOB_ID + 1];
|
||||||
|
for (i, job_id) in job_ids.iter().enumerate() {
|
||||||
|
let job = JobCreate {
|
||||||
|
id: *job_id,
|
||||||
|
caller_id: TEST_CALLER_ID,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
script: format!("print('job {}')", i),
|
||||||
|
script_type: ScriptType::Python,
|
||||||
|
timeout: 60,
|
||||||
|
retries: 0,
|
||||||
|
env_vars: HashMap::new(),
|
||||||
|
prerequisites: vec![],
|
||||||
|
depends: if i == 0 { vec![] } else { vec![job_ids[i - 1]] },
|
||||||
|
};
|
||||||
|
|
||||||
|
let result = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
|
||||||
|
if let Err(ref e) = result {
|
||||||
|
println!(" Job {} creation error: {:?}", job_id, e);
|
||||||
|
}
|
||||||
|
assert!(result.is_ok(), "Job {} should be created", job_id);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create flow
|
||||||
|
let flow_create = FlowCreate {
|
||||||
|
id: TEST_FLOW_ID,
|
||||||
|
caller_id: TEST_CALLER_ID,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
jobs: job_ids.clone(),
|
||||||
|
env_vars: HashMap::new(),
|
||||||
|
};
|
||||||
|
|
||||||
|
let result = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
|
||||||
|
|
||||||
|
if let Err(ref e) = result {
|
||||||
|
println!(" Error: {:?}", e);
|
||||||
|
}
|
||||||
|
assert!(result.is_ok(), "flow.create_or_load should succeed");
|
||||||
|
let flow = result.unwrap();
|
||||||
|
|
||||||
|
assert_eq!(flow.id, TEST_FLOW_ID);
|
||||||
|
assert_eq!(flow.jobs, job_ids);
|
||||||
|
println!("✅ flow.create works - flow: {}, jobs: {:?}", flow.id, flow.jobs);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_02_flow_load() {
|
||||||
|
println!("\n🧪 Test: flow.load");
|
||||||
|
|
||||||
|
let client = create_client().await;
|
||||||
|
|
||||||
|
// Create a flow first (reuse from test_01)
|
||||||
|
let job_ids = vec![BASE_JOB_ID, BASE_JOB_ID + 1];
|
||||||
|
for (i, job_id) in job_ids.iter().enumerate() {
|
||||||
|
let job = JobCreate {
|
||||||
|
id: *job_id,
|
||||||
|
caller_id: TEST_CALLER_ID,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
script: format!("print('job {}')", i),
|
||||||
|
script_type: ScriptType::Python,
|
||||||
|
timeout: 60,
|
||||||
|
retries: 0,
|
||||||
|
env_vars: HashMap::new(),
|
||||||
|
prerequisites: vec![],
|
||||||
|
depends: if i == 0 { vec![] } else { vec![job_ids[i - 1]] },
|
||||||
|
};
|
||||||
|
let _ = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
|
||||||
|
}
|
||||||
|
|
||||||
|
let flow_create = FlowCreate {
|
||||||
|
id: TEST_FLOW_ID,
|
||||||
|
caller_id: TEST_CALLER_ID,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
jobs: job_ids.clone(),
|
||||||
|
env_vars: HashMap::new(),
|
||||||
|
};
|
||||||
|
let _ = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
|
||||||
|
|
||||||
|
// Load the flow
|
||||||
|
let result = client.flow_load(TEST_CONTEXT_ID, TEST_FLOW_ID).await;
|
||||||
|
|
||||||
|
if let Err(ref e) = result {
|
||||||
|
println!(" Error: {:?}", e);
|
||||||
|
}
|
||||||
|
assert!(result.is_ok(), "flow.load should succeed");
|
||||||
|
let flow = result.unwrap();
|
||||||
|
|
||||||
|
assert_eq!(flow.id, TEST_FLOW_ID);
|
||||||
|
assert_eq!(flow.jobs, job_ids);
|
||||||
|
println!("✅ flow.load works - loaded flow: {}", flow.id);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_03_flow_dag() {
|
||||||
|
println!("\n🧪 Test: flow.dag");
|
||||||
|
|
||||||
|
let client = create_client().await;
|
||||||
|
|
||||||
|
// Create jobs with dependencies
|
||||||
|
let job_ids = vec![BASE_JOB_ID + 100, BASE_JOB_ID + 101, BASE_JOB_ID + 102];
|
||||||
|
for (i, job_id) in job_ids.iter().enumerate() {
|
||||||
|
let job = JobCreate {
|
||||||
|
id: *job_id,
|
||||||
|
caller_id: TEST_CALLER_ID,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
script: format!("print('dag job {}')", i),
|
||||||
|
script_type: ScriptType::Python,
|
||||||
|
timeout: 60,
|
||||||
|
retries: 0,
|
||||||
|
env_vars: HashMap::new(),
|
||||||
|
prerequisites: vec![],
|
||||||
|
depends: if i == 0 { vec![] } else { vec![job_ids[i - 1]] },
|
||||||
|
};
|
||||||
|
let _ = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
|
||||||
|
}
|
||||||
|
|
||||||
|
let flow_id = TEST_FLOW_ID + 1;
|
||||||
|
let flow_create = FlowCreate {
|
||||||
|
id: flow_id,
|
||||||
|
caller_id: TEST_CALLER_ID,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
jobs: job_ids.clone(),
|
||||||
|
env_vars: HashMap::new(),
|
||||||
|
};
|
||||||
|
let _ = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
|
||||||
|
|
||||||
|
// Get the DAG
|
||||||
|
let result = client.flow_dag(TEST_CONTEXT_ID, flow_id).await;
|
||||||
|
|
||||||
|
if let Err(ref e) = result {
|
||||||
|
println!(" Error: {:?}", e);
|
||||||
|
}
|
||||||
|
assert!(result.is_ok(), "flow.dag should succeed");
|
||||||
|
let dag = result.unwrap();
|
||||||
|
|
||||||
|
assert_eq!(dag.flow_id, flow_id);
|
||||||
|
assert_eq!(dag.nodes.len(), 3);
|
||||||
|
assert_eq!(dag.edges.len(), 2); // Two edges for the chain
|
||||||
|
println!("✅ flow.dag works - flow: {}, nodes: {}, edges: {}",
|
||||||
|
dag.flow_id, dag.nodes.len(), dag.edges.len());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_04_flow_start() {
|
||||||
|
println!("\n🧪 Test: flow.start");
|
||||||
|
|
||||||
|
let client = create_client().await;
|
||||||
|
|
||||||
|
// Create a simple flow
|
||||||
|
let job_id = BASE_JOB_ID + 200;
|
||||||
|
let job = JobCreate {
|
||||||
|
id: job_id,
|
||||||
|
caller_id: TEST_CALLER_ID,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
script: "print('start test')".to_string(),
|
||||||
|
script_type: ScriptType::Python,
|
||||||
|
timeout: 60,
|
||||||
|
retries: 0,
|
||||||
|
env_vars: HashMap::new(),
|
||||||
|
prerequisites: vec![],
|
||||||
|
depends: vec![],
|
||||||
|
};
|
||||||
|
let _ = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
|
||||||
|
|
||||||
|
let flow_id = TEST_FLOW_ID + 2;
|
||||||
|
let flow_create = FlowCreate {
|
||||||
|
id: flow_id,
|
||||||
|
caller_id: TEST_CALLER_ID,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
jobs: vec![job_id],
|
||||||
|
env_vars: HashMap::new(),
|
||||||
|
};
|
||||||
|
let _ = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
|
||||||
|
|
||||||
|
// Start the flow
|
||||||
|
let result = client.flow_start(TEST_CONTEXT_ID, flow_id).await;
|
||||||
|
|
||||||
|
match result {
|
||||||
|
Ok(started) => {
|
||||||
|
println!("✅ flow.start works - started: {}", started);
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
println!("⚠️ flow.start: {:?} (runner may not be available)", e);
|
||||||
|
// This is expected if no actual runner is listening
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_05_message_create() {
|
||||||
|
println!("\n🧪 Test: message.create");
|
||||||
|
|
||||||
|
let client = create_client().await;
|
||||||
|
|
||||||
|
let message_create = MessageCreate {
|
||||||
|
id: 1,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
runner_id: 12001,
|
||||||
|
job_id: BASE_JOB_ID,
|
||||||
|
message_type: MessageType::JobRun,
|
||||||
|
format: MessageFormatType::JsonRpc,
|
||||||
|
payload: r#"{"method":"job.run","params":{}}"#.to_string(),
|
||||||
|
};
|
||||||
|
|
||||||
|
let result = client.message_create(TEST_CONTEXT_ID, message_create).await;
|
||||||
|
|
||||||
|
match result {
|
||||||
|
Ok(message) => {
|
||||||
|
assert_eq!(message.id, 1);
|
||||||
|
assert_eq!(message.context_id, TEST_CONTEXT_ID);
|
||||||
|
println!("✅ message.create works - message: {}", message.id);
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
println!("⚠️ message.create: {:?} (may already exist)", e);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_06_message_load() {
|
||||||
|
println!("\n🧪 Test: message.load");
|
||||||
|
|
||||||
|
let client = create_client().await;
|
||||||
|
|
||||||
|
// Create a message first
|
||||||
|
let message_create = MessageCreate {
|
||||||
|
id: 2,
|
||||||
|
context_id: TEST_CONTEXT_ID,
|
||||||
|
runner_id: 12001,
|
||||||
|
job_id: BASE_JOB_ID,
|
||||||
|
message_type: MessageType::JobRun,
|
||||||
|
format: MessageFormatType::JsonRpc,
|
||||||
|
payload: r#"{"method":"job.run","params":{}}"#.to_string(),
|
||||||
|
};
|
||||||
|
let _ = client.message_create(TEST_CONTEXT_ID, message_create).await;
|
||||||
|
|
||||||
|
// Load the message
|
||||||
|
let result = client.message_load(TEST_CONTEXT_ID, 2).await;
|
||||||
|
|
||||||
|
if let Err(ref e) = result {
|
||||||
|
println!(" Error: {:?}", e);
|
||||||
|
}
|
||||||
|
|
||||||
|
match result {
|
||||||
|
Ok(message) => {
|
||||||
|
assert_eq!(message.id, 2);
|
||||||
|
assert_eq!(message.context_id, TEST_CONTEXT_ID);
|
||||||
|
println!("✅ message.load works - loaded message: {}", message.id);
|
||||||
|
}
|
||||||
|
Err(_) => {
|
||||||
|
println!("⚠️ message.load failed (message may not exist)");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Final test that ensures cleanup happens
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_zz_cleanup() {
|
||||||
|
println!("🧹 Running cleanup...");
|
||||||
|
cleanup_coordinator();
|
||||||
|
|
||||||
|
tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;
|
||||||
|
|
||||||
|
use std::net::TcpStream;
|
||||||
|
let port_free = TcpStream::connect_timeout(
|
||||||
|
&"127.0.0.1:9652".parse().unwrap(),
|
||||||
|
std::time::Duration::from_millis(100)
|
||||||
|
).is_err();
|
||||||
|
|
||||||
|
assert!(port_free, "Port 9652 should be free after cleanup");
|
||||||
|
println!("✅ Cleanup complete - port 9652 is free");
|
||||||
|
}