5 Commits

Author SHA1 Message Date
peternashaat
aa30c54742 docs: document hero-git image and installer-based release pipeline
- add README for hero-git Docker image usage (local + CI)
- explain Dockerfile/entrypoint behavior and horus_full_install workflow
- describe new release process using hero-git image and horus_full_install.vsh
2025-11-20 12:06:41 +00:00
8fdfdd1746 chore: use hero-git installer to build release binaries
* run `horus_full_install.vsh` inside hero-git docker image
* collect built binaries from /root/hero/bin into release artifacts
2025-11-20 11:46:01 +00:00
peternashaat
dffb17ffbc Add CI and release workflows for Horus Rust binaries
This adds a per-bin Rust CI pipeline and a tagged release workflow that builds,
packages, and publishes binaries as Gitea releases.
2025-11-19 18:47:16 +00:00
Timur Gordon
8c33c73b3c update coordinator and add end to end tests 2025-11-19 10:34:28 +01:00
Timur Gordon
7675dc2150 update horus heroscripts 2025-11-19 10:33:54 +01:00
16 changed files with 1809 additions and 19 deletions

392
.gitea/workflows/README.md Normal file
View File

@@ -0,0 +1,392 @@
# Gitea Actions Workflows Documentation
This directory contains the CI/CD workflows for the Horus project using Gitea Actions.
## Overview
The Horus project uses two main workflows:
1. **[ci.yml](./ci.yml)** - Continuous Integration workflow
2. **[release.yml](./release.yml)** - Release automation workflow
## Workflow Files
### ci.yml - Continuous Integration
**Purpose**: Automatically build, test, and validate code quality on every push and pull request.
**Triggers**:
- Push to any branch
- Pull request events (opened, synchronized, reopened)
**What it does**:
1. Sets up Rust toolchain
2. Caches dependencies for faster builds
3. Runs code quality checks (check, test, clippy, fmt)
4. Builds all 7 binaries in release mode
5. Uploads binaries as artifacts
**Duration**: ~5-15 minutes (first run), ~2-5 minutes (cached runs)
**Artifacts**: Binaries are stored for 7 days and can be downloaded from the Actions tab
---
### release.yml - Release Automation
**Purpose**: Automatically create Gitea releases with downloadable binaries when version tags are pushed.
**Triggers**:
- Tags matching `v*.*.*` pattern (e.g., `v1.0.0`, `v2.1.3`)
**What it does**:
1. Builds optimized release binaries
2. Strips debug symbols to reduce size
3. Packages each binary as a tarball
4. Generates SHA256 checksums
5. Creates a Gitea release with all artifacts attached
**Duration**: ~5-10 minutes
**Artifacts**: Permanently attached to the release
---
## Binaries Built
Both workflows build the following 7 binaries:
| Binary Name | Description |
|-------------|-------------|
| `supervisor` | Hero Supervisor service |
| `coordinator` | Hero Coordinator service |
| `horus` | Horus main binary |
| `osiris` | Osiris server |
| `herorunner` | Hero runner |
| `runner_osiris` | Osiris runner |
| `runner_sal` | SAL runner |
---
## Usage Guide
### Testing Code Changes
Every time you push code or create a pull request, the CI workflow automatically runs:
```bash
# Make your changes
git add .
git commit -m "Your changes"
git push origin your-branch
# Or create a pull request
# The CI workflow will run automatically
```
**Check Results**:
1. Go to your Gitea repository
2. Click on the **Actions** tab
3. Find your workflow run
4. Click to see detailed logs
---
### Creating a Release
To create a new release with binaries:
```bash
# 1. Ensure your code is ready for release
# 2. Create a version tag (use semantic versioning)
git tag v1.0.0
# 3. Push the tag
git push origin v1.0.0
# 4. The release workflow will automatically:
# - Build all binaries
# - Create a release
# - Attach binaries and checksums
```
**View Release**:
1. Go to your Gitea repository
2. Click on the **Releases** tab
3. Your new release will be listed with downloadable artifacts
---
### Downloading Release Binaries
Users can download binaries from releases:
```bash
# Download a specific binary
wget https://git.ourworld.tf/peternashaat/horus/releases/download/v1.0.0/supervisor-v1.0.0-linux-x86_64.tar.gz
# Extract
tar -xzf supervisor-v1.0.0-linux-x86_64.tar.gz
# Make executable
chmod +x supervisor
# Optionally move to system path
sudo mv supervisor /usr/local/bin/
# Verify it works
supervisor --help
```
**Verify Integrity**:
```bash
# Download checksums
wget https://git.ourworld.tf/peternashaat/horus/releases/download/v1.0.0/checksums.txt
# Verify a downloaded archive against the checksum list
sha256sum -c --ignore-missing checksums.txt
```
---
## Workflow Requirements
### Runner Configuration
Your Gitea Actions runner must be configured with these labels:
- `ubuntu-latest` (recommended)
- `ubuntu-22.04` (alternative)
- `ubuntu-20.04` (alternative)
### Permissions
The workflows require:
- Read access to repository code
- Write access to create releases (for release.yml)
- Access to a token secret for publishing releases (`GITHUB_TOKEN` is provided automatically by Gitea; the release workflow uses the `TOKEN_GITEA` repository secret)
### Dependencies
The workflows automatically install:
- Rust stable toolchain
- rustfmt (code formatter)
- clippy (linter)
No manual setup required!
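To reproduce the CI checks locally, the equivalent one-time setup (assuming `rustup` is already installed) looks roughly like this:
```bash
# Install the stable toolchain plus the components the CI workflow uses
rustup toolchain install stable
rustup component add rustfmt clippy
# Confirm which versions the checks will run with
cargo fmt --version
cargo clippy --version
```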
---
## Caching Strategy
The CI workflow uses three levels of caching to speed up builds:
1. **Cargo Registry Cache** - Downloaded crate metadata
2. **Cargo Index Cache** - Git index of crates.io
3. **Build Cache** - Compiled dependencies
**Benefits**:
- First build: ~10-15 minutes
- Cached builds: ~2-5 minutes
- Saves bandwidth and runner resources
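To get a feel for how much data each cache level holds, you can inspect the corresponding directories on a local checkout (sizes on the runner will differ):
```bash
# Approximate size of each cache level
du -sh ~/.cargo/registry   # cargo registry cache (crate downloads + metadata)
du -sh ~/.cargo/git        # cargo index / git dependency cache
du -sh target              # build cache (compiled dependencies)
```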
---
## Troubleshooting
### CI Workflow Fails
**Check these common issues**:
1. **Compilation Errors**
- Review the "Check code" step logs
- Fix Rust compilation errors locally first
2. **Test Failures**
- Review the "Run tests" step logs
- Run `cargo test --workspace` locally to reproduce
3. **Clippy Warnings**
- Review the "Run clippy" step logs
- Fix with: `cargo clippy --workspace --fix`
4. **Formatting Issues**
- Review the "Check formatting" step logs
- Fix with: `cargo fmt --all`
5. **Runner Offline**
- Check if your Gitea Actions runner is running
- Verify runner labels match workflow requirements
### Release Workflow Fails
**Check these common issues**:
1. **Tag Format**
- Ensure tag matches `v*.*.*` pattern
- Examples: `v1.0.0`, `v2.1.3`, `v0.1.0-beta`
2. **Binary Not Found**
- Check if all binaries built successfully
- Review the "Build release binaries" step logs
3. **Permission Denied**
- Ensure runner has write access to create releases
- Check repository settings
4. **Release Already Exists**
- Delete the existing release first
- Or use a different version tag
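If you need to re-run the release for the same version, one option (after deleting the existing release in the Gitea UI) is to delete and re-push the tag:
```bash
# Remove the tag locally and on the remote
git tag -d v1.0.0
git push origin :refs/tags/v1.0.0
# Recreate and push it to trigger the release workflow again
git tag v1.0.0
git push origin v1.0.0
```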
---
## Best Practices
### Version Tagging
Use [Semantic Versioning](https://semver.org/):
- `v1.0.0` - Major release (breaking changes)
- `v1.1.0` - Minor release (new features)
- `v1.0.1` - Patch release (bug fixes)
- `v1.0.0-beta.1` - Pre-release
### Commit Messages
Write clear commit messages for better release notes:
```bash
git commit -m "feat: Add new authentication system"
git commit -m "fix: Resolve memory leak in supervisor"
git commit -m "docs: Update installation guide"
```
### Testing Before Release
Always test before creating a release:
```bash
# Run all checks locally
cargo check --workspace
cargo test --workspace
cargo clippy --workspace -- -D warnings
cargo fmt --all -- --check
# Build release binaries locally
cargo build --workspace --release
# Test the binaries
./target/release/supervisor --help
```
---
## Workflow Customization
### Changing Rust Version
Edit the toolchain in both workflows:
```yaml
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: 1.75.0 # Specify exact version
```
### Adding More Binaries
If you add new binaries to the workspace:
1. Update `ci.yml` - Add to the upload artifacts step
2. Update `release.yml` - Add to strip and package steps
3. Update this README
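To enumerate the binary targets currently defined in the workspace (useful before updating the lists in both workflow files), a quick check such as the following works, assuming `jq` is available:
```bash
# List all [[bin]] targets in the workspace
cargo metadata --format-version 1 --no-deps \
  | jq -r '.packages[].targets[] | select(.kind[] == "bin") | .name'
```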
### Changing Artifact Retention
In `ci.yml`, modify the retention period:
```yaml
retention-days: 30 # Keep for 30 days instead of 7
```
### Adding Build Matrix
To build for multiple platforms, add a matrix strategy:
```yaml
jobs:
build:
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
```
---
## Monitoring
### View Workflow Status
**In Gitea UI**:
1. Repository → Actions tab
2. See all workflow runs
3. Click any run for detailed logs
**Via Git**:
```bash
# List recent tags
git tag -l
# Show tag details
git show v1.0.0
```
### Workflow Badges
Add status badges to your README:
```markdown
![CI Status](https://git.ourworld.tf/peternashaat/horus/actions/workflows/ci.yml/badge.svg)
```
---
## Security Considerations
### Secrets
The CI workflow uses `GITHUB_TOKEN`, which is automatically provided by Gitea, while the release workflow publishes releases with the `TOKEN_GITEA` repository secret. The automatically provided token:
- Has repository-scoped permissions
- Expires after the workflow run
- Cannot be accessed by pull requests from forks (for security)
### Binary Verification
Always verify downloaded binaries:
```bash
# Check the SHA256 hash of a downloaded archive
sha256sum supervisor-v1.0.0-linux-x86_64.tar.gz
# Compare it against the matching line in checksums.txt
grep supervisor checksums.txt
```
### Supply Chain Security
The workflows:
- Use pinned action versions (`@v4`, `@v1`)
- Build from source (no pre-built binaries)
- Generate checksums for verification
---
## Additional Resources
- [Gitea Actions Documentation](https://docs.gitea.com/usage/actions/overview)
- [GitHub Actions Syntax](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions) (Gitea is compatible)
- [Rust CI Best Practices](https://doc.rust-lang.org/cargo/guide/continuous-integration.html)
- [Semantic Versioning](https://semver.org/)
---
## Support
For issues with:
- **Workflows**: Check the troubleshooting section above
- **Horus Project**: See the main [README.md](../../README.md)
- **Gitea Actions**: Consult [Gitea documentation](https://docs.gitea.com)
For detailed line-by-line explanation of the workflows, see [WORKFLOW_EXPLAINED.md](./WORKFLOW_EXPLAINED.md).

111
.gitea/workflows/ci.yml Normal file
View File

@@ -0,0 +1,111 @@
name: CI

on:
  push:
    branches:
      - '**'
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  build-and-test:
    name: Build & Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Rust toolchain
        uses: actions-rust-lang/setup-rust-toolchain@v1
        with:
          toolchain: stable
          components: rustfmt, clippy

      - name: Cache cargo registry
        uses: actions/cache@v4
        with:
          path: ~/.cargo/registry
          key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-registry-

      - name: Cache cargo index
        uses: actions/cache@v4
        with:
          path: ~/.cargo/git
          key: ${{ runner.os }}-cargo-index-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-index-

      - name: Cache cargo build
        uses: actions/cache@v4
        with:
          path: target
          key: ${{ runner.os }}-cargo-build-target-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-build-target-

      # 👇 Don't fail CI on hero-runner's warnings
      - name: Check code (all crates under bin/)
        env:
          RUSTFLAGS: "--cap-lints=warn"
        run: |
          set -euo pipefail
          find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
            echo "=== cargo check --manifest-path $manifest ==="
            cargo check --manifest-path "$manifest" --verbose
          done

      # 👇 Same trick for tests, otherwise they'd fail for the same reason
      - name: Run tests (all crates under bin/)
        env:
          RUSTFLAGS: "--cap-lints=warn"
        run: |
          set -euo pipefail
          find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
            echo "=== cargo test --manifest-path $manifest ==="
            cargo test --manifest-path "$manifest" --verbose
          done

      # Clippy stays strict (still uses -D warnings for clippy lints).
      # If this later fails because of hero-runner, we can also add RUSTFLAGS here.
      - name: Run clippy (all crates under bin/)
        run: |
          set -euo pipefail
          find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
            echo "=== cargo clippy --manifest-path $manifest ==="
            cargo clippy --manifest-path "$manifest" -- -D warnings
          done

      - name: Check formatting
        run: cargo fmt --all -- --check

      # Build was already succeeding; leaving it without cap-lints is fine.
      - name: Build release binaries (all crates under bin/)
        run: |
          set -euo pipefail
          find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
            echo "=== cargo build --manifest-path $manifest --release ==="
            cargo build --manifest-path "$manifest" --release --verbose
          done

      - name: List built binaries
        run: |
          echo "Built binaries:"
          ls -lh target/release/ | grep -E '^-.*x.*'

      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: binaries-${{ github.sha }}
          path: |
            target/release/supervisor
            target/release/coordinator
            target/release/horus
            target/release/osiris
            target/release/herorunner
            target/release/runner_osiris
            target/release/runner_sal
          retention-days: 7
          if-no-files-found: warn

View File

@@ -0,0 +1,115 @@
name: Release

on:
  push:
    tags:
      - 'v*.*.*'

jobs:
  build-release:
    name: Build Release Binaries via hero-git installer
    runs-on: ubuntu-latest
    env:
      IMAGE_NAME: hero-git:latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Extract version from tag
        id: version
        run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> "$GITHUB_OUTPUT"

      - name: Ensure hero-git image exists
        run: |
          if ! docker image inspect "$IMAGE_NAME" >/dev/null 2>&1; then
            echo "ERROR: Docker image '$IMAGE_NAME' not found on runner."
            echo "Build it manually on the runner with:"
            echo "  docker build -t $IMAGE_NAME -f Dockerfile ."
            exit 1
          fi

      - name: Prepare bin output dir
        run: mkdir -p hero-bin

      - name: Run horus_full_install installer in container
        run: |
          set -euxo pipefail
          docker run --rm \
            -v "$PWD/hero-bin:/root/hero/bin" \
            -e HEROLIB_REF=development_nile_installers \
            "$IMAGE_NAME" \
            bash -lc '
              set -euxo pipefail
              cd /opt/herolib
              ./examples/installers/horus/horus_full_install.vsh
              echo "===== AFTER INSTALL, ls -R /root/hero ====="
              ls -R /root/hero || true
            '

      - name: List built binaries
        run: ls -al hero-bin

      - name: Create release directory
        run: mkdir -p release-artifacts

      - name: Package binaries
        run: |
          for binary in supervisor coordinator horus osiris herorunner runner_osiris runner_sal; do
            if [ -f "hero-bin/$binary" ]; then
              tar -czf "release-artifacts/${binary}-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz" \
                -C hero-bin "$binary"
              echo "Packaged $binary"
            else
              echo "Warning: $binary not found, skipping"
            fi
          done

      - name: Generate checksums
        run: |
          cd release-artifacts
          if ls *.tar.gz >/dev/null 2>&1; then
            sha256sum *.tar.gz > checksums.txt
            cat checksums.txt
          else
            echo "ERROR: no .tar.gz artifacts were produced; check previous steps (likely the installer didn't build any binaries)."
            exit 1
          fi

      - name: Create Release
        uses: akkuman/gitea-release-action@v1
        with:
          files: release-artifacts/*
          token: ${{ secrets.TOKEN_GITEA }}
          tag_name: ${{ steps.version.outputs.VERSION }}
          name: Release ${{ steps.version.outputs.VERSION }}
          body: |
            ## Horus Release ${{ steps.version.outputs.VERSION }}

            ### Binaries

            Built via the herolib V installer (`horus_full_install.vsh`) inside
            the `hero-git:latest` image and packaged for Linux x86_64:

            - `supervisor`
            - `coordinator`
            - `horus`
            - `osiris`
            - `herorunner`
            - `runner_osiris`
            - `runner_sal`

            ### Installation (example)

            ```bash
            wget https://git.ourworld.tf/herocode/horus/releases/download/${{ steps.version.outputs.VERSION }}/supervisor-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz
            tar -xzf supervisor-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz
            chmod +x supervisor
            sudo mv supervisor /usr/local/bin/
            ```

            ### Verification

            ```bash
            sha256sum -c checksums.txt
            ```
          draft: false
          prerelease: false

View File

@@ -29,6 +29,7 @@ repository.workspace = true
 [dependencies]
 # Integration test dependencies - no library dependencies, tests spawn binaries
 hero-supervisor-openrpc-client = { path = "lib/clients/supervisor" }
+hero-coordinator-client = { path = "lib/clients/coordinator" }
 hero-job = { path = "lib/models/job" }
 hero-job-client = { path = "lib/clients/job" }
 tokio = { workspace = true }

View File

@@ -70,6 +70,21 @@ struct Cli {
help = "Bind port for WebSocket JSON-RPC server (default: 9653)" help = "Bind port for WebSocket JSON-RPC server (default: 9653)"
)] )]
api_ws_port: u16, api_ws_port: u16,
#[arg(
long = "supervisor-transport",
env = "SUPERVISOR_TRANSPORT",
default_value = "mycelium",
help = "Transport to use for supervisor communication: 'mycelium' or 'http' (default: mycelium)"
)]
supervisor_transport: String,
#[arg(
long = "supervisor-http-url",
env = "SUPERVISOR_HTTP_URL",
help = "HTTP URL for supervisor when using HTTP transport (e.g., http://127.0.0.1:3031)"
)]
supervisor_http_url: Option<String>,
} }
#[tokio::main] #[tokio::main]
@@ -99,8 +114,9 @@ async fn main() {
// Shared application state // Shared application state
let state = Arc::new(herocoordinator::rpc::AppState::new(service)); let state = Arc::new(herocoordinator::rpc::AppState::new(service));
// Start router workers (auto-discovered contexts) using a single global SupervisorHub (no separate inbound listener) // Start router workers (auto-discovered contexts) using a single global SupervisorHub
{ // Skip router if using HTTP transport (no mycelium needed)
if cli.supervisor_transport == "mycelium" {
let base_url = format!("http://{}:{}", cli.mycelium_ip, cli.mycelium_port); let base_url = format!("http://{}:{}", cli.mycelium_ip, cli.mycelium_port);
let hub = herocoordinator::clients::SupervisorHub::new( let hub = herocoordinator::clients::SupervisorHub::new(
base_url.clone(), base_url.clone(),
@@ -118,6 +134,9 @@ async fn main() {
}; };
// Per-context outbound delivery loops (replies handled by SupervisorHub) // Per-context outbound delivery loops (replies handled by SupervisorHub)
let _auto_handle = herocoordinator::router::start_router_auto(service_for_router, cfg); let _auto_handle = herocoordinator::router::start_router_auto(service_for_router, cfg);
info!("Router started with mycelium transport");
} else {
info!("Skipping router - using HTTP transport for supervisor communication");
} }
// Build RPC modules for both servers // Build RPC modules for both servers

View File

@@ -70,6 +70,21 @@ struct Cli {
help = "Bind port for WebSocket JSON-RPC server (default: 9653)" help = "Bind port for WebSocket JSON-RPC server (default: 9653)"
)] )]
api_ws_port: u16, api_ws_port: u16,
#[arg(
long = "supervisor-transport",
env = "SUPERVISOR_TRANSPORT",
default_value = "mycelium",
help = "Transport to use for supervisor communication: 'mycelium' or 'http' (default: mycelium)"
)]
supervisor_transport: String,
#[arg(
long = "supervisor-http-url",
env = "SUPERVISOR_HTTP_URL",
help = "HTTP URL for supervisor when using HTTP transport (e.g., http://127.0.0.1:3031)"
)]
supervisor_http_url: Option<String>,
} }
#[tokio::main] #[tokio::main]
@@ -99,8 +114,9 @@ async fn main() {
// Shared application state // Shared application state
let state = Arc::new(hero_coordinator::rpc::AppState::new(service)); let state = Arc::new(hero_coordinator::rpc::AppState::new(service));
// Start router workers (auto-discovered contexts) using a single global SupervisorHub (no separate inbound listener) // Start router workers (auto-discovered contexts) using a single global SupervisorHub
{ // Skip router if using HTTP transport (no mycelium needed)
if cli.supervisor_transport == "mycelium" {
let base_url = format!("http://{}:{}", cli.mycelium_ip, cli.mycelium_port); let base_url = format!("http://{}:{}", cli.mycelium_ip, cli.mycelium_port);
let mycelium = Arc::new( let mycelium = Arc::new(
hero_supervisor_openrpc_client::transports::MyceliumClient::new(&base_url) hero_supervisor_openrpc_client::transports::MyceliumClient::new(&base_url)
@@ -121,6 +137,9 @@ async fn main() {
}; };
// Per-context outbound delivery loops (replies handled by SupervisorHub) // Per-context outbound delivery loops (replies handled by SupervisorHub)
let _auto_handle = hero_coordinator::router::start_router_auto(service_for_router, cfg); let _auto_handle = hero_coordinator::router::start_router_auto(service_for_router, cfg);
info!("Router started with mycelium transport");
} else {
info!("Skipping router - using HTTP transport for supervisor communication");
} }
// Build RPC modules for both servers // Build RPC modules for both servers

43
docker/Dockerfile Normal file
View File

@@ -0,0 +1,43 @@
FROM ubuntu:24.04 AS base
# Includes a bunch of stuff hero will try to install later, so we don't have to
# do that on each new container launch
RUN apt update && apt install -y ssh wget unzip build-essential git redis-server sudo autoconf libtool iputils-ping net-tools rsync curl mc tmux libsqlite3-dev xz-utils git-lfs ufw libpq-dev
RUN wget https://github.com/vlang/v/releases/latest/download/v_linux.zip && \
unzip v_linux.zip && \
cd v && \
./v symlink
# Install bun.sh for docusaurus
RUN curl -fsSL https://bun.sh/install | bash
# Make a new stage so we can invalidate the cache by passing --no-cache-filter
FROM base AS clone
RUN git clone https://github.com/Incubaid/herolib /opt/herolib
RUN git clone https://github.com/incubaid/docusaurus_template /root/code/github/incubaid/docusaurus_template
# Warm the bun node modules cache
RUN cd /root/code/github/incubaid/docusaurus_template/template && /root/.bun/bin/bun install
RUN mkdir -p /root/.vmodules/incubaid
# Make a place for users to mount their ssh key file. We will copy to .ssh and
# change permissions in entrypoint script
RUN mkdir -p /root/ssh
COPY entrypoint.sh /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh
# Make a new stage so we can invalidate the cache by passing --no-cache-filter
FROM clone AS fetch
# Fetch to freshen the repos with minimal work
RUN cd /opt/herolib && git fetch
RUN cd /root/code/github/incubaid/docusaurus_template && git fetch
ENTRYPOINT ["/bin/entrypoint.sh"]

268
docker/Readme.md Normal file
View File

@@ -0,0 +1,268 @@
# `hero-git` Docker Image
This directory contains a reusable Docker image and entrypoint script used to build Hero / Horus binaries via the V-based **herolib** installers.
It is designed for two main use cases:
1. **Local development / testing**: run the herolib CLI (`hero`) and Horus installers in a clean, reproducible environment.
2. **CI / Release builds**: build Horus binaries inside this image (using `horus_full_install.vsh`) and export them as release artifacts.
---
## Files
### `Dockerfile`
The `Dockerfile` builds an Ubuntu-based image with all dependencies needed for `herolib` and its installers.
Key points:
* Base: `ubuntu:24.04`
* Installs system tooling:
* `ssh`, `wget`, `curl`, `unzip`, `build-essential`, `git`, `git-lfs`
* `redis-server`, `libsqlite3-dev`, `libpq-dev`, `autoconf`, `libtool`, `net-tools`, `iputils-ping`, `rsync`, `mc`, `tmux`, `ufw`, `xz-utils`
* Installs **V** compiler:
* Downloads `v_linux.zip` from the official V GitHub releases
* Unzips it and runs `./v symlink` so `v` is on the `PATH`
* Installs **Bun** (used for the docusaurus docs template):
* `curl -fsSL https://bun.sh/install | bash`
* Clones:
* `https://github.com/Incubaid/herolib` into `/opt/herolib`
* `https://github.com/incubaid/docusaurus_template` into `/root/code/github/incubaid/docusaurus_template`
* Warms Bun dependencies for the docusaurus template (for faster re-use):
* `cd /root/code/github/incubaid/docusaurus_template/template && bun install`
* Prepares directories:
* `/root/.vmodules/incubaid`: used by V module resolution
* `/root/ssh`: optional mount point for SSH keys
* Copies `entrypoint.sh` into the image and marks it executable.
The container's entrypoint is set to the custom script `/bin/entrypoint.sh`.
---
### `entrypoint.sh`
The entrypoint script is responsible for:
1. Starting **Redis** in the background.
2. Optionally configuring **SSH** (if keys are provided).
3. Selecting the correct `herolib` source (mounted or cloned).
4. Checking out the desired `herolib` branch.
5. Building the `hero` CLI with V and making it available as `/bin/hero`.
6. Finally, executing any command passed to `docker run`.
#### Behavior in detail
```bash
#!/bin/bash
set -euo pipefail
# Start Redis in the background
redis-server --daemonize yes
# Optional SSH setup: only if /root/ssh has keys
if [ -d /root/ssh ] && compgen -G "/root/ssh/*" > /dev/null; then
mkdir -p /root/.ssh
cp -r /root/ssh/* /root/.ssh/
chmod 600 /root/.ssh/*
eval "$(ssh-agent)"
ssh-add /root/.ssh/*
fi
# Support optionally bind-mounting a local herolib into the container.
# If /opt/herolib_mount exists, we use that; otherwise we use the cloned /opt/herolib
rm -f /root/.vmodules/freeflowuniverse/herolib
if [ -d "/opt/herolib_mount" ]; then
ln -s /opt/herolib_mount/lib /root/.vmodules/incubaid/herolib
cd /opt/herolib_mount
else
ln -s /opt/herolib/lib /root/.vmodules/incubaid/herolib
cd /opt/herolib
git fetch
git checkout "${HEROLIB_REF:-development}"
git pull
fi
cd cli
echo "Building hero..."
v -enable-globals hero.v > build.log 2>&1 || (cat build.log && exit 1)
ln -s "$(realpath hero)" /bin/hero
cd /root
# If a command was provided to `docker run`, execute it as-is.
# Example: docker run hero-git:latest bash -lc 'cd /opt/herolib && ./examples/installers/horus/horus_full_install.vsh'
if [ "$#" -gt 0 ]; then
exec "$@"
else
# No command passed → give an interactive shell
exec bash
fi
```
#### Important environment variables
* `HEROLIB_REF`
* Git ref (branch, tag, or commit) used when checking out `/opt/herolib`.
* Default: `development`.
* Example for CI: `HEROLIB_REF=development_nile_installers`.
#### Optional mounts
* `/opt/herolib_mount`
* If you bind-mount your local `herolib` repo here, the container will use it instead of the cloned `/opt/herolib`.
* Useful for local development when you want to test uncommitted changes.
* `/root/ssh`
* Optional directory containing SSH keys (e.g. `id_ed25519`) if you need to access private repositories.
* If present and non-empty, keys are copied to `/root/.ssh` and added to an SSH agent.
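For example, to make a single key available inside the container (the key filename here is just an illustration; any key placed under `/root/ssh` is picked up):
```bash
# Mount one SSH key read-only into the container's /root/ssh directory
docker run -it --rm \
  -v "$HOME/.ssh/id_ed25519:/root/ssh/id_ed25519:ro" \
  -e HEROLIB_REF=development_nile_installers \
  hero-git:latest
```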
---
## What this image is used for
### 1. Local development / experimentation
You can drop into the container and manually run installers or commands against `herolib`:
```bash
# Interactive shell with default branch
docker run -it --rm hero-git:latest
# Use a specific herolib ref
docker run -it --rm -e HEROLIB_REF=development_nile_installers hero-git:latest
# Mount a local herolib repo and use that instead of the cloned one
docker run -it --rm \
-v "$PWD/herolib:/opt/herolib_mount" \
hero-git:latest
```
Inside the container you can then run:
```bash
cd /opt/herolib
./examples/installers/horus/horus_full_install.vsh
```
### 2. CI / Release builds for Horus
In CI (e.g. the `release.yml` workflow), this image is used to build Horus binaries via the V-based installer script:
* The workflow:
1. Ensures `hero-git:latest` is available on the self-hosted runner.
2. Creates a local directory (e.g. `hero-bin/`) on the runner.
3. Runs the container, mounting `hero-bin/` into `/root/hero/bin`.
4. Inside the container, executes `./examples/installers/horus/horus_full_install.vsh` from `/opt/herolib`.
5. The installer writes the compiled Horus binaries into `/root/hero/bin`, which appear on the host inside `hero-bin/`.
6. The workflow then packages those binaries into `.tar.gz` archives and uploads them as release artifacts.
Example CI step:
```yaml
- name: Run horus_full_install installer in container
run: |
set -euxo pipefail
docker run --rm \
-v "$PWD/hero-bin:/root/hero/bin" \
-e HEROLIB_REF=development_nile_installers \
hero-git:latest \
bash -lc '
set -euxo pipefail
cd /opt/herolib
./examples/installers/horus/horus_full_install.vsh
echo "===== AFTER INSTALL, ls -R /root/hero ====="
ls -R /root/hero || true
'
```
After this step, `hero-bin/` on the host should contain binaries like:
* `supervisor`
* `coordinator`
* `horus`
* `osiris`
* `herorunner`
* `runner_osiris`
* `runner_sal`
These are then packaged and attached to the Gitea release.
---
## How to build the image
From the directory containing the `Dockerfile` and `entrypoint.sh`:
```bash
# Build the image (force a clean rebuild)
docker build --no-cache -t hero-git:latest .
# Verify entrypoint contents
docker run --rm --entrypoint cat hero-git:latest /bin/entrypoint.sh | tail -n 20
```
You should see the `exec "$@"` block at the end of the script, which ensures that commands passed to `docker run` are executed as-is inside the container.
---
## How to run the image (examples)
### Basic interactive usage
```bash
# Default branch (HEROLIB_REF=development)
docker run -it --rm hero-git:latest
# Specific herolib ref
export HEROLIB_REF=development_nile_installers
docker run -it --rm \
-e HEROLIB_REF=$HEROLIB_REF \
hero-git:latest
```
### Build Horus binaries into a host directory
```bash
mkdir -p hero-bin
docker run --rm \
-v "$PWD/hero-bin:/root/hero/bin" \
-e HEROLIB_REF=development_nile_installers \
hero-git:latest \
bash -lc '
set -euxo pipefail
cd /opt/herolib
./examples/installers/horus/horus_full_install.vsh
ls -al /root/hero/bin
'
ls -al hero-bin
```
You should now see the built Horus binaries on the host in `hero-bin/`.
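As a quick sanity check on the host (assuming the binaries expose `--help`, as the release instructions suggest):
```bash
# Confirm the artifacts are Linux executables and respond to --help
file hero-bin/*
./hero-bin/supervisor --help
```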
---
## Summary
* The `hero-git:latest` image encapsulates all dependencies required to build Horus via `herolib`.
* `entrypoint.sh`:
* Starts Redis
* Optionally configures SSH
* Selects and updates the `herolib` checkout
* Builds the `hero` CLI once
* Runs any command passed to the container (e.g. Horus installers)
* CI uses this image to run `horus_full_install.vsh` and collect Horus binaries from `/root/hero/bin` for releases.

44
docker/entrypoint.sh Normal file
View File

@@ -0,0 +1,44 @@
#!/bin/bash
set -euo pipefail
# Start Redis in the background
redis-server --daemonize yes
# Optional SSH setup: only if /root/ssh has keys
if [ -d /root/ssh ] && compgen -G "/root/ssh/*" > /dev/null; then
mkdir -p /root/.ssh
cp -r /root/ssh/* /root/.ssh/
chmod 600 /root/.ssh/*
eval "$(ssh-agent)"
ssh-add /root/.ssh/*
fi
# Support optionally bind-mounting a local herolib into the container.
# If /opt/herolib_mount exists, we use that; otherwise we use the cloned /opt/herolib
rm -f /root/.vmodules/freeflowuniverse/herolib
if [ -d "/opt/herolib_mount" ]; then
ln -s /opt/herolib_mount/lib /root/.vmodules/incubaid/herolib
cd /opt/herolib_mount
else
ln -s /opt/herolib/lib /root/.vmodules/incubaid/herolib
cd /opt/herolib
git fetch
git checkout "${HEROLIB_REF:-development}"
git pull
fi
# Build hero CLI once so it's available as /bin/hero
cd cli
echo "Building hero..."
v -enable-globals hero.v > build.log 2>&1 || (cat build.log && exit 1)
ln -s "$(realpath hero)" /bin/hero
cd /root
# If a command was provided to `docker run`, execute it as-is.
if [ "$#" -gt 0 ]; then
exec "$@"
else
# No command passed → give an interactive shell
exec bash
fi

View File

@@ -290,13 +290,19 @@ impl CoordinatorClient {
     async fn call<T: serde::de::DeserializeOwned>(&self, method: &str, params: Value) -> Result<T> {
         use jsonrpsee::core::client::ClientT;
-        use jsonrpsee::core::params::ArrayParams;
+        use jsonrpsee::core::params::ObjectParams;

-        let mut array_params = ArrayParams::new();
-        array_params.insert(params).map_err(|e| CoordinatorError::Rpc(e.to_string()))?;
+        // Coordinator expects params as named parameters (object), not positional (array)
+        // Convert the Value object to ObjectParams
+        let mut object_params = ObjectParams::new();
+        if let Value::Object(map) = params {
+            for (key, value) in map {
+                object_params.insert(&key, value).map_err(|e| CoordinatorError::Rpc(e.to_string()))?;
+            }
+        }

-        self.client
-            .request(method, array_params)
+        let result: T = self.client
+            .request(method, object_params)
             .await
             .map_err(|e| {
                 let err_str = e.to_string();
@@ -311,7 +317,9 @@ impl CoordinatorClient {
                 } else {
                     CoordinatorError::Rpc(err_str)
                 }
-            })
+            })?;
+        Ok(result)
     }
 }

View File

@@ -200,3 +200,213 @@ fn is_offsetdatetime_type(ty: &Type) -> bool {
} }
false false
} }
/// Derive macro for generating CRUD client methods for Osiris models
///
/// This macro generates async CRUD methods (create, get, update, delete, list) for a model,
/// plus any custom methods defined on the model.
///
/// # Example
///
/// ```rust
/// #[derive(OsirisModel)]
/// #[osiris(
/// collection = "calendar_events",
/// id_field = "event_id",
/// methods = ["reschedule", "cancel"]
/// )]
/// pub struct CalendarEvent {
/// pub event_id: String,
/// pub title: String,
/// pub start_time: i64,
/// // ...
/// }
/// ```
///
/// This generates methods on OsirisClient:
/// - `create_calendar_event(&self, event: CalendarEvent) -> Result<CalendarEvent>`
/// - `get_calendar_event(&self, event_id: &str) -> Result<CalendarEvent>`
/// - `update_calendar_event(&self, event_id: &str, event: CalendarEvent) -> Result<CalendarEvent>`
/// - `delete_calendar_event(&self, event_id: &str) -> Result<()>`
/// - `list_calendar_events(&self) -> Result<Vec<CalendarEvent>>`
/// - `reschedule_calendar_event(&self, event_id: &str, new_time: i64) -> Result<CalendarEvent>`
/// - `cancel_calendar_event(&self, event_id: &str) -> Result<CalendarEvent>`
#[proc_macro_derive(OsirisModel, attributes(osiris))]
pub fn derive_osiris_model(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as DeriveInput);
let model_name = &input.ident;
let model_name_snake = to_snake_case(&model_name.to_string());
// Parse attributes
let mut collection = model_name_snake.clone();
let mut id_field = "id".to_string();
let mut custom_methods: Vec<String> = Vec::new();
for attr in &input.attrs {
if attr.path().is_ident("osiris") {
if let Ok(meta_list) = attr.parse_args::<syn::MetaList>() {
// Parse nested attributes
for nested in meta_list.tokens.clone() {
let nested_str = nested.to_string();
if nested_str.starts_with("collection") {
if let Some(val) = extract_string_value(&nested_str) {
collection = val;
}
} else if nested_str.starts_with("id_field") {
if let Some(val) = extract_string_value(&nested_str) {
id_field = val;
}
} else if nested_str.starts_with("methods") {
custom_methods = extract_array_values(&nested_str);
}
}
}
}
}
// Generate method names
let create_method = syn::Ident::new(&format!("create_{}", model_name_snake), model_name.span());
let get_method = syn::Ident::new(&format!("get_{}", model_name_snake), model_name.span());
let update_method = syn::Ident::new(&format!("update_{}", model_name_snake), model_name.span());
let delete_method = syn::Ident::new(&format!("delete_{}", model_name_snake), model_name.span());
let list_method = syn::Ident::new(&format!("list_{}s", model_name_snake), model_name.span());
// Generate custom method implementations
let custom_method_impls: Vec<_> = custom_methods.iter().map(|method_name| {
let method_ident = syn::Ident::new(&format!("{}_{}", method_name, model_name_snake), model_name.span());
let rhai_call = format!("{}_{}", model_name_snake, method_name);
quote! {
pub async fn #method_ident(&self, id: &str, params: serde_json::Value) -> Result<#model_name, OsirisClientError> {
let script = format!(
r#"
let obj = {}::get("{}");
obj.{}(params);
obj.save();
obj
"#,
#collection, id, #method_name
);
let response = self.execute_script(&script).await?;
// Parse response and return model
// This is a simplified version - actual implementation would parse the job result
Err(OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
}
}).collect();
let expanded = quote! {
impl OsirisClient {
/// Create a new instance of #model_name
pub async fn #create_method(&self, model: &#model_name) -> Result<#model_name, OsirisClientError> {
let json = serde_json::to_string(model)
.map_err(|e| OsirisClientError::SerializationFailed(e.to_string()))?;
let script = format!(
r#"
let data = {};
let obj = {}::new(data);
obj.save();
obj
"#,
json, #collection
);
let response = self.execute_script(&script).await?;
// Parse response - simplified for now
Err(OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
/// Get an instance of #model_name by ID
pub async fn #get_method(&self, id: &str) -> Result<#model_name, OsirisClientError> {
let query = format!(r#"{{ "{}": "{}" }}"#, #id_field, id);
self.query::<#model_name>(#collection, &query).await
}
/// Update an existing #model_name
pub async fn #update_method(&self, id: &str, model: &#model_name) -> Result<#model_name, OsirisClientError> {
let json = serde_json::to_string(model)
.map_err(|e| OsirisClientError::SerializationFailed(e.to_string()))?;
let script = format!(
r#"
let obj = {}::get("{}");
let data = {};
obj.update(data);
obj.save();
obj
"#,
#collection, id, json
);
let response = self.execute_script(&script).await?;
Err(OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
/// Delete an instance of #model_name
pub async fn #delete_method(&self, id: &str) -> Result<(), OsirisClientError> {
let script = format!(
r#"
let obj = {}::get("{}");
obj.delete();
"#,
#collection, id
);
self.execute_script(&script).await?;
Ok(())
}
/// List all instances of #model_name
pub async fn #list_method(&self) -> Result<Vec<#model_name>, OsirisClientError> {
self.query_all::<#model_name>(#collection).await
}
#(#custom_method_impls)*
}
};
TokenStream::from(expanded)
}
fn to_snake_case(s: &str) -> String {
let mut result = String::new();
for (i, ch) in s.chars().enumerate() {
if ch.is_uppercase() {
if i > 0 {
result.push('_');
}
result.push(ch.to_lowercase().next().unwrap());
} else {
result.push(ch);
}
}
result
}
fn extract_string_value(s: &str) -> Option<String> {
// Extract value from "key = \"value\"" format
if let Some(eq_pos) = s.find('=') {
let value_part = &s[eq_pos + 1..].trim();
let cleaned = value_part.trim_matches(|c| c == '"' || c == ' ');
return Some(cleaned.to_string());
}
None
}
fn extract_array_values(s: &str) -> Vec<String> {
// Extract values from "methods = [\"method1\", \"method2\"]" format
if let Some(start) = s.find('[') {
if let Some(end) = s.find(']') {
let array_content = &s[start + 1..end];
return array_content
.split(',')
.map(|item| item.trim().trim_matches('"').to_string())
.filter(|item| !item.is_empty())
.collect();
}
}
Vec::new()
}

1
priv_key.bin Normal file
View File

@@ -0,0 +1 @@
(binary file, contents not shown)

View File

@@ -1,6 +1,5 @@
-!!include configure.md
+# Horus Installation
-// Install all components
 !!coordinator.install name:'development'
 !!supervisor.install name:'development'
 !!herorunner.install name:'development'

View File

@@ -2,11 +2,9 @@
 Starts all horus binaries

-!!include install.md
 // Start all services
-!!herocoordinator.start name:'default'
-!!supervisor.start name:'default'
-!!herorunner.start name:'default'
-!!osirisrunner.start name:'default'
-!!salrunner.start name:'default'
+!!herocoordinator.start name:'development'
+!!supervisor.start name:'development'
+!!herorunner.start name:'development'
+!!osirisrunner.start name:'development'
+!!salrunner.start name:'development'

170
tests/README.md Normal file
View File

@@ -0,0 +1,170 @@
# End-to-End Integration Tests
This directory contains end-to-end integration tests for the Horus system components. Each test file spawns the actual binary and tests it via its client library.
## Test Files
### `coordinator.rs`
End-to-end tests for the Hero Coordinator service.
**Tests:**
- Actor creation and loading
- Context creation and management
- Runner registration and configuration
- Job creation with dependencies
- Flow creation and DAG generation
- Flow execution (start)
**Prerequisites:**
- Redis server running on `127.0.0.1:6379`
- Port `9652` (HTTP API) and `9653` (WebSocket API) available
**Run:**
```bash
cargo test --test coordinator -- --test-threads=1
```
### `supervisor.rs`
End-to-end tests for the Hero Supervisor service.
**Tests:**
- OpenRPC discovery
- Runner registration and management
- Job creation and execution
- Job status tracking
- API key generation and management
- Authentication verification
- Complete workflow integration
**Prerequisites:**
- Redis server running on `127.0.0.1:6379`
- Port `3031` available
**Run:**
```bash
cargo test --test supervisor -- --test-threads=1
```
### `runner_hero.rs`
End-to-end tests for the Hero (Python) runner.
**Prerequisites:**
- Python 3 installed
- Redis server running
**Run:**
```bash
cargo test --test runner_hero -- --test-threads=1
```
### `runner_osiris.rs`
End-to-end tests for the Osiris (V language) runner.
**Prerequisites:**
- V language compiler installed
- Redis server running
**Run:**
```bash
cargo test --test runner_osiris -- --test-threads=1
```
### `runner_sal.rs`
End-to-end tests for the Sal (Rhai scripting) runner.
**Prerequisites:**
- Redis server running
**Run:**
```bash
cargo test --test runner_sal -- --test-threads=1
```
## Running All Tests
To run all end-to-end tests sequentially:
```bash
cargo test --tests -- --test-threads=1
```
## Important Notes
### Sequential Execution Required
All tests **must** be run with `--test-threads=1` because:
1. Each test spawns a server process that binds to specific ports
2. Tests share Redis databases and may conflict if run in parallel
3. Process cleanup needs to happen sequentially
### Redis Requirement
All tests require a Redis server running on `127.0.0.1:6379`. You can start Redis with:
```bash
redis-server
```
Or using Docker:
```bash
docker run -d -p 6379:6379 redis:latest
```
### Port Conflicts
If tests fail to start, check that the required ports are not in use:
- **Coordinator**: 9652 (HTTP), 9653 (WebSocket)
- **Supervisor**: 3031
- **Runners**: Various ports depending on configuration
You can check port usage with:
```bash
lsof -i :9652
lsof -i :3031
```
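If an earlier run left a server process behind, you can usually stop it by PID (adjust the port as needed):
```bash
# Kill whatever is still listening on the coordinator HTTP port
kill $(lsof -t -i :9652)
```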
### Test Isolation
Each test file:
1. Builds the binary using `escargot`
2. Starts the process with test-specific configuration
3. Runs tests against the running instance
4. Cleans up the process at the end
Tests within a file may share state through Redis, so they are designed to be idempotent and handle existing data.
### Debugging
To see detailed logs during test execution:
```bash
RUST_LOG=debug cargo test --test coordinator -- --test-threads=1 --nocapture
```
To run a specific test:
```bash
cargo test --test coordinator test_01_actor_create -- --test-threads=1 --nocapture
```
## Test Architecture
Each test file follows this pattern:
1. **Global Process Management**: Uses `lazy_static` and `Once` to ensure the server process starts only once
2. **Setup Helper**: Common setup code (e.g., `setup_prerequisites()`) to reduce duplication
3. **Sequential Tests**: Tests are numbered (e.g., `test_01_`, `test_02_`) to indicate execution order
4. **Cleanup Test**: A final `test_zz_cleanup()` ensures the process is terminated and ports are freed
## Contributing
When adding new tests:
1. Follow the existing naming convention (`test_NN_description`)
2. Use the setup helpers to avoid duplication
3. Make tests idempotent (handle existing data gracefully)
4. Add cleanup in the `test_zz_cleanup()` function
5. Update this README with any new prerequisites or test descriptions

392
tests/coordinator.rs Normal file
View File

@@ -0,0 +1,392 @@
//! End-to-End Integration Tests for Hero Coordinator
//!
//! Tests coordinator flow management functionality against a running coordinator instance.
//! The coordinator binary is automatically started and stopped for each test run.
//!
//! **IMPORTANT**: Run with `--test-threads=1` to ensure tests run sequentially:
//! ```
//! cargo test --test coordinator -- --test-threads=1
//! ```
use hero_coordinator_client::{CoordinatorClient, models::*};
use std::collections::HashMap;
use std::sync::Once;
use std::process::Child;
/// Test configuration
const COORDINATOR_URL: &str = "http://127.0.0.1:9652";
const TEST_CONTEXT_ID: u32 = 2;
const TEST_CALLER_ID: u32 = 11001;
const TEST_FLOW_ID: u32 = 13001;
const BASE_JOB_ID: u32 = 20000;
use std::sync::Mutex;
use lazy_static::lazy_static;
lazy_static! {
static ref COORDINATOR_PROCESS: Mutex<Option<Child>> = Mutex::new(None);
}
/// Global initialization flag
static INIT: Once = Once::new();
/// Initialize and start the coordinator binary (called once)
async fn init_coordinator() {
INIT.call_once(|| {
// Register cleanup handler
let _ = std::panic::catch_unwind(|| {
ctrlc::set_handler(move || {
cleanup_coordinator();
std::process::exit(0);
}).ok();
});
// Use escargot to build and get the binary path
let binary = escargot::CargoBuild::new()
.bin("coordinator")
.package("hero-coordinator")
.run()
.expect("Failed to build coordinator binary");
// Start the coordinator binary with HTTP transport (no mycelium needed)
let child = binary
.command()
.env("RUST_LOG", "info")
.args(&[
"--api-http-port",
"9652",
"--api-ws-port",
"9653",
"--redis-addr",
"127.0.0.1:6379",
"--supervisor-transport",
"http",
])
.spawn()
.expect("Failed to start coordinator");
*COORDINATOR_PROCESS.lock().unwrap() = Some(child);
// Wait for server to be ready with simple TCP check
use std::net::TcpStream;
use std::time::Duration;
println!("⏳ Waiting for coordinator to start...");
for i in 0..30 {
std::thread::sleep(Duration::from_millis(500));
// Try to connect to the port
if TcpStream::connect_timeout(
&"127.0.0.1:9652".parse().unwrap(),
Duration::from_millis(100)
).is_ok() {
// Give it more time to fully initialize
std::thread::sleep(Duration::from_secs(2));
println!("✅ Coordinator ready after ~{}ms", (i * 500) + 2000);
return;
}
}
panic!("Coordinator failed to start within 15 seconds");
});
}
/// Cleanup coordinator process
fn cleanup_coordinator() {
if let Ok(mut guard) = COORDINATOR_PROCESS.lock() {
if let Some(mut child) = guard.take() {
println!("🧹 Cleaning up coordinator process...");
let _ = child.kill();
let _ = child.wait();
}
}
}
/// Helper to create a test client
async fn create_client() -> CoordinatorClient {
// Ensure coordinator is running
init_coordinator().await;
CoordinatorClient::new(COORDINATOR_URL)
.expect("Failed to create coordinator client")
}
#[tokio::test]
async fn test_01_flow_create_simple() {
println!("\n🧪 Test: flow.create (simple flow)");
let client = create_client().await;
// Create jobs for the flow
let job_ids = vec![BASE_JOB_ID, BASE_JOB_ID + 1];
for (i, job_id) in job_ids.iter().enumerate() {
let job = JobCreate {
id: *job_id,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
script: format!("print('job {}')", i),
script_type: ScriptType::Python,
timeout: 60,
retries: 0,
env_vars: HashMap::new(),
prerequisites: vec![],
depends: if i == 0 { vec![] } else { vec![job_ids[i - 1]] },
};
let result = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
if let Err(ref e) = result {
println!(" Job {} creation error: {:?}", job_id, e);
}
assert!(result.is_ok(), "Job {} should be created", job_id);
}
// Create flow
let flow_create = FlowCreate {
id: TEST_FLOW_ID,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
jobs: job_ids.clone(),
env_vars: HashMap::new(),
};
let result = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
if let Err(ref e) = result {
println!(" Error: {:?}", e);
}
assert!(result.is_ok(), "flow.create_or_load should succeed");
let flow = result.unwrap();
assert_eq!(flow.id, TEST_FLOW_ID);
assert_eq!(flow.jobs, job_ids);
println!("✅ flow.create works - flow: {}, jobs: {:?}", flow.id, flow.jobs);
}
#[tokio::test]
async fn test_02_flow_load() {
println!("\n🧪 Test: flow.load");
let client = create_client().await;
// Create a flow first (reuse from test_01)
let job_ids = vec![BASE_JOB_ID, BASE_JOB_ID + 1];
for (i, job_id) in job_ids.iter().enumerate() {
let job = JobCreate {
id: *job_id,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
script: format!("print('job {}')", i),
script_type: ScriptType::Python,
timeout: 60,
retries: 0,
env_vars: HashMap::new(),
prerequisites: vec![],
depends: if i == 0 { vec![] } else { vec![job_ids[i - 1]] },
};
let _ = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
}
let flow_create = FlowCreate {
id: TEST_FLOW_ID,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
jobs: job_ids.clone(),
env_vars: HashMap::new(),
};
let _ = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
// Load the flow
let result = client.flow_load(TEST_CONTEXT_ID, TEST_FLOW_ID).await;
if let Err(ref e) = result {
println!(" Error: {:?}", e);
}
assert!(result.is_ok(), "flow.load should succeed");
let flow = result.unwrap();
assert_eq!(flow.id, TEST_FLOW_ID);
assert_eq!(flow.jobs, job_ids);
println!("✅ flow.load works - loaded flow: {}", flow.id);
}
#[tokio::test]
async fn test_03_flow_dag() {
println!("\n🧪 Test: flow.dag");
let client = create_client().await;
// Create jobs with dependencies
let job_ids = vec![BASE_JOB_ID + 100, BASE_JOB_ID + 101, BASE_JOB_ID + 102];
for (i, job_id) in job_ids.iter().enumerate() {
let job = JobCreate {
id: *job_id,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
script: format!("print('dag job {}')", i),
script_type: ScriptType::Python,
timeout: 60,
retries: 0,
env_vars: HashMap::new(),
prerequisites: vec![],
depends: if i == 0 { vec![] } else { vec![job_ids[i - 1]] },
};
let _ = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
}
let flow_id = TEST_FLOW_ID + 1;
let flow_create = FlowCreate {
id: flow_id,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
jobs: job_ids.clone(),
env_vars: HashMap::new(),
};
let _ = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
// Get the DAG
let result = client.flow_dag(TEST_CONTEXT_ID, flow_id).await;
if let Err(ref e) = result {
println!(" Error: {:?}", e);
}
assert!(result.is_ok(), "flow.dag should succeed");
let dag = result.unwrap();
assert_eq!(dag.flow_id, flow_id);
assert_eq!(dag.nodes.len(), 3);
assert_eq!(dag.edges.len(), 2); // Two edges for the chain
println!("✅ flow.dag works - flow: {}, nodes: {}, edges: {}",
dag.flow_id, dag.nodes.len(), dag.edges.len());
}
#[tokio::test]
async fn test_04_flow_start() {
println!("\n🧪 Test: flow.start");
let client = create_client().await;
// Create a simple flow
let job_id = BASE_JOB_ID + 200;
let job = JobCreate {
id: job_id,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
script: "print('start test')".to_string(),
script_type: ScriptType::Python,
timeout: 60,
retries: 0,
env_vars: HashMap::new(),
prerequisites: vec![],
depends: vec![],
};
let _ = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
let flow_id = TEST_FLOW_ID + 2;
let flow_create = FlowCreate {
id: flow_id,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
jobs: vec![job_id],
env_vars: HashMap::new(),
};
let _ = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
// Start the flow
let result = client.flow_start(TEST_CONTEXT_ID, flow_id).await;
match result {
Ok(started) => {
println!("✅ flow.start works - started: {}", started);
}
Err(e) => {
println!("⚠️ flow.start: {:?} (runner may not be available)", e);
// This is expected if no actual runner is listening
}
}
}
#[tokio::test]
async fn test_05_message_create() {
println!("\n🧪 Test: message.create");
let client = create_client().await;
let message_create = MessageCreate {
id: 1,
context_id: TEST_CONTEXT_ID,
runner_id: 12001,
job_id: BASE_JOB_ID,
message_type: MessageType::JobRun,
format: MessageFormatType::JsonRpc,
payload: r#"{"method":"job.run","params":{}}"#.to_string(),
};
let result = client.message_create(TEST_CONTEXT_ID, message_create).await;
match result {
Ok(message) => {
assert_eq!(message.id, 1);
assert_eq!(message.context_id, TEST_CONTEXT_ID);
println!("✅ message.create works - message: {}", message.id);
}
Err(e) => {
println!("⚠️ message.create: {:?} (may already exist)", e);
}
}
}
#[tokio::test]
async fn test_06_message_load() {
println!("\n🧪 Test: message.load");
let client = create_client().await;
// Create a message first
let message_create = MessageCreate {
id: 2,
context_id: TEST_CONTEXT_ID,
runner_id: 12001,
job_id: BASE_JOB_ID,
message_type: MessageType::JobRun,
format: MessageFormatType::JsonRpc,
payload: r#"{"method":"job.run","params":{}}"#.to_string(),
};
let _ = client.message_create(TEST_CONTEXT_ID, message_create).await;
// Load the message
let result = client.message_load(TEST_CONTEXT_ID, 2).await;
if let Err(ref e) = result {
println!(" Error: {:?}", e);
}
match result {
Ok(message) => {
assert_eq!(message.id, 2);
assert_eq!(message.context_id, TEST_CONTEXT_ID);
println!("✅ message.load works - loaded message: {}", message.id);
}
Err(_) => {
println!("⚠️ message.load failed (message may not exist)");
}
}
}
/// Final test that ensures cleanup happens
#[tokio::test]
async fn test_zz_cleanup() {
println!("🧹 Running cleanup...");
cleanup_coordinator();
tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;
use std::net::TcpStream;
let port_free = TcpStream::connect_timeout(
&"127.0.0.1:9652".parse().unwrap(),
std::time::Duration::from_millis(100)
).is_err();
assert!(port_free, "Port 9652 should be free after cleanup");
println!("✅ Cleanup complete - port 9652 is free");
}