2 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| peternashaat | 1208ef00a1 | Add CI/CD workflows with docs | 2025-11-19 09:01:10 +00:00 |
| peternashaat | be920e1eb6 | Add CI/CD workflows with docs | 2025-11-19 08:45:56 +00:00 |

26 changed files with 1633 additions and 2443 deletions

.gitea/workflows/README.md Normal file

@@ -0,0 +1,392 @@
# Gitea Actions Workflows Documentation
This directory contains the CI/CD workflows for the Horus project using Gitea Actions.
## Overview
The Horus project uses two main workflows:
1. **[ci.yml](./ci.yml)** - Continuous Integration workflow
2. **[release.yml](./release.yml)** - Release automation workflow
## Workflow Files
### ci.yml - Continuous Integration
**Purpose**: Automatically build, test, and validate code quality on every push and pull request.
**Triggers**:
- Push to any branch
- Pull request events (opened, synchronize, reopened)
**What it does**:
1. Sets up Rust toolchain
2. Caches dependencies for faster builds
3. Runs code quality checks (check, test, clippy, fmt)
4. Builds all 7 binaries in release mode
5. Uploads binaries as artifacts
**Duration**: ~5-15 minutes (first run), ~2-5 minutes (cached runs)
**Artifacts**: Binaries are stored for 7 days and can be downloaded from the Actions tab
---
### release.yml - Release Automation
**Purpose**: Automatically create GitHub-style releases with downloadable binaries when version tags are pushed.
**Triggers**:
- Tags matching `v*.*.*` pattern (e.g., `v1.0.0`, `v2.1.3`)
**What it does**:
1. Builds optimized release binaries
2. Strips debug symbols to reduce size
3. Packages each binary as a tarball
4. Generates SHA256 checksums
5. Creates a Gitea release with all artifacts attached
**Duration**: ~5-10 minutes
**Artifacts**: Permanently attached to the release
---
## Binaries Built
Both workflows build the following 7 binaries:
| Binary Name | Description |
|-------------|-------------|
| `supervisor` | Hero Supervisor service |
| `coordinator` | Hero Coordinator service |
| `horus` | Horus main binary |
| `osiris` | Osiris server |
| `herorunner` | Hero runner |
| `runner_osiris` | Osiris runner |
| `runner_sal` | SAL runner |
---
## Usage Guide
### Testing Code Changes
Every time you push code or create a pull request, the CI workflow automatically runs:
```bash
# Make your changes
git add .
git commit -m "Your changes"
git push origin your-branch
# Or create a pull request
# The CI workflow will run automatically
```
**Check Results**:
1. Go to your Gitea repository
2. Click on the **Actions** tab
3. Find your workflow run
4. Click to see detailed logs
---
### Creating a Release
To create a new release with binaries:
```bash
# 1. Ensure your code is ready for release
# 2. Create a version tag (use semantic versioning)
git tag v1.0.0
# 3. Push the tag
git push origin v1.0.0
# 4. The release workflow will automatically:
# - Build all binaries
# - Create a release
# - Attach binaries and checksums
```
**View Release**:
1. Go to your Gitea repository
2. Click on the **Releases** tab
3. Your new release will be listed with downloadable artifacts
---
### Downloading Release Binaries
Users can download binaries from releases:
```bash
# Download a specific binary
wget https://git.ourworld.tf/peternashaat/horus/releases/download/v1.0.0/supervisor-v1.0.0-linux-x86_64.tar.gz
# Extract
tar -xzf supervisor-v1.0.0-linux-x86_64.tar.gz
# Make executable
chmod +x supervisor
# Optionally move to system path
sudo mv supervisor /usr/local/bin/
# Verify it works
supervisor --help
```
**Verify Integrity**:
```bash
# Download checksums
wget https://git.ourworld.tf/peternashaat/horus/releases/download/v1.0.0/checksums.txt
# Verify the downloaded tarball(s); --ignore-missing skips entries you did not download
sha256sum -c checksums.txt --ignore-missing
```
---
## Workflow Requirements
### Runner Configuration
Your Gitea Actions runner must be configured with these labels:
- `ubuntu-latest` (recommended)
- `ubuntu-22.04` (alternative)
- `ubuntu-20.04` (alternative)
### Permissions
The workflows require:
- Read access to repository code
- Write access to create releases (for release.yml)
- Access to `GITHUB_TOKEN` secret (automatically provided by Gitea)
### Dependencies
The workflows automatically install:
- Rust stable toolchain
- rustfmt (code formatter)
- clippy (linter)
No manual setup required!
---
## Caching Strategy
The CI workflow uses three levels of caching to speed up builds:
1. **Cargo Registry Cache** - Downloaded crate metadata
2. **Cargo Index Cache** - Git index of crates.io
3. **Build Cache** - Compiled dependencies
**Benefits**:
- First build: ~10-15 minutes
- Cached builds: ~2-5 minutes
- Saves bandwidth and runner resources
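For reference, this is the registry cache step from [ci.yml](./ci.yml); the index cache (`~/.cargo/git`) and build cache (`target/`) follow the same pattern with their own keys:
```yaml
- name: Cache cargo registry
  uses: actions/cache@v4
  with:
    path: ~/.cargo/registry
    key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: |
      ${{ runner.os }}-cargo-registry-
```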
---
## Troubleshooting
### CI Workflow Fails
**Check these common issues**:
1. **Compilation Errors**
- Review the "Check code" step logs
- Fix Rust compilation errors locally first
2. **Test Failures**
- Review the "Run tests" step logs
- Run `cargo test --workspace` locally to reproduce
3. **Clippy Warnings**
- Review the "Run clippy" step logs
- Fix with: `cargo clippy --workspace --fix`
4. **Formatting Issues**
- Review the "Check formatting" step logs
- Fix with: `cargo fmt --all`
5. **Runner Offline**
- Check if your Gitea Actions runner is running
- Verify runner labels match workflow requirements
### Release Workflow Fails
**Check these common issues**:
1. **Tag Format**
- Ensure tag matches `v*.*.*` pattern
- Examples: `v1.0.0`, `v2.1.3`, `v0.1.0-beta`
2. **Binary Not Found**
- Check if all binaries built successfully
- Review the "Build release binaries" step logs
3. **Permission Denied**
- Ensure runner has write access to create releases
- Check repository settings
4. **Release Already Exists**
- Delete the existing release first
- Or use a different version tag
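If a release for a given version already exists and you need to re-cut it, delete the release in the Gitea UI, then remove and re-push the tag. A sketch, assuming the tag is `v1.0.0`:
```bash
# Delete the local tag
git tag -d v1.0.0
# Delete the remote tag (the release itself is removed via the Gitea UI)
git push origin --delete v1.0.0
# Re-create and push the tag to trigger the release workflow again
git tag v1.0.0
git push origin v1.0.0
```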
---
## Best Practices
### Version Tagging
Use [Semantic Versioning](https://semver.org/):
- `v1.0.0` - Major release (breaking changes)
- `v1.1.0` - Minor release (new features)
- `v1.0.1` - Patch release (bug fixes)
- `v1.0.0-beta.1` - Pre-release
### Commit Messages
Write clear commit messages for better release notes:
```bash
git commit -m "feat: Add new authentication system"
git commit -m "fix: Resolve memory leak in supervisor"
git commit -m "docs: Update installation guide"
```
### Testing Before Release
Always test before creating a release:
```bash
# Run all checks locally
cargo check --workspace
cargo test --workspace
cargo clippy --workspace -- -D warnings
cargo fmt --all -- --check
# Build release binaries locally
cargo build --workspace --release
# Test the binaries
./target/release/supervisor --help
```
---
## Workflow Customization
### Changing Rust Version
Edit the toolchain in both workflows:
```yaml
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: 1.75.0 # Specify exact version
```
### Adding More Binaries
If you add new binaries to the workspace:
1. Update `ci.yml` - Add to the upload artifacts step
2. Update `release.yml` - Add to strip and package steps
3. Update this README
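As a sketch, adding a hypothetical `newtool` binary (the name is only illustrative) would mean extending the artifact list in `ci.yml` and the strip/package steps in `release.yml` roughly like this:
```yaml
# ci.yml - "Upload build artifacts" step: add the new path
path: |
  target/release/supervisor
  target/release/newtool            # hypothetical new binary

# release.yml - "Strip binaries" step: add a strip line
run: |
  strip target/release/newtool || true

# release.yml - "Package binaries" step: add the name to the loop
for binary in supervisor coordinator horus osiris herorunner runner_osiris runner_sal newtool; do
```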
### Changing Artifact Retention
In `ci.yml`, modify the retention period:
```yaml
retention-days: 30 # Keep for 30 days instead of 7
```
### Adding Build Matrix
To build for multiple platforms, add a matrix strategy:
```yaml
jobs:
build:
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
```
---
## Monitoring
### View Workflow Status
**In Gitea UI**:
1. Repository → Actions tab
2. See all workflow runs
3. Click any run for detailed logs
**Via Git**:
```bash
# List recent tags
git tag -l
# Show tag details
git show v1.0.0
```
### Workflow Badges
Add status badges to your README:
```markdown
![CI Status](https://git.ourworld.tf/peternashaat/horus/actions/workflows/ci.yml/badge.svg)
```
---
## Security Considerations
### Secrets
The workflows use `GITHUB_TOKEN` which is automatically provided by Gitea. This token:
- Has repository-scoped permissions
- Expires after the workflow run
- Cannot be accessed by pull requests from forks (for security)
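In [release.yml](./release.yml) the token is passed to the release action like this:
```yaml
- name: Create Release
  uses: actions/gitea-release@v1
  with:
    files: release-artifacts/*
    token: ${{ secrets.GITHUB_TOKEN }}   # injected by Gitea at runtime
```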
### Binary Verification
Always verify downloaded binaries:
```bash
# Compute the SHA256 hash of a downloaded tarball
sha256sum supervisor-v1.0.0-linux-x86_64.tar.gz
# Or check everything you downloaded against the published list
sha256sum -c checksums.txt --ignore-missing
```
### Supply Chain Security
The workflows:
- Use pinned action versions (`@v4`, `@v1`)
- Build from source (no pre-built binaries)
- Generate checksums for verification
---
## Additional Resources
- [Gitea Actions Documentation](https://docs.gitea.com/usage/actions/overview)
- [GitHub Actions Syntax](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions) (Gitea is compatible)
- [Rust CI Best Practices](https://doc.rust-lang.org/cargo/guide/continuous-integration.html)
- [Semantic Versioning](https://semver.org/)
---
## Support
For issues with:
- **Workflows**: Check the troubleshooting section above
- **Horus Project**: See the main [README.md](../../README.md)
- **Gitea Actions**: Consult [Gitea documentation](https://docs.gitea.com)
For detailed line-by-line explanation of the workflows, see [WORKFLOW_EXPLAINED.md](./WORKFLOW_EXPLAINED.md).


@@ -0,0 +1,881 @@
# Gitea Actions Workflows - Line-by-Line Explanation
This document provides a detailed explanation of every line in the CI/CD workflows, explaining what each line does, why it's needed, and how it works.
---
## Table of Contents
1. [CI Workflow (ci.yml)](#ci-workflow-ciyml)
2. [Release Workflow (release.yml)](#release-workflow-releaseyml)
3. [Testing Guide](#testing-guide)
4. [Common Patterns Explained](#common-patterns-explained)
---
## CI Workflow (ci.yml)
### Header Section
```yaml
name: CI
```
**What**: Defines the workflow name displayed in the Gitea Actions UI
**Why**: Helps identify this workflow among multiple workflows
**How**: Gitea reads this and displays "CI" in the Actions tab
---
### Trigger Configuration
```yaml
on:
```
**What**: Starts the trigger configuration section
**Why**: Tells Gitea when to run this workflow
**How**: Gitea monitors repository events and matches them against these triggers
```yaml
push:
branches:
- '**'
```
**What**: Triggers workflow on push to any branch
**Why**: We want to test every commit on every branch
**How**: `'**'` is a glob pattern matching all branch names (main, develop, feature/*, etc.)
**Alternative**: Use `- main` to only run on main branch
```yaml
pull_request:
types: [opened, synchronize, reopened]
```
**What**: Triggers workflow on pull request events
**Why**: Test code before merging into main branch
**How**:
- `opened` - When PR is first created
- `synchronize` - When new commits are pushed to the PR
- `reopened` - When a closed PR is reopened
**Note**: Does not run on `closed` to save resources
---
### Job Definition
```yaml
jobs:
```
**What**: Starts the jobs section
**Why**: Workflows contain one or more jobs that run tasks
**How**: Each job runs in a fresh virtual environment
```yaml
build-and-test:
```
**What**: Job identifier (internal name)
**Why**: Unique ID for this job, used in logs and dependencies
**How**: Must be unique within the workflow, use kebab-case
```yaml
name: Build & Test
```
**What**: Human-readable job name
**Why**: Displayed in the Gitea UI for better readability
**How**: Shows in the Actions tab instead of "build-and-test"
```yaml
runs-on: ubuntu-latest
```
**What**: Specifies which runner to use
**Why**: Determines the OS and environment for the job
**How**: Gitea matches this label with available runners
**Note**: Your runner must have the `ubuntu-latest` label configured
---
### Steps Section
```yaml
steps:
```
**What**: Starts the list of steps to execute
**Why**: Steps are the individual tasks that make up a job
**How**: Steps run sequentially in order
---
#### Step 1: Checkout Code
```yaml
- name: Checkout code
```
**What**: Human-readable step name
**Why**: Helps identify this step in logs
**How**: Displayed in the workflow run UI
```yaml
uses: actions/checkout@v4
```
**What**: Uses a pre-built action to checkout code
**Why**: Clones your repository into the runner's workspace
**How**:
- `actions/checkout` - GitHub's official checkout action (Gitea compatible)
- `@v4` - Pins to version 4 for stability
**What it does**:
1. Clones the repository
2. Checks out the commit that triggered the workflow
3. Sets up git configuration
**Why needed**: Without this, the runner has no access to your code
---
#### Step 2: Setup Rust
```yaml
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
```
**What**: Installs Rust toolchain on the runner
**Why**: Needed to compile Rust code
**How**: Downloads and installs rustc, cargo, and related tools
**What it does**:
1. Installs Rust stable version
2. Adds cargo to PATH
3. Configures cargo home directory
```yaml
with:
toolchain: stable
```
**What**: Specifies which Rust version to install
**Why**: `stable` ensures we use the latest stable Rust release
**How**: Downloads from rust-lang.org
**Alternatives**: `nightly`, `beta`, or specific version like `1.75.0`
```yaml
components: rustfmt, clippy
```
**What**: Installs additional Rust components
**Why**:
- `rustfmt` - Code formatter (needed for formatting check)
- `clippy` - Linter (needed for lint check)
**How**: Installed via rustup alongside the toolchain
---
#### Step 3-5: Caching
**Why caching is needed**:
- Rust compilation is slow
- Dependencies rarely change
- Caching speeds up builds from ~15 minutes to ~2 minutes
```yaml
- name: Cache cargo registry
uses: actions/cache@v4
```
**What**: Caches the Cargo registry
**Why**: Stores downloaded crate metadata to avoid re-downloading
**How**: Uses GitHub's cache action (Gitea compatible)
```yaml
with:
path: ~/.cargo/registry
```
**What**: Directory to cache
**Why**: This is where Cargo stores downloaded crate files
**How**: Entire directory is compressed and stored
```yaml
key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
```
**What**: Unique cache key
**Why**: Identifies this specific cache
**How**:
- `${{ runner.os }}` - OS name (e.g., "Linux")
- `cargo-registry` - Cache identifier
- `${{ hashFiles('**/Cargo.lock') }}` - Hash of Cargo.lock
**Result**: Key like `Linux-cargo-registry-a1b2c3d4`
**When it changes**: When Cargo.lock changes (dependencies updated)
```yaml
restore-keys: |
${{ runner.os }}-cargo-registry-
```
**What**: Fallback cache keys
**Why**: If exact match not found, use partial match
**How**: Tries to find any cache starting with `Linux-cargo-registry-`
**Benefit**: Even with dependency changes, most cache is still valid
**Same pattern repeats for**:
- Cargo index cache (`~/.cargo/git`)
- Build cache (`target/`)
---
#### Step 6: Check Code
```yaml
- name: Check code
run: cargo check --workspace --verbose
```
**What**: Runs `cargo check` command
**Why**: Fast compilation check without producing binaries
**How**:
- `cargo check` - Compiles code but doesn't generate executables
- `--workspace` - Check all packages in the workspace
- `--verbose` - Show detailed output
**Benefit**: Catches compilation errors quickly (~2x faster than full build)
**Exit code**: Non-zero if compilation fails, which fails the workflow
---
#### Step 7: Run Tests
```yaml
- name: Run tests
run: cargo test --workspace --verbose
```
**What**: Runs all tests in the workspace
**Why**: Ensures code changes don't break functionality
**How**:
- Compiles test code
- Runs all `#[test]` functions
- Runs integration tests in `tests/` directory
**Exit code**: Non-zero if any test fails
---
#### Step 8: Run Clippy
```yaml
- name: Run clippy
run: cargo clippy --workspace -- -D warnings
```
**What**: Runs Rust linter
**Why**: Catches common mistakes and enforces best practices
**How**:
- `cargo clippy` - Runs the clippy linter
- `--workspace` - Lint all packages
- `--` - Separator between cargo args and clippy args
- `-D warnings` - Treat warnings as errors
**Result**: Fails if any clippy warnings are found
**Examples of what clippy catches**:
- Unused variables
- Inefficient code patterns
- Potential bugs
---
#### Step 9: Check Formatting
```yaml
- name: Check formatting
run: cargo fmt --all -- --check
```
**What**: Checks if code is properly formatted
**Why**: Enforces consistent code style
**How**:
- `cargo fmt` - Rust formatter
- `--all` - Check all packages
- `--` - Separator
- `--check` - Don't modify files, just check
**Exit code**: Non-zero if any file is not formatted
**To fix locally**: Run `cargo fmt --all` without `--check`
---
#### Step 10: Build Release Binaries
```yaml
- name: Build release binaries
run: cargo build --workspace --release --verbose
```
**What**: Builds all binaries with optimizations
**Why**: Ensures release builds work and produces artifacts
**How**:
- `cargo build` - Compile code
- `--workspace` - Build all packages
- `--release` - Enable optimizations (from Cargo.toml profile.release)
- `--verbose` - Show detailed output
**Result**: Binaries in `target/release/` directory
**Time**: ~5-10 minutes (first run), ~2-5 minutes (cached)
---
#### Step 11: List Binaries
```yaml
- name: List built binaries
run: |
echo "Built binaries:"
ls -lh target/release/ | grep -E '^-.*x.*'
```
**What**: Lists built executable files
**Why**: Helps verify all binaries were built successfully
**How**:
- `ls -lh` - List files with human-readable sizes
- `grep -E '^-.*x.*'` - Filter for executable files
**Output**: Shows binary names and sizes in logs
---
#### Step 12: Upload Artifacts
```yaml
- name: Upload build artifacts
uses: actions/upload-artifact@v4
```
**What**: Uploads files to Gitea for download
**Why**: Makes binaries available for testing without creating a release
**How**: Uses GitHub's upload action (Gitea compatible)
```yaml
with:
name: binaries-${{ github.sha }}
```
**What**: Artifact name
**Why**: Unique name for this set of binaries
**How**: `${{ github.sha }}` - Git commit SHA (e.g., `binaries-a1b2c3d4`)
```yaml
path: |
target/release/supervisor
target/release/coordinator
target/release/horus
target/release/osiris
target/release/herorunner
target/release/runner_osiris
target/release/runner_sal
```
**What**: Files to upload
**Why**: These are the 7 binaries we want to preserve
**How**: Each line is a file path, `|` allows multi-line list
```yaml
retention-days: 7
```
**What**: How long to keep artifacts
**Why**: Saves storage space by auto-deleting old artifacts
**How**: Gitea automatically deletes after 7 days
```yaml
if-no-files-found: warn
```
**What**: What to do if files don't exist
**Why**: `warn` logs a warning but doesn't fail the workflow
**How**: Useful if some binaries fail to build
**Alternatives**: `error` (fail workflow), `ignore` (silent)
---
## Release Workflow (release.yml)
### Header and Triggers
```yaml
name: Release
```
**What**: Workflow name
**Why**: Identifies this as the release workflow
```yaml
on:
push:
tags:
- 'v*.*.*'
```
**What**: Triggers on version tags
**Why**: Only create releases for version tags
**How**:
- `tags:` - Watches for tag pushes
- `'v*.*.*'` - Glob pattern matching semantic versions
**Matches**: `v1.0.0`, `v2.1.3`, `v0.1.0`
**Doesn't match**: `v1.0`, `1.0.0`, `release-1.0.0`
---
### Job Setup
```yaml
jobs:
build-release:
name: Build Release Binaries
runs-on: ubuntu-latest
```
**Same as CI workflow** - See above for explanation
---
### Steps 1-2: Checkout and Setup
**Same as CI workflow** - See above for explanation
---
#### Step 3: Extract Version
```yaml
- name: Extract version from tag
id: version
```
**What**: Names this step and gives it an ID
**Why**: `id` allows other steps to reference this step's outputs
**How**: Use `${{ steps.version.outputs.VERSION }}` in later steps
```yaml
run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
```
**What**: Extracts version from tag name
**Why**: Needed for naming release files
**How**:
- `GITHUB_REF` - Full ref like `refs/tags/v1.0.0`
- `${GITHUB_REF#refs/tags/}` - Bash parameter expansion, removes `refs/tags/` prefix
- Result: `v1.0.0`
- `>> $GITHUB_OUTPUT` - Sets output variable
**Usage**: Later steps access via `${{ steps.version.outputs.VERSION }}`
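You can reproduce the parameter expansion in any shell to see what the step produces:
```bash
GITHUB_REF="refs/tags/v1.0.0"
# Strip the "refs/tags/" prefix, leaving just the tag name
echo "${GITHUB_REF#refs/tags/}"   # prints: v1.0.0
```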
---
#### Step 4: Build Release Binaries
```yaml
- name: Build release binaries
run: cargo build --workspace --release --verbose
```
**Same as CI workflow** - Builds optimized binaries
**Why here**: Need fresh release builds for distribution
---
#### Step 5: Strip Binaries
```yaml
- name: Strip binaries
run: |
strip target/release/supervisor || true
strip target/release/coordinator || true
# ... etc for all 7 binaries
```
**What**: Removes debug symbols from binaries
**Why**: Reduces binary size by 50-90%
**How**:
- `strip` - Linux command that removes debugging symbols
- `|| true` - Don't fail if strip fails (some binaries might not exist)
**Result**: Smaller binaries, faster downloads
**Example**: 50MB binary → 5MB binary
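To see the effect on a local build (exact sizes vary per binary):
```bash
ls -lh target/release/supervisor   # size before stripping
strip target/release/supervisor
ls -lh target/release/supervisor   # size after stripping, typically much smaller
```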
---
#### Step 6: Create Release Directory
```yaml
- name: Create release directory
run: mkdir -p release-artifacts
```
**What**: Creates directory for release files
**Why**: Organize artifacts before uploading
**How**: `mkdir -p` creates directory (doesn't fail if exists)
---
#### Step 7: Package Binaries
```yaml
- name: Package binaries
run: |
# Package each binary as a tarball
for binary in supervisor coordinator horus osiris herorunner runner_osiris runner_sal; do
```
**What**: Loops through all binary names
**Why**: Package each binary separately
**How**: Bash for loop
```yaml
if [ -f "target/release/$binary" ]; then
```
**What**: Checks if binary file exists
**Why**: Skip if binary wasn't built
**How**: `-f` tests if file exists
```yaml
tar -czf "release-artifacts/${binary}-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz" \
-C target/release "$binary"
```
**What**: Creates compressed tarball
**Why**: Standard distribution format for Linux binaries
**How**:
- `tar` - Archive tool
- `-c` - Create archive
- `-z` - Compress with gzip
- `-f` - Output filename
- `-C target/release` - Change to this directory first
- `"$binary"` - File to archive
**Result**: File like `supervisor-v1.0.0-linux-x86_64.tar.gz`
**Naming convention**: `{name}-{version}-{platform}-{arch}.tar.gz`
```yaml
echo "Packaged $binary"
else
echo "Warning: $binary not found, skipping"
fi
done
```
**What**: Logs success or warning
**Why**: Helps debug if binaries are missing
**How**: Simple echo statements
---
#### Step 8: Generate Checksums
```yaml
- name: Generate checksums
run: |
cd release-artifacts
sha256sum *.tar.gz > checksums.txt
cat checksums.txt
```
**What**: Creates SHA256 checksums for all tarballs
**Why**: Allows users to verify download integrity
**How**:
- `cd release-artifacts` - Change to artifact directory
- `sha256sum *.tar.gz` - Calculate SHA256 hash for each tarball
- `> checksums.txt` - Save to file
- `cat checksums.txt` - Display in logs
**Result**: File with lines like:
```
a1b2c3d4... supervisor-v1.0.0-linux-x86_64.tar.gz
e5f6g7h8... coordinator-v1.0.0-linux-x86_64.tar.gz
```
---
#### Step 9: Create Release
```yaml
- name: Create Release
uses: actions/gitea-release@v1
```
**What**: Uses Gitea's release action
**Why**: Creates a release with attached files
**How**: Calls Gitea API to create release
```yaml
with:
files: release-artifacts/*
```
**What**: Files to attach to release
**Why**: Makes binaries downloadable
**How**: Glob pattern uploads all files in directory
```yaml
token: ${{ secrets.GITHUB_TOKEN }}
```
**What**: Authentication token
**Why**: Needed to create releases via API
**How**: Gitea automatically provides this secret
**Security**: Token is scoped to this repository only
```yaml
tag_name: ${{ steps.version.outputs.VERSION }}
```
**What**: Tag to create release for
**Why**: Associates release with the tag
**How**: Uses version extracted in step 3
```yaml
name: Release ${{ steps.version.outputs.VERSION }}
```
**What**: Release title
**Why**: Displayed in releases page
**How**: Combines "Release" + version (e.g., "Release v1.0.0")
```yaml
body: |
## Horus Release ${{ steps.version.outputs.VERSION }}
### Binaries
This release includes the following binaries for Linux x86_64:
- `supervisor` - Hero Supervisor service
# ... etc
```
**What**: Release description (markdown)
**Why**: Provides context and instructions
**How**: Multi-line string with markdown formatting
**Result**: Rendered as formatted text in release page
```yaml
draft: false
```
**What**: Publish immediately
**Why**: Make release public right away
**How**: `false` = published, `true` = draft
**Alternative**: Set to `true` to review before publishing
```yaml
prerelease: false
```
**What**: Mark as stable release
**Why**: Indicates this is production-ready
**How**: `false` = stable, `true` = pre-release (beta, alpha)
**When to use `true`**: For tags like `v1.0.0-beta.1`
---
## Testing Guide
### Testing CI Workflow Locally
Before pushing, test locally:
```bash
# 1. Check compilation
cargo check --workspace --verbose
# 2. Run tests
cargo test --workspace --verbose
# 3. Run clippy
cargo clippy --workspace -- -D warnings
# 4. Check formatting
cargo fmt --all -- --check
# 5. Build release
cargo build --workspace --release --verbose
# 6. Verify binaries exist
ls -lh target/release/ | grep -E '^-.*x.*'
```
**Expected result**: All commands should succeed with exit code 0
---
### Testing CI Workflow in Gitea
```bash
# 1. Create a test branch
git checkout -b test-ci
# 2. Make a small change (e.g., add a comment)
echo "// Test CI" >> bin/supervisor/src/main.rs
# 3. Commit and push
git add .
git commit -m "test: Trigger CI workflow"
git push origin test-ci
# 4. Check Gitea Actions
# Navigate to: https://git.ourworld.tf/peternashaat/horus/actions
```
**Expected result**:
- Workflow appears in Actions tab
- All steps complete successfully (green checkmarks)
- Artifacts are uploaded
**To download artifacts**:
1. Click on the workflow run
2. Scroll to "Artifacts" section
3. Click to download
---
### Testing Release Workflow Locally
Simulate release build:
```bash
# 1. Build release binaries
cargo build --workspace --release --verbose
# 2. Strip binaries
strip target/release/supervisor || true
strip target/release/coordinator || true
# ... etc
# 3. Create test directory
mkdir -p test-release
# 4. Package binaries
for binary in supervisor coordinator horus osiris herorunner runner_osiris runner_sal; do
if [ -f "target/release/$binary" ]; then
tar -czf "test-release/${binary}-v0.0.1-test-linux-x86_64.tar.gz" \
-C target/release "$binary"
echo "Packaged $binary"
fi
done
# 5. Generate checksums
cd test-release
sha256sum *.tar.gz > checksums.txt
cat checksums.txt
cd ..
# 6. Test extraction
cd test-release
tar -xzf supervisor-v0.0.1-test-linux-x86_64.tar.gz
./supervisor --help
cd ..
```
**Expected result**:
- All binaries package successfully
- Checksums are generated
- Binary extracts and runs
---
### Testing Release Workflow in Gitea
```bash
# 1. Ensure code is ready
git checkout main
git pull
# 2. Create a test tag
git tag v0.1.0-test
# 3. Push the tag
git push origin v0.1.0-test
# 4. Check Gitea
# Navigate to: https://git.ourworld.tf/peternashaat/horus/releases
```
**Expected result**:
- Release appears in Releases tab
- All 7 binaries are attached as `.tar.gz` files
- `checksums.txt` is attached
- Release notes are properly formatted
**To test download**:
```bash
# Download a binary
wget https://git.ourworld.tf/peternashaat/horus/releases/download/v0.1.0-test/supervisor-v0.1.0-test-linux-x86_64.tar.gz
# Extract
tar -xzf supervisor-v0.1.0-test-linux-x86_64.tar.gz
# Test
chmod +x supervisor
./supervisor --help
```
**Cleanup test release**:
1. Go to Releases page
2. Click on the test release
3. Click "Delete" button
---
## Common Patterns Explained
### Why `|| true`?
```yaml
strip target/release/supervisor || true
```
**What**: Bash OR operator
**Why**: Prevents step from failing if command fails
**How**:
- If `strip` succeeds (exit 0), continue
- If `strip` fails (exit non-zero), `|| true` runs and returns 0
**Use case**: Some binaries might not exist, don't fail the whole workflow
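A quick shell demonstration of the exit-code behaviour:
```bash
# A failing command normally returns a non-zero exit code
false; echo $?          # prints: 1
# With "|| true" the combined command returns 0, so the step does not fail
false || true; echo $?  # prints: 0
```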
---
### Why `${{ }}` Syntax?
```yaml
${{ github.sha }}
${{ steps.version.outputs.VERSION }}
```
**What**: GitHub Actions expression syntax
**Why**: Access variables and context
**How**: Gitea evaluates these at runtime
**Types**:
- `github.*` - Workflow context (sha, ref, actor, etc.)
- `secrets.*` - Secret variables
- `steps.*` - Output from previous steps
- `runner.*` - Runner information (os, arch, etc.)
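As an illustrative example (not a step in the current workflows), a step can echo several contexts into the log:
```yaml
- name: Show context values
  run: |
    echo "Commit SHA: ${{ github.sha }}"
    echo "Runner OS:  ${{ runner.os }}"
    echo "Version:    ${{ steps.version.outputs.VERSION }}"  # only set after a step with id: version
```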
---
### Why `--workspace`?
```yaml
cargo build --workspace
```
**What**: Cargo flag to include all workspace members
**Why**: Your project is a workspace with multiple packages
**How**: Cargo reads `Cargo.toml` [workspace] section
**Without it**: Only builds the root package
**With it**: Builds all 7 binaries
---
### Why Separate CI and Release Workflows?
**CI Workflow**:
- Runs frequently (every push)
- Fast feedback
- Doesn't create releases
**Release Workflow**:
- Runs rarely (only on tags)
- Slower (includes packaging)
- Creates permanent artifacts
**Benefit**: Fast CI doesn't slow down development, releases are deliberate
---
### Why Cache Three Directories?
1. **`~/.cargo/registry`** - Downloaded crate files
- Changes when: Dependencies are added/updated
- Size: ~500MB - 2GB
2. **`~/.cargo/git`** - Git dependencies
- Changes when: Git dependencies are updated
- Size: ~100MB - 500MB
3. **`target/`** - Compiled artifacts
- Changes when: Code or dependencies change
- Size: ~2GB - 10GB
**Together**: Reduce build time from 15 minutes to 2 minutes
---
### Why `hashFiles('**/Cargo.lock')`?
**What**: Generates hash of Cargo.lock
**Why**: Cache key changes when dependencies change
**How**:
- `**/Cargo.lock` - Find all Cargo.lock files
- `hashFiles()` - Generate SHA256 hash
**Result**: Different hash = different cache = rebuild dependencies
**Benefit**: Cache is invalidated when dependencies change
---
## Summary
Both workflows follow best practices:
- **Fast feedback** - CI runs on every push
- **Comprehensive testing** - Check, test, lint, format
- **Optimized builds** - Caching reduces build time
- **Automated releases** - Tag-based release creation
- **Secure** - Uses scoped tokens, no manual secrets
- **Reproducible** - Pinned action versions
- **User-friendly** - Clear release notes and instructions
For more information, see [README.md](./README.md).

.gitea/workflows/ci.yml Normal file

@@ -0,0 +1,82 @@
name: CI
on:
push:
branches:
- '**'
pull_request:
types: [opened, synchronize, reopened]
jobs:
build-and-test:
name: Build & Test
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: stable
components: rustfmt, clippy
- name: Cache cargo registry
uses: actions/cache@v4
with:
path: ~/.cargo/registry
key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-registry-
- name: Cache cargo index
uses: actions/cache@v4
with:
path: ~/.cargo/git
key: ${{ runner.os }}-cargo-index-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-index-
- name: Cache cargo build
uses: actions/cache@v4
with:
path: target
key: ${{ runner.os }}-cargo-build-target-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-build-target-
- name: Check code
run: cargo check --workspace --verbose
- name: Run tests
run: cargo test --workspace --verbose
- name: Run clippy
run: cargo clippy --workspace -- -D warnings
- name: Check formatting
run: cargo fmt --all -- --check
- name: Build release binaries
run: cargo build --workspace --release --verbose
- name: List built binaries
run: |
echo "Built binaries:"
ls -lh target/release/ | grep -E '^-.*x.*'
- name: Upload build artifacts
uses: actions/upload-artifact@v4
with:
name: binaries-${{ github.sha }}
path: |
target/release/supervisor
target/release/coordinator
target/release/horus
target/release/osiris
target/release/herorunner
target/release/runner_osiris
target/release/runner_sal
retention-days: 7
if-no-files-found: warn


@@ -0,0 +1,100 @@
name: Release
on:
push:
tags:
- 'v*.*.*'
jobs:
build-release:
name: Build Release Binaries
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: stable
- name: Extract version from tag
id: version
run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
- name: Build release binaries
run: cargo build --workspace --release --verbose
- name: Strip binaries
run: |
strip target/release/supervisor || true
strip target/release/coordinator || true
strip target/release/horus || true
strip target/release/osiris || true
strip target/release/herorunner || true
strip target/release/runner_osiris || true
strip target/release/runner_sal || true
- name: Create release directory
run: mkdir -p release-artifacts
- name: Package binaries
run: |
# Package each binary as a tarball
for binary in supervisor coordinator horus osiris herorunner runner_osiris runner_sal; do
if [ -f "target/release/$binary" ]; then
tar -czf "release-artifacts/${binary}-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz" \
-C target/release "$binary"
echo "Packaged $binary"
else
echo "Warning: $binary not found, skipping"
fi
done
- name: Generate checksums
run: |
cd release-artifacts
sha256sum *.tar.gz > checksums.txt
cat checksums.txt
- name: Create Release
uses: actions/gitea-release@v1
with:
files: release-artifacts/*
token: ${{ secrets.GITHUB_TOKEN }}
tag_name: ${{ steps.version.outputs.VERSION }}
name: Release ${{ steps.version.outputs.VERSION }}
body: |
## Horus Release ${{ steps.version.outputs.VERSION }}
### Binaries
This release includes the following binaries for Linux x86_64:
- `supervisor` - Hero Supervisor service
- `coordinator` - Hero Coordinator service
- `horus` - Horus main binary
- `osiris` - Osiris server
- `herorunner` - Hero runner
- `runner_osiris` - Osiris runner
- `runner_sal` - SAL runner
### Installation
Download the appropriate binary for your system:
```bash
# Example: Download and install supervisor
wget https://git.ourworld.tf/peternashaat/horus/releases/download/${{ steps.version.outputs.VERSION }}/supervisor-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz
tar -xzf supervisor-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz
chmod +x supervisor
sudo mv supervisor /usr/local/bin/
```
### Verification
Verify the integrity of downloaded files using the checksums:
```bash
sha256sum -c checksums.txt
```
### Changes
See commit history for detailed changes in this release.
draft: false
prerelease: false


@@ -13,7 +13,6 @@ members = [
"lib/clients/osiris",
"lib/clients/supervisor",
"lib/models/job",
"lib/models/context",
"lib/osiris/core",
"lib/osiris/derive",
"lib/runner",
@@ -30,7 +29,6 @@ repository.workspace = true
[dependencies]
# Integration test dependencies - no library dependencies, tests spawn binaries
hero-supervisor-openrpc-client = { path = "lib/clients/supervisor" }
hero-coordinator-client = { path = "lib/clients/coordinator" }
hero-job = { path = "lib/models/job" }
hero-job-client = { path = "lib/clients/job" }
tokio = { workspace = true }


@@ -70,21 +70,6 @@ struct Cli {
help = "Bind port for WebSocket JSON-RPC server (default: 9653)"
)]
api_ws_port: u16,
#[arg(
long = "supervisor-transport",
env = "SUPERVISOR_TRANSPORT",
default_value = "mycelium",
help = "Transport to use for supervisor communication: 'mycelium' or 'http' (default: mycelium)"
)]
supervisor_transport: String,
#[arg(
long = "supervisor-http-url",
env = "SUPERVISOR_HTTP_URL",
help = "HTTP URL for supervisor when using HTTP transport (e.g., http://127.0.0.1:3031)"
)]
supervisor_http_url: Option<String>,
}
#[tokio::main]
@@ -114,9 +99,8 @@ async fn main() {
// Shared application state
let state = Arc::new(herocoordinator::rpc::AppState::new(service));
// Start router workers (auto-discovered contexts) using a single global SupervisorHub
// Skip router if using HTTP transport (no mycelium needed)
if cli.supervisor_transport == "mycelium" {
// Start router workers (auto-discovered contexts) using a single global SupervisorHub (no separate inbound listener)
{
let base_url = format!("http://{}:{}", cli.mycelium_ip, cli.mycelium_port);
let hub = herocoordinator::clients::SupervisorHub::new(
base_url.clone(),
@@ -134,9 +118,6 @@ async fn main() {
};
// Per-context outbound delivery loops (replies handled by SupervisorHub)
let _auto_handle = herocoordinator::router::start_router_auto(service_for_router, cfg);
info!("Router started with mycelium transport");
} else {
info!("Skipping router - using HTTP transport for supervisor communication");
}
// Build RPC modules for both servers


@@ -70,21 +70,6 @@ struct Cli {
help = "Bind port for WebSocket JSON-RPC server (default: 9653)"
)]
api_ws_port: u16,
#[arg(
long = "supervisor-transport",
env = "SUPERVISOR_TRANSPORT",
default_value = "mycelium",
help = "Transport to use for supervisor communication: 'mycelium' or 'http' (default: mycelium)"
)]
supervisor_transport: String,
#[arg(
long = "supervisor-http-url",
env = "SUPERVISOR_HTTP_URL",
help = "HTTP URL for supervisor when using HTTP transport (e.g., http://127.0.0.1:3031)"
)]
supervisor_http_url: Option<String>,
}
#[tokio::main]
@@ -114,9 +99,8 @@ async fn main() {
// Shared application state
let state = Arc::new(hero_coordinator::rpc::AppState::new(service));
// Start router workers (auto-discovered contexts) using a single global SupervisorHub
// Skip router if using HTTP transport (no mycelium needed)
if cli.supervisor_transport == "mycelium" {
// Start router workers (auto-discovered contexts) using a single global SupervisorHub (no separate inbound listener)
{
let base_url = format!("http://{}:{}", cli.mycelium_ip, cli.mycelium_port);
let mycelium = Arc::new(
hero_supervisor_openrpc_client::transports::MyceliumClient::new(&base_url)
@@ -137,9 +121,6 @@ async fn main() {
};
// Per-context outbound delivery loops (replies handled by SupervisorHub)
let _auto_handle = hero_coordinator::router::start_router_auto(service_for_router, cfg);
info!("Router started with mycelium transport");
} else {
info!("Skipping router - using HTTP transport for supervisor communication");
}
// Build RPC modules for both servers


@@ -200,8 +200,41 @@ impl FlowCreate {
}
}
// JobCreate removed - coordinator only manages flows, not individual jobs
// Jobs should be created by the supervisor or other services
#[derive(Debug, Deserialize)]
pub struct JobCreate {
pub id: u32,
pub caller_id: u32,
pub context_id: u32,
pub script: String,
pub runner: Option<String>,
pub timeout: u32,
pub retries: u8,
pub env_vars: HashMap<String, String>,
pub prerequisites: Vec<String>,
pub depends: Vec<u32>,
}
impl JobCreate {
pub fn into_domain(self) -> Job {
use chrono::Utc;
// Convert old format to hero_job::Job
// Note: depends and prerequisites are workflow fields that need separate storage
Job {
id: self.id.to_string(),
caller_id: self.caller_id.to_string(),
context_id: self.context_id.to_string(),
payload: self.script,
runner: self.runner.unwrap_or_else(|| "default-runner".to_string()),
timeout: self.timeout as u64,
env_vars: self.env_vars,
created_at: Utc::now(),
updated_at: Utc::now(),
signatures: Vec::new(),
}
// TODO: Store depends and prerequisites separately in JobSummary/DAG
}
}
#[derive(Debug, Deserialize)]
pub struct MessageCreate {
@@ -214,31 +247,40 @@ pub struct MessageCreate {
pub timeout: u32,
pub timeout_ack: u32,
pub timeout_result: u32,
// Jobs removed - use flow nodes instead
pub job: Vec<JobCreate>,
}
impl MessageCreate {
pub fn into_domain(self) -> Message {
use crate::time::current_timestamp;
let ts = current_timestamp();
// Convert to Message
// Note: flow_id is set to 0 for now, should be set by the caller
let MessageCreate {
id,
caller_id,
context_id,
message,
message_type,
message_format_type,
timeout,
timeout_ack,
timeout_result,
job,
} = self;
Message {
id: self.id,
caller_id: self.caller_id,
context_id: self.context_id,
flow_id: 0, // TODO: Get from params or context
message: self.message,
message_type: self.message_type,
message_format_type: self.message_format_type,
timeout: self.timeout,
timeout_ack: self.timeout_ack,
timeout_result: self.timeout_result,
id,
caller_id,
context_id,
flow_id: 0, // TODO: MessageCreate should include flow_id
message,
message_type,
message_format_type,
timeout,
timeout_ack,
timeout_result,
transport_id: None,
transport_status: None,
nodes: Vec::new(), // TODO: MessageCreate should include nodes
job: Vec::new(), // Jobs removed - coordinator only manages flows
job: job.into_iter().map(JobCreate::into_domain).collect(),
logs: Vec::new(),
created_at: ts,
updated_at: ts,
@@ -288,7 +330,17 @@ pub struct FlowLoadParams {
pub id: u32,
}
// JobCreateParams and JobLoadParams removed - coordinator only manages flows
#[derive(Debug, Deserialize)]
pub struct JobCreateParams {
pub context_id: u32,
pub job: JobCreate,
}
#[derive(Debug, Deserialize)]
pub struct JobLoadParams {
pub context_id: u32,
pub caller_id: u32,
pub id: u32,
}
#[derive(Debug, Deserialize)]
pub struct MessageCreateParams {
@@ -454,8 +506,42 @@ pub fn build_module(state: Arc<AppState>) -> RpcModule<()> {
.expect("register flow.start");
}
// Job endpoints removed - coordinator only manages flows
// Jobs should be created and managed by the supervisor
// Job
{
let state = state.clone();
module
.register_async_method("job.create", move |params, _caller, _ctx| {
let state = state.clone();
async move {
let p: JobCreateParams = params.parse().map_err(invalid_params_err)?;
let job = p.job.into_domain();
let job = state
.service
.create_job(p.context_id, job)
.await
.map_err(storage_err)?;
Ok::<_, ErrorObjectOwned>(job)
}
})
.expect("register job.create");
}
{
let state = state.clone();
module
.register_async_method("job.load", move |params, _caller, _ctx| {
let state = state.clone();
async move {
let p: JobLoadParams = params.parse().map_err(invalid_params_err)?;
let job = state
.service
.load_job(p.context_id, p.caller_id, p.id)
.await
.map_err(storage_err)?;
Ok::<_, ErrorObjectOwned>(job)
}
})
.expect("register job.load");
}
// Message
{


@@ -1,197 +0,0 @@
//! Osiris Engine Example
//!
//! This example demonstrates how to:
//! 1. Create an Osiris Rhai engine with all registered functions
//! 2. Execute Rhai scripts using the actual Osiris API
//! 3. Test context creation, save, get, list, delete operations
//!
//! Run with: cargo run --example engine -p runner-osiris
use rhai::{Dynamic, Map};
// Import the actual engine creation function
mod engine_impl {
include!("../src/engine.rs");
}
use engine_impl::create_osiris_engine;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("🚀 Osiris Engine Example\n");
println!("==========================================\n");
// Create the engine with all Osiris functions registered
let mut engine = create_osiris_engine()?;
// Set up context tags (simulating what the runner does)
let mut tag_map = Map::new();
let signatories: rhai::Array = vec![
Dynamic::from("pk1".to_string()),
Dynamic::from("pk2".to_string()),
];
tag_map.insert("SIGNATORIES".into(), Dynamic::from(signatories));
tag_map.insert("CALLER_ID".into(), "test-caller".to_string().into());
tag_map.insert("CONTEXT_ID".into(), "test-context".to_string().into());
engine.set_default_tag(Dynamic::from(tag_map));
// Test 1: Simple Rhai script
println!("📝 Test 1: Simple Rhai Script");
let script = r#"
let x = 10;
let y = 20;
x + y
"#;
match engine.eval::<i64>(script) {
Ok(result) => println!(" ✓ Result: {}\n", result),
Err(e) => println!(" ✗ Error: {}\n", e),
}
// Test 2: Get context (Osiris function)
println!("📝 Test 2: Get Context");
let context_script = r#"
// Get context with participants (must be signatories)
let ctx = get_context(["pk1", "pk2"]);
ctx.context_id()
"#;
match engine.eval::<String>(context_script) {
Ok(result) => println!(" ✓ Context ID: {}\n", result),
Err(e) => println!(" ✗ Error: {}\n", e),
}
// Test 3: Create a Note and save it
println!("📝 Test 3: Create and Save a Note");
let note_script = r#"
let ctx = get_context(["pk1"]);
// Use the builder-style API
let my_note = note("test-note-123")
.title("Test Note")
.content("This is a test note");
ctx.save(my_note);
"Note saved successfully"
"#;
match engine.eval::<String>(note_script) {
Ok(result) => println!("{}\n", result),
Err(e) => println!(" ✗ Error: {}\n", e),
}
// Test 4: Get from collection
println!("📝 Test 4: Get from Collection");
let get_script = r#"
let ctx = get_context(["pk1"]);
// Try to get a note (will fail if doesn't exist, but shows the API works)
ctx.get("notes", "test-note-123")
"#;
match engine.eval::<Dynamic>(get_script) {
Ok(result) => println!(" ✓ Result: {:?}\n", result),
Err(e) => println!(" ⚠ Error (expected if note doesn't exist): {}\n", e),
}
// Test 5: List from collection
println!("📝 Test 5: List from Collection");
let list_script = r#"
let ctx = get_context(["pk1"]);
// List all notes in the context
ctx.list("notes")
"#;
match engine.eval::<Dynamic>(list_script) {
Ok(result) => println!(" ✓ Result: {:?}\n", result),
Err(e) => println!(" ⚠ Error: {}\n", e),
}
// Test 6: Delete from collection
println!("📝 Test 6: Delete from Collection");
let delete_script = r#"
let ctx = get_context(["pk1"]);
// Try to delete a note
ctx.delete("notes", "test-note-123")
"#;
match engine.eval::<Dynamic>(delete_script) {
Ok(result) => println!(" ✓ Result: {:?}\n", result),
Err(e) => println!(" ⚠ Error (expected if note doesn't exist): {}\n", e),
}
// Test 7: Create an Event
println!("📝 Test 7: Create and Save an Event");
let event_script = r#"
let ctx = get_context(["pk1"]);
// event() takes (namespace, title) in the module version
let my_event = event("test-event-123", "Test Event")
.description("This is a test event");
ctx.save(my_event);
"Event saved successfully"
"#;
match engine.eval::<String>(event_script) {
Ok(result) => println!("{}\n", result),
Err(e) => println!(" ✗ Error: {}\n", e),
}
// Test 8: Create a User (HeroLedger)
println!("📝 Test 8: Create and Save a User");
let user_script = r#"
let ctx = get_context(["pk1"]);
let my_user = new_user()
.username("testuser")
.add_email("test@example.com")
.pubkey("pk1");
ctx.save(my_user);
"User saved successfully"
"#;
match engine.eval::<String>(user_script) {
Ok(result) => println!("{}\n", result),
Err(e) => println!(" ✗ Error: {}\n", e),
}
// Test 9: Create a Group (HeroLedger)
println!("📝 Test 9: Create and Save a Group");
let group_script = r#"
let ctx = get_context(["pk1"]);
let my_group = new_group()
.name("Test Group")
.description("A test group");
ctx.save(my_group);
"Group saved successfully"
"#;
match engine.eval::<String>(group_script) {
Ok(result) => println!("{}\n", result),
Err(e) => println!(" ✗ Error: {}\n", e),
}
// Test 10: List users
println!("📝 Test 10: List Users from Collection");
let list_users_script = r#"
let ctx = get_context(["pk1"]);
ctx.list("users")
"#;
match engine.eval::<Dynamic>(list_users_script) {
Ok(result) => println!(" ✓ Users: {:?}\n", result),
Err(e) => println!(" ⚠ Error: {}\n", e),
}
println!("==========================================");
println!("🎉 All tests completed!\n");
println!("📚 Available Object Types:");
println!(" - Note: note(id).title(...).content(...)");
println!(" - Event: event(id, title).description(...)");
println!(" - User: new_user().username(...).add_email(...).pubkey(...)");
println!(" - Group: new_group().name(...).description(...)");
println!(" - Account: new_account()...");
println!(" - And many more: KycSession, FlowTemplate, FlowInstance, Contract, etc.");
println!("\n📖 Available Operations:");
println!(" - ctx.save(object) - Save an object");
println!(" - ctx.get(collection, id) - Get an object by ID");
println!(" - ctx.list(collection) - List all objects in collection");
println!(" - ctx.delete(collection, id) - Delete an object");
Ok(())
}


@@ -140,8 +140,39 @@ impl CoordinatorClient {
}
}
// Job methods removed - coordinator only manages flows
// Jobs should be created and managed by the supervisor
// ==================== Job Methods ====================
/// Create a new job in a context
pub async fn job_create(&self, context_id: u32, job: JobCreate) -> Result<Job> {
let params = serde_json::json!({
"context_id": context_id,
"job": job
});
self.call("job.create", params).await
}
/// Load an existing job from a context
pub async fn job_load(&self, context_id: u32, caller_id: u32, id: u32) -> Result<Job> {
let params = serde_json::json!({
"context_id": context_id,
"caller_id": caller_id,
"id": id
});
self.call("job.load", params).await
}
/// Try to create a job, or load it if it already exists
pub async fn job_create_or_load(&self, context_id: u32, job: JobCreate) -> Result<Job> {
let caller_id = job.caller_id;
let job_id = job.id;
match self.job_create(context_id, job).await {
Ok(j) => Ok(j),
Err(CoordinatorError::AlreadyExists | CoordinatorError::Storage(_)) => {
self.job_load(context_id, caller_id, job_id).await
}
Err(e) => Err(e),
}
}
// ==================== Flow Methods ====================
@@ -259,19 +290,13 @@ impl CoordinatorClient {
async fn call<T: serde::de::DeserializeOwned>(&self, method: &str, params: Value) -> Result<T> {
use jsonrpsee::core::client::ClientT;
use jsonrpsee::core::params::ObjectParams;
use jsonrpsee::core::params::ArrayParams;
// Coordinator expects params as named parameters (object), not positional (array)
// Convert the Value object to ObjectParams
let mut object_params = ObjectParams::new();
if let Value::Object(map) = params {
for (key, value) in map {
object_params.insert(&key, value).map_err(|e| CoordinatorError::Rpc(e.to_string()))?;
}
}
let mut array_params = ArrayParams::new();
array_params.insert(params).map_err(|e| CoordinatorError::Rpc(e.to_string()))?;
let result: T = self.client
.request(method, object_params)
self.client
.request(method, array_params)
.await
.map_err(|e| {
let err_str = e.to_string();
@@ -286,9 +311,7 @@ impl CoordinatorClient {
} else {
CoordinatorError::Rpc(err_str)
}
})?;
Ok(result)
})
}
}


@@ -103,8 +103,24 @@ pub enum ScriptType {
// ==================== Job ====================
// JobCreate removed - coordinator only manages flows, not individual jobs
// Use hero_job::Job from lib/models/job for job operations
/// Parameters for creating a job
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JobCreate {
pub id: u32,
pub caller_id: u32,
pub context_id: u32,
pub script: String,
pub script_type: ScriptType,
pub timeout: u64,
#[serde(default)]
pub retries: u8,
#[serde(default)]
pub env_vars: HashMap<String, String>,
#[serde(default)]
pub prerequisites: Vec<u32>,
#[serde(default)]
pub depends: Vec<u32>,
}
// ==================== Flow ====================


@@ -5,19 +5,13 @@ edition.workspace = true
description = "Osiris client library"
license = "MIT OR Apache-2.0"
[features]
default = []
zdfz = ["dep:zdfz-models"]
[dependencies]
zdfz-models = { path = "../../../../../zdfz/sdk/models", optional = true }
# Core dependencies
serde.workspace = true
serde_json.workspace = true
anyhow.workspace = true
thiserror.workspace = true
chrono.workspace = true
paste = "1.0"
# HTTP client
reqwest = { version = "0.12", default-features = false, features = ["json"] }


@@ -13,15 +13,10 @@ use thiserror::Error;
pub mod kyc;
pub mod payment;
pub mod communication;
pub mod macros;
#[cfg(feature = "zdfz")]
pub mod zdfz_extensions;
pub use kyc::*;
pub use payment::*;
pub use communication::*;
pub use macros::*;
#[derive(Debug, Error)]
pub enum OsirisClientError {
@@ -39,9 +34,6 @@ pub enum OsirisClientError {
#[error("Command execution failed: {0}")]
CommandFailed(String),
#[error("Serialization failed: {0}")]
SerializationFailed(String),
}
/// Osiris client with CQRS support


@@ -1,204 +0,0 @@
//! Macros for generating CRUD methods on OsirisClient
//!
//! These macros allow you to quickly generate standard CRUD operations
//! and custom methods for your models.
/// Generate CRUD methods for a model on OsirisClient
///
/// This macro generates 5 standard methods:
/// - create_{collection}
/// - get_{collection}
/// - update_{collection}
/// - delete_{collection}
/// - list_{collection}
///
/// # Example
///
/// ```rust
/// use osiris_client::{OsirisClient, impl_osiris_crud};
///
/// #[derive(serde::Serialize, serde::Deserialize)]
/// struct User {
/// id: String,
/// name: String,
/// }
///
/// impl_osiris_crud!(User, "users", "id");
///
/// // Now you can use:
/// // client.create_users(&user).await?;
/// // client.get_users("123").await?;
/// // client.update_users("123", &user).await?;
/// // client.delete_users("123").await?;
/// // client.list_users().await?;
/// ```
#[macro_export]
macro_rules! impl_osiris_crud {
($model:ty, $collection:expr, $id_field:expr) => {
paste::paste! {
impl $crate::OsirisClient {
/// Create a new instance
#[doc = "Create a new " $collection " instance"]
pub async fn [<$collection:snake _create>](&self, model: &$model) -> Result<$model, $crate::OsirisClientError> {
let json = serde_json::to_string(model)
.map_err(|e| $crate::OsirisClientError::SerializationFailed(e.to_string()))?;
// Create Rhai script that uses Osiris context API
// Note: The actual object creation depends on the model type
// For now, we serialize the data and would need model-specific constructors
let script = format!(
r#"
let ctx = get_context(["system"]);
let data = {};
// TODO: Model-specific object creation
// For now, this is a placeholder
data
"#,
json
);
let response = self.execute_script(&script).await?;
// TODO: Parse response from job result
Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
/// Get an instance by ID
#[doc = "Get a " $collection " instance by ID"]
pub async fn [<$collection:snake _get>](&self, id: &str) -> Result<$model, $crate::OsirisClientError> {
let script = format!(
r#"
let ctx = get_context(["system"]);
ctx.get("{}", "{}")
"#,
$collection, id
);
let response = self.execute_script(&script).await?;
// TODO: Parse response from job result
Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
/// Update an existing instance
#[doc = "Update an existing " $collection " instance"]
pub async fn [<$collection:snake _update>](&self, id: &str, model: &$model) -> Result<$model, $crate::OsirisClientError> {
let json = serde_json::to_string(model)
.map_err(|e| $crate::OsirisClientError::SerializationFailed(e.to_string()))?;
let script = format!(
r#"
let ctx = get_context(["system"]);
let obj = ctx.get("{}", "{}");
let data = {};
// TODO: Update object fields from data
ctx.save(obj);
obj
"#,
$collection, id, json
);
let response = self.execute_script(&script).await?;
// TODO: Parse response from job result
Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
/// Delete an instance
#[doc = "Delete a " $collection " instance"]
pub async fn [<$collection:snake _delete>](&self, id: &str) -> Result<(), $crate::OsirisClientError> {
let script = format!(
r#"
let ctx = get_context(["system"]);
ctx.delete("{}", "{}")
"#,
$collection, id
);
self.execute_script(&script).await?;
Ok(())
}
/// List all instances
#[doc = "List all " $collection " instances"]
pub async fn [<$collection:snake _list>](&self) -> Result<Vec<$model>, $crate::OsirisClientError> {
let script = format!(
r#"
let ctx = get_context(["system"]);
ctx.list("{}")
"#,
$collection
);
let response = self.execute_script(&script).await?;
// TODO: Parse response from job result
Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
}
}
};
}
/// Generate a custom method on a model
///
/// This macro generates a method that calls a custom Rhai function on the model.
///
/// # Example
///
/// ```rust
/// use osiris_client::{OsirisClient, impl_osiris_method};
///
/// #[derive(serde::Serialize, serde::Deserialize)]
/// struct CalendarEvent {
/// id: String,
/// start_time: i64,
/// }
///
/// impl_osiris_method!(CalendarEvent, "calendar_events", reschedule, new_start: i64, new_end: i64);
///
/// // Now you can use:
/// // client.reschedule_calendar_events("123", 1234567890, 1234567900).await?;
/// ```
#[macro_export]
macro_rules! impl_osiris_method {
($model:ty, $collection:expr, $method_name:ident $(, $param:ident: $param_type:ty)*) => {
paste::paste! {
impl $crate::OsirisClient {
#[doc = "Call " $method_name " on a " $collection " instance"]
pub async fn [<$collection:snake _ $method_name>](&self, id: &str $(, $param: $param_type)*) -> Result<$model, $crate::OsirisClientError> {
let params = serde_json::json!({
$(stringify!($param): $param),*
});
let script = format!(
r#"
let ctx = get_context(["system"]);
let obj = ctx.get("{}", "{}");
// TODO: Call custom method on object
// obj.{}({});
ctx.save(obj);
obj
"#,
$collection, id, stringify!($method_name), params
);
let response = self.execute_script(&script).await?;
// TODO: Parse response from job result
Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
}
}
};
}
#[cfg(test)]
mod tests {
use super::*;
// Example model for testing
#[derive(serde::Serialize, serde::Deserialize)]
struct TestModel {
id: String,
name: String,
}
// This would generate the methods (can't actually test async in doc tests easily)
// impl_osiris_crud!(TestModel, "test_models", "id");
}

View File

@@ -1,24 +0,0 @@
//! ZDFZ model extensions for OsirisClient
//!
//! This module generates CRUD and custom methods for ZDFZ models.
//! It must be in the osiris-client crate to satisfy Rust's orphan rules.
use crate::{impl_osiris_crud, impl_osiris_method};
// Import ZDFZ models - these will be available when zdfz-models is a dependency
#[cfg(feature = "zdfz")]
use zdfz_models::*;
// ========== Core Business Models ==========
// Digital Residents - Individual users of the freezone
#[cfg(feature = "zdfz")]
impl_osiris_crud!(ApiDigitalResident, "digital_residents", "resident_id");
// Free Zone Companies - Companies registered in the freezone
#[cfg(feature = "zdfz")]
impl_osiris_crud!(FreeZoneCompany, "free_zone_companies", "fzc_id");
// Invoices - Financial documents for companies
#[cfg(feature = "zdfz")]
impl_osiris_crud!(FreeZoneInvoice, "invoices", "fz_invoice_id");
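// --- Illustrative sketch (not part of the original module): how the generated
// CRUD methods would be called. Method names follow the visible expansion
// `[<$collection:snake _get>]` / `_delete` / `_list`; note that get/update/list
// currently return `CommandFailed("Not yet implemented")` until job-result
// parsing is done, while delete only checks that the script executed.
#[cfg(feature = "zdfz")]
async fn _zdfz_crud_sketch(client: &crate::OsirisClient) -> Result<(), crate::OsirisClientError> {
    // Fetch a digital resident by id via the generated `digital_residents_get`.
    let _resident = client.digital_residents_get("resident-42").await?;
    // Remove an invoice via the generated `invoices_delete`.
    client.invoices_delete("fz-invoice-7").await?;
    Ok(())
}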

View File

@@ -1,18 +0,0 @@
[package]
name = "hero-context"
version.workspace = true
edition.workspace = true
description = "Context model for Hero platform"
license = "MIT OR Apache-2.0"
[dependencies]
serde.workspace = true
serde_json.workspace = true
chrono.workspace = true
rhai = { version = "1.19", features = ["sync"] }
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
uuid.workspace = true
[target.'cfg(target_arch = "wasm32")'.dependencies]
uuid = { workspace = true, features = ["js"] }

View File

@@ -1,181 +0,0 @@
//! Access Control Logic for Context
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
/// Unified ACL configuration for objects
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ObjectAcl {
/// Per-user permissions for this object type
/// Maps public key -> list of permissions
pub permissions: HashMap<String, Vec<ObjectPermission>>,
/// Multi-signature requirements (optional)
pub multi_sig: Option<MultiSigRequirement>,
}
/// Permissions for object operations
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "lowercase")]
pub enum ObjectPermission {
/// Can create new objects of this type
Create,
/// Can read objects of this type
Read,
/// Can update existing objects of this type
Update,
/// Can delete objects of this type
Delete,
/// Can list all objects of this type
List,
}
/// SAL access control - binary permission (can call or not)
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct SalAcl {
/// List of public keys allowed to call this SAL
pub allowed_callers: Vec<String>,
/// Multi-signature requirements (optional)
pub multi_sig: Option<MultiSigRequirement>,
}
/// Global permissions - simple RWX model
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "lowercase")]
pub enum GlobalPermission {
/// Can read data
Read,
/// Can write/modify data
Write,
/// Can execute operations
Execute,
}
/// Multi-signature requirements
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "lowercase")]
pub enum MultiSigRequirement {
/// Require ALL specified signers to sign unanimously
Unanimous {
/// List of public keys that must ALL sign
required_signers: Vec<String>,
},
/// Require a minimum number of signatures from a set
Threshold {
/// Minimum number of signatures required
min_signatures: usize,
/// Optional: specific set of allowed signers
/// If None, any signers from the context are allowed
allowed_signers: Option<Vec<String>>,
},
/// Require a percentage of signers from a set
Percentage {
/// Percentage required (0.0 to 1.0, e.g., 0.66 for 66%)
percentage: f64,
/// Optional: specific set of allowed signers
/// If None, any signers from the context are allowed
allowed_signers: Option<Vec<String>>,
},
}
impl MultiSigRequirement {
/// Check if signatories satisfy this multi-sig requirement
pub fn check(&self, signatories: &[String], total_members: usize) -> bool {
match self {
MultiSigRequirement::Unanimous { required_signers } => {
// ALL required signers must be present
required_signers.iter().all(|signer| signatories.contains(signer))
}
MultiSigRequirement::Threshold { min_signatures, allowed_signers } => {
// Check if we have enough signatures
if signatories.len() < *min_signatures {
return false;
}
// If allowed_signers is specified, check all signatories are in the list
if let Some(allowed) = allowed_signers {
signatories.iter().all(|sig| allowed.contains(sig))
} else {
true
}
}
MultiSigRequirement::Percentage { percentage, allowed_signers } => {
if let Some(allowed) = allowed_signers {
// Filter signatories to only those in allowed list
let valid_sigs: Vec<_> = signatories
.iter()
.filter(|sig| allowed.contains(sig))
.collect();
let required_count = (allowed.len() as f64 * percentage).ceil() as usize;
valid_sigs.len() >= required_count
} else {
// Use all context members
let required_count = (total_members as f64 * percentage).ceil() as usize;
signatories.len() >= required_count
}
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_multi_sig_unanimous() {
let multi_sig = MultiSigRequirement::Unanimous {
required_signers: vec!["alice".to_string(), "bob".to_string()],
};
// Both signers present - should pass
assert!(multi_sig.check(&["alice".to_string(), "bob".to_string()], 3));
// Only one signer - should fail
assert!(!multi_sig.check(&["alice".to_string()], 3));
}
#[test]
fn test_multi_sig_threshold() {
let multi_sig = MultiSigRequirement::Threshold {
min_signatures: 2,
allowed_signers: Some(vec!["alice".to_string(), "bob".to_string(), "charlie".to_string()]),
};
// 2 signatures - should pass
assert!(multi_sig.check(&["alice".to_string(), "bob".to_string()], 3));
// 1 signature - should fail
assert!(!multi_sig.check(&["alice".to_string()], 3));
}
#[test]
fn test_multi_sig_percentage() {
let multi_sig = MultiSigRequirement::Percentage {
percentage: 0.66, // 66%
allowed_signers: Some(vec![
"alice".to_string(),
"bob".to_string(),
"charlie".to_string(),
]),
};
// 2 out of 3 (66%) - should pass
assert!(multi_sig.check(&["alice".to_string(), "bob".to_string()], 3));
// 1 out of 3 (33%) - should fail
assert!(!multi_sig.check(&["alice".to_string()], 3));
}
}

View File

@@ -1,343 +0,0 @@
//! Context Model
//!
//! A Context represents an isolated instance/workspace where users can:
//! - Store and retrieve objects (via Osiris)
//! - Execute SALs (System Abstraction Layer functions)
//! - Collaborate with specific permissions
//!
//! The Context is the authorization boundary - all operations go through it
//! and are subject to ACL checks.
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
pub mod access;
pub mod rhai;
pub use access::*;
/// A Context represents an isolated workspace with ACL-controlled access
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct Context {
/// Human-readable name
pub name: String,
/// Description of this context's purpose
pub description: Option<String>,
/// List of admin public keys - only admins can modify ACLs
pub admins: Vec<String>,
/// Global permissions (RWX) - what can users do in this context?
/// Maps public key -> list of global permissions
pub global_permissions: HashMap<String, Vec<GlobalPermission>>,
/// Per-object-type ACLs - fine-grained control over data operations
/// Maps object type (e.g., "notes", "events") -> ACL configuration
pub object_acls: HashMap<String, ObjectAcl>,
/// SAL ACLs - binary permission (can call or not)
/// Maps SAL name (e.g., "launch_vm", "send_email") -> ACL configuration
pub sal_acls: HashMap<String, SalAcl>,
}
impl Default for Context {
fn default() -> Self {
Self {
name: String::new(),
description: None,
admins: Vec::new(),
global_permissions: HashMap::new(),
object_acls: HashMap::new(),
sal_acls: HashMap::new(),
}
}
}
impl Context {
/// Create a new context with a name and initial admin
pub fn new(name: String, admin: String) -> Self {
Self {
name,
description: None,
admins: vec![admin],
global_permissions: HashMap::new(),
object_acls: HashMap::new(),
sal_acls: HashMap::new(),
}
}
/// Check if a user is an admin
pub fn is_admin(&self, pubkey: &str) -> bool {
self.admins.contains(&pubkey.to_string())
}
/// Check if a user has a global permission
pub fn has_global_permission(&self, pubkey: &str, permission: &GlobalPermission) -> bool {
self.global_permissions
.get(pubkey)
.map(|perms| perms.contains(permission))
.unwrap_or(false)
}
/// Check if a user has permission for an object type
pub fn has_object_permission(
&self,
pubkey: &str,
object_type: &str,
permission: &ObjectPermission,
) -> bool {
self.object_acls
.get(object_type)
.and_then(|acl| acl.permissions.get(pubkey))
.map(|perms| perms.contains(permission))
.unwrap_or(false)
}
/// Check if a user can call a SAL
pub fn can_call_sal(&self, pubkey: &str, sal_name: &str) -> bool {
self.sal_acls
.get(sal_name)
.map(|acl| acl.allowed_callers.contains(&pubkey.to_string()))
.unwrap_or(false)
}
/// Check if signatories satisfy multi-sig requirements for an object
pub fn check_object_multi_sig(
&self,
signatories: &[String],
object_type: &str,
) -> bool {
if let Some(acl) = self.object_acls.get(object_type) {
if let Some(multi_sig) = &acl.multi_sig {
return multi_sig.check(signatories, self.global_permissions.len());
}
}
// No multi-sig requirement
true
}
/// Check if signatories satisfy multi-sig requirements for a SAL
pub fn check_sal_multi_sig(
&self,
signatories: &[String],
sal_name: &str,
) -> bool {
if let Some(acl) = self.sal_acls.get(sal_name) {
if let Some(multi_sig) = &acl.multi_sig {
return multi_sig.check(signatories, self.global_permissions.len());
}
}
// No multi-sig requirement
true
}
/// Add an admin (only admins can call this)
pub fn add_admin(&mut self, caller: &str, new_admin: String) -> Result<(), String> {
if !self.is_admin(caller) {
return Err("Only admins can add admins".to_string());
}
if !self.admins.contains(&new_admin) {
self.admins.push(new_admin);
}
Ok(())
}
/// Grant a global permission to a user (only admins can call this)
pub fn grant_global_permission(
&mut self,
caller: &str,
pubkey: String,
permission: GlobalPermission,
) -> Result<(), String> {
if !self.is_admin(caller) {
return Err("Only admins can grant permissions".to_string());
}
self.global_permissions
.entry(pubkey)
.or_insert_with(Vec::new)
.push(permission);
Ok(())
}
/// Grant an object permission to a user (only admins can call this)
pub fn grant_object_permission(
&mut self,
caller: &str,
pubkey: String,
object_type: String,
permission: ObjectPermission,
) -> Result<(), String> {
if !self.is_admin(caller) {
return Err("Only admins can grant permissions".to_string());
}
self.object_acls
.entry(object_type)
.or_insert_with(|| ObjectAcl {
permissions: HashMap::new(),
multi_sig: None,
})
.permissions
.entry(pubkey)
.or_insert_with(Vec::new)
.push(permission);
Ok(())
}
/// Grant SAL access to a user (only admins can call this)
pub fn grant_sal_access(
&mut self,
caller: &str,
pubkey: String,
sal_name: String,
) -> Result<(), String> {
if !self.is_admin(caller) {
return Err("Only admins can grant SAL access".to_string());
}
self.sal_acls
.entry(sal_name)
.or_insert_with(|| SalAcl {
allowed_callers: Vec::new(),
multi_sig: None,
})
.allowed_callers
.push(pubkey);
Ok(())
}
/// Set multi-sig requirement for an object (only admins can call this)
pub fn set_object_multi_sig(
&mut self,
caller: &str,
object_type: String,
multi_sig: MultiSigRequirement,
) -> Result<(), String> {
if !self.is_admin(caller) {
return Err("Only admins can set multi-sig requirements".to_string());
}
self.object_acls
.entry(object_type)
.or_insert_with(|| ObjectAcl {
permissions: HashMap::new(),
multi_sig: None,
})
.multi_sig = Some(multi_sig);
Ok(())
}
/// Set multi-sig requirement for a SAL (only admins can call this)
pub fn set_sal_multi_sig(
&mut self,
caller: &str,
sal_name: String,
multi_sig: MultiSigRequirement,
) -> Result<(), String> {
if !self.is_admin(caller) {
return Err("Only admins can set multi-sig requirements".to_string());
}
self.sal_acls
.entry(sal_name)
.or_insert_with(|| SalAcl {
allowed_callers: Vec::new(),
multi_sig: None,
})
.multi_sig = Some(multi_sig);
Ok(())
}
}
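// --- Illustrative sketch (not part of the original file): how the checks above
// compose into the "authorization boundary" described in the module docs.
// The object type "notes" and the pubkeys are made-up example values.
fn _authorization_flow_sketch() -> Result<(), String> {
    let mut ctx = Context::new("demo".to_string(), "admin_pk".to_string());
    // The admin grants a user global write access plus create rights on "notes".
    ctx.grant_global_permission("admin_pk", "user_pk".to_string(), GlobalPermission::Write)?;
    ctx.grant_object_permission("admin_pk", "user_pk".to_string(), "notes".to_string(), ObjectPermission::Create)?;
    // Before executing an operation, a caller is checked on all three layers:
    // global RWX, per-object ACL, and (if configured) multi-sig requirements.
    let allowed = ctx.has_global_permission("user_pk", &GlobalPermission::Write)
        && ctx.has_object_permission("user_pk", "notes", &ObjectPermission::Create)
        && ctx.check_object_multi_sig(&["user_pk".to_string()], "notes");
    assert!(allowed);
    Ok(())
}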
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_context_creation() {
let ctx = Context::new("Test Context".to_string(), "admin_pk".to_string());
assert_eq!(ctx.name, "Test Context");
assert!(ctx.is_admin("admin_pk"));
}
#[test]
fn test_admin_permissions() {
let mut ctx = Context::new("Test".to_string(), "admin".to_string());
// Admin can add another admin
assert!(ctx.add_admin("admin", "admin2".to_string()).is_ok());
assert!(ctx.is_admin("admin2"));
// Non-admin cannot add admin
assert!(ctx.add_admin("user1", "admin3".to_string()).is_err());
}
#[test]
fn test_global_permissions() {
let mut ctx = Context::new("Test".to_string(), "admin".to_string());
// Admin can grant permissions
assert!(ctx.grant_global_permission("admin", "user1".to_string(), GlobalPermission::Read).is_ok());
assert!(ctx.has_global_permission("user1", &GlobalPermission::Read));
assert!(!ctx.has_global_permission("user1", &GlobalPermission::Write));
// Non-admin cannot grant permissions
assert!(ctx.grant_global_permission("user1", "user2".to_string(), GlobalPermission::Read).is_err());
}
#[test]
fn test_object_permissions() {
let mut ctx = Context::new("Test".to_string(), "admin".to_string());
// Admin can grant object permissions
assert!(ctx.grant_object_permission("admin", "user1".to_string(), "notes".to_string(), ObjectPermission::Create).is_ok());
assert!(ctx.has_object_permission("user1", "notes", &ObjectPermission::Create));
assert!(!ctx.has_object_permission("user1", "notes", &ObjectPermission::Delete));
}
#[test]
fn test_sal_permissions() {
let mut ctx = Context::new("Test".to_string(), "admin".to_string());
// Admin can grant SAL access
assert!(ctx.grant_sal_access("admin", "user1".to_string(), "launch_vm".to_string()).is_ok());
assert!(ctx.can_call_sal("user1", "launch_vm"));
assert!(!ctx.can_call_sal("user1", "send_email"));
}
#[test]
fn test_object_multi_sig_unanimous() {
let mut ctx = Context::new("Test".to_string(), "admin".to_string());
assert!(ctx.set_object_multi_sig(
"admin",
"sensitive_data".to_string(),
MultiSigRequirement::Unanimous {
required_signers: vec!["alice".to_string(), "bob".to_string()],
},
).is_ok());
// Both signers present - should pass
assert!(ctx.check_object_multi_sig(&["alice".to_string(), "bob".to_string()], "sensitive_data"));
// Only one signer - should fail
assert!(!ctx.check_object_multi_sig(&["alice".to_string()], "sensitive_data"));
}
#[test]
fn test_sal_multi_sig_threshold() {
let mut ctx = Context::new("Test".to_string(), "admin".to_string());
assert!(ctx.set_sal_multi_sig(
"admin",
"launch_vm".to_string(),
MultiSigRequirement::Threshold {
min_signatures: 2,
allowed_signers: Some(vec!["alice".to_string(), "bob".to_string(), "charlie".to_string()]),
},
).is_ok());
// 2 signatures - should pass
assert!(ctx.check_sal_multi_sig(&["alice".to_string(), "bob".to_string()], "launch_vm"));
// 1 signature - should fail
assert!(!ctx.check_sal_multi_sig(&["alice".to_string()], "launch_vm"));
}
}

View File

@@ -1,327 +0,0 @@
use ::rhai::plugin::*;
use ::rhai::{CustomType, Dynamic, Engine, EvalAltResult, Module, TypeBuilder};
use crate::Context;
// ============================================================================
// Context Module
// ============================================================================
type RhaiContext = Context;
#[export_module]
mod rhai_context_module {
use super::RhaiContext;
use crate::MultiSigRequirement;
use ::rhai::{Dynamic, EvalAltResult};
/// Create a new context with name and initial admin
#[rhai_fn(name = "new_context", return_raw)]
pub fn new_context(name: String, admin: String) -> Result<RhaiContext, Box<EvalAltResult>> {
Ok(RhaiContext::new(name, admin))
}
/// Set context description
#[rhai_fn(name = "description", return_raw)]
pub fn set_description(
ctx: &mut RhaiContext,
description: String,
) -> Result<RhaiContext, Box<EvalAltResult>> {
ctx.description = Some(description);
Ok(ctx.clone())
}
// ========== Admin Management ==========
/// Check if a user is an admin
#[rhai_fn(name = "is_admin")]
pub fn is_admin(ctx: &mut RhaiContext, pubkey: String) -> bool {
ctx.is_admin(&pubkey)
}
/// Add an admin (only admins can call this)
#[rhai_fn(name = "add_admin", return_raw)]
pub fn add_admin(
ctx: &mut RhaiContext,
caller: String,
new_admin: String,
) -> Result<RhaiContext, Box<EvalAltResult>> {
ctx.add_admin(&caller, new_admin)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
// ========== Global Permission Management (RWX) ==========
/// Grant a global permission to a user (only admins can call this)
#[rhai_fn(name = "grant_global_permission", return_raw)]
pub fn grant_global_permission(
ctx: &mut RhaiContext,
caller: String,
pubkey: String,
permission: String,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let perm = parse_global_permission(&permission)?;
ctx.grant_global_permission(&caller, pubkey, perm)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
/// Check if a user has a global permission
#[rhai_fn(name = "has_global_permission", return_raw)]
pub fn has_global_permission(
ctx: &mut RhaiContext,
pubkey: String,
permission: String,
) -> Result<bool, Box<EvalAltResult>> {
let perm = parse_global_permission(&permission)?;
Ok(ctx.has_global_permission(&pubkey, &perm))
}
// ========== Object Permission Management ==========
/// Grant an object permission to a user (only admins can call this)
#[rhai_fn(name = "grant_object_permission", return_raw)]
pub fn grant_object_permission(
ctx: &mut RhaiContext,
caller: String,
pubkey: String,
object_type: String,
permission: String,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let perm = parse_object_permission(&permission)?;
ctx.grant_object_permission(&caller, pubkey, object_type, perm)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
/// Check if a user has an object permission
#[rhai_fn(name = "has_object_permission", return_raw)]
pub fn has_object_permission(
ctx: &mut RhaiContext,
pubkey: String,
object_type: String,
permission: String,
) -> Result<bool, Box<EvalAltResult>> {
let perm = parse_object_permission(&permission)?;
Ok(ctx.has_object_permission(&pubkey, &object_type, &perm))
}
// ========== SAL Permission Management (Binary) ==========
/// Grant SAL access to a user (only admins can call this)
#[rhai_fn(name = "grant_sal_access", return_raw)]
pub fn grant_sal_access(
ctx: &mut RhaiContext,
caller: String,
pubkey: String,
sal_name: String,
) -> Result<RhaiContext, Box<EvalAltResult>> {
ctx.grant_sal_access(&caller, pubkey, sal_name)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
/// Check if a user can call a SAL
#[rhai_fn(name = "can_call_sal")]
pub fn can_call_sal(ctx: &mut RhaiContext, pubkey: String, sal_name: String) -> bool {
ctx.can_call_sal(&pubkey, &sal_name)
}
// ========== Multi-Sig Management for Objects ==========
/// Set unanimous multi-sig requirement for an object (only admins can call this)
#[rhai_fn(name = "set_object_multisig_unanimous", return_raw)]
pub fn set_object_multisig_unanimous(
ctx: &mut RhaiContext,
caller: String,
object_type: String,
required_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let signers = parse_signers(required_signers)?;
ctx.set_object_multi_sig(
&caller,
object_type,
MultiSigRequirement::Unanimous { required_signers: signers },
)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
/// Set threshold multi-sig requirement for an object (only admins can call this)
#[rhai_fn(name = "set_object_multisig_threshold", return_raw)]
pub fn set_object_multisig_threshold(
ctx: &mut RhaiContext,
caller: String,
object_type: String,
min_signatures: i64,
allowed_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let signers = parse_signers(allowed_signers)?;
ctx.set_object_multi_sig(
&caller,
object_type,
MultiSigRequirement::Threshold {
min_signatures: min_signatures as usize,
allowed_signers: Some(signers),
},
)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
/// Set percentage multi-sig requirement for an object (only admins can call this)
#[rhai_fn(name = "set_object_multisig_percentage", return_raw)]
pub fn set_object_multisig_percentage(
ctx: &mut RhaiContext,
caller: String,
object_type: String,
percentage: f64,
allowed_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
if percentage < 0.0 || percentage > 1.0 {
return Err("Percentage must be between 0.0 and 1.0".into());
}
let signers = parse_signers(allowed_signers)?;
ctx.set_object_multi_sig(
&caller,
object_type,
MultiSigRequirement::Percentage {
percentage,
allowed_signers: Some(signers),
},
)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
// ========== Multi-Sig Management for SALs ==========
/// Set unanimous multi-sig requirement for a SAL (only admins can call this)
#[rhai_fn(name = "set_sal_multisig_unanimous", return_raw)]
pub fn set_sal_multisig_unanimous(
ctx: &mut RhaiContext,
caller: String,
sal_name: String,
required_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let signers = parse_signers(required_signers)?;
ctx.set_sal_multi_sig(
&caller,
sal_name,
MultiSigRequirement::Unanimous { required_signers: signers },
)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
/// Set threshold multi-sig requirement for a SAL (only admins can call this)
#[rhai_fn(name = "set_sal_multisig_threshold", return_raw)]
pub fn set_sal_multisig_threshold(
ctx: &mut RhaiContext,
caller: String,
sal_name: String,
min_signatures: i64,
allowed_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let signers = parse_signers(allowed_signers)?;
ctx.set_sal_multi_sig(
&caller,
sal_name,
MultiSigRequirement::Threshold {
min_signatures: min_signatures as usize,
allowed_signers: Some(signers),
},
)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
/// Set percentage multi-sig requirement for a SAL (only admins can call this)
#[rhai_fn(name = "set_sal_multisig_percentage", return_raw)]
pub fn set_sal_multisig_percentage(
ctx: &mut RhaiContext,
caller: String,
sal_name: String,
percentage: f64,
allowed_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
if percentage < 0.0 || percentage > 1.0 {
return Err("Percentage must be between 0.0 and 1.0".into());
}
let signers = parse_signers(allowed_signers)?;
ctx.set_sal_multi_sig(
&caller,
sal_name,
MultiSigRequirement::Percentage {
percentage,
allowed_signers: Some(signers),
},
)
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
Ok(ctx.clone())
}
// ========== Getters ==========
#[rhai_fn(name = "get_name")]
pub fn get_name(ctx: &mut RhaiContext) -> String {
ctx.name.clone()
}
#[rhai_fn(name = "get_description")]
pub fn get_description(ctx: &mut RhaiContext) -> String {
ctx.description.clone().unwrap_or_default()
}
}
// Helper functions to parse permissions
fn parse_global_permission(permission: &str) -> Result<crate::GlobalPermission, Box<EvalAltResult>> {
match permission {
"read" => Ok(crate::GlobalPermission::Read),
"write" => Ok(crate::GlobalPermission::Write),
"execute" => Ok(crate::GlobalPermission::Execute),
_ => Err(format!("Invalid global permission: {}", permission).into()),
}
}
fn parse_object_permission(permission: &str) -> Result<crate::ObjectPermission, Box<EvalAltResult>> {
match permission {
"create" => Ok(crate::ObjectPermission::Create),
"read" => Ok(crate::ObjectPermission::Read),
"update" => Ok(crate::ObjectPermission::Update),
"delete" => Ok(crate::ObjectPermission::Delete),
"list" => Ok(crate::ObjectPermission::List),
_ => Err(format!("Invalid object permission: {}", permission).into()),
}
}
fn parse_signers(signers: Vec<Dynamic>) -> Result<Vec<String>, Box<EvalAltResult>> {
signers
.into_iter()
.map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
.collect::<Result<Vec<String>, _>>()
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))
}
impl CustomType for Context {
fn build(mut builder: TypeBuilder<Self>) {
builder.with_name("Context");
}
}
/// Register the Context module with the Rhai engine
pub fn register_context_module(engine: &mut Engine) {
let module = exported_module!(rhai_context_module);
engine.register_static_module("context", module.into());
engine.register_type::<Context>();
}
/// Register Context functions directly on the engine (for global access)
pub fn register_context_functions(engine: &mut Engine) {
engine.register_type::<Context>();
// Register the module functions
let module = exported_module!(rhai_context_module);
engine.register_global_module(module.into());
}
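// --- Illustrative sketch (not in the original file): registering the exported
// functions globally and driving them from a Rhai script. The script contents
// are example values; only functions registered above are used.
#[cfg(test)]
mod _rhai_usage_sketch {
    use super::*;

    #[test]
    fn grant_and_check_from_rhai() {
        let mut engine = Engine::new();
        register_context_functions(&mut engine);
        let script = r#"
            let ctx = new_context("demo", "admin_pk");
            ctx.grant_global_permission("admin_pk", "user_pk", "read");
            ctx.has_global_permission("user_pk", "read")
        "#;
        // The last expression of the script is the boolean permission check.
        let allowed = engine.eval::<bool>(script).expect("script should evaluate");
        assert!(allowed);
    }
}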

View File

@@ -1,333 +0,0 @@
use ::rhai::plugin::*;
use ::rhai::{CustomType, Dynamic, Engine, EvalAltResult, Module, TypeBuilder};
use crate::Context;
// ============================================================================
// Context Module
// ============================================================================
type RhaiContext = Context;
#[export_module]
mod rhai_context_module {
use super::RhaiContext;
use crate::{GlobalPermission, MultiSigRequirement, ObjectPermission};
use ::rhai::{Dynamic, EvalAltResult};
/// Create a new context with name and initial admin
#[rhai_fn(name = "new_context", return_raw)]
pub fn new_context(name: String, admin: String) -> Result<RhaiContext, Box<EvalAltResult>> {
Ok(RhaiContext::new(name, admin))
}
/// Set context description
#[rhai_fn(name = "description", return_raw)]
pub fn set_description(
ctx: &mut RhaiContext,
description: String,
) -> Result<RhaiContext, Box<EvalAltResult>> {
ctx.description = Some(description);
Ok(ctx.clone())
}
// ========== Global Permission Management ==========
/// Grant a global permission to a user
#[rhai_fn(name = "grant_permission", return_raw)]
pub fn grant_permission(
ctx: &mut RhaiContext,
pubkey: String,
permission: String,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let perm = match permission.as_str() {
"read" => Permission::Read,
"write" => Permission::Write,
"delete" => Permission::Delete,
"execute" => Permission::Execute,
"admin" => Permission::Admin,
"invite" => Permission::Invite,
_ => return Err(format!("Invalid permission: {}", permission).into()),
};
ctx.grant_permission(pubkey, perm);
Ok(ctx.clone())
}
/// Check if a user has a global permission
#[rhai_fn(name = "has_permission", return_raw)]
pub fn has_permission(
ctx: &mut RhaiContext,
pubkey: String,
permission: String,
) -> Result<bool, Box<EvalAltResult>> {
let perm = match permission.as_str() {
"read" => Permission::Read,
"write" => Permission::Write,
"delete" => Permission::Delete,
"execute" => Permission::Execute,
"admin" => Permission::Admin,
"invite" => Permission::Invite,
_ => return Err(format!("Invalid permission: {}", permission).into()),
};
Ok(ctx.has_permission(&pubkey, &perm))
}
// ========== Object Permission Management ==========
/// Grant an object permission to a user
#[rhai_fn(name = "grant_object_permission", return_raw)]
pub fn grant_object_permission(
ctx: &mut RhaiContext,
pubkey: String,
object_type: String,
permission: String,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let perm = parse_resource_permission(&permission)?;
ctx.grant_resource_permission(pubkey, object_type, perm, false);
Ok(ctx.clone())
}
/// Check if a user has an object permission
#[rhai_fn(name = "has_object_permission", return_raw)]
pub fn has_object_permission(
ctx: &mut RhaiContext,
pubkey: String,
object_type: String,
permission: String,
) -> Result<bool, Box<EvalAltResult>> {
let perm = parse_resource_permission(&permission)?;
Ok(ctx.has_resource_permission(&pubkey, &object_type, &perm, false))
}
// ========== SAL Permission Management ==========
/// Grant a SAL permission to a user
#[rhai_fn(name = "grant_sal_permission", return_raw)]
pub fn grant_sal_permission(
ctx: &mut RhaiContext,
pubkey: String,
sal_name: String,
permission: String,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let perm = parse_resource_permission(&permission)?;
ctx.grant_resource_permission(pubkey, sal_name, perm, true);
Ok(ctx.clone())
}
/// Check if a user has a SAL permission
#[rhai_fn(name = "has_sal_permission", return_raw)]
pub fn has_sal_permission(
ctx: &mut RhaiContext,
pubkey: String,
sal_name: String,
permission: String,
) -> Result<bool, Box<EvalAltResult>> {
let perm = parse_resource_permission(&permission)?;
Ok(ctx.has_resource_permission(&pubkey, &sal_name, &perm, true))
}
// ========== Multi-Sig Management ==========
/// Set unanimous multi-sig requirement for an object
#[rhai_fn(name = "set_object_multisig_unanimous", return_raw)]
pub fn set_object_multisig_unanimous(
ctx: &mut RhaiContext,
object_type: String,
required_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let signers: Result<Vec<String>, _> = required_signers
.into_iter()
.map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
.collect();
let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
ctx.set_multi_sig(
object_type,
MultiSigRequirement::Unanimous { required_signers: signers },
false,
);
Ok(ctx.clone())
}
/// Set threshold multi-sig requirement for an object
#[rhai_fn(name = "set_object_multisig_threshold", return_raw)]
pub fn set_object_multisig_threshold(
ctx: &mut RhaiContext,
object_type: String,
min_signatures: i64,
allowed_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let signers: Result<Vec<String>, _> = allowed_signers
.into_iter()
.map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
.collect();
let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
ctx.set_multi_sig(
object_type,
MultiSigRequirement::Threshold {
min_signatures: min_signatures as usize,
allowed_signers: Some(signers),
},
false,
);
Ok(ctx.clone())
}
/// Set percentage multi-sig requirement for an object
#[rhai_fn(name = "set_object_multisig_percentage", return_raw)]
pub fn set_object_multisig_percentage(
ctx: &mut RhaiContext,
object_type: String,
percentage: f64,
allowed_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
if percentage < 0.0 || percentage > 1.0 {
return Err("Percentage must be between 0.0 and 1.0".into());
}
let signers: Result<Vec<String>, _> = allowed_signers
.into_iter()
.map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
.collect();
let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
ctx.set_multi_sig(
object_type,
MultiSigRequirement::Percentage {
percentage,
allowed_signers: Some(signers),
},
false,
);
Ok(ctx.clone())
}
/// Set unanimous multi-sig requirement for a SAL
#[rhai_fn(name = "set_sal_multisig_unanimous", return_raw)]
pub fn set_sal_multisig_unanimous(
ctx: &mut RhaiContext,
sal_name: String,
required_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let signers: Result<Vec<String>, _> = required_signers
.into_iter()
.map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
.collect();
let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
ctx.set_multi_sig(
sal_name,
MultiSigRequirement::Unanimous { required_signers: signers },
true,
);
Ok(ctx.clone())
}
/// Set threshold multi-sig requirement for a SAL
#[rhai_fn(name = "set_sal_multisig_threshold", return_raw)]
pub fn set_sal_multisig_threshold(
ctx: &mut RhaiContext,
sal_name: String,
min_signatures: i64,
allowed_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
let signers: Result<Vec<String>, _> = allowed_signers
.into_iter()
.map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
.collect();
let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
ctx.set_multi_sig(
sal_name,
MultiSigRequirement::Threshold {
min_signatures: min_signatures as usize,
allowed_signers: Some(signers),
},
true,
);
Ok(ctx.clone())
}
/// Set percentage multi-sig requirement for a SAL
#[rhai_fn(name = "set_sal_multisig_percentage", return_raw)]
pub fn set_sal_multisig_percentage(
ctx: &mut RhaiContext,
sal_name: String,
percentage: f64,
allowed_signers: Vec<Dynamic>,
) -> Result<RhaiContext, Box<EvalAltResult>> {
if percentage < 0.0 || percentage > 1.0 {
return Err("Percentage must be between 0.0 and 1.0".into());
}
let signers: Result<Vec<String>, _> = allowed_signers
.into_iter()
.map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
.collect();
let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
ctx.set_multi_sig(
sal_name,
MultiSigRequirement::Percentage {
percentage,
allowed_signers: Some(signers),
},
true,
);
Ok(ctx.clone())
}
// ========== Getters ==========
#[rhai_fn(name = "get_name")]
pub fn get_name(ctx: &mut RhaiContext) -> String {
ctx.name.clone()
}
#[rhai_fn(name = "get_description")]
pub fn get_description(ctx: &mut RhaiContext) -> String {
ctx.description.clone().unwrap_or_default()
}
}
// Helper function to parse resource permissions
fn parse_resource_permission(permission: &str) -> Result<crate::ResourcePermission, Box<EvalAltResult>> {
match permission {
"create" => Ok(crate::ResourcePermission::Create),
"read" => Ok(crate::ResourcePermission::Read),
"update" => Ok(crate::ResourcePermission::Update),
"delete" => Ok(crate::ResourcePermission::Delete),
"list" => Ok(crate::ResourcePermission::List),
"execute" => Ok(crate::ResourcePermission::Execute),
_ => Err(format!("Invalid resource permission: {}", permission).into()),
}
}
impl CustomType for Context {
fn build(mut builder: TypeBuilder<Self>) {
builder.with_name("Context");
}
}
/// Register the Context module with the Rhai engine
pub fn register_context_module(engine: &mut Engine) {
let module = exported_module!(rhai_context_module);
engine.register_static_module("context", module.into());
engine.register_type::<Context>();
}
/// Register Context functions directly on the engine (for global access)
pub fn register_context_functions(engine: &mut Engine) {
engine.register_type::<Context>();
// Register the module functions
let module = exported_module!(rhai_context_module);
engine.register_global_module(module.into());
}

View File

@@ -200,213 +200,3 @@ fn is_offsetdatetime_type(ty: &Type) -> bool {
}
false
}
/// Derive macro for generating CRUD client methods for Osiris models
///
/// This macro generates async CRUD methods (create, get, update, delete, list) for a model,
/// plus any custom methods defined on the model.
///
/// # Example
///
/// ```rust
/// #[derive(OsirisModel)]
/// #[osiris(
/// collection = "calendar_events",
/// id_field = "event_id",
/// methods = ["reschedule", "cancel"]
/// )]
/// pub struct CalendarEvent {
/// pub event_id: String,
/// pub title: String,
/// pub start_time: i64,
/// // ...
/// }
/// ```
///
/// This generates methods on OsirisClient:
/// - `create_calendar_event(&self, event: CalendarEvent) -> Result<CalendarEvent>`
/// - `get_calendar_event(&self, event_id: &str) -> Result<CalendarEvent>`
/// - `update_calendar_event(&self, event_id: &str, event: CalendarEvent) -> Result<CalendarEvent>`
/// - `delete_calendar_event(&self, event_id: &str) -> Result<()>`
/// - `list_calendar_events(&self) -> Result<Vec<CalendarEvent>>`
/// - `reschedule_calendar_event(&self, event_id: &str, new_time: i64) -> Result<CalendarEvent>`
/// - `cancel_calendar_event(&self, event_id: &str) -> Result<CalendarEvent>`
#[proc_macro_derive(OsirisModel, attributes(osiris))]
pub fn derive_osiris_model(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as DeriveInput);
let model_name = &input.ident;
let model_name_snake = to_snake_case(&model_name.to_string());
// Parse attributes
let mut collection = model_name_snake.clone();
let mut id_field = "id".to_string();
let mut custom_methods: Vec<String> = Vec::new();
for attr in &input.attrs {
if attr.path().is_ident("osiris") {
if let Ok(meta_list) = attr.parse_args::<syn::MetaList>() {
// Parse nested attributes
for nested in meta_list.tokens.clone() {
let nested_str = nested.to_string();
if nested_str.starts_with("collection") {
if let Some(val) = extract_string_value(&nested_str) {
collection = val;
}
} else if nested_str.starts_with("id_field") {
if let Some(val) = extract_string_value(&nested_str) {
id_field = val;
}
} else if nested_str.starts_with("methods") {
custom_methods = extract_array_values(&nested_str);
}
}
}
}
}
// Generate method names
let create_method = syn::Ident::new(&format!("create_{}", model_name_snake), model_name.span());
let get_method = syn::Ident::new(&format!("get_{}", model_name_snake), model_name.span());
let update_method = syn::Ident::new(&format!("update_{}", model_name_snake), model_name.span());
let delete_method = syn::Ident::new(&format!("delete_{}", model_name_snake), model_name.span());
let list_method = syn::Ident::new(&format!("list_{}s", model_name_snake), model_name.span());
// Generate custom method implementations
let custom_method_impls: Vec<_> = custom_methods.iter().map(|method_name| {
let method_ident = syn::Ident::new(&format!("{}_{}", method_name, model_name_snake), model_name.span());
let rhai_call = format!("{}_{}", model_name_snake, method_name);
quote! {
pub async fn #method_ident(&self, id: &str, params: serde_json::Value) -> Result<#model_name, OsirisClientError> {
let script = format!(
r#"
let obj = {}::get("{}");
obj.{}(params);
obj.save();
obj
"#,
#collection, id, #method_name
);
let response = self.execute_script(&script).await?;
// Parse response and return model
// This is a simplified version - actual implementation would parse the job result
Err(OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
}
}).collect();
let expanded = quote! {
impl OsirisClient {
/// Create a new instance of #model_name
pub async fn #create_method(&self, model: &#model_name) -> Result<#model_name, OsirisClientError> {
let json = serde_json::to_string(model)
.map_err(|e| OsirisClientError::SerializationFailed(e.to_string()))?;
let script = format!(
r#"
let data = {};
let obj = {}::new(data);
obj.save();
obj
"#,
json, #collection
);
let response = self.execute_script(&script).await?;
// Parse response - simplified for now
Err(OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
/// Get an instance of #model_name by ID
pub async fn #get_method(&self, id: &str) -> Result<#model_name, OsirisClientError> {
let query = format!(r#"{{ "{}": "{}" }}"#, #id_field, id);
self.query::<#model_name>(#collection, &query).await
}
/// Update an existing #model_name
pub async fn #update_method(&self, id: &str, model: &#model_name) -> Result<#model_name, OsirisClientError> {
let json = serde_json::to_string(model)
.map_err(|e| OsirisClientError::SerializationFailed(e.to_string()))?;
let script = format!(
r#"
let obj = {}::get("{}");
let data = {};
obj.update(data);
obj.save();
obj
"#,
#collection, id, json
);
let response = self.execute_script(&script).await?;
Err(OsirisClientError::CommandFailed("Not yet implemented".to_string()))
}
/// Delete an instance of #model_name
pub async fn #delete_method(&self, id: &str) -> Result<(), OsirisClientError> {
let script = format!(
r#"
let obj = {}::get("{}");
obj.delete();
"#,
#collection, id
);
self.execute_script(&script).await?;
Ok(())
}
/// List all instances of #model_name
pub async fn #list_method(&self) -> Result<Vec<#model_name>, OsirisClientError> {
self.query_all::<#model_name>(#collection).await
}
#(#custom_method_impls)*
}
};
TokenStream::from(expanded)
}
fn to_snake_case(s: &str) -> String {
let mut result = String::new();
for (i, ch) in s.chars().enumerate() {
if ch.is_uppercase() {
if i > 0 {
result.push('_');
}
result.push(ch.to_lowercase().next().unwrap());
} else {
result.push(ch);
}
}
result
}
fn extract_string_value(s: &str) -> Option<String> {
// Extract value from "key = \"value\"" format
if let Some(eq_pos) = s.find('=') {
let value_part = &s[eq_pos + 1..].trim();
let cleaned = value_part.trim_matches(|c| c == '"' || c == ' ');
return Some(cleaned.to_string());
}
None
}
fn extract_array_values(s: &str) -> Vec<String> {
// Extract values from "methods = [\"method1\", \"method2\"]" format
if let Some(start) = s.find('[') {
if let Some(end) = s.find(']') {
let array_content = &s[start + 1..end];
return array_content
.split(',')
.map(|item| item.trim().trim_matches('"').to_string())
.filter(|item| !item.is_empty())
.collect();
}
}
Vec::new()
}
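// --- Illustrative sketch (not in the original file): expected behaviour of the
// attribute-parsing helpers above, using the attribute syntax shown in the
// `OsirisModel` docs. Inputs are example values only.
#[cfg(test)]
mod _helper_sketch {
    use super::*;

    #[test]
    fn parses_osiris_attribute_pieces() {
        assert_eq!(to_snake_case("CalendarEvent"), "calendar_event");
        assert_eq!(
            extract_string_value(r#"collection = "calendar_events""#),
            Some("calendar_events".to_string())
        );
        assert_eq!(
            extract_array_values(r#"methods = ["reschedule", "cancel"]"#),
            vec!["reschedule".to_string(), "cancel".to_string()]
        );
    }
}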

View File

@@ -1 +0,0 @@
(binary or mis-encoded file content; not displayable)

View File

@@ -1,5 +1,6 @@
# Horus Installation
!!include configure.md
// Install all components
!!coordinator.install name:'development'
!!supervisor.install name:'development'
!!herorunner.install name:'development'

View File

@@ -2,9 +2,11 @@
Starts all horus binaries
!!include install.md
// Start all services
!!herocoordinator.start name:'development'
!!supervisor.start name:'development'
!!herorunner.start name:'development'
!!osirisrunner.start name:'development'
!!salrunner.start name:'development'
!!herocoordinator.start name:'default'
!!supervisor.start name:'default'
!!herorunner.start name:'default'
!!osirisrunner.start name:'default'
!!salrunner.start name:'default'

View File

@@ -1,170 +0,0 @@
# End-to-End Integration Tests
This directory contains end-to-end integration tests for the Horus system components. Each test file spawns the actual binary and tests it via its client library.
## Test Files
### `coordinator.rs`
End-to-end tests for the Hero Coordinator service.
**Tests:**
- Actor creation and loading
- Context creation and management
- Runner registration and configuration
- Job creation with dependencies
- Flow creation and DAG generation
- Flow execution (start)
**Prerequisites:**
- Redis server running on `127.0.0.1:6379`
- Port `9652` (HTTP API) and `9653` (WebSocket API) available
**Run:**
```bash
cargo test --test coordinator -- --test-threads=1
```
### `supervisor.rs`
End-to-end tests for the Hero Supervisor service.
**Tests:**
- OpenRPC discovery
- Runner registration and management
- Job creation and execution
- Job status tracking
- API key generation and management
- Authentication verification
- Complete workflow integration
**Prerequisites:**
- Redis server running on `127.0.0.1:6379`
- Port `3031` available
**Run:**
```bash
cargo test --test supervisor -- --test-threads=1
```
### `runner_hero.rs`
End-to-end tests for the Hero (Python) runner.
**Prerequisites:**
- Python 3 installed
- Redis server running
**Run:**
```bash
cargo test --test runner_hero -- --test-threads=1
```
### `runner_osiris.rs`
End-to-end tests for the Osiris (V language) runner.
**Prerequisites:**
- V language compiler installed
- Redis server running
**Run:**
```bash
cargo test --test runner_osiris -- --test-threads=1
```
### `runner_sal.rs`
End-to-end tests for the Sal (Rhai scripting) runner.
**Prerequisites:**
- Redis server running
**Run:**
```bash
cargo test --test runner_sal -- --test-threads=1
```
## Running All Tests
To run all end-to-end tests sequentially:
```bash
cargo test --tests -- --test-threads=1
```
## Important Notes
### Sequential Execution Required
All tests **must** be run with `--test-threads=1` because:
1. Each test spawns a server process that binds to specific ports
2. Tests share Redis databases and may conflict if run in parallel
3. Process cleanup needs to happen sequentially
### Redis Requirement
All tests require a Redis server running on `127.0.0.1:6379`. You can start Redis with:
```bash
redis-server
```
Or using Docker:
```bash
docker run -d -p 6379:6379 redis:latest
```
### Port Conflicts
If tests fail to start, check that the required ports are not in use:
- **Coordinator**: 9652 (HTTP), 9653 (WebSocket)
- **Supervisor**: 3031
- **Runners**: Various ports depending on configuration
You can check port usage with:
```bash
lsof -i :9652
lsof -i :3031
```
### Test Isolation
Each test file:
1. Builds the binary using `escargot`
2. Starts the process with test-specific configuration
3. Runs tests against the running instance
4. Cleans up the process at the end
Tests within a file may share state through Redis, so they are designed to be idempotent and handle existing data.
### Debugging
To see detailed logs during test execution:
```bash
RUST_LOG=debug cargo test --test coordinator -- --test-threads=1 --nocapture
```
To run a specific test:
```bash
cargo test --test coordinator test_01_actor_create -- --test-threads=1 --nocapture
```
## Test Architecture
Each test file follows this pattern (a minimal sketch follows the list):
1. **Global Process Management**: Uses `lazy_static` and `Once` to ensure the server process starts only once
2. **Setup Helper**: Common setup code (e.g., `setup_prerequisites()`) to reduce duplication
3. **Sequential Tests**: Tests are numbered (e.g., `test_01_`, `test_02_`) to indicate execution order
4. **Cleanup Test**: A final `test_zz_cleanup()` ensures the process is terminated and ports are freed
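The skeleton below is a minimal sketch of that pattern; names like `SERVER_PROCESS` and `init_server` are illustrative rather than taken from any one test file:
```rust
use std::process::Child;
use std::sync::{Mutex, Once};
use lazy_static::lazy_static;

lazy_static! {
    // Shared handle so every test (including the cleanup test) sees the same process.
    static ref SERVER_PROCESS: Mutex<Option<Child>> = Mutex::new(None);
}
static INIT: Once = Once::new();

fn init_server() {
    // Build with escargot and spawn the binary exactly once per test run.
    INIT.call_once(|| {
        let binary = escargot::CargoBuild::new()
            .bin("coordinator")
            .run()
            .expect("build failed");
        let child = binary.command().spawn().expect("spawn failed");
        *SERVER_PROCESS.lock().unwrap() = Some(child);
    });
}

#[tokio::test]
async fn test_01_first_case() {
    init_server();
    // ... exercise the client against the running instance ...
}

#[tokio::test]
async fn test_zz_cleanup() {
    // Runs last alphabetically: kill the process and free the port.
    if let Some(mut child) = SERVER_PROCESS.lock().unwrap().take() {
        let _ = child.kill();
        let _ = child.wait();
    }
}
```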
## Contributing
When adding new tests:
1. Follow the existing naming convention (`test_NN_description`)
2. Use the setup helpers to avoid duplication
3. Make tests idempotent (handle existing data gracefully)
4. Add cleanup in the `test_zz_cleanup()` function
5. Update this README with any new prerequisites or test descriptions

View File

@@ -1,331 +0,0 @@
//! End-to-End Integration Tests for Hero Coordinator
//!
//! Tests coordinator flow management functionality against a running coordinator instance.
//! The coordinator binary is automatically started and stopped for each test run.
//!
//! **IMPORTANT**: Run with `--test-threads=1` to ensure tests run sequentially:
//! ```
//! cargo test --test coordinator -- --test-threads=1
//! ```
use hero_coordinator_client::{CoordinatorClient, models::*};
use std::collections::HashMap;
use std::sync::Once;
use std::process::Child;
/// Test configuration
const COORDINATOR_URL: &str = "http://127.0.0.1:9652";
const TEST_CONTEXT_ID: u32 = 2;
const TEST_CALLER_ID: u32 = 11001;
const TEST_FLOW_ID: u32 = 13001;
const BASE_JOB_ID: u32 = 20000;
use std::sync::Mutex;
use lazy_static::lazy_static;
lazy_static! {
static ref COORDINATOR_PROCESS: Mutex<Option<Child>> = Mutex::new(None);
}
/// Global initialization flag
static INIT: Once = Once::new();
/// Initialize and start the coordinator binary (called once)
async fn init_coordinator() {
INIT.call_once(|| {
// Register cleanup handler
let _ = std::panic::catch_unwind(|| {
ctrlc::set_handler(move || {
cleanup_coordinator();
std::process::exit(0);
}).ok();
});
// Use escargot to build and get the binary path
let binary = escargot::CargoBuild::new()
.bin("coordinator")
.package("hero-coordinator")
.run()
.expect("Failed to build coordinator binary");
// Start the coordinator binary with HTTP transport (no mycelium needed)
let child = binary
.command()
.env("RUST_LOG", "info")
.args(&[
"--api-http-port",
"9652",
"--api-ws-port",
"9653",
"--redis-addr",
"127.0.0.1:6379",
"--supervisor-transport",
"http",
])
.spawn()
.expect("Failed to start coordinator");
*COORDINATOR_PROCESS.lock().unwrap() = Some(child);
// Wait for server to be ready with simple TCP check
use std::net::TcpStream;
use std::time::Duration;
println!("⏳ Waiting for coordinator to start...");
for i in 0..30 {
std::thread::sleep(Duration::from_millis(500));
// Try to connect to the port
if TcpStream::connect_timeout(
&"127.0.0.1:9652".parse().unwrap(),
Duration::from_millis(100)
).is_ok() {
// Give it more time to fully initialize
std::thread::sleep(Duration::from_secs(2));
println!("✅ Coordinator ready after ~{}ms", (i * 500) + 2000);
return;
}
}
panic!("Coordinator failed to start within 15 seconds");
});
}
/// Cleanup coordinator process
fn cleanup_coordinator() {
if let Ok(mut guard) = COORDINATOR_PROCESS.lock() {
if let Some(mut child) = guard.take() {
println!("🧹 Cleaning up coordinator process...");
let _ = child.kill();
let _ = child.wait();
}
}
}
/// Helper to create a test client
async fn create_client() -> CoordinatorClient {
// Ensure coordinator is running
init_coordinator().await;
CoordinatorClient::new(COORDINATOR_URL)
.expect("Failed to create coordinator client")
}
#[tokio::test]
async fn test_01_flow_create_simple() {
println!("\n🧪 Test: flow.create (simple flow)");
let client = create_client().await;
// Note: Jobs should be created by the supervisor, not the coordinator
// For this test, we'll create a flow with job IDs that may not exist yet
// In a real scenario, jobs would be created by the supervisor first
let job_ids = vec![BASE_JOB_ID, BASE_JOB_ID + 1];
// Create flow
let flow_create = FlowCreate {
id: TEST_FLOW_ID,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
jobs: job_ids.clone(),
env_vars: HashMap::new(),
};
let result = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
if let Err(ref e) = result {
println!(" Error: {:?}", e);
}
assert!(result.is_ok(), "flow.create_or_load should succeed");
let flow = result.unwrap();
assert_eq!(flow.id, TEST_FLOW_ID);
assert_eq!(flow.jobs, job_ids);
println!("✅ flow.create works - flow: {}, jobs: {:?}", flow.id, flow.jobs);
}
#[tokio::test]
async fn test_02_flow_load() {
println!("\n🧪 Test: flow.load");
let client = create_client().await;
// Create a flow first (reuse from test_01)
let job_ids = vec![BASE_JOB_ID, BASE_JOB_ID + 1];
let flow_create = FlowCreate {
id: TEST_FLOW_ID,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
jobs: job_ids.clone(),
env_vars: HashMap::new(),
};
let _ = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
// Load the flow
let result = client.flow_load(TEST_CONTEXT_ID, TEST_FLOW_ID).await;
if let Err(ref e) = result {
println!(" Error: {:?}", e);
}
assert!(result.is_ok(), "flow.load should succeed");
let flow = result.unwrap();
assert_eq!(flow.id, TEST_FLOW_ID);
assert_eq!(flow.jobs, job_ids);
println!("✅ flow.load works - loaded flow: {}", flow.id);
}
#[tokio::test]
async fn test_03_flow_dag() {
println!("\n🧪 Test: flow.dag");
let client = create_client().await;
// Note: Jobs should be created by the supervisor
let job_ids = vec![BASE_JOB_ID + 100, BASE_JOB_ID + 101, BASE_JOB_ID + 102];
let flow_id = TEST_FLOW_ID + 1;
let flow_create = FlowCreate {
id: flow_id,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
jobs: job_ids.clone(),
env_vars: HashMap::new(),
};
let _ = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
// Get the DAG
let result = client.flow_dag(TEST_CONTEXT_ID, flow_id).await;
if let Err(ref e) = result {
println!(" Error: {:?}", e);
}
assert!(result.is_ok(), "flow.dag should succeed");
let dag = result.unwrap();
assert_eq!(dag.flow_id, flow_id);
assert_eq!(dag.nodes.len(), 3);
assert_eq!(dag.edges.len(), 2); // Two edges for the chain
println!("✅ flow.dag works - flow: {}, nodes: {}, edges: {}",
dag.flow_id, dag.nodes.len(), dag.edges.len());
}
#[tokio::test]
async fn test_04_flow_start() {
println!("\n🧪 Test: flow.start");
let client = create_client().await;
// Create a simple flow
let job_id = BASE_JOB_ID + 200;
let flow_id = TEST_FLOW_ID + 2;
let flow_create = FlowCreate {
id: flow_id,
caller_id: TEST_CALLER_ID,
context_id: TEST_CONTEXT_ID,
jobs: vec![job_id],
env_vars: HashMap::new(),
};
let _ = client.flow_create_or_load(TEST_CONTEXT_ID, flow_create).await;
// Start the flow
let result = client.flow_start(TEST_CONTEXT_ID, flow_id).await;
match result {
Ok(started) => {
println!("✅ flow.start works - started: {}", started);
}
Err(e) => {
println!("⚠️ flow.start: {:?} (runner may not be available)", e);
// This is expected if no actual runner is listening
}
}
}
#[tokio::test]
async fn test_05_message_create() {
println!("\n🧪 Test: message.create");
let client = create_client().await;
let message_create = MessageCreate {
id: 1,
context_id: TEST_CONTEXT_ID,
runner_id: 12001,
job_id: BASE_JOB_ID,
message_type: MessageType::JobRun,
format: MessageFormatType::JsonRpc,
payload: r#"{"method":"job.run","params":{}}"#.to_string(),
};
let result = client.message_create(TEST_CONTEXT_ID, message_create).await;
match result {
Ok(message) => {
assert_eq!(message.id, 1);
assert_eq!(message.context_id, TEST_CONTEXT_ID);
println!("✅ message.create works - message: {}", message.id);
}
Err(e) => {
println!("⚠️ message.create: {:?} (may already exist)", e);
}
}
}
#[tokio::test]
async fn test_06_message_load() {
println!("\n🧪 Test: message.load");
let client = create_client().await;
// Create a message first
let message_create = MessageCreate {
id: 2,
context_id: TEST_CONTEXT_ID,
runner_id: 12001,
job_id: BASE_JOB_ID,
message_type: MessageType::JobRun,
format: MessageFormatType::JsonRpc,
payload: r#"{"method":"job.run","params":{}}"#.to_string(),
};
let _ = client.message_create(TEST_CONTEXT_ID, message_create).await;
// Load the message
let result = client.message_load(TEST_CONTEXT_ID, 2).await;
if let Err(ref e) = result {
println!(" Error: {:?}", e);
}
match result {
Ok(message) => {
assert_eq!(message.id, 2);
assert_eq!(message.context_id, TEST_CONTEXT_ID);
println!("✅ message.load works - loaded message: {}", message.id);
}
Err(_) => {
println!("⚠️ message.load failed (message may not exist)");
}
}
}
/// Final test that ensures cleanup happens
#[tokio::test]
async fn test_zz_cleanup() {
println!("🧹 Running cleanup...");
cleanup_coordinator();
tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;
use std::net::TcpStream;
let port_free = TcpStream::connect_timeout(
&"127.0.0.1:9652".parse().unwrap(),
std::time::Duration::from_millis(100)
).is_err();
assert!(port_free, "Port 9652 should be free after cleanup");
println!("✅ Cleanup complete - port 9652 is free");
}