Compare commits
3 Commits
main_cicdp...main

| Author | SHA1 | Date |
|--------|------|------|
|        | 9f6032e2e1 | |
|        | 5d8189a653 | |
|        | 4e3d7a815d | |
@@ -1,392 +0,0 @@
# Gitea Actions Workflows Documentation

This directory contains the CI/CD workflows for the Horus project using Gitea Actions.

## Overview

The Horus project uses two main workflows:

1. **[ci.yml](./ci.yml)** - Continuous Integration workflow
2. **[release.yml](./release.yml)** - Release automation workflow

## Workflow Files

### ci.yml - Continuous Integration

**Purpose**: Automatically build, test, and validate code quality on every push and pull request.

**Triggers**:
- Push to any branch
- Pull request events (opened, synchronize, reopened)

**What it does**:
1. Sets up Rust toolchain
2. Caches dependencies for faster builds
3. Runs code quality checks (check, test, clippy, fmt)
4. Builds all 7 binaries in release mode
5. Uploads binaries as artifacts

**Duration**: ~5-15 minutes (first run), ~2-5 minutes (cached runs)

**Artifacts**: Binaries are stored for 7 days and can be downloaded from the Actions tab

---

### release.yml - Release Automation

**Purpose**: Automatically create GitHub-style releases with downloadable binaries when version tags are pushed.

**Triggers**:
- Tags matching the `v*.*.*` pattern (e.g., `v1.0.0`, `v2.1.3`)

**What it does**:
1. Builds optimized release binaries
2. Strips debug symbols to reduce size
3. Packages each binary as a tarball
4. Generates SHA256 checksums
5. Creates a Gitea release with all artifacts attached

**Duration**: ~5-10 minutes

**Artifacts**: Permanently attached to the release

---

## Binaries Built

Both workflows build the following 7 binaries:

| Binary Name | Description |
|-------------|-------------|
| `supervisor` | Hero Supervisor service |
| `coordinator` | Hero Coordinator service |
| `horus` | Horus main binary |
| `osiris` | Osiris server |
| `herorunner` | Hero runner |
| `runner_osiris` | Osiris runner |
| `runner_sal` | SAL runner |

---

## Usage Guide

### Testing Code Changes

Every time you push code or create a pull request, the CI workflow runs automatically:

```bash
# Make your changes
git add .
git commit -m "Your changes"
git push origin your-branch

# Or create a pull request
# The CI workflow will run automatically
```

**Check Results**:
1. Go to your Gitea repository
2. Click on the **Actions** tab
3. Find your workflow run
4. Click to see detailed logs

---

### Creating a Release

To create a new release with binaries:

```bash
# 1. Ensure your code is ready for release
# 2. Create a version tag (use semantic versioning)
git tag v1.0.0

# 3. Push the tag
git push origin v1.0.0

# 4. The release workflow will automatically:
#    - Build all binaries
#    - Create a release
#    - Attach binaries and checksums
```

**View Release**:
1. Go to your Gitea repository
2. Click on the **Releases** tab
3. Your new release will be listed with downloadable artifacts

---

### Downloading Release Binaries

Users can download binaries from releases:

```bash
# Download a specific binary
wget https://git.ourworld.tf/peternashaat/horus/releases/download/v1.0.0/supervisor-v1.0.0-linux-x86_64.tar.gz

# Extract
tar -xzf supervisor-v1.0.0-linux-x86_64.tar.gz

# Make executable
chmod +x supervisor

# Optionally move to system path
sudo mv supervisor /usr/local/bin/

# Verify it works
supervisor --help
```

**Verify Integrity**:
```bash
# Download checksums
wget https://git.ourworld.tf/peternashaat/horus/releases/download/v1.0.0/checksums.txt

# Verify a binary
sha256sum -c checksums.txt
```
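On success, `sha256sum -c` prints one `OK` line per archive listed in `checksums.txt`; a corrupted or tampered download prints `FAILED` for that file and the command exits non-zero:

```
supervisor-v1.0.0-linux-x86_64.tar.gz: OK
```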
---

## Workflow Requirements

### Runner Configuration

Your Gitea Actions runner must be configured with these labels:
- `ubuntu-latest` (recommended)
- `ubuntu-22.04` (alternative)
- `ubuntu-20.04` (alternative)
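Where these labels are declared depends on how the runner was registered. As a minimal sketch for the standard `act_runner`, labels live in its YAML config; the config path and container images below are assumptions, not taken from this repo:

```yaml
# act_runner config.yaml excerpt (hypothetical values)
runner:
  labels:
    - "ubuntu-latest:docker://ubuntu:22.04"
    - "ubuntu-22.04:docker://ubuntu:22.04"
```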
### Permissions

The workflows require:
- Read access to repository code
- Write access to create releases (for release.yml)
- Access to the `GITHUB_TOKEN` secret (automatically provided by Gitea)

### Dependencies

The workflows automatically install:
- Rust stable toolchain
- rustfmt (code formatter)
- clippy (linter)

No manual setup required!

---

## Caching Strategy

The CI workflow uses three levels of caching to speed up builds:

1. **Cargo Registry Cache** - Downloaded crate metadata
2. **Cargo Index Cache** - Git index of crates.io
3. **Build Cache** - Compiled dependencies

**Benefits**:
- First build: ~10-15 minutes
- Cached builds: ~2-5 minutes
- Saves bandwidth and runner resources
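Each level is implemented with `actions/cache@v4` in the ci.yml shown later in this compare. The registry cache, for example, keys on the runner OS plus a hash of `Cargo.lock`, so the cache is refreshed exactly when dependencies change, while `restore-keys` falls back to the newest partial match:

```yaml
- name: Cache cargo registry
  uses: actions/cache@v4
  with:
    path: ~/.cargo/registry
    key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: |
      ${{ runner.os }}-cargo-registry-
```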
---

## Troubleshooting

### CI Workflow Fails

**Check these common issues**:

1. **Compilation Errors**
   - Review the "Check code" step logs
   - Fix Rust compilation errors locally first

2. **Test Failures**
   - Review the "Run tests" step logs
   - Run `cargo test --workspace` locally to reproduce

3. **Clippy Warnings**
   - Review the "Run clippy" step logs
   - Fix with: `cargo clippy --workspace --fix`

4. **Formatting Issues**
   - Review the "Check formatting" step logs
   - Fix with: `cargo fmt --all`

5. **Runner Offline**
   - Check if your Gitea Actions runner is running
   - Verify runner labels match workflow requirements

### Release Workflow Fails

**Check these common issues**:

1. **Tag Format**
   - Ensure the tag matches the `v*.*.*` pattern
   - Examples: `v1.0.0`, `v2.1.3`, `v0.1.0-beta`

2. **Binary Not Found**
   - Check if all binaries built successfully
   - Review the "Build release binaries" step logs

3. **Permission Denied**
   - Ensure the runner has write access to create releases
   - Check repository settings

4. **Release Already Exists**
   - Delete the existing release first
   - Or use a different version tag

---

## Best Practices

### Version Tagging

Use [Semantic Versioning](https://semver.org/):
- `v1.0.0` - Major release (breaking changes)
- `v1.1.0` - Minor release (new features)
- `v1.0.1` - Patch release (bug fixes)
- `v1.0.0-beta.1` - Pre-release

### Commit Messages

Write clear commit messages for better release notes:
```bash
git commit -m "feat: Add new authentication system"
git commit -m "fix: Resolve memory leak in supervisor"
git commit -m "docs: Update installation guide"
```

### Testing Before Release

Always test before creating a release:
```bash
# Run all checks locally
cargo check --workspace
cargo test --workspace
cargo clippy --workspace -- -D warnings
cargo fmt --all -- --check

# Build release binaries locally
cargo build --workspace --release

# Test the binaries
./target/release/supervisor --help
```

---

## Workflow Customization

### Changing Rust Version

Edit the toolchain in both workflows:
```yaml
- name: Setup Rust toolchain
  uses: actions-rust-lang/setup-rust-toolchain@v1
  with:
    toolchain: 1.75.0  # Specify exact version
```

### Adding More Binaries

If you add new binaries to the workspace:

1. Update `ci.yml` - Add to the upload artifacts step (see the sketch below)
2. Update `release.yml` - Add to strip and package steps
3. Update this README
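For step 1, the change is a single extra line appended to the `path` list of the upload step in `ci.yml` (here `mybinary` is a placeholder for your new binary's name; note that every line inside the `path: |` block is a path pattern, so no inline comments):

```yaml
- name: Upload build artifacts
  uses: actions/upload-artifact@v4
  with:
    name: binaries-${{ github.sha }}
    path: |
      target/release/supervisor
      target/release/coordinator
      target/release/horus
      target/release/osiris
      target/release/herorunner
      target/release/runner_osiris
      target/release/runner_sal
      target/release/mybinary
```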
### Changing Artifact Retention

In `ci.yml`, modify the retention period:
```yaml
retention-days: 30  # Keep for 30 days instead of 7
```

### Adding Build Matrix

To build for multiple platforms, add a matrix strategy:
```yaml
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
```

---

## Monitoring

### View Workflow Status

**In Gitea UI**:
1. Repository → Actions tab
2. See all workflow runs
3. Click any run for detailed logs

**Via Git**:
```bash
# List recent tags
git tag -l

# Show tag details
git show v1.0.0
```

### Workflow Badges

Add status badges to your README:
```markdown
![CI Status](https://git.ourworld.tf/peternashaat/horus/actions/workflows/ci.yml/badge.svg)
```

---

## Security Considerations

### Secrets

The workflows use `GITHUB_TOKEN`, which is automatically provided by Gitea. This token:
- Has repository-scoped permissions
- Expires after the workflow run
- Cannot be accessed by pull requests from forks (for security)

### Binary Verification

Always verify downloaded binaries:
```bash
# Check SHA256 hash
sha256sum binary-name
# Compare with checksums.txt
```

### Supply Chain Security

The workflows:
- Use pinned action versions (`@v4`, `@v1`)
- Build from source (no pre-built binaries)
- Generate checksums for verification

---

## Additional Resources

- [Gitea Actions Documentation](https://docs.gitea.com/usage/actions/overview)
- [GitHub Actions Syntax](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions) (Gitea is compatible)
- [Rust CI Best Practices](https://doc.rust-lang.org/cargo/guide/continuous-integration.html)
- [Semantic Versioning](https://semver.org/)

---

## Support

For issues with:
- **Workflows**: Check the troubleshooting section above
- **Horus Project**: See the main [README.md](../../README.md)
- **Gitea Actions**: Consult the [Gitea documentation](https://docs.gitea.com)

For a detailed line-by-line explanation of the workflows, see [WORKFLOW_EXPLAINED.md](./WORKFLOW_EXPLAINED.md).
@@ -1,111 +0,0 @@
name: CI

on:
  push:
    branches:
      - '**'
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  build-and-test:
    name: Build & Test
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Rust toolchain
        uses: actions-rust-lang/setup-rust-toolchain@v1
        with:
          toolchain: stable
          components: rustfmt, clippy

      - name: Cache cargo registry
        uses: actions/cache@v4
        with:
          path: ~/.cargo/registry
          key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-registry-

      - name: Cache cargo index
        uses: actions/cache@v4
        with:
          path: ~/.cargo/git
          key: ${{ runner.os }}-cargo-index-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-index-

      - name: Cache cargo build
        uses: actions/cache@v4
        with:
          path: target
          key: ${{ runner.os }}-cargo-build-target-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-build-target-

      # 👇 Don't fail CI on hero-runner's warnings
      - name: Check code (all crates under bin/)
        env:
          RUSTFLAGS: "--cap-lints=warn"
        run: |
          set -euo pipefail
          find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
            echo "=== cargo check --manifest-path $manifest ==="
            cargo check --manifest-path "$manifest" --verbose
          done

      # 👇 Same trick for tests, otherwise they’d fail for the same reason
      - name: Run tests (all crates under bin/)
        env:
          RUSTFLAGS: "--cap-lints=warn"
        run: |
          set -euo pipefail
          find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
            echo "=== cargo test --manifest-path $manifest ==="
            cargo test --manifest-path "$manifest" --verbose
          done

      # Clippy stays strict (still uses -D warnings for clippy lints).
      # If this later fails because of hero-runner, we can also add RUSTFLAGS here.
      - name: Run clippy (all crates under bin/)
        run: |
          set -euo pipefail
          find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
            echo "=== cargo clippy --manifest-path $manifest ==="
            cargo clippy --manifest-path "$manifest" -- -D warnings
          done

      - name: Check formatting
        run: cargo fmt --all -- --check

      # Build was already succeeding; leaving it without cap-lints is fine.
      - name: Build release binaries (all crates under bin/)
        run: |
          set -euo pipefail
          find bin -name Cargo.toml -print0 | while IFS= read -r -d '' manifest; do
            echo "=== cargo build --manifest-path $manifest --release ==="
            cargo build --manifest-path "$manifest" --release --verbose
          done

      - name: List built binaries
        run: |
          echo "Built binaries:"
          ls -lh target/release/ | grep -E '^-.*x.*'

      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: binaries-${{ github.sha }}
          path: |
            target/release/supervisor
            target/release/coordinator
            target/release/horus
            target/release/osiris
            target/release/herorunner
            target/release/runner_osiris
            target/release/runner_sal
          retention-days: 7
          if-no-files-found: warn
@@ -1,115 +0,0 @@
name: Release

on:
  push:
    tags:
      - 'v*.*.*'

jobs:
  build-release:
    name: Build Release Binaries via hero-git installer
    runs-on: ubuntu-latest

    env:
      IMAGE_NAME: hero-git:latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Extract version from tag
        id: version
        run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> "$GITHUB_OUTPUT"

      - name: Ensure hero-git image exists
        run: |
          if ! docker image inspect "$IMAGE_NAME" >/dev/null 2>&1; then
            echo "ERROR: Docker image '$IMAGE_NAME' not found on runner."
            echo "Build it manually on the runner with:"
            echo "  docker build -t $IMAGE_NAME -f Dockerfile ."
            exit 1
          fi

      - name: Prepare bin output dir
        run: mkdir -p hero-bin

      - name: Run horus_full_install installer in container
        run: |
          set -euxo pipefail

          docker run --rm \
            -v "$PWD/hero-bin:/root/hero/bin" \
            -e HEROLIB_REF=development_nile_installers \
            "$IMAGE_NAME" \
            bash -lc '
              set -euxo pipefail
              cd /opt/herolib
              ./examples/installers/horus/horus_full_install.vsh
              echo "===== AFTER INSTALL, ls -R /root/hero ====="
              ls -R /root/hero || true
            '

      - name: List built binaries
        run: ls -al hero-bin

      - name: Create release directory
        run: mkdir -p release-artifacts

      - name: Package binaries
        run: |
          for binary in supervisor coordinator horus osiris herorunner runner_osiris runner_sal; do
            if [ -f "hero-bin/$binary" ]; then
              tar -czf "release-artifacts/${binary}-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz" \
                -C hero-bin "$binary"
              echo "Packaged $binary"
            else
              echo "Warning: $binary not found, skipping"
            fi
          done

      - name: Generate checksums
        run: |
          cd release-artifacts
          if ls *.tar.gz >/dev/null 2>&1; then
            sha256sum *.tar.gz > checksums.txt
            cat checksums.txt
          else
            echo "ERROR: no .tar.gz artifacts were produced; check previous steps (likely the installer didn’t build any binaries)."
            exit 1
          fi

      - name: Create Release
        uses: akkuman/gitea-release-action@v1
        with:
          files: release-artifacts/*
          token: ${{ secrets.TOKEN_GITEA }}
          tag_name: ${{ steps.version.outputs.VERSION }}
          name: Release ${{ steps.version.outputs.VERSION }}
          body: |
            ## Horus Release ${{ steps.version.outputs.VERSION }}

            ### Binaries
            Built via the herolib V installer (`horus_full_install.vsh`) inside
            the `hero-git:latest` image and packaged for Linux x86_64:
            - `supervisor`
            - `coordinator`
            - `horus`
            - `osiris`
            - `herorunner`
            - `runner_osiris`
            - `runner_sal`

            ### Installation (example)
            ```bash
            wget https://git.ourworld.tf/herocode/horus/releases/download/${{ steps.version.outputs.VERSION }}/supervisor-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz
            tar -xzf supervisor-${{ steps.version.outputs.VERSION }}-linux-x86_64.tar.gz
            chmod +x supervisor
            sudo mv supervisor /usr/local/bin/
            ```

            ### Verification
            ```bash
            sha256sum -c checksums.txt
            ```
          draft: false
          prerelease: false
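Aside on the `Extract version from tag` step above: it uses plain shell parameter expansion, no external tooling. A minimal sketch of the transformation (the ref value is illustrative):

```bash
# Actions sets GITHUB_REF to the full ref that triggered the run, e.g.:
GITHUB_REF="refs/tags/v1.0.0"

# ${GITHUB_REF#refs/tags/} strips the shortest leading match of "refs/tags/",
# leaving only the tag name, which the step writes to $GITHUB_OUTPUT.
echo "${GITHUB_REF#refs/tags/}"   # prints: v1.0.0
```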
@@ -13,6 +13,7 @@ members = [
     "lib/clients/osiris",
     "lib/clients/supervisor",
     "lib/models/job",
+    "lib/models/context",
     "lib/osiris/core",
     "lib/osiris/derive",
     "lib/runner",
@@ -200,41 +200,8 @@ impl FlowCreate {
     }
 }

-#[derive(Debug, Deserialize)]
-pub struct JobCreate {
-    pub id: u32,
-    pub caller_id: u32,
-    pub context_id: u32,
-    pub script: String,
-    pub runner: Option<String>,
-    pub timeout: u32,
-    pub retries: u8,
-    pub env_vars: HashMap<String, String>,
-    pub prerequisites: Vec<String>,
-    pub depends: Vec<u32>,
-}
-
-impl JobCreate {
-    pub fn into_domain(self) -> Job {
-        use chrono::Utc;
-
-        // Convert old format to hero_job::Job
-        // Note: depends and prerequisites are workflow fields that need separate storage
-        Job {
-            id: self.id.to_string(),
-            caller_id: self.caller_id.to_string(),
-            context_id: self.context_id.to_string(),
-            payload: self.script,
-            runner: self.runner.unwrap_or_else(|| "default-runner".to_string()),
-            timeout: self.timeout as u64,
-            env_vars: self.env_vars,
-            created_at: Utc::now(),
-            updated_at: Utc::now(),
-            signatures: Vec::new(),
-        }
-        // TODO: Store depends and prerequisites separately in JobSummary/DAG
-    }
-}
+// JobCreate removed - coordinator only manages flows, not individual jobs
+// Jobs should be created by the supervisor or other services

 #[derive(Debug, Deserialize)]
 pub struct MessageCreate {
@@ -247,40 +214,31 @@ pub struct MessageCreate {
     pub timeout: u32,
     pub timeout_ack: u32,
     pub timeout_result: u32,
-    pub job: Vec<JobCreate>,
+    // Jobs removed - use flow nodes instead
 }

 impl MessageCreate {
     pub fn into_domain(self) -> Message {
+        use crate::time::current_timestamp;
+
         let ts = current_timestamp();

-        let MessageCreate {
-            id,
-            caller_id,
-            context_id,
-            message,
-            message_type,
-            message_format_type,
-            timeout,
-            timeout_ack,
-            timeout_result,
-            job,
-        } = self;
-
+        // Convert to Message
+        // Note: flow_id is set to 0 for now, should be set by the caller
         Message {
-            id,
-            caller_id,
-            context_id,
-            flow_id: 0, // TODO: MessageCreate should include flow_id
-            message,
-            message_type,
-            message_format_type,
-            timeout,
-            timeout_ack,
-            timeout_result,
+            id: self.id,
+            caller_id: self.caller_id,
+            context_id: self.context_id,
+            flow_id: 0, // TODO: Get from params or context
+            message: self.message,
+            message_type: self.message_type,
+            message_format_type: self.message_format_type,
+            timeout: self.timeout,
+            timeout_ack: self.timeout_ack,
+            timeout_result: self.timeout_result,
             transport_id: None,
             transport_status: None,
             nodes: Vec::new(), // TODO: MessageCreate should include nodes
-            job: job.into_iter().map(JobCreate::into_domain).collect(),
+            job: Vec::new(), // Jobs removed - coordinator only manages flows
             logs: Vec::new(),
             created_at: ts,
             updated_at: ts,
@@ -330,17 +288,7 @@ pub struct FlowLoadParams {
     pub id: u32,
 }

-#[derive(Debug, Deserialize)]
-pub struct JobCreateParams {
-    pub context_id: u32,
-    pub job: JobCreate,
-}
-#[derive(Debug, Deserialize)]
-pub struct JobLoadParams {
-    pub context_id: u32,
-    pub caller_id: u32,
-    pub id: u32,
-}
+// JobCreateParams and JobLoadParams removed - coordinator only manages flows

 #[derive(Debug, Deserialize)]
 pub struct MessageCreateParams {
@@ -506,42 +454,8 @@ pub fn build_module(state: Arc<AppState>) -> RpcModule<()> {
             .expect("register flow.start");
     }

-    // Job
-    {
-        let state = state.clone();
-        module
-            .register_async_method("job.create", move |params, _caller, _ctx| {
-                let state = state.clone();
-                async move {
-                    let p: JobCreateParams = params.parse().map_err(invalid_params_err)?;
-                    let job = p.job.into_domain();
-                    let job = state
-                        .service
-                        .create_job(p.context_id, job)
-                        .await
-                        .map_err(storage_err)?;
-                    Ok::<_, ErrorObjectOwned>(job)
-                }
-            })
-            .expect("register job.create");
-    }
-    {
-        let state = state.clone();
-        module
-            .register_async_method("job.load", move |params, _caller, _ctx| {
-                let state = state.clone();
-                async move {
-                    let p: JobLoadParams = params.parse().map_err(invalid_params_err)?;
-                    let job = state
-                        .service
-                        .load_job(p.context_id, p.caller_id, p.id)
-                        .await
-                        .map_err(storage_err)?;
-                    Ok::<_, ErrorObjectOwned>(job)
-                }
-            })
-            .expect("register job.load");
-    }
+    // Job endpoints removed - coordinator only manages flows
+    // Jobs should be created and managed by the supervisor

     // Message
     {
bin/runners/osiris/examples/engine.rs (new file, 197 lines)
@@ -0,0 +1,197 @@
//! Osiris Engine Example
//!
//! This example demonstrates how to:
//! 1. Create an Osiris Rhai engine with all registered functions
//! 2. Execute Rhai scripts using the actual Osiris API
//! 3. Test context creation, save, get, list, delete operations
//!
//! Run with: cargo run --example engine -p runner-osiris

use rhai::{Dynamic, Map};

// Import the actual engine creation function
mod engine_impl {
    include!("../src/engine.rs");
}

use engine_impl::create_osiris_engine;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("🚀 Osiris Engine Example\n");
    println!("==========================================\n");

    // Create the engine with all Osiris functions registered
    let mut engine = create_osiris_engine()?;

    // Set up context tags (simulating what the runner does)
    let mut tag_map = Map::new();
    let signatories: rhai::Array = vec![
        Dynamic::from("pk1".to_string()),
        Dynamic::from("pk2".to_string()),
    ];
    tag_map.insert("SIGNATORIES".into(), Dynamic::from(signatories));
    tag_map.insert("CALLER_ID".into(), "test-caller".to_string().into());
    tag_map.insert("CONTEXT_ID".into(), "test-context".to_string().into());
    engine.set_default_tag(Dynamic::from(tag_map));

    // Test 1: Simple Rhai script
    println!("📝 Test 1: Simple Rhai Script");
    let script = r#"
        let x = 10;
        let y = 20;
        x + y
    "#;

    match engine.eval::<i64>(script) {
        Ok(result) => println!("  ✓ Result: {}\n", result),
        Err(e) => println!("  ✗ Error: {}\n", e),
    }

    // Test 2: Get context (Osiris function)
    println!("📝 Test 2: Get Context");
    let context_script = r#"
        // Get context with participants (must be signatories)
        let ctx = get_context(["pk1", "pk2"]);
        ctx.context_id()
    "#;

    match engine.eval::<String>(context_script) {
        Ok(result) => println!("  ✓ Context ID: {}\n", result),
        Err(e) => println!("  ✗ Error: {}\n", e),
    }

    // Test 3: Create a Note and save it
    println!("📝 Test 3: Create and Save a Note");
    let note_script = r#"
        let ctx = get_context(["pk1"]);
        // Use the builder-style API
        let my_note = note("test-note-123")
            .title("Test Note")
            .content("This is a test note");
        ctx.save(my_note);
        "Note saved successfully"
    "#;

    match engine.eval::<String>(note_script) {
        Ok(result) => println!("  ✓ {}\n", result),
        Err(e) => println!("  ✗ Error: {}\n", e),
    }

    // Test 4: Get from collection
    println!("📝 Test 4: Get from Collection");
    let get_script = r#"
        let ctx = get_context(["pk1"]);
        // Try to get a note (will fail if doesn't exist, but shows the API works)
        ctx.get("notes", "test-note-123")
    "#;

    match engine.eval::<Dynamic>(get_script) {
        Ok(result) => println!("  ✓ Result: {:?}\n", result),
        Err(e) => println!("  ⚠ Error (expected if note doesn't exist): {}\n", e),
    }

    // Test 5: List from collection
    println!("📝 Test 5: List from Collection");
    let list_script = r#"
        let ctx = get_context(["pk1"]);
        // List all notes in the context
        ctx.list("notes")
    "#;

    match engine.eval::<Dynamic>(list_script) {
        Ok(result) => println!("  ✓ Result: {:?}\n", result),
        Err(e) => println!("  ⚠ Error: {}\n", e),
    }

    // Test 6: Delete from collection
    println!("📝 Test 6: Delete from Collection");
    let delete_script = r#"
        let ctx = get_context(["pk1"]);
        // Try to delete a note
        ctx.delete("notes", "test-note-123")
    "#;

    match engine.eval::<Dynamic>(delete_script) {
        Ok(result) => println!("  ✓ Result: {:?}\n", result),
        Err(e) => println!("  ⚠ Error (expected if note doesn't exist): {}\n", e),
    }

    // Test 7: Create an Event
    println!("📝 Test 7: Create and Save an Event");
    let event_script = r#"
        let ctx = get_context(["pk1"]);
        // event() takes (namespace, title) in the module version
        let my_event = event("test-event-123", "Test Event")
            .description("This is a test event");
        ctx.save(my_event);
        "Event saved successfully"
    "#;

    match engine.eval::<String>(event_script) {
        Ok(result) => println!("  ✓ {}\n", result),
        Err(e) => println!("  ✗ Error: {}\n", e),
    }

    // Test 8: Create a User (HeroLedger)
    println!("📝 Test 8: Create and Save a User");
    let user_script = r#"
        let ctx = get_context(["pk1"]);
        let my_user = new_user()
            .username("testuser")
            .add_email("test@example.com")
            .pubkey("pk1");
        ctx.save(my_user);
        "User saved successfully"
    "#;

    match engine.eval::<String>(user_script) {
        Ok(result) => println!("  ✓ {}\n", result),
        Err(e) => println!("  ✗ Error: {}\n", e),
    }

    // Test 9: Create a Group (HeroLedger)
    println!("📝 Test 9: Create and Save a Group");
    let group_script = r#"
        let ctx = get_context(["pk1"]);
        let my_group = new_group()
            .name("Test Group")
            .description("A test group");
        ctx.save(my_group);
        "Group saved successfully"
    "#;

    match engine.eval::<String>(group_script) {
        Ok(result) => println!("  ✓ {}\n", result),
        Err(e) => println!("  ✗ Error: {}\n", e),
    }

    // Test 10: List users
    println!("📝 Test 10: List Users from Collection");
    let list_users_script = r#"
        let ctx = get_context(["pk1"]);
        ctx.list("users")
    "#;

    match engine.eval::<Dynamic>(list_users_script) {
        Ok(result) => println!("  ✓ Users: {:?}\n", result),
        Err(e) => println!("  ⚠ Error: {}\n", e),
    }

    println!("==========================================");
    println!("🎉 All tests completed!\n");
    println!("📚 Available Object Types:");
    println!("  - Note: note(id).title(...).content(...)");
    println!("  - Event: event(id, title).description(...)");
    println!("  - User: new_user().username(...).add_email(...).pubkey(...)");
    println!("  - Group: new_group().name(...).description(...)");
    println!("  - Account: new_account()...");
    println!("  - And many more: KycSession, FlowTemplate, FlowInstance, Contract, etc.");
    println!("\n📖 Available Operations:");
    println!("  - ctx.save(object) - Save an object");
    println!("  - ctx.get(collection, id) - Get an object by ID");
    println!("  - ctx.list(collection) - List all objects in collection");
    println!("  - ctx.delete(collection, id) - Delete an object");

    Ok(())
}
@@ -1,43 +0,0 @@
FROM ubuntu:24.04 AS base

# Includes a bunch of stuff hero will try to install later, so we don't have to
# do that on each new container launch
RUN apt update && apt install -y ssh wget unzip build-essential git redis-server sudo autoconf libtool iputils-ping net-tools rsync curl mc tmux libsqlite3-dev xz-utils git-lfs ufw libpq-dev

RUN wget https://github.com/vlang/v/releases/latest/download/v_linux.zip && \
    unzip v_linux.zip && \
    cd v && \
    ./v symlink

# Install bun.sh for docusaurus
RUN curl -fsSL https://bun.sh/install | bash

# Make a new stage so we can invalidate the cache by passing --no-cache-filter
FROM base AS clone

RUN git clone https://github.com/Incubaid/herolib /opt/herolib

RUN git clone https://github.com/incubaid/docusaurus_template /root/code/github/incubaid/docusaurus_template

# Warm the bun node modules cache
RUN cd /root/code/github/incubaid/docusaurus_template/template && /root/.bun/bin/bun install

RUN mkdir -p /root/.vmodules/incubaid

# Make a place for users to mount their ssh key file. We will copy to .ssh and
# change permissions in entrypoint script
RUN mkdir -p /root/ssh

COPY entrypoint.sh /bin/entrypoint.sh

RUN chmod +x /bin/entrypoint.sh

# Make a new stage so we can invalidate the cache by passing --no-cache-filter
FROM clone AS fetch

# Fetch to freshen the repos with minimal work
RUN cd /opt/herolib && git fetch
RUN cd /root/code/github/incubaid/docusaurus_template && git fetch

ENTRYPOINT ["/bin/entrypoint.sh"]
docker/Readme.md
@@ -1,268 +0,0 @@
# `hero-git` Docker Image

This directory contains a reusable Docker image and entrypoint script used to build Hero / Horus binaries via the V-based **herolib** installers.

It is designed for two main use cases:

1. **Local development / testing** – run the herolib CLI (`hero`) and Horus installers in a clean, reproducible environment.
2. **CI / Release builds** – build Horus binaries inside this image (using `horus_full_install.vsh`) and export them as release artifacts.

---

## Files

### `Dockerfile`

The `Dockerfile` builds an Ubuntu-based image with all dependencies needed for `herolib` and its installers.

Key points:

* Base: `ubuntu:24.04`
* Installs system tooling:

  * `ssh`, `wget`, `curl`, `unzip`, `build-essential`, `git`, `git-lfs`
  * `redis-server`, `libsqlite3-dev`, `libpq-dev`, `autoconf`, `libtool`, `net-tools`, `iputils-ping`, `rsync`, `mc`, `tmux`, `ufw`, `xz-utils`
* Installs the **V** compiler:

  * Downloads `v_linux.zip` from the official V GitHub releases
  * Unzips it and runs `./v symlink` so `v` is on the `PATH`
* Installs **Bun** (used for the docusaurus docs template):

  * `curl -fsSL https://bun.sh/install | bash`
* Clones:

  * `https://github.com/Incubaid/herolib` into `/opt/herolib`
  * `https://github.com/incubaid/docusaurus_template` into `/root/code/github/incubaid/docusaurus_template`
* Warms Bun dependencies for the docusaurus template (for faster re-use):

  * `cd /root/code/github/incubaid/docusaurus_template/template && bun install`
* Prepares directories:

  * `/root/.vmodules/incubaid` – used by V module resolution
  * `/root/ssh` – optional mount point for SSH keys
* Copies `entrypoint.sh` into the image and marks it executable.

The container’s entrypoint is set to the custom script `/bin/entrypoint.sh`.

---

### `entrypoint.sh`

The entrypoint script is responsible for:

1. Starting **Redis** in the background.
2. Optionally configuring **SSH** (if keys are provided).
3. Selecting the correct `herolib` source (mounted or cloned).
4. Checking out the desired `herolib` branch.
5. Building the `hero` CLI with V and making it available as `/bin/hero`.
6. Finally, executing any command passed to `docker run`.

#### Behavior in detail

```bash
#!/bin/bash
set -euo pipefail

# Start Redis in the background
redis-server --daemonize yes

# Optional SSH setup: only if /root/ssh has keys
if [ -d /root/ssh ] && compgen -G "/root/ssh/*" > /dev/null; then
    mkdir -p /root/.ssh
    cp -r /root/ssh/* /root/.ssh/
    chmod 600 /root/.ssh/*
    eval "$(ssh-agent)"
    ssh-add /root/.ssh/*
fi

# Support optionally bind-mounting a local herolib into the container.
# If /opt/herolib_mount exists, we use that; otherwise we use the cloned /opt/herolib
rm -f /root/.vmodules/freeflowuniverse/herolib
if [ -d "/opt/herolib_mount" ]; then
    ln -s /opt/herolib_mount/lib /root/.vmodules/incubaid/herolib
    cd /opt/herolib_mount
else
    ln -s /opt/herolib/lib /root/.vmodules/incubaid/herolib
    cd /opt/herolib
    git fetch
    git checkout "${HEROLIB_REF:-development}"
    git pull
fi

cd cli
echo "Building hero..."
v -enable-globals hero.v > build.log 2>&1 || (cat build.log && exit 1)
ln -s "$(realpath hero)" /bin/hero

cd /root

# If a command was provided to `docker run`, execute it as-is.
# Example: docker run hero-git:latest bash -lc 'cd /opt/herolib && ./examples/installers/horus/horus_full_install.vsh'
if [ "$#" -gt 0 ]; then
    exec "$@"
else
    # No command passed → give an interactive shell
    exec bash
fi
```

#### Important environment variables

* `HEROLIB_REF`

  * Git ref (branch, tag, or commit) used when checking out `/opt/herolib`.
  * Default: `development`.
  * Example for CI: `HEROLIB_REF=development_nile_installers`.

#### Optional mounts

* `/opt/herolib_mount`

  * If you bind-mount your local `herolib` repo here, the container will use it instead of the cloned `/opt/herolib`.
  * Useful for local development when you want to test uncommitted changes.
* `/root/ssh`

  * Optional directory containing SSH keys (e.g. `id_ed25519`) if you need to access private repositories.
  * If present and non-empty, keys are copied to `/root/.ssh` and added to an SSH agent.
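For example, a single key can be made available like this (the key filename is illustrative); the entrypoint then copies it into `/root/.ssh`, fixes permissions, and loads it into an ssh-agent:

```bash
docker run -it --rm \
  -v "$HOME/.ssh/id_ed25519:/root/ssh/id_ed25519:ro" \
  hero-git:latest
```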
---

## What this image is used for

### 1. Local development / experimentation

You can drop into the container and manually run installers or commands against `herolib`:

```bash
# Interactive shell with default branch
docker run -it --rm hero-git:latest

# Use a specific herolib ref
docker run -it --rm -e HEROLIB_REF=development_nile_installers hero-git:latest

# Mount a local herolib repo and use that instead of the cloned one
docker run -it --rm \
  -v "$PWD/herolib:/opt/herolib_mount" \
  hero-git:latest
```

Inside the container you can then run:

```bash
cd /opt/herolib
./examples/installers/horus/horus_full_install.vsh
```

### 2. CI / Release builds for Horus

In CI (e.g. the `release.yml` workflow), this image is used to build Horus binaries via the V-based installer script:

* The workflow:

  1. Ensures `hero-git:latest` is available on the self-hosted runner.
  2. Creates a local directory (e.g. `hero-bin/`) on the runner.
  3. Runs the container, mounting `hero-bin/` into `/root/hero/bin`.
  4. Inside the container, executes `./examples/installers/horus/horus_full_install.vsh` from `/opt/herolib`.
  5. The installer writes the compiled Horus binaries into `/root/hero/bin`, which appear on the host inside `hero-bin/`.
  6. The workflow then packages those binaries into `.tar.gz` archives and uploads them as release artifacts.

Example CI step:

```yaml
- name: Run horus_full_install installer in container
  run: |
    set -euxo pipefail

    docker run --rm \
      -v "$PWD/hero-bin:/root/hero/bin" \
      -e HEROLIB_REF=development_nile_installers \
      hero-git:latest \
      bash -lc '
        set -euxo pipefail
        cd /opt/herolib
        ./examples/installers/horus/horus_full_install.vsh
        echo "===== AFTER INSTALL, ls -R /root/hero ====="
        ls -R /root/hero || true
      '
```

After this step, `hero-bin/` on the host should contain binaries like:

* `supervisor`
* `coordinator`
* `horus`
* `osiris`
* `herorunner`
* `runner_osiris`
* `runner_sal`

These are then packaged and attached to the Gitea release.

---

## How to build the image

From the directory containing the `Dockerfile` and `entrypoint.sh`:

```bash
# Build the image (force a clean rebuild)
docker build --no-cache -t hero-git:latest .

# Verify entrypoint contents
docker run --rm --entrypoint cat hero-git:latest /bin/entrypoint.sh | tail -n 20
```

You should see the `exec "$@"` block at the end of the script, which ensures that commands passed to `docker run` are executed as-is inside the container.

---

## How to run the image (examples)

### Basic interactive usage

```bash
# Default branch (HEROLIB_REF=development)
docker run -it --rm hero-git:latest

# Specific herolib ref
export HEROLIB_REF=development_nile_installers

docker run -it --rm \
  -e HEROLIB_REF=$HEROLIB_REF \
  hero-git:latest
```

### Build Horus binaries into a host directory

```bash
mkdir -p hero-bin

docker run --rm \
  -v "$PWD/hero-bin:/root/hero/bin" \
  -e HEROLIB_REF=development_nile_installers \
  hero-git:latest \
  bash -lc '
    set -euxo pipefail
    cd /opt/herolib
    ./examples/installers/horus/horus_full_install.vsh
    ls -al /root/hero/bin
  '

ls -al hero-bin
```

You should now see the built Horus binaries on the host in `hero-bin/`.

---

## Summary

* The `hero-git:latest` image encapsulates all dependencies required to build Horus via `herolib`.
* `entrypoint.sh`:

  * Starts Redis
  * Optionally configures SSH
  * Selects and updates the `herolib` checkout
  * Builds the `hero` CLI once
  * Runs any command passed to the container (e.g. Horus installers)
* CI uses this image to run `horus_full_install.vsh` and collect Horus binaries from `/root/hero/bin` for releases.
@@ -1,44 +0,0 @@
#!/bin/bash
set -euo pipefail

# Start Redis in the background
redis-server --daemonize yes

# Optional SSH setup: only if /root/ssh has keys
if [ -d /root/ssh ] && compgen -G "/root/ssh/*" > /dev/null; then
    mkdir -p /root/.ssh
    cp -r /root/ssh/* /root/.ssh/
    chmod 600 /root/.ssh/*
    eval "$(ssh-agent)"
    ssh-add /root/.ssh/*
fi

# Support optionally bind-mounting a local herolib into the container.
# If /opt/herolib_mount exists, we use that; otherwise we use the cloned /opt/herolib
rm -f /root/.vmodules/freeflowuniverse/herolib
if [ -d "/opt/herolib_mount" ]; then
    ln -s /opt/herolib_mount/lib /root/.vmodules/incubaid/herolib
    cd /opt/herolib_mount
else
    ln -s /opt/herolib/lib /root/.vmodules/incubaid/herolib
    cd /opt/herolib
    git fetch
    git checkout "${HEROLIB_REF:-development}"
    git pull
fi

# Build hero CLI once so it's available as /bin/hero
cd cli
echo "Building hero..."
v -enable-globals hero.v > build.log 2>&1 || (cat build.log && exit 1)
ln -s "$(realpath hero)" /bin/hero

cd /root

# If a command was provided to `docker run`, execute it as-is.
if [ "$#" -gt 0 ]; then
    exec "$@"
else
    # No command passed → give an interactive shell
    exec bash
fi
@@ -140,39 +140,8 @@ impl CoordinatorClient {
         }
     }

-    // ==================== Job Methods ====================
-
-    /// Create a new job in a context
-    pub async fn job_create(&self, context_id: u32, job: JobCreate) -> Result<Job> {
-        let params = serde_json::json!({
-            "context_id": context_id,
-            "job": job
-        });
-        self.call("job.create", params).await
-    }
-
-    /// Load an existing job from a context
-    pub async fn job_load(&self, context_id: u32, caller_id: u32, id: u32) -> Result<Job> {
-        let params = serde_json::json!({
-            "context_id": context_id,
-            "caller_id": caller_id,
-            "id": id
-        });
-        self.call("job.load", params).await
-    }
-
-    /// Try to create a job, or load it if it already exists
-    pub async fn job_create_or_load(&self, context_id: u32, job: JobCreate) -> Result<Job> {
-        let caller_id = job.caller_id;
-        let job_id = job.id;
-        match self.job_create(context_id, job).await {
-            Ok(j) => Ok(j),
-            Err(CoordinatorError::AlreadyExists | CoordinatorError::Storage(_)) => {
-                self.job_load(context_id, caller_id, job_id).await
-            }
-            Err(e) => Err(e),
-        }
-    }
+    // Job methods removed - coordinator only manages flows
+    // Jobs should be created and managed by the supervisor

     // ==================== Flow Methods ====================
@@ -103,24 +103,8 @@ pub enum ScriptType {

 // ==================== Job ====================

-/// Parameters for creating a job
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct JobCreate {
-    pub id: u32,
-    pub caller_id: u32,
-    pub context_id: u32,
-    pub script: String,
-    pub script_type: ScriptType,
-    pub timeout: u64,
-    #[serde(default)]
-    pub retries: u8,
-    #[serde(default)]
-    pub env_vars: HashMap<String, String>,
-    #[serde(default)]
-    pub prerequisites: Vec<u32>,
-    #[serde(default)]
-    pub depends: Vec<u32>,
-}
+// JobCreate removed - coordinator only manages flows, not individual jobs
+// Use hero_job::Job from lib/models/job for job operations

 // ==================== Flow ====================
@@ -5,13 +5,19 @@ edition.workspace = true
 description = "Osiris client library"
 license = "MIT OR Apache-2.0"

+[features]
+default = []
+zdfz = ["dep:zdfz-models"]
+
 [dependencies]
+zdfz-models = { path = "../../../../../zdfz/sdk/models", optional = true }
 # Core dependencies
 serde.workspace = true
 serde_json.workspace = true
 anyhow.workspace = true
 thiserror.workspace = true
 chrono.workspace = true
+paste = "1.0"

 # HTTP client
 reqwest = { version = "0.12", default-features = false, features = ["json"] }
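A sketch of how a dependent crate might opt in to the new `zdfz` feature; the package name and relative path are assumptions inferred from the doc examples in this change, not confirmed by this hunk:

```toml
# In a dependent crate's Cargo.toml (hypothetical name and path)
[dependencies]
osiris-client = { path = "../clients/osiris", features = ["zdfz"] }
```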
@@ -13,10 +13,15 @@ use thiserror::Error;
 pub mod kyc;
 pub mod payment;
 pub mod communication;
+pub mod macros;
+
+#[cfg(feature = "zdfz")]
+pub mod zdfz_extensions;

 pub use kyc::*;
 pub use payment::*;
 pub use communication::*;
+pub use macros::*;

 #[derive(Debug, Error)]
 pub enum OsirisClientError {
@@ -34,6 +39,9 @@ pub enum OsirisClientError {

     #[error("Command execution failed: {0}")]
     CommandFailed(String),
+
+    #[error("Serialization failed: {0}")]
+    SerializationFailed(String),
 }

 /// Osiris client with CQRS support
204
lib/clients/osiris/src/macros.rs
Normal file
204
lib/clients/osiris/src/macros.rs
Normal file
@@ -0,0 +1,204 @@
//! Macros for generating CRUD methods on OsirisClient
//!
//! These macros allow you to quickly generate standard CRUD operations
//! and custom methods for your models.

/// Generate CRUD methods for a model on OsirisClient
///
/// This macro generates 5 standard methods:
/// - {collection}_create
/// - {collection}_get
/// - {collection}_update
/// - {collection}_delete
/// - {collection}_list
///
/// # Example
///
/// ```rust
/// use osiris_client::{OsirisClient, impl_osiris_crud};
///
/// #[derive(serde::Serialize, serde::Deserialize)]
/// struct User {
///     id: String,
///     name: String,
/// }
///
/// impl_osiris_crud!(User, "users", "id");
///
/// // Now you can use:
/// // client.users_create(&user).await?;
/// // client.users_get("123").await?;
/// // client.users_update("123", &user).await?;
/// // client.users_delete("123").await?;
/// // client.users_list().await?;
/// ```
#[macro_export]
macro_rules! impl_osiris_crud {
    ($model:ty, $collection:expr, $id_field:expr) => {
        paste::paste! {
            impl $crate::OsirisClient {
                /// Create a new instance
                #[doc = "Create a new " $collection " instance"]
                pub async fn [<$collection:snake _create>](&self, model: &$model) -> Result<$model, $crate::OsirisClientError> {
                    let json = serde_json::to_string(model)
                        .map_err(|e| $crate::OsirisClientError::SerializationFailed(e.to_string()))?;

                    // Create Rhai script that uses Osiris context API
                    // Note: The actual object creation depends on the model type
                    // For now, we serialize the data and would need model-specific constructors
                    let script = format!(
                        r#"
                        let ctx = get_context(["system"]);
                        let data = {};
                        // TODO: Model-specific object creation
                        // For now, this is a placeholder
                        data
                        "#,
                        json
                    );

                    let response = self.execute_script(&script).await?;
                    // TODO: Parse response from job result
                    Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
                }

                /// Get an instance by ID
                #[doc = "Get a " $collection " instance by ID"]
                pub async fn [<$collection:snake _get>](&self, id: &str) -> Result<$model, $crate::OsirisClientError> {
                    let script = format!(
                        r#"
                        let ctx = get_context(["system"]);
                        ctx.get("{}", "{}")
                        "#,
                        $collection, id
                    );

                    let response = self.execute_script(&script).await?;
                    // TODO: Parse response from job result
                    Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
                }

                /// Update an existing instance
                #[doc = "Update an existing " $collection " instance"]
                pub async fn [<$collection:snake _update>](&self, id: &str, model: &$model) -> Result<$model, $crate::OsirisClientError> {
                    let json = serde_json::to_string(model)
                        .map_err(|e| $crate::OsirisClientError::SerializationFailed(e.to_string()))?;

                    let script = format!(
                        r#"
                        let ctx = get_context(["system"]);
                        let obj = ctx.get("{}", "{}");
                        let data = {};
                        // TODO: Update object fields from data
                        ctx.save(obj);
                        obj
                        "#,
                        $collection, id, json
                    );

                    let response = self.execute_script(&script).await?;
                    // TODO: Parse response from job result
                    Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
                }

                /// Delete an instance
                #[doc = "Delete a " $collection " instance"]
                pub async fn [<$collection:snake _delete>](&self, id: &str) -> Result<(), $crate::OsirisClientError> {
                    let script = format!(
                        r#"
                        let ctx = get_context(["system"]);
                        ctx.delete("{}", "{}")
                        "#,
                        $collection, id
                    );

                    self.execute_script(&script).await?;
                    Ok(())
                }

                /// List all instances
                #[doc = "List all " $collection " instances"]
                pub async fn [<$collection:snake _list>](&self) -> Result<Vec<$model>, $crate::OsirisClientError> {
                    let script = format!(
                        r#"
                        let ctx = get_context(["system"]);
                        ctx.list("{}")
                        "#,
                        $collection
                    );

                    let response = self.execute_script(&script).await?;
                    // TODO: Parse response from job result
                    Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
                }
            }
        }
    };
}

/// Generate a custom method on a model
///
/// This macro generates a method that calls a custom Rhai function on the model.
///
/// # Example
///
/// ```rust
/// use osiris_client::{OsirisClient, impl_osiris_method};
///
/// #[derive(serde::Serialize, serde::Deserialize)]
/// struct CalendarEvent {
///     id: String,
///     start_time: i64,
/// }
///
/// impl_osiris_method!(CalendarEvent, "calendar_events", reschedule, new_start: i64, new_end: i64);
///
/// // Now you can use:
/// // client.calendar_events_reschedule("123", 1234567890, 1234567900).await?;
/// ```
#[macro_export]
macro_rules! impl_osiris_method {
    ($model:ty, $collection:expr, $method_name:ident $(, $param:ident: $param_type:ty)*) => {
        paste::paste! {
            impl $crate::OsirisClient {
                #[doc = "Call " $method_name " on a " $collection " instance"]
                pub async fn [<$collection:snake _ $method_name>](&self, id: &str $(, $param: $param_type)*) -> Result<$model, $crate::OsirisClientError> {
                    let params = serde_json::json!({
                        $(stringify!($param): $param),*
                    });

                    let script = format!(
                        r#"
                        let ctx = get_context(["system"]);
                        let obj = ctx.get("{}", "{}");
                        // TODO: Call custom method on object
                        // obj.{}({});
                        ctx.save(obj);
                        obj
                        "#,
                        $collection, id, stringify!($method_name), params
                    );

                    let response = self.execute_script(&script).await?;
                    // TODO: Parse response from job result
                    Err($crate::OsirisClientError::CommandFailed("Not yet implemented".to_string()))
                }
            }
        }
    };
}

#[cfg(test)]
mod tests {
    use super::*;

    // Example model for testing
    #[derive(serde::Serialize, serde::Deserialize)]
    struct TestModel {
        id: String,
        name: String,
    }

    // This would generate the methods (can't actually test async in doc tests easily)
    // impl_osiris_crud!(TestModel, "test_models", "id");
}
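To make the naming scheme concrete: `impl_osiris_crud!` pastes the snake-cased collection name in front of the verb, so a hypothetical `Note` model registered under `"notes"` gains `notes_create`, `notes_get`, and so on. A sketch (the `Note` model and `archive` method are illustrative, not part of this commit):

```rust
use osiris_client::{impl_osiris_crud, impl_osiris_method};

#[derive(serde::Serialize, serde::Deserialize)]
struct Note {
    id: String,
    body: String,
}

// One invocation per model; the generated methods land on OsirisClient.
impl_osiris_crud!(Note, "notes", "id");
impl_osiris_method!(Note, "notes", archive, reason: String);

// Roughly the signatures this expands to (sketch):
// impl OsirisClient {
//     pub async fn notes_create(&self, model: &Note) -> Result<Note, OsirisClientError>;
//     pub async fn notes_get(&self, id: &str) -> Result<Note, OsirisClientError>;
//     pub async fn notes_archive(&self, id: &str, reason: String) -> Result<Note, OsirisClientError>;
// }
```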
lib/clients/osiris/src/zdfz_extensions.rs (new file, 24 lines)
@@ -0,0 +1,24 @@
//! ZDFZ model extensions for OsirisClient
//!
//! This module generates CRUD and custom methods for ZDFZ models.
//! It must be in the osiris-client crate to satisfy Rust's orphan rules.

use crate::{impl_osiris_crud, impl_osiris_method};

// Import ZDFZ models - these will be available when zdfz-models is a dependency
#[cfg(feature = "zdfz")]
use zdfz_models::*;

// ========== Core Business Models ==========

// Digital Residents - Individual users of the freezone
#[cfg(feature = "zdfz")]
impl_osiris_crud!(ApiDigitalResident, "digital_residents", "resident_id");

// Free Zone Companies - Companies registered in the freezone
#[cfg(feature = "zdfz")]
impl_osiris_crud!(FreeZoneCompany, "free_zone_companies", "fzc_id");

// Invoices - Financial documents for companies
#[cfg(feature = "zdfz")]
impl_osiris_crud!(FreeZoneInvoice, "invoices", "fz_invoice_id");
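With the `zdfz` feature enabled, the three invocations above put CRUD methods for each model directly on the client. A hedged usage sketch (the ids are placeholders; the model fields live in `zdfz-models`):

```rust
use osiris_client::{OsirisClient, OsirisClientError};

// Assumes a client constructed elsewhere and the "zdfz" feature enabled.
async fn demo(client: &OsirisClient) -> Result<(), OsirisClientError> {
    let _resident = client.digital_residents_get("resident-123").await?;
    let _companies = client.free_zone_companies_list().await?;
    client.invoices_delete("inv-001").await?;
    Ok(())
}
```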
lib/models/context/Cargo.toml (new file, 18 lines)
@@ -0,0 +1,18 @@
[package]
name = "hero-context"
version.workspace = true
edition.workspace = true
description = "Context model for Hero platform"
license = "MIT OR Apache-2.0"

[dependencies]
serde.workspace = true
serde_json.workspace = true
chrono.workspace = true
rhai = { version = "1.19", features = ["sync"] }

[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
uuid.workspace = true

[target.'cfg(target_arch = "wasm32")'.dependencies]
uuid = { workspace = true, features = ["js"] }
lib/models/context/src/access.rs (new file, 181 lines)
@@ -0,0 +1,181 @@
//! Access Control Logic for Context

use serde::{Deserialize, Serialize};
use std::collections::HashMap;

/// Unified ACL configuration for objects
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ObjectAcl {
    /// Per-user permissions for this object type
    /// Maps public key -> list of permissions
    pub permissions: HashMap<String, Vec<ObjectPermission>>,

    /// Multi-signature requirements (optional)
    pub multi_sig: Option<MultiSigRequirement>,
}

/// Permissions for object operations
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "lowercase")]
pub enum ObjectPermission {
    /// Can create new objects of this type
    Create,

    /// Can read objects of this type
    Read,

    /// Can update existing objects of this type
    Update,

    /// Can delete objects of this type
    Delete,

    /// Can list all objects of this type
    List,
}

/// SAL access control - binary permission (can call or not)
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct SalAcl {
    /// List of public keys allowed to call this SAL
    pub allowed_callers: Vec<String>,

    /// Multi-signature requirements (optional)
    pub multi_sig: Option<MultiSigRequirement>,
}

/// Global permissions - simple RWX model
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "lowercase")]
pub enum GlobalPermission {
    /// Can read data
    Read,

    /// Can write/modify data
    Write,

    /// Can execute operations
    Execute,
}

/// Multi-signature requirements
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "lowercase")]
pub enum MultiSigRequirement {
    /// Require ALL specified signers to sign unanimously
    Unanimous {
        /// List of public keys that must ALL sign
        required_signers: Vec<String>,
    },

    /// Require a minimum number of signatures from a set
    Threshold {
        /// Minimum number of signatures required
        min_signatures: usize,

        /// Optional: specific set of allowed signers
        /// If None, any signers from the context are allowed
        allowed_signers: Option<Vec<String>>,
    },

    /// Require a percentage of signers from a set
    Percentage {
        /// Percentage required (0.0 to 1.0, e.g., 0.66 for 66%)
        percentage: f64,

        /// Optional: specific set of allowed signers
        /// If None, any signers from the context are allowed
        allowed_signers: Option<Vec<String>>,
    },
}

impl MultiSigRequirement {
    /// Check if signatories satisfy this multi-sig requirement
    pub fn check(&self, signatories: &[String], total_members: usize) -> bool {
        match self {
            MultiSigRequirement::Unanimous { required_signers } => {
                // ALL required signers must be present
                required_signers.iter().all(|signer| signatories.contains(signer))
            }
            MultiSigRequirement::Threshold { min_signatures, allowed_signers } => {
                // Check if we have enough signatures
                if signatories.len() < *min_signatures {
                    return false;
                }

                // If allowed_signers is specified, check all signatories are in the list
                if let Some(allowed) = allowed_signers {
                    signatories.iter().all(|sig| allowed.contains(sig))
                } else {
                    true
                }
            }
            MultiSigRequirement::Percentage { percentage, allowed_signers } => {
                if let Some(allowed) = allowed_signers {
                    // Filter signatories to only those in allowed list
                    let valid_sigs: Vec<_> = signatories
                        .iter()
                        .filter(|sig| allowed.contains(sig))
                        .collect();

                    let required_count = (allowed.len() as f64 * percentage).ceil() as usize;
                    valid_sigs.len() >= required_count
                } else {
                    // Use all context members
                    let required_count = (total_members as f64 * percentage).ceil() as usize;
                    signatories.len() >= required_count
                }
            }
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_multi_sig_unanimous() {
        let multi_sig = MultiSigRequirement::Unanimous {
            required_signers: vec!["alice".to_string(), "bob".to_string()],
        };

        // Both signers present - should pass
        assert!(multi_sig.check(&["alice".to_string(), "bob".to_string()], 3));

        // Only one signer - should fail
        assert!(!multi_sig.check(&["alice".to_string()], 3));
    }

    #[test]
    fn test_multi_sig_threshold() {
        let multi_sig = MultiSigRequirement::Threshold {
            min_signatures: 2,
            allowed_signers: Some(vec!["alice".to_string(), "bob".to_string(), "charlie".to_string()]),
        };

        // 2 signatures - should pass
        assert!(multi_sig.check(&["alice".to_string(), "bob".to_string()], 3));

        // 1 signature - should fail
        assert!(!multi_sig.check(&["alice".to_string()], 3));
    }

    #[test]
    fn test_multi_sig_percentage() {
        let multi_sig = MultiSigRequirement::Percentage {
            percentage: 0.66, // 66%
            allowed_signers: Some(vec![
                "alice".to_string(),
                "bob".to_string(),
                "charlie".to_string(),
            ]),
        };

        // 2 out of 3 (66%) - should pass
        assert!(multi_sig.check(&["alice".to_string(), "bob".to_string()], 3));

        // 1 out of 3 (33%) - should fail
        assert!(!multi_sig.check(&["alice".to_string()], 3));
    }
}
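One subtlety in the `Percentage` branch: the required count is taken with `ceil`, so 66% of three signers means `ceil(3 * 0.66) = ceil(1.98) = 2` signatures; and when `allowed_signers` is `None`, the raw signature count is compared against `total_members` without filtering signatories for membership. A small sketch against the public API (these types are re-exported from the crate root via `pub use access::*`):

```rust
use hero_context::MultiSigRequirement;

fn main() {
    let req = MultiSigRequirement::Percentage {
        percentage: 0.66,
        allowed_signers: None, // count against total_members instead of a fixed set
    };
    // ceil(3 * 0.66) = 2, so two of three members suffice.
    assert!(req.check(&["alice".into(), "bob".into()], 3));
    assert!(!req.check(&["alice".into()], 3));
}
```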
lib/models/context/src/lib.rs (new file, 343 lines)
@@ -0,0 +1,343 @@
//! Context Model
//!
//! A Context represents an isolated instance/workspace where users can:
//! - Store and retrieve objects (via Osiris)
//! - Execute SALs (System Abstraction Layer functions)
//! - Collaborate with specific permissions
//!
//! The Context is the authorization boundary - all operations go through it
//! and are subject to ACL checks.

use serde::{Deserialize, Serialize};
use std::collections::HashMap;

pub mod access;
pub mod rhai;

pub use access::*;

/// A Context represents an isolated workspace with ACL-controlled access
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct Context {
    /// Human-readable name
    pub name: String,

    /// Description of this context's purpose
    pub description: Option<String>,

    /// List of admin public keys - only admins can modify ACLs
    pub admins: Vec<String>,

    /// Global permissions (RWX) - what can users do in this context?
    /// Maps public key -> list of global permissions
    pub global_permissions: HashMap<String, Vec<GlobalPermission>>,

    /// Per-object-type ACLs - fine-grained control over data operations
    /// Maps object type (e.g., "notes", "events") -> ACL configuration
    pub object_acls: HashMap<String, ObjectAcl>,

    /// SAL ACLs - binary permission (can call or not)
    /// Maps SAL name (e.g., "launch_vm", "send_email") -> ACL configuration
    pub sal_acls: HashMap<String, SalAcl>,
}

impl Default for Context {
    fn default() -> Self {
        Self {
            name: String::new(),
            description: None,
            admins: Vec::new(),
            global_permissions: HashMap::new(),
            object_acls: HashMap::new(),
            sal_acls: HashMap::new(),
        }
    }
}

impl Context {
    /// Create a new context with a name and initial admin
    pub fn new(name: String, admin: String) -> Self {
        Self {
            name,
            description: None,
            admins: vec![admin],
            global_permissions: HashMap::new(),
            object_acls: HashMap::new(),
            sal_acls: HashMap::new(),
        }
    }

    /// Check if a user is an admin
    pub fn is_admin(&self, pubkey: &str) -> bool {
        self.admins.contains(&pubkey.to_string())
    }

    /// Check if a user has a global permission
    pub fn has_global_permission(&self, pubkey: &str, permission: &GlobalPermission) -> bool {
        self.global_permissions
            .get(pubkey)
            .map(|perms| perms.contains(permission))
            .unwrap_or(false)
    }

    /// Check if a user has permission for an object type
    pub fn has_object_permission(
        &self,
        pubkey: &str,
        object_type: &str,
        permission: &ObjectPermission,
    ) -> bool {
        self.object_acls
            .get(object_type)
            .and_then(|acl| acl.permissions.get(pubkey))
            .map(|perms| perms.contains(permission))
            .unwrap_or(false)
    }

    /// Check if a user can call a SAL
    pub fn can_call_sal(&self, pubkey: &str, sal_name: &str) -> bool {
        self.sal_acls
            .get(sal_name)
            .map(|acl| acl.allowed_callers.contains(&pubkey.to_string()))
            .unwrap_or(false)
    }

    /// Check if signatories satisfy multi-sig requirements for an object
    pub fn check_object_multi_sig(
        &self,
        signatories: &[String],
        object_type: &str,
    ) -> bool {
        if let Some(acl) = self.object_acls.get(object_type) {
            if let Some(multi_sig) = &acl.multi_sig {
                return multi_sig.check(signatories, self.global_permissions.len());
            }
        }
        // No multi-sig requirement
        true
    }

    /// Check if signatories satisfy multi-sig requirements for a SAL
    pub fn check_sal_multi_sig(
        &self,
        signatories: &[String],
        sal_name: &str,
    ) -> bool {
        if let Some(acl) = self.sal_acls.get(sal_name) {
            if let Some(multi_sig) = &acl.multi_sig {
                return multi_sig.check(signatories, self.global_permissions.len());
            }
        }
        // No multi-sig requirement
        true
    }

    /// Add an admin (only admins can call this)
    pub fn add_admin(&mut self, caller: &str, new_admin: String) -> Result<(), String> {
        if !self.is_admin(caller) {
            return Err("Only admins can add admins".to_string());
        }
        if !self.admins.contains(&new_admin) {
            self.admins.push(new_admin);
        }
        Ok(())
    }

    /// Grant a global permission to a user (only admins can call this)
    pub fn grant_global_permission(
        &mut self,
        caller: &str,
        pubkey: String,
        permission: GlobalPermission,
    ) -> Result<(), String> {
        if !self.is_admin(caller) {
            return Err("Only admins can grant permissions".to_string());
        }
        self.global_permissions
            .entry(pubkey)
            .or_insert_with(Vec::new)
            .push(permission);
        Ok(())
    }

    /// Grant an object permission to a user (only admins can call this)
    pub fn grant_object_permission(
        &mut self,
        caller: &str,
        pubkey: String,
        object_type: String,
        permission: ObjectPermission,
    ) -> Result<(), String> {
        if !self.is_admin(caller) {
            return Err("Only admins can grant permissions".to_string());
        }
        self.object_acls
            .entry(object_type)
            .or_insert_with(|| ObjectAcl {
                permissions: HashMap::new(),
                multi_sig: None,
            })
            .permissions
            .entry(pubkey)
            .or_insert_with(Vec::new)
            .push(permission);
        Ok(())
    }

    /// Grant SAL access to a user (only admins can call this)
    pub fn grant_sal_access(
        &mut self,
        caller: &str,
        pubkey: String,
        sal_name: String,
    ) -> Result<(), String> {
        if !self.is_admin(caller) {
            return Err("Only admins can grant SAL access".to_string());
        }
        self.sal_acls
            .entry(sal_name)
            .or_insert_with(|| SalAcl {
                allowed_callers: Vec::new(),
                multi_sig: None,
            })
            .allowed_callers
            .push(pubkey);
        Ok(())
    }

    /// Set multi-sig requirement for an object (only admins can call this)
    pub fn set_object_multi_sig(
        &mut self,
        caller: &str,
        object_type: String,
        multi_sig: MultiSigRequirement,
    ) -> Result<(), String> {
        if !self.is_admin(caller) {
            return Err("Only admins can set multi-sig requirements".to_string());
        }
        self.object_acls
            .entry(object_type)
            .or_insert_with(|| ObjectAcl {
                permissions: HashMap::new(),
                multi_sig: None,
            })
            .multi_sig = Some(multi_sig);
        Ok(())
    }

    /// Set multi-sig requirement for a SAL (only admins can call this)
    pub fn set_sal_multi_sig(
        &mut self,
        caller: &str,
        sal_name: String,
        multi_sig: MultiSigRequirement,
    ) -> Result<(), String> {
        if !self.is_admin(caller) {
            return Err("Only admins can set multi-sig requirements".to_string());
        }
        self.sal_acls
            .entry(sal_name)
            .or_insert_with(|| SalAcl {
                allowed_callers: Vec::new(),
                multi_sig: None,
            })
            .multi_sig = Some(multi_sig);
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_context_creation() {
        let ctx = Context::new("Test Context".to_string(), "admin_pk".to_string());
        assert_eq!(ctx.name, "Test Context");
        assert!(ctx.is_admin("admin_pk"));
    }

    #[test]
    fn test_admin_permissions() {
        let mut ctx = Context::new("Test".to_string(), "admin".to_string());

        // Admin can add another admin
        assert!(ctx.add_admin("admin", "admin2".to_string()).is_ok());
        assert!(ctx.is_admin("admin2"));

        // Non-admin cannot add admin
        assert!(ctx.add_admin("user1", "admin3".to_string()).is_err());
    }

    #[test]
    fn test_global_permissions() {
        let mut ctx = Context::new("Test".to_string(), "admin".to_string());

        // Admin can grant permissions
        assert!(ctx.grant_global_permission("admin", "user1".to_string(), GlobalPermission::Read).is_ok());
        assert!(ctx.has_global_permission("user1", &GlobalPermission::Read));
        assert!(!ctx.has_global_permission("user1", &GlobalPermission::Write));

        // Non-admin cannot grant permissions
        assert!(ctx.grant_global_permission("user1", "user2".to_string(), GlobalPermission::Read).is_err());
    }

    #[test]
    fn test_object_permissions() {
        let mut ctx = Context::new("Test".to_string(), "admin".to_string());

        // Admin can grant object permissions
        assert!(ctx.grant_object_permission("admin", "user1".to_string(), "notes".to_string(), ObjectPermission::Create).is_ok());
        assert!(ctx.has_object_permission("user1", "notes", &ObjectPermission::Create));
        assert!(!ctx.has_object_permission("user1", "notes", &ObjectPermission::Delete));
    }

    #[test]
    fn test_sal_permissions() {
        let mut ctx = Context::new("Test".to_string(), "admin".to_string());

        // Admin can grant SAL access
        assert!(ctx.grant_sal_access("admin", "user1".to_string(), "launch_vm".to_string()).is_ok());
        assert!(ctx.can_call_sal("user1", "launch_vm"));
        assert!(!ctx.can_call_sal("user1", "send_email"));
    }

    #[test]
    fn test_object_multi_sig_unanimous() {
        let mut ctx = Context::new("Test".to_string(), "admin".to_string());

        assert!(ctx.set_object_multi_sig(
            "admin",
            "sensitive_data".to_string(),
            MultiSigRequirement::Unanimous {
                required_signers: vec!["alice".to_string(), "bob".to_string()],
            },
        ).is_ok());

        // Both signers present - should pass
        assert!(ctx.check_object_multi_sig(&["alice".to_string(), "bob".to_string()], "sensitive_data"));

        // Only one signer - should fail
        assert!(!ctx.check_object_multi_sig(&["alice".to_string()], "sensitive_data"));
    }

    #[test]
    fn test_sal_multi_sig_threshold() {
        let mut ctx = Context::new("Test".to_string(), "admin".to_string());

        assert!(ctx.set_sal_multi_sig(
            "admin",
            "launch_vm".to_string(),
            MultiSigRequirement::Threshold {
                min_signatures: 2,
                allowed_signers: Some(vec!["alice".to_string(), "bob".to_string(), "charlie".to_string()]),
            },
        ).is_ok());

        // 2 signatures - should pass
        assert!(ctx.check_sal_multi_sig(&["alice".to_string(), "bob".to_string()], "launch_vm"));

        // 1 signature - should fail
        assert!(!ctx.check_sal_multi_sig(&["alice".to_string()], "launch_vm"));
    }
}
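Putting the pieces together from Rust: every mutation goes through an admin check, so the typical flow is to create the context with a bootstrap admin and then delegate narrowly. A sketch (the keys are placeholders):

```rust
use hero_context::{Context, GlobalPermission, ObjectPermission};

fn main() {
    let mut ctx = Context::new("finance".to_string(), "admin_pk".to_string());

    // Only "admin_pk" may grant; a non-admin caller would get Err(..).
    ctx.grant_global_permission("admin_pk", "user_pk".to_string(), GlobalPermission::Read)
        .unwrap();
    ctx.grant_object_permission(
        "admin_pk",
        "user_pk".to_string(),
        "invoices".to_string(),
        ObjectPermission::List,
    )
    .unwrap();

    assert!(ctx.has_object_permission("user_pk", "invoices", &ObjectPermission::List));
    // SAL access is a separate ACL and stays denied until granted.
    assert!(!ctx.can_call_sal("user_pk", "launch_vm"));
}
```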
lib/models/context/src/rhai.rs (new file, 327 lines)
@@ -0,0 +1,327 @@
use ::rhai::plugin::*;
use ::rhai::{CustomType, Dynamic, Engine, EvalAltResult, Module, TypeBuilder};

use crate::Context;

// ============================================================================
// Context Module
// ============================================================================

type RhaiContext = Context;

#[export_module]
mod rhai_context_module {
    use super::RhaiContext;
    use crate::MultiSigRequirement;
    use ::rhai::{Dynamic, EvalAltResult};

    /// Create a new context with name and initial admin
    #[rhai_fn(name = "new_context", return_raw)]
    pub fn new_context(name: String, admin: String) -> Result<RhaiContext, Box<EvalAltResult>> {
        Ok(RhaiContext::new(name, admin))
    }

    /// Set context description
    #[rhai_fn(name = "description", return_raw)]
    pub fn set_description(
        ctx: &mut RhaiContext,
        description: String,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        ctx.description = Some(description);
        Ok(ctx.clone())
    }

    // ========== Admin Management ==========

    /// Check if a user is an admin
    #[rhai_fn(name = "is_admin")]
    pub fn is_admin(ctx: &mut RhaiContext, pubkey: String) -> bool {
        ctx.is_admin(&pubkey)
    }

    /// Add an admin (only admins can call this)
    #[rhai_fn(name = "add_admin", return_raw)]
    pub fn add_admin(
        ctx: &mut RhaiContext,
        caller: String,
        new_admin: String,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        ctx.add_admin(&caller, new_admin)
            .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    // ========== Global Permission Management (RWX) ==========

    /// Grant a global permission to a user (only admins can call this)
    #[rhai_fn(name = "grant_global_permission", return_raw)]
    pub fn grant_global_permission(
        ctx: &mut RhaiContext,
        caller: String,
        pubkey: String,
        permission: String,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let perm = parse_global_permission(&permission)?;
        ctx.grant_global_permission(&caller, pubkey, perm)
            .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    /// Check if a user has a global permission
    #[rhai_fn(name = "has_global_permission", return_raw)]
    pub fn has_global_permission(
        ctx: &mut RhaiContext,
        pubkey: String,
        permission: String,
    ) -> Result<bool, Box<EvalAltResult>> {
        let perm = parse_global_permission(&permission)?;
        Ok(ctx.has_global_permission(&pubkey, &perm))
    }

    // ========== Object Permission Management ==========

    /// Grant an object permission to a user (only admins can call this)
    #[rhai_fn(name = "grant_object_permission", return_raw)]
    pub fn grant_object_permission(
        ctx: &mut RhaiContext,
        caller: String,
        pubkey: String,
        object_type: String,
        permission: String,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let perm = parse_object_permission(&permission)?;
        ctx.grant_object_permission(&caller, pubkey, object_type, perm)
            .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    /// Check if a user has an object permission
    #[rhai_fn(name = "has_object_permission", return_raw)]
    pub fn has_object_permission(
        ctx: &mut RhaiContext,
        pubkey: String,
        object_type: String,
        permission: String,
    ) -> Result<bool, Box<EvalAltResult>> {
        let perm = parse_object_permission(&permission)?;
        Ok(ctx.has_object_permission(&pubkey, &object_type, &perm))
    }

    // ========== SAL Permission Management (Binary) ==========

    /// Grant SAL access to a user (only admins can call this)
    #[rhai_fn(name = "grant_sal_access", return_raw)]
    pub fn grant_sal_access(
        ctx: &mut RhaiContext,
        caller: String,
        pubkey: String,
        sal_name: String,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        ctx.grant_sal_access(&caller, pubkey, sal_name)
            .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    /// Check if a user can call a SAL
    #[rhai_fn(name = "can_call_sal")]
    pub fn can_call_sal(ctx: &mut RhaiContext, pubkey: String, sal_name: String) -> bool {
        ctx.can_call_sal(&pubkey, &sal_name)
    }

    // ========== Multi-Sig Management for Objects ==========

    /// Set unanimous multi-sig requirement for an object (only admins can call this)
    #[rhai_fn(name = "set_object_multisig_unanimous", return_raw)]
    pub fn set_object_multisig_unanimous(
        ctx: &mut RhaiContext,
        caller: String,
        object_type: String,
        required_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let signers = parse_signers(required_signers)?;
        ctx.set_object_multi_sig(
            &caller,
            object_type,
            MultiSigRequirement::Unanimous { required_signers: signers },
        )
        .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    /// Set threshold multi-sig requirement for an object (only admins can call this)
    #[rhai_fn(name = "set_object_multisig_threshold", return_raw)]
    pub fn set_object_multisig_threshold(
        ctx: &mut RhaiContext,
        caller: String,
        object_type: String,
        min_signatures: i64,
        allowed_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let signers = parse_signers(allowed_signers)?;
        ctx.set_object_multi_sig(
            &caller,
            object_type,
            MultiSigRequirement::Threshold {
                min_signatures: min_signatures as usize,
                allowed_signers: Some(signers),
            },
        )
        .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    /// Set percentage multi-sig requirement for an object (only admins can call this)
    #[rhai_fn(name = "set_object_multisig_percentage", return_raw)]
    pub fn set_object_multisig_percentage(
        ctx: &mut RhaiContext,
        caller: String,
        object_type: String,
        percentage: f64,
        allowed_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        if percentage < 0.0 || percentage > 1.0 {
            return Err("Percentage must be between 0.0 and 1.0".into());
        }
        let signers = parse_signers(allowed_signers)?;
        ctx.set_object_multi_sig(
            &caller,
            object_type,
            MultiSigRequirement::Percentage {
                percentage,
                allowed_signers: Some(signers),
            },
        )
        .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    // ========== Multi-Sig Management for SALs ==========

    /// Set unanimous multi-sig requirement for a SAL (only admins can call this)
    #[rhai_fn(name = "set_sal_multisig_unanimous", return_raw)]
    pub fn set_sal_multisig_unanimous(
        ctx: &mut RhaiContext,
        caller: String,
        sal_name: String,
        required_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let signers = parse_signers(required_signers)?;
        ctx.set_sal_multi_sig(
            &caller,
            sal_name,
            MultiSigRequirement::Unanimous { required_signers: signers },
        )
        .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    /// Set threshold multi-sig requirement for a SAL (only admins can call this)
    #[rhai_fn(name = "set_sal_multisig_threshold", return_raw)]
    pub fn set_sal_multisig_threshold(
        ctx: &mut RhaiContext,
        caller: String,
        sal_name: String,
        min_signatures: i64,
        allowed_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let signers = parse_signers(allowed_signers)?;
        ctx.set_sal_multi_sig(
            &caller,
            sal_name,
            MultiSigRequirement::Threshold {
                min_signatures: min_signatures as usize,
                allowed_signers: Some(signers),
            },
        )
        .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    /// Set percentage multi-sig requirement for a SAL (only admins can call this)
    #[rhai_fn(name = "set_sal_multisig_percentage", return_raw)]
    pub fn set_sal_multisig_percentage(
        ctx: &mut RhaiContext,
        caller: String,
        sal_name: String,
        percentage: f64,
        allowed_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        if percentage < 0.0 || percentage > 1.0 {
            return Err("Percentage must be between 0.0 and 1.0".into());
        }
        let signers = parse_signers(allowed_signers)?;
        ctx.set_sal_multi_sig(
            &caller,
            sal_name,
            MultiSigRequirement::Percentage {
                percentage,
                allowed_signers: Some(signers),
            },
        )
        .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;
        Ok(ctx.clone())
    }

    // ========== Getters ==========

    #[rhai_fn(name = "get_name")]
    pub fn get_name(ctx: &mut RhaiContext) -> String {
        ctx.name.clone()
    }

    #[rhai_fn(name = "get_description")]
    pub fn get_description(ctx: &mut RhaiContext) -> String {
        ctx.description.clone().unwrap_or_default()
    }
}

// Helper functions to parse permissions
fn parse_global_permission(permission: &str) -> Result<crate::GlobalPermission, Box<EvalAltResult>> {
    match permission {
        "read" => Ok(crate::GlobalPermission::Read),
        "write" => Ok(crate::GlobalPermission::Write),
        "execute" => Ok(crate::GlobalPermission::Execute),
        _ => Err(format!("Invalid global permission: {}", permission).into()),
    }
}

fn parse_object_permission(permission: &str) -> Result<crate::ObjectPermission, Box<EvalAltResult>> {
    match permission {
        "create" => Ok(crate::ObjectPermission::Create),
        "read" => Ok(crate::ObjectPermission::Read),
        "update" => Ok(crate::ObjectPermission::Update),
        "delete" => Ok(crate::ObjectPermission::Delete),
        "list" => Ok(crate::ObjectPermission::List),
        _ => Err(format!("Invalid object permission: {}", permission).into()),
    }
}

fn parse_signers(signers: Vec<Dynamic>) -> Result<Vec<String>, Box<EvalAltResult>> {
    signers
        .into_iter()
        .map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
        .collect::<Result<Vec<String>, _>>()
        .map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))
}

impl CustomType for Context {
    fn build(mut builder: TypeBuilder<Self>) {
        builder.with_name("Context");
    }
}

/// Register the Context module with the Rhai engine
pub fn register_context_module(engine: &mut Engine) {
    let module = exported_module!(rhai_context_module);
    engine.register_static_module("context", module.into());
    engine.register_type::<Context>();
}

/// Register Context functions directly on the engine (for global access)
pub fn register_context_functions(engine: &mut Engine) {
    engine.register_type::<Context>();

    // Register the module functions
    let module = exported_module!(rhai_context_module);
    engine.register_global_module(module.into());
}
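Since every setter returns the updated `Context`, scripts can chain calls in a builder style. A minimal sketch of driving the module from Rust, assuming the crate is consumed as `hero_context` and the engine uses `register_context_functions` for global access:

```rust
use hero_context::{rhai::register_context_functions, Context, GlobalPermission};

fn main() -> Result<(), Box<rhai::EvalAltResult>> {
    let mut engine = rhai::Engine::new();
    register_context_functions(&mut engine);

    // The script creates a context, then mutates it via the registered methods.
    let ctx: Context = engine.eval(
        r#"
            let ctx = new_context("demo", "admin_pk");
            ctx = ctx.description("scratch workspace");
            ctx = ctx.grant_global_permission("admin_pk", "user_pk", "read");
            ctx
        "#,
    )?;

    assert!(ctx.has_global_permission("user_pk", &GlobalPermission::Read));
    Ok(())
}
```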
lib/models/context/src/rhai_old.rs (new file, 333 lines)
@@ -0,0 +1,333 @@
use ::rhai::plugin::*;
use ::rhai::{CustomType, Dynamic, Engine, EvalAltResult, Module, TypeBuilder};

use crate::Context;

// ============================================================================
// Context Module
// ============================================================================

type RhaiContext = Context;

#[export_module]
mod rhai_context_module {
    use super::RhaiContext;
    use crate::{GlobalPermission, MultiSigRequirement, ObjectPermission};
    use ::rhai::{Dynamic, EvalAltResult};

    /// Create a new context with name and initial admin
    #[rhai_fn(name = "new_context", return_raw)]
    pub fn new_context(name: String, admin: String) -> Result<RhaiContext, Box<EvalAltResult>> {
        Ok(RhaiContext::new(name, admin))
    }

    /// Set context description
    #[rhai_fn(name = "description", return_raw)]
    pub fn set_description(
        ctx: &mut RhaiContext,
        description: String,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        ctx.description = Some(description);
        Ok(ctx.clone())
    }

    // ========== Global Permission Management ==========

    /// Grant a global permission to a user
    #[rhai_fn(name = "grant_permission", return_raw)]
    pub fn grant_permission(
        ctx: &mut RhaiContext,
        pubkey: String,
        permission: String,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let perm = match permission.as_str() {
            "read" => Permission::Read,
            "write" => Permission::Write,
            "delete" => Permission::Delete,
            "execute" => Permission::Execute,
            "admin" => Permission::Admin,
            "invite" => Permission::Invite,
            _ => return Err(format!("Invalid permission: {}", permission).into()),
        };
        ctx.grant_permission(pubkey, perm);
        Ok(ctx.clone())
    }

    /// Check if a user has a global permission
    #[rhai_fn(name = "has_permission", return_raw)]
    pub fn has_permission(
        ctx: &mut RhaiContext,
        pubkey: String,
        permission: String,
    ) -> Result<bool, Box<EvalAltResult>> {
        let perm = match permission.as_str() {
            "read" => Permission::Read,
            "write" => Permission::Write,
            "delete" => Permission::Delete,
            "execute" => Permission::Execute,
            "admin" => Permission::Admin,
            "invite" => Permission::Invite,
            _ => return Err(format!("Invalid permission: {}", permission).into()),
        };
        Ok(ctx.has_permission(&pubkey, &perm))
    }

    // ========== Object Permission Management ==========

    /// Grant an object permission to a user
    #[rhai_fn(name = "grant_object_permission", return_raw)]
    pub fn grant_object_permission(
        ctx: &mut RhaiContext,
        pubkey: String,
        object_type: String,
        permission: String,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let perm = parse_resource_permission(&permission)?;
        ctx.grant_resource_permission(pubkey, object_type, perm, false);
        Ok(ctx.clone())
    }

    /// Check if a user has an object permission
    #[rhai_fn(name = "has_object_permission", return_raw)]
    pub fn has_object_permission(
        ctx: &mut RhaiContext,
        pubkey: String,
        object_type: String,
        permission: String,
    ) -> Result<bool, Box<EvalAltResult>> {
        let perm = parse_resource_permission(&permission)?;
        Ok(ctx.has_resource_permission(&pubkey, &object_type, &perm, false))
    }

    // ========== SAL Permission Management ==========

    /// Grant a SAL permission to a user
    #[rhai_fn(name = "grant_sal_permission", return_raw)]
    pub fn grant_sal_permission(
        ctx: &mut RhaiContext,
        pubkey: String,
        sal_name: String,
        permission: String,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let perm = parse_resource_permission(&permission)?;
        ctx.grant_resource_permission(pubkey, sal_name, perm, true);
        Ok(ctx.clone())
    }

    /// Check if a user has a SAL permission
    #[rhai_fn(name = "has_sal_permission", return_raw)]
    pub fn has_sal_permission(
        ctx: &mut RhaiContext,
        pubkey: String,
        sal_name: String,
        permission: String,
    ) -> Result<bool, Box<EvalAltResult>> {
        let perm = parse_resource_permission(&permission)?;
        Ok(ctx.has_resource_permission(&pubkey, &sal_name, &perm, true))
    }

    // ========== Multi-Sig Management ==========

    /// Set unanimous multi-sig requirement for an object
    #[rhai_fn(name = "set_object_multisig_unanimous", return_raw)]
    pub fn set_object_multisig_unanimous(
        ctx: &mut RhaiContext,
        object_type: String,
        required_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let signers: Result<Vec<String>, _> = required_signers
            .into_iter()
            .map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
            .collect();

        let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;

        ctx.set_multi_sig(
            object_type,
            MultiSigRequirement::Unanimous { required_signers: signers },
            false,
        );
        Ok(ctx.clone())
    }

    /// Set threshold multi-sig requirement for an object
    #[rhai_fn(name = "set_object_multisig_threshold", return_raw)]
    pub fn set_object_multisig_threshold(
        ctx: &mut RhaiContext,
        object_type: String,
        min_signatures: i64,
        allowed_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let signers: Result<Vec<String>, _> = allowed_signers
            .into_iter()
            .map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
            .collect();

        let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;

        ctx.set_multi_sig(
            object_type,
            MultiSigRequirement::Threshold {
                min_signatures: min_signatures as usize,
                allowed_signers: Some(signers),
            },
            false,
        );
        Ok(ctx.clone())
    }

    /// Set percentage multi-sig requirement for an object
    #[rhai_fn(name = "set_object_multisig_percentage", return_raw)]
    pub fn set_object_multisig_percentage(
        ctx: &mut RhaiContext,
        object_type: String,
        percentage: f64,
        allowed_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        if percentage < 0.0 || percentage > 1.0 {
            return Err("Percentage must be between 0.0 and 1.0".into());
        }

        let signers: Result<Vec<String>, _> = allowed_signers
            .into_iter()
            .map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
            .collect();

        let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;

        ctx.set_multi_sig(
            object_type,
            MultiSigRequirement::Percentage {
                percentage,
                allowed_signers: Some(signers),
            },
            false,
        );
        Ok(ctx.clone())
    }

    /// Set unanimous multi-sig requirement for a SAL
    #[rhai_fn(name = "set_sal_multisig_unanimous", return_raw)]
    pub fn set_sal_multisig_unanimous(
        ctx: &mut RhaiContext,
        sal_name: String,
        required_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let signers: Result<Vec<String>, _> = required_signers
            .into_iter()
            .map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
            .collect();

        let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;

        ctx.set_multi_sig(
            sal_name,
            MultiSigRequirement::Unanimous { required_signers: signers },
            true,
        );
        Ok(ctx.clone())
    }

    /// Set threshold multi-sig requirement for a SAL
    #[rhai_fn(name = "set_sal_multisig_threshold", return_raw)]
    pub fn set_sal_multisig_threshold(
        ctx: &mut RhaiContext,
        sal_name: String,
        min_signatures: i64,
        allowed_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        let signers: Result<Vec<String>, _> = allowed_signers
            .into_iter()
            .map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
            .collect();

        let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;

        ctx.set_multi_sig(
            sal_name,
            MultiSigRequirement::Threshold {
                min_signatures: min_signatures as usize,
                allowed_signers: Some(signers),
            },
            true,
        );
        Ok(ctx.clone())
    }

    /// Set percentage multi-sig requirement for a SAL
    #[rhai_fn(name = "set_sal_multisig_percentage", return_raw)]
    pub fn set_sal_multisig_percentage(
        ctx: &mut RhaiContext,
        sal_name: String,
        percentage: f64,
        allowed_signers: Vec<Dynamic>,
    ) -> Result<RhaiContext, Box<EvalAltResult>> {
        if percentage < 0.0 || percentage > 1.0 {
            return Err("Percentage must be between 0.0 and 1.0".into());
        }

        let signers: Result<Vec<String>, _> = allowed_signers
            .into_iter()
            .map(|d| d.into_string().map_err(|e| format!("Invalid signer: {:?}", e)))
            .collect();

        let signers = signers.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(e.into(), rhai::Position::NONE)))?;

        ctx.set_multi_sig(
            sal_name,
            MultiSigRequirement::Percentage {
                percentage,
                allowed_signers: Some(signers),
            },
            true,
        );
        Ok(ctx.clone())
    }

    // ========== Getters ==========

    #[rhai_fn(name = "get_name")]
    pub fn get_name(ctx: &mut RhaiContext) -> String {
        ctx.name.clone()
    }

    #[rhai_fn(name = "get_description")]
    pub fn get_description(ctx: &mut RhaiContext) -> String {
        ctx.description.clone().unwrap_or_default()
    }
}

// Helper function to parse resource permissions
fn parse_resource_permission(permission: &str) -> Result<crate::ResourcePermission, Box<EvalAltResult>> {
    match permission {
        "create" => Ok(crate::ResourcePermission::Create),
        "read" => Ok(crate::ResourcePermission::Read),
        "update" => Ok(crate::ResourcePermission::Update),
        "delete" => Ok(crate::ResourcePermission::Delete),
        "list" => Ok(crate::ResourcePermission::List),
        "execute" => Ok(crate::ResourcePermission::Execute),
        _ => Err(format!("Invalid resource permission: {}", permission).into()),
    }
}

impl CustomType for Context {
    fn build(mut builder: TypeBuilder<Self>) {
        builder.with_name("Context");
    }
}

/// Register the Context module with the Rhai engine
pub fn register_context_module(engine: &mut Engine) {
    let module = exported_module!(rhai_context_module);
    engine.register_static_module("context", module.into());
    engine.register_type::<Context>();
}

/// Register Context functions directly on the engine (for global access)
pub fn register_context_functions(engine: &mut Engine) {
    engine.register_type::<Context>();

    // Register the module functions
    let module = exported_module!(rhai_context_module);
    engine.register_global_module(module.into());
}
@@ -118,28 +118,10 @@ async fn test_01_flow_create_simple() {
 
     let client = create_client().await;
 
-    // Create jobs for the flow
+    // Note: Jobs should be created by the supervisor, not the coordinator
+    // For this test, we'll create a flow with job IDs that may not exist yet
+    // In a real scenario, jobs would be created by the supervisor first
     let job_ids = vec![BASE_JOB_ID, BASE_JOB_ID + 1];
-    for (i, job_id) in job_ids.iter().enumerate() {
-        let job = JobCreate {
-            id: *job_id,
-            caller_id: TEST_CALLER_ID,
-            context_id: TEST_CONTEXT_ID,
-            script: format!("print('job {}')", i),
-            script_type: ScriptType::Python,
-            timeout: 60,
-            retries: 0,
-            env_vars: HashMap::new(),
-            prerequisites: vec![],
-            depends: if i == 0 { vec![] } else { vec![job_ids[i - 1]] },
-        };
-
-        let result = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
-        if let Err(ref e) = result {
-            println!(" Job {} creation error: {:?}", job_id, e);
-        }
-        assert!(result.is_ok(), "Job {} should be created", job_id);
-    }
 
     // Create flow
     let flow_create = FlowCreate {
@@ -171,21 +153,6 @@ async fn test_02_flow_load() {
 
     // Create a flow first (reuse from test_01)
     let job_ids = vec![BASE_JOB_ID, BASE_JOB_ID + 1];
-    for (i, job_id) in job_ids.iter().enumerate() {
-        let job = JobCreate {
-            id: *job_id,
-            caller_id: TEST_CALLER_ID,
-            context_id: TEST_CONTEXT_ID,
-            script: format!("print('job {}')", i),
-            script_type: ScriptType::Python,
-            timeout: 60,
-            retries: 0,
-            env_vars: HashMap::new(),
-            prerequisites: vec![],
-            depends: if i == 0 { vec![] } else { vec![job_ids[i - 1]] },
-        };
-        let _ = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
-    }
 
     let flow_create = FlowCreate {
         id: TEST_FLOW_ID,
@@ -216,23 +183,8 @@ async fn test_03_flow_dag() {
 
     let client = create_client().await;
 
-    // Create jobs with dependencies
+    // Note: Jobs should be created by the supervisor
     let job_ids = vec![BASE_JOB_ID + 100, BASE_JOB_ID + 101, BASE_JOB_ID + 102];
-    for (i, job_id) in job_ids.iter().enumerate() {
-        let job = JobCreate {
-            id: *job_id,
-            caller_id: TEST_CALLER_ID,
-            context_id: TEST_CONTEXT_ID,
-            script: format!("print('dag job {}')", i),
-            script_type: ScriptType::Python,
-            timeout: 60,
-            retries: 0,
-            env_vars: HashMap::new(),
-            prerequisites: vec![],
-            depends: if i == 0 { vec![] } else { vec![job_ids[i - 1]] },
-        };
-        let _ = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
-    }
 
     let flow_id = TEST_FLOW_ID + 1;
     let flow_create = FlowCreate {
@@ -268,19 +220,6 @@ async fn test_04_flow_start() {
 
     // Create a simple flow
     let job_id = BASE_JOB_ID + 200;
-    let job = JobCreate {
-        id: job_id,
-        caller_id: TEST_CALLER_ID,
-        context_id: TEST_CONTEXT_ID,
-        script: "print('start test')".to_string(),
-        script_type: ScriptType::Python,
-        timeout: 60,
-        retries: 0,
-        env_vars: HashMap::new(),
-        prerequisites: vec![],
-        depends: vec![],
-    };
-    let _ = client.job_create_or_load(TEST_CONTEXT_ID, job).await;
 
     let flow_id = TEST_FLOW_ID + 2;
     let flow_create = FlowCreate {