move rhailib to herolib
11
rhailib/src/derive/Cargo.toml
Normal file
@@ -0,0 +1,11 @@
[package]
name = "derive"
version = "0.1.0"
edition = "2024"

[lib]
proc-macro = true

[dependencies]
syn = { version = "1.0", features = ["full"] }
quote = "1.0"
78
rhailib/src/derive/README.md
Normal file
@@ -0,0 +1,78 @@
# Rhai Derive Macros

This crate provides procedural macros to simplify the integration of Rust types with the Rhai scripting engine.

## `RhaiApi` Derive Macro

The `RhaiApi` macro automatically generates a Rhai module with a fluent, builder-style API for your Rust structs. This allows you to create and modify your structs in Rhai scripts using chained method calls.

### How It Works

When you derive `RhaiApi` on a struct, the macro generates:

1. A new Rust module named `{struct_name}_rhai_dsl`.
2. A Rhai `export_module` within that module named `generated_rhai_module`.
3. A `new_{struct_name}()` function to create a new instance of your struct.
4. Setter functions for each field in your struct, allowing for method chaining.
5. An `id()` function to retrieve the object's ID.

### Example

**Rust Struct Definition:**

```rust
use derive::RhaiApi;

#[derive(RhaiApi, Clone)]
pub struct Product {
    pub id: i64,
    pub name: String,
    pub price: f64,
}
```

**Generated Rhai API Usage:**

```rhai
// Import the generated module
import product_rhai_dsl::generated_rhai_module as product_api;

// Use the fluent API to build a new product
let my_product = product_api::new_product()
    .id(1)
    .name("Awesome Gadget")
    .price(99.99);

print(my_product.id()); // prints 1
```

## `FromVec` Derive Macro

The `FromVec` macro is a utility for tuple structs that contain a single field. It implements the `From<T>` trait, where `T` is the inner type, allowing for seamless conversions.

### Example

**Rust Struct Definition:**

```rust
use derive::FromVec;

#[derive(FromVec)]
pub struct MyVec(Vec<u8>);
```

**Usage:**

```rust
let data = vec![1, 2, 3];
let my_vec = MyVec::from(data);
```

## Usage

To use these macros in your project, add this crate as a dependency in your `Cargo.toml` file:

```toml
[dependencies]
derive = { path = "../path/to/rhailib/src/derive" }
```
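
Once the dependency is in place, the generated module still needs to be registered with a Rhai engine. A minimal sketch, assuming the `Product` struct from the example above and the standard `rhai::exported_module!` / `Engine::register_global_module` APIs:

```rust
use rhai::{exported_module, Engine};

let mut engine = Engine::new();
// `product_rhai_dsl` is the module generated by `#[derive(RhaiApi)]` on `Product`.
let module = exported_module!(product_rhai_dsl::generated_rhai_module);
engine.register_global_module(module.into());
```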
67
rhailib/src/derive/docs/ARCHITECTURE.md
Normal file
@@ -0,0 +1,67 @@
# Architecture of the `derive` Crate

The `derive` crate is a procedural macro crate responsible for generating boilerplate code that integrates Rust structs with the Rhai scripting engine. It simplifies the process of exposing Rust types and their properties to Rhai scripts.

## Core Functionality

The crate provides two main procedural macros:

1. `#[derive(RhaiApi)]`
2. `#[derive(FromVec)]`

---

## `#[derive(RhaiApi)]`

This is the primary macro of the crate. When applied to a Rust struct, it automatically generates a Rhai-compatible DSL (Domain-Specific Language) for that struct.

### Generated Code Structure

For a struct named `MyStruct`, the macro generates a new module named `mystruct_rhai_dsl`. This module contains a Rhai `export_module` with the following functions:

* **`new_mystruct()`**: A constructor function that creates a new instance of `MyStruct` within the Rhai engine.
* **Setter Functions**: For each field in `MyStruct`, a corresponding setter function is generated. For a field named `my_field`, a Rhai function `my_field(value)` is created to set its value.
* **`id()`**: A function to retrieve the ID of the object.

This allows for a fluent, chainable API within Rhai scripts, like so:

```rhai
let my_object = new_mystruct().field1(42).field2("hello");
```

### Implementation Details

The implementation resides in `src/rhai_api.rs`. It uses the `syn` crate to parse the input `DeriveInput` and the `quote` crate to construct the output `TokenStream`.

The process is as follows (a condensed sketch follows the list):

1. The macro input is parsed into a `DeriveInput` AST (Abstract Syntax Tree).
2. The struct's name and fields are extracted from the AST.
3. A new module name is generated based on the struct's name (e.g., `MyStruct` -> `mystruct_rhai_dsl`).
4. Using the `quote!` macro, the code for the new module, the `export_module`, the constructor, and the setter functions is generated.
5. The generated code is returned as a `TokenStream`, which the compiler then incorporates into the crate.
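
A condensed sketch of that flow, simplified from the `src/rhai_api.rs` implementation included later in this commit (the real version also emits the constructor, setter, and `id()` bodies):

```rust
use proc_macro::TokenStream;
use quote::{format_ident, quote};
use syn::{parse_macro_input, DeriveInput};

pub fn impl_rhai_api(input: TokenStream) -> TokenStream {
    // 1-2. Parse the input tokens and extract the struct name.
    let input = parse_macro_input!(input as DeriveInput);
    let struct_name = &input.ident;
    // 3. Derive the module name, e.g. `MyStruct` -> `mystruct_rhai_dsl`.
    let mod_name = format_ident!("{}_rhai_dsl", struct_name.to_string().to_lowercase());
    // 4. Generate the wrapping module (constructor, setters and `id()` omitted here).
    let expanded = quote! {
        pub mod #mod_name { /* generated functions go here */ }
    };
    // 5. Hand the generated tokens back to the compiler.
    TokenStream::from(expanded)
}
```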

### Architectural Diagram

```mermaid
graph TD
    A[Rust Struct Definition] -- `#[derive(RhaiApi)]` --> B{`derive` Crate};
    B -- `syn` --> C[Parse Struct AST];
    C -- Extract Fields & Name --> D[Generate Code with `quote`];
    D -- Create --> E[Constructor `new_...()`];
    D -- Create --> F[Setter Functions `field(...)`];
    D -- Create --> G[`id()` function];
    E & F & G -- Packaged into --> H[Rhai `export_module`];
    H -- Returned as `TokenStream` --> I[Compiler];
    I -- Integrates into --> J[Final Binary];
```

---

## `#[derive(FromVec)]`

This is a simpler utility macro. Its purpose is to generate a `From<Vec<T>>` implementation for a tuple struct that contains a single `Vec<T>`. This is useful for converting a vector of items into a specific newtype-pattern struct.

### Implementation

The implementation is located directly in `src/lib.rs`. It parses the input struct and, if it's a single-element tuple struct, generates the corresponding `From` implementation.
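
For illustration, the expansion for the `MyVec` newtype used in the crate README is roughly:

```rust
// Generated (approximately) by `#[derive(FromVec)]` on `pub struct MyVec(Vec<u8>);`
impl From<Vec<u8>> for MyVec {
    fn from(vec: Vec<u8>) -> Self {
        MyVec(vec)
    }
}
```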
117
rhailib/src/derive/src/lib.rs
Normal file
@@ -0,0 +1,117 @@
//! # Derive Macros for Rhai Integration
//!
//! This crate provides procedural macros to simplify the integration of Rust structs
//! with the Rhai scripting engine. It automatically generates boilerplate code for
//! exposing Rust types to Rhai scripts.

extern crate proc_macro;
use proc_macro::TokenStream;
use quote::quote;
use syn::{Data, DeriveInput, Fields, parse_macro_input};

mod rhai_api;

/// Derives the `RhaiApi` for a struct, generating a Rhai DSL module.
///
/// This macro creates a new module containing a Rhai `export_module` with:
/// - A constructor function (`new_<struct_name>()`)
/// - Setter functions for each field (chainable API)
/// - An `id()` function to retrieve the object's ID
///
/// # Example
///
/// ```rust
/// use derive::RhaiApi;
///
/// #[derive(RhaiApi)]
/// struct MyStruct {
///     id: u64,
///     name: String,
///     value: i32,
/// }
/// ```
///
/// This generates a Rhai module that allows scripts like:
/// ```rhai
/// let obj = new_mystruct().name("test").value(42);
/// let obj_id = obj.id();
/// ```
///
/// # Generated Module Structure
///
/// For a struct `MyStruct`, this creates a module `mystruct_rhai_dsl` containing
/// the Rhai-compatible functions. The module can be registered with a Rhai engine
/// to expose the functionality to scripts.
///
/// # Limitations
///
/// - Only works with structs that have named fields
/// - Fields named `base_data` are ignored during generation
/// - The struct must implement an `id()` method returning a numeric type
#[proc_macro_derive(RhaiApi)]
pub fn rhai_api_derive(input: TokenStream) -> TokenStream {
    rhai_api::impl_rhai_api(input)
}

/// Derives a `From<T>` implementation for single-element tuple structs.
///
/// This macro generates a `From` trait implementation that allows converting
/// the inner type directly into the tuple struct wrapper.
///
/// # Example
///
/// ```rust
/// use derive::FromVec;
///
/// #[derive(FromVec)]
/// struct MyWrapper(Vec<String>);
/// ```
///
/// This generates:
/// ```rust
/// impl From<Vec<String>> for MyWrapper {
///     fn from(vec: Vec<String>) -> Self {
///         MyWrapper(vec)
///     }
/// }
/// ```
///
/// # Limitations
///
/// - Only works with tuple structs containing exactly one field
/// - The struct must be a simple wrapper around another type
///
/// # Panics
///
/// This macro will panic at compile time if:
/// - Applied to a struct that is not a tuple struct
/// - Applied to a tuple struct with more or fewer than one field
#[proc_macro_derive(FromVec)]
pub fn from_vec_derive(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    let name = input.ident;

    let inner_type = match input.data {
        Data::Struct(s) => match s.fields {
            Fields::Unnamed(mut fields) => {
                if fields.unnamed.len() != 1 {
                    panic!("FromVec can only be derived for tuple structs with one field.");
                }
                let field = fields.unnamed.pop().unwrap().into_value();
                field.ty
            }
            _ => panic!("FromVec can only be derived for tuple structs."),
        },
        _ => panic!("FromVec can only be derived for structs."),
    };

    let expanded = quote! {
        impl From<#inner_type> for #name {
            fn from(vec: #inner_type) -> Self {
                #name(vec)
            }
        }
    };

    TokenStream::from(expanded)
}
116
rhailib/src/derive/src/rhai_api.rs
Normal file
@@ -0,0 +1,116 @@
//! Implementation of the `RhaiApi` derive macro.
//!
//! This module contains the core logic for generating Rhai-compatible DSL modules
//! from Rust struct definitions.

use proc_macro::TokenStream;
use quote::{format_ident, quote};
use syn::{Data, DeriveInput, Fields, parse_macro_input};

/// Implements the `RhaiApi` derive macro functionality.
///
/// This function takes a `TokenStream` representing a struct definition and generates
/// a complete Rhai DSL module with constructor, setter functions, and utility methods.
///
/// # Generated Code Structure
///
/// For a struct `MyStruct`, this generates:
/// - A module named `mystruct_rhai_dsl`
/// - A constructor function `new_mystruct()`
/// - Setter functions for each field (excluding `base_data`)
/// - An `id()` function for object identification
///
/// # Arguments
///
/// * `input` - A `TokenStream` containing the struct definition to process
///
/// # Returns
///
/// A `TokenStream` containing the generated Rhai DSL module code
///
/// # Panics
///
/// This function will panic if:
/// - The input is not a struct
/// - The struct does not have named fields
pub fn impl_rhai_api(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    let struct_name = &input.ident;
    let struct_name_lowercase_str = struct_name.to_string().to_lowercase();

    let mod_name = format_ident!("{}_rhai_dsl", struct_name_lowercase_str);
    let id_fn_name = format_ident!("{}_id", struct_name_lowercase_str);

    // --- Generate `new` function ---
    let new_fn_name_str = format!("new_{}", struct_name_lowercase_str);
    let new_fn_name_ident = format_ident!("new_{}", struct_name_lowercase_str);
    let new_fn = quote! {
        #[rhai_fn(name = #new_fn_name_str, return_raw)]
        pub fn #new_fn_name_ident() -> Result<RhaiObject, Box<EvalAltResult>> {
            let object = RhaiObject::new();
            Ok(object)
        }
    };

    // --- Generate setter functions from struct fields ---
    let fields = if let Data::Struct(s) = &input.data {
        if let Fields::Named(fields) = &s.fields {
            &fields.named
        } else {
            panic!("RhaiApi can only be derived for structs with named fields.");
        }
    } else {
        panic!("RhaiApi can only be derived for structs.");
    };

    let setter_fns = fields.iter().map(|f| {
        let field_name = f.ident.as_ref().unwrap();
        let field_type = &f.ty;

        if field_name.to_string() == "base_data" {
            return quote! {};
        }

        let rhai_fn_name_str = field_name.to_string();
        let rust_fn_name = format_ident!("{}_{}", struct_name_lowercase_str, field_name);

        quote! {
            #[rhai_fn(name = #rhai_fn_name_str, return_raw, global, pure)]
            pub fn #rust_fn_name(
                object: &mut RhaiObject,
                value: #field_type,
            ) -> Result<RhaiObject, Box<EvalAltResult>> {
                let owned_object = std::mem::take(object);
                *object = owned_object.#field_name(value);
                Ok(object.clone())
            }
        }
    });

    let expanded = quote! {
        pub mod #mod_name {
            use rhai::plugin::*;
            use rhai::{EvalAltResult, INT};
            use super::#struct_name;
            use std::mem;

            type RhaiObject = #struct_name;

            #[export_module]
            pub mod generated_rhai_module {
                use super::*;

                #new_fn

                #[rhai_fn(name = "id", return_raw, global, pure)]
                pub fn #id_fn_name(object: &mut RhaiObject) -> Result<i64, Box<EvalAltResult>> {
                    Ok(object.id() as i64)
                }

                #(#setter_fns)*
            }
        }
    };

    TokenStream::from(expanded)
}
17
rhailib/src/flow/Cargo.toml
Normal file
@@ -0,0 +1,17 @@
[package]
name = "flow"
version = "0.1.0"
edition = "2021"
description = "Simple flow manager for Rhai scripts"

[dependencies]
rhai = { version = "=1.21.0", features = ["std", "sync"] }
rhai_dispatcher = { path = "../dispatcher" }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }
redis = { version = "0.23", features = ["tokio-comp"] }
uuid = { version = "1.0", features = ["v4"] }

[dev-dependencies]
tempfile = "3"
110
rhailib/src/flow/README.md
Normal file
@@ -0,0 +1,110 @@
# Flow Manager

A simple, generic flow manager for Rhai scripts with a builder-pattern API and non-blocking execution.

## Features

- **Builder Pattern API**: Fluent interface for creating steps and flows
- **Non-blocking Execution**: Uses `tokio::spawn` for async step execution
- **Simple State Management**: Redis-based state tracking
- **Retry Logic**: Configurable timeouts and retry attempts
- **Mock API Support**: Built-in mock API for testing different scenarios
- **RhaiDispatcher Integration**: Seamless integration with the existing Rhai execution system

## Quick Start

```rust
use flow::{new_step, new_flow, FlowExecutor};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create executor
    let executor = FlowExecutor::new("redis://127.0.0.1/").await?;

    // Build steps using the fluent API
    let step1 = new_step("stripe_config")
        .script("stripe_config_script")
        .timeout(5)
        .retries(2)
        .build();

    let step2 = new_step("stripe_config_confirm")
        .script("script that looks up stripe config confirmation in db")
        .timeout(5)
        .build();

    let step3 = new_step("create_product")
        .script("create_product_script")
        .timeout(10)
        .retries(1)
        .build();

    // Build the flow using the fluent API
    let flow = new_flow("stripe_payment_request")
        .add_step(step1)
        .add_step(step2)
        .add_step(step3)
        .build();

    // Execute the flow (non-blocking)
    let result = executor.execute_flow(flow).await?;
    println!("Flow started: {}", result);

    Ok(())
}
```

## Architecture

### Core Components

- **Types** (`types.rs`): Core data structures (Flow, Step, status enums)
- **Builder** (`builder.rs`): Fluent API for constructing flows and steps
- **State** (`state.rs`): Simple Redis-based state management
- **Executor** (`executor.rs`): Non-blocking flow execution engine
- **Mock API** (`mock_api.rs`): Testing utilities for different response scenarios

### State Management

The system tracks minimal state:

**Flow State:**
- `flow_id: String` - unique identifier
- `status: FlowStatus` (Created, Running, Completed, Failed)
- `current_step: Option<String>` - currently executing step
- `completed_steps: Vec<String>` - list of finished steps

**Step State:**
- `step_id: String` - unique identifier
- `status: StepStatus` (Pending, Running, Completed, Failed)
- `attempt_count: u32` - for retry logic
- `output: Option<String>` - result from script execution

**Storage:**
- Redis key-value pairs: `flow:{flow_id}` and `step:{flow_id}:{step_id}`
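
These records can also be read back programmatically. A minimal sketch, assuming a Redis instance at the default local URL and a `flow` value built as in the Quick Start (run inside an async context that returns a `Result`):

```rust
use flow::StateManager;

// The executor exposes the same data via `get_flow_status` / `get_step_status`;
// this goes through the state manager directly.
let state_manager = StateManager::new("redis://127.0.0.1/").await?;
if let Some(flow_state) = state_manager.load_flow_state(&flow.id).await? {
    println!("{:?}: {} step(s) completed", flow_state.status, flow_state.completed_steps.len());
}
```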

## Examples

Run the example:

```bash
cd ../rhailib/src/flow
cargo run --example stripe_flow_example
```

## Testing

```bash
cargo test
```

Note: Some tests require Redis to be running. Set `SKIP_REDIS_TESTS=1` to skip Redis-dependent tests.

## Integration

The flow manager integrates with:
- **RhaiDispatcher**: For executing Rhai scripts
- **Redis**: For state persistence
- **tokio**: For non-blocking async execution

This provides a simple, reliable foundation for orchestrating complex workflows while maintaining the non-blocking execution pattern established in the payment system.
90
rhailib/src/flow/examples/stripe_flow_example.rs
Normal file
@@ -0,0 +1,90 @@
//! Example demonstrating the flow manager with mock Stripe API calls

use flow::{new_step, new_flow, FlowExecutor};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Flow Manager Example ===");
    println!("Demonstrating the builder pattern API with mock Stripe workflow\n");

    // Create the flow executor
    let executor = FlowExecutor::new("redis://127.0.0.1/").await?;

    // Build steps using the fluent API
    let step1 = new_step("stripe_config")
        .script("mock_api_call stripe_config")
        .timeout(5)
        .retries(2)
        .build();

    let step2 = new_step("stripe_config_confirm")
        .script("mock_api_call create_product")
        .timeout(5)
        .retries(1)
        .build();

    let step3 = new_step("create_product")
        .script("mock_api_call create_product")
        .timeout(10)
        .retries(1)
        .build();

    // Build flow using the fluent API
    let flow = new_flow("stripe_payment_request")
        .add_step(step1)
        .add_step(step2)
        .add_step(step3)
        .build();

    println!("Created flow: {}", flow.name);
    println!("Flow ID: {}", flow.id);
    println!("Number of steps: {}", flow.steps.len());

    for (i, step) in flow.steps.iter().enumerate() {
        println!("  Step {}: {} (timeout: {}s, retries: {})",
            i + 1, step.name, step.timeout_seconds, step.max_retries);
    }

    // Execute the flow (non-blocking)
    println!("\n🚀 Starting flow execution...");
    let result = executor.execute_flow(flow.clone()).await?;
    println!("✅ {}", result);

    // Monitor flow progress
    println!("\n📊 Monitoring flow progress...");
    for _ in 0..10 {
        tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;

        if let Ok(Some(flow_state)) = executor.get_flow_status(&flow.id).await {
            println!("  Status: {:?}, Current step: {:?}, Completed: {}/{}",
                flow_state.status,
                flow_state.current_step,
                flow_state.completed_steps.len(),
                flow.steps.len());

            if matches!(flow_state.status, flow::FlowStatus::Completed | flow::FlowStatus::Failed) {
                break;
            }
        }
    }

    // Check final status
    if let Ok(Some(final_state)) = executor.get_flow_status(&flow.id).await {
        println!("\n🎯 Final flow status: {:?}", final_state.status);
        println!("Completed steps: {:?}", final_state.completed_steps);

        // Check individual step results
        for step in &flow.steps {
            if let Ok(Some(step_state)) = executor.get_step_status(&flow.id, &step.id).await {
                println!("  Step '{}': {:?} (attempts: {})",
                    step.name, step_state.status, step_state.attempt_count);
                if let Some(output) = &step_state.output {
                    println!("    Output: {}", output);
                }
            }
        }
    }

    println!("\n✨ Flow execution demonstration completed!");
    Ok(())
}
108
rhailib/src/flow/src/builder.rs
Normal file
@@ -0,0 +1,108 @@
//! Builder patterns for steps and flows

use crate::types::{Step, Flow};

/// Builder for creating steps with a fluent API
pub struct StepBuilder {
    step: Step,
}

impl StepBuilder {
    pub fn new(name: &str) -> Self {
        Self {
            step: Step::new(name),
        }
    }

    /// Set the script content for this step
    pub fn script(mut self, script: &str) -> Self {
        self.step.script = script.to_string();
        self
    }

    /// Set timeout in seconds
    pub fn timeout(mut self, seconds: u64) -> Self {
        self.step.timeout_seconds = seconds;
        self
    }

    /// Set maximum retry attempts
    pub fn retries(mut self, count: u32) -> Self {
        self.step.max_retries = count;
        self
    }

    /// Build the final step
    pub fn build(self) -> Step {
        self.step
    }
}

/// Builder for creating flows with a fluent API
pub struct FlowBuilder {
    flow: Flow,
}

impl FlowBuilder {
    pub fn new(name: &str) -> Self {
        Self {
            flow: Flow::new(name),
        }
    }

    /// Add a step to this flow
    pub fn add_step(mut self, step: Step) -> Self {
        self.flow.steps.push(step);
        self
    }

    /// Build the final flow
    pub fn build(self) -> Flow {
        self.flow
    }
}

/// Convenience function to create a new step builder
pub fn new_step(name: &str) -> StepBuilder {
    StepBuilder::new(name)
}

/// Convenience function to create a new flow builder
pub fn new_flow(name: &str) -> FlowBuilder {
    FlowBuilder::new(name)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_step_builder() {
        let step = new_step("test_step")
            .script("print('hello world');")
            .timeout(10)
            .retries(3)
            .build();

        assert_eq!(step.name, "test_step");
        assert_eq!(step.script, "print('hello world');");
        assert_eq!(step.timeout_seconds, 10);
        assert_eq!(step.max_retries, 3);
    }

    #[test]
    fn test_flow_builder() {
        let step1 = new_step("step1").script("let x = 1;").build();
        let step2 = new_step("step2").script("let y = 2;").build();

        let flow = new_flow("test_flow")
            .add_step(step1)
            .add_step(step2)
            .build();

        assert_eq!(flow.name, "test_flow");
        assert_eq!(flow.steps.len(), 2);
        assert_eq!(flow.steps[0].name, "step1");
        assert_eq!(flow.steps[1].name, "step2");
    }
}
243
rhailib/src/flow/src/executor.rs
Normal file
@@ -0,0 +1,243 @@
//! Simple flow executor with non-blocking step execution

use crate::types::{Flow, Step, FlowStatus, StepStatus};
use crate::state::{FlowState, StepState, StateManager};
use crate::mock_api::MockAPI;
use rhai_dispatcher::RhaiDispatcherBuilder;
use std::sync::Arc;
use tokio::time::{timeout, Duration};

/// Simple flow executor
pub struct FlowExecutor {
    state_manager: Arc<StateManager>,
    mock_api: Arc<MockAPI>,
    redis_url: String,
}

impl FlowExecutor {
    pub async fn new(redis_url: &str) -> Result<Self, Box<dyn std::error::Error>> {
        let state_manager = Arc::new(StateManager::new(redis_url).await?);
        let mock_api = Arc::new(MockAPI::default());

        Ok(Self {
            state_manager,
            mock_api,
            redis_url: redis_url.to_string(),
        })
    }

    /// Execute a flow without blocking the caller
    pub async fn execute_flow(&self, flow: Flow) -> Result<String, Box<dyn std::error::Error>> {
        // Initialize flow state
        let mut flow_state = FlowState::new(flow.id.clone());
        flow_state.status = FlowStatus::Running;
        self.state_manager.save_flow_state(&flow_state).await?;

        // Initialize step states
        for step in &flow.steps {
            let step_state = StepState::new(step.id.clone());
            self.state_manager.save_step_state(&flow.id, &step_state).await?;
        }

        // Spawn flow execution in the background
        let flow_id = flow.id.clone();
        let state_manager = self.state_manager.clone();
        let mock_api = self.mock_api.clone();
        let redis_url = self.redis_url.clone();

        tokio::spawn(async move {
            if let Err(e) = Self::execute_flow_steps(flow, state_manager, mock_api, redis_url).await {
                eprintln!("Flow execution error: {}", e);
            }
        });

        Ok(format!("flow_execution_started:{}", flow_id))
    }

    /// Execute all steps in a flow
    async fn execute_flow_steps(
        flow: Flow,
        state_manager: Arc<StateManager>,
        mock_api: Arc<MockAPI>,
        redis_url: String,
    ) -> Result<(), Box<dyn std::error::Error>> {
        let mut flow_state = state_manager.load_flow_state(&flow.id).await?
            .ok_or("Flow state not found")?;

        // Execute steps sequentially
        for step in &flow.steps {
            flow_state.current_step = Some(step.id.clone());
            state_manager.save_flow_state(&flow_state).await?;

            match Self::execute_step_with_retries(
                step,
                &flow.id,
                state_manager.clone(),
                mock_api.clone(),
                redis_url.clone(),
            ).await {
                Ok(_) => {
                    flow_state.completed_steps.push(step.id.clone());
                }
                Err(e) => {
                    eprintln!("Step {} failed: {}", step.name, e);
                    flow_state.status = FlowStatus::Failed;
                    state_manager.save_flow_state(&flow_state).await?;
                    return Err(e);
                }
            }
        }

        // Mark flow as completed
        flow_state.status = FlowStatus::Completed;
        flow_state.current_step = None;
        state_manager.save_flow_state(&flow_state).await?;

        Ok(())
    }

    /// Execute a single step with retry logic
    async fn execute_step_with_retries(
        step: &Step,
        flow_id: &str,
        state_manager: Arc<StateManager>,
        mock_api: Arc<MockAPI>,
        redis_url: String,
    ) -> Result<(), Box<dyn std::error::Error>> {
        let mut step_state = state_manager.load_step_state(flow_id, &step.id).await?
            .ok_or("Step state not found")?;

        let max_attempts = step.max_retries + 1;

        for attempt in 0..max_attempts {
            step_state.attempt_count = attempt + 1;
            step_state.status = StepStatus::Running;
            state_manager.save_step_state(flow_id, &step_state).await?;

            match Self::execute_single_step(step, &mock_api, &redis_url).await {
                Ok(output) => {
                    step_state.status = StepStatus::Completed;
                    step_state.output = Some(output);
                    state_manager.save_step_state(flow_id, &step_state).await?;
                    return Ok(());
                }
                Err(e) => {
                    if attempt + 1 >= max_attempts {
                        step_state.status = StepStatus::Failed;
                        state_manager.save_step_state(flow_id, &step_state).await?;
                        return Err(e);
                    }
                    // Wait before retrying
                    tokio::time::sleep(Duration::from_millis(1000)).await;
                }
            }
        }

        Err("Max retries exceeded".into())
    }

    /// Execute a single step
    async fn execute_single_step(
        step: &Step,
        mock_api: &MockAPI,
        redis_url: &str,
    ) -> Result<String, Box<dyn std::error::Error>> {
        // Execute with timeout
        let result = timeout(step.timeout(), async {
            // For demo purposes, mock API calls are used instead of real Rhai execution
            // when the script requests them.
            if step.script.contains("mock_api_call") {
                // Extract the endpoint from the script (simple parsing)
                let endpoint = if step.script.contains("stripe_config") {
                    "stripe_config"
                } else if step.script.contains("create_product") {
                    "create_product"
                } else {
                    "default_endpoint"
                };

                mock_api.call(endpoint).await
            } else {
                // For non-mock scripts, execute the Rhai script via the dispatcher
                Self::execute_rhai_script(&step.script, redis_url).await
            }
        }).await;

        match result {
            Ok(Ok(output)) => Ok(output),
            Ok(Err(e)) => Err(e.into()),
            Err(_) => Err("Step execution timed out".into()),
        }
    }

    /// Execute a Rhai script using the dispatcher (simplified)
    async fn execute_rhai_script(
        script: &str,
        redis_url: &str,
    ) -> Result<String, Box<dyn std::error::Error>> {
        let dispatcher = RhaiDispatcherBuilder::new()
            .caller_id("flow_executor")
            .redis_url(redis_url)
            .build()?;

        let result = dispatcher
            .new_play_request()
            .worker_id("flow_worker")
            .script(script)
            .timeout(Duration::from_secs(30))
            .await_response()
            .await;

        match result {
            Ok(task_details) => {
                if task_details.status == "completed" {
                    Ok(task_details.output.unwrap_or_default())
                } else {
                    Err(format!("Script execution failed: {:?}", task_details.error).into())
                }
            }
            Err(e) => Err(format!("Dispatcher error: {}", e).into()),
        }
    }

    /// Get flow status
    pub async fn get_flow_status(&self, flow_id: &str) -> Result<Option<FlowState>, Box<dyn std::error::Error>> {
        self.state_manager.load_flow_state(flow_id).await
    }

    /// Get step status
    pub async fn get_step_status(&self, flow_id: &str, step_id: &str) -> Result<Option<StepState>, Box<dyn std::error::Error>> {
        self.state_manager.load_step_state(flow_id, step_id).await
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::builder::{new_step, new_flow};

    #[tokio::test]
    async fn test_flow_execution() {
        // This test requires Redis to be running.
        // Skip if Redis is not available.
        if std::env::var("SKIP_REDIS_TESTS").is_ok() {
            return;
        }

        let executor = FlowExecutor::new("redis://127.0.0.1/").await.unwrap();

        let step1 = new_step("test_step")
            .script("mock_api_call stripe_config")
            .timeout(5)
            .retries(1)
            .build();

        let flow = new_flow("test_flow")
            .add_step(step1)
            .build();

        let result = executor.execute_flow(flow).await;
        assert!(result.is_ok());
        assert!(result.unwrap().starts_with("flow_execution_started:"));
    }
}
20
rhailib/src/flow/src/lib.rs
Normal file
@@ -0,0 +1,20 @@
//! Simple Flow Manager for Rhai Scripts
//!
//! Provides a minimal flow execution system with builder patterns:
//! - `new_step("name").script("script").timeout(5).retries(2)`
//! - `new_flow("name").add_step(step1).add_step(step2)`
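//!
//! A minimal usage sketch (illustrative only, using the items re-exported below):
//!
//! ```rust,ignore
//! use flow::{new_flow, new_step};
//!
//! let step = new_step("demo").script("40 + 2").timeout(5).retries(2).build();
//! let flow = new_flow("demo_flow").add_step(step).build();
//! assert_eq!(flow.steps.len(), 1);
//! ```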

pub mod types;
pub mod builder;
pub mod executor;
pub mod state;
pub mod mock_api;

pub use types::{Flow, Step, FlowStatus, StepStatus};
pub use builder::{StepBuilder, FlowBuilder, new_step, new_flow};
pub use executor::FlowExecutor;
pub use state::{FlowState, StepState, StateManager};
pub use mock_api::MockAPI;

// Re-export for convenience
pub use rhai_dispatcher::RhaiDispatcherBuilder;
144
rhailib/src/flow/src/mock_api.rs
Normal file
@@ -0,0 +1,144 @@
//! Simple mock API for testing different response types and durations

use serde::{Serialize, Deserialize};
use std::time::Duration;
use std::collections::HashMap;

/// Mock API response types
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum MockResponseType {
    Success,
    Failure,
    Timeout,
}

/// Mock API scenario configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MockScenario {
    pub response_type: MockResponseType,
    pub delay_ms: u64,
    pub response_data: String,
}

impl MockScenario {
    pub fn success(delay_ms: u64, data: &str) -> Self {
        Self {
            response_type: MockResponseType::Success,
            delay_ms,
            response_data: data.to_string(),
        }
    }

    pub fn failure(delay_ms: u64, error: &str) -> Self {
        Self {
            response_type: MockResponseType::Failure,
            delay_ms,
            response_data: error.to_string(),
        }
    }

    pub fn timeout(delay_ms: u64) -> Self {
        Self {
            response_type: MockResponseType::Timeout,
            delay_ms,
            response_data: "Request timed out".to_string(),
        }
    }
}

/// Simple mock API for testing
pub struct MockAPI {
    scenarios: HashMap<String, MockScenario>,
}

impl MockAPI {
    pub fn new() -> Self {
        Self {
            scenarios: HashMap::new(),
        }
    }

    /// Add a mock scenario for an endpoint
    pub fn add_scenario(&mut self, endpoint: &str, scenario: MockScenario) {
        self.scenarios.insert(endpoint.to_string(), scenario);
    }

    /// Call a mock endpoint
    pub async fn call(&self, endpoint: &str) -> Result<String, String> {
        match self.scenarios.get(endpoint) {
            Some(scenario) => {
                // Simulate delay
                tokio::time::sleep(Duration::from_millis(scenario.delay_ms)).await;

                match scenario.response_type {
                    MockResponseType::Success => Ok(scenario.response_data.clone()),
                    MockResponseType::Failure => Err(scenario.response_data.clone()),
                    MockResponseType::Timeout => {
                        // For timeout, we just return an error after the delay
                        Err("Request timed out".to_string())
                    }
                }
            }
            None => Err(format!("Unknown endpoint: {}", endpoint)),
        }
    }

    /// Set up common test scenarios
    pub fn setup_test_scenarios(&mut self) {
        // Fast success
        self.add_scenario("stripe_config", MockScenario::success(100, r#"{"status": "configured"}"#));

        // Slow success
        self.add_scenario("create_product", MockScenario::success(2000, r#"{"id": "prod_123", "name": "Test Product"}"#));

        // Fast failure
        self.add_scenario("invalid_endpoint", MockScenario::failure(50, "Invalid API key"));

        // Timeout scenario
        self.add_scenario("slow_endpoint", MockScenario::timeout(5000));

        // Variable responses for testing retries
        self.add_scenario("flaky_endpoint", MockScenario::failure(500, "Temporary server error"));
    }
}

impl Default for MockAPI {
    fn default() -> Self {
        let mut api = Self::new();
        api.setup_test_scenarios();
        api
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_mock_api_success() {
        let mut api = MockAPI::new();
        api.add_scenario("test", MockScenario::success(10, "success"));

        let result = api.call("test").await;
        assert!(result.is_ok());
        assert_eq!(result.unwrap(), "success");
    }

    #[tokio::test]
    async fn test_mock_api_failure() {
        let mut api = MockAPI::new();
        api.add_scenario("test", MockScenario::failure(10, "error"));

        let result = api.call("test").await;
        assert!(result.is_err());
        assert_eq!(result.unwrap_err(), "error");
    }

    #[tokio::test]
    async fn test_mock_api_unknown_endpoint() {
        let api = MockAPI::new();
        let result = api.call("unknown").await;
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("Unknown endpoint"));
    }
}
100
rhailib/src/flow/src/state.rs
Normal file
@@ -0,0 +1,100 @@
//! Simple state management for flows and steps

use serde::{Serialize, Deserialize};
use crate::types::{FlowStatus, StepStatus};

/// Minimal flow state tracking
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FlowState {
    pub flow_id: String,
    pub status: FlowStatus,
    pub current_step: Option<String>,
    pub completed_steps: Vec<String>,
}

impl FlowState {
    pub fn new(flow_id: String) -> Self {
        Self {
            flow_id,
            status: FlowStatus::Created,
            current_step: None,
            completed_steps: Vec::new(),
        }
    }
}

/// Minimal step state tracking
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StepState {
    pub step_id: String,
    pub status: StepStatus,
    pub attempt_count: u32,
    pub output: Option<String>,
}

impl StepState {
    pub fn new(step_id: String) -> Self {
        Self {
            step_id,
            status: StepStatus::Pending,
            attempt_count: 0,
            output: None,
        }
    }
}

/// Simple Redis-based state manager
pub struct StateManager {
    redis_client: redis::Client,
}

impl StateManager {
    pub async fn new(redis_url: &str) -> Result<Self, Box<dyn std::error::Error>> {
        let client = redis::Client::open(redis_url)?;
        Ok(Self {
            redis_client: client,
        })
    }

    /// Save flow state to Redis
    pub async fn save_flow_state(&self, state: &FlowState) -> Result<(), Box<dyn std::error::Error>> {
        let mut conn = self.redis_client.get_async_connection().await?;
        let key = format!("flow:{}", state.flow_id);
        let json = serde_json::to_string(state)?;
        let _: () = redis::cmd("SET").arg(&key).arg(&json).query_async(&mut conn).await?;
        Ok(())
    }

    /// Load flow state from Redis
    pub async fn load_flow_state(&self, flow_id: &str) -> Result<Option<FlowState>, Box<dyn std::error::Error>> {
        let mut conn = self.redis_client.get_async_connection().await?;
        let key = format!("flow:{}", flow_id);
        let result: Option<String> = redis::cmd("GET").arg(&key).query_async(&mut conn).await?;

        match result {
            Some(json) => Ok(Some(serde_json::from_str(&json)?)),
            None => Ok(None),
        }
    }

    /// Save step state to Redis
    pub async fn save_step_state(&self, flow_id: &str, state: &StepState) -> Result<(), Box<dyn std::error::Error>> {
        let mut conn = self.redis_client.get_async_connection().await?;
        let key = format!("step:{}:{}", flow_id, state.step_id);
        let json = serde_json::to_string(state)?;
        let _: () = redis::cmd("SET").arg(&key).arg(&json).query_async(&mut conn).await?;
        Ok(())
    }

    /// Load step state from Redis
    pub async fn load_step_state(&self, flow_id: &str, step_id: &str) -> Result<Option<StepState>, Box<dyn std::error::Error>> {
        let mut conn = self.redis_client.get_async_connection().await?;
        let key = format!("step:{}:{}", flow_id, step_id);
        let result: Option<String> = redis::cmd("GET").arg(&key).query_async(&mut conn).await?;

        match result {
            Some(json) => Ok(Some(serde_json::from_str(&json)?)),
            None => Ok(None),
        }
    }
}
66
rhailib/src/flow/src/types.rs
Normal file
@@ -0,0 +1,66 @@
//! Core types for the flow manager

use serde::{Serialize, Deserialize};
use std::time::Duration;

/// Simple flow status enumeration
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum FlowStatus {
    Created,
    Running,
    Completed,
    Failed,
}

/// Simple step status enumeration
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum StepStatus {
    Pending,
    Running,
    Completed,
    Failed,
}

/// A single step in a flow
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Step {
    pub id: String,
    pub name: String,
    pub script: String,
    pub timeout_seconds: u64,
    pub max_retries: u32,
}

impl Step {
    pub fn new(name: &str) -> Self {
        Self {
            id: uuid::Uuid::new_v4().to_string(),
            name: name.to_string(),
            script: String::new(),
            timeout_seconds: 30, // default 30 seconds
            max_retries: 0,      // default: no retries
        }
    }

    pub fn timeout(&self) -> Duration {
        Duration::from_secs(self.timeout_seconds)
    }
}

/// A flow containing multiple steps
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Flow {
    pub id: String,
    pub name: String,
    pub steps: Vec<Step>,
}

impl Flow {
    pub fn new(name: &str) -> Self {
        Self {
            id: uuid::Uuid::new_v4().to_string(),
            name: name.to_string(),
            steps: Vec::new(),
        }
    }
}
8
rhailib/src/lib.rs
Normal file
@@ -0,0 +1,8 @@
//! Rhailib - Distributed Rhai Scripting Library
//!
//! This library provides infrastructure for executing Rhai scripts in a distributed
//! manner using Redis as a message broker and task queue.

// Re-export commonly used types
pub use redis;
pub use serde_json;
8
rhailib/src/macros/Cargo.toml
Normal file
@@ -0,0 +1,8 @@
[package]
name = "rhailib-macros"
version = "0.1.0"
edition = "2024"

[dependencies]
rhai = { version = "1.21.0", features = ["std", "sync", "decimal", "internals"] }
serde = { version = "1.0", features = ["derive"] }
380
rhailib/src/macros/_archive/lib.rs
Normal file
@@ -0,0 +1,380 @@
//! # Rhai Authorization Crate
//! This crate provides authorization mechanisms for Rhai functions, particularly those interacting with a database.
//! It includes helper functions for authorization checks and macros to simplify the registration
//! of authorized Rhai functions.
//! ## Features:
//! - `is_super_admin`: Checks if a caller (identified by a public key) is a super admin.
//! - `can_access_resource`: Checks if a caller has specific access rights to a resource, using a database connection.
//! - `get_caller_public_key`: Helper to extract `CALLER_ID` from the Rhai `NativeCallContext`.
//! - `id_from_i64_to_u32`: Helper to convert `i64` Rhai IDs to `u32` Rust IDs.
//! - `register_authorized_get_by_id_fn!`: Macro to register a Rhai function that retrieves a single item by ID, with authorization checks.
//! - `register_authorized_list_fn!`: Macro to register a Rhai function that lists multiple items, filtering them based on authorization.
//! ## Usage:
//! 1. Use the macros to register your Rhai functions, providing a database connection (`Arc<OurDB>`) and necessary type/name information.
//! 2. The macros internally use `can_access_resource` for authorization checks.
//! 3. Ensure `CALLER_ID` is set in the Rhai engine's scope before calling authorized functions.
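//!
//! A minimal registration sketch (illustrative only; `my_module` and
//! `RhaiCollection` are hypothetical names, not defined in this crate):
//!
//! ```rust,ignore
//! register_authorized_get_by_id_fn!(
//!     module: &mut my_module,
//!     rhai_fn_name: "get_collection",
//!     resource_type_str: "Collection",
//!     rhai_return_rust_type: RhaiCollection
//! );
//! ```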

use rhai::{EvalAltResult, Position};
use std::convert::TryFrom;

/// Extracts the `CALLER_ID` string constant from the Rhai `NativeCallContext`.
/// This key is used to identify the caller for authorization checks.
/// It first checks the current `Scope` and then falls back to the global constants cache.
///
/// # Arguments
/// * `context`: The Rhai `NativeCallContext` of the currently executing function.
///

/// Converts an `i64` (common Rhai integer type) to a `u32` (common Rust ID type).
///
/// # Arguments
/// * `id_i64`: The `i64` value to convert.
///
/// # Errors
/// Returns `Err(EvalAltResult::ErrorMismatchDataType)` if the `i64` value cannot be represented as a `u32`.
pub fn id_from_i64_to_u32(id_i64: i64) -> Result<u32, Box<EvalAltResult>> {
    u32::try_from(id_i64).map_err(|_| {
        Box::new(EvalAltResult::ErrorMismatchDataType(
            "u32".to_string(),
            format!("i64 value ({}) that cannot be represented as u32", id_i64),
            Position::NONE,
        ))
    })
}

/// Extracts the `CALLER_ID` string constant from the Rhai `NativeCallContext`'s tag.
/// This key is used to identify the caller for authorization checks.

/// Macro to register a Rhai function that retrieves a single resource by its ID, with authorization.
///
/// The macro handles:
/// - Argument parsing (ID).
/// - Caller identification via `CALLER_ID`.
/// - Authorization check using the standalone `can_access_resource` function from `heromodels`.
/// - Database call to fetch the resource.
/// - Error handling for type mismatches, authorization failures, DB errors, and not found errors.
///
/// # Arguments
/// * `module`: Mutable reference to the Rhai `Module`.
/// * `rhai_fn_name`: String literal for the Rhai function name (e.g., "get_collection").
/// * `resource_type_str`: String literal for the resource type (e.g., "Collection"), used in authorization checks and error messages.
/// * `rhai_return_rust_type`: Rust type of the resource returned by the DB and Rhai function (e.g., `RhaiCollection`).
#[macro_export]
macro_rules! register_authorized_get_by_id_fn {
    (
        module: $module:expr,
        rhai_fn_name: $rhai_fn_name:expr, // String literal for the Rhai function name (e.g., "get_collection")
        resource_type_str: $resource_type_str:expr, // String literal for the resource type (e.g., "Collection")
        rhai_return_rust_type: $rhai_return_rust_type:ty // Rust type of the resource returned (e.g., `RhaiCollection`)
    ) => {
        FuncRegistration::new($rhai_fn_name).set_into_module(
            $module,
            move |context: rhai::NativeCallContext,
                  id_val: i64|
                  -> Result<$rhai_return_rust_type, Box<EvalAltResult>> {
                let actual_id: u32 = $crate::id_from_i64_to_u32(id_val)?;

                // Inlined logic to get caller public key
                let tag_map = context
                    .tag()
                    .and_then(|tag| tag.read_lock::<rhai::Map>())
                    .ok_or_else(|| {
                        Box::new(EvalAltResult::ErrorRuntime(
                            "Context tag must be a Map.".into(),
                            context.position(),
                        ))
                    })?;

                let pk_dynamic = tag_map.get("CALLER_ID").ok_or_else(|| {
                    Box::new(EvalAltResult::ErrorRuntime(
                        "'CALLER_ID' not found in context tag Map.".into(),
                        context.position(),
                    ))
                })?;

                let db_path = tag_map.get("DB_PATH").ok_or_else(|| {
                    Box::new(EvalAltResult::ErrorRuntime(
                        "'DB_PATH' not found in context tag Map.".into(),
                        context.position(),
                    ))
                })?;

                let db_path = db_path.clone().into_string()?;

                let circle_pk = tag_map.get("CONTEXT_ID").ok_or_else(|| {
                    Box::new(EvalAltResult::ErrorRuntime(
                        "'CONTEXT_ID' not found in context tag Map.".into(),
                        context.position(),
                    ))
                })?;

                let circle_pk = circle_pk.clone().into_string()?;

                let db_path = format!("{}/{}", db_path, circle_pk);
                let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));

                let caller_pk_str = pk_dynamic.clone().into_string()?;

                println!("Checking access for public key: {}", caller_pk_str);
                if circle_pk != caller_pk_str {
                    // Use the standalone can_access_resource function from heromodels
                    let has_access = heromodels::models::access::access::can_access_resource(
                        db.clone(),
                        &caller_pk_str,
                        actual_id,
                        $resource_type_str,
                    );

                    if !has_access {
                        return Err(Box::new(EvalAltResult::ErrorRuntime(
                            format!("Access denied for public key: {}", caller_pk_str).into(),
                            context.position(),
                        )));
                    }
                }

                let result = db
                    .collection::<$rhai_return_rust_type>()
                    .unwrap()
                    .get_by_id(actual_id)
                    .map_err(|e| {
                        println!(
                            "Database error fetching {} with ID: {}",
                            $resource_type_str, actual_id
                        );
                        Box::new(EvalAltResult::ErrorRuntime(
                            format!("Database error fetching {}: {:?}", $resource_type_str, e)
                                .into(),
                            context.position(),
                        ))
                    })?
                    .ok_or_else(|| {
                        Box::new(EvalAltResult::ErrorRuntime(
                            format!(
                                "Database error fetching {} with ID: {}",
                                $resource_type_str, actual_id
                            )
                            .into(),
                            context.position(),
                        ))
                    })?;
                Ok(result)
            },
        );
    };
}

// Macro to register a Rhai function that creates a resource, with authorization.
#[macro_export]
macro_rules! register_authorized_create_by_id_fn {
    (
        module: $module:expr,
        rhai_fn_name: $rhai_fn_name:expr, // String literal for the Rhai function name (e.g., "create_collection")
        resource_type_str: $resource_type_str:expr, // String literal for the resource type (e.g., "Collection")
        rhai_return_rust_type: $rhai_return_rust_type:ty // Rust type of the resource returned (e.g., `RhaiCollection`)
    ) => {
        FuncRegistration::new($rhai_fn_name).set_into_module(
            $module,
            move |context: rhai::NativeCallContext, object: $rhai_return_rust_type| -> Result<$rhai_return_rust_type, Box<EvalAltResult>> {

                // Inlined logic to get caller public key
                let tag_map = context
                    .tag()
                    .and_then(|tag| tag.read_lock::<rhai::Map>())
                    .ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("Context tag must be a Map.".into(), context.position())))?;

                let pk_dynamic = tag_map.get("CALLER_ID")
                    .ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CALLER_ID' not found in context tag Map.".into(), context.position())))?;

                let db_path = tag_map.get("DB_PATH")
                    .ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'DB_PATH' not found in context tag Map.".into(), context.position())))?;

                let db_path = db_path.clone().into_string()?;

                let circle_pk = tag_map.get("CONTEXT_ID")
                    .ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CONTEXT_ID' not found in context tag Map.".into(), context.position())))?;

                let circle_pk = circle_pk.clone().into_string()?;

                let db_path = format!("{}/{}", db_path, circle_pk);
                let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));

                let caller_pk_str = pk_dynamic.clone().into_string()?;

                if circle_pk != caller_pk_str {
                    let is_circle_member = heromodels::models::access::access::is_circle_member(
                        db.clone(),
                        &caller_pk_str,
                    );
                    if !is_circle_member {
                        // TODO: check if caller pk is member of circle
                        return Err(Box::new(EvalAltResult::ErrorRuntime(
                            format!("Insufficient authorization. Caller public key {} does not match circle public key {}", caller_pk_str, circle_pk).into(),
                            context.position(),
                        )));
                    }
                }

                let result = db.set(&object).map_err(|e| {
                    Box::new(EvalAltResult::ErrorRuntime(
                        format!("Database error creating {}: {:?}", $resource_type_str, e).into(),
                        context.position(),
                    ))
                })?;
                Ok(result.1)
            },
        );
    };
}
|
||||
|
||||
// Macro to register a Rhai function that retrieves a single resource by its ID, with authorization.
|
||||
#[macro_export]
|
||||
macro_rules! register_authorized_delete_by_id_fn {
|
||||
(
|
||||
module: $module:expr,
|
||||
rhai_fn_name: $rhai_fn_name:expr, // String literal for the Rhai function name (e.g., "get_collection")
|
||||
resource_type_str: $resource_type_str:expr, // String literal for the resource type (e.g., "Collection")
|
||||
rhai_return_rust_type: $rhai_return_rust_type:ty // Rust type of the resource returned (e.g., `RhaiCollection`)
|
||||
) => {
|
||||
FuncRegistration::new($rhai_fn_name).set_into_module(
|
||||
$module,
|
||||
move |context: rhai::NativeCallContext, id_val: i64| -> Result<(), Box<EvalAltResult>> {
|
||||
let actual_id: u32 = $crate::id_from_i64_to_u32(id_val)?;
|
||||
|
||||
// Inlined logic to get caller public key
|
||||
let tag_map = context
|
||||
.tag()
|
||||
.and_then(|tag| tag.read_lock::<rhai::Map>())
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("Context tag must be a Map.".into(), context.position())))?;
|
||||
|
||||
let pk_dynamic = tag_map.get("CALLER_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CALLER_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let db_path = tag_map.get("DB_PATH")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'DB_PATH' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let db_path = db_path.clone().into_string()?;
|
||||
|
||||
let circle_pk = tag_map.get("CONTEXT_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CONTEXT_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let circle_pk = circle_pk.clone().into_string()?;
|
||||
|
||||
let db_path = format!("{}/{}", db_path, circle_pk);
|
||||
let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));
|
||||
|
||||
let caller_pk_str = pk_dynamic.clone().into_string()?;
|
||||
|
||||
if circle_pk != caller_pk_str {
|
||||
let is_circle_member = heromodels::models::access::access::is_circle_member(
|
||||
db.clone(),
|
||||
&caller_pk_str,
|
||||
);
|
||||
if !is_circle_member {
|
||||
// Caller is neither the circle owner nor a circle member.
|
||||
return Err(Box::new(EvalAltResult::ErrorRuntime(
|
||||
format!("Insufficient authorization. Caller public key {} does not match circle public key {}", caller_pk_str, circle_pk).into(),
|
||||
context.position(),
|
||||
)));
|
||||
}
|
||||
}
|
||||
|
||||
let result = db
|
||||
.collection::<$rhai_return_rust_type>()
|
||||
.unwrap()
|
||||
.delete_by_id(actual_id)
|
||||
.map_err(|e| {
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
format!("Database error deleting {}: {:?}", $resource_type_str, e).into(),
|
||||
context.position(),
|
||||
))
|
||||
})?;
|
||||
Ok(())
|
||||
},
|
||||
);
|
||||
};
|
||||
}
|
||||
|
||||
/// Macro to register a Rhai function that lists all resources of a certain type, with authorization.
|
||||
///
|
||||
/// The macro handles:
|
||||
/// - Caller identification via `CALLER_ID`.
|
||||
/// - Fetching all items of a specific type from the database.
|
||||
/// - Filtering the items based on the standalone `can_access_resource` function for each item.
|
||||
/// - Wrapping the authorized items in a specified collection type (e.g., `RhaiCollectionArray`).
|
||||
/// - Error handling for DB errors during fetch or authorization checks.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `module`: Mutable reference to the Rhai `Module`.
|
||||
/// * `rhai_fn_name`: String literal for the Rhai function name (e.g., "list_collections").
|
||||
/// * `resource_type_str`: String literal for the resource type (e.g., "Collection"), used in authorization checks.
|
||||
/// * `rhai_return_rust_type`: Rust type of the resource item (e.g., `RhaiCollection`).
|
||||
/// * `item_id_accessor`: Identifier for the method on `rhai_return_rust_type` that returns its ID (e.g., `id`).
|
||||
/// * `rhai_return_wrapper_type`: Rust type that wraps a `Vec` of `rhai_return_rust_type` for Rhai (e.g., `RhaiCollectionArray`).
|
||||
#[macro_export]
|
||||
macro_rules! register_authorized_list_fn {
|
||||
(
|
||||
module: $module:expr,
|
||||
rhai_fn_name: $rhai_fn_name:expr,
|
||||
resource_type_str: $resource_type_str:expr,
|
||||
rhai_return_rust_type: $rhai_return_rust_type:ty,
|
||||
rhai_return_wrapper_type: $rhai_return_wrapper_type:ty
|
||||
) => {
|
||||
FuncRegistration::new($rhai_fn_name).set_into_module(
|
||||
$module,
|
||||
move |context: rhai::NativeCallContext| -> Result<$rhai_return_wrapper_type, Box<EvalAltResult>> {
|
||||
// Inlined logic to get caller public key
|
||||
let tag_map = context
|
||||
.tag()
|
||||
.and_then(|tag| tag.read_lock::<rhai::Map>())
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("Context tag must be a Map.".into(), context.position())))?;
|
||||
|
||||
let pk_dynamic = tag_map.get("CALLER_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CALLER_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let caller_pk_str = pk_dynamic.clone().into_string()?;
|
||||
|
||||
let db_path = tag_map.get("DB_PATH")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'DB_PATH' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let db_path = db_path.clone().into_string()?;
|
||||
|
||||
let circle_pk = tag_map.get("CONTEXT_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CONTEXT_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let circle_pk = circle_pk.clone().into_string()?;
|
||||
|
||||
let db_path = format!("{}/{}", db_path, circle_pk);
|
||||
let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));
|
||||
|
||||
let all_items: Vec<$rhai_return_rust_type> = db
|
||||
.collection::<$rhai_return_rust_type>()
|
||||
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(format!("{:?}", e).into(), Position::NONE)))?
|
||||
.get_all()
|
||||
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(format!("{:?}", e).into(), Position::NONE)))?;
|
||||
|
||||
let authorized_items: Vec<$rhai_return_rust_type> = all_items
|
||||
.into_iter()
|
||||
.filter(|item| {
|
||||
let resource_id = item.id();
|
||||
heromodels::models::access::access::can_access_resource(
|
||||
db.clone(),
|
||||
&caller_pk_str,
|
||||
resource_id,
|
||||
$resource_type_str,
|
||||
)
|
||||
})
|
||||
.collect();
|
||||
|
||||
Ok(authorized_items.into())
|
||||
},
|
||||
);
|
||||
};
|
||||
}
|
303
rhailib/src/macros/docs/ARCHITECTURE.md
Normal file
@@ -0,0 +1,303 @@
|
||||
# Architecture of the `macros` Crate
|
||||
|
||||
The `macros` crate provides authorization mechanisms and procedural macros for Rhai functions that interact with databases. It implements a comprehensive security layer that ensures proper access control for all database operations within the Rhai scripting environment.
|
||||
|
||||
## Core Architecture
|
||||
|
||||
The crate follows a macro-driven approach to authorization, providing declarative macros that generate secure database access functions:
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
A[macros Crate] --> B[Authorization Functions]
|
||||
A --> C[Utility Functions]
|
||||
A --> D[Registration Macros]
|
||||
|
||||
B --> B1[Context Extraction]
|
||||
B --> B2[Access Control Checks]
|
||||
B --> B3[Circle Membership Validation]
|
||||
|
||||
C --> C1[ID Conversion]
|
||||
C --> C2[Error Handling]
|
||||
|
||||
D --> D1[register_authorized_get_by_id_fn!]
|
||||
D --> D2[register_authorized_create_by_id_fn!]
|
||||
D --> D3[register_authorized_delete_by_id_fn!]
|
||||
D --> D4[register_authorized_list_fn!]
|
||||
|
||||
D1 --> E[Generated Rhai Functions]
|
||||
D2 --> E
|
||||
D3 --> E
|
||||
D4 --> E
|
||||
|
||||
E --> F[Database Operations]
|
||||
F --> G[Secure Data Access]
|
||||
```
|
||||
|
||||
## Security Model
|
||||
|
||||
### Authentication Context
|
||||
|
||||
All operations require authentication context passed through Rhai's `NativeCallContext`:
|
||||
|
||||
- **`CALLER_ID`**: Identifies the requesting user
|
||||
- **`CONTEXT_ID`**: Identifies the target context (the circle)
|
||||
- **`DB_PATH`**: Specifies the database location
|
||||
|
||||
### Authorization Levels
|
||||
|
||||
1. **Owner Access**: Direct access when `CALLER_ID == CONTEXT_ID`
|
||||
2. **Circle Member Access**: Verified through `is_circle_member()` function
|
||||
3. **Resource-Specific Access**: Granular permissions via `can_access_resource()`
|
||||
|
||||
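The generated functions apply these checks selectively — reads rely on the resource-level check, while creates and deletes rely on the membership check — but the overall decision can be summarized in a condensed sketch. This is not the exact generated code; it assumes the `is_circle_member` and `can_access_resource` signatures shown later in this document:

```rust
use std::sync::Arc;
use heromodels::db::hero::OurDB;

/// Condensed sketch of the layered authorization decision.
fn is_authorized(
    db: Arc<OurDB>,
    caller_id: &str,
    context_id: &str,
    resource_id: u32,
    resource_type: &str,
) -> bool {
    // Level 1: owner access — the caller is the circle itself.
    if caller_id == context_id {
        return true;
    }
    // Level 2: circle membership.
    if heromodels::models::access::access::is_circle_member(db.clone(), caller_id) {
        return true;
    }
    // Level 3: resource-specific grant.
    heromodels::models::access::access::can_access_resource(db, caller_id, resource_id, resource_type)
}
```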
### Access Control Flow
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant Script as Rhai Script
|
||||
participant Macro as Generated Function
|
||||
participant Auth as Authorization Layer
|
||||
participant DB as Database
|
||||
|
||||
Script->>Macro: Call authorized function
|
||||
Macro->>Auth: Extract caller context
|
||||
Auth->>Auth: Validate CALLER_ID
|
||||
Auth->>Auth: Check circle membership
|
||||
Auth->>Auth: Verify resource access
|
||||
Auth->>DB: Execute database operation
|
||||
DB->>Macro: Return result
|
||||
Macro->>Script: Return authorized data
|
||||
```
|
||||
|
||||
## Core Components
|
||||
|
||||
### 1. Utility Functions
|
||||
|
||||
#### ID Conversion (`id_from_i64_to_u32`)
|
||||
```rust
|
||||
pub fn id_from_i64_to_u32(id_i64: i64) -> Result<u32, Box<EvalAltResult>>
|
||||
```
|
||||
Safely converts Rhai's `i64` integers to Rust's `u32` IDs with proper error handling.
|
||||
|
||||
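Condensed from the generated closures, this is typically the first step of any ID-taking function (the surrounding closure is shown only as a fragment, for context):

```rust
move |context: rhai::NativeCallContext, id_val: i64| -> Result<(), Box<EvalAltResult>> {
    // Rhai scripts pass i64; the database expects u32.
    let actual_id: u32 = id_from_i64_to_u32(id_val)?; // rejects negative or out-of-range values
    // ... authorization check and database lookup follow ...
    Ok(())
}
```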
### 2. Authorization Macros
|
||||
|
||||
#### Get By ID (`register_authorized_get_by_id_fn!`)
|
||||
Generates functions for retrieving single resources by ID with authorization checks.
|
||||
|
||||
**Features:**
|
||||
- ID validation and conversion
|
||||
- Caller authentication
|
||||
- Resource-specific access control
|
||||
- Database error handling
|
||||
- Not found error handling
|
||||
|
||||
#### Create Resource (`register_authorized_create_by_id_fn!`)
|
||||
Generates functions for creating new resources with authorization.
|
||||
|
||||
**Features:**
|
||||
- Circle membership validation
|
||||
- Object persistence
|
||||
- Creation authorization
|
||||
- Database transaction handling
|
||||
|
||||
#### Delete By ID (`register_authorized_delete_by_id_fn!`)
|
||||
Generates functions for deleting resources by ID with authorization.
|
||||
|
||||
**Features:**
|
||||
- Deletion authorization
|
||||
- Circle membership validation
|
||||
- Cascade deletion handling
|
||||
- Audit trail support
|
||||
|
||||
#### List Resources (`register_authorized_list_fn!`)
|
||||
Generates functions for listing resources with filtering based on access rights.
|
||||
|
||||
**Features:**
|
||||
- Bulk authorization checking
|
||||
- Result filtering
|
||||
- Collection wrapping
|
||||
- Performance optimization
|
||||
|
||||
## Generated Function Architecture
|
||||
|
||||
### Function Signature Pattern
|
||||
|
||||
All generated functions follow a consistent pattern:
|
||||
|
||||
```rust
|
||||
move |context: rhai::NativeCallContext, /* parameters */| -> Result<ReturnType, Box<EvalAltResult>>
|
||||
```
|
||||
|
||||
### Context Extraction Pattern
|
||||
|
||||
```rust
|
||||
let tag_map = context
|
||||
.tag()
|
||||
.and_then(|tag| tag.read_lock::<rhai::Map>())
|
||||
.ok_or_else(|| /* error */)?;
|
||||
|
||||
let caller_pk = tag_map.get("CALLER_ID")?.into_string()?;
|
||||
let context_id = tag_map.get("CONTEXT_ID")?.into_string()?;
|
||||
let db_path = tag_map.get("DB_PATH")?.into_string()?;
|
||||
```
|
||||
|
||||
### Database Connection Pattern
|
||||
|
||||
```rust
|
||||
let db_path = format!("{}/{}", db_path, context_id);
|
||||
let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));
|
||||
```
|
||||
|
||||
## Authorization Strategies
|
||||
|
||||
### 1. Circle-Based Authorization
|
||||
|
||||
```rust
|
||||
if context_id != caller_pk_str {
|
||||
let is_circle_member = heromodels::models::access::access::is_circle_member(
|
||||
db.clone(),
|
||||
&caller_pk_str,
|
||||
);
|
||||
if !is_circle_member {
|
||||
return Err(/* authorization error */);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Resource-Specific Authorization
|
||||
|
||||
```rust
|
||||
let has_access = heromodels::models::access::access::can_access_resource(
|
||||
db.clone(),
|
||||
&caller_pk_str,
|
||||
actual_id,
|
||||
resource_type_str,
|
||||
);
|
||||
|
||||
if !has_access {
|
||||
return Err(/* access denied error */);
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Bulk Authorization (for lists)
|
||||
|
||||
```rust
|
||||
let authorized_items: Vec<ResourceType> = all_items
|
||||
.into_iter()
|
||||
.filter(|item| {
|
||||
let resource_id = item.id();
|
||||
heromodels::models::access::access::can_access_resource(
|
||||
db.clone(),
|
||||
&caller_pk_str,
|
||||
resource_id,
|
||||
resource_type_str,
|
||||
)
|
||||
})
|
||||
.collect();
|
||||
```
|
||||
|
||||
## Error Handling Architecture
|
||||
|
||||
### Error Categories
|
||||
|
||||
1. **Context Errors**: Missing or invalid authentication context
|
||||
2. **Type Conversion Errors**: Invalid ID formats or type mismatches
|
||||
3. **Authorization Errors**: Access denied or insufficient permissions
|
||||
4. **Database Errors**: Connection failures or query errors
|
||||
5. **Not Found Errors**: Requested resources don't exist
|
||||
|
||||
### Error Propagation Pattern
|
||||
|
||||
```rust
|
||||
.map_err(|e| {
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
format!("Database error: {:?}", e).into(),
|
||||
context.position(),
|
||||
))
|
||||
})?
|
||||
```
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### Database Connection Management
|
||||
|
||||
- **Connection Per Operation**: Each function creates its own database connection
|
||||
- **Path-Based Isolation**: Database paths include circle identifiers for isolation
|
||||
- **Connection Pooling**: Relies on underlying database implementation
|
||||
|
||||
### Authorization Caching
|
||||
|
||||
- **No Caching**: Authorization checks are performed for each operation
|
||||
- **Stateless Design**: No session state maintained between calls
|
||||
- **Fresh Validation**: Ensures up-to-date permission checking
|
||||
|
||||
### Bulk Operations Optimization
|
||||
|
||||
- **Filtered Iteration**: List operations filter results after database fetch
|
||||
- **Lazy Evaluation**: Authorization checks only performed on accessed items
|
||||
- **Memory Efficiency**: Results collected into appropriate wrapper types
|
||||
|
||||
## Integration Patterns
|
||||
|
||||
### Macro Usage in DSL Modules
|
||||
|
||||
```rust
|
||||
register_authorized_get_by_id_fn!(
|
||||
module: &mut module,
|
||||
rhai_fn_name: "get_company",
|
||||
resource_type_str: "Company",
|
||||
rhai_return_rust_type: heromodels::models::biz::company::Company
|
||||
);
|
||||
```
|
||||
|
||||
### Context Setup in Engine
|
||||
|
||||
```rust
|
||||
let mut context_map = rhai::Map::new();
|
||||
context_map.insert("CALLER_ID".into(), caller_pk.into());
|
||||
context_map.insert("CONTEXT_ID".into(), context_id.into());
|
||||
context_map.insert("DB_PATH".into(), db_path.into());
|
||||
engine.set_default_tag(Dynamic::from(context_map));
|
||||
```
|
||||
|
||||
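With the tag in place, a script evaluated on this engine can call the generated functions directly. A minimal sketch, reusing the `get_company` registration from the example above (the ID is illustrative and error handling is elided):

```rust
// Hypothetical script call; `get_company` was registered via the macro above.
let company: heromodels::models::biz::company::Company = engine.eval("get_company(42)")?;
```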
## Security Considerations
|
||||
|
||||
### Authentication Requirements
|
||||
|
||||
- **Mandatory Context**: All operations require valid authentication context
|
||||
- **Public Key Validation**: Caller identity verified through cryptographic keys
|
||||
- **Circle Membership**: Hierarchical access control through circle membership
|
||||
|
||||
### Authorization Granularity
|
||||
|
||||
- **Resource-Level**: Individual resource access control
|
||||
- **Operation-Level**: Different permissions for read/write/delete operations
|
||||
- **Circle-Level**: Organization-based access boundaries
|
||||
|
||||
### Audit and Logging
|
||||
|
||||
- **Operation Logging**: All database operations include caller identification
|
||||
- **Access Logging**: Authorization decisions are logged for audit trails
|
||||
- **Error Logging**: Failed authorization attempts are recorded
|
||||
|
||||
## Extensibility
|
||||
|
||||
### Adding New Operations
|
||||
|
||||
1. Create new macro following existing patterns
|
||||
2. Implement authorization logic specific to operation
|
||||
3. Add error handling for operation-specific failures
|
||||
4. Register with DSL modules using macro
|
||||
|
||||
### Custom Authorization Logic
|
||||
|
||||
```rust
|
||||
// Custom authorization can be added within macro implementations
|
||||
if requires_special_permission {
|
||||
let has_special_access = check_special_permission(db.clone(), &caller_pk_str);
|
||||
if !has_special_access {
|
||||
return Err(/* custom error */);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This architecture provides a robust, secure foundation for database operations within the Rhai scripting environment while maintaining flexibility for future extensions and customizations.
|
208
rhailib/src/macros/examples/access_control.rs
Normal file
@@ -0,0 +1,208 @@
|
||||
use macros::{register_authorized_get_by_id_fn, register_authorized_list_fn};
|
||||
use rhai::{Dynamic, Engine, Module, Position, Scope};
|
||||
use std::sync::Arc;
|
||||
|
||||
// Import DB traits with an alias for the Collection trait to avoid naming conflicts.
|
||||
// Import DB traits from heromodels::db as suggested by compiler errors.
|
||||
use heromodels::db::{Collection as DbCollection, Db};
|
||||
use heromodels::{
|
||||
db::hero::OurDB,
|
||||
models::access::access::Access,
|
||||
models::library::collection::Collection, // Actual data model for single items
|
||||
models::library::rhai::RhaiCollectionArray, // Wrapper for arrays of collections
|
||||
};
|
||||
|
||||
use rhai::{EvalAltResult, FuncRegistration}; // For macro expansion
|
||||
|
||||
// Rewritten to match the new `Access` model structure.
|
||||
fn grant_access(db: &Arc<OurDB>, user_pk: &str, resource_type: &str, resource_id: u32) {
|
||||
let access_record = Access::new()
|
||||
.circle_pk(user_pk.to_string())
|
||||
.object_type(resource_type.to_string())
|
||||
.object_id(resource_id)
|
||||
.contact_id(0)
|
||||
.group_id(0);
|
||||
|
||||
db.set(&access_record).expect("Failed to set access record");
|
||||
}
|
||||
|
||||
// No changes needed here, but it relies on the new imports to compile.
|
||||
fn register_example_module(engine: &mut Engine, db: Arc<OurDB>) {
|
||||
let mut module = Module::new();
|
||||
|
||||
register_authorized_get_by_id_fn!(
|
||||
module: &mut module,
|
||||
rhai_fn_name: "get_collection",
|
||||
resource_type_str: "Collection",
|
||||
rhai_return_rust_type: heromodels::models::library::collection::Collection // Use Collection struct
|
||||
);
|
||||
|
||||
register_authorized_list_fn!(
|
||||
module: &mut module,
|
||||
db_clone: db.clone(),
|
||||
rhai_fn_name: "list_all_collections",
|
||||
resource_type_str: "Collection",
|
||||
rhai_return_rust_type: heromodels::models::library::collection::Collection, // Use Collection struct
|
||||
item_id_accessor: id, // Assumes Collection has an id() method that returns u32
|
||||
rhai_return_wrapper_type: heromodels::models::library::rhai::RhaiCollectionArray // Wrapper type for Rhai
|
||||
);
|
||||
|
||||
engine.register_global_module(module.into());
|
||||
}
|
||||
|
||||
fn create_alice_engine(db_dir: &str, alice_pk: &str) -> Engine {
|
||||
let mut engine = Engine::new();
|
||||
|
||||
let db_path = format!("{}/{}", db_dir, alice_pk);
|
||||
let db = Arc::new(OurDB::new(&db_path, false).expect("Failed to create DB"));
|
||||
|
||||
// Populate DB using the new `create_collection` helper.
|
||||
// Ownership is no longer on the collection itself, so we don't need owner_pk here.
|
||||
let coll = Collection::new()
|
||||
.title("My new collection")
|
||||
.description("This is a new collection");
|
||||
|
||||
db.set(&coll).expect("Failed to set collection");
|
||||
|
||||
let coll1 = Collection::new()
|
||||
.title("Alice's Private Collection")
|
||||
.description("This is Alice's private collection");
|
||||
let coll3 = Collection::new()
|
||||
.title("General Collection")
|
||||
.description("This is a general collection");
|
||||
|
||||
db.set(&coll1).expect("Failed to set collection");
|
||||
db.set(&coll3).expect("Failed to set collection");
|
||||
|
||||
// Grant access based on the new model.
|
||||
grant_access(&db, "alice_pk", "Collection", coll1.id());
|
||||
grant_access(&db, "user_pk", "Collection", coll3.id());
|
||||
|
||||
register_example_module(&mut engine, db.clone());
|
||||
let mut db_config = rhai::Map::new();
|
||||
db_config.insert("DB_PATH".into(), db_dir.clone().into());
|
||||
db_config.insert("CONTEXT_ID".into(), "alice_pk".into());
|
||||
engine.set_default_tag(Dynamic::from(db_config));
|
||||
engine
|
||||
}
|
||||
|
||||
fn create_bob_engine(db_dir: &str, bob_pk: &str) -> Engine {
|
||||
let mut engine = Engine::new();
|
||||
|
||||
let db_path = format!("{}/{}", db_dir, bob_pk);
|
||||
let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));
|
||||
|
||||
let coll2 = Collection::new()
|
||||
.title("Bob's Shared Collection")
|
||||
.description("This is Bob's shared collection Alice has access.");
|
||||
|
||||
db.set(&coll2).expect("Failed to set collection");
|
||||
grant_access(&db, "alice_pk", "Collection", coll2.id());
|
||||
|
||||
register_example_module(&mut engine, db.clone());
|
||||
let mut db_config = rhai::Map::new();
|
||||
db_config.insert("DB_PATH".into(), db_dir.clone().into());
|
||||
db_config.insert("CONTEXT_ID".into(), "bob_pk".into());
|
||||
engine.set_default_tag(Dynamic::from(db_config));
|
||||
engine
|
||||
}
|
||||
|
||||
fn create_user_engine(db_dir: &str, user_pk: &str) -> Engine {
|
||||
let mut engine = Engine::new();
|
||||
|
||||
let db_path = format!("{}/{}", db_dir, user_pk);
|
||||
let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));
|
||||
register_example_module(&mut engine, db.clone());
|
||||
let mut db_config = rhai::Map::new();
|
||||
db_config.insert("DB_PATH".into(), db_dir.clone().into());
|
||||
db_config.insert("CONTEXT_ID".into(), "user_pk".into());
|
||||
engine.set_default_tag(Dynamic::from(db_config));
|
||||
engine
|
||||
}
|
||||
|
||||
fn main() -> Result<(), Box<rhai::EvalAltResult>> {
|
||||
let db_path = format!("{}/hero/db", std::env::var("HOME").unwrap());
|
||||
let alice_pk = "alice_pk";
|
||||
let bob_pk = "bob_pk";
|
||||
let user_pk = "user_pk";
|
||||
|
||||
let mut engine_alice = create_alice_engine(&db_path, alice_pk);
|
||||
let mut engine_bob = create_bob_engine(&db_path, bob_pk);
|
||||
let mut engine_user = create_user_engine(&db_path, user_pk);
|
||||
|
||||
println!("--------------------------");
|
||||
println!("--- Rhai Authorization Example ---");
|
||||
|
||||
let mut scope = Scope::new();
|
||||
|
||||
// Create a Dynamic value holding your DB path or a config object
|
||||
{
|
||||
let mut tag_dynamic = engine_alice.default_tag_mut().as_map_mut().unwrap();
|
||||
tag_dynamic.insert("CALLER_ID".into(), "alice_pk".into());
|
||||
}
|
||||
// engine_alice.set_default_tag(Dynamic::from(tag_dynamic.clone()));
|
||||
|
||||
println!("Alice accessing her collection 1: Success, title"); // Access field directly
|
||||
let result = engine_alice.eval::<Option<Collection>>("get_collection(1)")?;
|
||||
let result_clone = result
|
||||
.clone()
|
||||
.expect("Failed to retrieve collection. It might not exist or you may not have access.");
|
||||
println!(
|
||||
"Alice accessing her collection 1: Success, title = {}",
|
||||
result_clone.title
|
||||
); // Access field directly
|
||||
assert_eq!(result_clone.id(), 1);
|
||||
|
||||
// Scenario 2: Bob tries to access Alice's collection (Failure)
|
||||
{
|
||||
let mut tag_dynamic = engine_bob.default_tag_mut().as_map_mut().unwrap();
|
||||
tag_dynamic.insert("CALLER_ID".into(), "bob_pk".into());
|
||||
}
|
||||
let result =
|
||||
engine_alice.eval_with_scope::<Option<Collection>>(&mut scope, "get_collection(1)")?;
|
||||
println!(
|
||||
"Bob accessing Alice's collection 1: Failure as expected ({:?})",
|
||||
result
|
||||
);
|
||||
assert!(result.is_none());
|
||||
|
||||
// Scenario 3: Alice accesses Bob's collection (Success)
|
||||
let mut db_config = rhai::Map::new();
|
||||
db_config.insert("CALLER_ID".into(), "alice_pk".into());
|
||||
engine_bob.set_default_tag(Dynamic::from(db_config));
|
||||
let result: Option<Collection> =
|
||||
engine_bob.eval_with_scope::<Option<Collection>>(&mut scope, "get_collection(2)")?;
|
||||
let collection = result.expect("Alice should have access to Bob's collection");
|
||||
println!(
|
||||
"Alice accessing Bob's collection 2: Success, title = {}",
|
||||
collection.title
|
||||
); // Access field directly
|
||||
assert_eq!(collection.id(), 2);
|
||||
|
||||
// Scenario 4: General user lists collections (Sees 1)
|
||||
let mut db_config = rhai::Map::new();
|
||||
db_config.insert("db_path".into(), "actual/path/to/db.sqlite".into());
|
||||
db_config.insert("CALLER_ID".into(), "general_user_pk".into());
|
||||
engine_user.set_default_tag(Dynamic::from(db_config));
|
||||
let result = engine_user
|
||||
.eval_with_scope::<RhaiCollectionArray>(&mut scope, "list_all_collections()")
|
||||
.unwrap();
|
||||
println!("General user listing collections: Found {}", result.0.len());
|
||||
assert_eq!(result.0.len(), 1);
|
||||
assert_eq!(result.0[0].id(), 3);
|
||||
|
||||
// Scenario 5: Alice lists collections (Sees 2)
|
||||
let mut db_config = rhai::Map::new();
|
||||
db_config.insert("db_path".into(), "actual/path/to/db.sqlite".into());
|
||||
db_config.insert("CALLER_ID".into(), "alice_pk".into());
|
||||
engine_alice.set_default_tag(Dynamic::from(db_config));
|
||||
let collections = engine_alice
|
||||
.eval_with_scope::<RhaiCollectionArray>(&mut scope, "list_all_collections()")
|
||||
.unwrap();
|
||||
println!("Alice listing collections: Found {}", collections.0.len());
|
||||
assert_eq!(collections.0.len(), 2);
|
||||
let ids: Vec<u32> = collections.0.iter().map(|c| c.id()).collect();
|
||||
assert!(ids.contains(&1) && ids.contains(&2));
|
||||
|
||||
Ok(())
|
||||
}
|
352
rhailib/src/macros/src/lib.rs
Normal file
@@ -0,0 +1,352 @@
|
||||
//! # Rhai Authorization Crate
|
||||
//! This crate provides authorization mechanisms for Rhai functions, particularly those interacting with a database.
|
||||
//! It includes helper functions for authorization checks and macros to simplify the registration
|
||||
//! of authorized Rhai functions.
|
||||
//! ## Features:
|
||||
//! - `is_super_admin`: Checks if a caller (identified by a public key) is a super admin.
|
||||
//! - `can_access_resource`: Checks if a caller has specific access rights to a resource, using a database connection.
|
||||
//! - `get_caller_public_key`: Helper to extract `CALLER_ID` from the Rhai `NativeCallContext`.
|
||||
//! - `id_from_i64_to_u32`: Helper to convert `i64` Rhai IDs to `u32` Rust IDs.
|
||||
//! - `register_authorized_get_by_id_fn!`: Macro to register a Rhai function that retrieves a single item by ID, with authorization checks.
|
||||
//! - `register_authorized_list_fn!`: Macro to register a Rhai function that lists multiple items, filtering them based on authorization.
|
||||
//! ## Usage:
|
||||
//! 1. Use the macros to register your Rhai functions, providing a database connection (`Arc<OurDB>`) and necessary type/name information.
|
||||
//! 2. The macros internally use `can_access_resource` for authorization checks.
|
||||
//! 3. Ensure `CALLER_ID` is set in the Rhai engine's scope before calling authorized functions.
|
||||
|
||||
use rhai::{EvalAltResult, Position};
|
||||
use std::convert::TryFrom;
|
||||
|
||||
/// Extracts the `CALLER_ID` string constant from the Rhai `NativeCallContext`.
|
||||
/// This key is used to identify the caller for authorization checks.
|
||||
/// It first checks the current `Scope` and then falls back to the global constants cache.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `context`: The Rhai `NativeCallContext` of the currently executing function.
|
||||
///
|
||||
|
||||
/// Converts an `i64` (common Rhai integer type) to a `u32` (common Rust ID type).
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `id_i64`: The `i64` value to convert.
|
||||
///
|
||||
/// # Errors
|
||||
/// Returns `Err(EvalAltResult::ErrorMismatchDataType)` if the `i64` value cannot be represented as a `u32`.
|
||||
pub fn id_from_i64_to_u32(id_i64: i64) -> Result<u32, Box<EvalAltResult>> {
|
||||
u32::try_from(id_i64).map_err(|_| {
|
||||
Box::new(EvalAltResult::ErrorMismatchDataType(
|
||||
"u32".to_string(),
|
||||
format!("i64 value ({}) that cannot be represented as u32", id_i64),
|
||||
Position::NONE,
|
||||
))
|
||||
})
|
||||
}
|
||||
|
||||
/// Extracts the `CALLER_ID` string constant from the Rhai `NativeCallContext`'s tag.
|
||||
/// This key is used to identify the caller for authorization checks.
|
||||
|
||||
/// Macro to register a Rhai function that retrieves a single resource by its ID, with authorization.
|
||||
///
|
||||
/// The macro handles:
|
||||
/// - Argument parsing (ID).
|
||||
/// - Caller identification via `CALLER_ID`.
|
||||
/// - Authorization check using the standalone `can_access_resource` function.
|
||||
/// - Database call to fetch the resource.
|
||||
/// - Error handling for type mismatches, authorization failures, DB errors, and not found errors.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `module`: Mutable reference to the Rhai `Module`.
|
||||
/// * `rhai_fn_name`: String literal for the Rhai function name (e.g., "get_collection").
|
||||
/// * `resource_type_str`: String literal for the resource type (e.g., "Collection"), used in authorization checks and error messages.
|
||||
/// * `rhai_return_rust_type`: Rust type of the resource returned by the DB and Rhai function (e.g., `RhaiCollection`).
|
||||
#[macro_export]
|
||||
macro_rules! register_authorized_get_by_id_fn {
|
||||
(
|
||||
module: $module:expr,
|
||||
rhai_fn_name: $rhai_fn_name:expr, // String literal for the Rhai function name (e.g., "get_collection")
|
||||
resource_type_str: $resource_type_str:expr, // String literal for the resource type (e.g., "Collection")
|
||||
rhai_return_rust_type: $rhai_return_rust_type:ty // Rust type of the resource returned (e.g., `RhaiCollection`)
|
||||
) => {
|
||||
FuncRegistration::new($rhai_fn_name).set_into_module(
|
||||
$module,
|
||||
move |context: rhai::NativeCallContext,
|
||||
id_val: i64|
|
||||
-> Result<$rhai_return_rust_type, Box<EvalAltResult>> {
|
||||
let actual_id: u32 = $crate::id_from_i64_to_u32(id_val)?;
|
||||
|
||||
// Inlined logic to get caller public key
|
||||
let tag_map = context
|
||||
.tag()
|
||||
.and_then(|tag| tag.read_lock::<rhai::Map>())
|
||||
.ok_or_else(|| {
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
"Context tag must be a Map.".into(),
|
||||
context.position(),
|
||||
))
|
||||
})?;
|
||||
|
||||
let pk_dynamic = tag_map.get("CALLER_ID").ok_or_else(|| {
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
"'CALLER_ID' not found in context tag Map.".into(),
|
||||
context.position(),
|
||||
))
|
||||
})?;
|
||||
|
||||
let db_path = tag_map.get("DB_PATH").ok_or_else(|| {
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
"'DB_PATH' not found in context tag Map.".into(),
|
||||
context.position(),
|
||||
))
|
||||
})?;
|
||||
|
||||
let db_path = db_path.clone().into_string()?;
|
||||
|
||||
let circle_pk = tag_map.get("CONTEXT_ID").ok_or_else(|| {
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
"'CONTEXT_ID' not found in context tag Map.".into(),
|
||||
context.position(),
|
||||
))
|
||||
})?;
|
||||
|
||||
let circle_pk = circle_pk.clone().into_string()?;
|
||||
|
||||
let db_path = format!("{}/{}", db_path, circle_pk);
|
||||
let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));
|
||||
|
||||
let caller_pk_str = pk_dynamic.clone().into_string()?;
|
||||
|
||||
println!("Checking access for public key: {}", caller_pk_str);
|
||||
if circle_pk != caller_pk_str {
|
||||
// Use the standalone can_access_resource function from heromodels
|
||||
let has_access = crate::models::access::access::can_access_resource(
|
||||
db.clone(),
|
||||
&caller_pk_str,
|
||||
actual_id,
|
||||
$resource_type_str,
|
||||
);
|
||||
|
||||
if !has_access {
|
||||
return Err(Box::new(EvalAltResult::ErrorRuntime(
|
||||
format!("Access denied for public key: {}", caller_pk_str).into(),
|
||||
context.position(),
|
||||
)));
|
||||
}
|
||||
}
|
||||
|
||||
let result = db
|
||||
.collection::<$rhai_return_rust_type>()
|
||||
.unwrap()
|
||||
.get_by_id(actual_id)
|
||||
.map_err(|e| {
|
||||
println!(
|
||||
"Database error fetching {} with ID: {}",
|
||||
$resource_type_str, actual_id
|
||||
);
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
format!("Database error fetching {}: {:?}", $resource_type_str, e)
|
||||
.into(),
|
||||
context.position(),
|
||||
))
|
||||
})?
|
||||
.ok_or_else(|| {
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
format!(
|
||||
"Database error fetching {} with ID: {}",
|
||||
$resource_type_str, actual_id
|
||||
)
|
||||
.into(),
|
||||
context.position(),
|
||||
))
|
||||
})?;
|
||||
Ok(result)
|
||||
},
|
||||
);
|
||||
};
|
||||
}
|
||||
|
||||
// Macro to register a Rhai function that creates a new resource, with authorization.
|
||||
#[macro_export]
|
||||
macro_rules! register_authorized_create_by_id_fn {
|
||||
(
|
||||
module: $module:expr,
|
||||
rhai_fn_name: $rhai_fn_name:expr, // String literal for the Rhai function name (e.g., "create_collection")
|
||||
resource_type_str: $resource_type_str:expr, // String literal for the resource type (e.g., "Collection")
|
||||
rhai_return_rust_type: $rhai_return_rust_type:ty // Rust type of the resource returned (e.g., `RhaiCollection`)
|
||||
) => {
|
||||
FuncRegistration::new($rhai_fn_name).set_into_module(
|
||||
$module,
|
||||
move |context: rhai::NativeCallContext, object: $rhai_return_rust_type| -> Result<$rhai_return_rust_type, Box<EvalAltResult>> {
|
||||
|
||||
// Inlined logic to get caller public key
|
||||
let tag_map = context
|
||||
.tag()
|
||||
.and_then(|tag| tag.read_lock::<rhai::Map>())
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("Context tag must be a Map.".into(), context.position())))?;
|
||||
|
||||
let pk_dynamic = tag_map.get("CALLER_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CALLER_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let db_path = tag_map.get("DB_PATH")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'DB_PATH' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let db_path = db_path.clone().into_string()?;
|
||||
|
||||
let circle_pk = tag_map.get("CONTEXT_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CONTEXT_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let circle_pk = circle_pk.clone().into_string()?;
|
||||
|
||||
let db_path = format!("{}/{}", db_path, circle_pk);
|
||||
let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));
|
||||
|
||||
let caller_pk_str = pk_dynamic.clone().into_string()?;
|
||||
|
||||
let result = db.set(&object).map_err(|e| {
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
format!("Database error creating {}: {:?}", $resource_type_str, e).into(),
|
||||
context.position(),
|
||||
))
|
||||
})?;
|
||||
Ok(result.1)
|
||||
},
|
||||
);
|
||||
};
|
||||
}
|
||||
|
||||
// Macro to register a Rhai function that deletes a single resource by its ID, with authorization.
|
||||
#[macro_export]
|
||||
macro_rules! register_authorized_delete_by_id_fn {
|
||||
(
|
||||
module: $module:expr,
|
||||
rhai_fn_name: $rhai_fn_name:expr, // String literal for the Rhai function name (e.g., "delete_collection")
|
||||
resource_type_str: $resource_type_str:expr, // String literal for the resource type (e.g., "Collection")
|
||||
rhai_return_rust_type: $rhai_return_rust_type:ty // Rust type of the resource returned (e.g., `RhaiCollection`)
|
||||
) => {
|
||||
FuncRegistration::new($rhai_fn_name).set_into_module(
|
||||
$module,
|
||||
move |context: rhai::NativeCallContext, id_val: i64| -> Result<(), Box<EvalAltResult>> {
|
||||
let actual_id: u32 = $crate::id_from_i64_to_u32(id_val)?;
|
||||
|
||||
// Inlined logic to get caller public key
|
||||
let tag_map = context
|
||||
.tag()
|
||||
.and_then(|tag| tag.read_lock::<rhai::Map>())
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("Context tag must be a Map.".into(), context.position())))?;
|
||||
|
||||
let pk_dynamic = tag_map.get("CALLER_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CALLER_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let db_path = tag_map.get("DB_PATH")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'DB_PATH' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let db_path = db_path.clone().into_string()?;
|
||||
|
||||
let circle_pk = tag_map.get("CONTEXT_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CONTEXT_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let circle_pk = circle_pk.clone().into_string()?;
|
||||
|
||||
let db_path = format!("{}/{}", db_path, circle_pk);
|
||||
let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));
|
||||
|
||||
let caller_pk_str = pk_dynamic.clone().into_string()?;
|
||||
|
||||
let result = db
|
||||
.collection::<$rhai_return_rust_type>()
|
||||
.unwrap()
|
||||
.delete_by_id(actual_id)
|
||||
.map_err(|e| {
|
||||
Box::new(EvalAltResult::ErrorRuntime(
|
||||
format!("Database error deleting {}: {:?}", $resource_type_str, e).into(),
|
||||
context.position(),
|
||||
))
|
||||
})?;
|
||||
Ok(())
|
||||
},
|
||||
);
|
||||
};
|
||||
}
|
||||
|
||||
/// Macro to register a Rhai function that lists all resources of a certain type, with authorization.
|
||||
///
|
||||
/// The macro handles:
|
||||
/// - Caller identification via `CALLER_ID`.
|
||||
/// - Fetching all items of a specific type from the database.
|
||||
/// - Filtering the items based on the standalone `can_access_resource` function for each item.
|
||||
/// - Wrapping the authorized items in a specified collection type (e.g., `RhaiCollectionArray`).
|
||||
/// - Error handling for DB errors during fetch or authorization checks.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `module`: Mutable reference to the Rhai `Module`.
|
||||
/// * `rhai_fn_name`: String literal for the Rhai function name (e.g., "list_collections").
|
||||
/// * `resource_type_str`: String literal for the resource type (e.g., "Collection"), used in authorization checks.
|
||||
/// * `rhai_return_rust_type`: Rust type of the resource item (e.g., `RhaiCollection`).
|
||||
/// * `item_id_accessor`: Identifier for the method on `rhai_return_rust_type` that returns its ID (e.g., `id`).
|
||||
/// * `rhai_return_wrapper_type`: Rust type that wraps a `Vec` of `rhai_return_rust_type` for Rhai (e.g., `RhaiCollectionArray`).
|
||||
#[macro_export]
|
||||
macro_rules! register_authorized_list_fn {
|
||||
(
|
||||
module: $module:expr,
|
||||
rhai_fn_name: $rhai_fn_name:expr,
|
||||
resource_type_str: $resource_type_str:expr,
|
||||
rhai_return_rust_type: $rhai_return_rust_type:ty,
|
||||
rhai_return_wrapper_type: $rhai_return_wrapper_type:ty
|
||||
) => {
|
||||
FuncRegistration::new($rhai_fn_name).set_into_module(
|
||||
$module,
|
||||
move |context: rhai::NativeCallContext| -> Result<$rhai_return_wrapper_type, Box<EvalAltResult>> {
|
||||
// Inlined logic to get caller public key
|
||||
let tag_map = context
|
||||
.tag()
|
||||
.and_then(|tag| tag.read_lock::<rhai::Map>())
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("Context tag must be a Map.".into(), context.position())))?;
|
||||
|
||||
let pk_dynamic = tag_map.get("CALLER_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CALLER_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let caller_pk_str = pk_dynamic.clone().into_string()?;
|
||||
|
||||
let db_path = tag_map.get("DB_PATH")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'DB_PATH' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let db_path = db_path.clone().into_string()?;
|
||||
|
||||
let circle_pk = tag_map.get("CONTEXT_ID")
|
||||
.ok_or_else(|| Box::new(EvalAltResult::ErrorRuntime("'CONTEXT_ID' not found in context tag Map.".into(), context.position())))?;
|
||||
|
||||
let circle_pk = circle_pk.clone().into_string()?;
|
||||
|
||||
let db_path = format!("{}/{}", db_path, circle_pk);
|
||||
let db = Arc::new(OurDB::new(db_path, false).expect("Failed to create DB"));
|
||||
|
||||
let all_items: Vec<$rhai_return_rust_type> = db
|
||||
.collection::<$rhai_return_rust_type>()
|
||||
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(format!("{:?}", e).into(), Position::NONE)))?
|
||||
.get_all()
|
||||
.map_err(|e| Box::new(EvalAltResult::ErrorRuntime(format!("{:?}", e).into(), Position::NONE)))?;
|
||||
|
||||
let authorized_items: Vec<$rhai_return_rust_type> = all_items
|
||||
.into_iter()
|
||||
.filter(|item| {
|
||||
let resource_id = item.id();
|
||||
crate::models::access::access::can_access_resource(
|
||||
db.clone(),
|
||||
&caller_pk_str,
|
||||
resource_id,
|
||||
$resource_type_str,
|
||||
)
|
||||
})
|
||||
.collect();
|
||||
|
||||
Ok(authorized_items.into())
|
||||
},
|
||||
);
|
||||
};
|
||||
}
|
23
rhailib/src/monitor/Cargo.toml
Normal file
@@ -0,0 +1,23 @@
|
||||
[package]
|
||||
name = "monitor"
|
||||
version = "0.1.0"
|
||||
edition = "2021"
|
||||
|
||||
[dependencies]
|
||||
anyhow = "1.0"
|
||||
clap = { version = "4.4", features = ["derive"] }
|
||||
tokio = { version = "1", features = ["macros", "rt-multi-thread", "signal", "time"] } # time feature might be needed later
|
||||
tracing = "0.1"
|
||||
tracing-subscriber = { version = "0.3", features = ["fmt"] }
|
||||
redis = { version = "0.25.0", features = ["tokio-comp"] } # For Redis communication
|
||||
prettytable-rs = "0.10.0" # For displaying tasks in a formatted table
|
||||
clearscreen = "2.0.1" # For clearing the terminal screen
|
||||
chrono = { version = "0.4", features = ["serde"] } # For timestamps
|
||||
futures = "0.3"
|
||||
|
||||
# If the monitor library needs to use parts of rhailib (e.g. Redis connections, task definitions):
|
||||
# rhailib = { path = ".." } # Assuming monitor is a direct sub-directory of rhailib workspace member
|
||||
|
||||
[[bin]]
|
||||
name = "monitor"
|
||||
path = "src/main.rs"
|
67
rhailib/src/monitor/README.md
Normal file
@@ -0,0 +1,67 @@
|
||||
# Rhai Worker Monitor (`monitor`)
|
||||
|
||||
`monitor` is a command-line tool designed to observe and display live information about Rhai workers managed by `rhailib`. It provides insights into Redis queue congestion and a table of tasks being processed by specified workers.
|
||||
|
||||
## Features (Planned)
|
||||
|
||||
* **Live Redis Queue Visualization**: Displays a textual, horizontal plot showing the number of tasks in the Redis queue for each monitored worker. The plot will be color-coded to indicate congestion levels and will update by polling the queue size.
|
||||
* **Task Table**: Shows a table of tasks associated with each worker, including task hash, creation date, status (e.g., pending, running, completed, failed), and potentially other details.
|
||||
|
||||
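As a rough illustration (based on the formatting currently sketched in `src/plot.rs`, not a final design), a queue line might render like this for 8 pending tasks:

```text
Total Pending Tasks         [████████                                          ] (8)
```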
## Prerequisites
|
||||
|
||||
* Rust and Cargo installed.
|
||||
* Access to the Redis instance used by the Rhai workers.
|
||||
|
||||
## Building
|
||||
|
||||
Navigate to the `rhailib/monitor` crate's root directory and build the project:
|
||||
|
||||
```bash
|
||||
cargo build
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
To run the monitor, you need to specify which worker queues you want to observe using the `--workers` (or `-w`) flag. Provide a comma-separated list of worker names.
|
||||
|
||||
From the `rhailib/monitor` root directory:
|
||||
|
||||
```bash
|
||||
cargo run -- --workers <worker_name_1>[,<worker_name_2>,...]
|
||||
```
|
||||
|
||||
Or from the parent `rhailib` directory (workspace root):
|
||||
|
||||
```bash
|
||||
cargo run -p monitor -- --workers <worker_name_1>[,<worker_name_2>,...]
|
||||
```
|
||||
|
||||
**Examples:**
|
||||
|
||||
* Monitor a single worker named `my_default_worker` (from `rhailib/monitor`):
|
||||
|
||||
```bash
|
||||
cargo run -- --workers my_default_worker
|
||||
```
|
||||
|
||||
* Monitor multiple workers, `image_processing_worker` and `data_analysis_worker` (from `rhailib` workspace root):
|
||||
|
||||
```bash
|
||||
cargo run -p monitor -- --workers image_processing_worker,data_analysis_worker
|
||||
```
|
||||
|
||||
### Command-Line Options
|
||||
|
||||
* `-w, --workers <WORKERS>`: (Required) A comma-separated list of worker names to monitor.
|
||||
|
||||
(Future options might include Redis connection parameters, polling intervals, etc.)
|
||||
|
||||
## Development
|
||||
|
||||
The core logic for the monitor is located in `rhailib/monitor/src/`:
|
||||
* `lib.rs`: Main library file, defines modules.
|
||||
* `cli_logic.rs`: Handles argument parsing, Redis interaction, and orchestrates the display.
|
||||
* `plot.rs`: Responsible for generating the textual queue visualization.
|
||||
* `tasks.rs`: Responsible for fetching and displaying the task table.
|
||||
|
||||
The binary entry point is `rhailib/monitor/src/main.rs`.
|
37
rhailib/src/monitor/cmd/main.rs
Normal file
@@ -0,0 +1,37 @@
|
||||
// File: /Users/timurgordon/code/git.ourworld.tf/herocode/rhailib/src/monitor/cmd/main.rs
|
||||
use anyhow::Result;
|
||||
use clap::Parser;
|
||||
|
||||
// This assumes that `rhailib/src/lib.rs` will have `pub mod monitor;`
|
||||
// and `rhailib/src/monitor/mod.rs` will have `pub mod cli_logic;`
|
||||
// and `cli_logic.rs` will contain `pub async fn start_monitoring`.
|
||||
// The `crate::` prefix refers to the `rhailib` crate root.
|
||||
|
||||
#[derive(Parser, Debug)]
|
||||
#[clap(author, version, about = "Rhai Worker Live Monitor", long_about = None)]
|
||||
struct Args {
|
||||
/// Comma-separated list of worker names to monitor
|
||||
#[clap(short, long, value_delimiter = ',')]
|
||||
workers: Vec<String>,
|
||||
}
|
||||
|
||||
#[tokio::main]
|
||||
async fn main() -> Result<()> {
|
||||
tracing_subscriber::fmt::init();
|
||||
|
||||
let args = Args::parse();
|
||||
|
||||
if args.workers.is_empty() {
|
||||
eprintln!("Error: At least one worker name must be provided via --workers.");
|
||||
// Consider returning an Err or using clap's built-in required attributes.
|
||||
std::process::exit(1);
|
||||
}
|
||||
|
||||
tracing::info!("Monitor CLI starting for workers: {:?}", args.workers);
|
||||
|
||||
// Call the monitoring logic from the `cli_logic` submodule within the `monitor` module
|
||||
crate::monitor::cli_logic::start_monitoring(&args.workers).await?;
|
||||
|
||||
tracing::info!("Monitor CLI finished.");
|
||||
Ok(())
|
||||
}
|
61
rhailib/src/monitor/docs/ARCHITECTURE.md
Normal file
@@ -0,0 +1,61 @@
|
||||
# Architecture of the `monitor` Crate
|
||||
|
||||
The `monitor` crate provides a command-line interface for monitoring and managing Rhai task execution across the rhailib ecosystem. It offers real-time visibility into task queues, execution status, and system performance.
|
||||
|
||||
## Core Architecture
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
A[Monitor CLI] --> B[Task Monitoring]
|
||||
A --> C[Queue Management]
|
||||
A --> D[Performance Metrics]
|
||||
|
||||
B --> B1[Redis Task Tracking]
|
||||
B --> B2[Status Visualization]
|
||||
B --> B3[Real-time Updates]
|
||||
|
||||
C --> C1[Queue Inspection]
|
||||
C --> C2[Task Management]
|
||||
C --> C3[Worker Status]
|
||||
|
||||
D --> D1[Performance Plotting]
|
||||
D --> D2[Metrics Collection]
|
||||
D --> D3[Historical Analysis]
|
||||
```
|
||||
|
||||
## Key Components
|
||||
|
||||
### 1. CLI Logic (`cli_logic.rs`)
|
||||
- **Command Processing**: Handles user commands and interface
|
||||
- **Real-time Monitoring**: Continuous task status updates
|
||||
- **Interactive Interface**: User-friendly command-line experience
|
||||
|
||||
### 2. Task Management (`tasks.rs`)
|
||||
- **Task Discovery**: Finds and tracks tasks across Redis queues
|
||||
- **Status Reporting**: Provides detailed task execution information
|
||||
- **Queue Analysis**: Monitors queue depths and processing rates
|
||||
|
||||
### 3. Performance Plotting (`plot.rs`)
|
||||
- **Metrics Visualization**: Creates performance charts and graphs
|
||||
- **Trend Analysis**: Historical performance tracking
|
||||
- **System Health**: Overall system performance indicators
|
||||
|
||||
## Features
|
||||
|
||||
- **Real-time Task Monitoring**: Live updates of task execution status
|
||||
- **Queue Management**: Inspection and management of Redis task queues
|
||||
- **Performance Metrics**: System performance visualization and analysis
|
||||
- **Interactive CLI**: User-friendly command-line interface
|
||||
- **Multi-worker Support**: Monitoring across multiple worker instances
|
||||
|
||||
## Dependencies
|
||||
|
||||
- **Redis Integration**: Direct Redis connectivity for queue monitoring
|
||||
- **CLI Framework**: Clap for command-line argument parsing
|
||||
- **Async Runtime**: Tokio for asynchronous operations
|
||||
- **Visualization**: Pretty tables and terminal clearing for UI
|
||||
- **Logging**: Tracing for structured logging and debugging
|
||||
|
||||
## Usage Patterns
|
||||
|
||||
The monitor serves as a central observability tool for rhailib deployments, providing operators with comprehensive visibility into system behavior and performance characteristics.
|
104
rhailib/src/monitor/src/cli_logic.rs
Normal file
@@ -0,0 +1,104 @@
|
||||
// rhailib/monitor/src/cli_logic.rs
|
||||
use anyhow::Result;
|
||||
use futures::stream::{self, StreamExt};
|
||||
|
||||
// Import functions from sibling modules within the same crate
|
||||
use crate::plot;
|
||||
use crate::tasks::{self, RhaiTask};
|
||||
use redis::{AsyncCommands, Client as RedisClient};
|
||||
use std::collections::HashMap;
|
||||
use tokio::signal;
|
||||
use tokio::time::{sleep, Duration};
|
||||
|
||||
const REDIS_URL: &str = "redis://127.0.0.1/";
|
||||
const POLLING_INTERVAL_MILLISECONDS: u64 = 10; // Increased polling interval for SCAN
|
||||
const SCAN_COUNT: isize = 100; // Number of keys to fetch per SCAN iteration
|
||||
|
||||
/// Main monitoring logic.
|
||||
pub async fn start_monitoring(worker_names: &[String]) -> Result<()> {
|
||||
tracing::info!("Attempting to connect to Redis at {}", REDIS_URL);
|
||||
let client = RedisClient::open(REDIS_URL)?;
|
||||
let mut con = client.get_multiplexed_async_connection().await?;
|
||||
tracing::info!("Successfully connected to Redis.");
|
||||
|
||||
let ping_result: String = redis::cmd("PING").query_async(&mut con).await?;
|
||||
tracing::info!("Redis PING response: {}", ping_result);
|
||||
|
||||
tracing::info!(
|
||||
"Starting live monitor. Configured workers: {:?}. Press Ctrl+C to exit.",
|
||||
worker_names
|
||||
);
|
||||
|
||||
loop {
|
||||
tokio::select! {
|
||||
_ = signal::ctrl_c() => {
|
||||
print!("\r");
|
||||
println!("Exiting Rhai Worker Monitor...");
|
||||
break;
|
||||
}
|
||||
_ = async {
|
||||
let mut current_con = con.clone(); // Clone for this iteration
|
||||
clearscreen::clear().unwrap_or_else(|e| tracing::warn!("Failed to clear screen: {}", e));
|
||||
println!("Rhai Worker Monitor (Press Ctrl+C to exit)");
|
||||
println!(
|
||||
"Polling Redis every {}ms. Last update: {}. Configured workers: {:?}",
|
||||
POLLING_INTERVAL_MILLISECONDS,
|
||||
chrono::Local::now().format("%Y-%m-%d %H:%M:%S"),
|
||||
worker_names
|
||||
);
|
||||
|
||||
let mut all_rhai_tasks: Vec<RhaiTask> = Vec::new();
|
||||
let mut cursor: isize = 0;
|
||||
loop {
|
||||
// SCAN returns a tuple: (new_cursor, keys_array)
|
||||
let (new_cursor, task_detail_keys): (isize, Vec<String>) = redis::cmd("SCAN")
|
||||
.arg(cursor)
|
||||
.arg("MATCH")
|
||||
.arg("rhai_task_details:*")
|
||||
.arg("COUNT")
|
||||
.arg(SCAN_COUNT)
|
||||
.query_async(&mut current_con)
|
||||
.await?;
|
||||
|
||||
// Process keys found in this scan iteration
|
||||
let tasks_futures = stream::iter(task_detail_keys)
|
||||
.map(|key_with_prefix| {
|
||||
let mut task_con = current_con.clone();
|
||||
async move {
|
||||
let task_id = key_with_prefix.strip_prefix("rhai_task_details:").unwrap_or(&key_with_prefix).to_string();
|
||||
match task_con.hgetall::<_, HashMap<String, String>>(&key_with_prefix).await {
|
||||
Ok(details_map) => Some(RhaiTask::from_redis_hash(task_id, &details_map)),
|
||||
Err(e) => {
|
||||
tracing::warn!("Could not fetch details for task key {}: {}", key_with_prefix, e);
|
||||
None
|
||||
}
|
||||
}
|
||||
}
|
||||
})
|
||||
.buffer_unordered(10) // Concurrently fetch details for 10 tasks
|
||||
.collect::<Vec<_>>()
|
||||
.await;
|
||||
|
||||
all_rhai_tasks.extend(tasks_futures.into_iter().flatten());
|
||||
|
||||
cursor = new_cursor;
|
||||
if cursor == 0 { // SCAN returns 0 when iteration is complete
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
// Sort tasks by creation date (optional, assuming created_at is parsable)
|
||||
// For simplicity, we'll skip sorting for now as created_at is a string.
|
||||
|
||||
let pending_tasks_count = all_rhai_tasks.iter().filter(|task| task.status.to_lowercase() == "pending").count();
|
||||
|
||||
plot::display_queue_plot("Total Pending Tasks", pending_tasks_count).await?;
|
||||
tasks::display_task_table(&all_rhai_tasks).await?;
|
||||
|
||||
sleep(Duration::from_millis(POLLING_INTERVAL_MILLISECONDS)).await;
|
||||
Result::<()>::Ok(())
|
||||
} => {}
|
||||
}
|
||||
}
|
||||
Ok(())
|
||||
}
|
10
rhailib/src/monitor/src/lib.rs
Normal file
@@ -0,0 +1,10 @@
|
||||
// rhailib/monitor/src/lib.rs
|
||||
|
||||
// Declare the modules that make up this crate's library
|
||||
pub mod cli_logic;
|
||||
pub mod plot;
|
||||
pub mod tasks;
|
||||
|
||||
// Re-export the main function to be used by the binary (src/main.rs)
|
||||
// and potentially by other crates if this library is used as a dependency.
|
||||
pub use cli_logic::start_monitoring;
|
32
rhailib/src/monitor/src/main.rs
Normal file
@@ -0,0 +1,32 @@
|
||||
// rhailib/monitor/src/main.rs
|
||||
use anyhow::Result;
|
||||
use clap::Parser;
|
||||
|
||||
// Use the start_monitoring function from the monitor crate's library
|
||||
use monitor::start_monitoring;
|
||||
|
||||
#[derive(Parser, Debug)]
|
||||
#[clap(author, version, about = "Rhai Worker Monitor CLI", long_about = None)]
|
||||
struct Args {
|
||||
/// List of worker names to monitor, comma-separated
|
||||
#[clap(short, long, value_delimiter = ',', required = true, num_args = 1..)]
|
||||
workers: Vec<String>,
|
||||
// TODO: Add other options like Redis connection details if not using a config file or env vars.
|
||||
}
|
||||
|
||||
#[tokio::main]
|
||||
async fn main() -> Result<()> {
|
||||
// Initialize logging (e.g., tracing-subscriber)
|
||||
// Consider making log level configurable via CLI args or env var.
|
||||
tracing_subscriber::fmt::init();
|
||||
|
||||
let args = Args::parse();
|
||||
|
||||
tracing::info!("Starting monitor for workers: {:?}", args.workers);
|
||||
|
||||
// Call the main logic function from the monitor library
|
||||
start_monitoring(&args.workers).await?;
|
||||
|
||||
tracing::info!("Monitor finished.");
|
||||
Ok(())
|
||||
}
|
23
rhailib/src/monitor/src/plot.rs
Normal file
@@ -0,0 +1,23 @@
|
||||
// rhailib/monitor/src/plot.rs
|
||||
use anyhow::Result;
|
||||
|
||||
/// Placeholder for queue plotting logic.
|
||||
const MAX_BAR_WIDTH: usize = 50; // Max width of the bar in characters
|
||||
const BAR_CHAR: char = '█'; // Character to use for the bar
|
||||
|
||||
pub async fn display_queue_plot(plot_label: &str, count: usize) -> Result<()> {
|
||||
let bar_width = std::cmp::min(count, MAX_BAR_WIDTH);
|
||||
let bar: String = std::iter::repeat(BAR_CHAR).take(bar_width).collect();
|
||||
|
||||
// ANSI escape code for green color can be added here if desired
|
||||
// Example: let green_bar = format!("\x1b[32m{}\x1b[0m", bar);
|
||||
|
||||
println!(
|
||||
"{:<27} [{:<width$}] ({})", // Adjusted label spacing
|
||||
plot_label,
|
||||
bar,
|
||||
count,
|
||||
width = MAX_BAR_WIDTH
|
||||
);
|
||||
Ok(())
|
||||
}
|
93
rhailib/src/monitor/src/tasks.rs
Normal file
@@ -0,0 +1,93 @@
|
||||
// rhailib/monitor/src/tasks.rs
|
||||
use anyhow::Result;
|
||||
use prettytable::{format, Cell, Row, Table};
|
||||
use std::collections::HashMap;
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct RhaiTask {
|
||||
pub id: String,
|
||||
pub script: Option<String>,
|
||||
pub status: String,
|
||||
pub created_at: Option<String>, // Keep as string for display, parsing can be complex
|
||||
pub updated_at: Option<String>, // Keep as string, might be RFC3339 or Unix timestamp
|
||||
pub client_rpc_id: Option<String>,
|
||||
pub reply_to_queue: Option<String>,
|
||||
pub output: Option<String>,
|
||||
pub error: Option<String>,
|
||||
}
|
||||
|
||||
impl RhaiTask {
|
||||
pub fn from_redis_hash(task_id: String, details: &HashMap<String, String>) -> Self {
|
||||
// Helper to get optional string, converting "null" string to None
|
||||
let get_opt_string = |key: &str| -> Option<String> {
|
||||
details.get(key).and_then(|s| {
|
||||
if s.to_lowercase() == "null" || s.is_empty() {
|
||||
None
|
||||
} else {
|
||||
Some(s.clone())
|
||||
}
|
||||
})
|
||||
};
|
||||
|
||||
RhaiTask {
|
||||
id: task_id,
|
||||
script: get_opt_string("script"),
|
||||
status: details
|
||||
.get("status")
|
||||
.cloned()
|
||||
.unwrap_or_else(|| "unknown".to_string()),
|
||||
created_at: get_opt_string("createdAt"),
|
||||
updated_at: get_opt_string("updatedAt"),
|
||||
client_rpc_id: get_opt_string("clientRpcId"),
|
||||
reply_to_queue: get_opt_string("replyToQueue"),
|
||||
output: get_opt_string("output"),
|
||||
error: get_opt_string("error"),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Displays all monitored Rhai tasks in a formatted table.
|
||||
pub async fn display_task_table(tasks: &[RhaiTask]) -> Result<()> {
|
||||
println!("\nAll Monitored Rhai Tasks:");
|
||||
|
||||
if tasks.is_empty() {
|
||||
println!(" No tasks to display.");
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
let mut table = Table::new();
|
||||
table.set_format(*format::consts::FORMAT_BOX_CHARS);
|
||||
table.add_row(Row::new(vec![
|
||||
Cell::new("Task ID").style_spec("bFg"),
|
||||
Cell::new("Status").style_spec("bFg"),
|
||||
Cell::new("Created At").style_spec("bFg"),
|
||||
Cell::new("Updated At").style_spec("bFg"),
|
||||
Cell::new("Details (Output/Error)").style_spec("bFg"),
|
||||
// Cell::new("Script (Excerpt)").style_spec("bFg"), // Optional: Add if needed
|
||||
]));
|
||||
|
||||
for task in tasks {
|
||||
let details_str = match (&task.output, &task.error) {
|
||||
(Some(out), None) => format!("Output: {:.50}", out), // Truncate for display
|
||||
(None, Some(err)) => format!("Error: {:.50}", err), // Truncate for display
|
||||
(Some(out), Some(err)) => format!("Output: {:.30}... Error: {:.30}...", out, err),
|
||||
(None, None) => "N/A".to_string(),
|
||||
};
|
||||
|
||||
// let script_excerpt = task.script.as_ref().map_or("N/A".to_string(), |s| {
|
||||
// if s.len() > 30 { format!("{:.27}...", s) } else { s.clone() }
|
||||
// });
|
||||
|
||||
table.add_row(Row::new(vec![
|
||||
Cell::new(&task.id[..std::cmp::min(task.id.len(), 12)]), // Show first 12 chars of ID
|
||||
Cell::new(&task.status),
|
||||
Cell::new(task.created_at.as_deref().unwrap_or("N/A")),
|
||||
Cell::new(task.updated_at.as_deref().unwrap_or("N/A")),
|
||||
Cell::new(&details_str),
|
||||
// Cell::new(&script_excerpt),
|
||||
]));
|
||||
}
|
||||
|
||||
table.printstd();
|
||||
Ok(())
|
||||
}
|