actor trait improvements and ui implementation
@@ -45,13 +45,27 @@ Jobs can have dependencies on other jobs, which are stored in the `dependencies`

### Work Queues

Jobs are queued for execution using Redis lists with the following naming convention:
```
hero:job:actor_queue:{script_type_suffix}
```

Where `{script_type_suffix}` corresponds to the script type:
- `osis` for OSIS actors (Rhai/HeroScript execution)
- `sal` for SAL actors (System Abstraction Layer)
- `v` for V actors (V language execution)
- `python` for Python actors

**Examples:**
- OSIS actor queue: `hero:job:actor_queue:osis`
- SAL actor queue: `hero:job:actor_queue:sal`
- V actor queue: `hero:job:actor_queue:v`
- Python actor queue: `hero:job:actor_queue:python`

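As a sketch of how a dispatcher might build these keys and enqueue a job ID, assuming the `redis` crate and a local Redis instance (the helper name and the job ID are illustrative, not part of the documented API):

```rust
/// Build the work queue key for a script type suffix, e.g. "osis" ->
/// "hero:job:actor_queue:osis". Helper name is hypothetical.
fn queue_for_suffix(script_type_suffix: &str) -> String {
    format!("hero:job:actor_queue:{}", script_type_suffix)
}

fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;

    // Enqueue a job ID (illustrative value) on the OSIS actor's work queue.
    redis::cmd("LPUSH")
        .arg(queue_for_suffix("osis"))
        .arg("job-123")
        .query::<i64>(&mut con)?;
    Ok(())
}
```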
Actors listen on their specific queue using `BLPOP` for job IDs to process.

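A minimal sketch of that consumer loop, again assuming the `redis` crate; error handling and the actual job execution are omitted, and the queue name is the OSIS example from above:

```rust
fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;
    let queue = "hero:job:actor_queue:osis"; // this actor's work queue

    loop {
        // BLPOP blocks until a job ID arrives; a timeout of 0 waits forever.
        // The reply is a (queue name, element) pair.
        let (_queue, job_id): (String, String) =
            redis::cmd("BLPOP").arg(queue).arg(0).query(&mut con)?;
        println!("picked up job {job_id}");
        // ...load the hero:job:{job_id} hash and execute the job here...
    }
}
```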
**Important:** Actors must use the same queue naming convention in their `actor_id()` method to ensure proper job dispatch. The actor should return `"actor_queue:{script_type_suffix}"` as its actor ID.

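The actor trait itself is outside this hunk; the shape below is an assumption used only to illustrate the `actor_id()` convention, with the dispatcher prepending the `hero:job:` prefix to form the full queue key:

```rust
/// Hypothetical trait shape; only the actor_id() return value convention
/// comes from the documentation above.
trait Actor {
    /// Must return "actor_queue:{script_type_suffix}".
    fn actor_id(&self) -> String;
}

struct OsisActor;

impl Actor for OsisActor {
    fn actor_id(&self) -> String {
        "actor_queue:osis".to_string()
    }
}

fn main() {
    // The dispatcher derives the full Redis key from the actor ID.
    let queue = format!("hero:job:{}", OsisActor.actor_id());
    assert_eq!(queue, "hero:job:actor_queue:osis");
}
```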
### Stop Queues
Job stop requests are sent through dedicated stop queues:
@@ -63,12 +77,26 @@ Actors monitor these queues to receive stop requests for running jobs.
### Reply Queues
For synchronous job execution, dedicated reply queues are used:
```
hero:reply:{job_id}
```

Actors send results to these queues when jobs complete.

Reply queues are also used for responses to specific requests:

- `hero:reply:{request_id}`: Response to a specific request

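For the synchronous path, a client can block on the job's reply queue until the actor pushes its response. A sketch under the same assumptions (the `redis` crate, illustrative job ID):

```rust
fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;

    // Block until the actor pushes the response onto hero:reply:{job_id};
    // a positive timeout (in seconds) could be used instead of 0.
    let (_queue, reply): (String, String) = redis::cmd("BLPOP")
        .arg("hero:reply:job-123")
        .arg(0)
        .query(&mut con)?;
    println!("job completed: {reply}");
    Ok(())
}
```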
### Result and Error Queues
When actors process jobs, they store results and errors in two places:
1. **Job Hash Storage**: Results are stored in the job hash fields:
   - `hero:job:{job_id}` hash with `output` field for results
   - `hero:job:{job_id}` hash with `error` field for errors

2. **Dedicated Queues**: Results and errors are also pushed to dedicated queues for asynchronous retrieval, as shown in the sketch after this list:
   - `hero:job:{job_id}:result`: Queue containing job result (use `LPOP` to retrieve)
   - `hero:job:{job_id}:error`: Queue containing job error (use `LPOP` to retrieve)

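On the actor side, the dual write described above could look like the following sketch (the `redis` crate again; only the key and field names come from this document, and the error path would mirror it with the `error` field and the `:error` queue):

```rust
fn store_result(con: &mut redis::Connection, job_id: &str, output: &str) -> redis::RedisResult<()> {
    // 1. Job hash storage: set the `output` field on hero:job:{job_id}.
    redis::cmd("HSET")
        .arg(format!("hero:job:{}", job_id))
        .arg("output")
        .arg(output)
        .query::<i64>(con)?;

    // 2. Dedicated queue: push the result to hero:job:{job_id}:result
    //    so clients can retrieve it asynchronously.
    redis::cmd("LPUSH")
        .arg(format!("hero:job:{}:result", job_id))
        .arg(output)
        .query::<i64>(con)?;
    Ok(())
}

fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;
    store_result(&mut con, "job-123", r#"{"status":"ok"}"#)
}
```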
This dual storage approach allows clients to:
- Access results/errors directly from job hash for immediate retrieval
- Listen on result/error queues for asynchronous notification of job completion
- Use `BLPOP` on result/error queues for blocking waits on job completion

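As a client-side sketch of the first and third options from the list above (same assumptions as the earlier snippets), the immediate hash lookup and the blocking queue wait can be combined like this:

```rust
// Immediate lookup from the job hash; None if no output has been stored yet.
fn output_now(con: &mut redis::Connection, job_id: &str) -> redis::RedisResult<Option<String>> {
    redis::cmd("HGET")
        .arg(format!("hero:job:{}", job_id))
        .arg("output")
        .query(con)
}

// Blocking wait on the dedicated result queue (timeout 0 = wait forever).
fn wait_for_output(con: &mut redis::Connection, job_id: &str) -> redis::RedisResult<String> {
    let (_queue, output): (String, String) = redis::cmd("BLPOP")
        .arg(format!("hero:job:{}:result", job_id))
        .arg(0)
        .query(con)?;
    Ok(output)
}

fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;

    let ready = output_now(&mut con, "job-123")?;
    let output = match ready {
        Some(output) => output,
        None => wait_for_output(&mut con, "job-123")?, // fall back to blocking wait
    };
    println!("result: {output}");
    Ok(())
}
```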
## Job Lifecycle