Implement hero_logic: General-Purpose DAG Control Flow Engine #1
hero_logic — General-Purpose DAG Control Flow Engine
Overview
`hero_logic` is a new Hero RPC service that provides a general-purpose directed acyclic graph (DAG) execution engine for defining, storing, and running multi-step workflows. It replaces hardcoded control flows (such as the hero_router service agent pipeline) with a configurable, observable, and reusable system.

Originating issue: hero_router#34 — Hero service agent improvements
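As a rough sketch of the shape such a workflow takes (the type and field names here are illustrative assumptions, not the actual OSchema definitions described later in this issue):

```rust
// Illustrative sketch only; real models live in OSchema, not Rust structs.
#[derive(Debug)]
enum EdgeCondition { OnSuccess, OnFailure, Always }

#[derive(Debug)]
struct Node { id: String, node_type: String }

#[derive(Debug)]
struct Edge { from: String, to: String, condition: EdgeCondition }

#[derive(Debug)]
struct Workflow { name: String, nodes: Vec<Node>, edges: Vec<Edge> }

fn main() {
    // A two-step workflow: generate code, then execute it on success.
    let wf = Workflow {
        name: "demo".into(),
        nodes: vec![
            Node { id: "code_generation".into(), node_type: "ai_call".into() },
            Node { id: "script_execution".into(), node_type: "script_execution".into() },
        ],
        edges: vec![Edge {
            from: "code_generation".into(),
            to: "script_execution".into(),
            condition: EdgeCondition::OnSuccess,
        }],
    };
    println!("{} nodes, {} edges", wf.nodes.len(), wf.edges.len());
}
```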
Architecture
Core Concepts
Workflow templates live in the `templates/` directory and are loadable by name.

System Interactions
Key principle: ALL node executions go through hero_proc as actions/jobs, providing uniform logging, process management, timeouts, and observability. hero_logic creates hero_proc actions for each node type and tracks the resulting job IDs.
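A hedged sketch of what that delegation could look like. The action names `logic_ai_call` and `logic_script_python` are named elsewhere in this issue; `logic_conditional`, `logic_transform`, the `job.create` method shape, and the function itself are assumptions for illustration:

```rust
// Illustrative only: map a node type to a hero_proc action name and render a
// minimal JSON-RPC-style job request. The real hero_proc API may differ.
#[derive(Debug, Clone, Copy)]
enum NodeType { AiCall, ScriptExecution, Conditional, Transform }

fn job_request(node_type: NodeType, node_id: &str, timeout_secs: u64) -> String {
    let action = match node_type {
        NodeType::AiCall => "logic_ai_call",
        NodeType::ScriptExecution => "logic_script_python",
        NodeType::Conditional => "logic_conditional",   // assumed name
        NodeType::Transform => "logic_transform",       // assumed name
    };
    format!(
        "{{\"jsonrpc\":\"2.0\",\"method\":\"job.create\",\"params\":{{\"action\":\"{action}\",\"node_id\":\"{node_id}\",\"timeout_secs\":{timeout_secs}}},\"id\":1}}"
    )
}

fn main() {
    // Every node run becomes a hero_proc job, so logs/timeouts are uniform.
    println!("{}", job_request(NodeType::AiCall, "service_selection", 60));
}
```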
Data Models (OSchema)
These should be defined in `.oschema` files and processed by the hero_rpc generator.

Enums
Core Types
RPC Service API
Templates & Examples
Directory Structure
Service Agent Template (the primary use case)
The `service_agent` template encodes the current hero_router agent pipeline as a DAG.

Nodes:
- `service_selection` (ai_call) — Given a prompt and service catalog, select relevant services
- `code_generation` (ai_call) — Given the selected services' interfaces, generate Python code
- `script_execution` (script_execution) — Execute the generated Python script
- `error_debug` (ai_call) — On script failure, analyze the error and generate a fixed script
- `result_summary` (ai_call) — Summarize execution output for the user

The retry loop (steps 3 → 4 → 3) is modeled as a conditional retry edge with `retry_count` on the `script_execution` node.

Integration with hero_proc
All node executions are delegated to hero_proc:
On workflow node execution, hero_logic:
- Creates (or reuses) the corresponding hero_proc actions (`logic_ai_call`, `logic_script_python`)
- Records the resulting hero_proc job IDs on each `NodeRun`

Action types by node:

- `ai_call` → hero_proc action that calls hero_aibroker via Unix socket JSON-RPC
- `script_execution` → hero_proc action that runs the script with the configured interpreter
- `conditional` → hero_proc action that evaluates the condition expression
- `transform` → hero_proc action that applies the data transformation

Benefits:
Integration with hero_router
After hero_logic is implemented, hero_router's agent changes:
- Calls `hero_logic.workflow_from_template("service_agent", overrides)` to instantiate the workflow with the appropriate model, service catalog, etc.
- Calls `hero_logic.play_start(workflow_sid, input_data)` to execute it
- Polls `hero_logic.play_status(play_sid)` or subscribes to updates
- Returns an `AgentResponse`; the `AgentResponse` includes a `play_sid` and a link to the hero_logic UI for detailed inspection

hero_router remains the user-friendly HTTP API; hero_logic is the execution engine.
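Sketched in Rust, that delegation might look like the following; `LogicClient` and its signatures are stand-ins for the real hero_logic RPC client, whose actual API may differ:

```rust
// Hypothetical shape of hero_router's agent after delegation.
trait LogicClient {
    fn workflow_from_template(&self, template: &str) -> String; // -> workflow_sid
    fn play_start(&self, workflow_sid: &str, input: &str) -> String; // -> play_sid
    fn play_status(&self, play_sid: &str) -> String;
}

/// The agent becomes a thin wrapper: instantiate, start, poll, and surface
/// the play_sid (for UI inspection) in the AgentResponse.
fn run_agent(client: &dyn LogicClient, prompt: &str) -> (String, String) {
    let workflow_sid = client.workflow_from_template("service_agent");
    let play_sid = client.play_start(&workflow_sid, prompt);
    let status = client.play_status(&play_sid);
    (play_sid, status)
}

// Stub client so the sketch runs without a live service.
struct Stub;
impl LogicClient for Stub {
    fn workflow_from_template(&self, t: &str) -> String { format!("wf:{t}") }
    fn play_start(&self, wf: &str, _input: &str) -> String { format!("play:{wf}") }
    fn play_status(&self, _p: &str) -> String { "completed".into() }
}

fn main() {
    let (play_sid, status) = run_agent(&Stub, "sum sales by region");
    println!("{play_sid} {status}");
}
```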
Crate Structure
Following hero RPC service conventions:
DAG Execution Engine
The play engine in `play.rs` implements:

- Data flow between nodes via `DataMapping`
- Edge conditions: `on_success`, `on_failure`, and expression-based conditions determine which downstream nodes activate
- Retries: nodes with `retry_count > 0` are re-queued on failure up to the configured limit

UI Requirements (hero_logic_ui)
The admin dashboard should provide:
Implementation Phases
Phase 1: Foundation
- Workflow templates loadable from the `templates/` directory

Phase 2: Execution Engine
- `ai_call` node executor (hero_aibroker via hero_proc)
- `script_execution` node executor (Python/Rhai via hero_proc)
- `conditional` and `transform` node executors

Phase 3: Service Agent Template
- `service_agent` workflow template

Phase 4: UI & Observability
Phase 5: Advanced Features
- `human_input` node type (pause workflow for user input)
- `wait` node type (time-based delays)

Configuration & Settings
Workflow-level and node-level settings enable the configurability requested in hero_router#34:
- Each `ai_call` node specifies its model

Socket & Port Conventions
- RPC: `$HERO_SOCKET_DIR/hero_logic/rpc.sock`
- UI: `$HERO_SOCKET_DIR/hero_logic/ui.sock`

Dependencies
- `hero_rpc` — RPC framework and OSIS persistence
- `hero_proc_sdk` — hero_proc client for action/job management
- `hero_aibroker` — AI model access (invoked indirectly via hero_proc)
- `herolib` — shared utilities

Phase 1+2 Implementation Complete
The initial implementation has been pushed to the `development` branch.

Commit: `1e077e9` — feat: implement hero_logic DAG control flow engine

What's implemented:
- Data models: `Workflow` and `Play` as root objects with full CRUD; `Node`, `Edge`, `NodeConfig`, `NodeRun`, `DataMapping` as embedded types; enums for `NodeType`, `ExecutionStatus`, `EdgeCondition`, `ScriptLanguage`
- DAG engine: edge conditions (`on_success`, `on_failure`, `always`, `conditional`), data flow between nodes via `DataMapping`, per-node retry logic
- Node executors: `ai_call` (hero_aibroker via hero_proc), `script_execution` (Python/Rhai/Bash via hero_proc), `conditional` (expression evaluation), `transform` (data extraction), `wait` (delay)
- Templates loadable from the `templates/` directory as JSON
- `templates/service_agent.json` encoding the current hero_router agent pipeline as a 5-node DAG
- `examples/service_agent_books.json` showing the template configured for hero_books
- RPC methods: `workflow_from_template`, `workflow_validate`, `template_list`, `play_start`, `play_cancel`, `play_retry`, `play_status`, `node_logs`, `node_retry`

Tests
- `cargo build` succeeds

Remaining phases: 3–5 (service agent template, UI & observability, advanced features).
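As an illustration of the kind of structural check `workflow_validate` implies, here is a minimal cycle check over a workflow's edges using Kahn's algorithm. This is a sketch of the idea only, not the actual implementation:

```rust
use std::collections::HashMap;

// A workflow is valid as a DAG only if topological processing visits every
// node; any leftover node implies a cycle.
fn is_dag(edges: &[(&str, &str)]) -> bool {
    let mut indeg: HashMap<&str, usize> = HashMap::new();
    let mut out: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(a, b) in edges {
        indeg.entry(a).or_insert(0);
        *indeg.entry(b).or_insert(0) += 1;
        out.entry(a).or_default().push(b);
    }
    // Start from nodes with no unmet dependencies.
    let mut queue: Vec<&str> = indeg
        .iter()
        .filter(|(_, d)| **d == 0)
        .map(|(n, _)| *n)
        .collect();
    let mut seen = 0;
    while let Some(n) = queue.pop() {
        seen += 1;
        for &m in out.get(n).into_iter().flatten() {
            let d = indeg.get_mut(m).unwrap();
            *d -= 1;
            if *d == 0 { queue.push(m); }
        }
    }
    seen == indeg.len()
}

fn main() {
    println!("{}", is_dag(&[("select", "generate"), ("generate", "execute")]));
}
```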
Phase 3-5 Complete + Architectural Refactor
Key change: hero_proc is now the one place execution lives
hero_logic no longer embeds AI / Python / Bash dispatch logic. The `NodeType` enum shrank to purely orchestration kinds: `action | conditional | transform | wait | human_input`. Every node that does real work points at a hero_proc action by name (`config.action_name` + `config.action_context`) and lets hero_proc handle the interpreter, env, timeout, logs, and job id.

To make that work, AI became a first-class hero_proc interpreter (`Interpreter::Ai`) that calls hero_aibroker via its Unix-socket RPC instead of spawning a child process. Model/system_prompt/temperature/max_tokens live on the action spec as an `ai_config` struct — the same way other interpreters use `script` and `env`. Benefit: no more duplicated AI plumbing; the hero_proc UI gets a model dropdown (populated from `models.list`) whenever you pick the `ai` interpreter.

Cross-repo changes
hero_proc
- `hero_proc_lib`: added `Interpreter::Ai` + `AiConfig { model, system_prompt, temperature, max_tokens }` on `ActionSpec` (`src/db/actions/model.rs`).
- `hero_proc_server/src/supervisor/executor.rs`: new `run_job_ai` branch — POSTs `ai.chat` to `$HERO_SOCKET_DIR/hero_aibroker/rpc.sock` via raw HTTP-over-UDS, writes the model response to job logs, updates job status. Uses `Interpreter::is_in_process()` to route.
- `openrpc.json`: added the `AiConfig` schema; extended the `ActionSpec.interpreter` enum with `ai`; added the `ai_config` field.
- `hero_proc_ui/static/js/dashboard.js`: added `ai` to the interpreter dropdown; conditionally reveals model/system_prompt/temperature/max_tokens fields; fetches models via a new `/api/aibroker/models` proxy route added in `routes.rs`.

hero_logic
- Schema (`schemas/logic/logic.oschema`): `NodeType` → `action | conditional | transform | wait | human_input`; added `ExecutionStatus::AwaitingInput`; added the `play_resume(play_sid, node_id, input_data)` RPC method; simplified `NodeConfig` to `{ node_type, action_name, action_context, condition_expr, transform_expr, timeout_secs, retry_count, retry_delay_secs }`.
- `engine/node_executors.rs`: rewritten. `execute_action` calls `action.get` on hero_proc, interpolates `{{var}}` placeholders in the action script + `ai_config.system_prompt` with the node's input JSON, then `job.create` + waits for a terminal phase. Orchestration nodes (conditional/transform/wait/human_input) stay in-engine.
- `engine/executor.rs`: rewritten as a ready-queue model. Groups nodes with satisfied dependencies per iteration; pauses with `status = AwaitingInput` on hitting a HumanInput node; resumption via `play_resume` re-enters `execute()` and picks up where it left off.
- `engine/template_loader.rs`: templates gained an `actions[]` array of hero_proc action specs (name, interpreter, script, timeout_ms, ai_config, env). `workflow_from_template` upserts them into hero_proc via `action.set` before persisting the Workflow record.
- `server/rpc.rs`: implemented `play_resume`; `play_cancel` also handles `AwaitingInput`; `workflow_from_template` provisions declared actions.
- `templates/service_agent.json` + `examples/service_agent_books.json`: rewritten to the action-based format (5 actions, 5 nodes, 4 edges).
- `hero_logic_ui`: workflow list (`workflows.html`); play detail (`play_detail.html` + `routes.rs`) with a Cytoscape.js DAG canvas and node status color overlay (success=green, failed=red, running=blue, pending=grey, awaiting_input=amber); the `dag_json` payload is built server-side from `workflow.get`; deep links into hero_proc jobs (`/ui/hero_proc/#jobs/:id`).

hero_router — no code changes required. Already delegates to
`hero_logic.workflow_from_template("service_agent")` + `play_start(...)`; node_id lookups (`"script_execution"` etc.) are unchanged.

Build status
- `cargo check --workspace` green on hero_proc and hero_logic.
- `cargo test -p hero_logic --no-run` green.
- The UI simply shows `ai` there today; a proper dropdown matching dashboard.js is left as a follow-up.

All five phases of the original spec are now addressed.
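As a footnote on the `{{var}}` placeholder interpolation that `engine/node_executors.rs` applies to action scripts, here is a minimal sketch of the idea; the real implementation's escaping and missing-key handling may well differ:

```rust
use std::collections::HashMap;

// Illustrative only: substitute every `{{key}}` occurrence in a template with
// the matching value. Unknown placeholders are left untouched.
fn interpolate(template: &str, vars: &HashMap<&str, String>) -> String {
    let mut out = template.to_string();
    for (k, v) in vars {
        out = out.replace(&format!("{{{{{k}}}}}"), v);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("prompt", "summarize sales".to_string());
    println!("{}", interpolate("Task: {{prompt}}", &vars));
}
```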