v3 — typed I/O, modal editing, step-level runs, toolbar layout #2
Why
The authoring UX still has rough edges after v1/v2:
`{{placeholders}}` scraped from prompts; outputs are raw strings and routing lives in free-form string expressions. No way to say "this step outputs a list of selected services" and branch on `outputs.services contains 'hero_books'`.

Decisions (aligned)
Investigate what `models.list` actually returns on this instance, whether provider catalogs are being merged, and whether OpenRouter `/models` is exposed — then decide whether to fix upstream (aibroker) or in the hero_logic_ui proxy.

Changes
1. Tight cards + edit-in-modal
`width`/`height` = cytoscape node `w`/`h` via `box-sizing: border-box`. Edge arrows terminate on the card edge again.

2. Typed inputs + outputs on NodeConfig
Add to the `NodeConfig` schema:
- Prompts reference `{{inputs.foo}}`; the engine substitutes from parsed workflow input.
- `output_data` is parsed according to `output_format`. On success, a parsed JSON value is stored; on failure, only the raw text is carried.
- Edge conditions can reference `{{outputs.field}}`.

3. Condition builder on edges
When the source node declares an output schema, the edge drawer shows:
followed by a "+ AND" button and a "Raw expression" escape hatch. Field dropdown is populated from the source's declared outputs.
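As a sketch of how the builder's clauses could evaluate at run time — assuming the source node's outputs are flattened to dotted-path string keys (the real engine would work over structured data; function and field names here are illustrative, not the actual implementation):

```rust
use std::collections::HashMap;

// Hypothetical evaluation of one builder clause against a flattened ctx
// ("outputs.services" -> value). Operator names follow the spec; string-based
// typing and numeric parsing are assumptions for this sketch.
fn eval_clause(field: &str, op: &str, value: &str, ctx: &HashMap<String, String>) -> bool {
    let lhs = ctx.get(field);
    match op {
        "exists" => lhs.is_some(),
        "==" => lhs.map_or(false, |l| l == value),
        "!=" => lhs.map_or(true, |l| l != value),
        ">" | ">=" | "<" | "<=" => {
            // Numeric comparison: both sides must parse as numbers.
            match (lhs.and_then(|l| l.parse::<f64>().ok()), value.parse::<f64>().ok()) {
                (Some(l), Some(r)) => match op {
                    ">" => l > r,
                    ">=" => l >= r,
                    "<" => l < r,
                    _ => l <= r,
                },
                _ => false,
            }
        }
        "contains" => lhs.map_or(false, |l| l.contains(value)),
        _ => false,
    }
}

// "+ AND" rows combine conjunctively.
fn eval_all(clauses: &[(&str, &str, &str)], ctx: &HashMap<String, String>) -> bool {
    clauses.iter().all(|(f, op, v)| eval_clause(f, op, v, ctx))
}
```

The "Raw expression" escape hatch would bypass this and hand the string to whatever expression evaluator the engine already uses.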
4. Step-level play
`node_play(workflow_sid: str, node_id: str, input_data: str) -> NodeRun`. Creates a one-shot hero_proc job scoped to that step, streams logs via polling.

5. Full model list
Investigation plan:
- Run `models.list` against aibroker manually; log what comes back.
- Check the `/api/aibroker/models` proxy.

6. Top toolbar for step insertion
Phases
`{{outputs.*}}` in edge conditions.

Out of scope (for this issue)
Read through the spec and explored the repo. A few clarifying questions before I start implementation — mostly to lock down scope boundaries for v1 so we don't re-litigate them mid-phase:
JSON Schema support
1. Supported subset: primitives (`string`/`number`/`integer`/`boolean`), `array` (with `items`), `object` (with `properties` + `required`), and `enum`. No `$ref`, `allOf`/`oneOf`, `pattern`, `format`. Agree?
2. Rust side: validation via the `jsonschema` crate.
3. JS side (for start-play form + per-step play): a small hand-rolled form generator that walks the schema — no full validator, rely on server echo for errors. OK?

output_format semantics
4. `raw` vs `text` — what's the intended distinction? My read: `raw` = pass through untouched bytes; `text` = UTF-8 string, no parsing; `json`/`toml` = parse into a structured value. Confirm?
5. On parse failure, the spec says "only the raw text is carried". Does the node_run still succeed (status=success, parsed=null), or fail? I'd lean success-with-warning so downstream `{{outputs.raw}}` still works.

Placeholder substitution
6. Migration: existing workflows use `{{placeholder}}` scraped from prompts. Do we (a) keep both syntaxes working, (b) auto-migrate to `{{inputs.*}}` on first save, or (c) hard-break and require manual edits? I'd go (a) — treat bare `{{foo}}` as `{{inputs.foo}}`.
7. Where does substitution live? Currently the engine passes `input_data` JSON to hero_proc and the AI interpreter does the substitution. For `{{outputs.*}}` we need engine-side substitution before dispatch, since outputs aren't known to hero_proc. Plan: do all substitution engine-side, send final resolved strings to hero_proc. Confirm.

Per-step play (`node_play`)

8. Persistence: the spec says "ephemeral, no Play record". Do we persist the `NodeRun` itself, or is even that in-memory only? Log streaming needs some job_id to poll against.
9. Should `node_play` respect the node's retry/timeout config, or strip those for test runs?
10. Can it run even if the workflow is unsaved (dirty editor state)? I'd say yes — serialize the current editor state and pass it in.
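To make the Q7 plan concrete, here is a minimal sketch of engine-side substitution, assuming the ctx is flattened to dotted-path string keys (the real ctx would be structured JSON; the function name and ctx shape are illustrative only):

```rust
use std::collections::HashMap;

// Sketch: resolve {{inputs.X}} / {{outputs.a.b}} against a flattened ctx.
// Legacy bare {{foo}} falls through to inputs.foo, per the (a) migration path.
// Unresolvable placeholders are left intact rather than erased (an assumption).
fn interpolate(template: &str, ctx: &HashMap<String, String>) -> String {
    let mut out = String::new();
    let mut rest = template;
    while let Some(start) = rest.find("{{") {
        out.push_str(&rest[..start]);
        let after = &rest[start + 2..];
        match after.find("}}") {
            Some(end) => {
                let key = after[..end].trim();
                let resolved = ctx
                    .get(key)
                    .or_else(|| ctx.get(&format!("inputs.{}", key)));
                match resolved {
                    Some(v) => out.push_str(v),
                    None => {
                        // Leave the placeholder untouched.
                        out.push_str("{{");
                        out.push_str(&after[..end]);
                        out.push_str("}}");
                    }
                }
                rest = &after[end + 2..];
            }
            None => {
                // Unbalanced braces: emit literally and keep scanning.
                out.push_str("{{");
                rest = after;
            }
        }
    }
    out.push_str(rest);
    out
}
```

hero_proc would then receive fully resolved strings, keeping it a dumb executor.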
Condition builder
11. Operator set for v1: `==`, `!=`, `>`, `>=`, `<`, `<=`, `contains` (string/array), `in` (value in array), `exists`. Add `matches` (regex)? Skip it?
12. Value side: literal only, or also `{{outputs.*}}` from other upstream nodes? The latter is more powerful but needs scope analysis.

UX details
13. Card size: ~160px height mentioned. Width? I'll use 280px unless you have a preference.
14. Toolbar: full-width above canvas (meta sidebar still left), or toolbar spans only the canvas area? Drag-to-add kept, or switch to click-to-add-at-center?
15. Step-play modal: blocks the editor underneath (modal-style), or side panel that lets you keep editing? Spec says "modal" — confirming full-screen overlay.
Model list (phase 5)
16. Investigation first, or just add a secondary OpenRouter `/models` fetch in the UI proxy and ship it? If the former, I'll open a separate issue with findings before patching. Preference?

I'll wait for answers before writing any code. I'll likely tackle things in the order of the phases in the spec (1 → 5).
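To make Q1 concrete, an output schema within the proposed subset might look like this (field names are purely illustrative):

```json
{
  "type": "object",
  "properties": {
    "services": {
      "type": "array",
      "items": { "type": "string", "enum": ["hero_books", "hero_mail"] }
    },
    "count": { "type": "integer" }
  },
  "required": ["services"]
}
```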
1. OK. 2. OK. 3. Form from fields. 4. OK.
5. Encoding/decoding should fail, because it's a node execution failure. If someone wants to guard against that with their own logic, they can take the output raw, pass a script to decode it, and if that doesn't work, give it to the AI to reprompt, etc.
6. (a).
7. hero_logic should template in variables and decode outputs. We keep hero_proc a simple AI executor.
8. Yes, persist. In fact, let's not make node runs ephemeral at all — probably simpler to keep everything unified and treat a single node run as a regular play with one node.
9. Yes, it should respect them. The goal is to reduce exceptions and keep things uniform.
10. Maybe, but that sounds like it might add complexity.
11. Sure, add `matches`. In fact, the connector should also have configuration for which of the previous step's outputs map to the inputs of the next. This also answers 12 — a node from two steps earlier can still pass output as input via a connector.
13. No preference. 14. Spans the canvas. 15. Modal.
16. Investigate. hero_logic uses hero_proc, which uses hero_aibroker. hero_aibroker should work perfectly; no hardcoded mock models should be displayed. Everything configurable on hero_aibroker should be available on the AI node, and we want to do this by mapping inputs to values with defaults — e.g. a user can provide a temperature value as a flow input, or not. We need this mapping functionality.
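The input-to-param mapping with defaults from answer 16 could be sketched roughly like this (struct and function names are hypothetical, not the real API):

```rust
use std::collections::HashMap;

// Sketch: a flow input can override an AI param (e.g. temperature);
// otherwise the action's configured default applies. Shapes are illustrative.
#[derive(Debug, Clone, PartialEq)]
struct AiParams {
    model: String,
    temperature: f64,
    max_tokens: u32,
}

fn resolve_params(defaults: &AiParams, inputs: &HashMap<String, String>) -> AiParams {
    AiParams {
        model: inputs
            .get("model")
            .cloned()
            .unwrap_or_else(|| defaults.model.clone()),
        temperature: inputs
            .get("temperature")
            .and_then(|v| v.parse().ok())
            .unwrap_or(defaults.temperature),
        max_tokens: inputs
            .get("max_tokens")
            .and_then(|v| v.parse().ok())
            .unwrap_or(defaults.max_tokens),
    }
}
```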
All 5 phases implemented and deployed. Summary + aibroker investigation below.
Phase 1 — UX fundamentals ✅
`box-sizing: border-box; overflow: hidden` + CSS size = cy node size → edge arrows terminate on the visible card edge.

Phase 2 — Typed inputs + AI param mapping ✅
- `NodeConfig` gained `input_schema`, `output_schema`, `output_format` (string JSON Schema + format enum: `raw|text|json|toml`).
- Substitution handles `{{inputs.X}}`, `{{outputs.X}}`, dotted paths (`{{outputs.foo.bar}}`); legacy `{{X}}` falls through to `inputs.X`.
- `build_node_input` produces a structured ctx: `{ inputs, outputs, ...legacy flat keys }`.
- The workflow-level `input_schema` is recomputed on every save — the union of root nodes' input schemas, persisted on `Workflow`.
- The node editor exposes `output_format` as a select.
- When inputs map to `model`/`temperature`/`max_tokens`, those override the action's defaults — so a workflow input can drive temperature across a branch.

Phase 3 — Typed outputs + parsing ✅
- Node output is parsed according to `output_format`.
- `json`/`toml` parse failures flip the node to `Failed`, with the parser error + raw output as the error message.
- `raw`/`text` carry the string through.
- Parsed values land in `NodeRun.output_data`, so downstream `{{outputs.*}}` resolves against structured data rather than double-escaped strings.

Phase 4 — Condition builder + cross-hop connectors ✅
- `DataMapping` gained `source_node` — a connector can now pull a field from any upstream ancestor, not just the direct parent.
- `build_node_input` honors `source_node` and falls back to the edge's `from_node` when empty (back-compat).
- Operators: `==`, `!=`, `>`, `>=`, `<`, `<=`, `contains`, `in`, `matches`, `exists`. Clauses joined by `AND`. A bare field = truthy check. Dotted paths resolve against the ctx (so `outputs.foo.bar` works).
- The edge drawer has a `Builder`/`Raw` toggle. Builder renders `[field ▾][op ▾][value]` rows with `+ AND`; fields populate from the source node's declared output_schema via a `<datalist>`; the raw toggle retains the freeform textarea.
- `matches` currently falls back to substring matching for patterns with regex metacharacters — the `regex` crate wasn't pulled in, to keep the dep tree tight. Real regex can ride in when needed.

Phase 5 — Step-level play + aibroker ✅ (partial)
- `node_play(workflow_sid, node_id, input_data) -> Play`. Creates a normal `Play` with `workflow_sid` pointing at the parent and exactly one `NodeRun` — so it shows up in the workflow's play list, and `play_status`/`node_logs` work unchanged (per your "unify, don't make ephemeral" answer).
- Respects `retry_count`/`retry_delay_secs`/`timeout_secs` exactly like a full play.
- A `▶` button on each action card (edit mode only). Click → modal with: left = input form generated from the node's `input_schema`, right = output preview, bottom = hero_proc logs streamed via polling every 800 ms.

aibroker investigation (Q16)
Confirmed the `/api/aibroker/models` proxy forwards whatever aibroker returns verbatim — no caching or hardcoded mocks in hero_logic_ui.

Live response from `hero_aibroker::models.list` on this host: 3 models — matching your earlier observation. The OpenRouter catalog (~300 models) isn't being merged in. The gap lives in hero_aibroker itself, not in the hero_logic_ui proxy. Recommendation: open a separate ticket against `hero_aibroker` along the lines of "pull the full OpenRouter `/models` catalog when an OpenRouter key is configured, expose those models under the same `models.list` response". Once that lands, the grouping-by-provider UX in the AI node's model dropdown is a ~10-line change here — worth deferring until the catalog actually has multiple providers to group.

The mappable-param side of the answer (temperature / max_tokens / model as workflow inputs) is already live via Phase 2's engine override.
Deployed — the `hero_logic` service was restarted on this host with all phases active. Ready for visual review.

aibroker investigation — root cause of the 3-model limit
Found the gap. Two separate problems in `hero_aibroker`:

1. Model filtering is too strict

- Models come from a static config under `~/hero/var/hero_aibroker/models` — not fetched from OpenRouter live. The file has ~55 model entries.
- When the registry is built (`hero_aibroker_lib/src/registry/mod.rs:54-78`), each model's backends are filtered against the configured `ProviderMap` — a provider instance must match the backend's `provider` string. If no backend matches, the whole model is silently skipped.
- Only 3 entries match the `openrouter` provider instance that gets created when `OPENROUTER_API_KEY` is set. The rest expect provider instances we never create.

2. `system_prompt` is ignored by the chat handler

`hero_aibroker_server/src/api/chat.rs:22-52` respects `temperature`, `top_p`, `max_tokens`, `stream`, but never attaches the `system_prompt` field to the outgoing LLM request. That field is only used in the admin UI.

Recommended fixes (separate hero_aibroker tickets)

- Pull the full OpenRouter `/models` catalog (~300 entries) when an OpenRouter key is present, instead of relying only on the hand-curated YAML. Or at minimum, rewrite the filter to accept any model whose backend provider name exists in the `ProviderMap`, regardless of the backend's `model_id` matching specific instances.
- Pass `system_prompt` through the chat API: prepend it as a `system`-role message to the messages array. A straightforward change at the chat handler.

Hero_proc UI staleness — mentioned in passing but same ticket flavour: hero_proc's job detail UI should surface the ai_config fields (temp/max_tokens/system_prompt) from the job's ActionSpec. Currently it only shows the raw script. Separate hero_proc UI ticket.
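The recommended `system_prompt` fix is small enough to sketch — prepend it as a `system`-role message. The message shape below is an assumption for illustration, not aibroker's actual types:

```rust
// Sketch: attach system_prompt as a system-role message before dispatch.
// Message struct is hypothetical; only prepends when the caller didn't
// already supply a system message (a design assumption).
#[derive(Debug, Clone, PartialEq)]
struct Message {
    role: String,
    content: String,
}

fn apply_system_prompt(system_prompt: Option<&str>, mut messages: Vec<Message>) -> Vec<Message> {
    if let Some(sp) = system_prompt {
        if !messages.iter().any(|m| m.role == "system") {
            messages.insert(0, Message { role: "system".into(), content: sp.into() });
        }
    }
    messages
}
```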
Refactor plan — nodes as pointers to hero_proc actions
Filed as lhumina_code/hero_proc#47 on the hero_proc side. The idea is to strip action configuration out of the workflow card entirely: a node becomes just `{ input_schema, output_schema, action_name, action_context }`, and the card shows a read-only preview + a `Configure in hero_proc ↗` button. New nodes render as dashed placeholders with a `Create in hero_proc ↗` jump-out. A single `+ Step` in the toolbar — the interpreter lives on the action, not the node.

The blocker
Workflow inputs need a route to the action's script/prompt/ai_config. Today hero_logic does its own `{{var}}` rendering in `engine/node_executors.rs::execute_action` (fetch ActionSpec → mutate → re-push to `job.create`). Once nodes become pointers, that indirection doesn't fit — we want plain `job.create(action_name, inputs)`.

Proposed fix (on hero_proc)
- Add `input_schema: Option<String>` on `ActionSpec` — JSON Schema, same shape hero_logic already uses.
- Add `inputs: Option<Value>` on `JobCreateInput`.
- `{{var}}`/`{{nested.path}}` templating at dispatch time, across `script`, `ai_config.system_prompt`, `ai_config.model`/`temperature`/`max_tokens`, and `env` values. Port `interpolate_template` from here.
- Expose `HERO_INPUT_*` env vars for scripts that don't use `{{}}`.
- Validate against `input_schema` at `job.create`, reject on mismatch.
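The `HERO_INPUT_*` convenience mapping might flatten dotted input paths into env-var names roughly like this (the exact naming rules are an assumption):

```rust
use std::collections::HashMap;

// Sketch: "foo.bar" -> HERO_INPUT_FOO_BAR. Non-alphanumeric characters
// collapse to underscores; values pass through as-is. Illustrative only.
fn to_env_vars(inputs: &HashMap<String, String>) -> HashMap<String, String> {
    inputs
        .iter()
        .map(|(k, v)| {
            let name: String = k
                .chars()
                .map(|c| if c.is_ascii_alphanumeric() { c.to_ascii_uppercase() } else { '_' })
                .collect();
            (format!("HERO_INPUT_{}", name), v.clone())
        })
        .collect()
}
```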
The hero_aibroker `system_prompt` drop is still a separate blocker — without that, AI actions can't actually use the templated system prompt even after the hero_proc work lands.

Once #47 ships, this side becomes:

- Drop the `ai_config`/`script`/interpreter editors from the card body; the card keeps only `Configure ↗`.
- Simplify `execute_action` to `job.create(action_name, context, inputs: <ctx>)`.
- Keep a single `+ Step`.
- Drop `interpolate_template` (it lives in hero_proc then).

Estimated ~40% surface-area cut on the editor. Waiting on hero_proc#47.