Unify server lifecycle: all Hero services should use one pattern #27

Open
opened 2026-03-16 11:56:44 +00:00 by timur · 10 comments

Problem

Hero services currently use four different patterns to start up and manage their lifecycle:

  • OServer::run_cli() (hero_os): full OSIS lifecycle, but coupled to the OsisAppRpcHandler + OsisDomainInit traits
  • ZinitLifecycle + manual CLI dispatch (hero_redis, hero_embedder, hero_books, hero_indexer, hero_aibroker): each reimplements CLI parsing and serve dispatch
  • Custom, no lifecycle (hero_inspector, hero_fossil, hero_proxy): no zinit integration, ad-hoc CLI args
  • Custom OSIS, no lifecycle (hero_osis_openrpc): uses OServer internals but not run_cli()

This fragmentation means:

  • Each service re-invents CLI parsing, socket binding, and shutdown logic
  • Some services lack zinit lifecycle commands entirely (no run, start, stop, status, logs)
  • The mandatory endpoints (/health, /.well-known/heroservice.json, /openrpc.json, POST /rpc) are implemented inconsistently
  • OServer::run_cli() can't be used by non-OSIS services because it requires domain-specific traits

Root Cause

OServer conflates two concerns:

  1. Universal server lifecycle — CLI parsing, zinit integration, socket binding, mandatory endpoints, graceful shutdown
  2. OSIS domain management — context registry, domain registration, seed files, per-domain sockets

Every Hero service needs (1). Only OSIS services need (2). But because they're fused together in OServer, non-OSIS services can't use the standard lifecycle.

Proposed Architecture

Core Principle

All Hero servers are the same thing: a process that binds an Axum router to a Unix socket and serves JSON-RPC over HTTP. The only differences are the routes they register and any background tasks they run.

Solution: HeroServer in hero_rpc

Introduce a HeroServer builder (in hero_rpc::server or a new hero_rpc_service crate) that every service uses:

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    HeroServer::new("hero_inspector_server")
        .description("Hero Inspector")
        .repo_url("https://forge.ourworld.tf/lhumina_code/hero_inspector")
        .package_name("hero_inspector_server")
        .router(build_my_router())  // Axum Router with service-specific routes
        .openrpc_spec(include_str!("../openrpc.json"))
        .run()
        .await
}

HeroServer::run() handles everything:

  1. CLI parsing — standard LifecycleCommand (run/start/stop/status/logs/ui/zinit) + serve
  2. Lifecycle dispatch — delegates to ZinitLifecycle for lifecycle commands
  3. Mandatory endpoints — auto-injects /health, /.well-known/heroservice.json, /openrpc.json
  4. Socket binding — binds the merged router to ~/hero/var/sockets/{service_name}.sock
  5. Graceful shutdown — SIGTERM/SIGINT handling
  6. Serve args — standard --socket-dir plus service-specific args via .serve_args::<T>()

OServer becomes a HeroServer extension

OServer continues to exist but builds on HeroServer internally, adding OSIS-specific domain management:

// OServer just configures a HeroServer with OSIS domain routes
impl OServer {
    pub async fn run_cli(...) {
        let router = /* build OSIS domain routes */;
        HeroServer::new("hero_os_server")
            .router(router)
            // ... lifecycle config ...
            .run()
            .await
    }
}

Migration Path

For each service, the migration is:

  1. Replace custom CLI/lifecycle code with HeroServer::new(...).router(existing_router).run()
  2. Remove manual /health and discovery endpoint handlers (auto-injected)
  3. Gain all lifecycle commands automatically

Scope

Services to migrate (in priority order):

  1. hero_inspector (server + ui) — currently no lifecycle
  2. hero_fossil — currently no lifecycle
  3. hero_proxy — currently no lifecycle
  4. hero_osis_openrpc — currently partial lifecycle
  5. hero_redis — currently manual lifecycle dispatch
  6. hero_embedder — currently manual lifecycle dispatch
  7. hero_books — currently manual lifecycle dispatch
  8. hero_indexer — currently manual lifecycle dispatch
  9. hero_aibroker — currently manual lifecycle dispatch
  10. hero_os — refactor OServer to use HeroServer internally

Deliverables

  • HeroServer builder in hero_rpc (or hero_rpc_service)
  • Migrate all services listed above
  • Update skills documentation (hero_rpc_server_lifecycle, hero_service, hero_openrpc_server_admin_template)
  • Update home/docs/ with unified architecture docs
  • Deprecate direct ZinitLifecycle + manual CLI pattern in documentation

Implementation Plan

Architecture

New HeroServer builder in hero_service crate (hero_rpc repo). Every Hero service uses it:

HeroServer::new("hero_inspector_server")
    .description("Hero Inspector")
    .repo_url("https://forge.ourworld.tf/lhumina_code/hero_inspector")
    .package_name("hero_inspector_server")
    .openrpc_spec(include_str!("../openrpc.json"))
    .run::<MyServeArgs>(|args| async { Ok(build_router(args)) })
    .await

HeroServer::run() handles: CLI parsing (LifecycleCommand + Serve), zinit lifecycle dispatch, mandatory endpoint injection (/health, /.well-known/heroservice.json, /openrpc.json), Unix socket binding, optional TCP binding, graceful shutdown.

API Variants

  • run::<A>(|args| -> Router) — standard: CLI + lifecycle + serve
  • run_simple(|| -> Router) — no custom serve args
  • run_raw::<A>(|args| -> ()) — CLI + lifecycle, service manages own sockets (for OServer)
  • serve(|| -> Router) — no CLI parsing, just bind+serve
  • lifecycle() -> ZinitLifecycle — for manual dispatch (services with extra CLI commands)

Steps

  1. Add HeroServer to hero_rpc/crates/service — new hero_server.rs module
  2. Migrate hero_inspector_server — uses serve() + lifecycle() (has custom CLI subcommands)
  3. Migrate hero_inspector_ui — uses run() with tcp_bind()
  4. Update home/docs/architecture/hero-server.md
  5. Update hero_skills: hero_rpc_server_lifecycle, hero_service, hero_openrpc_server_admin_template
  6. Future: migrate remaining services (hero_fossil, hero_proxy, hero_redis, hero_embedder, hero_books, hero_indexer, hero_aibroker, hero_os)

Branches

All repos: development_home27


Progress Update

Completed

  • HeroServer builder implemented in hero_rpc/crates/service (branch development_home27)
  • hero_inspector_server migrated — lifecycle commands + socket binding via HeroServer
  • hero_inspector_ui migrated — lifecycle commands + UDS/TCP binding via HeroServer
  • Architecture docs: home/docs/architecture/hero-server.md
  • Skill updated: hero_rpc_server_lifecycle — HeroServer is now primary pattern

Branches pushed

  • hero_rpc → development_home27
  • hero_inspector → development_home27
  • home → development_home27
  • hero_skills → development_home27

What changed

  • hero_inspector_server: -263 lines, +139 lines (removed manual hyper/socket binding, added lifecycle commands)
  • hero_inspector_ui: removed manual socket accept loop, added lifecycle + graceful shutdown
  • Both now support run/start/stop/status/logs/ui/zinit commands via zinit

Remaining (future PRs)

  • Migrate: hero_fossil, hero_proxy, hero_redis, hero_embedder, hero_books, hero_indexer, hero_aibroker
  • Refactor OServer to use HeroServer internally

Refinement: Simplify HeroServer

After the initial implementation, several simplifications were identified:

Changes

  1. Remove TCP binding — Hero standardizes on Unix sockets only. hero_proxy handles TCP→UDS. No .tcp_bind() on HeroServer.

  2. Remove lifecycle-only fields from HeroServer — repo_url, package_name, branch belong on ZinitLifecycle, not on the server builder. HeroServer just runs servers. Lifecycle config is separate.

  3. Remove version — derivable from openrpc spec or CARGO_PKG_VERSION. Not a server concern.

  4. Derive socket path from name — always ~/hero/var/sockets/{service_name}.sock. Remove .socket_path() override.

  5. Replace skip_mandatory_endpoints() with two server types:

    • HeroRpcServer — requires openrpc spec, injects /health + /openrpc.json + /.well-known/heroservice.json
    • HeroUiServer — for UI crates whose build_router() already provides everything

Future (separate issue)

  • OServer = context-aware HeroRpcServer (context support built into HeroRpcServer, OServer becomes thin wrapper)
  • Merge common server infra from hero_rpc_server into hero_service

Implementing now

Simplifying HeroServer on the existing development_home27 branches.


Simplification Complete

Refactored HeroServer into two distinct types:

  • HeroRpcServer — requires OpenRPC spec (2nd arg), auto-injects /health, /openrpc.json, /.well-known/heroservice.json, extracts version from spec
  • HeroUiServer — no mandatory endpoints, for UI crates whose build_router() provides everything

Removed

  • TCP binding (Unix sockets only, per Hero standard)
  • repo_url, package_name, branch (lifecycle-only fields, stay on ZinitLifecycle)
  • version field (derived from OpenRPC spec)
  • socket_path override (convention: ~/hero/var/sockets/{service_name}.sock)
  • skip_mandatory_endpoints() (replaced by server type choice)

Usage

// RPC server
HeroRpcServer::new("hero_inspector_server", include_str!("../openrpc.json"))
    .description("Hero Inspector")
    .serve(|| async { Ok(build_router()) })
    .await

// UI server
HeroUiServer::new("hero_inspector_ui")
    .description("Hero Inspector Web UI")
    .serve(|| async { Ok(build_ui_router()) })
    .await

All branches updated and pushed.


hero_osis Migration Progress

Completed

  • dispatch_jsonrpc_auto_context() — extracts _context from JSON-RPC params (defaults to "root"), strips before forwarding. Also adds rpc.contexts method.
  • hero_osis_server migrated to HeroRpcServer — single socket, HTTP/1.1 transport, lifecycle commands

Architecture Change

Before: N sockets (one per context), raw newline-delimited JSON-RPC

~/hero/var/sockets/root/hero_osis_server.sock
~/hero/var/sockets/acme/hero_osis_server.sock

After: 1 socket, HTTP over UDS, context in _context param

~/hero/var/sockets/hero_osis_server.sock
POST /rpc {"method":"business.Company.get","params":{"_context":"acme","sid":"123"}}

Remaining

  • hero_osis_ui → HeroUiServer (proxy needs updating for HTTP backend)
  • Regenerate osis_server_generated.rs (pre-existing issue, not migration-related)

Branches

  • hero_rpc → development_home27 (dispatch_jsonrpc_auto_context)
  • hero_osis → development_home27 (server migration)

hero_osis Migration Complete

All components migrated:

  • dispatch_jsonrpc_auto_context() — extracts _context from params, adds rpc.contexts
  • hero_osis_server → HeroRpcServer — single socket, HTTP/1.1, context in params
  • hero_osis_ui → HeroUiServer — lifecycle commands, HTTP proxy to single backend socket

Architecture (before → after)

BEFORE: N raw sockets, newline JSON-RPC
  ~/hero/var/sockets/root/hero_osis_server.sock
  ~/hero/var/sockets/acme/hero_osis_server.sock
  hero_osis_ui → raw socket connect per context

AFTER: 1 HTTP socket, _context in params
  ~/hero/var/sockets/hero_osis_server.sock  (HTTP/1.1)
  ~/hero/var/sockets/hero_osis_ui.sock      (HTTP/1.1)
  hero_osis_ui → HTTP POST /rpc with _context injected

All repos on development_home27:

  • hero_rpc — HeroRpcServer, HeroUiServer, dispatch_jsonrpc_auto_context
  • hero_inspector — server + ui migrated
  • hero_osis — server + ui migrated
  • home — architecture docs
  • hero_skills — skill updates

Note

hero_osis_server has pre-existing compilation issues in generated osis_server_generated.rs files (unrelated to this migration). These need regeneration from the updated hero_rpc_osis branch.


Remaining Work & Next Steps

Migrate remaining services

  • hero_fossil → HeroRpcServer
  • hero_redis → HeroRpcServer
  • hero_embedder → HeroRpcServer
  • hero_books → HeroRpcServer
  • hero_indexer → HeroRpcServer
  • hero_aibroker → HeroRpcServer
  • hero_proxy → HeroRpcServer

Deprecate OServer (hero_os)

OServer is now redundant — its responsibilities are split:

  • Socket binding, CLI, lifecycle → HeroRpcServer
  • Domain registration, ServerState → UnixRpcServer/ServerState (reusable directly)
  • Core management socket → can become routes on the service's HeroRpcServer socket

hero_os_server should migrate to HeroRpcServer + ServerState for domain registration (same pattern as hero_osis). The hero_rpc_server crate's OServer can then be deprecated.

WASM UI servers (hero_os_ui, hero_archipelagos)

HeroUiServer works for WASM apps too — the router just serves different static assets (WASM binary + JS glue vs Askama templates). No separate server type needed.

Regenerate hero_osis generated code

The osis_server_generated.rs files need regeneration after switching to development_home27 branch of hero_rpc_osis. This is a pre-existing issue, not migration-related.

See #28 for standard testing and hero_service CLI tooling.


Switch from zinit to hero_init

Per Kristof's direction: zinit is being split into:

  • my_init (geomind_code/my_init) — simplified, back to basics for zos/mos
  • hero_init (lhumina_code/hero_init) — current zinit with advanced features, for Hero OS. No TOML format, all OpenRPC.

What needs to change in hero_rpc

hero_service/src/lifecycle.rs currently depends on zinit_sdk. The migration is a rename:

  • zinit_sdk crate dep → hero_init_sdk
  • zinit_sdk::ZinitRPCAPIClient → hero_init_sdk::HeroInitRPCAPIClient (or similar)
  • zinit_sdk::ServiceBuilder → hero_init_sdk::ServiceBuilder
  • zinit_sdk::ActionBuilder → hero_init_sdk::ActionBuilder
  • Socket: zinit_server.sock → hero_init_server.sock
  • Env var: ZINIT_SOCKET → HERO_INIT_SOCKET

The API surface stays the same — same methods (service_start, service_stop, service_status, logs_get, job_create, action_set).

Steps

  1. Create lhumina_code/hero_init — fork of zinit, rename crates
  2. Update hero_service/Cargo.toml — swap zinit_sdk → hero_init_sdk
  3. Update hero_service/src/lifecycle.rs — swap types/imports
  4. Update socket path default and env var name
  5. All downstream services get hero_init lifecycle automatically (they depend on hero_service)

Blocked on

  • hero_init repo being created with renamed crates

hero_init Migration Complete

What was done

  1. Forked geomind_code/zinit → lhumina_code/hero_init
  2. Renamed all crates: zinit_sdk → hero_init_sdk, zinit_server → hero_init_server, etc.
  3. Updated hero_service to depend on hero_init_sdk instead of zinit_sdk
  4. Renamed ZinitLifecycle → HeroLifecycle
  5. Updated env vars: ZINIT_SOCKET → HERO_INIT_SOCKET
  6. Updated socket paths: zinit_server.sock → hero_init_server.sock
  7. Updated CLI: zinit subcommand → hero-init

Repos & branches

  • lhumina_code/hero_initdevelopment_home27 (full rename, compiles clean)
  • lhumina_code/hero_rpcdevelopment_home27 (switched to hero_init_sdk)

Downstream impact

All services using hero_service get the change automatically. They import HeroLifecycle (or use it via HeroRpcServer/HeroUiServer). No changes needed in service repos — the rename is internal to hero_service.

The hero_inspector and hero_osis migrations on development_home27 will need cargo update to pick up the new hero_rpc commit.


Proposal: MCP as auto-generated endpoint on HeroRpcServer

With #34 (4-pillar standard), we've added hand-written MCP endpoints to hero_foundry, hero_indexer, hero_osis, and hero_inspector. This works but creates duplicated tool definitions that drift from OpenRPC.

Proposal

Since HeroRpcServer already holds the OpenRPC spec via include_str!() and auto-injects /health, /openrpc.json, /.well-known/heroservice.json — add /mcp as another auto-injected endpoint:

HeroRpcServer::new("hero_foundry_server", include_str!("../openrpc.json"))
    .description("Hero Foundry")
    .serve(|| async { Ok(router) })
    .await
// Auto-generates: /health, /openrpc.json, /.well-known/heroservice.json, /mcp

The conversion logic already exists — hero_inspector has openrpc_to_mcp_tools() that converts OpenRPC methods to MCP tool definitions. This can be extracted into a shared function in hero_service.

Optional curated override

For services that want AI-friendlier tool descriptions (fewer tools, better names):

HeroRpcServer::new("hero_foundry_server", include_str!("../openrpc.json"))
    .mcp_tools(curated_tools())  // optional — overrides auto-generated
    .serve(|| async { Ok(router) })
    .await

Philosophy alignment

Same pattern as OSIS generating from OTML schemas — OpenRPC is the single source of truth, MCP/SDKs/docs/discovery are all derived. No hand-maintained duplicates.

Scope

This would be a natural extension of the remaining #27 work (migrating remaining services). Each service that moves to HeroRpcServer automatically gets /mcp for free.

The hand-written MCP handlers from #34 serve as working implementations until this is ready. cc @timur

Reference
lhumina_code/home#27