Bring all Hero services to 4-pillar standard (OpenRPC + MCP + Health + Socket) #34
## Goal

Bring all user-facing Hero services to the full Hero OS service standard (the "4 pillars"), then wire them to the AI Assistant. hero_books is the reference implementation — it has all 4. Every other service must match.

## The 4 Pillars (hero_books = gold standard)
| Pillar | Interface |
| --- | --- |
| OpenRPC | `rpc.discover` returns full API spec |
| MCP | `POST /mcp` with curated AI-friendly tools |
| Health | `GET /health` or `rpc.health` |
| Socket | `~/hero/var/sockets/<service>.sock` |

### 1. OpenRPC — the service contract

Every service implements `rpc.discover`, returning a full OpenRPC 1.3 specification. This is the machine-readable contract that describes every method, its parameters, return types, and errors. hero_inspector auto-discovers services by probing sockets for `rpc.discover` — if a service doesn't respond, it's invisible to the platform. OpenRPC also powers documentation generation, client SDK generation, and the inspector's MCP gateway. A service without OpenRPC is undocumented and undiscoverable.

### 2. MCP — the AI interface

Every service exposes `POST /mcp` with curated, hand-crafted tool definitions optimized for LLM consumption. Tools are not auto-generated from OpenRPC — they are a deliberately curated subset with AI-friendly descriptions, sensible defaults, and clear input schemas. Context-scoped services (hero_osis, hero_foundry) accept an optional `context` parameter on every tool, defaulting to the user's active Hero OS context. Global services (hero_auth, hero_books, hero_inspector, hero_indexer) ignore context. Each service owns its MCP interface independently — no routing through gateways or proxies. A stdio↔HTTP bridge script per service connects it to the AI Assistant via `mcp_servers.json`.

### 3. Health — operational visibility

Every service responds to `GET /health` (HTTP services) or `rpc.health` (raw socket services) with a structured status. This is what zinit monitors, what the inspector checks for liveness, and what deployment automation uses to verify a service came up correctly. Health endpoints should include version, uptime, and any service-specific readiness indicators (e.g., hero_books reports library and embedder status).

### 4. Unix Socket — the universal transport

Every service binds to `~/hero/var/sockets/<service>.sock`. Unix sockets are the standard transport in Hero OS — no TCP ports except for external-facing proxies. Sockets enable inspector auto-discovery (scans the socket directory), zero-config inter-service communication, and clean filesystem-based access control. Context-scoped services create per-context sockets: `~/hero/var/sockets/{context}/<service>.sock`. Socket symlinks provide stable aliases for services with multiple names.

## Verified Audit (live on herodev, 2026-03-18)
All services running and healthy. Inspector discovers 42 sockets.
(Per-service audit table: OpenRPC endpoints observed were `rpc.discover` on sockets, `/api/rpc` over HTTP, and the inspector's `/mcp/:id` gateway proxying `rpc.discover` to each backend.)

Key findings from live testing:
- `hero_auth_ui.sock` (process: `hero_auth_server --bind unix:...hero_auth_ui.sock`) answers `rpc.discover` — inspector sees 19 methods. Only MCP is missing.
- `hero_foundry_server` (backend, OpenRPC at `/api/rpc`) and `hero_foundry_admin` (UI dashboard, proxies via SDK). MCP goes on the server.
- `rpc.discover` works via raw socket (confirmed with socat). Returns full merged per-domain spec. Inspector can't see it because it only probes HTTP.
- OpenRPC is served at `/api/rpc`, not `/rpc`.

## Context Architecture
Hero OS contexts (Geomind, Incubaid, ThreeFold, Default) are the fundamental data scoping mechanism. A context is a workspace that isolates all user data — contacts, files, calendar, media, projects. The context selector in Hero OS (top-left) determines what every app shows. Context-awareness is built into the MCP pillar: context-scoped services route tool calls to the right data based on the user's active context.
Per-context paths:

- `~/hero/var/sockets/{ctx}/hero_osis_server.sock`
- `~/hero/var/hero_foundry/webdav/{ctx}/`

The AI Island already passes `context_name` to the chat function. Today hero_shrimp doesn't use it because no context-aware MCP tools exist yet. As part of this work, context flows end-to-end: Hero OS → AI Island → hero_shrimp → `HERO_CONTEXT` env → bridge script → service endpoint.

## Known Blockers
- hero_rpc commit `5094617` has `use hero_service::ZinitLifecycle` (should be `HeroLifecycle`). Fix on WIP branch `development_mik_34`, update Cargo.lock.

## Execution Plan — Learn Then Replicate
### Phase 0: Fix blockers

- `development_mik_34` branch, update hero_indexer Cargo.lock

### Phase 1: hero_auth — validate bridge pattern (bridge only, MCP exists)

- `mcp_auth` bridge script → `hero_auth_ui.sock` at `/mcp`
- `mcp_auth` entry in `mcp_servers.json`

### Phase 2: hero_foundry — first MCP addition (has OpenRPC)

- `hero_foundry_server` (`POST /mcp`)
- Tools: `repo_list_files`, `repo_read_file`, `repo_write_file`, `repo_find`, `server_list_repositories`
- `context` param for WebDAV scoping
- `mcp_foundry` bridge + config

### Phase 3: hero_inspector — self-MCP

- `POST /mcp` (self) alongside existing `/mcp/:id` (gateway)
- Tools: `services_list`, `service_detail`, `scan`, `status`
- `mcp_inspector` bridge + config

### Phase 4: hero_indexer — MCP for search

- `POST /mcp` to hero_indexer_ui
- Tools: `search`, `index_status`, `collections_list` via `proxy_to_socket()`
- `mcp_indexer` bridge + config

### Phase 5: hero_osis — MCP for personal data (largest, context-critical)

- `POST /mcp` to hero_osis_ui
- `context` parameter (default: session context)
- `mcp_osis` bridge + config

### Phase 6: Context flow through to MCP

- `context_name` through hero_shrimp's `/api/chat` (currently ignored)
- `HERO_CONTEXT` into MCP bridge process env
- Services read `HERO_CONTEXT` for context-aware service routing

### Phase 7: OpenRPC gaps

- Add `rpc.discover` where missing
- Proxy `rpc.discover` to backend
- Normalize endpoint paths (`/api/rpc` vs `/rpc`)

### Phase 8: Build, deploy, full integration test
## Acceptance Criteria

- Every service exposes `POST /mcp` with hand-crafted tool definitions
- Every service has an entry in `mcp_servers.json`

## Related

Issue renamed from "Wire all Hero services to AI Assistant via MCP" to "Bring all Hero services to 4-pillar standard (OpenRPC + MCP + Health + Socket)".

## Session 15 Progress — Phase 0b: hero_rpc fix ✅
### Problem

hero_rpc commit `5094617` had a broken import in `crates/server/src/server/server.rs:38` — the file imported `ZinitLifecycle`, but `hero_service` now exports `HeroLifecycle` (the renamed type). This broke compilation of `hero_rpc_server` for all downstream services — cargo compiles the entire crate including OServer even if consumers only use `ZinitLifecycle` via the type alias.

### Fix

Branch: `development_mik_34` on hero_rpc (commit `54d0258`)

- `use hero_service::ZinitLifecycle` → `use hero_service::HeroLifecycle` in OServer
- `pub type ZinitLifecycle = HeroLifecycle` alias in `lib.rs` preserves backwards compat for all 25+ downstream services

### Verified
- `development_mik_34` created, Cargo.toml updated to `branch = "development_mik_34"`, Docker release build succeeded ✅
- Deprecation warnings only (`ZinitLifecycle` deprecated in favor of `HeroLifecycle`) — non-blocking, can be addressed in a separate cleanup

### Assessment of my_init / hero_init

- `my_init` (geomind_code) = evolution/rebrand of zinit itself, same service supervisor
- `hero_init` (geomind_research) = cloud-init style provisioning tool, different purpose

Next: Phase 1 — hero_auth bridge wiring
## Session 15 Progress — Phases 1-6 Complete

### Phase 1: hero_auth bridge ✅

- `mcp_auth` bridge script
- `mcp_servers.json` entry
- Routes to `hero_auth_ui.sock` (verified live)

### Phase 2: hero_foundry MCP ✅

- `src/http/mcp.rs` with 5 curated tools (repo_list_files, repo_read_file, repo_write_file, repo_find, server_list_repositories)
- `/mcp` route + `/rpc` alias (fixes inspector discovery)
- `mcp_foundry` bridge script

### Phase 3: hero_inspector self-MCP ✅

- `openrpc_to_mcp_tools()` conversion
- `mcp_inspector` bridge routes to `/mcp/hero_inspector`

### Phase 4: hero_indexer MCP ✅

- `main.rs` with 3 tools (search, index_status, collections_list)
- `mcp_indexer` bridge script

### Phase 5: hero_osis MCP ✅

- `context` param, defaults to `HERO_CONTEXT` env → "default"
- `rpc_proxy_inner` refactored into reusable `rpc_call_inner` for MCP dispatch
- `mcp_osis` bridge script

### Phase 6: Context plumbing ✅

- `context_name` now serialized in ChatRequest to shrimp
- `context` param passed through to MCP tools

### Build infrastructure

- `build-local.sh` updated to copy MCP bridge scripts to `dist/bin/`
- `mcp_servers.json` auto-transformed to `mcp.json` (Claude Desktop format) for shrimp

### All commits (local, not pushed yet)
(Per-repo commit table: eight repos on `development_mik_34`, two on `development_mik_6_1`.)

### Related issues filed
Next: Phase 8 — Build in Docker, deploy to herodev, test end-to-end
## Session 15 — Phase 8: Deploy & Test Results

### Deployed to herodev ✅

All binaries rebuilt in Docker, image packed and pushed as `:herodev`, container restarted.

### MCP Endpoint Test Results

(Per-endpoint results table; includes the new `/rpc` alias.)

### What's working

- `/rpc` vs `/api/rpc` mismatch resolved
- Bridge scripts in place at `/root/hero/bin/` in the container

### Remaining items

### Related
## Session 15 — Comprehensive Status & Roadmap

### What's done (all on WIP branches, NOT merged to development)

#### Commits by repo

- `development_mik_34` — `54d0258`
- `development_mik_34` — `9e07628` (`/rpc` alias)
- `development_mik_34` — `253472e`
- `development_mik_34` — `50f712d`, `9f40a7d`
- `development_mik_34` — `d7ec91f`
- `development_mik_34` — `709ef6d`
- `development_mik_34` — `ddb7160`, `3cea7f7`
- `development_mik_34` — `99e3009`
- `development_mik_6_1` — `469edae`
- `development_mik_6_1` — `cb9557f`

### Live verification on herodev
### What remains for 100%

- `dx build` for context_name to flow from UI dropdown → shrimp. MCP tools work independently without this.
- `bun build --compile` for context plumbing changes to take effect in the compiled binary deployed to herodev.

### Ecosystem Roadmap — How all issues fit together

#### Design principle established

OpenRPC is the single source of truth — same philosophy as OSIS generating from OTML schemas. MCP tools, client SDKs, documentation, and service discovery are all auto-generated from the OpenRPC spec. Hand-written MCP handlers from #34 are temporary scaffolding until HeroRpcServer auto-generates `/mcp` (proposed extension of #27).

## Session 15 — Final Verification ✅
### All 6 MCP endpoints verified on herodev
Total: 54 MCP tools across 6 services
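For reference, the kind of JSON-RPC envelope such a verification sends to each `POST /mcp` endpoint can be sketched as follows — a hedged illustration: the `tools/call` method name follows the MCP spec, while the tool name `contact_list` is hypothetical and the `context` default mirrors the `HERO_CONTEXT` behavior described in this issue:

```python
# Illustrative MCP tools/call envelope for a context-scoped service.
# Tool name `contact_list` is hypothetical; the `context` argument is the
# per-tool parameter this issue adds, defaulting to HERO_CONTEXT when omitted.
import json

def envelope(method, params=None, id=1):
    """Build a JSON-RPC 2.0 request object."""
    return {"jsonrpc": "2.0", "id": id, "method": method, "params": params or {}}

call = envelope("tools/call", {
    "name": "contact_list",               # hypothetical hero_osis tool
    "arguments": {"context": "geomind"},  # omit to fall back to HERO_CONTEXT
})
print(json.dumps(call, indent=2))
```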
### Fixes applied

- `proxy_to_service()` now sends to `/rpc` instead of `/` (was causing 404 for all services' `tools/call`)
- Bridges target `hero_inspector_ui.sock` (MCP gateway), not `hero_inspector_server.sock` (raw RPC)
- `/rpc` alias enables inspector to discover 32 OpenRPC methods (was 0)

### Remaining
### Master roadmap filed as #38

Includes #18 (AI broker code generation), #27 (HeroRpcServer), #36 (clean MCP arch), and all other issues in dependency order.
## Session 15 — Final Status

### Done (verified on herodev from development branch)

### Remaining: shrimp-to-bridge stdio integration
Shrimp spawns the bridge scripts, but the MCP initialize handshake times out. Direct curl to the endpoints works perfectly. The issue is in the compiled Bun binary's stdio pipe to the shell scripts. Next session: debug with shrimp from source, add logging to mcp_manager.ts.
## Session 15 Handoff — Ready for Session 16

### Completed

- Merged to `development`, deployed to herodev

### Blocker for next session
Shrimp cannot load bridge scripts via the stdio MCP protocol. Bridge scripts work perfectly via direct curl. The issue is in the Bun compiled binary spawning shell scripts and doing JSON-RPC over stdin/stdout pipes. Shrimp starts, spawns bridges, sends `initialize`, but never receives a response — bridges time out after 15s.

### Next session priorities
### Branch state

- `development`: all MCP endpoints merged (8 repos)
- `development_mik_6_1`: hero_aibroker + hero_shrimp (context plumbing + MCP config, not merged)