AI Assistant: Progressive SSE Streaming (word-by-word response rendering) #32
Current State
The AI assistant (Shrimp) returns responses via Server-Sent Events (SSE). The current implementation reads the stream incrementally and waits for the `event: done` event before displaying the response.

What works now:

- The complete response is displayed once the `done` event arrives

What's missing:

- Progressive, word-by-word rendering while the response is still being generated
The Enhancement
Show the AI response progressively as it's generated, word by word — like ChatGPT, Claude web, etc.
Shrimp already sends intermediate SSE events during generation:

- `event: token` — partial content as the LLM generates tokens
- `event: tool_call` — when the agent uses a tool
- `event: tool_result` — tool execution results
- `event: done` — final complete response
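On the wire, each of these arrives as a standard SSE frame: an `event:` line, a `data:` line, and a blank-line separator. An illustrative sketch (the payload contents are made up, not Shrimp's actual format):

```
event: token
data: Hel

event: token
data: lo

event: done
data: Hello
```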
We currently ignore `token` events and only process `done`. Progressive streaming would render `token` events in real time.

Implementation
1. Service layer (`ai_service.rs`)

Change `send_message` to accept a callback for streaming updates, or return a `Stream` that yields partial updates.
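A minimal sketch of the callback-based shape, with the SSE transport stubbed out so the signature is the focus. The `StreamEvent` enum and all names here are illustrative assumptions, not the existing `ai_service.rs` API:

```rust
/// Events surfaced to the caller while a response is generated.
#[derive(Debug, Clone, PartialEq)]
pub enum StreamEvent {
    Token(String), // partial content from an `event: token` frame
    Done(String),  // final complete response from `event: done`
}

/// Callback-based variant: `on_event` fires once per SSE event, so the UI
/// can append tokens as they arrive instead of waiting for `done`.
pub fn send_message(
    _prompt: &str,
    mut on_event: impl FnMut(&StreamEvent),
) -> Result<String, String> {
    // Stub transport: pretend the server streamed two tokens, then finished.
    let frames = vec![
        StreamEvent::Token("Hello, ".into()),
        StreamEvent::Token("world".into()),
        StreamEvent::Done("Hello, world".into()),
    ];
    let mut full = String::new();
    for frame in frames {
        if let StreamEvent::Token(t) = &frame {
            full.push_str(t); // accumulate the final text as tokens arrive
        }
        on_event(&frame);
    }
    Ok(full)
}
```

The callback keeps the service layer free of UI concerns; the chat view decides how to render each partial update.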
2. UI component (`island.rs` / chat view)

Update the message state model from `Pending → Complete` to `Pending → Streaming(partial_content) → Complete(full_content)`. The chat bubble renders the `Streaming` state with a blinking cursor and growing text.
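A sketch of what the extended state model could look like; the names and transition helpers are assumptions based on the arrows above, not existing code:

```rust
/// Lifecycle of a chat message in the UI.
#[derive(Debug, Clone, PartialEq)]
pub enum MessageState {
    Pending,           // request sent, nothing received yet
    Streaming(String), // partial content; grows with each `token` event
    Complete(String),  // full content after `event: done`
}

impl MessageState {
    /// Fold an incoming token into the state, moving Pending -> Streaming.
    pub fn push_token(self, token: &str) -> MessageState {
        match self {
            MessageState::Pending => MessageState::Streaming(token.to_string()),
            MessageState::Streaming(mut text) => {
                text.push_str(token);
                MessageState::Streaming(text)
            }
            // A token after `done` should not happen; keep the final text.
            done @ MessageState::Complete(_) => done,
        }
    }

    /// Transition to Complete with the authoritative final content.
    pub fn finish(self, full_content: &str) -> MessageState {
        MessageState::Complete(full_content.to_string())
    }
}
```

Taking the `done` payload as authoritative in `finish` means any token dropped mid-stream cannot corrupt the final message.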
3. Token event parsing

Process `event: token` in the SSE reader loop:
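A self-contained sketch of the parsing step: SSE frames are `event:`/`data:` line pairs terminated by a blank line, and `token` frames are surfaced as they appear instead of being dropped. The frame names match the events listed above; the payload handling is an assumption:

```rust
/// Parsed SSE frames relevant to the chat view.
#[derive(Debug, PartialEq)]
pub enum SseFrame {
    Token(String), // partial content to append to the bubble
    Done(String),  // final complete response
    Other(String), // tool_call, tool_result, anything else
}

/// Split a raw SSE byte stream (already decoded to text) into frames.
/// The real reader loop would process lines as they arrive from the socket.
pub fn read_sse(raw: &str) -> Vec<SseFrame> {
    let mut frames = Vec::new();
    let mut event = String::new();
    let mut data = String::new();
    for line in raw.lines() {
        if let Some(name) = line.strip_prefix("event: ") {
            event = name.to_string();
        } else if let Some(payload) = line.strip_prefix("data: ") {
            data = payload.to_string();
        } else if line.is_empty() && !event.is_empty() {
            // A blank line terminates the current frame.
            frames.push(match event.as_str() {
                "token" => SseFrame::Token(data.clone()),
                "done" => SseFrame::Done(data.clone()),
                _ => SseFrame::Other(event.clone()),
            });
            event.clear();
            data.clear();
        }
    }
    frames
}
```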
4. Agent tool use visualization (stretch goal)

When Shrimp uses tools (web search, file operations, etc.), show a status indicator in the chat while the tool runs. This requires parsing the `event: tool_call` and `event: tool_result` events.
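A tiny sketch of how tool events could map to status text in the chat bubble. The payloads are assumed to carry the tool name directly, which may not match Shrimp's actual format:

```rust
/// Agent activity events, parsed from `tool_call` / `tool_result` frames.
pub enum AgentEvent {
    ToolCall(String),   // tool invocation started
    ToolResult(String), // tool finished
}

/// Human-readable status shown in the chat while the agent works.
pub fn status_line(event: &AgentEvent) -> String {
    match event {
        AgentEvent::ToolCall(tool) => format!("Using tool: {tool}…"),
        AgentEvent::ToolResult(tool) => format!("Finished: {tool}"),
    }
}
```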
Files

- `hero_archipelagos/archipelagos/intelligence/ai/src/services/ai_service.rs`
- `hero_archipelagos/archipelagos/intelligence/ai/src/island.rs`
- `hero_archipelagos/archipelagos/intelligence/ai/src/views/`

Priority
Medium — the current fix prevents infinite spin. This enhancement improves UX but is not blocking.