Why embed it
The repo’s main program is intentionally tiny and hardcoded. Embed the loop in your own program instead if you want to:
- control the system prompt
- choose your own default model
- expose different tools
- swap the UI layer
- integrate with your application state
Minimal example
The three pieces you compose
openai.Client
Responsible for HTTP requests and streaming SSE responses from an OpenAI-compatible endpoint.
loop.Loop
Owns the agent loop:
- prepends the system message
- advertises tool schemas
- streams assistant deltas
- accumulates tool calls
- executes tools
- appends tool results
- continues until no more tools are called
Your tools
You supply a map[string]loop.Tool. Each tool becomes a function-style schema in the model request.
Collecting the final conversation
After RunLoop finishes, call agent.Messages() to retrieve the full conversation, including tool calls and tool results.
A common pattern
In a real application, it is common to keep hrns as the thin agent core and put your own policy around it:
- validate user input before creating messages
- choose the model outside the TUI
- wrap tools with logging or authorization
- persist agent.Messages() somewhere else
- render agent.Chunks() in your own UI