hrns is a small Go project that gives you the core pieces of an agent harness without hiding the mechanics:
  • a streaming chat client for OpenAI-compatible /chat/completions
  • an agent loop that can execute tool calls and continue the conversation
  • a tiny interactive TUI for manual testing
  • a lightweight skill loader that exposes prompt files as a tool
This repo is aimed at exactly that: playing with agents, inspecting the moving parts, and then reusing the same pieces inside your own Go code.

What exists today

The current repo behavior is intentionally narrow:
  • The binary starts an interactive TUI.
  • The TUI creates a persisted provider config on first run.
  • The TUI builds an OpenAI-compatible client from the saved current provider.
  • The default system prompt is hardcoded in main.go.
  • Built-in tools cover file reads, basic file edits, directory globbing, shell commands, HTTP fetches, and skill loading.
  • Skills are discovered from ~/.agents/skills and ./.agents/skills.

Start here

Quickstart

Run the bundled TUI, configure a provider, and send your first prompt.

Provider setup

Configure saved providers for any OpenAI-compatible endpoint.

Embed in Go

Build your own agent wrapper by composing openai.Client, loop.Loop, and your own tools.

Add a tool

Extend the loop with custom tool implementations and simple schemas.

Mental model

The runtime flow is small enough to keep in your head:
  1. main.go loads skills, assembles the prompt and tool map, and starts the TUI.
  2. The TUI loads provider config, builds the client and loop, then collects user input and sends the conversation to loop.RunLoop.
  3. loop.RunLoop streams assistant output, accumulates tool calls, executes tools, appends tool results, and re-prompts the model until no more tools are called.
  4. Streamed chunks are printed back to the terminal as assistant text, reasoning text, or tool-call notices.
If you want the fuller breakdown, read the architecture guide.
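The loop in steps 2–4 can be sketched in a few lines. This is a simplified, non-streaming stand-in with invented types (`Message`, `runLoop`, a scripted model function), not the actual loop.RunLoop implementation:

```go
package main

import "fmt"

// Message mirrors the general shape of a chat-completions turn; the real
// types in hrns may differ. Tool is non-empty when the assistant
// requested a tool call instead of answering.
type Message struct {
	Role    string
	Content string
	Tool    string
}

// runLoop asks the model, executes any requested tool, appends the tool
// result, and re-prompts until the model replies with plain text.
func runLoop(model func([]Message) Message, tools map[string]func(string) string, msgs []Message) []Message {
	for {
		reply := model(msgs)
		msgs = append(msgs, reply)
		if reply.Tool == "" {
			return msgs // plain assistant text: the turn is done
		}
		result := tools[reply.Tool](reply.Content)
		msgs = append(msgs, Message{Role: "tool", Content: result})
	}
}

func main() {
	// A scripted stand-in for the streaming client: call a tool first,
	// then answer with text once the tool result is in the conversation.
	step := 0
	model := func(_ []Message) Message {
		step++
		if step == 1 {
			return Message{Role: "assistant", Tool: "echo", Content: "ping"}
		}
		return Message{Role: "assistant", Content: "done"}
	}
	tools := map[string]func(string) string{
		"echo": func(arg string) string { return "echo: " + arg },
	}
	out := runLoop(model, tools, []Message{{Role: "user", Content: "hi"}})
	fmt.Println(len(out), out[len(out)-1].Content) // 4 done
}
```

The real loop additionally streams chunks as they arrive and accumulates partial tool-call deltas before executing, but the termination condition is the same: stop when a reply contains no tool calls.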