I run a multi-agent dev orchestrator on my home server. The agents move fast once I'm at a desk, but the useful thinking, what to ship and how to scope it, almost never happens at a desk. It happens at the gym, on a walk, at dinner.
Two pieces were missing: a reliable bridge from my phone to the orchestrator, and a conversational companion on top of that bridge that I could iterate with and turn into a clean spec.
Crow is the transport — about a thousand lines of Node, one runtime dependency. Inbound Telegram messages land in the orchestrator's mailbox; outbound replies come back over a small local HTTP server. Voice notes are transcribed locally, files dedupe into an inbox, and a permissions file gates which chats can talk to which projects.
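The permissions gate is the part worth sketching. A minimal version, assuming a simple chat-to-projects map (the file shape and names here are illustrative, not Crow's actual format):

```javascript
// Hypothetical permissions map: Telegram chat ID -> projects it may address.
// "*" stands for any project. Unknown chats are denied by default.
const permissions = {
  "12345678": ["orchestrator", "crow"],
  "87654321": ["*"],
};

function isAllowed(chatId, project) {
  const allowed = permissions[String(chatId)];
  if (!allowed) return false; // deny by default
  return allowed.includes("*") || allowed.includes(project);
}
```

Deny-by-default matters here: a bot wired to a dev orchestrator is a remote-execution surface, so any chat not explicitly listed gets nothing.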
teletalk is the conversational layer — a separate Telegram bot with persistent SQLite memory. Every exchange is summarised by a second small LLM call so context survives past the in-process session. Two commands carry the workflow: /draft compresses the recent thread into a structured spec, and /ship opens an issue in the tracker and dispatches it to the orchestrator in a single round-trip.
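The memory loop is simple enough to sketch. Here the SQLite table is stood in by an array and the second LLM call is stubbed; the row shape and function names are assumptions for illustration:

```javascript
// Stands in for teletalk's SQLite `exchanges` table.
const memory = [];

// Stub for the second, small LLM call; the real bot would summarise
// the exchange with a model, not string-slicing.
function summarise(userMsg, botReply) {
  return `user: ${userMsg.slice(0, 40)} / bot: ${botReply.slice(0, 40)}`;
}

// Called after every exchange, so context survives the in-process session.
function recordExchange(userMsg, botReply) {
  memory.push({ ts: Date.now(), summary: summarise(userMsg, botReply) });
}

// The next session is seeded from stored summaries, not raw transcripts.
function contextWindow(n = 5) {
  return memory.slice(-n).map((row) => row.summary).join("\n");
}
```

Summaries rather than transcripts keep the context window small and cheap to rehydrate, at the cost of one extra model call per exchange.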
The handoff between the two bots is one shell command. One piece is the wire, the other is the brain in front of it.
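A sketch of what that handoff shape might look like, with the command built as a string so it is visible; the endpoint, port, and payload fields are assumptions, not Crow's real API:

```javascript
// Hypothetical: teletalk constructs the one shell command that posts a
// shipped spec to Crow's local HTTP server. In the real flow the string
// would be handed to the shell; here we only build it.
function handoffCommand(project, specPath) {
  return [
    "curl -s -X POST http://localhost:8787/dispatch",
    "-H 'Content-Type: application/json'",
    `-d '{"project":"${project}","spec_file":"${specPath}"}'`,
  ].join(" \\\n  ");
}
```

Keeping the seam this thin means either side can be replaced without the other noticing: anything that can POST a spec is a valid brain, anything that can receive one is a valid wire.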
Both bots have been running daily on my home server for several months. Voice-to-text round-trip is on the order of seconds; file capture, busy-state polling, and lifecycle notifications all work without manual intervention. Mean time from "I should do X" to "X is assigned and the orchestrator is on it" is under a minute.
Three things from shipping this:
/draft is a forcing function — if a thread can't produce an acceptance clause I'm willing to live with, the work wasn't ready to dispatch.