§ 03·00·B / PERSONAL INFRA · 2026

Crow + teletalk.

Year: 2026
Role: Solo · author + operator
Stack: Node · Telegram · SQLite
Status: Shipped · in daily use
§ 01 / Problem

The bottleneck was scoping, not compute.

I run a multi-agent dev orchestrator on my home server. The agents are quick once a task is scoped, but the useful thinking — what to ship, and how to scope it — almost never happens at a desk. It happens at the gym, on a walk, at dinner.

Two pieces were missing: a reliable bridge from my phone to the orchestrator, and a conversational companion on top of that bridge that I could iterate with and turn into a clean spec.

§ 02 / Approach

Two small bots, one shell call between them.

Crow is the transport — about a thousand lines of Node, one runtime dependency. Inbound Telegram messages land in the orchestrator's mailbox; outbound replies come back over a small local HTTP server. Voice notes are transcribed locally, files dedupe into an inbox, and a permissions file gates which chats can talk to which projects.

teletalk is the conversational layer — a separate Telegram bot with persistent SQLite memory. Every exchange is summarised by a second small LLM call so context survives past the in-process session. Two commands carry the workflow: /draft compresses the recent thread into a structured spec, and /ship opens an issue in the tracker and dispatches it to the orchestrator in a single round-trip.
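The /draft step is the interesting compression. A minimal sketch, assuming the recent thread has already been loaded from SQLite as an array of role/text rows; the spec shape (title, context, acceptance) and every name below are hypothetical, not teletalk's real schema.

```javascript
// Sketch of /draft: compress a recent thread into a structured spec.
// The acceptance clause is mandatory -- a thread that cannot produce one
// is not ready to dispatch.
function draftSpec(thread) {
  const mine = thread.filter((m) => m.role === "user").map((m) => m.text);
  return {
    title: mine[0] ?? "(untitled)",
    context: mine.join("\n"),
    // Illustrative convention: an acceptance clause starts with "done when".
    acceptance:
      mine.find((t) => t.toLowerCase().startsWith("done when")) ?? null,
  };
}

const spec = draftSpec([
  { role: "user", text: "Add retry to the uploader" },
  { role: "assistant", text: "Exponential backoff?" },
  { role: "user", text: "Done when three failures back off and then alert" },
]);
```

/ship would then take the same spec object, open the tracker issue from it, and hand the issue id to the transport in one round-trip.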

The handoff between the two bots is one shell command. One piece is the wire, the other is the brain in front of it.

§ 03 / Outcome

What shipped.

Both bots have been running daily on my home server for several months. Voice-to-text round-trip is on the order of seconds; file capture, busy-state polling, and lifecycle notifications all work without manual intervention. Mean time from "I should do X" to "X is assigned and the orchestrator is on it" is under a minute.

Node · Telegram (grammy) · SQLite · Whisper · Personal infra
§ 04 / Gallery

System diagram.

§ 05 / Lessons

What carries forward.

Three things from shipping this:

  • Keeping Crow free of any AI of its own is what made it trustworthy infrastructure. The behaviour is deterministic and a bad day at any model provider has zero blast radius on the transport.
  • Scoping, not compute, was the real bottleneck. /draft is a forcing function — if a thread can't produce an acceptance clause I'm willing to live with, the work wasn't ready to dispatch.
  • Small tools compound. Neither bot is impressive in isolation; the composition — phone, transport, conversation, dispatch — is what gets used every day.