russellromney/honker


SQLite extension + bindings for Postgres NOTIFY/LISTEN semantics with durable queues, streams, pub/sub, and scheduler

From the README

honker

honker is a SQLite extension + language bindings that add Postgres-style NOTIFY/LISTEN semantics to SQLite, with built-in durable pub/sub, task queue, and event streams, without client polling or a daemon/broker. Any language that can SELECT load_extension('honker') gets the same features.

honker ships as a Rust crate (honker, plus honker-core/honker-extension), a SQLite loadable extension, and language packages: Python (honker), Node (@russellthehippo/honker-node), Bun (@russellthehippo/honker-bun), Ruby (honker), Go, Elixir, C++. The on-disk layout is defined once in Rust; every binding is a thin wrapper around the loadable extension.

honker works by replacing a polling interval with event notifications on SQLite's WAL file, achieving push semantics and enabling cross-process notifications with single-digit millisecond delivery.
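The mechanism can be sketched with plain sqlite3 (an illustration of the idea, not honker's actual implementation): in WAL mode, every commit appends to the `-wal` file, so an OS-level file watcher (inotify/kqueue) can wake listeners the instant a writer commits, with no table re-querying on a timer.

```python
import os
import sqlite3
import tempfile

# Illustration only (not honker's internals): in WAL mode every commit
# appends frames to the -wal file, so watching that file with OS events
# (inotify/kqueue) yields push-style wakeups instead of timed polling.
path = os.path.join(tempfile.mkdtemp(), "app.db")
db = sqlite3.connect(path)
db.execute("PRAGMA journal_mode=WAL")
db.execute("CREATE TABLE notifications (channel TEXT, payload TEXT)")
db.commit()

def wal_size() -> int:
    # A real watcher would subscribe to file events; the file size is
    # the simplest observable proxy for "a commit happened".
    return os.stat(path + "-wal").st_size

before = wal_size()
db.execute("INSERT INTO notifications VALUES ('emails', '{}')")
db.commit()
after = wal_size()
assert after > before  # the commit is observable without querying any table
```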

Experimental. API may change.

SQLite is increasingly the database for shipped projects, and those projects inevitably need pub/sub and a task queue. The usual answer is "add Redis + Celery." That works, but it introduces a second datastore with its own backup story, a dual-write problem between your business table and the queue, and the operational overhead of running a broker.

honker takes the approach that if SQLite is the primary datastore, the queue should live in the same file. That means INSERT INTO orders and queue.enqueue(...) commit in the same transaction. Rollback drops both. The queue is just rows in a table with a partial index.
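A rough sketch of that shape (an illustrative schema, not honker's actual on-disk layout): the queue is a `jobs` table, the partial index covers only pending rows, and the business insert and the enqueue share one transaction.

```python
import json
import sqlite3

# Illustrative schema only -- not honker's actual on-disk layout.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    CREATE TABLE jobs (
        id      INTEGER PRIMARY KEY,
        queue   TEXT NOT NULL,
        payload TEXT NOT NULL,
        state   TEXT NOT NULL DEFAULT 'pending'  -- pending | claimed | done
    );
    -- Partial index: only pending rows, so claims stay fast no matter
    -- how many finished jobs accumulate in the table.
    CREATE INDEX jobs_pending ON jobs (queue, id) WHERE state = 'pending';
""")

# Business write and enqueue commit (or roll back) together.
with db:
    db.execute("INSERT INTO orders (user_id) VALUES (?)", (42,))
    db.execute(
        "INSERT INTO jobs (queue, payload) VALUES (?, ?)",
        ("emails", json.dumps({"to": "alice@example.com"})),
    )

# A worker claims the oldest pending job.
job_id, payload = db.execute(
    "SELECT id, payload FROM jobs "
    "WHERE queue = ? AND state = 'pending' ORDER BY id LIMIT 1",
    ("emails",),
).fetchone()
db.execute("UPDATE jobs SET state = 'claimed' WHERE id = ?", (job_id,))
print(json.loads(payload)["to"])  # alice@example.com
```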

Prior art: pg_notify (fast triggers, no retry/visibility), Huey (a SQLite-backed Python task queue), and pg-boss and Oban (the Postgres-side gold standards we're chasing on SQLite). If you already run Postgres, use those; they are excellent.

At a glance

import honker

db = honker.open("app.db")
emails = db.queue("emails")

# Enqueue
emails.enqueue({"to": "alice@example.com"})

# Consume (worker process)
async def worker():
    async for job in emails.claim("worker-1"):
        send(job.payload)
        job.ack()

Any enqueue can be atomic with a business write. Rollback drops both.

with db.transaction() as tx:
    tx.execute("INSERT INTO orders (user_id) VALUES (?)", [42])
    emails.enqueue({"to": "alice@example.com"}, tx=tx)

Features

Today:

  • Notify/listen across processes on one .db file
  • Work queues with retries, priority, delayed jobs, and a dead-letter table
  • Any send can be atomic with your business write (commit together or roll back together)
  • Single-digit millisecond cross-process reaction time, no polling
  • Handler timeouts, declarative retries with exponential backoff
  • Delayed jobs, task expiration, named locks, rate-limiting
  • Crontab-style periodic tasks with a leader-elected scheduler
  • Opt-in task result storage (enqueue returns an id, worker persists the return value, caller awaits queue.wait_result(id))
  • Durable streams with per-consumer offsets and configurable