v37 / ai lab

Reproducible AI infrastructure,
written down.

v37 publishes the small, load-bearing pieces that turn AI experiments into AI infrastructure: deterministic fine-tuning stacks, persistent agent memory, voice-aligned training rubrics. Every artifact ships with the code, the recipe, and the failure modes.

Research

Native lab notes — reproducibility recipes, infrastructure write-ups, the small things behind the artifacts.

all notes →

  • v37

    Deterministic Fine-Tuning on Dual MI100s

    What it actually takes to make a Qwen3.6-MoE LoRA train reproducibly on AMD MI100s. Three patches, one well-known invariant, and why the same inputs really do produce the same weights.

Writing

Long-form notes from inside the work. Hosted on contributor blogs; we link, we don't reprint.

  • Daniele Salatti

    How I Ship Code Without Reading It

    VSDD — verify-don't-review. Specification-first development with LLMs, where the human writes the spec and the verification, not the code.

    #vsdd #verification #ai-assisted-development

  • Daniele Salatti

    I Built an AI That Remembers Me

    Building Lares — a personal AI assistant with persistent memory, identity, and proactive behavior, on top of an open Hebbian graph.

    #lares #memory #agents

Artifacts

The things we publish. Code, weights, recipes. Each is the thing, not a writeup of the thing.

  • Published

    Deterministic LoRA Training Stack

    A reproducibility recipe for fine-tuning Qwen3.6-MoE on dual MI100s. Three patches — the FLA causal_conv1d shim, the MoE routing-cache patch, and forced eager attention — plus a pinned ROCm-flavored uv environment. Same inputs in, same weights out.

    git clone → writeup →
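The "same inputs in, same weights out" claim is cheap to verify after the fact: run the recipe twice and compare content hashes of the saved adapters. A minimal sketch in plain Python; the checkpoint paths and file layout here are illustrative assumptions, not the stack's actual output format.

```python
import hashlib
from pathlib import Path

def checkpoint_digest(path):
    """SHA-256 over a checkpoint's files, visited in sorted order
    so the digest is stable across filesystems and run order."""
    h = hashlib.sha256()
    p = Path(path)
    files = sorted(p.rglob("*")) if p.is_dir() else [p]
    for f in files:
        if f.is_file():
            h.update(f.name.encode())   # bind each file's name...
            h.update(f.read_bytes())    # ...to its byte contents
    return h.hexdigest()

# Two deterministic runs should produce byte-identical adapters
# (paths are hypothetical):
#   checkpoint_digest("run_a/adapter") == checkpoint_digest("run_b/adapter")
```

Hashing the whole directory rather than a single file catches the easy-to-miss divergences: tokenizer files, optimizer state, config drift.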

  • In progress

    Hebbian — Persistent Agent Memory

    A graph-based memory layer for AI agents. Nodes hold content, edges track co-activation, weights decay or strengthen with use. SQLite under the hood, HTTP API on top. The substrate Lares runs on.
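The update rule described above can be sketched in a few lines of plain Python on stdlib sqlite3. The table names, column names, and constants below are illustrative assumptions for the sketch, not Hebbian's actual schema or API.

```python
import sqlite3

STRENGTHEN = 0.1   # co-activation bonus per event (illustrative constant)
DECAY = 0.95       # multiplicative decay per maintenance tick (illustrative)

def connect(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS nodes (id TEXT PRIMARY KEY, content TEXT)")
    db.execute("""CREATE TABLE IF NOT EXISTS edges (
                      a TEXT, b TEXT, weight REAL,
                      PRIMARY KEY (a, b))""")
    return db

def co_activate(db, a, b):
    # Hebbian rule: nodes that fire together wire together.
    a, b = sorted((a, b))  # undirected edge, stored in canonical order
    db.execute("""INSERT INTO edges (a, b, weight) VALUES (?, ?, ?)
                  ON CONFLICT(a, b) DO UPDATE SET weight = weight + ?""",
               (a, b, STRENGTHEN, STRENGTHEN))

def decay(db):
    # Edges that are never reinforced fade toward zero.
    db.execute("UPDATE edges SET weight = weight * ?", (DECAY,))

def weight(db, a, b):
    a, b = sorted((a, b))
    row = db.execute("SELECT weight FROM edges WHERE a = ? AND b = ?",
                     (a, b)).fetchone()
    return row[0] if row else 0.0
```

Keeping the whole thing in SQLite means the memory survives process restarts for free; the HTTP layer is just a thin wrapper over queries like these.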

About

v37 is a small lab focused on the load-bearing edges of applied AI: the parts that have to be reproducible to be trusted, and trusted to be useful.

We work on persistent agent memory, voice-aligned fine-tuning, and the deterministic infrastructure that makes both possible. The work is done on commodity hardware (dual MI100s, mostly) so the recipes transfer.

For commercial work — fractional CTO, custom software, embedded teams — see v37.io.