· v37
Deterministic Fine-Tuning on Dual MI100s
What it actually takes to make a Qwen3.6-MoE LoRA train reproducibly on AMD MI100s. Three patches, one well-known invariant, and why the same inputs really do produce the same weights.
v37 / ai lab
v37 publishes the small, load-bearing pieces that turn AI experiments into AI infrastructure: deterministic fine-tuning stacks, persistent agent memory, voice-aligned training rubrics. Every artifact ships with the code, the recipe, and the failure modes.
Research
Native lab notes — reproducibility recipes, infrastructure write-ups, the small things behind the artifacts.
Writing
Long-form notes from inside the work. Hosted on contributor blogs; we link, we don't reprint.
· Daniele Salatti
VSDD — verify-don't-review. Specification-first development with LLMs, where the human writes the spec and the verification, not the code.
#vsdd #verification #ai-assisted-development
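The linked post makes the full case, but the core loop is small enough to sketch: the human writes an executable spec, the model writes the implementation, and review is replaced by running the spec. The `slugify` function and its contract below are invented for illustration; they are not taken from the post.

```python
import re

def verify_slugify(slugify) -> None:
    """The human-authored spec: any candidate implementation must pass it unchanged."""
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("already-a-slug") == "already-a-slug"
    assert slugify("  Spaced   Out  ") == "spaced-out"

def slugify(text: str) -> str:
    """Stand-in for the machine-written implementation under verification."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
```

The point of the pattern is that `verify_slugify` is the reviewed artifact; the implementation is accepted or rejected mechanically, by running it.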
· Daniele Salatti
Building Lares — a personal AI assistant with persistent memory, identity, and proactive behavior, on top of an open Hebbian graph.
#lares #memory #agents
Artifacts
The things we publish. Code, weights, recipes. Each is the thing, not a write-up of the thing.
Published
A reproducibility recipe for fine-tuning Qwen3.6-MoE on dual MI100s. Three patches — the FLA causal_conv1d shim, the MoE routing-cache patch, and forced eager attention — plus a pinned ROCm-flavored uv environment. Same inputs in, same weights out.
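"Same inputs in, same weights out" is a checkable claim: run the recipe twice and compare a cryptographic hash of the resulting weight files. A minimal stdlib sketch of that check follows; the file paths in the usage comment are placeholders, not the recipe's actual layout.

```python
import hashlib

def weights_digest(path: str) -> str:
    """SHA-256 of a weights file; two reproducible runs must agree byte-for-byte."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in 1 MiB chunks so multi-GB checkpoints don't need to fit in RAM.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical paths):
# assert weights_digest("run1/adapter.safetensors") == weights_digest("run2/adapter.safetensors")
```

A byte-identical hash is a stricter bar than matching eval metrics, which is exactly why it is the one worth publishing.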
In progress
A graph-based memory layer for AI agents. Nodes hold content, edges track co-activation, weights decay or strengthen with use. SQLite under the hood, HTTP API on top. The substrate Lares runs on.
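The mechanics are simple enough to sketch with nothing but the standard library. The schema and update rule below are illustrative assumptions about how such a layer could work, not the project's actual code:

```python
import sqlite3

# Illustrative schema: nodes hold content, edges carry a mutable weight.
SCHEMA = """
CREATE TABLE IF NOT EXISTS nodes (
    id      INTEGER PRIMARY KEY,
    content TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS edges (
    src    INTEGER NOT NULL,
    dst    INTEGER NOT NULL,
    weight REAL NOT NULL DEFAULT 0.0,
    PRIMARY KEY (src, dst)
);
"""

def open_graph(path: str = ":memory:") -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.executescript(SCHEMA)
    return db

def co_activate(db: sqlite3.Connection, src: int, dst: int, lr: float = 0.1) -> None:
    """Hebbian update: an edge between co-activated nodes strengthens."""
    db.execute(
        "INSERT INTO edges (src, dst, weight) VALUES (?, ?, ?) "
        "ON CONFLICT (src, dst) DO UPDATE SET weight = weight + ?",
        (src, dst, lr, lr),
    )

def decay(db: sqlite3.Connection, factor: float = 0.99) -> None:
    """Edges that aren't reinforced fade toward zero."""
    db.execute("UPDATE edges SET weight = weight * ?", (factor,))
```

An HTTP layer on top would expose these operations per node pair, and retrieval becomes a walk over high-weight edges from a query node.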
About
v37 is a small lab focused on the load-bearing edges of applied AI: the parts that have to be reproducible to be trusted, and trusted to be useful.
We work on persistent agent memory, voice-aligned fine-tuning, and the deterministic infrastructure that makes both possible. The work is done on commodity hardware (dual MI100s, mostly) so the recipes transfer.
For commercial work — fractional CTO, custom software, embedded teams — see v37.io.