
Audio / DSP — instruments and rooms as continuous acoustic fields

In development · 2024–present

Every virtual instrument on the market — from the Fairlight CMI in 1979 to Kontakt 8 today — operates on the same unexamined axiom: record audio, store it, replay it. More layers, more gigabytes, same paradigm for forty-five years. I build against that axiom. Instruments and rooms are not collections of recordings. They are continuous acoustic fields — living systems with body resonance, material memory, sympathetic coupling, and spatial presence — and that is what I model.

The work lives at the intersection of classical DSP, differentiable signal processing, physics-informed neural networks, and hard real-time constraints. Every plugin I ship has to run inside a DAW without dropouts, inside a 64-sample buffer, on a laptop that is also running Reaper, Ableton, and Logic at the same time. There is no margin for sloppy code: a numerical instability that shows up in one note in ten thousand will eventually show up in someone's master, and an error in the feedback energy budget that silently accumulates over a long freeze tail will eventually blow up a recording session. The engineering culture is closer to embedded systems than to typical consumer software.

Underneath everything sits a family of reverb engines written from scratch in C++20: sixteen-line feedback delay networks with Hadamard mixing for sympathetic coupling, Givens rotations for slow spatial evolution, ADAA2 saturation for analog-style non-linearity without aliasing, and Kahan-summed energy budgets to keep freeze states numerically stable through arbitrarily long tails. That engine family is public on GitHub as the reference implementation; from there it branches into the plugins people actually use. Minimal sketches of these building blocks close out this page.

Sona — Omni-dimensional synthesis engine. C++20, header-only, zero dependencies. It replaces sampling entirely: each instrument is a 128-byte hologenome that unfolds into a 256-voice polyphonic field, built on seven acoustic dimensions (excitation, body, coupling, aging, room, mic, memory), a state-space model for cross-note sympathetic resonance, a neural acoustic field that replaces static image-source reverb with continuous 3D spatialization, and a consciousness knob scaling the whole model from static sampler to fully alive instrument. Ships as a VST3/AU plugin plus a browser WASM app, with twelve built-in instruments and a capture pipeline that turns any WAV recording into a playable hologenome. An illustrative genome layout appears in the sketches below.

Arna — Intelligent reverb plugin for macOS, VST3/AU, under the 1snob imprint. Built on a DiffGFDN (Differentiable Geometric Feedback Delay Network) wrapped around the Nautilus reverb core. One engine, three control modes. The free 1snob mode gives you one knob, one fader, one click: size, decay, tone, and diffusion all move together along a carefully mapped curve (sketched below), producing tape-like textures when automated. The paid Full mode exposes nine knobs for complete parameter control. The Dev mode is the internal calibration lab with sixty-plus faders. Every build passes the full verification gate before shipping: unit tests, ctest, strict pluginval Level 10, and a torture battery. No build ships unless every gate passes.

Analyzer — Unified audio analysis platform under the 1snob brand, manufacturer code RPTZ. Not four products sold as a bundle: one analytical brain, thirty-five windows, four modes. Diagnostic covers mix health; Technical covers FFT, loudness, stereo, and phase; Identity/Perceptual covers tonal stability and timbre signatures; Behavioral/Multilayer covers timeline and cross-window correlation. All thirty-five windows are wired to real analyzers. Ships VST3/AU/Standalone. Release gate: strict pluginval Level 10.

Sailors (in design) — A concept moving toward a first prototype. A sixteen-line FDN tuned near unity feedback and fed a short excitation burst naturally produces rich, evolving string-like textures. No wavetables, no samples, pure resonance. Target sound: lush orchestral pads, evolving drones, Solina territory, JP-8000 Supersaw territory, but organic, because the sound comes from physical resonance, not oscillators. A reverb that learned to sing. The FDN sketch below, pushed toward unity feedback, is this idea in miniature.

BOM — Research pipeline in computational musical acoustics. Central hypothesis: descriptors derived from an instrument body's impulse response carry enough information to predict stable structure in recordings of the same instrument being played. The body is the instrument, not the strings. The pipeline extracts features from IR recordings and WAV performances, runs dimensionality reduction and clustering, and logs every computation through an audit logger so that every result stays reproducible. Findings feed back into how Sona and Arna model body resonance. One such descriptor, a Schroeder-integrated reverberation time, is sketched below.
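
Sketches: the building blocks above, in miniature

Everything below is illustrative. These are minimal, non-production sketches in C++20, and every name, constant, and layout that does not appear in the text above is an assumption, not the shipped code. First, the core of the engine family: a sixteen-line FDN with orthonormal Hadamard mixing and a Kahan-summed energy monitor. Delay lengths and gains are invented; what matters is the shape of the loop.

// fdn_sketch.cpp -- minimal 16-line FDN with orthonormal Hadamard mixing
// and a Kahan-summed energy monitor. Illustrative only, not the shipped
// Nautilus engine; all names and constants here are assumptions.
#include <array>
#include <cstddef>
#include <cstdio>
#include <vector>

constexpr std::size_t kLines = 16;

// In-place fast Walsh-Hadamard transform, scaled by 1/sqrt(N) so the
// mixing matrix is orthonormal: the mix itself neither adds nor removes
// energy, and all decay comes from the explicit feedback gain.
void hadamard(std::array<float, kLines>& v) {
    for (std::size_t h = 1; h < kLines; h <<= 1)
        for (std::size_t i = 0; i < kLines; i += h << 1)
            for (std::size_t j = i; j < i + h; ++j) {
                const float a = v[j], b = v[j + h];
                v[j] = a + b;
                v[j + h] = a - b;
            }
    for (float& x : v) x *= 0.25f;  // 1/sqrt(16)
}

// Compensated (Kahan) accumulator: a freeze tail integrates tiny
// per-sample energies for minutes, where naive summation drifts.
struct Kahan {
    double sum = 0.0, c = 0.0;
    void add(double x) {
        const double y = x - c;
        const double t = sum + y;
        c = (t - sum) - y;
        sum = t;
    }
};

struct Fdn {
    std::array<std::vector<float>, kLines> line;
    std::array<std::size_t, kLines> pos{};
    float feedback = 0.85f;  // < 1 decays; near 1 freezes
    Kahan energy;            // running integral of loop energy

    Fdn() {
        // Prime lengths keep the lines from stacking into one comb.
        const std::size_t len[kLines] = {1031, 1097, 1171, 1231, 1301, 1361,
                                         1433, 1489, 1553, 1613, 1669, 1733,
                                         1789, 1861, 1931, 1993};
        for (std::size_t i = 0; i < kLines; ++i) line[i].assign(len[i], 0.0f);
    }

    float process(float in) {
        std::array<float, kLines> v;
        for (std::size_t i = 0; i < kLines; ++i)
            v[i] = line[i][pos[i]];     // read the oldest sample per line
        float out = 0.0f;
        for (float x : v) out += x;     // output tap: sum of line reads
        hadamard(v);                    // the "sympathetic coupling" mix
        for (std::size_t i = 0; i < kLines; ++i) {
            const float fb = v[i] * feedback + in;
            line[i][pos[i]] = fb;       // write back into the delay line
            pos[i] = (pos[i] + 1) % line[i].size();
            energy.add(static_cast<double>(fb) * fb);
        }
        return out * 0.25f;
    }
};

int main() {
    Fdn fdn;
    for (int n = 0; n < 48000; ++n) {   // one second at 48 kHz
        const float y = fdn.process(n == 0 ? 1.0f : 0.0f);  // impulse in
        if (n % 8000 == 0)
            std::printf("n=%5d  y=%+.6f  energy=%.4f\n", n, y, fdn.energy.sum);
    }
}

The energy accumulator converges for a decaying tail and grows without bound if the loop ever gains energy, which is exactly the kind of budget check the freeze states need. Push feedback toward 0.9995 and swap the impulse for a ten-millisecond noise burst, and this same loop is the Sailors gesture: string-like textures from pure resonance.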
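
The slow spatial evolution reads naturally as another orthonormal operation, a Givens rotation whose angle drifts over time. A minimal sketch, assuming the rotation is applied to adjacent channel pairs after the mix (where it actually sits inside the shipped engines is not public, and the drift rate is an invented value):

// Slowly drifting Givens rotation over adjacent channel pairs.
// Orthonormal, so loop energy is untouched while the spatial image
// of the tail gradually evolves.
#include <array>
#include <cmath>
#include <cstddef>

constexpr std::size_t kLines = 16;

struct GivensDrift {
    double theta = 0.0;
    double rate = 1e-5;  // rad/sample: one full turn every ~13 s at 48 kHz

    void apply(std::array<float, kLines>& v) {
        const float c = static_cast<float>(std::cos(theta));
        const float s = static_cast<float>(std::sin(theta));
        for (std::size_t i = 0; i + 1 < kLines; i += 2) {
            const float a = v[i], b = v[i + 1];
            v[i]     = c * a - s * b;  // 2x2 rotation: |a|^2 + |b|^2 kept
            v[i + 1] = s * a + c * b;
        }
        theta += rate;
        if (theta > 6.283185307179586) theta -= 6.283185307179586;
    }
};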
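
ADAA2 is the second-order form of antiderivative antialiasing. The first-order form below is simpler and already shows the trick: evaluate the antiderivative of the nonlinearity and take a discrete derivative, which suppresses the aliasing a naive waveshaper sprays across the spectrum. The shipped engines use the second-order variant; this is the idea, not the implementation.

// First-order ADAA for a tanh saturator. The antiderivative of tanh(x)
// is log(cosh(x)), written here in a form that cannot overflow.
#include <cmath>

inline double F(double x) {  // log(cosh(x)), numerically stable
    const double a = std::fabs(x);
    return a + std::log1p(std::exp(-2.0 * a)) - std::log(2.0);
}

struct AdaaTanh {
    double x1 = 0.0;  // previous input sample

    double process(double x) {
        const double eps = 1e-6;
        // Discrete derivative of the antiderivative; fall back to the
        // plain nonlinearity at the midpoint when x is almost x1.
        const double y = (std::fabs(x - x1) < eps)
                             ? std::tanh(0.5 * (x + x1))
                             : (F(x) - F(x1)) / (x - x1);
        x1 = x;
        return y;
    }
};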
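
The 128-byte hologenome invites a concrete picture, so here is one, with a loud caveat: the real format is internal to Sona, and this layout (eighteen one-byte genes per dimension plus a two-byte header) is invented purely to make the idea tangible. What the sketch shows is the principle: a tiny fixed-size genome whose bytes decode into normalized parameters across the seven acoustic dimensions, from which every voice derives its state at note-on.

// Illustrative genome layout only; the shipped format is internal.
#include <array>
#include <cstdint>

struct Hologenome {  // 2-byte header + 7 x 18 genes = 128 bytes
    std::uint16_t version;
    std::array<std::uint8_t, 18> excitation;  // how energy enters the system
    std::array<std::uint8_t, 18> body;        // modal resonances of the corpus
    std::array<std::uint8_t, 18> coupling;    // cross-note sympathetic linkage
    std::array<std::uint8_t, 18> aging;       // material drift over played time
    std::array<std::uint8_t, 18> room;        // continuous 3D acoustic field
    std::array<std::uint8_t, 18> mic;         // pickup position and character
    std::array<std::uint8_t, 18> memory;      // hysteresis from earlier notes
};
static_assert(sizeof(Hologenome) == 128, "the genome must stay 128 bytes");

// Decode one gene byte into a normalized [0, 1] parameter.
constexpr float gene(std::uint8_t g) { return static_cast<float>(g) / 255.0f; }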
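
Arna's one-knob mode is, mechanically, a macro mapping: a single value sweeps size, decay, tone, and diffusion together along fixed curves. The curves, ranges, and names below are invented for illustration; the shipped mapping is the thing the Dev mode's sixty-plus faders exist to calibrate.

// One macro in [0, 1] drives four reverb parameters along fixed curves.
// All names, ranges, and curve shapes here are assumptions.
#include <algorithm>
#include <cmath>

struct MacroState {
    float sizeMeters;    // virtual room dimension
    float decaySeconds;  // RT60-style tail length
    float toneHz;        // damping crossover frequency
    float diffusion;     // 0..1 allpass density
};

MacroState mapMacro(float m) {
    m = std::clamp(m, 0.0f, 1.0f);
    MacroState s;
    s.sizeMeters   = 2.0f + 58.0f * m * m;            // slow start, large end
    s.decaySeconds = 0.3f * std::pow(100.0f, m);      // 0.3 s to 30 s, log sweep
    s.toneHz       = 16000.0f * std::pow(0.125f, m);  // darker as it grows
    s.diffusion    = 0.4f + 0.6f * std::sqrt(m);      // thickens early
    return s;
}

Because all four targets are pure functions of one value, automating the knob moves the whole engine along a single well-behaved path, which is where the tape-like textures come from.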
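
Finally, one BOM-flavored descriptor. Estimating reverberation time from an impulse response by Schroeder backward integration is a standard technique in acoustics; whether it sits in the actual descriptor set is not stated above, so treat this as a representative feature rather than the pipeline.

// T60 from an impulse response via the Schroeder integral: backward-
// integrate energy, convert to dB, fit the -5 dB to -25 dB span (a T20
// measurement), and extrapolate to 60 dB of decay.
#include <cmath>
#include <cstddef>
#include <vector>

double schroederT60(const std::vector<float>& ir, double sampleRate) {
    if (ir.empty()) return 0.0;
    std::vector<double> edc(ir.size());
    double acc = 0.0;
    for (std::size_t i = ir.size(); i-- > 0;) {  // backward energy integral
        acc += static_cast<double>(ir[i]) * ir[i];
        edc[i] = acc;
    }
    const double e0 = edc.front();               // total energy in the IR
    if (e0 <= 0.0) return 0.0;
    std::size_t i5 = 0, i25 = 0;
    for (std::size_t i = 0; i < edc.size(); ++i) {
        const double dB = 10.0 * std::log10(edc[i] / e0 + 1e-12);
        if (i5 == 0 && dB <= -5.0) i5 = i;       // start of the fitted span
        if (dB <= -25.0) { i25 = i; break; }     // end of the fitted span
    }
    if (i25 <= i5) return 0.0;                    // decay range never reached
    const double t20 = (i25 - i5) / sampleRate;   // seconds to fall 20 dB
    return 3.0 * t20;                             // extrapolate to -60 dB
}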

C++20 · JUCE · VST3 · AU · DiffGFDN · FDN · DSP · neural acoustic field
— Building something where this problem lives? Start a conversation