— Work / Case studies, 2024 —

EVIDENCE. NOT PITCH DECKS.

A working index of current AI systems, past infrastructure, and the side quests that keep the primary trade honest. Built, shipped, or in production.

— AI systems — 2025
01

Audio / DSP — instruments and rooms as continuous acoustic fields

Every virtual instrument on the market — from the Fairlight CMI in 1979 to Kontakt 8 today — operates on the same unexamined axiom: record audio, store it, replay it. More layers, more gigabytes, same paradigm for forty-five years. I build against that axiom. Instruments and rooms are not collections of recordings. They are continuous acoustic fields — living systems with body resonance, material memory, sympathetic coupling, and spatial presence — and that is what I model.

The work lives at the intersection of classical DSP, differentiable signal processing, physics-informed neural networks, and hard real-time constraints. Every plugin I ship has to run inside a DAW without dropouts, inside a 64-sample buffer, on a laptop that is also running Reaper, Ableton, and Logic at the same time. There is no margin for sloppy code: a numerical instability that shows up in one note in ten thousand will eventually show up in someone's master, and an error in the feedback energy budget that silently accumulates over a long freeze tail will eventually ruin a recording session. The engineering culture is closer to embedded systems than to typical consumer software.

Underneath everything sits a family of reverb engines written from scratch in C++20: sixteen-line feedback delay networks with Hadamard mixing for sympathetic coupling, Givens rotations for slow spatial evolution, ADAA2 saturation for analog-style non-linearity without aliasing, and Kahan-summed energy budgets that keep freeze states numerically stable through arbitrarily long tails. That engine family is public on GitHub as the reference implementation. From there it branches into the plugins people actually use.

Sona — Omni-dimensional synthesis engine. C++20, header-only, zero dependencies. Replaces sampling entirely: each instrument is a 128-byte hologenome that unfolds into a 256-voice polyphonic field. Seven acoustic dimensions (excitation, body, coupling, aging, room, mic, memory). A state-space model for cross-note sympathetic resonance. A neural acoustic field replacing static image-source reverb with continuous 3D spatialization. A consciousness knob scaling the whole model from static sampler to fully alive instrument. Ships as a VST3/AU plugin plus a browser WASM app. Twelve built-in instruments. Includes a capture pipeline that turns any WAV recording into a playable hologenome.

Arna — Intelligent reverb plugin for macOS, VST3/AU, under the 1snob imprint. Built on a DiffGFDN (Differentiable Geometric Feedback Delay Network) wrapped around the Nautilus reverb core. One engine, three control modes. The free 1snob mode gives you one knob, one fader, one click — size, decay, tone, and diffusion all move together along a carefully mapped curve, producing tape-like textures when automated. The paid Full mode exposes nine knobs for complete parameter control. The Dev mode is the internal calibration lab with sixty-plus faders. Every build passes the full verification gate before shipping: unit tests, ctest, strict pluginval Level 10, and a torture battery. No build gets approved unless all gates pass.

Analyzer — Unified audio analysis platform under the 1snob brand, manufacturer code RPTZ. Not four products sold as a bundle. One analytical brain, thirty-five windows, four modes: Diagnostic for mix health, Technical for FFT/loudness/stereo/phase, Identity/Perceptual for tonal stability and timbre signatures, Behavioral/Multilayer for timeline and cross-window correlation. All thirty-five windows are wired to real analyzers. Ships VST3/AU/Standalone. Release gate: strict pluginval Level 10.

Sailors (in design) — A concept moving toward a first prototype. A sixteen-line FDN tuned near unity feedback, fed a short excitation burst, naturally produces rich, evolving string-like textures. No wavetables, no samples, pure resonance. Target sound: lush orchestral pads, evolving drones, Solina territory, JP-8000 Supersaw territory — but organic, because the sound comes from physical resonance, not oscillators. A reverb that learned to sing.

BOM — Research pipeline in computational musical acoustics. Central hypothesis: descriptors derived from an instrument body's impulse response carry enough information to predict stable structure in recordings of the same instrument being played. The body is the instrument, not the strings. The pipeline extracts features from IR recordings and WAV performances, runs dimensionality reduction and clustering, and logs every computation through an audit logger so nothing is unreproducible. Findings feed back into how Sona and Arna model body resonance.
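Two of the numerical ideas above can be sketched compactly: orthogonal Hadamard mixing, which redistributes energy across delay lines without adding or losing any, and Kahan-compensated summation, which keeps a running energy budget accurate over very long tails. This is a pure-Python illustration (four lines instead of sixteen, and hypothetical names), not the shipped C++20 engine.

```python
def hadamard_mix(v):
    """Mix four delay-line outputs through a normalized 4x4 Hadamard matrix.
    The matrix is orthonormal, so the output has the same L2 norm (energy)
    as the input: mixing redistributes energy but never creates it."""
    a, b, c, d = v
    s = 0.5  # 1/sqrt(4) normalization keeps the matrix orthonormal
    return [s * (a + b + c + d),
            s * (a - b + c - d),
            s * (a + b - c - d),
            s * (a - b - c + d)]


class KahanEnergyMeter:
    """Running sum of squared samples with Kahan compensation, so the
    energy budget does not silently drift over arbitrarily long tails."""

    def __init__(self):
        self.total = 0.0
        self._c = 0.0  # compensation term for lost low-order bits

    def add(self, x):
        y = x * x - self._c
        t = self.total + y
        self._c = (t - self.total) - y
        self.total = t
```

For example, `hadamard_mix([1.0, 2.0, 3.0, 4.0])` returns a vector whose squared magnitude is still exactly 30, which is the property that keeps a near-unity feedback loop from exploding or collapsing.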

C++20 · JUCE · VST3 · AU · DiffGFDN · FDN · DSP · neural acoustic field
In development · 2024–present
02

Autopoietic agents — multi-agent platforms that keep themselves running

Autopoiesis is a word I borrow carefully. Humberto Maturana coined it to describe living systems that continuously produce the components needed to sustain their own organization. I am not claiming to have built living machines. What I have built is a family of multi-agent platforms where the system's job is to keep the system running — to observe itself, detect degradation, reallocate resources, rebuild broken parts, and continue running anyway. The agents are not scripts that do one thing. They are organs that participate in a closed loop.

Most multi-agent projects I see treat agents as LLMs with tools — a thin wrapper around an API call with some prompt templates bolted on. That model breaks down at scale: the swarm either collapses into incoherence the first time two agents touch the same state, or it becomes so risk-averse that nothing happens. The family of platforms I build is structured differently. Each is a different answer to the same question: what does it take to build a swarm that does not collapse under its own weight? The answer has several parts that recur across every project.

First, services do not talk to each other directly; every message passes through an event bus, because direct HTTP calls between components create the kind of tight coupling that turns a partial failure into a total one.

Second, agents are untrusted by default — every write is gated by distributed locks and fencing tokens, every action produces evidence artifacts (git diffs, commit hashes, test logs), and no claim counts as real without the artifact.

Third, the systems implement dual-LLM separation: one agent reads raw external content in quarantine and outputs only sanitized structured data; another agent holds write privileges but never touches raw input. In my experience, that pattern is the only defense against prompt injection that holds up at scale.

Fourth, state lives outside the chat: chat text is never truth; only the ticket system, the git history, and the CI logs are truth.

Fifth, the system has an immune system — a guardrails layer filtering toxicity and validating payloads, a warden sweeping stale locks, a siege engine injecting controlled chaos to test antifragility, and a circuit breaker capping daily spend before the swarm can ruin its own budget.

Inside this domain I have built systems organized as biological organisms — brain, muscles, hippocampus, economy, each running as its own containerized service with specific responsibilities, coordinated over Redis Pub/Sub with no direct calls between organs. I have built forensic audit swarms organized into specialized squads, where each squad owns a vertical (core, QA, infrastructure, security, documentation, optimization) and the squads audit each other. I have built knowledge-mining pipelines that crawl codebases and extract reusable patterns using bio-inspired heuristics: ant-colony optimization for path selection, golden-ratio scheduling for resource allocation, biological vitality scoring for triage. I have built autopoietic economic discovery systems designed to start with zero knowledge and discover on their own how to generate value — not programmed to be autopoietic, but designed to survive, with autopoiesis emerging as a consequence. I have built meta-cognitive security analyzers that continuously watch a file tree with local models, streaming live state to a dashboard via WebSocket, with a built-in stress tester that injects file chaos for ten minutes straight so you can watch the engine cope in real time. I have built fully local research stacks on Apple Silicon that enforce strict separation of concerns across the model gateway, the vector store, the search layer, and the observability plane, with every claim labeled by an auditor agent as confirmed, inferred, or hypothesized. I have built orchestration hubs on Azure and GCP that unify multiple coding assistants and workflow engines behind a single endpoint, with distributed locking, optimistic-concurrency state, event streams, and shared work queues.

These platforms do not all solve the same problem. A biological-organism topology is the right shape for a system that has to monitor itself indefinitely. A squad topology is the right shape for a system that has to audit something complex and produce legible output. A knowledge-mining pipeline is the right shape when the goal is to harvest patterns at volume and feed them downstream. A fully local stack is the right shape when sovereignty matters more than scale. The family is wide because the problems are wide. What the platforms share is the underlying discipline: every one of them is observable end to end, fails safely by construction, and treats its own resource budget as a first-class design constraint.
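The fencing-token discipline mentioned above — agents are untrusted, and every write is gated — can be sketched in a few lines: each lock acquisition hands out a monotonically increasing token, and the shared store rejects any write carrying a token older than the newest one it has seen. This is a minimal single-process illustration with hypothetical names (`LockService`, `FencedStore`), not any of the platforms' real APIs.

```python
import itertools


class LockService:
    """Hands out a strictly increasing fencing token on every acquisition."""

    def __init__(self):
        self._tokens = itertools.count(1)
        self.holder = None

    def acquire(self, agent_id):
        token = next(self._tokens)
        self.holder = (agent_id, token)
        return token


class FencedStore:
    """Shared state that refuses writes carrying a stale fencing token."""

    def __init__(self):
        self.value = None
        self._highest_seen = 0

    def write(self, token, value):
        # A stale token means the writer lost the lock (e.g. after a long
        # pause or partition) and another agent has since acquired it.
        if token <= self._highest_seen:
            raise PermissionError(f"stale fencing token {token}")
        self._highest_seen = token
        self.value = value
```

The failure this guards against: agent A acquires the lock, stalls, and its lease expires; agent B acquires the lock and writes; A's delayed write then arrives with the older token and is rejected instead of silently clobbering B's work.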

multi-agent · autopoiesis · dual-LLM · Redis Pub/Sub · Docker · MCP · Python · FastAPI
In development · 2024–present
03

Scientific imaging — evidence-first pipelines for black-hole investigations

Telescopes give you data. Software gives you images. Between them sits a pipeline where every decision — which frames to stack, which stretch curve to apply, which flare candidate to flag, which color space to render — either preserves the evidence trail or destroys it. Most astronomical imaging software treats the evidence as disposable, and the result is an industry-wide culture where an image's provenance is folklore rather than artifact. AstroRoom treats the evidence as the product.

This is where my background in cost-and-change-management engineering meets my background in photography. I spent years auditing capital expenditure across hundreds of millions of dollars; the artifact that mattered at the end of a project was not the final spreadsheet, it was the trail that let someone years later reconstruct exactly what was decided, by whom, under what assumptions, and against which constraints. Without that trail, the number on the spreadsheet was unfalsifiable. With it, every line was auditable end to end.

AstroRoom applies the same philosophy to scientific imaging. Every artifact it produces signs its own provenance: every render appends to a hash-chained event ledger, every evidence row carries a confidence score and a claimability indicator, and every export can be replayed from its own recipe. If the output of the pipeline ever becomes part of a scientific claim, the claim can be verified back to the original FITS frames without a single gap in the chain.

AstroRoom is an evidence-first scientific imaging workspace for Sagittarius A* and Galactic Center investigations. FastAPI backend, React 19 + Vite frontend. The workflow is organized as a six-mode shell, traversed left to right like a darkroom, with each stage producing auditable artifacts rather than just passing pixels downstream.

Library is where an investigation begins: session entry with resumable state, an astronomical asset gallery, and working sets for staging FITS frames. You register a target and the session becomes the unit of provenance for everything that follows.

Develop is the FITS viewer. Per-frame adjustments: stretch curves, clipping, colormap selection, cosmic-ray cleanup. Analogous to a raw-image developer in a darkroom — every adjustment is reversible, and every adjustment is logged.

Build is the heavy-compute stage. Epoch construction aggregates frames into temporally consistent groups. Deep-stack mega-exposures combine many frames into a single high-SNR image. Timelapse generation produces video of structural change over time. Volumetric 3D builds generate true volumetric data cubes from multi-epoch series. Zoom-ladder matrices produce multi-scale views for publication-grade presentation.

Analyze has four tabs, each producing auditable outputs rather than just visualizations. Flare detection runs statistical anomaly scoring against baseline epochs. Spectral index maps compute per-pixel spectral gradients. The Galactic Center Excess dashboard is built specifically for SgrA* investigations. Source extraction identifies and catalogs discrete sources against the background.

Evidence is the calibrated v2.95 evidence engine. You inspect ledger rows directly — each row carries a confidence score, a claimability label, and the full recipe needed to reproduce it. Export paths produce JSON, CSV, and schema artifacts that downstream tools (or collaborators, or reviewers, or future you) can consume without loss.

Export is where the pipeline meets the world. PDF reports bake the full recipe into document metadata. Timelapse MP4s carry provenance sidecars. Zoom-ladder matrices export as annotated PNGs with embedded scale bars, grid overlays, and titles. This is the publication-ready terminus.

Two tools are available from every mode. Lit, powered by KnowledgeSessions, harvests papers from arXiv and ADS, embeds them, and lets you query them semantically — results can be linked directly into evidence reports so a paper's argument and its empirical backing live in the same document. Agent is a natural-language command bar that accepts any language, normalizes to English, routes to a JSON action, and executes the matching pipeline phase.

Recent work has included flare hunts on Sagittarius A*, comparative black-hole quintet studies across SgrA*, M87, NGC 1275, NGC 6240, and Cygnus X-1, deep mega-exposures of M31 rendered as false-color composites with auto-annotation, and triple-product timelapses over Sagittarius A* that emit observed, residual, and confidence MP4s simultaneously from a single run. Every render appends to the hash-chained event ledger; verification is a single API call. Public on GitHub. Licensed PolyForm Noncommercial 1.0.0.
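The hash-chained ledger described above works on a simple invariant: each entry commits to the previous entry's hash, so tampering with any row invalidates every row after it. A minimal sketch of that invariant, with illustrative field names rather than AstroRoom's actual schema:

```python
import hashlib
import json


def _digest(prev_hash, payload):
    """Deterministic hash of an entry body (sorted keys for stable JSON)."""
    body = {"prev": prev_hash, "payload": payload}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()


def append_event(ledger, payload):
    """Append an entry that commits to the hash of the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"prev": prev_hash, "payload": payload,
                   "hash": _digest(prev_hash, payload)})


def verify_chain(ledger):
    """Walk the chain from genesis; any edited row breaks verification."""
    prev = "0" * 64
    for row in ledger:
        if row["prev"] != prev or row["hash"] != _digest(row["prev"], row["payload"]):
            return False
        prev = row["hash"]
    return True
```

Appending a render event and an export event, then editing the first payload after the fact, makes `verify_chain` return `False` — which is the property that lets a reviewer trust the trail without trusting the operator.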

FastAPI · React 19 · FITS · evidence ledger · hash chain · astronomy · SgrA*
In development · 2025–present
— Side quests
— Before AI
04

Engineering discipline — the lens that runs through everything

The other three domains describe what I am building. This one describes how, and where the discipline comes from. A decade before I wrote my first production agent or shipped my first VST, I spent ten years inside an engineering culture that does not exist in most software work. Heavy-industry infrastructure engineering — copper mining megaprojects, mineral processing plants, water treatment facilities, tailings dam structures — operates under constraints that software people rarely encounter. A mistake does not roll back with a git revert. A cost overrun is not a line on a spreadsheet; it is months of rework, renegotiated contracts, and careers ended. A failed pressure vessel is not a 500 error; it is a fatality investigation. That environment changes how you think about every artifact you produce, and once the reflex is there, it does not leave. Four ideas from that decade keep surfacing in everything I build now.

Cost-and-change management as observability. In capital projects on the order of a billion dollars, nobody cares about the headline number — they care about the delta between what the budget said last week and what it says this week, and why. Every trend gets logged. Every scope change gets tagged with an owner, an estimate, a schedule impact, and an approval chain. Every estimate comes with a contingency band, because the number without the band is a lie. That practice translates directly to modern systems work. The cost-per-token dashboards, the FinOps circuit breakers, the evidence ledgers in scientific imaging, the fencing-token protocols in agent swarms — all of them are the same instinct applied to a new substrate. Do not trust a number you cannot audit. Do not ship a system whose state you cannot reconstruct. Observability is not a feature you add at the end; it is the artifact the project produces alongside the work.

Contingency and risk analysis as design input. Before a major project begins execution, you do a formal contingency and risk analysis: you enumerate the failure modes, assign probabilities, estimate cost and schedule impact, and build the mitigation plan into the budget from day zero. You do not hope nothing goes wrong. You design the envelope for when it will. That is the same posture I take with every system now. Before a swarm goes to production, a chaos injector runs against it. Before a plugin ships, pluginval Level 10 tortures it. Before an imaging pipeline produces an artifact, the artifact has to carry its own provenance. The question is never "will this fail?" — the question is always "when it fails, what happens next, and can we prove it was handled correctly?"

Cross-disciplinary coordination at scale. Managing change on projects where construction, procurement, safety, engineering, and PMO teams all had legitimate but conflicting claims on the same scope taught me that the hard part of a complex system is almost never the individual components — it is the interfaces. The dual-LLM pattern in my agent frameworks, the event-bus architecture in the biological-organism systems, the six-mode shell in the imaging pipeline, the three control modes in a single reverb engine — they are all the same move: make the interfaces legible, make the handoffs observable, and make every contract between two components something you can inspect. That is not a software insight. That is a project-controls insight applied to software.

Building from whiteboard to revenue. Founding a financial-coaching company in a country that did not have a category for what I was selling, winning the top government-backed startup competition, producing a national video course, being featured across CNN Chile, Canal 13, MEGA, and national radio — that experience is not on this page as a credential. It is here because it is the other half of the discipline. Engineering teaches you how to build correct systems. Entrepreneurship teaches you that a correct system nobody adopts is a failed system. Every architecture decision I make now is filtered through both lenses: is it technically sound, and will the person who has to live with it actually use it that way? Elegance that does not survive contact with a user is not elegance; it is decoration.

Before any of that, the foundation: a six-year civil engineering degree, an undergraduate thesis on activated-sludge biological process modeling integrated with SCADA-style control loops (which turns out, twenty years later, to have been training for everything I do now in autopoietic systems), two university lectureships, and a professional photography practice across portrait, commercial, wedding, and fine art that ran for a decade in parallel with everything else. The side quests are not separate from the engineering. They are the engineering, on different substrates.
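The "number without the band is a lie" idea above can be made concrete with a toy Monte Carlo contingency analysis: each risk carries a probability of occurring and a triangular cost impact, and the contingency is read off the simulated distribution (for example, the P80 outcome minus the base estimate). All names and figures here are invented for illustration, not drawn from any real project.

```python
import random


def simulate_contingency(base_cost, risks, trials=20000, quantile=0.8, seed=42):
    """Return the contingency needed on top of base_cost to cover `quantile`
    of simulated outcomes. Each risk is (probability, low, mode, high) with a
    triangular cost-impact distribution."""
    rng = random.Random(seed)  # seeded, so the analysis is reproducible
    totals = []
    for _ in range(trials):
        total = base_cost
        for prob, low, mode, high in risks:
            if rng.random() < prob:
                # random.triangular's signature is (low, high, mode)
                total += rng.triangular(low, high, mode)
        totals.append(total)
    totals.sort()
    p = totals[int(quantile * trials) - 1]
    return p - base_cost
```

For a 100M base estimate with two hypothetical risks — a 30% chance of a 1–5M rework and a 10% chance of a 5–20M schedule slip — `simulate_contingency(100e6, [(0.30, 1e6, 2e6, 5e6), (0.10, 5e6, 8e6, 20e6)])` returns the band that turns a single misleading number into an auditable range.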

cost engineering · EPCM · project controls · SCADA · risk analysis · entrepreneurship
In development · 2007–present
— Building AI infrastructure that must be trustworthy? Get in touch