AI Research Lab — Est. 2026

Replication
with Purpose

Excavating foundational AI papers — back to 1956 — and rebuilding them with modern capabilities to find unexplored gaps. Specializing in Mechanistic Interpretability and ML Systems, all documented in public.

// MechInterp Sparse Autoencoders // MLOps Drift Detection // Constitutional AI Activation Patching // Production ML Root Cause Analysis

01 —

Mechanistic Interpretability

Understanding what neural networks actually compute. Sparse autoencoders, attention pattern analysis, circuit-level reverse engineering of model behavior.

02 —

ML Systems & MLOps

Production-grade ML infrastructure. Drift detection, observability, rollback systems, and root cause diagnostics that explain not just that a model failed — but why.

03 —

Paper Replication

Rebuilding foundational AI research from scratch with modern tooling. Every replication is a stress-test for the original claims and a search for what comes next.

The lab runs on a simple principle: ship experiments, not slide decks. Every idea gets built. Every build gets documented. Every failure is more instructive than the success that follows it.