Research Artefacts
Every repository is both a research contribution and a usable tool — MIT or GPL-3.0 licensed for the community.
Research into persistent agent memory architectures. Structured, queryable memory systems for long-running LLM agents.
↗Investigating unified local LLM serving. A drop-in Ollama replacement exploring unified model management and inference.
↗Studying prompt lifecycle management. Extracts prompts from codebases and versions them as first-class dependencies.
↗Exploring multi-modal generation pipelines. Text-to-video synthesis combining LLM scripting with generative media models.
↗Studying AI-assisted data analysis with formal validation. An SQL co-pilot that learns query patterns while preserving privacy.
↗Investigating ephemeral credential models for AI APIs. Scoped, time-limited token proxies for secure LLM access.
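A minimal sketch of the ephemeral-credential idea, with illustrative names only (none of this is the repo's actual API): the proxy mints HMAC-signed tokens carrying a scope list and an expiry, and verifies signature, expiry, and scope before forwarding a request upstream.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

# Proxy-side signing key; a real deployment would load this from a secret store.
SECRET = secrets.token_bytes(32)

def issue_token(scopes, ttl_seconds):
    """Mint a scoped, time-limited token for upstream LLM access."""
    claims = {"scopes": sorted(scopes), "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return (payload + b"." + sig).decode()

def check_token(token, required_scope):
    """Verify signature, expiry, and scope; reject on any failure."""
    payload, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]
```

Because the token is self-describing and signed, the proxy needs no per-token database; revocation before expiry would require an extra mechanism, which is one of the questions the project explores.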
↗Exploring declarative prompt specification as a formal language. Write once, run anywhere — treating prompts as portable, typed artefacts.
↗Research into cost-quality optimisation for LLM routing. Implements MIPROv2-based automatic prompt tuning and model selection.
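The cost-quality trade-off can be illustrated with a toy router (the model catalogue and `route` function here are hypothetical; the project itself uses MIPROv2-based tuning rather than a fixed table): choose the cheapest model whose estimated quality clears the caller's floor.

```python
# Hypothetical catalogue: (name, cost per 1K tokens in USD, estimated quality 0-1).
MODELS = [
    ("small", 0.0005, 0.62),
    ("medium", 0.003, 0.78),
    ("large", 0.03, 0.92),
]

def route(min_quality):
    """Cheapest model clearing the quality floor; best model as a fallback."""
    eligible = [m for m in MODELS if m[2] >= min_quality]
    if eligible:
        return min(eligible, key=lambda m: m[1])[0]
    return max(MODELS, key=lambda m: m[2])[0]
```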
Exploring memory-safe C dialects for AI code generation. A compiler research project targeting agent-written systems code.
↗Research into lightweight vector similarity search. SQLite-backed approximate nearest neighbour retrieval in pure Rust.
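The retrieval primitive can be sketched in a few lines. This is a brute-force exact baseline in Python for illustration only; the project itself does approximate search in Rust. Vectors are stored as BLOBs in SQLite and ranked by cosine similarity.

```python
import math
import sqlite3
import struct

def pack(vec):
    """Serialise a float vector into a BLOB of little-endian f32s."""
    return struct.pack(f"<{len(vec)}f", *vec)

def unpack(blob):
    return struct.unpack(f"<{len(blob) // 4}f", blob)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vectors (id TEXT PRIMARY KEY, v BLOB)")
for vid, vec in [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.7, 0.7])]:
    conn.execute("INSERT INTO vectors VALUES (?, ?)", (vid, pack(vec)))

def top_k(query, k=2):
    """Exact top-k by cosine similarity over every stored vector."""
    rows = conn.execute("SELECT id, v FROM vectors").fetchall()
    scored = [(vid, cosine(query, unpack(blob))) for vid, blob in rows]
    return sorted(scored, key=lambda t: -t[1])[:k]
```

An approximate index trades this full scan for sublinear lookups at the cost of occasionally missing a true neighbour.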
↗Research into minimal-overhead sandboxing for untrusted code execution. Lightweight container isolation in pure Zig.
↗Studying caching strategies for high-dimensional vector computations. Eliminates redundant embedding recomputation at scale.
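At its core, the caching idea is memoisation keyed by a content hash, sketched here with a hypothetical `EmbeddingCache` wrapper around any text-to-vector function:

```python
import hashlib

class EmbeddingCache:
    """Memoise embeddings by content hash so identical text is embedded once."""

    def __init__(self, embed_fn):
        self._embed = embed_fn   # any text -> vector function
        self._store = {}         # sha256 hex digest -> vector
        self.misses = 0

    def get(self, text):
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if key not in self._store:
            self.misses += 1
            self._store[key] = self._embed(text)
        return self._store[key]
```

Hashing the content rather than the document ID means the cache survives re-chunking and deduplicates identical passages across documents.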
↗Exploring programmable database architectures with embedded scripting. Lua-native data storage for AI workflow prototyping.
↗Liath reimplemented in Rust with RocksDB. Studying performance characteristics of pluggable storage engines with Lua query interfaces.
↗Investigating topology-aware scheduling for latency-critical workloads. NUMA-first memory allocation and thread placement in Rust.
↗Investigating how keyword search becomes semantic answer generation. Async chunking and embedding pipelines in Rust.
Bridging natural language and constraint satisfaction. Describe optimisation problems in English, receive mathematically guaranteed solutions via formal solvers.
↗Research into compiling quantitative trading signals from visual specifications. From alpha hypothesis to verified executable in minutes.
↗Studying efficient ranking under sparse feedback. Multi-armed bandit algorithms for achieving better orderings with fewer pairwise comparisons.
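One common baseline for ranking from pairwise feedback is an Elo-style score update, sketched below for illustration (the project's bandit algorithms go further by also deciding which pair to compare next, so fewer comparisons are wasted):

```python
class PairwiseRanker:
    """Maintain item scores from noisy pairwise comparisons (Elo-style)."""

    def __init__(self, items, k=32.0):
        self.scores = {item: 1000.0 for item in items}
        self.k = k  # update step size

    def record(self, winner, loser):
        sw, sl = self.scores[winner], self.scores[loser]
        # Expected win probability for the winner under a logistic model.
        expected = 1.0 / (1.0 + 10 ** ((sl - sw) / 400.0))
        self.scores[winner] += self.k * (1.0 - expected)
        self.scores[loser] -= self.k * (1.0 - expected)

    def ranking(self):
        return sorted(self.scores, key=self.scores.get, reverse=True)
```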
Investigating on-device LLM inference limits. Running full language models on mobile hardware via Flutter with zero cloud dependency.
↗Studying LLM integration patterns for browser extensions. A framework for rapid development of AI-augmented web experiences.
↗Research into deliberative search interfaces. A SvelteKit engine that reasons about query intent before retrieving results.
↗Exploring mobile-first agent architectures. On-device AI agents that browse, observe, and automate tasks autonomously.