Ideas, experiments, and findings.

Technical articles and accessible explanations from across our four research domains.

Safe Computing

Programmable Databases: Why We Built liath Twice

The story of building a Lua-native database in Lua, then rewriting it in Rust with RocksDB — and what the performance numbers tell us.

database, lua, rust
Edge Intelligence

Deliberative Search: When the Engine Reasons Before It Retrieves

Why traditional search retrieves first and ranks later — and how slorg inverts this by understanding intent before fetching results.

search, SvelteKit, reasoning
LLM Cognition

Persistent Memory for Long-Running Agents

What happens when LLM agents need to remember across sessions — structured memory schemas, retrieval strategies, and the memory-context distinction.

memory, agents, retrieval
Safe Computing

Vector Search Without the Cloud: memista's SQLite-Backed ANN

Building approximate nearest-neighbour search on SQLite in pure Rust — and why you might not need a dedicated vector database.

vector-search, SQLite, ANN
Cross-Cutting

Ephemeral Credentials and Zero-Trust AI: Rethinking API Security

Why AI agents need scoped, time-limited credentials — and how perishable implements zero-trust patterns for LLM API access.

security, zero-trust, credentials
LLM Cognition

Prompt Lifecycle Management: From Extraction to Deployment

A practical framework for managing prompts as versioned dependencies — tackling drift, regression, and reproducibility.

prompts, versioning, devops
Formal Optimisation

Better Rankings with Fewer Comparisons: Multi-Armed Bandits for Efficient Ordering

How compere uses MAB algorithms to rank items effectively with minimal pairwise feedback — applications in search and recommendation.

MAB, ranking, bandits
LLM Cognition

Formalising Prompts as First-Class Research Objects

Why treating prompts as typed, portable artefacts changes how we reason about LLM behaviour — and how promptel implements this idea.

prompts, formal-methods, specification
Formal Optimisation

From English to Optimal: How savanty Bridges Natural Language and Constraint Solvers

Describe optimisation problems in plain English and receive mathematically guaranteed solutions — no PhD required.

NLP, constraint-satisfaction, solvers
LLM Cognition

Intelligent LLM Routing: Spending Compute Where It Matters

How route-switch uses MIPROv2 to automatically select the right model for each query — balancing cost, quality, and latency.

routing, MIPROv2, cost-optimisation
Edge Intelligence

Autonomous Mobile Agents: ukkin's Architecture for On-Device AI

Building AI agents that browse, observe, and automate tasks entirely on-device — the autonomy-safety spectrum on mobile.

agents, mobile, autonomy
Safe Computing

fastC: Designing a Memory-Safe C Dialect for AI-Generated Code

LLM agents write systems code, but C is unsafe and Rust is hard to generate. fastC explores the middle path.

compiler, memory-safety, code-generation
LLM Cognition

Building mullama: What We Learned Replacing Ollama from Scratch

A post-mortem on building a local LLM serving layer — llama.cpp integration, model management, and where existing tools constrain research.

llama.cpp, inference, local-llm
Edge Intelligence

Running Language Models on Your Phone: The llamafu Experiment

What happens when you run a full LLM on mobile hardware with zero cloud dependency — memory, latency, and model quality on consumer devices.

mobile, llama.cpp, flutter
Safe Computing

Sandboxing Untrusted Code in Zig: The zviz Architecture

How zviz uses Zig's comptime capabilities to build gVisor-inspired sandboxing with near-zero runtime cost.

zig, sandboxing, gVisor
Safe Computing

Why We Write AI Infrastructure in Rust (and Zig, and Go)

Language choice as research methodology — how memory-safe, deterministic-performance languages produce falsifiable systems claims.

rust, zig, go
Formal Optimisation

Compiling Trading Signals: sigc and the Quantitative Hypothesis Pipeline

From visual signal specification to verified Rust executable — how sigc turns alpha hypotheses into production-ready code in minutes.

quant, signals, compiler
Cross-Cutting

Open Science in AI: Why We Publish Everything

The case for radical openness in AI research — reproducibility, falsifiability, and community trust through 24 open-source projects.

open-science, open-source, reproducibility