
Gothic Grandma, LLC

MUSE Living Worlds

Interactive Storytelling Platform

Founder & Lead Engineer · 2025–Present · Private Platform

C++20 Go Python TypeScript Dart/Flutter PostgreSQL (Supabase) SQLite GPU (CUDA/Metal/Vulkan) React Electron WebSockets Skia
Long-Term Creative Project: Building in off-hours while employed full-time. Exploring new forms of interactive narrative through simulation-backed storytelling.

MUSE Living Worlds is an interactive storytelling platform built on the MUSE simulation ecosystem—a custom C++ runtime that models biological needs, psychological states, and emergent behaviors.

At its core is a deterministic execution engine written in C++ that runs compiled behavioral kernels. Visual tools generate kernel source code, which is compiled into binaries and executed under a unified runtime shared across research tooling and consumer applications.
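
A minimal sketch of that contract, assuming a hypothetical ABI (EntityBatch and muse_kernel_step are illustrative names, not the actual MUSE interface):

```cpp
// Illustrative sketch only: the names here are hypothetical, not MUSE's
// actual kernel ABI.
#include <cstddef>
#include <cstdio>
#include <vector>

struct EntityBatch {          // POD view the runtime hands to a kernel
    float*      hunger;       // one biological-need value per entity
    std::size_t count;
};

// A generated kernel compiles down to an entry point like this; the
// runtime dispatches batches to it without inspecting its internals.
extern "C" void muse_kernel_step(EntityBatch* b, double dt_ms) {
    for (std::size_t i = 0; i < b->count; ++i)
        b->hunger[i] += 0.001f * static_cast<float>(dt_ms);  // needs grow with time
}

int main() {
    std::vector<float> hunger(4, 0.5f);
    EntityBatch batch{hunger.data(), hunger.size()};
    muse_kernel_step(&batch, 50.0);                  // one 50 ms step
    std::printf("hunger[0] = %.3f\n", hunger[0]);    // prints 0.550
}
```

The runtime owns the data layout; kernels are leaf binaries that transform batches in place.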

Platform-first architecture: Rather than building a game, I built the infrastructure that games (and research tools, and creative tools) run on. MUSE is designed like a platform company: central runtime, role-based access control, multi-tenant permission models, real-time collaboration primitives, and observability from the start.

What MUSE creates: Interactive fiction and narrative experiences where readers inhabit characters living in emergent, scientifically grounded worlds. Not branching dialogue trees—living systems with real causality.

This is a MUSE Living World.

You are not controlling a character.
You influence what they notice, remember, and attempt — the world responds.

GLYPH is the reader interface for Living Worlds, built in Flutter for deep reading and embodied interaction.

LLMs are used at the periphery — for authoring, interpretation, and assistance — never inside the simulation loop.
All world state, causality, and long-term consequences are produced by deterministic systems, not probabilistic generation.

MUSE is built as a full-stack platform:
execution engine → data persistence → inspection tooling → role-specific interfaces.

Gothic Grandma Laboratories builds the platform and research layer; Gothic Grandma Studios builds living worlds on it.

Visit Gothic Grandma →

Platform Architecture

Emergence at Scale
Just as Conway's Game of Life demonstrates emergence from simple rules applied to cells on a grid, MUSE scales emergent behavior to 100,000+ entities—where societies, economies, and narratives emerge from distributed systems coordinating through shared state.

Visual → Compiled Pipeline

Visual Modeling
Database Schema
Code Generation
GPU/CPU Kernels
Runtime Execution
Declarative configuration → Optimized compilation → Deterministic execution
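
As a hedged illustration of the code-generation step, this sketch reads operation definitions from a SQLite registry and emits kernel source; the ops table and its name/expr columns are assumptions, not MUSE's actual schema:

```cpp
// Hedged sketch: read operation definitions from a SQLite registry and
// emit kernel source. The table and column names are assumptions.
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("registry.db", &db) != SQLITE_OK) return 1;

    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, "SELECT name, expr FROM ops;", -1, &stmt,
                           nullptr) != SQLITE_OK) return 1;

    // Each registry row becomes one generated kernel function.
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        const char* name = reinterpret_cast<const char*>(sqlite3_column_text(stmt, 0));
        const char* expr = reinterpret_cast<const char*>(sqlite3_column_text(stmt, 1));
        std::printf("extern \"C\" void %s(float* x, size_t n) {\n"
                    "    for (size_t i = 0; i < n; ++i) x[i] = %s;\n"
                    "}\n",
                    name, expr);
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
}
```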

Core Architecture

Event-Driven Execution
Burst computation on user prompts · Pauses between events · Serverless-style resource usage · Scales to massive complexity
GPU/CPU Orchestration
Heterogeneous execution · CUDA/Metal/Vulkan · Automatic routing · Parallel workload distribution
Database-Driven Contracts
SQLite registry · All operations defined in schema · Generated execution contracts · Type-safe runtime dispatch
Multi-Tenant Interfaces
Permission-based access · Same runtime · Different capabilities · Role-segmented surfaces

Workbench Architecture

CYPHER (Engineering)
Visual system builder · Full runtime access · Performance analysis
CLIO (Operations)
Control plane · Database inspection · Multi-terminal orchestration
GLYPH (Consumer)
E-reader interface · Vulkan rendering · Interactive typography
PYTHIA (Research) Scoped
Scientific instrumentation · Experiment runner · Data export
CODEX (Authoring) Scoped
AI-assisted world creation · Semantic input · No-code content
Designed for scale: Event-driven architecture processes 100,000+ concurrent entities in millisecond bursts. Not constrained by continuous rendering—simulation complexity scales independently of user interaction frequency. Burst computation when needed, idle otherwise: efficient resource usage by design.
Technical Foundation: C++ runtime · GPU kernel generation · SQLite registries · Electron/React workbenches · Flutter consumer apps · WebSocket command layer
Scale Achievement: 100K+ entities with emergent behavior · Biological timing · Millisecond burst execution · Apple Silicon optimized

What Problem MUSE Solves

Problem: Modern interactive fiction, educational software, and AI-driven storytelling systems are built on static scripts or probabilistic text generation. They can be immersive, but they do not simulate causality. Their characters do not truly perceive, decide, or change in response to integrated biological, psychological, and social forces. Consequences in these platforms are authored, not emergent.

Solution: MUSE enables embodied, scientifically grounded simulation where:

  • Perception, decision-making, learning, and relationships arise from real models of biology and psychology
  • Long-term change emerges from cause-and-effect, not narrative branching
  • Users can probe systems like scientists—not just consume a story
  • Worlds persist, evolve, and surprise without relying on LLM memory or hallucinated state

The result is a new medium: interactive worlds where people don't just read about perspectives—they inhabit them.

What Makes MUSE Different

Deterministic Simulation, Not Generation

  • Real causality — consequences emerge from biological and psychological systems, not scripts
  • Persistent state — worlds remember everything through database-backed memory, no LLM context limits
  • Inspectable logic — trace any behavior back to root cause through explicit data flow (sketched after this list)
  • No hallucination risk — simulation truth is ground truth; LLMs only translate at boundaries
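
A minimal sketch of the inspectable-logic property, assuming a hypothetical append-only trace schema (TraceRow and its fields are illustrative, not MUSE's actual log format):

```cpp
// Sketch of database-backed, inspectable state. TraceRow is
// hypothetical, not MUSE's actual log schema.
#include <cstdint>
#include <string>
#include <vector>

// One row per state change: which kernel wrote it, which event caused
// that kernel to run, and what changed.
struct TraceRow {
    std::uint64_t event_id;   // root cause (user prompt or world event)
    std::uint64_t entity;
    std::string   kernel;     // e.g. "forage_step"
    std::string   field;      // e.g. "hunger"
    float         before, after;
};

// Walking the log backwards from any observed value yields the full
// causal chain behind it.
std::vector<TraceRow> trace(const std::vector<TraceRow>& log,
                            std::uint64_t entity, const std::string& field) {
    std::vector<TraceRow> chain;
    for (auto it = log.rbegin(); it != log.rend(); ++it)
        if (it->entity == entity && it->field == field)
            chain.push_back(*it);
    return chain;
}
```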

Event-Driven Architecture

  • Burst computation on prompts — processes 100,000+ entities in milliseconds when the user acts (see the sketch after this list)
  • Efficient by design — idle when paused, no wasted cycles on continuous rendering
  • Scales independently — simulation complexity grows with world richness, not interaction frequency
  • Infrastructure-style patterns — central state management, distributed timing, resource-aware scheduling
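
A minimal sketch of that burst pattern, assuming hypothetical names (Prompt, BurstLoop, step_all): the loop blocks at zero CPU until a prompt arrives, then advances the whole world once.

```cpp
// Sketch of the burst pattern; these names are illustrative, not
// MUSE's actual API.
#include <condition_variable>
#include <mutex>
#include <queue>

struct Prompt { /* parsed user input */ };

class BurstLoop {
    std::mutex m;
    std::condition_variable cv;
    std::queue<Prompt> inbox;

    void step_all(const Prompt&) { /* dispatch kernels over all entities */ }

public:
    void submit(Prompt p) {                 // called by the interface layer
        { std::lock_guard lk(m); inbox.push(std::move(p)); }
        cv.notify_one();
    }
    void run() {
        for (;;) {
            std::unique_lock lk(m);
            cv.wait(lk, [&] { return !inbox.empty(); });  // idle: zero CPU
            Prompt p = std::move(inbox.front());
            inbox.pop();
            lk.unlock();
            step_all(p);   // burst: advance the whole world once
        }
    }
};
```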

Visual → Compiled Pipeline

  • Declarative configuration — design systems visually in CYPHER, define behavior through graphs
  • Generated execution — visual models compile to optimized GPU/CPU kernel code automatically
  • Database-driven contracts — all operations defined in SQLite schema, type-safe runtime dispatch (sketched below)
  • No hardcoded logic — every system emerges from schema, enabling full traceability
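
As a sketch of contract-checked dispatch, assuming illustrative names (Contract, Dispatcher): a kernel only runs if its registered signature matches the one recorded in the schema for that operation.

```cpp
// Sketch of contract-checked dispatch; these types are illustrative,
// not MUSE's actual registry.
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <unordered_map>

using BatchFn = void (*)(float*, std::size_t);

struct Contract {
    std::uint32_t signature;  // hash of the operation's declared types/shape
    BatchFn       fn;
};

class Dispatcher {
    std::unordered_map<std::string, Contract> table;

public:
    void register_kernel(std::string name, std::uint32_t sig, BatchFn fn) {
        table[std::move(name)] = Contract{sig, fn};
    }
    // expected_sig comes from the operation's schema row, so a stale or
    // mismatched kernel binary can never run.
    void dispatch(const std::string& name, std::uint32_t expected_sig,
                  float* data, std::size_t n) {
        auto it = table.find(name);
        if (it == table.end() || it->second.signature != expected_sig)
            throw std::runtime_error("contract mismatch: " + name);
        it->second.fn(data, n);
    }
};
```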

Multi-Tenant Platform Design

  • Role-segmented access — readers see GLYPH, engineers see CYPHER, researchers see PYTHIA
  • Same runtime core — everyone uses the same simulation instance, different permission layers
  • Separation of concerns — system engineering isolated from world authoring isolated from reading
  • Control plane thinking — CLIO functions as an ops dashboard, like the control planes modern platform companies expose to customers

Authentication, Authorization & Collaboration

  • OAuth-based identity layer — designed for federated authentication across consumer and enterprise contexts
  • RBAC permission model — readers, authors, researchers, and engineers share the runtime with different capability surfaces (see the sketch after this list)
  • Real-time collaboration infrastructure — operational transform foundations for concurrent editing in CODEX
  • Postgres-backed state layer — persistent, auditable permission hierarchies with cross-world identity
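
A minimal sketch of the capability model, with illustrative roles and bits (not MUSE's actual permission schema); in the real system the role tables would live in the Postgres-backed state layer.

```cpp
// Illustrative capability model; roles and bits are assumptions.
#include <cstdint>
#include <string>
#include <unordered_map>

enum Capability : std::uint32_t {
    CAP_READ       = 1u << 0,  // GLYPH readers
    CAP_ANNOTATE   = 1u << 1,
    CAP_EXPERIMENT = 1u << 2,  // PYTHIA researchers
    CAP_AUTHOR     = 1u << 3,  // CODEX creators
    CAP_RUNTIME    = 1u << 4,  // CYPHER/CLIO engineers
};

// In the real system these rows would live in the Postgres-backed
// permission tables; they are hardcoded here for the sketch.
const std::unordered_map<std::string, std::uint32_t> role_caps = {
    {"reader",     CAP_READ | CAP_ANNOTATE},
    {"researcher", CAP_READ | CAP_EXPERIMENT},
    {"author",     CAP_READ | CAP_AUTHOR},
    {"engineer",   CAP_READ | CAP_AUTHOR | CAP_EXPERIMENT | CAP_RUNTIME},
};

// Every surface calls the same check against the same runtime; only
// the role's capability mask differs.
bool allowed(const std::string& role, Capability c) {
    auto it = role_caps.find(role);
    return it != role_caps.end() && (it->second & c) != 0;
}
```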

Scientifically Grounded

  • Based on embodied cognition research — not game AI heuristics, actual perceptual and motor control models
  • Biological timing — systems execute at realistic rates (50ms reflexes, 1000ms deliberation; sketched after this list)
  • Emergent complexity — societies, economies, cultures arise from simple rules at scale
  • Transparent models — no black boxes; every system is inspectable, verifiable, traceable
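
A minimal sketch of multi-timescale scheduling using the timing figures above; System and advance are illustrative names, not MUSE's actual scheduler.

```cpp
// Sketch of multi-timescale scheduling; the 50 ms / 1000 ms periods
// are the figures cited above, everything else is illustrative.
#include <cstdint>
#include <functional>
#include <vector>

struct System {
    std::uint64_t         period_ms;  // how often this system runs
    std::function<void()> tick;
};

// Advance simulated time; run each system whenever its period elapses.
void advance(std::vector<System>& systems, std::uint64_t& now_ms,
             std::uint64_t dt_ms) {
    const std::uint64_t end = now_ms + dt_ms;
    for (std::uint64_t t = now_ms + 1; t <= end; ++t)
        for (auto& s : systems)
            if (t % s.period_ms == 0) s.tick();
    now_ms = end;
}

int main() {
    std::uint64_t now = 0;
    std::vector<System> systems = {
        {50,   [] { /* reflex update */ }},       // fast sensorimotor loop
        {1000, [] { /* deliberation step */ }},   // slow deliberative loop
    };
    advance(systems, now, 3000);  // one 3-second span of simulated time
}
```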

CYPHER Constructor — Visual system designer compiling to GPU kernels

The Ecosystem

Eight core tools spanning reader, creator, research, and engineering roles. Each role sees only what it needs.

Platform Foundations vs. Product Surfaces
Platform (implemented): Runtime engine, schema compiler, control plane (CLIO), auth/RBAC layer, real-time collaboration primitives, command discovery system

Products (in alpha): GLYPH reader, CALLIOPE marketplace, CODEX authoring, PYTHIA research interface—surfaces that consume the platform

Consumer Layer

What readers experience. Consumer-facing tools are built in Flutter for reading comfort, animation smoothness, and cross-platform parity.

GLYPH Alpha

Living Worlds E-Reader · Flutter

Interactive fiction interface where you guide, not control. Type or speak to influence what characters notice, remember, and attempt. Highlight to annotate. Ask to inspect. Probe memory, attention, uncertainty.

CALLIOPE Alpha

Launcher & Library · Flutter

Living Worlds marketplace and library manager. Downloads worlds, manages updates, launches GLYPH. Hidden developer mode for server monitoring.


CALLIOPE — Early prototype of the Living Worlds library (development began December 2025)

Core Runtime

FONT Beta

Simulation Runtime · C++ · Hardware-Accelerated

Core simulation engine designed for 100,000+ concurrent entities with emergent behavior. Traditional game engines handle ~1,000 scripted NPCs; FONT scales 100x through infrastructure-style patterns: temporal distribution of computation, pre-filtered batch execution, and distributed timing across GPU kernels.

Heterogeneous execution (GPU/CPU), meaning-first spatial representation, multi-timescale scheduling. The shared substrate everything runs on.
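
A hedged sketch of the pre-filtered batch idea, with illustrative names: per-system index lists keep kernels touching only relevant entities, so cost tracks relevance rather than population.

```cpp
// Sketch of pre-filtered batch execution; all names are illustrative.
#include <cstdint>
#include <vector>

struct World {
    std::vector<float>         hunger;      // structure-of-arrays state
    std::vector<std::uint32_t> hungry_idx;  // pre-filtered index list
};

// Rebuilt by a full scan here for simplicity; a real system would
// update the list incrementally as state changes.
void refilter(World& w, float threshold) {
    w.hungry_idx.clear();
    for (std::uint32_t i = 0; i < w.hunger.size(); ++i)
        if (w.hunger[i] > threshold) w.hungry_idx.push_back(i);
}

// The foraging kernel touches only the relevant subset: with 100,000
// entities and 2% relevance, that is 2,000 iterations, not 100,000.
void forage_kernel(World& w) {
    for (std::uint32_t i : w.hungry_idx)
        w.hunger[i] *= 0.5f;  // toy rule: eating halves hunger
}
```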

Language Layer

BABEL Alpha

AI Interface · C++ · Cross-Platform

Shared AI instrument across all tiers. Narrative agent in GLYPH, code assistant in CYPHER, operational assistant in CLIO. Permission-aware behavior per context. Designed for multilingual deployment.

System Engineering

CYPHER Alpha

Visual Systems Designer · Electron/React

Graph-based modeling environment for designing systems that deploy to FONT. Define entity capabilities, profile behaviors, tune parameters. ML/statistics for optimization—but no black boxes. All models transparent, verifiable, traceable.

Control Plane

CLIO Beta

Developer OS · Electron · Python/Flask

Manages the entire MUSE ecosystem: multi-terminal environment, database explorer, integrated AI pair programming. Background analyzers track dependencies and surface patterns for continuous improvement.


CLIO — Early prototype of the developer OS (development began October 2025)

Research & Authoring

PYTHIA Scoped

Research Interface · Electron

Scientific instrumentation surface. Set parameters, run experiments on FONT, export results to CSV/MATLAB/database. Same simulation core, research-grade access.

CODEX Scoped

Content Creation · Electron

AI-assisted world authoring. Creators think in semantics and description—no curves, mathematics, or sampling knowledge required. CODEX quantifies their input into data FONT uses to drive simulation.

Impact

  • Built full data → execution → UI → observability → AI-assisted toolchain as a single coherent platform
  • Enabled full lifecycle traceability across development, execution, and analysis
  • Designed for long-horizon extensibility via modular tools and role-segmented permissions
  • Separated system engineering from world authoring—creators think in semantics, not mathematics

Who It's For

  • Readers — experience Living Worlds through GLYPH, interacting via natural language
  • Researchers — run experiments and export data via PYTHIA without touching code
  • Creative directors — author worlds through CODEX using semantics and description
  • System engineers — build the perceptual, behavioral, and physical systems that power everything via CYPHER and CLIO

Why This Exists

MUSE grew out of my work in computational neuroscience—specifically research on sensorimotor control, affordance perception, and embodied cognition. For years I studied how biological agents perceive possibilities for action, coordinate motor primitives, and adapt through experience. I kept thinking: what if people could actually inhabit these models, not just read about them?

Most interactive fiction relies on branching narratives or probabilistic text generation. Both feel authored—because they are. Branching stories give you choices, but the world doesn't truly respond; it follows predetermined paths. LLM-driven narratives feel fluid, but they hallucinate consistency—characters forget, worlds contradict themselves, consequences evaporate.

Real people don't experience life as branching paths or probabilistic outputs. They perceive affordances in their environment, coordinate distributed actions across biological systems, form memories tied to spatial and emotional context, and adapt their behavior based on integrated feedback. That's the kind of experience I wanted to build.

MUSE is an attempt to create scientifically grounded simulation you can inhabit—where characters have genuine perceptual systems that detect affordances, biological needs that drive motivation, social bonds that evolve over time, and memories that persist with causal integrity. Not to replace traditional storytelling, but to create a new medium where narrative emerges from authentic cause-and-effect.

The technical architecture—event-driven execution, visual-to-compiled pipelines, multi-tenant workbenches—emerged from the requirements of that vision. To simulate 100,000 entities with biological realism, you need infrastructure-style patterns. To let domain experts (writers, researchers, readers) interact without touching code, you need role-segmented interfaces. To ensure reproducibility and traceability, you need database-driven contracts.

Platform lesson: The same way researchers don't want to rebuild ML pipelines for every study (ePOCHE), or scientists don't want Docker chaos for every scan (MR.Flow)—storytellers and readers shouldn't have to choose between rigid branching or hallucinated consistency. MUSE provides a third path: emergent narratives from deterministic, inspectable simulation. Stories that surprise their creators because the world actually responds to what you do.

Building this required combining my research background (embodied cognition, biological timing, affordance theory) with platform engineering thinking (event-driven architecture, central state management, declarative compilation). It's a synthesis of neuroscience and infrastructure design—both focused on making complex systems work reliably at scale.

MUSE is still early. The runtime works. The core workbenches exist. The first Living Worlds are taking shape. But the vision is larger: a platform where anyone can build and experience worlds that think, remember, and evolve—not through scripts or generation, but through authentic simulation.