
MUSE: How Fighting Game Engines Led Me to Build a 'Being' Simulator


I spent years, off and on, trying to build a game (or four) in Unity before realizing I wasn't trying to build a game at all. I was trying to simulate how beings work—perception, memory, decision-making grounded in research I've been contributing to for over a decade. That required something Unity was never designed for.

Six months after abandoning my last Unity prototype, I have a simulation engine, a natural language interface, a growing ecosystem of integrated tools (twelve, currently), and a company. This is the story of how giving up on game engines led me to build something I never planned—and why MUSE isn't a game engine.

What I Was Actually Trying to Build

I've been worldbuilding a setting called Mechanician(s) Brood for seven years or so. But it was never just lore—it was a sandbox for ideas from my research in sensorimotor neuroscience and embodied cognition. I study how perception and action intertwine, how context shapes everything, and why the body isn't just a vehicle for the brain.

I wanted characters who worked that way. Not AI that picks from a behavior tree. Characters with metabolisms that affected their mood. Memories that decayed and distorted over time. Perception shaped by attention, fatigue, and what they expected to see. Decisions that emerged from biology, not scripts. And not implemented in a way that was so abstract that it became a toy.

That's a hard thing to build in a traditional game engine.

Why Traditional Engines Couldn't Do It

I tried making a top-down RPG, a VR action adventure, a VR tabletop roguelite, a "simple" 2D roguelite. Every attempt hit the same wall.

Game engines are optimized for rendering frames. They run update loops at fixed rates, process input, draw pixels, repeat. Everything happens on the same clock. But biological systems don't work that way. Reflexes are fast. Digestion is slow. Memory consolidation happens as we sleep. Trying to simulate authentic biological processes inside a frame-locked render loop felt... artificial.

I kept trying to bolt science onto a pipeline that wasn't designed for it. The engine fought me at every turn.

The Trance

July 4th, 2025. I spent two days tinkering in Unity before giving up—not on the vision, but on making Unity fit it again.

What followed was something between obsession and revelation. I have a tool now called CLIO that tracks my work. It tells me I wrote 208,791 words across July and August. Not code, but planning documents. System designs inspired by metabolic pathways. Memory architectures based on consolidation research. Perception models grounded in ecological psychology. Diagrams of how characters would sense, decide, and remember.

I also recorded dozens of voice memos—rambling monologues into my phone while walking or driving, working through problems out loud. Most I never listened to again. The ideas went straight into markdown files the moment I sat down. The recordings were one step in a process I am still refining.

It felt like a trance. Years of ideas finally had somewhere to go, and it was almost shocking how quickly prototypes came together. Much of the trial and error had been eliminated by the planning, and by the familiarity that came from years of thinking about these same systems as a scientist.

Early scribbles that seem almost comically simple in hindsight


The First Attempt: Rendering Text

The characters in my Mechanician(s) Brood setting are two inches tall in a massive world. I needed a way to convey scale and detail that sprites couldn't capture. So I thought: what if the entire game was represented in text? Rich, dynamic prose that could describe things a renderer never could. But I was still thinking in game terms. I imagined text of all sizes in the 2D world: the letters T R E E in the shape of a tree, taking up real (virtual) space.

I built the first version of GLYPH—a Vulkan-based text renderer with signed distance field fonts and text-based effects. Two months of work. It functioned. But I was spending all my time translating character state into text representations instead of building the systems I actually cared about.

The rendering was a distraction, and I was solving the wrong problems (slowly).

The Breakthrough

During this period, I'd built a command-line interface for control. Something like:

#attack:sword:thrust @beetle

Fuzzy matching, command registries, fluid integration with mouse input. It worked. But one morning I stopped and asked: why am I making users learn a CLI?
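As a rough illustration, here is what that kind of parsing might look like: a registry of known verbs, a fuzzy match so a typo still resolves, and a structured command out the other end. The names and schema here are mine, invented for the sketch—not MUSE's actual code.

```python
import difflib
from dataclasses import dataclass, field

@dataclass
class Command:
    verb: str                      # e.g. "attack"
    args: list = field(default_factory=list)  # e.g. ["sword", "thrust"]
    target: str = ""               # e.g. "beetle" (from the @ token)

REGISTRY = {"attack", "defend", "examine", "move"}  # hypothetical verb registry

def parse(line: str) -> Command:
    """Parse '#verb:arg:arg @target' into a Command, fuzzy-matching the verb."""
    head, _, target = line.partition("@")
    parts = head.strip().lstrip("#").split(":")
    verb, args = parts[0], parts[1:]
    # Fuzzy-match against the registry so 'atack' still resolves to 'attack'.
    match = difflib.get_close_matches(verb, REGISTRY, n=1, cutoff=0.6)
    if not match:
        raise ValueError(f"unknown command: {verb!r}")
    return Command(verb=match[0], args=args, target=target.strip())

cmd = parse("#attack:sword:thrust @beetle")
# cmd.verb == "attack", cmd.args == ["sword", "thrust"], cmd.target == "beetle"
```

The same structure is what makes the later LLM idea cheap to adopt: the model only has to emit one of these structured commands, and everything downstream stays unchanged.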

Why not let them type what they want—plain English—and have an LLM parse it into commands?

That question broke everything open.

If natural language was the input, the interface didn't need to be visual at all. I could describe the world in prose. The player could respond in prose. The "game" could be interactive fiction—an e-reader experience, not a rendered one.

Suddenly, all the constraints I'd been fighting dissolved.

I didn't need to translate biological state into sprites and animations. I could describe it directly: The beetle's carapace is cracked. Its movements are erratic—loss of hemolymph, maybe. It's protecting something behind it.

Text could carry the richness my simulation was producing. The medium matched the content.

Before vs MUSE


"This Is Going to Sound Insane"

A few weeks into this revelation, I texted a colleague and long-time friend (a neuroscientist turned data scientist, and former co-founder at PlatformSTL).

"I have an idea for a company, and I'm curious if you'd want to be involved. It's going to sound insane."

I then tried to describe what I was building. A simulation of brains and beings. Of ecosystems. Emergence at scale—Conway's Game of Life, but with characters who perceive and remember and decide. I didn't even get to the craziest parts. I was worried I'd scare him off.

Because MUSE has so many moving parts. Individually, many of them are familiar concepts—patterns any engineer would recognize. But many had been tweaked, some in minor ways, some drastically. And the cohesive result was something I hadn't seen done before.

It isn't just interactive literature. It's interactive literature emerging from large-scale deterministic simulation. A world where story isn't authored—it unfolds from butterfly effects rippling through a functioning ecosystem. Characters don't follow scripts. They live. And even if you experience a small sliver of that world at any moment, the rest of the world continues to exist in full fidelity.

He didn't think I was insane. He joined the endeavor.

What This Unlocked

The shift to interactive fiction didn't just change the interface. It changed how I thought about the entire architecture.

Time became flexible. Without frames to render, systems could run on their own terms. The simulation could model fast processes and slow processes simultaneously, each at appropriate resolution.
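One way to picture this is a toy event-driven scheduler (my own sketch, not FONT's implementation): each system registers with its own tick interval on simulation time, so a 50 ms reflex and an hour-long digestive process coexist with no shared frame rate.

```python
import heapq

class Scheduler:
    """Toy event-driven scheduler: every system ticks at its own interval
    on simulation time (integer milliseconds), with no shared frame rate."""

    def __init__(self):
        self.queue = []  # heap of (next_due_ms, name, interval_ms, callback)
        self.now = 0     # simulation time in milliseconds

    def register(self, name, interval_ms, callback):
        heapq.heappush(self.queue, (self.now + interval_ms, name, interval_ms, callback))

    def run_until(self, t_end):
        # Advance to whichever system is due next, fast or slow alike.
        while self.queue and self.queue[0][0] <= t_end:
            due, name, interval, cb = heapq.heappop(self.queue)
            self.now = due
            cb(due)
            heapq.heappush(self.queue, (due + interval, name, interval, cb))
        self.now = t_end

log = []
sim = Scheduler()
sim.register("reflex", 50, lambda t: log.append(("reflex", t)))               # fast: 50 ms
sim.register("digestion", 3_600_000, lambda t: log.append(("digestion", t)))  # slow: 1 hour
sim.run_until(200)
# log holds four reflex ticks (50, 100, 150, 200 ms); digestion hasn't fired yet
```

Because time only advances to the next due event, the slow systems cost nothing while they wait—which is also what makes simulating thousands of characters tractable.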

Scale became possible. Text descriptions don't cost more to generate for complex scenes. I could simulate thousands of characters with biological and psychological depth, then describe only what mattered to the current moment.

Interaction became natural. Players don't learn controls or memorize commands. They say what they want to do. The system interprets intent and responds in kind.

I spent the following months building the architecture that made this real. An entity system designed for this kind of simulation. Execution organized around the needs of the content, not the constraints of rendering. And tooling to manage the complexity that emerged.

The Ecosystem Emerges

What started as a game engine became a platform.

FONT is the simulation runtime—the scheduler that orchestrates everything. It doesn't define what systems do; it runs them. The modeling happens elsewhere. FONT is the heartbeat. The name started as a joke—the original game was all text, all font. But "font" also means wellspring. Source. There are few things I love more than a double (or triple) entendre.

BABEL is the semantic interface layer—the same natural language infrastructure running behind every tool, wearing different masks. In GLYPH, it's the narrator translating simulation state into prose. In CYPHER, it's a development assistant parsing queries like "show me everyone nearby who's hungry" into registry calls. In CLIO, it helps me interrogate my own codebase in plain English. One system, many roles. The universal translator I didn't know I was building until I needed it everywhere.
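To make the "one system, many masks" idea concrete, here is a hypothetical sketch of the CYPHER-style case: some parser (an LLM or otherwise) reduces "show me everyone nearby who's hungry" to a structured filter, which then runs against an entity registry. The schema, thresholds, and names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    distance: float  # meters from the player
    hunger: float    # 0.0 (sated) .. 1.0 (starving)

REGISTRY = [
    Entity("beetle", 2.0, 0.8),
    Entity("herbalist", 4.5, 0.1),
    Entity("blacksmith", 80.0, 0.9),
]

# What a parsed intent might look like for "show me everyone nearby who's hungry":
intent = {"filters": [("distance", "<", 10.0), ("hunger", ">", 0.5)]}

OPS = {"<": lambda a, b: a < b, ">": lambda a, b: a > b}

def query(registry, intent):
    """Return names of entities satisfying every (attribute, op, value) filter."""
    def matches(e):
        return all(OPS[op](getattr(e, attr), val) for attr, op, val in intent["filters"])
    return [e.name for e in registry if matches(e)]

print(query(REGISTRY, intent))  # ['beetle']
```

The point of the layering: the language model only ever produces the structured intent, so the same query machinery can sit behind a narrator, a dev assistant, or a codebase explorer.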

GLYPH is the reader interface—where players actually experience Living Worlds. It started as a Vulkan text renderer I spent two months building. Now it's a Flutter app designed for the thing that matters: hours of comfortable reading with interaction woven through, not bolted on. The pivot hurt, but the result is what I wanted all along.

CYPHER is the research and development workbench—where the simulation gets built and tuned. Visual system design through TESSERA, performance profiling, runtime inspection, research-grade statistical analysis. If FONT is the heartbeat, CYPHER is the operating table. It's not pretty work, but it's where the actual science happens.

CLIO is development management—but also development memory. Session tracking, database exploration, architecture visualization, and a searchable log of every decision, conversation, and rabbit hole that shaped the ecosystem. Under the hood, SIBYL crawls the codebase—C++, Go, Python, TypeScript, Dart, SQL—and extracts dependency graphs, API surfaces, and health metrics that CLIO visualizes. It's how a team of one (until recently) kept a growing ecosystem legible. Six months of context, queryable. I'm writing this post in CLIO, which means this post will become part of the record too. Turtles all the way down.

There are more. Twelve tools and counting. Some are full workbenches, others are internal services—each solving a specific problem, and all performing a role that ultimately enables FONT to do its job.

The result: worlds where you return to the village after a week and discover the blacksmith's daughter recovered, married the herbalist's son, and they're expecting. Not because I wrote that. Because the simulation ran.

What I Learned

The medium shapes the architecture. When I was building for visual rendering, every decision bent toward frame rates and sprites. The moment I embraced text, the architecture could finally reflect what I was actually simulating.

Fight the tool or change the tool. I spent years fighting Unity. Two days of giving up led to six months of building exactly what I needed. Sometimes the constraint isn't the problem—it's the framing.

Obsessive planning isn't procrastination. Those 200,000 words weren't avoidance. They were the foundation. When I finally started building, I knew what I was building. The voice memos I never listened to again weren't wasted either—they were how I thought through problems before I had the vocabulary to write them down. I was scoping something I didn't yet understand. I wanted, more than anything, to avoid expensive guesswork.

Tools emerge from problems. I didn't plan twelve tools. I planned one engine. Each tool exists because something became too hard to do without it and it made no sense to bolt those capabilities onto the engine directly.

The crazy idea might not be crazy. I was terrified to describe MUSE to anyone. It sounded too ambitious, too sprawling, too strange. But the reactions surprised me.

When I told the person who is now our head of simulation research, he was excited. He wanted to see more, hear more. When I told my brothers, they didn't ask for technical details—they jumped straight to implications. "So it's like a game where almost anything is possible, and cause and effect actually works?" They got it immediately (though I want to push back on calling it a game).

When I told another colleague—a neuroscientist as well, oddly enough—she thought I sounded crazy. To be fair, I tried to explain everything at once. I was buzzing with excitement, I hadn't shared it with anyone beyond my friend, and I word-vomited the entire vision in one breathless monologue. Not my best pitch.

But with each passing week, as I built more and explained more coherently, she shifted. Skepticism became curiosity. Curiosity became engagement. She now rounds off our founding team of three.

What's Next

This is the first in a series about building the MUSE Ecosystem. I wanted to start with the origin—why this exists, what I was trying to solve, and how the shape of it emerged.

Future posts will cover BABEL and natural language as a universal interface. The philosophy behind Living Worlds. And what it means to build characters that remember and perceive like biological systems.

Six months ago I wanted to make a game. I'm building a platform for simulating minds—and a new medium for experiencing the stories that emerge from them.

I think it's exactly what I was always trying to make.

This post is part of a series on building the MUSE Ecosystem. Follow for updates on the architecture, the philosophy, and the path toward Living Worlds.

Nathan Baune

Neuroscientist & engineer building simulated worlds. Founder of Gothic Grandma & Chief Architect of the MUSE Ecosystem.