I've worked with Parkinson's patients. Ask many of them to count backwards by threes, or to recall a story from earlier in the session, and you can watch a tremor emerge in their hand. The cognitive task recruits resources that were suppressing the tremor. The load becomes visible.
This is an exaggerated case—focal degeneration making the interconnectedness of the brain impossible to ignore. But it's not an exception. It's how brains work. "Cognition" isn't separate from motor systems, sensory systems, emotional regulation, or anything else. It's all one system with finite resources. When you recruit one part, other parts feel it.
This shapes how I think about developer tools. Not as interfaces to software, but as environments that compete for the same neural resources your actual work requires.
The Paper That Changed How I Think
Most discussions of cognitive load in software reference working memory limits, chunking, intrinsic versus extraneous load. That's useful, but it's not where my thinking started.
The paper that shaped my thinking was Cisek and Kalaska's 2010 review on neural mechanisms for interacting with a world full of action opportunities. It's essentially a treatise on Bayesian decision-making in the brain—how we hold multiple possible actions in parallel and switch between them as context changes. Not sequential deliberation, but continuous competition between what is salient and what is possible.
The same principle applies to tools.
A tool needs to be focused enough to not distract from the current operation, but not so hidden that the possibility of another feature isn't available when you need it. The actions you could take should remain present without demanding attention. When context shifts, the relevant affordance should be ready.
Many tools fail this. They may overwhelm you with everything at once, or hide features so thoroughly you forget they exist (Amazon's marketplace manages to accomplish both at once). The sweet spot is narrower than people think, and it varies by user.
Why I Built CLIO the Way I Did
I have ADHD. I have intense hyperfocus—the kind where hours disappear and I surface wondering if I ate lunch (the answer is no). There's a common misconception that ADHD means worse-than-average focus. For me, and for many others with the same experience, ADHD is a significant difficulty managing attention volitionally. The flip side is often deep hyperfocus.
The stereotype is the easily distracted child in the classroom. To me, that says the child wasn't engaged in the first place. When I am engaged, a marching band could wander by playing their rendition of Lady Gaga and I would find myself genuinely wondering why Bad Romance was stuck in my head hours later.
It's a double-edged sword.
Every time I switch between workspaces or workflows, there are moments lost to regaining my bearings. Chances to lose context. Chances to get distracted if I've made the foolish decision to multitask.
If I don't write down a thought instantly, it will be gone. Not in five minutes. In five seconds. It barely grazed my brain and is already dissipating into the aether.
So I built CLIO around my own constraints.
I can log a todo in moments. I can highlight code or text, right-click, and send it to Supabase for perpetuity—then let it safely drop out of my attention before it consumes me. Terminal state persists even if the app crashes. I restart and everything is exactly where I left it. Journal entries autosave. Blog posts (including this one) save as I write. I can be mid-thought, switch to Claude Code to restructure a directory, click a button to end my dev session (which logs my accomplishments and automates the commit and push to the repo), and return to my sentence without losing the thread.
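CLIO's actual persistence code isn't shown in this post; as a minimal sketch of the underlying pattern (all file names hypothetical), state is written on every change and restored at startup, with an atomic write so a crash mid-save can't corrupt it:

```python
import json
import os
import tempfile

STATE_PATH = "session_state.json"  # hypothetical location

def save_state(state: dict, path: str = STATE_PATH) -> None:
    """Write state atomically: a crash mid-write leaves the old file intact."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX and Windows

def load_state(path: str = STATE_PATH) -> dict:
    """Restore exactly where we left off; empty state on first run."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

# Every mutation persists immediately -- the user never has to remember.
state = load_state()
state["todo"] = "write down the thought before it dissipates"
save_state(state)
```

The point of the pattern is that remembering becomes the tool's job, not the user's: after a crash, `load_state` returns exactly what was last saved.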
The workbench keeps what I need where I need it. Golden Panel layouts let me arrange things how I want. User-specific layouts save to the cloud and load automatically based on OAuth—my colleagues see their layouts, I see mine.
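CLIO's storage schema isn't described here; as a sketch of the general pattern (a plain dict standing in for the cloud table, all names illustrative), layouts are keyed by the authenticated user's ID so each person automatically gets their own arrangement:

```python
# In production this mapping would live in a cloud table (e.g. Supabase)
# keyed by the OAuth user ID; a dict stands in for it here.
layouts: dict[str, dict[str, list[str]]] = {}  # user_id -> {layout_name: panels}

def save_layout(user_id: str, name: str, panels: list[str]) -> None:
    """Persist one named panel arrangement for one user."""
    layouts.setdefault(user_id, {})[name] = panels

def load_layouts(user_id: str) -> dict[str, list[str]]:
    """Called after OAuth resolves the user's identity."""
    return layouts.get(user_id, {})

save_layout("user_a", "debugging", ["terminal", "editor", "logs", "todos"])
save_layout("user_b", "writing", ["editor"])

# Each user sees only their own layouts; an unknown user sees none.
my_layouts = load_layouts("user_a")
```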
This isn't just preference. It's cognitive accommodation. The tool reduces the load of remembering, reduces the load of context-switching, reduces the load of hunting for features. That frees resources for the actual work.
The Balance Principle
Here's what I've learned: you can go too far in either direction.
Strip everything away for "simplicity" and users can't find what they need. Features become invisible. Every task requires remembering that something exists and hunting for where it lives. That's cognitive load too—just a different kind.
Expose everything at once and users drown. Every glance at the interface triggers decision fatigue. The mind spends resources filtering instead of working.
The answer isn't minimalism or maximalism. It's adaptability.
CLIO supports both extremes. One user might hide everything behind tabs in a single window—clean, focused, minimal. I might have four panels visible simultaneously because that suits my current workflow. And I can switch layouts based on what I'm doing. Writing mode. Debugging mode. Research mode. I'm not stuck with anything.
The trick is to simplify, keep it clear, keep it consistent—and allow customization wherever it can't break the product. Flexibility within constraints.
Visual Programming as Cognitive Management
I used to write MUSE systems as hand-crafted C++ classes. Metabolism, perception, memory—each one a file, a class hierarchy, a tangle of dependencies.
C++ classes are almost impossible to hold in your head. You can't easily see how they interconnect. You can't zoom out. You can't change the way you view them without rewriting them. And you certainly can't show them to a non-programmer and have a meaningful conversation.
TESSERA—the visual constructor inside CYPHER—changed this entirely.
Now systems are built as node graphs. Everything saves to a database. The same underlying relationships can be visualized in multiple ways: high-level system dependencies, detailed data flow, individual heuristic logic. You can look down from the clouds or zoom in with a microscope. Switch views with a click. The representation changes; the compiled output doesn't.
This matters for cognitive load because views are cheap. To change a C++ class, you recode it. To change a view of a database-backed graph, you just... change the view. Map it to the UI. Done.
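TESSERA's internals aren't shown in this post; as a minimal sketch of the idea (node and edge names hypothetical), one stored edge list rendered through two different views—the data never changes, only the function that maps it to the screen:

```python
# One source of truth: edges stored as (source, target, kind) rows,
# as they might come back from a database query.
edges = [
    ("metabolism", "perception", "system"),
    ("perception", "memory", "system"),
    ("memory.recall", "perception.salience", "dataflow"),
]

def system_view(edges):
    """Cloud-level view: which top-level systems depend on which."""
    return sorted({(src.split(".")[0], dst.split(".")[0])
                   for src, dst, _ in edges})

def dataflow_view(edges):
    """Microscope view: only the fine-grained data-flow links."""
    return [(src, dst) for src, dst, kind in edges if kind == "dataflow"]

# Two representations of the same rows -- switching views touches no data.
overview = system_view(edges)
detail = dataflow_view(edges)
```

Adding a third view is another pure function over the same rows; the compiled output, like the stored graph, is untouched.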
Whatever your mental model, you can create a visualization that matches it. Whatever your workflow, you can optimize for it. The tool adapts to cognition instead of demanding cognition adapt to the tool.
I wouldn't have made it past a few systems doing it the old way. The complexity would have crumbled under its own weight. Visual construction isn't a convenience—it's what makes the project possible at all.
The Sadism of Scientific Software
I need to vent for a moment about scientific GUIs.
Gray on gray on gray. Lists where you manually enter filepaths but can't see the full path. Documentation that lives only on ancient wikis, so you end up learning the API from someone's YouTube video from 2014. Modal dialogs that block everything. Settings buried seventeen clicks deep. Error messages that say "Error" and nothing else (when the program doesn't simply crash).
Powerful tools, undeniably. But at what cost?
The MATLAB codebase I inherited required running two separate scripts on two separate computers simultaneously. Anyone who has tried using two mice and two keyboards understands how that goes. Your brain isn't built for it. Every moment is spent managing the coordination, not doing the work (wondering why letters aren't appearing on the screen).
I think you can have the best of both worlds. Power doesn't require pain. Depth doesn't require hostility. The choice to make software hard to use isn't a technical constraint—it's a failure of imagination, or care, or both.
Designing for Users You'll Never Meet
Here's something the software industry doesn't think about enough: cognitive impairment isn't an edge case.
I've worked with stroke patients, amputees, people with Parkinson's, aging populations. The range and variability in how humans process information is enormous. What's effortless for one person is impossible for another—and not because of motivation or intelligence.
This will only become more important. The first generation that grew up with computers is now entering the age where cognitive decline begins. They'll expect to keep using the tools they've always used. They'll have the motivation and the experience. What they won't have is the processing speed, working memory, or attentional control they once did.
Are we building tools that will work for them? Mostly, no. We're building for ourselves—young, neurotypical, highly trained. We assume our cognitive profile is universal. It isn't.
Accessibility isn't a feature you add. It's a lens that changes how you design from the start. And cognitive accessibility is part of that—not just screen readers and color contrast, but information architecture, pacing, error tolerance, and recovery.
The Connection to FONT
I'll say this briefly because it deserves its own post: FONT's architecture reflects these same principles.
Traditional game engines process everything serially. Every entity, every system, every frame—sequential logic, top to bottom, racing to hit 60 FPS.
FONT uses distributed, parallel principles modeled on how brains actually work. Not discrete ticks but fluctuations over time. Closed-loop systems where one calculation impacts another across frames. Behavior emerges from continuous feedback, not from executing a giant checklist every 16 milliseconds.
The brain doesn't process the entire world every moment. It filters for salience. It runs processes at different timescales. It holds possibilities in parallel and commits resources only when needed.
FONT does the same. It's too much to explain here, but the connection is real: the same cognitive science that shapes how I design tools shapes how I design the simulation itself. The principles scale from user interfaces down to execution architecture.
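FONT's actual scheduler is out of scope here; as a toy sketch of the principle (all names and constants hypothetical), processes run at different timescales, a decaying salience value gates which ones receive resources, and state leaks across frames instead of being recomputed from scratch every tick:

```python
class Process:
    """A toy process with its own timescale and a decaying salience."""

    def __init__(self, name: str, period: int, salience: float):
        self.name = name
        self.period = period      # runs every N frames, not every frame
        self.salience = salience  # continuous value, leaks toward zero
        self.activity = 0.0
        self.updates = 0          # how often it actually got resources

    def step(self, frame: int) -> None:
        self.salience *= 0.99     # decays unless re-boosted by events
        if frame % self.period == 0 and self.salience > 0.1:
            # Leaky integration: activity carries across frames rather
            # than being recomputed from a giant checklist every tick.
            self.activity += 0.1 * (self.salience - self.activity)
            self.updates += 1

processes = [
    Process("threat_detection", period=1, salience=0.9),  # fast loop
    Process("hunger", period=10, salience=0.5),           # slow loop
    Process("idle_memory", period=30, salience=0.05),     # sub-threshold
]

for frame in range(300):
    for p in processes:
        p.step(frame)
```

After 300 frames the fast, salient loop has run far more often than the slow one, and the sub-threshold process never consumed a single update—resources went only where salience demanded them.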
What I've Learned
Cognitive load is embodied. It's not abstract "mental effort." It's neural resources that could be doing something else. When your tool demands attention, it's competing with the work itself.
Affordances should be present but not demanding. The Cisek and Kalaska insight: hold possibilities in parallel, commit based on context. Good tools make features discoverable without making them distracting.
Customization is accommodation. Different brains work differently. Letting users arrange their environment isn't a luxury—it's how you serve cognitive diversity without building twelve different apps.
Visual representations and abstractions can reduce load. Not because pictures are easier than text, but because you can create multiple views of the same truth. Match the representation to the mental model. Don't force the mental model to match the representation.
Simplicity isn't minimalism. It's removing the right things and keeping the right things. That requires understanding what users actually need to hold in their heads—which at times requires understanding how heads work.
What's Next
This is the fourth post in a series about building the MUSE Ecosystem. The first covered FONT and the origin story. The second covered BABEL and the divide between deterministic simulation and black-box AI. The third covered lessons from research and production infrastructure.
Next, I'll write about Living Worlds—the philosophy behind emergent narrative and what it means to build stories that happen rather than stories that are told.
This post is part of a series on building the MUSE Ecosystem. Follow for updates on the architecture, the philosophy, and the path toward Living Worlds.