I've spent most of my career studying, modeling, or designing around human cognition.
At first, that meant cognitive neuroscience and biomechanics: how the brain integrates sensory information, how perception and action interact, how learning unfolds over time. Later, it meant building tools—interfaces, pipelines, simulations, ML systems—that people use to think, explore, and understand complex data.
None of the ideas in this essay are new. Many people have written thoughtfully about attention economies, dark patterns, and the misuse of behavioral science in technology.
Working alongside these ideas every day makes them increasingly difficult to treat as abstract concerns. When cognition becomes something you model, tune, and optimize against in practice, the implications stop being theoretical.
A huge amount of our collective understanding of cognition is not being used to help people.
It's being used to extract from them.
Cognition as a Lever
If you look honestly at much of modern consumer technology, the questions being asked aren't:
- How do we help people understand better?
- How do we support learning and reflection?
- How do we respect attention as finite and valuable?
They're questions like:
- How do we keep people looking longer?
- How many notifications can we send before they disengage?
- How variable should rewards be to maximize return visits?
- How far can we push frustration before churn outweighs profit?
- How much can we charge before people leave?
These outcomes aren't accidental. They are deliberate applications of cognitive science.
Understanding attention, reward prediction, habit formation, uncertainty, novelty, social comparison, memory decay—these are powerful tools. And in many systems, they're used adversarially: not to align with human goals, but to steer behavior toward metrics.
Click-through rate. Engagement time. Retention curves.
People become optimization targets.
What's often left unsaid is that any system designed to maximize or steer behavior will necessarily have long-lasting effects on the people subjected to it. Learning systems adapt. Reward systems recalibrate. Expectations shift. Attention patterns change.
You don't just influence behavior in the moment—you train it.
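To make that concrete: the optimization loop behind questions like these is often no more sophisticated than a bandit algorithm pointed at a person. Here is a minimal sketch in Python, with hypothetical names, assuming the metric is notification clicks:

```python
import random

# Hypothetical notification strategies a product might test against a user.
variants = ["morning_push", "evening_push", "streak_reminder"]
clicks = {v: 0 for v in variants}  # observed clicks per variant
sends = {v: 0 for v in variants}   # times each variant was sent

def choose_variant(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best click-through rate, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / sends[v] if sends[v] else 0.0)

def record_outcome(variant, clicked):
    """Update the counts. Note what is absent: any measure of whether the person is better off."""
    sends[variant] += 1
    clicks[variant] += int(clicked)
```

Nothing in this loop asks whether the notification helped. The objective is the click, and the person is simply the environment being optimized against.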
The Quiet Tragedy
What makes this especially painful—at least for me—is knowing how much good could be done with the same understanding.
Cognitive science can be used to:
- reduce confusion instead of amplifying it
- help people build accurate mental models
- design tools that respect cognitive limits rather than exploit them
- support learning, curiosity, and mastery
- give people agency instead of dependency
Instead, much of it is used to probe the boundaries of self-regulation.
Not always maliciously. Often incrementally. "Just one more experiment." "Just a small tweak." "Everyone else is doing it."
But cognition doesn't experience these changes incrementally. Systems that repeatedly reward certain behaviors reshape learning and expectation over time. They influence what feels rewarding, what feels boring, what feels intolerable to wait for.
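One way to see why: even the simplest models of reward learning, like the Rescorla-Wagner update, treat expectation as a running estimate pulled toward each delivered reward. A toy sketch, with illustrative numbers rather than a claim about any particular product:

```python
def update_expectation(expected, reward, learning_rate=0.1):
    """Rescorla-Wagner-style update: expectation moves toward the delivered reward.
    The gap (reward - expected) is the prediction error that drives learning."""
    return expected + learning_rate * (reward - expected)

expected = 0.0
for _ in range(200):  # 200 exposures to a reliably high-reward feed
    expected = update_expectation(expected, reward=1.0)

# Expectation now sits near 1.0: the same stimulus produces little positive
# surprise, and anything slower or plainer registers as a loss.
```

The model is crude, but the direction is right: repeated high-frequency reward raises the baseline, and the baseline is what "boring" is measured against.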
The aggregate effect is rather bleak: systems that are incredibly sophisticated about how minds work, and largely indifferent to whether those minds are better off.
Optimization Without Meaning
I think this is why so much of mainstream tech culture feels uncomfortable to me.
The obsession with growth. The fetishization of speed. The shallow rhetoric about "users" as abstract units. The assumption that if something increases engagement, it must be good.
There's very little patience. Very little nuance. Very little room for contemplation.
It feels inhuman.
Everything is tuned for momentum—movies, games, social platforms, even professional tools. Addiction loops replace curiosity. Spectacle replaces understanding. Hype replaces value.
And when you question this, you're often told you're being unrealistic. Or idealistic. Or "not pragmatic."
But optimizing systems that actively work against human cognition isn't pragmatism. It's abdication of responsibility.
Tools Shape Thought
One thing neuroscience teaches you early is that structure matters.
The way information is presented shapes what can be thought. The way memory is supported shapes what can be learned. The way feedback is given shapes behavior.
Tools are not neutral. Interfaces are not neutral. Metrics are not neutral.
If a system is designed to fragment attention, it will fragment attention. If it is designed to reward compulsive checking, it will train compulsive checking. If it is designed to erase history and context, it will produce shallow understanding.
This isn't just a moral claim. It's a cognitive one.
Which means designers and engineers aren't just building products—they're shaping people. Environments that continuously steer behavior don't simply respond to cognition; they reshape it.
An Alternative Use of Understanding
What keeps me grounded—and what I keep returning to in my own work—is the idea that cognition doesn't have to be a battleground.
Understanding how people think can be used to:
- reduce cognitive load
- make systems inspectable rather than opaque (see the sketch after this list)
- support exploration instead of funneling behavior
- preserve context and memory over time
- help people ask better questions, not just get faster answers
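Even small interface contracts can embody this. As one illustration (hypothetical names, not any real library's API), an inspectable ranker returns its reasoning alongside its result:

```python
from dataclasses import dataclass

@dataclass
class ScoredItem:
    item: str
    score: float
    components: dict  # every term that contributed, kept visible

def rank(items, weights):
    """Score items but keep the per-term breakdown instead of discarding it."""
    results = []
    for item, features in items.items():
        components = {name: weights[name] * features.get(name, 0.0) for name in weights}
        results.append(ScoredItem(item, sum(components.values()), components))
    return sorted(results, key=lambda r: r.score, reverse=True)

ranked = rank(
    items={"doc_a": {"relevance": 0.9, "recency": 0.2},
           "doc_b": {"relevance": 0.4, "recency": 0.9}},
    weights={"relevance": 0.7, "recency": 0.3},
)
# ranked[0].components shows exactly why the first result came first,
# so the ordering can be questioned rather than merely trusted.
```

The opaque version returns only an order; this one returns the order and the reasons. That is the difference between demanding trust and offering understanding.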
This is why I care about:
- observability
- persistence
- interpretability
- transparency
- tools that reward patience instead of punishing it
It's why I build systems that let people see what's happening, not just consume outputs. Why I distrust black boxes that demand trust without offering understanding. Why I value long-form writing, design documents, and tools that assume the user is capable of thought.
Not Anti-Technology — Anti-Extraction
To be clear: this isn't anti-technology. And it isn't anti-business.
It's anti–empty acceleration. Anti–optimization divorced from meaning. Anti–treating people like short-memoried wallets instead of minds.
I can get excited about ideas. About mission. About quality. About tools that genuinely change how people understand their world or themselves.
What I can't get excited about is using intelligence to hollow people out.
A Personal Line I Won't Cross
Once you work closely enough with systems that model, predict, and steer behavior, it becomes hard to unsee their effects.
For me, that's drawn a quiet line.
If I'm going to build systems informed by cognition, they need to give something back.
More clarity. More agency. More understanding. More room to think.
Otherwise, what's the point?
Understanding the mind is a privilege. Using that understanding to steer behavior without regard for long-term cognitive consequences feels like a betrayal of that privilege.
Closing Thought
The same insights that can be used to manipulate people are exactly the insights needed to protect them.
The difference isn't technical.
It's philosophical.
And it shows up not just in what we can build, but in what we choose to optimize for.