Cognitive AI: The Forgotten Science of Modeling Human Intelligence
Before language models, AI tried to understand how we think
Margot Lor-Lhommet
Chief Technical Officer, PhD
The origins of AI: structure, not statistics
AI was created from an observation: understanding is not pattern matching.
In the 1970s, Marvin Minsky asked why statistical associations fail to explain comprehension. His answer: understanding requires organized knowledge structures — frames with slots, defaults, hierarchies. Knowing that “a restaurant” implies menus, waiters, tables, payment. Not because the words co-occur, but because we have a structured representation of what a restaurant is in our mind.
Allen Newell and Herbert Simon pushed further. Their hypothesis: intelligence requires the ability to manipulate explicit symbolic representations. Not just learning correlations — reasoning about structures.
What is cognitive science?
Cognitive science is an interdisciplinary field that draws from psychology, neuroscience, philosophy, linguistics, and artificial intelligence. What unites these disciplines is a shared question: how does the mind work?
The method is cognitive modeling: studying human behavior to infer underlying rules and processes that would explain what we observe. Why do people hesitate before certain decisions? How does a belief connect to an emotion? What makes someone change their mind?
There are many such models. None claims to be the definitive truth about how the mind works.
Instead, they should be understood as frameworks: ways to interpret, reason about, and communicate otherwise deeply abstract things. Tools for thinking about thinking.
What follows are three examples of this approach: models of intention, emotion, and personality. Each emerged from observing humans, formalizing patterns, and testing whether the framework explains what we see.
Modeling intention: beliefs, desires, and commitments
Humans are not stimulus-response machines. The philosopher Michael Bratman drew a crucial distinction: we form beliefs about the world, which may or may not be true. We entertain desires, which can contradict one another (wanting to lose weight and wanting to eat cake). And we form intentions: plans that we actively commit to pursue.
This architecture explains something difficult to model: why someone says they want something but never acts on it. The desire exists. The intention doesn’t.
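This separation can be made concrete. The sketch below is a toy illustration of the belief-desire-intention distinction, not Bratman's formal theory or any real BDI framework; the class and method names are my own.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal BDI-style agent (illustrative sketch only)."""
    beliefs: set = field(default_factory=set)     # what the agent holds true
    desires: set = field(default_factory=set)     # wanted outcomes, possibly conflicting
    intentions: set = field(default_factory=set)  # desires the agent has committed to

    def commit(self, desire):
        # A desire becomes an intention only through an explicit act of commitment.
        if desire in self.desires:
            self.intentions.add(desire)

    def acts_on(self, goal):
        # Action follows intention, not desire alone.
        return goal in self.intentions

a = Agent(desires={"lose weight", "eat cake"})
print(a.acts_on("lose weight"))  # False: desired, but no commitment yet
a.commit("lose weight")
print(a.acts_on("lose weight"))  # True: the intention now exists
```

The point of the sketch is the asymmetry: desires can coexist and conflict freely, while `acts_on` consults only the intentions the agent has committed to.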
Modeling emotion: appraisal, not reaction
Emotions are not automatic responses to events. The same situation triggers different emotions in different people. Why?
The psychologist Richard Lazarus proposed that emotions depend on cognitive appraisal: how we evaluate a situation relative to our goals and resources.
- Primary appraisal: Is this event relevant to my well-being? Good or bad for my goals?
- Secondary appraisal: Can I cope? Do I have the resources to handle this?
This insight was formalized into computational models. Different appraisals generate different emotions. For example, hope when a positive outcome seems possible, fear when a negative one does, anger when someone blocks our goals, shame when we fail our own standards.
The implication is profound: emotions are not noise. They are structured information, signals revealing how someone assesses their situation relative to what they care about. And by observing their reaction, we can infer what their goals are.
Emotions aren’t irrational disruptions. They’re structured evaluations of how events relate to goals. Understanding this changes how we interpret what people feel and why.
Modeling personality: stable patterns across situations
Psychologists observed that people show stable patterns of behavior, thought, and emotion across time and contexts. This is what we call “personality”, described in terms of traits. After decades of research, a consensus emerged: the OCEAN model (also called the Big Five).
- Openness: Curiosity, creativity, receptiveness to new experiences
- Conscientiousness: Organization, discipline, reliability
- Extraversion: Sociability, assertiveness, energy from social interaction
- Agreeableness: Cooperation, empathy, conflict avoidance
- Neuroticism: Tendency toward anxiety, emotional reactivity
These traits influence everything: how people appraise situations, what emotions they feel, how they cope, how they communicate.
Two people facing the same challenge will respond differently depending on their personality profile. And any system that aims to understand people must account for it.
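One way to see how a trait profile could enter a system is to encode it as data and let a trait modulate an appraisal. The scoring scale, the `Profile` class, and the modulation formula below are all hypothetical, chosen only to illustrate the idea that the same event is appraised differently by different profiles.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Big Five trait scores in [0, 1] -- an illustrative encoding, not a validated scale."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def perceived_threat(profile, objective_severity):
    # Hypothetical modulation: higher neuroticism amplifies how threatening
    # the same event is appraised to be.
    return objective_severity * (0.5 + profile.neuroticism)

calm = Profile(0.5, 0.5, 0.5, 0.5, 0.1)
anxious = Profile(0.5, 0.5, 0.5, 0.5, 0.9)
print(perceived_threat(anxious, 1.0) > perceived_threat(calm, 1.0))  # True
```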
The integration problem
In academic research, these dimensions (reasoning, emotion, personality) are often studied separately. But humans don’t function in silos.
Consider: a person high in neuroticism evaluates criticism as more threatening. This generates anxiety. The anxiety reinforces a belief that they are not competent. This belief reduces their intention to seek new responsibilities.
Personality shapes appraisal shapes emotion shapes belief shapes action.
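The chain above can be sketched end to end. Every threshold and update rule in this toy function is an illustrative assumption; the point is only the shape of the pipeline, where each stage's output feeds the next.

```python
def respond_to_criticism(neuroticism, competence_belief):
    """Toy integration of the chain: personality -> appraisal -> emotion
    -> belief -> intention. All constants are illustrative assumptions."""
    threat = 0.5 * (1 + neuroticism)               # personality shapes appraisal
    anxiety = threat                               # appraisal shapes emotion
    competence_belief -= 0.3 * anxiety             # emotion revises the belief
    seeks_responsibility = competence_belief > 0.5 # belief gates the intention
    return competence_belief, seeks_responsibility

# Same criticism, same prior self-belief, different personalities:
_, acts_anxious = respond_to_criticism(neuroticism=0.9, competence_belief=0.7)
_, acts_calm = respond_to_criticism(neuroticism=0.1, competence_belief=0.7)
print(acts_anxious, acts_calm)  # False True
```

Even this crude sketch shows why the components cannot be studied in isolation: change the personality input and the downstream intention flips, with the emotion and belief stages doing the mediating work.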
The challenge is integration. And it operates at multiple levels.
From isolated models to integrated theory
Each cognitive model captures one dimension well. But combining them is not straightforward. How exactly does personality modulate emotional appraisal? At what point does an emotion revise a belief? When does a desire become an intention? Answering these questions requires theoretical work that goes beyond any single model: building bridges between frameworks that were not designed to talk to each other.
From theory to computation
Even when a unified theoretical model exists, implementing it computationally introduces new constraints. A concept that works on paper must become data structures, algorithms, and update rules. Trade-offs emerge: what level of granularity? How to handle uncertainty? What happens when components produce conflicting outputs? Computational implementation is not just translation: it forces precision that theory can leave implicit.
From computation to real interaction
And even when a computational architecture exists, deploying it in real-world interactions adds another layer of complexity. The system must handle noisy input, incomplete information, evolving context. It must operate in real time. It must fail gracefully. The gap between a working research prototype and a robust, usable system is wider than it appears.
Where do we stand today?
Today, cognitive science is largely invisible in the AI conversation — dominated by generative models.
LLMs are remarkable. They generate fluent, contextually appropriate language. They have transformed what AI can do with words. But they do not maintain explicit, persistent representations of beliefs, goals, or intentions. They predict likely word sequences. This is powerful — and it is not the same thing.
For fifty years, cognitive AI asked a different question: what computational structures are needed to represent human reasoning, emotion, and decision-making?
That question has not been answered by language models. It has been obscured by them.
But for anyone building AI that needs to genuinely understand people, not just talk to them, it is more relevant than ever.
Cognitive AI spent fifty years modeling how humans actually think. That research has been eclipsed by language models — but not replaced. The question it asked remains open, and essential.
I’ve spent 10 years working on exactly this: architectures that combine emotional appraisal with goal reasoning, systems where personality modulates emotional thresholds, frameworks where beliefs, desires, and intentions evolve together. This is the scientific tradition we build on at Tigo Labs.