Vision
December 23, 2025
5 min read

Beyond Language: A Structural Foundation for Trust in AI

Is trust in AI something to be persuaded into, or something to be earned through structure and transparency?

Margot Lor-Lhommet

Chief Technical Officer, PhD

Abstract

Trust in AI should not rest on fluent language or confident-sounding responses. At Tigo Labs, we build on cognitive science to create systems grounded in structured representations — not just language generation. This is what makes trust earned, not performed.

A user asks an AI assistant whether they should take a new job offer. The response is thoughtful, balanced, well-structured. It sounds like advice.

But should it be trusted? What is the system actually doing? And what, exactly, is the user being asked to trust?

These questions matter. When AI systems are designed to support people in their work, their decisions, or their personal development, being precise about what they can and cannot do is not a nice-to-have.

It is the foundation of trust.

Two kinds of trust

At the core of any AI system lies a simple but crucial question: what is the system asking users to trust?

There are, broadly speaking, two kinds of trust.

The first is trust in words. Users trust a system because its responses sound confident, coherent, reassuring. The output feels right.

The second is trust in structure. Users trust a system because its reasoning is grounded, explicit, and accountable. The process is visible.

Think of the difference between trusting someone on their word, versus trusting them because you can see the evidence, the reasoning, and the limits of what they claim.

This distinction determines whether trust is sought through persuasion or earned through transparency.

Why trust in AI is so fragile today

AI systems are increasingly present in domains that involve judgment, guidance, and personal decision-making. At the same time, their internal functioning often remains opaque.

LLMs are remarkably effective at generating fluent, contextually relevant language. They power conversational interfaces, summaries, explanations, and a wide range of decision-support tools.

Large Language Models generate text by identifying statistical patterns in massive amounts of text. Given an input ("the black"), they predict which sequence of words is most likely to follow ("the black cat"). They are what we call in AI a "black box": we cannot see how they work under the hood.
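To make the idea concrete, here is a deliberately tiny sketch of next-word prediction from word-pair statistics. The toy corpus, the function name, and the bigram counting are illustrative stand-ins only; a real LLM works at a vastly larger scale with a very different architecture.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text an LLM is trained on.
corpus = ("the black cat sat on the mat "
          "the black cat ran "
          "the black dog slept").split()

# Count which word follows each word: a bigram model, a drastically
# simplified stand-in for the statistical patterns an LLM learns.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most frequent next word, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("black"))  # -> 'cat', because "black cat" occurs more often than "black dog"
```

The point of the sketch is not the mechanics but the nature of the output: the prediction reflects frequency in the data, not an understanding of cats, dogs, or the user's situation.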

LLMs are increasingly used as “companions” or “advisors”.

But let’s be clear. Fluency is not understanding. Confusing the two is how trust gets misplaced.

Intent is attributed where there is none. Coherence is interpreted as comprehension. Relevance is confused with judgment.

In sensitive domains (coaching, mental health, career decisions, leadership) this leads people to rely on systems in ways they were never designed to support.

The issue is not the technology. LLMs are powerful tools.

The issue is asking people to trust them for things they cannot do.

Beyond language generation

At Tigo Labs, we use LLMs. But they are not the backbone of our system.

Instead, we build on cognitive science: the study of how humans actually think, reason, and make decisions.

Cognitive science is an interdisciplinary field that studies how mental processes work: how people represent the world, form goals, interpret situations, and act on them. It draws from psychology, neuroscience, linguistics, philosophy, and AI.

In cognitive science, understanding is not measured by fluency.

Understanding is the ability to build and manipulate internal representations: representations of a situation, of goals and trade-offs, of beliefs and constraints, and of how these elements relate over time.

Understanding is inherently situated. It is tied to perspective, history, and evolving context.

When you think about it, this is obvious. Humans don’t just respond to words. They reason about what is happening, why it matters, and what could happen next.

This is what we build at Tigo Labs.
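As an illustration only (this is not our production code, and every class and field name here is invented for this post), here is what it means, in spirit, to work with explicit representations rather than raw text: a situation with goals, constraints, and beliefs that the system can inspect, relate, and reason over.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Purely illustrative sketch of a structured, inspectable representation.
@dataclass
class Goal:
    description: str
    priority: int  # higher means more important to the user

@dataclass
class Constraint:
    description: str
    negotiable: bool

@dataclass
class Situation:
    summary: str
    goals: list[Goal] = field(default_factory=list)
    constraints: list[Constraint] = field(default_factory=list)
    beliefs: list[str] = field(default_factory=list)

    def trade_offs(self):
        """Pair each goal with the non-negotiable constraints it must respect,
        so tensions are explicit rather than implied by fluent prose."""
        hard = [c for c in self.constraints if not c.negotiable]
        return [(g.description, c.description) for g in self.goals for c in hard]

job_offer = Situation(
    summary="Considering a new job offer",
    goals=[Goal("Grow toward a leadership role", priority=2),
           Goal("Keep time for family", priority=3)],
    constraints=[Constraint("Cannot relocate this year", negotiable=False)],
    beliefs=["The new role involves frequent travel"],
)
print(job_offer.trade_offs())
```

Unlike a stream of generated text, a representation like this can be examined, challenged, and corrected; that visibility is what makes the reasoning accountable.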

A different foundation for trust

I spent years in academic research before becoming a CTO. That background shaped how I think about trust.

In research, you learn that credibility comes from precision, not from persuasion.

You state what you know. You state what you don’t know. You show your work.

I believe the same principle applies to AI.

Trust does not emerge from confident language or anthropomorphic framing. It emerges from clarity:

  • What can this system do?
  • What can it not do?
  • What are users actually being asked to trust?

Understanding the limits of AI systems is not a weakness.

It is the foundation of building systems people can actually rely on.
