Congruent Systems

We build software that reasons,
not software that guesses.

Neurosymbolic AI systems where every answer is traceable to its source, every decision is explainable, and every claim is provably grounded.

No hallucinations. No black boxes. Just knowledge, structured and searchable, all the way down.

Research Notes

Research Notes · Feb 6, 2026

When Simulations Lie: What Live Testing Taught Us

Our simulation predicted 80% accuracy. Live testing delivered 54%. That's not a rounding error. That's a 26-point gap that calls into question how we validate AI.

Research Notes · Jan 30, 2026

The Hallucination Problem Is a Design Problem

Every few months, a new paper announces a technique to "reduce hallucinations" in large language models. Retrieval-Augmented Generation. Chain-of-thought prompting. Constitutional AI. Self-consistency checking. These are patches on

Research Notes · Jan 23, 2026

What Video Games Taught Us About AI Memory

Open-world video games face a problem that looks nothing like AI memory — until you squint. The Rendering Problem: In a game like Zelda: Breath of the Wild, the world is

Research Notes · Jan 16, 2026

Crystallization: Teaching AI to Remember

When an LLM answers a question, the answer evaporates. Next session, it's gone — no trace, no memory, no learning. The weights didn'

Research Notes · Jan 9, 2026

Why We Built Neurosymbolic Beings

The Problem With Remembering Nothing: Ask an LLM a brilliant question and you'll get a brilliant answer. Ask it again tomorrow and it has no memory of the