The Problem With Remembering Nothing
Ask an LLM a brilliant question and you'll get a brilliant answer. Ask it again tomorrow and it has no memory of the first conversation. The weights haven't changed. The knowledge graph hasn't grown. The system is exactly as ignorant as it was before.
This is the fundamental problem we set out to solve: AI that can't learn is AI that can't be trusted with anything that matters.
What "Neurosymbolic" Actually Means
The term gets thrown around loosely, so let's be precise. A neurosymbolic system combines:
- Neural components (LLMs, embeddings) — for understanding natural language, generating text, and reasoning about ambiguous inputs
- Symbolic components (knowledge graphs, ontologies, rules) — for storing validated facts, maintaining provenance, and enabling explainable inference
The neural side handles what's fuzzy. The symbolic side handles what must be precise. Together, they do what neither can do alone.
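The division of labor can be sketched in a few lines. Everything here is illustrative: the function names, the hard-coded extraction, and the toy ontology are assumptions, not the actual NuSy API. The point is the shape: a neural component proposes a candidate triple from fuzzy text, and a symbolic component accepts it only if every term exists in the ontology.

```python
# Hypothetical sketch of the neural/symbolic split.
# Names and structures are illustrative, not the actual NuSy API.

def neural_extract(text: str) -> tuple[str, str, str]:
    """Stand-in for the neural side: turn fuzzy prose into a candidate
    triple. A real system would call an LLM here; this stub is hard-coded."""
    return ("pulmonary_embolism", "increasesRiskOf", "respiratory_failure")

# Toy ontology standing in for the symbolic side's validated vocabulary.
KNOWN_ENTITIES = {"pulmonary_embolism", "respiratory_failure"}
KNOWN_PREDICATES = {"increasesRiskOf", "treatedWith"}

def symbolic_validate(triple: tuple[str, str, str]) -> bool:
    """Symbolic side: accept only triples whose terms the ontology knows."""
    s, p, o = triple
    return s in KNOWN_ENTITIES and p in KNOWN_PREDICATES and o in KNOWN_ENTITIES

candidate = neural_extract("PE is a major cause of respiratory failure.")
if symbolic_validate(candidate):
    print("validated:", candidate)
```

The neural stub is allowed to be wrong; the symbolic gate is what keeps a wrong guess out of the graph.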
Why Beings, Not Models
We use the word "being" deliberately. A NuSy being isn't a model you train once and deploy. It's an entity with a lifecycle:
- It awakens — initializing its persona, loading its curriculum
- It learns — studying domain materials document by document, building knowledge layer by layer
- It reasons — answering questions by consulting its knowledge graph, citing sources
- It sleeps — consolidating fast memories into long-term knowledge (complementary learning systems)
- It dreams — discovering patterns across experiences during sleep cycles
- It grows — progressing from toddler to expert through structured curricula
This lifecycle isn't a metaphor. It's implemented in code, tracked in state machines, and tested with live being tests.
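The lifecycle above can be sketched as a small state machine. The state names follow the list; the transition rules are assumptions for illustration, not NuSy's actual state-machine code.

```python
# Minimal sketch of the being lifecycle as a state machine.
# Transition rules are illustrative assumptions.

ALLOWED = {
    "awaken": {"learn"},
    "learn": {"reason", "sleep"},
    "reason": {"learn", "sleep"},
    "sleep": {"dream"},
    "dream": {"grow", "awaken"},
    "grow": {"learn"},
}

class BeingLifecycle:
    def __init__(self) -> None:
        self.state = "awaken"
        self.history = ["awaken"]  # every transition is tracked

    def transition(self, new_state: str) -> None:
        """Move to new_state, rejecting transitions the machine forbids."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

b = BeingLifecycle()
for s in ["learn", "reason", "sleep", "dream", "grow"]:
    b.transition(s)
print(b.history)  # full, auditable trace of the being's lifecycle so far
```

Keeping the history explicit is what makes the lifecycle testable: a live being test can assert on the exact sequence of states, not just the final one.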
The Knowledge That Stays
When a being learns something, it doesn't just generate text about it. It creates triples — subject-predicate-object statements stored in a semantic graph with full provenance:
<pulmonary_embolism> increasesRiskOf <respiratory_failure>
    source: "Harrison's Principles Ch. 273"
    learned: "2026-02-10T14:32:00Z"
    confidence: 0.95
    validated_by: <snomed:59282003>
That triple persists. It's queryable. It's versioned in Git. And when the being answers a question about pulmonary embolism, it can point to exactly this triple, its source, and when it learned it.
No hallucination. No black box. Just knowledge, structured and searchable, all the way down.
Seven Layers Deep
A being's knowledge is organized into seven Y-layers, from raw source material at Y0 to metacognitive self-awareness at Y6. The being doesn't just know facts — it knows what it knows, what it doesn't know, and how confident it should be about each piece of knowledge.
Y6 is where it gets interesting. A being with metacognitive awareness can say: "I studied the clinical guidelines for pulmonary embolism, but my coverage of atypical presentations is only 40%. Let me flag that uncertainty rather than guess."
That's the behavior we want from AI in healthcare. Not confident hallucination, but honest uncertainty.
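The Y6 behavior described above reduces to a simple check before answering. The coverage numbers, topic names, and threshold below are invented for illustration; only the pattern, consult your own coverage estimate and flag uncertainty below a floor, is from the text.

```python
# Sketch of metacognitive answering: a being checks its own topic coverage
# before committing to an answer. All values here are illustrative.

COVERAGE = {
    "pe_clinical_guidelines": 0.90,
    "pe_atypical_presentations": 0.40,
}

CONFIDENCE_FLOOR = 0.60  # below this, flag uncertainty instead of guessing

def answer_or_flag(topic: str) -> str:
    """Answer only when coverage clears the floor; otherwise be honest."""
    coverage = COVERAGE.get(topic, 0.0)  # unknown topics count as 0% coverage
    if coverage < CONFIDENCE_FLOOR:
        return (f"Uncertain: my coverage of '{topic}' is only "
                f"{coverage:.0%}; flagging rather than guessing.")
    return f"Answering '{topic}' with {coverage:.0%} coverage."

print(answer_or_flag("pe_atypical_presentations"))
```

Treating unknown topics as zero coverage is the conservative choice: a topic the being has never studied should trigger a flag, not a confident answer.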
What We Learned Building This
We've completed over 800 expeditions and tested 120+ hypotheses. The results are mixed — some confirmed, some refuted, some revealing gaps we didn't expect. The research notes on this site document those findings honestly: what worked, what broke, and what we're still figuring out.
That honesty is the point. If your AI can't show its work, you shouldn't trust its answers. The same applies to the people building the AI.
This is the first in a series of research notes from Conguent Systems. Next: "Crystallization: Teaching AI to Remember."