The Strategy
AI is moving too fast for journals. By the time a review cycle completes, we've shipped two architecture versions. The journal pipeline publishes archaeology. The arXiv + open source pipeline publishes the living system.
One credibility anchor. Everything else goes open.
- Paper 104 (Crystallization) → NAI Journal — establishes peer-review credibility
- Paper 113 (COG Transfer) → NeSy 2026 Workshop — community presence
- Everything else → arXiv preprints in waves, timed to development milestones
When we claim 97.6% extraction precision or 0% hallucination on grounded queries, a reviewer either believes us or doesn't. When the code is open and someone can run it themselves — that's a different kind of validation.
Publication Waves
Journal + Workshop (Now)
| Paper | Venue | Status |
|---|---|---|
| 104 — Crystallization | NAI Journal + arXiv | Submitting |
| 113 — COG Transfer | NeSy 2026 Workshop | Submitting |
Wave 1 — Foundation (March 2026)
| Paper | Title |
|---|---|
| 108 | The Perception Brain — unified cognitive architecture |
| 114 | Domain-Specific Confidence — safety-critical routing |
These establish the cognitive pipeline. 108 is the architecture everything else references. 114 is the safety story.
Wave 2 — Cognitive Capabilities (Q2 2026)
| Paper | Title |
|---|---|
| 110 | Semantic Field of View — predictive memory loading |
| 112 | Fractal Knowledge Loading — bounded memory at scale |
| 118 | Predictive Processing — surprise-driven learning |
How the brain anticipates, retrieves, and predicts.
Wave 3 — Knowledge Architecture (Q3 2026)
| Paper | Title |
|---|---|
| 117+119+120 | Cognitive Layer Architecture — Y0 through Y6 |
Seven layers of knowledge organization, from raw prose to metacognition. Depends on V12 retrain data.
Wave 4 — The Framework (Q3 2026)
| Paper | Title |
|---|---|
| 122a | ACF Framework Specification |
Released as an open specification on GitHub, not a journal paper. Neutrality through adoption, not peer review.
Wave 5 — Living Benchmark (Ongoing)
ACF evaluation with every major version. V11 baseline → V12 improvements → V13 targets. The longitudinal story IS the paper.
First benchmark result: V11 → V12 (February 2026)
Controlled comparison across four education beings (same corpus, same machine):
| Being | V11 | V12 | Δ Score | Level Change |
|---|---|---|---|---|
| Toddler | 47.9 | 56.6 | +8.7 | ACF-3 → ACF-3 |
| Gradeschool | 58.8 | 63.3 | +4.5 | ACF-3 → ACF-4 |
| Middleschool | 47.6 | 63.3 | +15.7 | ACF-3 → ACF-4 |
| Highschool | 49.2 | 63.3 | +14.1 | ACF-3 → ACF-4 |
V12's parallel signal fusion architecture shows consistent improvement across all education levels. Key gains: compositional generalization and depth.
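The deltas and level changes in the table above can be recomputed directly from the V11/V12 scores. A minimal sketch, assuming an ACF-3/ACF-4 boundary at a score of 60.0 (a hypothetical placeholder inferred from the table, not the actual ACF specification):

```python
# Recompute the V11 -> V12 benchmark deltas from the table above.
# ACF4_THRESHOLD is an assumed placeholder; the real level boundaries
# come from the ACF framework specification.
ACF4_THRESHOLD = 60.0

scores = {
    "Toddler":      (47.9, 56.6),
    "Gradeschool":  (58.8, 63.3),
    "Middleschool": (47.6, 63.3),
    "Highschool":   (49.2, 63.3),
}

def level(score: float) -> str:
    """Map a raw score to an ACF level (simplified two-level sketch)."""
    return "ACF-4" if score >= ACF4_THRESHOLD else "ACF-3"

for being, (v11, v12) in scores.items():
    delta = round(v12 - v11, 1)
    print(f"{being:12s} Δ{delta:+5.1f}  {level(v11)} → {level(v12)}")
```

Under this assumed threshold, Toddler stays at ACF-3 while the other three beings cross into ACF-4, matching the table.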
Measure First, Publish Second
The ACF framework tells us whether the NuSy brain is improving. That feedback loop runs on weeks, not publication cycles. Every version, every new capability shifts ACF scores. We don't wait for peer review to validate our measurement tool — we use it now, iterate now, and let adoption validate it.
Hypotheses
We track research hypotheses systematically. Each hypothesis has a formal definition, measurement criteria, and before/after metrics. Current count: 120+ hypotheses across 890+ expeditions.
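A hypothesis record with the fields described above (formal definition, measurement criteria, before/after metrics) could be sketched as follows. The field names, the `H-042` identifier, and the example values are hypothetical illustrations, not the actual tracker schema:

```python
# Sketch of a hypothesis record: formal definition, measurement criteria,
# and before/after metrics. All names and values here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    hid: str                 # hypothetical ID format, e.g. "H-042"
    definition: str          # formal statement of the claim
    measurement: str         # how the claim is evaluated
    before: dict = field(default_factory=dict)       # metrics prior to the change
    after: dict = field(default_factory=dict)        # metrics after the change
    expeditions: list = field(default_factory=list)  # expedition IDs supplying data

    def supported(self, metric: str) -> bool:
        """A hypothesis is supported on a metric if its after value improves on before."""
        return (metric in self.before
                and self.after.get(metric, float("-inf")) > self.before[metric])

h = Hypothesis(
    hid="H-042",
    definition="Parallel signal fusion improves compositional generalization.",
    measurement="ACF composite score, same corpus, same machine",
    before={"acf_score": 47.6},
    after={"acf_score": 63.3},
)
```

Keeping before/after metrics as explicit fields makes the "measure first" loop checkable: a hypothesis is only marked as supported when the after metric actually moved.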
Research Notes
The blog section of this site publishes distilled insights from our research — accessible to practitioners, not just academics. Each research note links to its source paper and the expedition that generated the data.