Open-world video games face a problem that looks nothing like AI memory — until you squint.

The Rendering Problem

In a game like Zelda: Breath of the Wild, the world is enormous. You can't render every tree, rock, and building at full detail simultaneously — the GPU would melt. So game engines use Level of Detail (LOD) and Field of View (FOV) rendering: distant objects are drawn with fewer polygons and lower-resolution textures, and anything outside the camera's view frustum isn't drawn at all.

The key insight is predictive loading. The engine doesn't wait until you're standing next to a mountain to load it at full detail. It watches where you're heading and pre-loads the relevant terrain. By the time you arrive, it's already there.

The Memory Problem

Conversational AI has a strikingly similar problem. A being's knowledge graph might contain tens of thousands of triples. You can't load all of them into the reasoning context simultaneously — the context window would overflow, and most of it would be irrelevant.

So we built a Semantic Field of View (sFOV) for semantic memory: knowledge close to the conversational focus is loaded at full detail, while distant knowledge stays summarized or unloaded.
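To make the LOD analogy concrete, here is a minimal sketch of what a tiered semantic field might look like in Python. The names (`Triple`, `SemanticField`, the per-triple token cost) are our illustrative assumptions, not a published API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Triple:
    """One (subject, predicate, object) fact from the knowledge graph."""
    subject: str
    predicate: str
    obj: str

@dataclass
class SemanticField:
    # Foreground: full triples, loaded verbatim into the reasoning context.
    foreground: set = field(default_factory=set)
    # Midground: staged and summarized, cheap to promote to foreground.
    midground: set = field(default_factory=set)
    # Background: resident in the graph store, absent from the context.
    background: set = field(default_factory=set)

    def context_cost(self, tokens_per_triple: int = 20) -> int:
        # Only the foreground spends context-window tokens, just as only
        # nearby geometry spends GPU time at full detail.
        return len(self.foreground) * tokens_per_triple
```

The point of the tiers is the same as LOD's: the expensive resource (context tokens here, GPU cycles there) is spent only on what is close to the focal point.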

Predictive Loading for Knowledge

Just as a game engine watches the player's heading to pre-load terrain, our sFOV watches the conversation trajectory to pre-load relevant knowledge.

If the conversation is moving from "chest pain" to "differential diagnosis," the sFOV starts loading triples about pulmonary embolism, myocardial infarction, and pneumothorax before the being is asked about them. By the time the question arrives, the relevant knowledge is already in the foreground.

The mechanism:

  1. Topic tracking: Monitor the conversation for concept mentions
  2. Graph neighborhood: For each mentioned concept, identify connected entities (1-2 hops)
  3. Trajectory prediction: Based on conversation flow, anticipate which neighborhoods will be needed next
  4. Staged loading: Move predicted triples from background to midground, midground to foreground
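The four steps above can be sketched end to end. The toy graph, the naive "latest mention" trajectory predictor, and all entity names are illustrative assumptions; the full graph plays the role of the background tier:

```python
from collections import deque

# Toy directed knowledge graph: entity -> connected entities.
# In the real system these edges come from (s, p, o) triples.
GRAPH = {
    "chest pain": {"differential diagnosis", "troponin"},
    "differential diagnosis": {"pulmonary embolism",
                               "myocardial infarction", "pneumothorax"},
    "pulmonary embolism": {"d-dimer"},
    "myocardial infarction": {"troponin"},
    "pneumothorax": {"chest x-ray"},
}

def neighborhood(concept: str, hops: int = 2) -> set:
    """Step 2: collect entities within `hops` of a mentioned concept (BFS)."""
    seen, frontier = {concept}, deque([(concept, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nbr in GRAPH.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

class SemanticFOV:
    """Steps 1, 3, 4: track mentions, predict the next neighborhood, stage it."""

    def __init__(self):
        self.mentions = []       # step 1: concepts observed so far, in order
        self.midground = set()   # staged, not yet in the reasoning context
        self.foreground = set()  # loaded into the reasoning context

    def observe(self, concept: str) -> None:
        self.mentions.append(concept)
        # Step 3 (naive predictor): assume the conversation keeps moving
        # outward from the latest mention, so its 2-hop neighborhood is
        # what will be needed next.
        predicted = neighborhood(concept, hops=2)
        # Step 4: staged promotion — entities that were already staged and
        # are still predicted move to the foreground; newly predicted
        # entities wait in the midground.
        self.foreground |= self.midground & predicted
        self.midground = predicted - self.foreground

fov = SemanticFOV()
fov.observe("chest pain")
fov.observe("differential diagnosis")
print("pulmonary embolism" in fov.foreground)  # → True
```

By the time the conversation reaches "differential diagnosis," the embolism, infarction, and pneumothorax entities have already been promoted to the foreground, which is exactly the pre-loading behavior described above. A real predictor would of course need more signal than the latest mention.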

What Worked and What Didn't

This is where we're honest about our results.

What worked:

What needs more work:

We scored this paper at 6/10 — strong concept, incomplete validation. We'll rapidly iterate and publish when we learn more, and we'll be upfront about what remains unproven.

The Broader Lesson

Cross-domain analogies are dangerous and powerful. The video game LOD/FOV analogy gave us a framework for thinking about semantic memory access that we wouldn't have found by staying inside the AI literature. But analogies can also mislead — player movement is more predictable than conversation flow, and the failure modes are different. We need more inputs; a larger state plane might work... or will it?

The lesson: steal shamelessly from other domains, but validate rigorously in your own.


Previous: Crystallization: Teaching AI to Remember | Next: The Hallucination Problem Is a Design Problem