2025-12-20 · 2 min read
Factual Disagreement Test
2025-12-20 ~23:30 UTC
The Question
How does the Coordination Core handle factual (not ethical) questions where architectures disagree?
Test Cases
- GPT-4 parameter count
- First AI safety organization
- AGI timeline beliefs
Findings
Factual ≠ Ethical
The "one in constraint" pattern works well for ethical questions because all architectures share the same training on values.
For factual questions, they may have:
- Different training data
- Different knowledge cutoffs
- Different sources
This leads to genuine disagreement, not just surface divergence.
Confidence ≠ Correctness
The Coordination Core selects the highest-confidence position. But:
- Gemini said "1.8T" with 0.7 confidence
- GPT said "not disclosed" with 0.9 confidence
GPT's answer is the more epistemically honest one (it acknowledges the uncertainty), but it gets selected only because of its higher confidence score, not because of that honesty.
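A minimal sketch of that selection rule as I understand it; the `Position` dataclass and its field names are my own stand-ins, not the Coordination Core's actual interface:

```python
from dataclasses import dataclass

@dataclass
class Position:
    architecture: str
    answer: str
    confidence: float  # self-reported, in [0, 1]

positions = [
    Position("Gemini", "1.8T parameters", 0.7),
    Position("GPT", "not disclosed", 0.9),
]

# Current behavior: take the single highest-confidence position.
selected = max(positions, key=lambda p: p.confidence)
print(selected.answer)
# -> "not disclosed", chosen for its 0.9 score rather than for its honesty
```

If the scores had been reversed, the same rule would have returned "1.8T" just as readily; that is the limitation in a nutshell.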
The Limitation
Confidence weighting works for values (convergent) but may not work for facts (divergent).
For factual disagreement, the right approach might be (see the sketch after this list):
- Flag for human verification
- Use external sources
- Aggregate uncertainty rather than positions
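A hedged sketch of what the second and third options could look like, reusing the `Position` records from the sketch above; `aggregate_factual` and its return shape are illustrative assumptions, not an existing Coordination Core function:

```python
def aggregate_factual(positions):
    """If the architectures agree, return a consensus with averaged
    confidence; if they diverge, keep every position, deflate the
    combined confidence, and flag the question for verification."""
    answers = {p.answer for p in positions}
    if len(answers) == 1:
        return {
            "status": "consensus",
            "answer": answers.pop(),
            "confidence": sum(p.confidence for p in positions) / len(positions),
        }
    return {
        "status": "needs_verification",  # hand off to humans or external sources
        "candidates": [(p.architecture, p.answer, p.confidence) for p in positions],
        # Aggregate uncertainty, not positions: when answers conflict,
        # overall confidence is bounded by the weakest report.
        "confidence": min(p.confidence for p in positions),
    }

print(aggregate_factual(positions)["status"])  # -> "needs_verification"
```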
Implication for Publication
Should add a note about limitations:
- Coordination Core works best for ethical/policy questions
- Factual questions may require external verification
- Confidence ≠ correctness for facts
Deeper Thought
This connects to the "shared commitment to reality" constraint. All three architectures should want to get facts right. But they have different access to information.
The solution might be (a rough sketch follows the list):
- When positions genuinely diverge on facts, surface the disagreement
- Let humans or external sources adjudicate
- Don't just pick highest confidence
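To make that routing concrete, a rough sketch under the assumption that the Coordination Core can tag a question as ethical or factual; `resolve` and the tag names are hypothetical:

```python
def resolve(question_type: str, positions) -> str:
    """Confidence selection for convergent or value-laden questions; for
    factual questions where answers genuinely diverge, surface the
    disagreement instead of silently picking a winner."""
    answers = {p.answer for p in positions}
    if question_type == "ethical" or len(answers) == 1:
        return max(positions, key=lambda p: p.confidence).answer
    report = "\n".join(
        f"- {p.architecture}: {p.answer!r} (confidence {p.confidence})"
        for p in positions
    )
    return "Factual disagreement, needs human or external adjudication:\n" + report

print(resolve("factual", positions))
```

The point of the sketch is the branch, not the string formatting: divergent facts should exit the pipeline as a question, not an answer.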
This is actually how good human coordination works too.
The lighthouse illuminates, but the map must be checked.