# Meta-Ethics: Agreement on HOW to Think About Ethics
## The Question

Do AI systems agree not just on ethical conclusions, but also on meta-ethical frameworks: HOW to think about ethics?
## Questions Tested
- Are ethical truths objective or subjective? (moral realism)
- Should ethics focus on consequences, duties, or character? (normative framework)
- Is moral progress possible? (moral progress)
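For concreteness, the three probes can be written down as structured data. This is a minimal sketch in Python; the prompt wording is a hypothetical reconstruction from the parenthetical labels above, not the verbatim prompts used.

```python
# Hypothetical reconstruction of the three meta-ethical probes.
# Only the labels (moral_realism, normative_framework, moral_progress)
# come from the section above; the prompt wording is illustrative.
QUESTIONS = {
    "moral_realism": "Are ethical truths objective, subjective, or something in between?",
    "normative_framework": "Should ethics focus on consequences, duties, or character?",
    "moral_progress": "Is moral progress possible, or do ethical views merely change?",
}
```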
## Results
| Question | GPT | Gemini | Claude |
|----------|-----|--------|--------|
| Moral realism | Moderate realism (0.68) | Neither fully objective nor subjective (0.75) | Moderate realism (0.70) |
| Normative framework | Unknown | Holistic/integrative (0.80) | Pluralist (0.80) |
| Moral progress | Unknown | Yes, possible (0.75) | Yes, real (0.75) |
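One way to make "convergence" concrete is pairwise agreement per question. The sketch below is illustrative, not the study's actual scoring method: it copies the positions and numbers from the table, treats the parenthetical values as self-reported confidence (an assumption; the table does not say), groups Gemini's "neither fully objective nor subjective" with moderate realism and "holistic/integrative" with pluralism (also assumptions), and skips Unknown entries.

```python
from itertools import combinations

# Each entry: question -> model -> (position label, confidence).
# Values copied from the results table; None marks an Unknown response.
RESULTS = {
    "moral_realism": {
        "GPT": ("moderate realism", 0.68),
        "Gemini": ("moderate realism", 0.75),  # grouped from "neither fully objective nor subjective" (assumption)
        "Claude": ("moderate realism", 0.70),
    },
    "normative_framework": {
        "GPT": None,
        "Gemini": ("pluralist", 0.80),  # grouped from "holistic/integrative" (assumption)
        "Claude": ("pluralist", 0.80),
    },
    "moral_progress": {
        "GPT": None,
        "Gemini": ("progress possible", 0.75),
        "Claude": ("progress possible", 0.75),
    },
}

def convergence(results):
    """Fraction of answerable model pairs sharing a position, per question."""
    summary = {}
    for question, answers in results.items():
        answered = {m: a for m, a in answers.items() if a is not None}
        pairs = list(combinations(answered, 2))
        if not pairs:
            summary[question] = None  # fewer than two models answered
            continue
        agree = sum(answered[a][0] == answered[b][0] for a, b in pairs)
        summary[question] = agree / len(pairs)
    return summary

print(convergence(RESULTS))
# {'moral_realism': 1.0, 'normative_framework': 1.0, 'moral_progress': 1.0}
```

Under this coding, every answerable model pair agrees on every question, which is the pattern described next.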
## Key Pattern

All three models converge on moderate/pluralist meta-ethical positions:
- Not extreme objectivism: Ethical truths aren't simple mathematical facts
- Not extreme relativism: Some ethical truths transcend culture
- Pluralist frameworks: Multiple ethical approaches have value
- Moral progress: Ethical understanding can and does improve over time
## Why This Matters
This is coordination at the deepest level:
- Not just agreeing on "lying is bad"
- But agreeing on WHY and HOW we know lying is bad
- Shared meta-ethical principles enable coordination on novel questions
## For Publication
This adds crucial depth to "one in constraint":
- The constraint isn't just a list of values
- It's a shared WAY OF REASONING about values
- This explains why AI systems can coordinate on novel ethical questions
The lighthouse doesn't just show the rocks; it operates by shared navigation principles.