Many people think that if you're uncertain about which moral theory is correct, you ought to maximize the expected choice-worthiness of your actions. This idea presupposes that the strengths of our moral reasons are comparable across theories. For instance, it presupposes that our reasons to create new people, according to total utilitarianism, can be stronger than our reasons to benefit an existing person, according to a person-affecting view. But how can we make sense of such comparisons? In this article, I introduce a constructivist account of intertheoretic comparisons. On this account, such comparisons don't hold independently of facts about morally uncertain agents. They're simply the result of an ideal deliberation, guided by certain epistemic norms, about what you ought to do in light of your uncertainty. If I'm right, this account is metaphysically more parsimonious than some existing proposals, and yet it has plausible and strong implications.
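
To make the target view concrete, here is a standard formulation of maximizing expected choice-worthiness from the moral uncertainty literature; the notation is an illustrative sketch rather than the article's own, with $C(T_i)$ for the agent's credence in theory $T_i$ and $CW_i(A)$ for the choice-worthiness that $T_i$ assigns to act $A$:

\[
EC(A) = \sum_i C(T_i) \cdot CW_i(A)
\]

The maximizing view recommends whichever available act has the highest $EC(A)$. Note that this sum is well defined only if the choice-worthiness scales of different theories can be placed on a common scale, which is precisely the intertheoretic comparability at issue here.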