Tangent but curious - how do you read
7 For I would that all men were even as I myself. But every man hath his proper gift of God, one after this manner, and another after that.
8 I say therefore to the unmarried and widows, it is good for them if they abide even as I.
9 But if they cannot contain, let them marry: for it is better to marry than to burn.
as anything but "it's better not to marry at all but better to marry than to have unmarried sex"
this isn’t evidence against OP? if it’s true that RL lowers pass@k performance for sufficiently large k, we’d certainly expect o1 with 10k submissions to be weaker than base/instruct with 10k submissions.
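To make the scaling intuition concrete: the standard unbiased pass@k estimator (from the HumanEval/Codex line of work, not necessarily what OP used) shows how even a tiny per-sample success rate saturates at large k. Here n is the number of samples drawn, c the number that pass:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n attempts of which c are correct,
    is correct. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # too few failures to fill k slots: guaranteed hit
    return 1.0 - comb(n - c, k) / comb(n, k)

# A "weak" model with 0.5% per-sample accuracy still saturates at
# large k, so a diversity-collapsed RL model with higher pass@1 can
# lose at pass@10000 (numbers purely illustrative):
print(pass_at_k(1000, 5, 1))     # low pass@1
print(pass_at_k(1000, 5, 1000))  # pass@k saturates to 1.0
```

This is per-problem; the RL-vs-base claim is about RL collapsing diversity so that on some problems c drops to 0, where no number of submissions helps.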
I think we mostly agree, I was pointing to a strawman of scientific materialism that I used to but no longer hold. Maybe a clearer example is a verbal practice like a mantra, chanting "God is good" - which is incompatible with the constraint to only say things that are intersubjectively verifiable, at least in principle. If someone were to interrupt and ask "wait how do you know that? what's your probability on that claim?" your answer would have to look something like this essay.
nothing prevents you from visualizing it while remaining aware of the fact that you're only imagining it.
This does seem to be the case for the unbendable arm, but I'm less sure it generalizes to more central religious beliefs like belief in a loving God or the resurrection of the dead! I don't see an a priori reason why certain beliefs wouldn't require a lack of conscious awareness that you're imagining them in order to "work", so I want to make sure my worldview is robust to this least convenient possible world. Curious if you have further evidence or arguments for this claim!
Really interesting, thanks! I wonder the extent to which this is true in general (any empirically-found-to-be-useful religious belief can be reformulated as a fact about physics or sociology or psychology and remain as useful) or if there are any that still require holding mystical claims, even if only in an 'as-if' manner.
thanks for running the test!
IIRC the first time this was demonstrated to me it didn't come with any instructions about tensing or holding, just 'Don't let me bend your arm', exactly the language you used with your wife. But people vary widely in somatic skills and how they interpret verbal instructions; I definitely interpreted it as 'tense your arm really hard' and that's probably why the beam / firehose visualization helped.
Makes me think the same is likely true of religious beliefs - they help address a range of common mental mistakes, but for each particular mistake there are people who have learned not to make it via some other process. e.g. "Inshallah" helps neurotic people cut off unhelpful rumination, whereas low-neuroticism people just don't need it.
You're right about the 'seven literal days' thing - seems like nonsense to me, but notably I haven't seen it used much to justify action so I wouldn't call it an important belief in the sense that it pays rent. More like an ornament, a piece of poetry or mythology.
'believing in heaven' is definitely an important one, but this is exactly the argument in the post? 'believing in the beam of light' doesn't make the beam of light exist, but it does (seem to) make my arm stronger. Similarly, believing in heaven doesn't create heaven [1] but it might help your society flourish.
It's an important point though that it's not that believing in A makes A happen, more like believing in some abstracted/idealized/extremized version of A makes A happen.
This does pose a bigger epistemic challenge than simple hyperstition, because the idealized claim never becomes true, and yet (in the least convenient possible world) you have to hold it as true in order to move directionally towards your goal.
well, humanity could plausibly build a pretty close approximation of heaven using uploads in the next 50 years, but that wasn't reasonable to think 2000 years ago
thanks! as in there was no difference between visualizing and not?
Minor points just to get them out of the way:
Meta point: it feels like we're bouncing between incompatible and partly-specified formalisms before we even know what the high level worldview diff is.
To that end, I'm curious what you think the implications of the Lehman & Stanley hypothesis would be - supposing it were shown even for architectures that allow planning to search, which I agree their paper does not do. So yes you can trivially exhibit a "goal-oriented search over good search policies" that does better than their naive novelty search, but what if it turns out a "novelty-oriented search over novelty-oriented search policies" does better still? Would this be a crux for you, or is this not even a coherent hypothetical in your ontology of optimization?
"harness" is doing a lot of work there. If incoherent search processes are actually superior then VNM agents are not the type of pattern that is evolutionary stable, so no "harnessing" is possible in the long term, more like a "dissolving into".
Unless you're using "VNM agent" to mean something like "the definitionally best agent", in which case sure, but a VNM agent is a pretty precise type of algorithm defined by axioms that are equivalent to saying it is perfectly resistant to being Dutch booked.
Resistance to Dutch booking is cool, seems valuable, but not something I'd spend limited compute resources on getting six nines of reliability on. Seems like evolution agrees, so far: the successful organisms we observe in nature, from bacteria to humans, are not VNM agents and in fact are easily Dutch booked. The question is whether this changes as evolution progresses and intelligence increases.
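To make "easily Dutch booked" concrete, here's a toy money pump against an agent with cyclic preferences (A > B > C > A); the items, fee, and acceptance rule are all illustrative, not a model of any real organism:

```python
# Each pair (x, y) means the agent strictly prefers x to y.
# The cycle A > B > C > A is exactly what the VNM transitivity
# axiom rules out.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def money_pump(start: str, wealth: float, fee: float, rounds: int) -> float:
    """Repeatedly offer the agent a trade up to whatever it strictly
    prefers over its current item, charging a small fee each time.
    With cyclic preferences it accepts every offer and bleeds wealth."""
    item = start
    for _ in range(rounds):
        offer = next(x for (x, y) in prefers if y == item)
        item, wealth = offer, wealth - fee  # accepts, pays the fee
    return wealth

print(money_pump("B", 10.0, 0.01, 300))  # ~7.0: 300 trades, no gain
```

The point of the comment stands either way: the interesting question is whether the cost of running this loop on real agents is ever high enough for evolution to pay for full transitivity.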
I do this often, inspired by the novel “The Dice Man”. It helps break inner conflicts in what feels like a fair, fully endorsed way. @Richard_Ngo has a theory that this “random dictatorship” model of decision-making has uniquely good properties as a fallback when negotiation fails / is too expensive, and that this is why active inference involves probabilistic distributions over goal states rather than atomic goal states.