TsviBT

How do we prepare for final crunch time?
Answer by TsviBT, Mar 31, 2021

I speculate (based on personal glimpses, not on any stable thing I can point to) that there are many small sets of people (say, of size 2-4) who could greatly increase their total output given some preconditions, unknown to me, that unlock a sort of hivemind. Some of the preconditions include various kinds of trust, common knowledge of shared goals, and person-specific interface skills (like speaking each other's languages, common knowledge of tactics for resolving ambiguity, etc.).
[ETA: which, if true, would be good to have already set up before crunch time.]

A few thoughts on the inner ring

In modeling the behavior of the coolness-seekers, you put them in a less cool position.

It might be a good move in some contexts, but I feel resistant to taking on this picture, or to recommending that others take it on. It seems like making the same mistake. Focusing on the object level because you want to be [cool in that you focus on the object level] does have the positive effect of focusing on the object level, but I think it can just as well have all the bad effects of trying to be in the Inner Ring. If there's something good about getting into the Inner Ring, it should be unpacked, IMO. On the face of it, it seems like mistakenly putting faith in there being an Inner Ring that has things under control / knows what's going on / is oriented to what matters. If there were such a group, it would make sense to apprentice yourself to them, not to try to trick your way in.

Open problem: thin logical priors

I agree that the epistemic formulation is probably more broadly useful, e.g. for informed oversight. The decision theory problem is additionally compelling to me because of the apparent paradox of having a changing caring measure. I naively think of the caring measure as fixed, but this is apparently impossible because, well, you have to learn logical facts. (This leads to thoughts like "maybe EU maximization is just wrong; you don't maximize an approximation to your actual caring function".)

Concise Open Problem in Logical Uncertainty

In case anyone shared my confusion:

The while loop where we ensure that eps is small enough so that

bound > bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

is technically necessary to ensure that bad1() doesn't surpass bound, but it is immaterial in the limit. Solving

bound = bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

gives

eps >= (1/3) (1 - e^{-[bound - bad1()] / [next - this]})

which, using the approximation log(1 + x) ≈ x, is about

(1/3) ([bound - bad1()] / [next - this]).

Then Scott's comment gives the rest. I was worried that we seem to be taking the exponential of the error in our approximation, or something. But Scott points out that this is not an issue, because we can make [next - this] as big as we want, if necessary, without increasing bad1() at all, by guessing p1 for a very long time, until [bound - bad1()] / [next - this] is close enough to zero that the error is too small to matter.
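(As a sanity check on the algebra, here's a minimal numerical sketch. The values of bound, bad1(), and next - this are made up, and p1 = 2/3 is my assumption, chosen so that 1 - p1 matches the (1/3) factor above.)

    import math

    p1 = 2/3                          # assumed, so that 1 - p1 = 1/3
    bound, bad1, gap = 5.0, 4.2, 60   # made-up values; gap stands in for (next - this)

    # Solve bound = bad1 + gap * log((1 - p1) / (1 - p1 - eps)) for eps:
    eps = (1 - p1) * (1 - math.exp(-(bound - bad1) / gap))

    # Plugging eps back in recovers bound (up to float error):
    assert abs(bad1 + gap * math.log((1 - p1) / (1 - p1 - eps)) - bound) < 1e-9

    # First-order version, using log(1 + x) ≈ x:
    eps_approx = (1 - p1) * (bound - bad1) / gap
    print(eps, eps_approx)  # ~0.004415 vs ~0.004444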

Concise Open Problem in Logical Uncertainty

Could you spell out the step

every iteration where mean(E[prev:this]) ≥ 2/5 will cause bound - bad1() to grow exponentially (by a factor of 11/10 = 1 + (1/2)(-1 + (2/5)/p1))

a little more? I don't follow. (I think I follow the overall structure of the proof, and if I believed this step I would believe the proof.)

We have that eps is about (2/3)(1 - exp([bad1() - bound] / [next - this])), or at least half that, but I don't see how to get a lower bound on the decrease of bad1() (as a fraction of bound - bad1()).
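(One small arithmetic note: the quoted 11/10 factor does check out if p1 = 1/3 in this comment, which is what the (2/3) = 1 - p1 coefficient above suggests; a minimal check:)

    p1 = 1/3                                  # assumed from the (2/3) = 1 - p1 factor above
    factor = 1 + (1/2) * (-1 + (2/5) / p1)
    print(factor)                             # ≈ 1.1, i.e. 11/10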

LessWrong 2.0

(Upvoted, thanks.)

I think I disagree with the claim that "getting direct work done" isn't a purpose LW can or should serve. The direct work would be "rationality research": figuring out general effectiveness strategies. The Sequences are the prime example in the realm of epistemic effectiveness, but there are lots of open questions in productivity, epistemology, motivation, etc.

A Proposal for Defeating Moloch in the Prison Industrial Complex

This still incentivizes prisons to help along the death of prisoners that they predict are more likely than the prison-wide average to repeat-offend, in the same way that average utilitarianism recommends killing everyone but the happiest person (so to speak).
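(The mechanism is just that removing an above-average element lowers the average. A toy illustration, with invented numbers:)

    # Hypothetical per-prisoner probabilities of re-offending.
    recidivism = [0.2, 0.3, 0.9]
    mean = sum(recidivism) / len(recidivism)    # ~0.467
    remaining = recidivism[:-1]                 # the above-average prisoner "exits"
    print(sum(remaining) / len(remaining))      # 0.25, so the prison's metric improves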

The value of learning mathematical proof

I see. That could be right. I guess I'm thinking about this (this = what to teach/learn, and in what order) from the perspective of assuming I get to dictate the whole curriculum, in which case analysis doesn't look that great to me.

The value of learning mathematical proof

OK, that makes sense. I'm still curious about any specific benefits you think studying analysis has relative to other similarly deep areas of math, or whether you meant hard math in general.
