I've heard that Talking Heads song dozens of times but had never watched the video. I was missing out!
Neat, hadn't seen that - thanks!
NeurIPS best paper awards will likely contain good leads.
I expect that understanding something more explicitly - such as your own and another person's boundaries - without some underlying concept of acceptance of that boundary can increase exploitability. I recently wrote a shortform post on the topic of legibility that describes some patterns I've noticed here.
I don't think Circling makes one more exploitable on average, but I expect it increases variance, making some people significantly more exploitable than they were before: previously invisible boundaries become visible, and can thus be attacked (by others, but more often by a different part of the same person).
And yeah, it does seem similar to the valley of bad rationality: a valley of bad Circling, where you're focusing on a naive form of connection without discernment of boundaries.
IMO the term "amplification" fits if the scheme results in (1) a clear efficiency gain and (2) scalability. Concretely: (delivering equivalent results at a lower cost OR delivering better results at an equivalent cost, where cost = money and time), AND (roughly O(n) scaling costs).
For example, if there was a group of people who could emulate [Researcher's] fact checking of 100 claims but do it at 10x speed, that's an efficiency gain: we're doing the same work in less time. If we bump the number to 1000 claims and the fact checkers could still do it at 10x speed without additional overhead complexity, then it's also scalable. Contrast that with the standard method of hiring additional junior researchers to do the fact checking - I expect it to not be as scalable ("huh, we've got all these employees now, I guess we need an HR department and perf reviews and...").
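To make the cost intuition concrete, here's a toy sketch (my own illustration; the constants and the overhead exponent are made-up assumptions, not measurements):

```python
# Toy comparison of how total cost might scale: an "amplified" scheme
# with roughly linear (~O(n)) costs vs. hiring more junior researchers,
# where coordination overhead (HR, perf reviews, ...) makes costs grow
# faster than linearly. The 1.3 exponent is an arbitrary assumption.

def amplified_cost(n_claims: int, cost_per_claim: float = 1.0) -> float:
    """Amplified scheme: cost grows roughly linearly in the number of claims."""
    return cost_per_claim * n_claims

def hiring_cost(n_claims: int, cost_per_claim: float = 1.0,
                overhead_exp: float = 1.3) -> float:
    """Hiring scheme: superlinear cost growth from organizational overhead."""
    return cost_per_claim * n_claims ** overhead_exp

for n in (100, 1_000, 10_000):
    print(f"{n:>6} claims: amplified={amplified_cost(n):>8.0f}, "
          f"hiring={hiring_cost(n):>8.0f}")
```

The exact exponent doesn't matter; the point is that a scheme whose per-claim cost stays flat as n grows is what I'd call scalable.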
It does seem like a fuzzy distinction to me, and I am mildly concerned about overloading a term that already has an association w/ IDA.
Is there not a distillation phase in forecasting? One model of the forecasting process is that person A builds up their model, distills a complicated question into a high-information/highly compressed datum, which can then be used by others. In my mind it's:
Model -> Distill -> "amplify" (not sure if that's actually the right word)
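A minimal sketch of that pipeline (all function names here are hypothetical, and the "model" is just a stand-in list of probability estimates):

```python
# Toy Model -> Distill -> Amplify pipeline. The substance is only that
# a rich model gets compressed into one cheap-to-share number, which
# others can then reuse without rebuilding the model themselves.

from statistics import mean

def build_model(evidence: list[float]) -> list[float]:
    """Person A's internal model: here, probability estimates
    from several lines of evidence (a stand-in)."""
    return list(evidence)

def distill(model: list[float]) -> float:
    """Compress the model into a single high-information datum:
    one probability for the question."""
    return mean(model)

def amplify(distilled: list[float]) -> float:
    """Others build on the distilled data, e.g. by aggregating
    several forecasters' compressed estimates."""
    return mean(distilled)

p_a = distill(build_model([0.60, 0.70, 0.65]))
p_b = distill(build_model([0.50, 0.55]))
print(amplify([p_a, p_b]))  # cheap to share and combine
```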
I prefer the term "scalable" over "proliferation" for "can this group do it cost-effectively", as it maps onto the similar concept in CS.
Thanks for including that link - seems right, and reminded me of Scott's old post Epistemic Learned Helplessness:
"The only difference between their presentation and mine is that I'm saying that for 99% of people, 99% of the time, taking ideas seriously is the wrong strategy."
I kinda think this is true, and it's not clear to me from the outset whether you should "go down the path" of getting access to level 3 magic given the negatives.
Probably good heuristics are: proceeding with caution when encountering new/out-there ideas, remembering you always have the right to say no, finding trustworthy guides, etc.
I'd also encourage you to link your predictions to Foretold/Metaculus/other prediction aggregator questions, though only if you write your prediction in the thread as well to prevent link rot.