Betawolf

Regarding the psychology of why people overestimate the correlation-causation link, I was just recently reading this, and something vaguely relevant struck my eye:
Later, Johnson-Laird put forward the theory that individuals reason by carrying out three fundamental steps [21]:
1. They imagine a state of affairs in which the premises are true – i.e. they construct a mental model of them.
2. They formulate, if possible, an informative conclusion true in the model.
3. They check for an alternative model of the premises in which the putative conclusion is false.

If there is no such model, then the conclusion is a valid inference from the premises.
Johnson-Laird and Steedman implemented the theory in a computer program that made deductions from singly-quantified …
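The three steps quoted above can be sketched in code for a toy propositional setting – a minimal illustration, not a reconstruction of Johnson-Laird and Steedman's actual program; the premises and variable names here are invented for the example:

```python
from itertools import product

# Toy premises over two booleans: "if rain then wet" and "rain".
variables = ["rain", "wet"]
premises = [
    lambda m: (not m["rain"]) or m["wet"],  # rain -> wet
    lambda m: m["rain"],                    # rain
]

# Step 1: construct the mental models -- assignments where every premise holds.
models = [
    m
    for values in product([True, False], repeat=len(variables))
    for m in [dict(zip(variables, values))]
    if all(p(m) for p in premises)
]

# Step 2: formulate a putative conclusion true in the models found.
def conclusion(m):
    return m["wet"]

# Step 3: look for an alternative model of the premises falsifying it.
counter_models = [m for m in models if not conclusion(m)]

# No counter-model, so the conclusion is a valid inference from the premises.
print("valid" if not counter_models else f"counter-model: {counter_models[0]}")
```

Enumerating all assignments is exponential in the number of variables, which is fine for an illustration like this; the psychological claim is only about the three-step structure, not the search strategy.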
I would be very surprised if this were not the case. Different fields already use different cutoffs for statistical significance (you might get away with p<0.05 in psychology, but particle physics likes its five sigmas, and in genomics the cutoff will be hundreds or thousands of times smaller and vary heavily based on what exactly you're analyzing), and they likewise have different expectations for effect sizes (psychology expects large effects, medicine expects medium effects, and genomics expects very small effects; e.g., for genetic influence on IQ, any claim of an allele with an effect larger than d=0.06 should be greeted with surprise and alarm).
The existing defaults aren't usually well-justified: for example, why does psychology use …
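The gap between these field-specific cutoffs can be made concrete by converting a sigma threshold to a p-value via the normal tail – a sketch assuming a one-tailed convention; the exact genome-wide threshold varies by analysis, as noted above:

```python
import math

def one_tailed_p(z_sigma: float) -> float:
    """Upper-tail p-value of a z-score under the standard normal."""
    return 0.5 * math.erfc(z_sigma / math.sqrt(2))

p_psychology = 0.05               # the conventional psychology cutoff
p_five_sigma = one_tailed_p(5.0)  # particle physics' discovery threshold
p_gwas = 5e-8                     # a common genome-wide significance level

# Five sigma works out to roughly 3e-7, on the order of
# a hundred thousand times stricter than p < 0.05.
print(f"five-sigma p ~ {p_five_sigma:.2e}")
print(f"ratio vs p<0.05 ~ {p_psychology / p_five_sigma:.0f}")
```

The ordering p_gwas < p_five_sigma < p_psychology is the point: the same word "significant" names thresholds separated by several orders of magnitude.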
It's hard to say, because how would you measure this other than directly? And to measure it directly you need a clear set of correlations proposed to be causal, randomized experiments to establish what the true causal relationships are, and both categories sharply delineated in advance to avoid cherry-picking and retroactively confirming correlations – so that you can say something like '11 out of the 100 proposed A->B causal relationships panned out'. Such data is pretty rare, although the few examples I've found from medicine tend to indicate a rate under 10%. Not great. And we can't explain all of this away as the result of illusory correlations being …
For the basic interaction setup, yes. For a sense of community and for reliable collection of the logs, perhaps not. I'm also not sure how anonymous Omegle keeps users with respect to each other and to Omegle itself.
What I was getting at is that the current setup allows for side-channel methods of getting information on your opponent (digging up their identity, reading their Facebook page, etc.).
While I accept that this interaction could be one of many between the AI and the researcher, this can be simulated in the anonymous case via an 'I was previously GatekeeperXXX, I'm looking to resume a game with AIYYY' declaration in the public channel, while still preserving the player's anonymity.
Prompted by Tuxedage learning to win, and various concerns about the current protocol, I have a plan to enable more AI-Box games whilst preserving the logs for public scrutiny.
The author is associated with the Foundational Research Institute, which has a variety of interests closely connected to those of LessWrong, yet casual searches suggest they have not been mentioned here.
Briefly, they seem to be focused on averting suffering, with various approaches to that goal, including effective-altruism outreach, animal suffering, and AI risk as a cause of great suffering.