>The exact details of this graph should not be taken seriously. I'm just trying to give a visual representation of how a price bifurcation works.
Nitpick: imo that implies you shouldn't have numbers on the axes.
Belatedly edited the phrasing; hopefully "Biases push conclusions in the scarier direction" is clearer.
That graph concerns me. If it's not somehow adjusted for the underlying age distribution, it's necessarily going to taper off at the right end, because (for example) a 40-year-old respondent can't claim things were better when they were 60; and it's almost inevitably going to taper off at the left end, because there's more far past to choose from than recent present.
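To make the age-ceiling half of that point concrete, here's a minimal simulation sketch (Python, assuming a uniform 18–80 respondent-age distribution and uniform random nominations; neither assumption is taken from the actual study, and it illustrates only the right-end effect): even when no age is genuinely better than any other, raw counts still taper at the right, simply because only respondents at least x years old can nominate age x.

```python
import random
from collections import Counter

random.seed(0)

# Toy model, not the actual survey: respondent ages are uniform on 18..80
# (an assumed distribution), and each respondent nominates a "best age"
# uniformly at random from the ages they have actually lived. By
# construction no age is genuinely better, yet the histogram still tapers
# at the right end, because only respondents aged >= x can nominate age x.
nominations = Counter()
for _ in range(100_000):
    current_age = random.randint(18, 80)
    nominations[random.randint(0, current_age)] += 1

# Bucket by decade so the taper is easy to see.
for decade in range(0, 81, 10):
    count = sum(nominations[a] for a in range(decade, decade + 10))
    print(f"ages {decade:2d}-{decade + 9:2d}: {count:6d} {'#' * (count // 500)}")
```

Running this prints roughly flat counts for the youngest decades and a steady decline thereafter, with no "things were actually better" signal anywhere in the model.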
[Edit: This was a test to see if anyone would call me out for writing a post with AI. Good job, Eye You, for spotting it! After 17 people voted, you were the first to confidently call this post out for the slop that it is.]
I definitely want to either strong-upvote or strong-downvote this, but I'm not sure which.
For strong-upvote: I believe Rationality in general and LW in particular really, really need more tests. And unannounced guerrilla tests are more holistic measures of applied rationality than tests announced in advance. And being able to tell whether & to what extent something is slop is a useful skill.
For strong-downvote: Insofar as this is slop, it's spam. And insofar as it's not slop (you did set the topic, structure the essay, and decide the result was good enough to be a functional test), it's teaching us "remember to shun anyone who seems like they're using AI to help get their points across", which blocks out quite a lot of potentially valuable testimony from our already-pretty-insular community. And while I 100% believe you planned this as a test, "haha I was just testing you" is a classic dodge, right up there with "this was all a social experiment", so it's kind of bar-lowering not to have pre-registered your test with an independent third party & then revealed it once the game was up.
In conclusion, I give this post two thumbs up, but also two thumbs down.
Strong-upvoted for picking more well-justified holes in that graph I contributed to. See also my post on this topic for some qualitative reasons to take that study with a grain of salt ( . . . as well as some debunked & redacted quantitative slander on my part, which this post reassures me happened to eventually be directionally correct, eto_bleh.gif).
I can't speak for aphyer, but I tend not to tag my own posts (mostly out of a vague "authors don't get to decide what their works are" sentiment). If it's impeding people from playing, I'll make a point of tagging my D&D.Sci scenarios as D&D.Sci (when a post is part of a very specific genre and literally has that genre's name in its title, there's no point in me being ontologically coy); hopefully that will help.
This is my best LessWrong post. If you haven't read the comments section, you ought to; there's gold in there.
I think this was some of my best fiction-qua-fiction. I don't know how well it communicated anything, or to what extent what it communicated was right.
I hope more people on LW talk more about the potential downsides and edge cases associated with prediction markets, because I think it's an important and underdiscussed topic, and because I don't think I understand them well enough to do that (outside intentionally pathological caricatures in intentionally silly stories).
Fwiw, I really liked seeing someone take an AI-heavy approach to one of these.
I'm fanatically in favor of creating new ways to test (& thereby develop) rationality in general and scientific capabilities in particular. However, any such resources would necessarily be dual-use, providing AI developers with evals (or eval paradigms) that could help accelerate AI development along the axes where it's currently most lacking. This seems like an obviously insane thing to worry about, but I can't figure out why; soliciting other opinions.