Collective LessWrong Value:
If everyone who used LessWrong would pay the same amount you do for the website, how much would you pay? (In USD)
Should probably say "per year".
Also, it's a very tricky question, because it seems to assume that we could start charging people without decreasing the number of users. In that case the price should probably be extremely high, higher than any online service has ever cost, because it's almost never possible to charge what a public information good, or its impacts, is actually worth (and it's worth a lot).
How month long vacations would you trade for a new sportscar? If you'd trade months of vacation for one sportscar, write 2, if you'd trade one month of vacation for two cars, write 0.5.
Many typos here. Also I hate it. Which sportscar? Why not just give a dollar value? My mind compulsively goes to the Tesla Roadster, which'll probably have cold gas thrusters and so is likely to be valued a lot more than the average sportscar. The answer will also be conflated with how much people like their work. Some people like their work enough that they'd have to give a negative answer, or they might just answer incorrectly based on varying interpretations of what a vacation is. Can you work during a vacation if you want to? I'd say not really, but I'm guessing that's not what you intended.
(previously posted as a root comment)
Where do you live? It's conceivable that a suit actually does mean these things where you live, but it doesn't in the Bay Area. Some scenes/areas just don't expect people to dress in normative ways; they'll celebrate anything as long as it's done well.
It's important to separate the plan from the public advocacy of the plan. A person might internally be fully aware of the tradeoffs of a plan while being unable to publicly acknowledge them, because coming out and publicly saying "<powerful group> wouldn't do as well under our plan as they would under other plans, but we think it's worth the cost to them for the greater good" will generally lead to righteous failure. Do you want to fail righteously? To lose the political game, but be content knowing that you were right, they were wrong, and you lost for ostensibly virtuous reasons?
I think Reddit tried something like that; you could award people "Reddit gold". I'm not sure how it worked.
It didn't do anything systemically, just made the comment look different.
You need to have a way to evaluate the outcome.
What I plan on doing is evaluating comments partly based on the expected eventual findings of deeper discussion of those comments. You can't resolve a prediction market about whether free will is real, but you can make a prediction market about what kind of consensus or common ground would be reached if you had Keith Frankish and Michael Edward Johnson undertake 8 hours of podcasting, because that's a test that can actually be run.
Or you can make it about resolutions of investigations undertaken by clusters of the scholarly endorsement network.
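To make that concrete, here's a minimal sketch of the resolution machinery I have in mind (the class names, the scoring rule, and the example event are all hypothetical illustrations, not an existing implementation): a forecast about a comment is only scored once its designated adjudication event, a long moderated discussion, a panel review, whatever test can actually be run, has produced a verdict.

```python
# Minimal sketch, all names hypothetical: forecasts about a comment resolve
# against the verdict of a later adjudication event (a long moderated
# discussion, a panel review), rather than against unreachable ground truth.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AdjudicationEvent:
    """A test that can actually be run, whose outcome resolves the market."""
    description: str
    verdict: float | None = None  # set to a value in [0, 1] once it's been run

@dataclass
class CommentClaim:
    comment_id: str
    resolves_on: AdjudicationEvent
    forecasts: dict[str, float] = field(default_factory=dict)  # user -> probability

    def forecast(self, user: str, p: float) -> None:
        self.forecasts[user] = p

    def scores(self) -> dict[str, float] | None:
        """Negative Brier scores per forecaster; None until the event has run."""
        if self.resolves_on.verdict is None:
            return None
        v = self.resolves_on.verdict
        return {user: -(p - v) ** 2 for user, p in self.forecasts.items()}

# Usage: "how much common ground after 8 hours of structured discussion?"
event = AdjudicationEvent("8 hours of podcasting between two philosophers")
claim = CommentClaim("comment-123", resolves_on=event)
claim.forecast("alice", 0.7)
claim.forecast("bob", 0.2)
event.verdict = 0.8            # recorded once the discussion actually happens
print(claim.scores())          # alice's forecast scores better than bob's
```

The point is just that "expected findings of deeper discussion" is an operationalisable resolution criterion, even when the underlying question isn't.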
The details matter, because they determine how people will try to game this.
The best way to game that is to submit your own articles to the system and then allocate all of your gratitude to them, so that you get back the entirety of your subscription fee. But it'd be a small amount of money (well, ideally it wouldn't be; access to good literature is tremendously undervalued, but at first it would be), and you'd have to be especially malignant to do it after spending a substantial amount of time reading and being transformed by other people's work.
But I guess the manifestation of this that's hardest to police is: will a user endorse a work even if they know the money will go entirely to a producer they dislike, especially given that the producer has since fired all of the creatives who made the work?
I'd expect the answer not to be apparent to an outsider reading the literature, but I'd expect people who are good at designing these sorts of systems to be able to give you the answer quite easily if you ask.
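For concreteness, the payout arithmetic behind the self-allocation exploit might look something like the following sketch (the function, names, and numbers are illustrative assumptions, not a spec): each subscriber's fee is split among the authors they credited, in proportion to their stated gratitude, so allocating everything to your own submissions just routes your own fee back to you.

```python
# Minimal sketch of the gratitude-allocation payout, hypothetical names:
# each subscriber's fee is divided among the authors they credited, in
# proportion to their stated gratitude. Allocating everything to your own
# submissions simply returns your own fee to you.
from __future__ import annotations
from collections import defaultdict

def payouts(subscriptions: dict[str, float],
            gratitude: dict[str, dict[str, float]]) -> dict[str, float]:
    """subscriptions: user -> fee paid.
    gratitude: user -> {author: weight}; weights are normalised per user."""
    totals: dict[str, float] = defaultdict(float)
    for user, fee in subscriptions.items():
        weights = gratitude.get(user, {})
        total_w = sum(weights.values())
        if total_w == 0:
            continue  # nothing allocated; the fee stays with the platform
        for author, w in weights.items():
            totals[author] += fee * (w / total_w)
    return dict(totals)

# The self-dealing case: carol credits only herself, so she gets her $10 back.
fees = {"alice": 10.0, "bob": 10.0, "carol": 10.0}
alloc = {
    "alice": {"dana": 0.7, "erin": 0.3},
    "bob": {"dana": 1.0},
    "carol": {"carol": 1.0},
}
print(payouts(fees, alloc))  # {'dana': 17.0, 'erin': 3.0, 'carol': 10.0}
```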
I wonder if this is a case of GDM optimising for the destination rather than the journey. Or, more concretely, optimising for entirely AI-produced code over coding assistants.
Confoundingly, the creator says he has never used AI, has no interest in it, and wrote it before chat assistants were even a notion.
Gilligan previously slammed AI as he discussed the series. “I have not used ChatGPT, because as of yet, no one has held a shotgun to my head and made me do it,” he told Polygon.
“I will never use it. No offense to anyone who does,” added Gilligan. “I really wasn’t thinking about AI [when I wrote Pluribus], because this was about eight or 10 years ago.”
To the extent that the orthogonality thesis is philosophy, I don't think Universal Paperclips really discusses it usefully. It doesn't, like, acknowledge moral realist views, right? It just assumes orthogonality.
Does it even have a "manufacture fake moral realist universal paperclipism religion to make humans more compliant" subplot?
Yeah, I notice that using a transitive quality as the endorsement criterion, and making votes public, produces an incentive for a person to give useful endorsements: failing to issue informative endorsements would mark them as not having this transitive quality, and so not worthy of endorsement themselves.
We can also make it prominent in a person's profile if, for instance, they've strongly endorsed themselves, or if they've only endorsed a few people without also doing any abstention endorsements (which redistribute trust back to the current distribution). Some will have an excuse for doing this; most will be able to do better.
True. Doing that by default, and also doing some of the aforementioned abstention endorsements by default, would address accidental overconfident votes pretty well.
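To make the abstention mechanic and the profile flags concrete, here's a rough sketch (the names, thresholds, and propagation rule are assumptions for illustration, not how any existing system works): explicit endorsements pass trust directly, any abstained or unallocated share falls back to the current trust distribution, and the flags are simple checks over the endorsement graph.

```python
# Minimal sketch, hypothetical names and thresholds: one round of trust
# propagation where each person's outgoing weight is split between explicit
# endorsements and an "abstain" share that falls back to the current trust
# distribution. Assumes every endorsee already appears in the trust dict.
from __future__ import annotations

def propagate(trust: dict[str, float],
              endorsements: dict[str, dict[str, float]],
              abstain: dict[str, float]) -> dict[str, float]:
    """trust: current distribution (sums to 1).
    endorsements: endorser -> {endorsee: weight}, weights sum to <= 1.
    abstain: endorser -> share deferred back to the current distribution."""
    new = {person: 0.0 for person in trust}
    for endorser, share in trust.items():
        explicit = endorsements.get(endorser, {})
        deferred = abstain.get(endorser, 1.0 - sum(explicit.values()))
        for endorsee, w in explicit.items():
            new[endorsee] += share * w
        for person, t in trust.items():
            new[person] += share * deferred * t  # redistribute to the status quo
    return new

def flags(endorsements: dict[str, dict[str, float]]) -> dict[str, list[str]]:
    """Things worth surfacing on a profile: strong self-endorsement, or all
    trust placed on a few people with no abstention at all."""
    out: dict[str, list[str]] = {}
    for person, given in endorsements.items():
        notes = []
        if given.get(person, 0.0) > 0.5:
            notes.append("mostly endorses themselves")
        if 0 < len(given) <= 2 and sum(given.values()) >= 1.0:
            notes.append("all trust on a few people, no abstention")
        if notes:
            out[person] = notes
    return out

trust = {"ann": 0.4, "ben": 0.3, "cem": 0.3}
endorsements = {"ann": {"ben": 0.5}, "ben": {"ben": 0.9, "cem": 0.1}, "cem": {}}
print(propagate(trust, endorsements, abstain={}))
print(flags(endorsements))  # ben gets flagged
```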
(Also, howdy, I should probably help with this. I was R&Ding web-of-trust systems for a while before realising there didn't seem to be healthy enough hosts for them (they can misbehave if placed in the wrong situations), so I switched to working on extensible social software/forums, to build better hosts. It wasn't clear to me that the alignment community needed this kind of thing, but I guess it probably does at this point.)