Comments

Having kids does mean less time to help AI go well, so maybe it’s not such a good idea if you’re one of the people doing alignment work.

I love how it has proven essentially impossible, even with near-unlimited power, to rig a vote in a non-obvious way. I am not saying it never happens deniably, and you may not like it, but this is what peak rigged election somehow always seems to actually look like.

(Maybe I misunderstood, but isn’t this weak evidence that non-obviously rigging an election is essentially impossible, since you wouldn’t notice the non-obvious examples?)

Are there any organizations or research groups that are specifically working on improving the effectiveness of the alignment research community? E.g.

  • Reviewing the literature on intellectual progress, metascience, and social epistemology and applying the resulting insights to this community
  • Funding the development of experimental “epistemology software”, like Arbital or Mathopedia

I'll end with this thought: I think you can probably use these ideas of moral weights and moral mountains to quantify how altruistic someone is.

Maybe “altruistic” isn’t the right word. Someone who spends every weekend volunteering at the local homeless shelter out of a duty to help the needy in their community, but who doesn’t feel any specific obligation towards the poor elsewhere, is certainly very altruistic. The amount that one does to help those within their circle of consideration seems to be a better fit for most uses of the word altruism.

How about “morally inclusive”?

I would find this deeply frustrating. Glad they fixed it!

Answer by RomanHauksson, Dec 05, 2023

I’m a huge fan of agree/disagree voting. I think it’s an excellent example of a social media feature that nudges users towards truth, and I’d be excited to see more features like it.

Answer by RomanHauksson, Dec 05, 2023

(low confidence, low context, just an intuition)

I feel as though the LessWrong team should experiment with even more new features, treating the project of maintaining a platform for collective truth-seeking like a tech startup. The design space for such a platform is huge (especially as LLMs get better).

From my understanding, the strategy startups use to navigate huge design spaces is “iterate on features quickly and observe objective measures of feedback”, which I suspect LessWrong should lean into more. That said, I imagine creating better truth-seeking infrastructure doesn’t have as clear a feedback signal as “acquire more paying users” or “get another round of VC funding”.

This is really exciting. I’m surprised you’re the first person to spearhead a platform like this. Thank you!

I wonder if you could use a dominant assurance contract to raise money for retroactive public goods funding.
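For anyone unfamiliar with the mechanism, here’s a minimal sketch (my own illustration, not from the post) of how a dominant assurance contract settles, with the threshold, refund bonus, and backer names all hypothetical: if pledges meet the funding threshold, backers pay in and the pot could, say, retroactively reward whoever already produced the public good; if not, every backer is refunded plus a bonus paid by the proposer, which is what makes pledging a (weakly) dominant strategy.

```python
# Illustrative sketch of dominant-assurance-contract settlement.
# All names and numbers are hypothetical; this is not anyone's actual implementation.

from dataclasses import dataclass


@dataclass
class Pledge:
    backer: str
    amount: float


def settle(pledges: list[Pledge], threshold: float, refund_bonus: float):
    """Return (funded, net_payouts) for a dominant assurance contract.

    If total pledges reach the threshold, the project is funded and each
    backer pays in their pledge (negative net payout, in exchange for the
    public good). Otherwise each backer gets their pledge back plus a
    refund bonus from the proposer (positive net payout), so pledging is
    never worse than abstaining.
    """
    total = sum(p.amount for p in pledges)
    funded = total >= threshold
    if funded:
        net_payouts = {p.backer: -p.amount for p in pledges}
    else:
        net_payouts = {p.backer: refund_bonus for p in pledges}
    return funded, net_payouts


# Hypothetical example: raising $10,000 to retroactively fund a finished project.
pledges = [Pledge("alice", 4000), Pledge("bob", 3500), Pledge("carol", 3000)]
print(settle(pledges, threshold=10_000, refund_bonus=50))
```

The retroactive twist would only change what the raised money buys (e.g. an impact certificate for work already done), not the settlement logic above.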
