see also my eaforum at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/ .
Well, you see. The thing about expected value is that in the real world, things with a fifteen percent chance of happening typically don't happen. I would love to have $150,000, but the most likely result of the former choice is $0.
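(A minimal sketch of the distinction being drawn here, assuming the gamble is a 15% shot at $1,000,000, which is what the stated $150,000 expected value would imply; the actual stakes aren't given in this thread:)

```python
import random

# Hypothetical gamble inferred from the stated EV: a 15% chance of $1,000,000, else $0.
# (Only the 15% probability and the $150,000 expected value are given above.)
p_win, prize = 0.15, 1_000_000

expected_value = p_win * prize  # 0.15 * 1,000,000 = $150,000
draws = [prize if random.random() < p_win else 0 for _ in range(10_000)]

print(f"expected value: ${expected_value:,.0f}")
print(f"share of draws paying $0: {draws.count(0) / len(draws):.0%}")  # ~85%
```

The average over many draws is $150,000, but the single most likely outcome, roughly 85% of the time, is walking away with nothing.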
Whether the men in this analogy are socially encouraged in their behavior has very little to do with whether women perceive them as pleasant, and is only relevant if you're treating your own arguments as soldiers; you, not Scott, were the one who chose to turn an attempt at providing information into a social fight.
A potentially-helpful option for the buttons issue in particular: https://www.greaterwrong.com/ . It's not an app, but it does give you an alternative frontend which might have different problems than this one.
(Also, it might be possible to install one site or the other to your phone as a web app. If you're on Safari, you can do this by going to the site and selecting the share button in the top right, then "more", then "add to home screen"; on mobile Chrome, just tap the three-dot menu in the top right and scroll down until you find "add to home screen". The interface will be the same as in the browser, but it does give you a dedicated homescreen icon for opening LessWrong.)
More specifically to me: large sets of cheap washi tape; sparkly stickers; highlighters in attractive pastel colors rather than neons; the Pentel GraphGear 1000 0.3 mm mechanical pencil (notable for its thin lead, so desirable only if you want that); the Ohto Minimo pen and mechanical pencil (notable for each being roughly the length of a credit card); an M5 binder (generic Filofax mini; it serves as both an adorably tiny planner which I always have on my person and a wallet); various notebooks with decorative covers (Paperblanks is amazing if you want lined paper, but if you're looking for dot-grid you'll want Tiefossi or the Quirky Cup Collective for their 160-GSM paper and wider cover selections); et cetera.
Sam Bankman-Fried; he had many EA connections, so is particularly salient.
Still available on archive.org; https://web.archive.org/web/20200107082033im_/https://i.vimeocdn.com/video/364878640_1280x720.jpg .
IMO that's because it's not relatively easy to create a good replica of a person; LLMs fine-tuned to speak like a particular target retain LLM-standard confabulation, distractibility, inability to learn from experience, etc., which will make them similarly ineffective at alignment research. I'd suggest looking into the AI Village for a better sense of how LLMs do at long-horizon tasks. (Also, I want to point out that inference is costly. The AI Village, which only has four agents and only runs them for two hours a day, costs $3,700 per month; hundreds of always-on agents would likely cost hundreds of times that much. This could be a good tradeoff if they were superhumanly effective alignment researchers, but I think current frontier LLMs are capable only of subhuman performance.)
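(A rough version of that scaling math, assuming cost is simply proportional to agent-hours; the 200-agent fleet below is a hypothetical stand-in for "hundreds of agents", and only the four-agent, two-hours-a-day, $3,700/month figures come from the AI Village:)

```python
# Naive linear scaling of the AI Village's reported costs.
# Assumption: monthly cost is proportional to agent-hours; the fleet size is hypothetical.
village_cost_per_month = 3_700          # USD: 4 agents running 2 hours/day
village_agents, village_hours_per_day = 4, 2

fleet_agents, fleet_hours_per_day = 200, 24  # "hundreds of always-on agents"

scale = (fleet_agents / village_agents) * (fleet_hours_per_day / village_hours_per_day)
print(f"scale factor: {scale:.0f}x")                                          # 50 * 12 = 600x
print(f"estimated cost: ${village_cost_per_month * scale:,.0f} per month")    # ~$2.2M
```

On those assumptions the multiplier comes out around 600x, i.e. on the order of a couple million dollars a month, which is the sense in which "hundreds of times that much" is meant.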