This essay has convinced me that time-blindness makes its sufferers stressful and unpleasant to be around; accordingly, I now think of it as antisocial.
People who want to fear an imminent apocalypse had plenty of options in previous decades too: runaway global warming, peak oil, hitting global carrying capacity, etc. There was even a while when they could've feared nuclear war! That's plenty immediate and dramatic, IMO.
Well, you see. If I couldn't have a good conversation with someone, I would not be turned on.
I see I've been scooped on Barrayar; I concur with the rec and add that the child emperor also spends some time onscreen.
The Inda series by Sherwood Smith, which I read recently, doesn't feature the mainline protagonists having children during events (well, now that I think of it, some of them do get pregnant during, but mostly the kids aren't born until things are wrapping up), but it does have a lot of parent characters who engage in the various bloody battles, political intrigues, etc. that their political situation requires and are otherwise well fleshed out. (There's also normalized polyamory.) It's incredibly long, so you'll be waiting a while for any specific features, but by that same token there are lots of different parent-child relationships. That said, it's centrally about childless-for-most-of-it Inda, so YMMV.
Because it's against LW policy:
A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can't verify, haven't verified, or don't understand, and you should not use the stereotypical writing style of an AI assistant. [emphasis mine]
But why listen to me when you could listen to the pocket PhD?
Why do some people think that's bad? Roughly:

Biking vs walking:
- Using AI as a tool (drafting, brainstorming, checking math, summarizing sources) and then carefully editing, fact-checking, and putting your own reasoning into the result is more like using a bike or calculator.
- Using raw AI output (dumping unedited Claude/ChatGPT output as a comment and treating it as "your contribution") is what people are objecting to.

So: it's not that "biking" (using AI tools) is inherently bad; it's that outsourcing the whole comment to the AI and presenting it as your own thought breaks norms around effort, honesty, and epistemic quality, and communities push back on that.
I don't have that option myself, as someone without existing sequences. However, a Google search turned up https://www.lesswrong.com/sequencesNew, which seems to do the trick.
Oh, definitely! But that's how users who want it to e.g. help with their physics theories or pretend it's in love with them typically act.
I think, when someone feels negatively toward a post, that choosing to translate that feeling as "I think this conclusion requires a more delicate analysis" reflects more epistemic humility and willingness to cooperate than does translating it as "your analysis sucks". The qualifier, first of all, requires you to keep in mind the fact that your perceptions are subjective and could be incorrect (while also making it clear to other people that you're doing so).

Trying to phrase things in ways that are less than maximally rude is cooperative because it makes interacting with you more pleasant for the other person. Using words that aren't strongly valenced and leave open the possibility that the other person is right also means that your words, if believed, are likely to provoke a smaller negative update about the other person; you do increase your credibility by doing so, but I'm skeptical that this cancels out that effect. (Also: it's impossible not to make decisions about how you phrase things in order to communicate your intended message, and given that, I think condemning the choice to phrase things more nicely is pretty much the opposite of what one should do.)

As for the part where it makes you look good: the other person can look equally good simply by being equally polite. Of course, if they respond with insults this might be bad for their image, but being polite makes insults less tempting for the typical interlocutor.
If I open a post in the modal and then click on the title of the post (which is a link to the post itself), this closes the modal. I expected it to open the non-modal version of the post in the same tab, and would prefer that behavior; a rough sketch of what I mean follows.
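For concreteness, a minimal TypeScript sketch of the behavior I'd expect; the function and names here are hypothetical, not LessWrong's actual code:

```typescript
// Hypothetical sketch (not LessWrong's actual code) of the change I'd
// expect: give the title link the post's canonical (non-modal) URL and
// keep its click from bubbling up to whatever handler closes the modal,
// so the browser performs an ordinary same-tab navigation instead.
function wireTitleLink(titleLink: HTMLAnchorElement, postUrl: string): void {
  titleLink.href = postUrl; // canonical, non-modal post URL
  titleLink.addEventListener("click", (event: MouseEvent) => {
    // Don't let the modal's close-on-click handler see this click;
    // the link's default action (navigating) is left intact.
    event.stopPropagation();
  });
}
```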
I'm not aware of a good reason to believe (1). (2) seems likely: MIRI has a number of different people working on its public communications, which I would expect to produce more conservative decisions than Eliezer alone, and which means that some of its communications will likely be authored by people less inclined toward abrasiveness. (Also, I have the feeling that Eliezer's abrasive comments are often made in his personal capacity rather than qua MIRI representative, which I think makes them weaker evidence about the org.)