You can imagine an argument that goes "Violence against AI labs is justified in spite of the direct harm it does, because it would prevent progress towards AGI." I have only ever heard people say that someone else's views imply this argument, and never heard anyone actually advance it sincerely; nevertheless the hypothetical argument is at least coherent.
Yudkowsky's position is that the argument above is incorrect because he denies the premise that using violence in this way would actually prevent progress towards AGI. See e.g. here and the following dialogue. (I assume he also believes in the normal reasons why clever one-time exceptions to the taboo against violence are unpersuasive.)
I would expand "acts to make the argument low status" to "acts to make the argument low status without addressing the argument". Lots of good rationalist material, including the original Sequences, includes a fair amount of "acts to make arguments low status". This is fine—good, even—because it treats the arguments it targets in good faith and has a message that rhymes with "this argument is embarrassing because it is clearly wrong, as I have shown in section 2 above" rather than "this argument is embarrassing because gross stupid creeps believe it".
Many arguments are actually very bad. It's reasonable and fair to have a lower opinion of people who hold them, and to convey that opinion to others along with the justification. As you say, "you shouldn't engage by not addressing the arguments and instead trying to execute some social maneuver to discredit it". Discrediting arguments by social maneuvers that rely on actual engagement with the argument's contents is compatible with this.
What is "EA burnout"? Personally I haven't noticed any differences between burnout in EAs and burnout in other white-collar office workers. If there are such differences, then I'd like to know about them. If there aren't, then I'm skeptical of any model of the phenomenon which is particular to EA.
My impression from rather cursory research is that serious or long-lasting side effects are extremely rare. I would guess that most of the health risk is concentrated in car accidents on the way to and from the vaccine clinic. Minor side effects like "the injection site is mildly sore for a couple of weeks" are common. The bifurcated-needle method also produced a small permanent scar (older people often have these), although all or most current vaccinations use the subcutaneous injection method common to other vaccines and so do not produce scarring.
I naively guess that from the perspective of society at large the biggest cost of the vaccine program is the operational overhead of distribution and administration, not the side effects; and that on the personal scale the biggest cost is the time it takes to register for and receive the vaccine, rather than the side effects.
As to the benefit side of the equation, the risks of an outbreak are extremely conjectural and rest on several layers of guesswork about technology development and adversarial political decisions—two areas that are notoriously hard to predict—so I don't have much to say on that front beyond "make your best guess".
Black's development of specific heat capacity and latent heat is widely attested, including in the Wikipedia articles on Black and on the history of thermodynamics. I don't recall where I first saw the claim.
Yudkowsky is correct. The advance that made the steam engine useful was Watt's separate condenser. The separate condenser was based on the research of Joseph Black, who did much of the work of quantifying thermodynamics. Black was a close friend of Watt, lent Watt money to finance his R&D, and introduced Watt to his first business partner John Roebuck.
Before Watt, the early, crude steam engines like Savery's and Newcomen's had been preceded by early, crude research on pressure from scientists like Papin. These engines were niche tools with only one narrow economically useful application: pumping water out of mines.
The linked article is completely wrong in claiming Carnot's work was the "First Stirrings of Thermodynamics", and wrong in treating Watt's invention of the separate condenser as a sideshow.
There are investments you can’t make from a structured, nine-to-five, narrowly teleological environment. ... The best search strategies for complex problems like life generally don’t seek out particular homogeneous objectives, but interesting novelty. The search space is too complicated and unknown for linear objective-chasing to work. ... you cannot pursue interesting novelty—things that no one else is doing or which you have never seen before, or the little threads of nagging curiosity or doubt—by chasing along known direct value gradients. But that’s where the treasure is.
Registering that I much prefer the format of the older repositories you link to, where additions are left as comments that can be voted on, over the format here, where everything is in a giant list sorted by topic rather than ranking. For any crowdsourced repository, most suggestions will be mediocre or half-baked, but with voting and sorting it's easy to read only the ones that rise to the top. I'd also be curious to check out the highest-voted suggestions on this topic, but not curious enough to wade through an unranked list of (I assume) mostly mediocre and half-baked ideas to find them.
I'm strongly against letting anyone insert anything into the middle of someone else's post/comment. Nothing should grab the microphone away from the author until they've finished speaking.
When Medium added the feature that let readers highlight an author's text, I found it incredibly disruptive and never read anything on Medium ever again. If LW implemented inline reader commentary in a way that was similarly distracting, that would probably be sufficient to drive me away from here, too.