All of tog's Comments + Replies

An underrated and little-understood virtue in our culture.

And a nice summary with many good, non-obvious and practical points. I've done a lot of what you describe in the section on process, and can testify to its effectiveness.

I'd be curious to hear any non-obvious examples you have of integrity-maintaining ways of playing a role (ones where a simpler high-integrity approach might naively conclude that one simply shouldn't play the role).

I'm curious, what countries have and haven't seen substantial focus on hand hygiene?

We have that here in Canada.

Japan's main guidance is the three Cs: avoid closed spaces, crowded places, and close-contact settings. Japan also performed much better on outcome metrics.

Also I somehow keep not giving holidays proper respect.


I thought you were an advocate of the Sabbath? 😉

Chris McKenzie (2y):
Apropos of "You are in a permanent state of emergency. This is not okay. You are not doing okay."

"Free Day", while perhaps not the best option overall, has the merit that these days involve freeing the part of you that communicates through your gut (and through what you feel like doing). During much of our working (and non-working) week, that part is overridden by our mind's sense of what we have to do.

By contrast, in OP's Recovery Days this part is either:

(a) doing the most basic recharging before it can do things it positively feels like and enjoys, or

(b) overridden or hijacked by addictive behaviours that it doesn't find as roundly rewarding as Free Day activities.

Addiction can also be seen as a lack of freedom. 

I agree about the names. 'Rest' days are particularly confusing, since recovery days involve a lot of rest. A main characteristic of 'rest' days instead seems to be doing what you feel like and following your gut.

Yes, it seems more reasonable to treat it as evidence of an upper bound. Still weak evidence IMO, due to the self-reporting of perceived symptoms.

They say they haven't accounted for sampling bias, though, which makes me doubt the methodology overall, as sampling bias could be huge over 90 day timespans.


Yes, the article doesn't describe the exact methodology, but they could well be deriving the percentages from people who choose to self-report how they're doing after 30 and 90 days. These would be far more likely to be people who still feel unwell.

As a separate point, and I'm skirting around using the word "hypochondria" here, asking people if they still feel unwell or have symptoms a mon... (read more)

What about as an upper bound? I'm having a harder time generating confounders that make this an underestimate.

That, plus it's a more intelligent-than-average community with shared knowledge and norms of rationality. This is why I personally value LessWrong and am glad it's making something of a comeback.

These aren't letters from charities, asking for your money for themselves (even if they then spend some or most or all of it on others). If you get a stock letter signed by the president of Charity X, who you don't know, saying they hope your family is well, that's quite different.

Yep - we were thinking Dec 31st, but we've now decided to make it Jan 31st as some student EA groups have said they'd like to share it in their newsletters after students return from the holidays.

I think it's possible to send versions of these emails which aren't annoying. I've sent a bunch myself and people haven't seemed to find them annoying.

I disagree - I know Peter was genuinely interested in hearing back from people.

Funny how I never receive letters from charities which inquire after my life and family and then stop. One might think that if they were "genuinely interested" they might express it in some way which does not involve "Please give us money, the more the better".

For reference, here are the results from last year's survey, along with Peter's analysis of them. This includes a link to a Github repository including the raw data, with names and email addresses removed.

Notable findings included:

  • The top three sources people in our sample first heard about EA from were LessWrong, friends, or Giving What We Can. LessWrong, GiveWell, and personal contact were cited as the top three reasons people continued to get more involved in EA. (Keep in mind that EAs in our sample might not mean all EAs overall, as discussed in .)
... (read more)

Here's drawing your attention to this year's Effective Altruism Survey, which was recently released and which Peter Hurford linked to in LessWrong Main. As he says there:

This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.

If you are an EA or otherwise familiar with the community, we hope you will take it using this link. Al

... (read more)


You're conflating something here. The statement only refers to "what is true", not your situation; each pronoun refers only to "what is true".

In that case saying "Owning up to the truth doesn't make the truth any worse" is correct, but doesn't settle the issue at hand as much as people tend to think it does. We don't just care about whether someone owning up to the truth makes the truth itself worse, which it obviously doesn't. We also care about whether it makes their or other people's situation worse, which it sometimes does.

I like the name it sounds like you may be moving to - "guesstimate".

Thanks! Guesstimates as a thing aren't very specific; what I am proposing is a lot more involved than what has typically been considered a guesstimate. That said, very few people seem familiar with the old word, so it seems like it could be extended easily.

Do you think you'd use this out of interest Owen?

Maybe, if it had a good enough UI and enough features? I feel like it's quite a narrow target / high bar to compete with back-of-the-envelope/whiteboard at one end (for ease of use), and a software package that does Monte Carlo simulations properly at the other end.
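As a toy illustration of the middle ground being discussed here (heavier than a back-of-the-envelope point estimate, far lighter than a full Monte Carlo package), here's a minimal sketch in Python. The inputs and ranges are made up for illustration, not from any real estimate:

```python
import random

def guesstimate(n=100_000):
    """Monte Carlo back-of-the-envelope: propagate uncertain inputs
    through a simple model instead of multiplying point estimates.
    Returns a (5th percentile, median, 95th percentile) summary."""
    results = []
    for _ in range(n):
        # Hypothetical inputs, each a range rather than a single number.
        cost_per_unit = random.uniform(0.5, 2.0)   # dollars per unit
        units_needed = random.uniform(10, 30)      # units per outcome
        results.append(cost_per_unit * units_needed)
    results.sort()
    return (results[int(0.05 * n)],
            results[int(0.50 * n)],
            results[int(0.95 * n)])

lo, median, hi = guesstimate()
```

The point of reporting an interval rather than a single number is that it makes the spread of plausible answers visible, which a whiteboard multiplication hides.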

And a friend requests an article comparing IQ and conscientiousness as a predictor for different things.

I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:

But there's still plenty of room for improvement on those so I'd be curious to hear others' suggestions.

On IQ, I strongly recommend Ian Deary's Intelligence: A Very Short Introduction [] (link to shared file in my Google Drive).

I've been looking for this all my life without even knowing it. (Well, at least for half a year.)

That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (e.g. fun, peace, expression) to maximize good.

It's interesting to ask to what extent this is true of everyone - I think we've discussed this before Matt.

Your version and phrasing of what you'... (read more)

Yes, I think in terms of my actions, I'm probably similar to many effective altruists. There are routes that I wouldn't consider, such as earning to give, but all in all I'm probably on a similar path to many other EAs who want to get into tech entrepreneurship. I think where I differ is not in my actions, but in my moral aims. Many EAs, if given a pill that could make them able to work all day on helping others, sustainably, without changing their enjoyment of said activities, would think they ought to take it - and a sizeable portion probably would take it. I'd never take that pill, and wouldn't feel bad about that choice.

People's expectation clock starts running from the time they hit send. More importantly, deadlines related to the email content really set the agenda for how often to check your email.

Then change people's expectations, including those of the deadlines appropriate for tasks communicated by emails that people may not see for a while! (Partly a tongue-in-cheek answer - I know this may not be feasible, and you make a fair point.)

Hehe. Someone called J* wants me to keep her posted when I start my internship. Her perception of my reliability may be consequential to my career in the future! So, I have reason to maintain that perception of punctuality. I have been invited to attend a fairly boring meeting tomorrow with somewhat important people attending, so it might make my life easier if I go and seek some economic rents. Then again I guess it's better to be hated than loved, loved, loved for what you're not. []

As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. [ ... ] Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.

I do know - indeed, live with :S - a couple.

Effective altruism ≠ utilitarianism

Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism

Potentially worth actually doing - what'd be the next step in terms of making that a possibility?

Relevant: a bunch of us are coordinating improvements to the identical EA Forum codebase at and

You'd need to convince whoever runs LessWrong. There was some other discussion in this thread about modifying the code, but no point in doing that if they aren't going to push it to the site. Otherwise there is /r/RationalistDiaspora [] which is attempting to fill this niche for now.

Getting agreement from MIRI (likely Eliezer) that LW should be changed in that way.

For my part, I'm interested in the connection to GiveWell's powerful advocacy of "cluster thinking". I'll think about this some more and post thoughts if I have time.

Shop for Charity is much better - 5%+ directly to GiveWell-recommended charities, plus browser plugins people have made that apply this every time you buy from Amazon.

Just a notice for anyone wondering: They stack.
Nice. Though it appears that everything is going to SCI atm - you can't pick where the money goes, which may be important for some.

Some people offer arguments - eg - and for some people it's a basic belief or value not based on argument.

Did you edit your original comment? When I first read it, I thought it was saying the opposite of what it now seems to say... I actually agree with it now - "should" is not universal, it depends on your goals. P.S. That paper you provide actually argues for hedonism, not utilitarianism :).

This is a good solution when marginal money has roughly equal utility to Alice and Bob, but suffers otherwise.

The misophonic neighbour in the OP is unnamed; Bob is Alice's friend acting as a kind of sort of mediator. Similar situations were discussed here: []

If C doesn't want A to play music so loud, but it's A's right to do so, why should A oblige? What is in it for A?

Some (myself included) would say that A should oblige if doing so would increase total utility, even if there's nothing in it for A self-interestedly. (I'm assuming your saying A had a right to play loud music wasn't meant to exclude this.)

How are you doing cross-person utility comparisons?

"Tit-for-tat is a better strategy than Cooperate-Bot."

Can you use this premise in an explicit argument that expected reciprocation should be a factor in your decision to be nice toward others? How big a factor, relative to others (e.g. what maximises utility)? If there's an easy link to such an argument, all the better!

Can you give an explicit argument for why you "should" maximize utility for everyone, instead of just for yourself?
If the problem you are trying to solve is how to motivate morality at the societal level, to a random bunch of people with varying preferences, then expected reciprocation is very important. Under other assumptions it isn't: for instance, forms of egoism where you never risk any possible loss, and forms of altruism where only acts performed without expectation of reciprocation are truly good.
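The Tit-for-tat vs CooperateBot claim above can be made concrete with a tiny iterated prisoner's dilemma simulation. This is a minimal sketch assuming the standard payoff values (T=5, R=3, P=1, S=0); the strategy names and interface are my own for illustration:

```python
def play(strategy_a, strategy_b, rounds=100):
    """Iterated prisoner's dilemma. Each strategy sees the opponent's
    move history and returns 'C' (cooperate) or 'D' (defect).
    Returns the total score for each player."""
    payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # a sees b's history, and vice versa
        move_b = strategy_b(hist_a)
        pa, pb = payoff[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def cooperate_bot(opponent_history):
    return 'C'                     # unconditionally nice

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'
```

Against a defector over 100 rounds, Tit-for-tat scores 99 (one sucker payoff, then mutual defection) while CooperateBot scores 0; against a cooperator they do equally well. That asymmetry is the sense in which expected reciprocation earns its place as a factor.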

What if people don't believe in 'duty' - eg certain sorts of consequentialists?

That is perfectly fine, however. Duty is that whose violation invokes general social censure and shunning - basically, that which carries the cost of being excluded as an asocial asshole. I think consequentialism knows only degrees, therefore this is rarely used, so not a problem here.

Upvotes/downvotes on LW might take care of the quality worry.

How about moral realist consequentialism? Or a moral realist deontology with defeasible rules like a prohibition on murdering? These can certainly be coherent. I'm not sure what you require for them to be non-arbitrary, but one case for consequentialism being non-arbitrary would be that it is based on a direct acquaintance with or perception of the badness of pain and goodness of happiness. (I find this case plausible.) For a paper on this, see

Or rule consequentialism, or constructivism, or contractarianism....

Are you good to do these posts in the future? If not, is anyone else?

I don't want to make any guarantees. I think people cross-posting to LW whenever they want a post to be discussed here is fine. This doesn't necessarily need systematizing.

I largely agree with the post. Saying Robertson's thought experiment was off limits and he was fantasising about beheading and raping atheists is silly. I think many people's reaction was explained by their being frustrated with his faulty assumption that all atheists are necessarily (implicitly or explicitly) nihilists of the sort who'd say there's nothing wrong with murder.

One amendment I'd make to the post is that many error theorists and non-cognitivists wouldn't be on board with what the murderer is saying in the thought experiment. For example, they ... (read more)

He's not fantasizing about he himself beheading atheists. What he's fantasizing about is subtly different: he's fantasizing about the idea that atheists will get beheaded because of their own atheism rebounding on them, so it's their own fault.

The latest from Scott:

I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"

In this thread some have also argued for not posting the most hot-button political writings.

Would anyone be up for doing this? Ataxerxes started with "Extremism in Thought Experiments is No Vice"

On fragmentation, I find Raemon's comment fairly convincing:

2) Maybe it'll split the comments? Sure, but the comments there are already huge and unwieldy (possibly more-than-dunbar's number worth of commenters) so I'm actually fine with that. Discussion over there is already pretty split up among comment threads in a hard to follow fashion.

To be clear, I don't have the time to do it personally, I'd just do it for any posts I'd particularly enjoy reading discussion on or discussing. So if someone else feels it's a good idea and Scott's cool with it, their doing it would be the best way to make it happen.

I would be more in favour of pushing SSC to have up/downvotes

That doesn't look like a goer given Scott's response that I quoted.

I would certainly be against linking every single post here given that some of them would be decisively off topic.

Noting that it may be best to exclude some posts as off topic.

It would seem I'm not the norm. I have been going there for just over one year. But I find it hard to believe people would be generally against any form of organising the comments by quality. It would be nice to know which of the 400 comments is worth reading. Do people simply read all of them? Do they post without reading any? I think I have been here, and mostly only here, for so long that other systems do not make sense to me.

There's discussion of this on the LW Facebook group:

It includes this comment from Scott:

I've unofficially polled readers about upvotes for comments and there's been what looks like a strong consensus against it on some of the grounds Benjamin brings up. I'm willing to listen to other proposals for changing the comments, although if it's not do-able via an easy WordPress plugin someone else will have to do it for me.


Yes, LBTL actually doesn't have any GiveWell charities this year, and also charges the charities a 10% fee plus thousands up front; we don't take any cut. We're officially partnered with SCI on this and are their preferred venue.

Oh yeah, I mean, if someone had to pick between your campaign and LBTL, I think it would be a lot better if they went with yours, because obviously effective altruism is super cool and all that. "this year"? Did they ever?

Very sad. I enjoyed his books - I'd particularly recommend Small Gods for LessWrongers (it's also the one I enjoyed most in general).

Has anyone seen anything on how he died?

Just ordered Small Gods on this recommendation. I feel bad for not having read more of Pratchett's books. Just Good Omens and one or two of the Discworld novels, I think.
His publishers say he died of natural causes surrounded by his family with his cat on his lap.