Go through to Less.Online to learn about who's attending, venue, location, housing, relation to Manifest, and more.


While I may be missing the obvious, I didn't see the location anywhere on the site. ('Lighthaven', yes, but unless I've badly failed a search check, neither the LessOnline nor the Lighthaven website gives an address.)

Google Maps seems to know, but for something like this, confirmation would be nice; I don't quite trust that Google isn't showing a previous location or something else with the same name.

"what are all the little-metadata-flags associated with this prediction?"

Some metadata flags I associate with predictions:

  • what kinds of evidence went into this prediction? ('did some research', 'have seen things like this before', 'mostly trusting/copying someone else's prediction')
    • if I'm taking other people's predictions into account, there's a metadata-flag for 'what would my prediction be if I didn't consider other people's predictions?'
  • is this a domain in which I'm well calibrated?
  • is my prediction likely to change a lot, or have I already seen most of the evidence that I expect to for a while?
  • how important is this?
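The flags above could be recorded as a simple data structure. This is a hypothetical sketch; the field names and types are my own invention, not from any existing prediction-tracking tool:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PredictionMetadata:
    # What kinds of evidence went into this prediction?
    evidence_kinds: list = field(default_factory=list)
    # My probability before updating on other people's predictions, if applicable
    inside_view_probability: Optional[float] = None
    # Is this a domain in which I'm well calibrated?
    well_calibrated_domain: bool = False
    # Is this prediction likely to change a lot as new evidence arrives?
    likely_to_change: bool = False
    # How important is this prediction?
    importance: str = "low"

# Example: a prediction based on research plus deference to others
meta = PredictionMetadata(
    evidence_kinds=["did some research", "copying someone else's prediction"],
    inside_view_probability=0.6,
    well_calibrated_domain=True,
    importance="high",
)
```

Even just writing the flags down like this would make it easier to notice, later, which of them actually predicted the prediction's accuracy.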

Suppose that having the right amount of slack is important, and that the right amount is enough to handle three surprise problems per week. What actions would one take based on that?

Well, this sounds like it’s about figuring out how much capacity to use, so:
Notice whether you have more or less than you should; if you have more, use it more freely and/or de-prioritize getting it; if you have less, try to use less or get more. Do this independently for several types of slack.

(And at a level more concrete than that are the specific things you can do to build up capacity, which is a hard problem in its own right, and an important one.)

But does slack in fact behave like a limited capacity or resource? Financial slack does, sure. Attention, less so, in that you can’t save it up for later. To what extent stress does is an open question, as far as I know.

But in my experience there are many kinds of surprise problem where handling them is less like spending a resource, and more like making a saving throw: there’s a chance of success mostly based on your own abilities in the domain, and mostly not based on how many other problems you’ve had recently.

(Because often "I'm fine" is false, you see. If this has never bothered you then you are perhaps not in the target audience for this essay.)

This does bother me, but I’ve come to the conclusion that “How are you?” usually isn’t really a question - it’s a protocol, and the password you’re supposed to reply with is “Fine.” Almost no-one will take this to mean that you actually are fine, in my experience - they will take it to mean that you are following the normal rules of conversation, which is true. It’s much like how I can tell jokes, use idioms, or read a passage from a novel out loud - the information I’m conveying is true, even if the literal meaning of the words is not.

So here’s a rule that seems better in some ways than the wizard's literal-truth rule - don’t try to cause people to have false beliefs. Of course, this removes much of your ability to deceive by clever phrasing; it’s a stricter standard of honesty than the wizard's rule.

“Be at least as honest as an unusually honest person”

This doesn’t really tell me much. It just raises the question of “What standards of honesty does an unusually honest person follow?”, which doesn’t seem much easier than the question we started out with.

General ambitiousness, in any given field, where {X} is not accomplishing much and {Y} is committing to projects you don't have the skills for:

  • Adam has opportunities to do some important things and is skilled enough that they aren't too hard for him.
  • Bob has a range of opportunities of varying significance, so he needs to think about whether something is at his level before trying it.
  • Charles is newer to this field than Bob, so he has to be extra careful not to be overambitious.
  • David would be in the same situation as Bob, but his boss has really high standards, so if he's careful not to be overambitious, he'll take criticism for not getting enough done.
  • Edgar didn't know he was going to need this skillset, but has been forced into it for one reason or another.

Having a detailed plan, where {X} is disorganization and {Y} is lack of flexibility. Affordance widths depend on what it is that you're planning - how organized and flexible it needs to be.

Self-improvement (or any kind of effort to improve something), where {X} is inefficiency and {Y} is premature optimization. Affordance widths depend on how beneficial your default behavior is, and how much effort it takes to change.

If you have an event you're running, or an online space that you control, or an organization you run, you can set the norms. Rather than opting-by-default into the generic average norms of your peers, you can say "This is a space specifically for X. If you want to participate, you will need to hold yourself to Y particular standard."

Learning a new set of norms/standards and sticking to them in the right contexts is often not easy. Getting a bunch of other people to choose to do so seems likely to be harder. (Although that’s just my immediate sense of how it is; it may be completely wrong…)

Thus, I have a weak expectation that this will only work well when participants and organizers are fairly good at deliberate standard-following and standard-establishing, respectively. Which actually seems like something the rationalist community is pretty good at, but it might still be worth giving some thought to how to be better at it.

In practice, if you're building an organization, you may not have time to do "proper science" - you may need to get a group working ASAP, and you may need to test a few ideas at once to have a chance at succ

You’re missing the end of a word there.

I find the topic of learning how to be a better commenter particularly interesting. If you have any further thoughts on that, I’d like to hear about them.

I think that a common reason that people who might have commented on something end up not doing so is that they aren’t sure if what they had to say is actually worthwhile. Well, just saying ‘I agree!’ probably isn’t, but this does raise the question of how high that threshold should be.

The first paragraph of this comment is near that borderline, in my opinion - it could pretty much be formulaic: "I find [subtopic] particularly interesting. If you have any further thoughts on that, I’d like to hear about them."

On the other hand, it’s true, and conveys information that an upvote wouldn’t, so I do consider it worthwhile.

What are the norms/rules for commenting on older posts? Many internet communities forbid thread necromancy; I see no mention of it here, but thought it worth checking.

Also, if I'm reading a sequence as it comes out, of course I do not have access to future posts when I make a comment. But if I'm reading through several posts from a month or two ago, and I have a question about one of them, is there an expectation that I read through the rest of the sequence to see if it's answered later before I say anything, or should I comment as I go along, as would be the case if I'd been reading it as it came out?

For example - I'm reading Tensions in Truthseeking. Shall I reply to Writing that Provokes Comments? Should I read the rest of the sequence first? It's not so long that that's unfeasible, but trivial inconveniences could probably reduce my likelihood of commenting significantly.