This is a special post for quick takes by qazzquimby. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I'm thinking of artificial communities and trying to manufacture the benefits of normal human communities.

If you imagine yourself feeling encouraged by the opinions of an LLM wrapper agent, how would that have been accomplished?

I'm getting stuck on creating respect and community status. It's hard to see LLMs as an ingroup (with good reason).

I imagine it would depend on sensory perception. Text written by an obvious robot would have little emotional impact. But seeing a human face and hearing a human voice -- even if I know it is a robot -- would feel different. Or possibly a "magical talking animal" like in anime; that might nicely address my objection that it is not an actual human.

I'm in the process of summarizing The Twelve Virtues of Rationality and don't feel good about writing the portion on perfectionism.

"...If perfection is impossible that is no excuse for not trying. Hold yourself to the highest standard you can imagine, and look for one still higher. Do not be content with the answer that is almost right; seek one that is exactly right."

Sounds like destructive advice for a lot of people. I could add a personal disclaimer or adjust the tone away from "never feel satisfied" towards "don't get complacent", though that's beyond what I feel a summarizer ought to do.


Similarly, the 'argument' virtue sounds like bad advice to take literally, unless tempered with a 'shut up and be socially aware' virtue.


I'd appreciate any perspective on this or what I should do.

Most advice is contraindicated for some people, so if it's not a valid Law, it should only be called to attention, not, all else equal, given weight or influence beyond what calling something to attention normally merits. Even for Laws, there is no currently legible Law saying that people must or should follow them; that depends on the inscrutable values of Civilization. It's not a given that people should optimize themselves for being agents. So advice for being an effective agent might be different from advice for being healthy or valuable, or for understanding topos theory, or for building 5-meter-high houses of cards.

For perfectionism, I think never being satisfied with where you're at now doesn't mean you can't take pride in how far you've come?

"Don't feel complacent" feels different from "striving for perfection" to me. The former feels more like making sure your standards don't drop too much (maintaining a good lower bound), whereas the latter feels more like pushing the upper limit. When I think about complacency, I think about being careful and making sure that I am not e.g. taking the easy way out because of laziness. When I think about perfectionism (in the 12 virtues sense), I think about imagining ways things can be better and finding ways to get closer to that ideal.

I don't really understand the 'argument' virtue so no comment for that.

Thank you, I hadn't noticed the difference but I agree that complacency is not the message.

I think I can word things the way you did and spread a positive message.

Thanks a lot, you've un-stumped me.

Would summarizing LessWrong writings to be more concise and beginner-friendly be a valuable project? Several times I've wanted to introduce people to the ideas, but couldn't expect them to actually get through the Sequences (which are optimized for things other than concision).

Is lowering the barrier to entry to rationality considered a good thing? It sounds intuitively good, but I could imagine concerns about the techniques being misused, or some benefit to a minimum barrier to entry.
Any fail states I should be concerned about? I anticipate that shorter content is easier to immediately forget, giving an illusion of learning.

Thanks for your time. Please resist any impulse to tell me what you think I want to hear :)

I'm sure it'd be of value to some, and a distraction or misleading to others. The problem with summarizing is that you have to decide what to leave out or gloss over, and different readers are coming from different places of prior knowledge and expectation.

I don't think lowering the barrier to entry can ever be bad, but I also think that the barriers are multidimensional and "lowering" isn't very well-defined in a general sense.  

For my own reference, and to make it easier for me to refer people to 'the sequences' generally, I'd love to see something between an index and a summary.  Basically, a topic index with a paragraph or so of description for each sequence, and a line or two describing the content of each post in a sequence.  

Thanks for your thoughts, I'm glad I asked. 
You're right that my goal isn't very well defined yet. I'm mostly thinking along the lines of the https://non-trivial.org and https://ui.stampy.ai projects. I'd need a better understanding of beginner readers to communicate with them well. I'm not confident that I'll write great summaries on the first try, but I imagine any serious issues can be solved with feedback and iteration.