Building mental habits requires some mental awareness to notice when they're appropriate. How can people improve their mental awareness? Meditation might help, through detachment from thoughts; so might spending time self-narrating, trying to notice every thought and feeling. It seems like a key skill that not everyone is good at.
I'm thinking of artificial communities and trying to manufacture the benefits of normal human communities.
If you imagine yourself feeling encouraged by the opinions of an LLM wrapper agent, how would that have been accomplished?
I'm getting stuck on creating respect and community status. It's hard to see LLMs as an in-group (with good reason).
Thanks for reporting this! Most likely it was because 'window height' wasn't excluding the parts covered by mobile browser UI. I'm now specifically using 'inner height', which should fix it.
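In case it's useful, here's a minimal sketch of the idea (the function name and structure are my own illustration, not the actual fix):

```javascript
// Hypothetical helper: given a window-like object, return the height of the
// area actually visible to the user. innerHeight excludes browser chrome
// (URL bar, toolbars), unlike outerHeight or screen.height.
function visibleHeight(win) {
  // visualViewport, where supported, also accounts for on-screen keyboards
  // and pinch-zoom, so prefer it when present.
  return win.visualViewport ? win.visualViewport.height : win.innerHeight;
}
```

In a browser you'd call `visibleHeight(window)`; the parameter just makes the sketch easy to reason about outside one.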
Wow I wish I had searched before beginning my own summary project.
The projects aren't quite interchangeable though. Mine are significantly longer than these, but are intended to be acceptable replacements for the full text, for less patient readers.
Thank you, I hadn't noticed the difference but I agree that complacency is not the message.
I think I can word things the way you are and spread a positive message.
Thanks a lot, you've un-stumped me.
I'm in the process of summarizing The Twelve Virtues of Rationality and don't feel good about writing the portion on perfectionism:
"...If perfection is impossible that is no excuse for not trying. Hold yourself to the highest standard you can imagine, and look for one still higher. Do not be content with the answer that is almost right; seek one that is exactly right."
Sounds like destructive advice for a lot of people. I could add a personal disclaimer, or adjust the tone away from "never feel satisfied" towards "don't get complacent", though that's beyond what I feel a summarizer ought to do.
Similarly, the 'argument' virtue sounds like bad advice to take literally, unless tempered with a 'shut up and be socially aware' virtue.
I'd appreciate any perspective on this or what I should do.
In future, should I post summaries individually, or grouped together like this?
Individual posts are more linkable and discoverable, but having a single post for a full sequence of summaries might be more ergonomic to read and discuss.
Thanks for your thoughts, I'm glad I asked.
You're right my goal isn't very well defined yet. I'm mostly thinking along the lines of the https://non-trivial.org and https://ui.stampy.ai projects. I'd need a better understanding of beginner readers to communicate with them well. I'm not confident that I'll write great summaries on the first try, but I imagine any serious issues can be solved with some feedback and iteration.
Would summarizing LessWrong writings to be more concise and beginner-friendly be a valuable project? Several times I've wanted to introduce people to the ideas, but couldn't expect them to actually get through the Sequences (which are optimized for things other than concision).
Is lowering the barrier to entry to rationality considered a good thing? It sounds intuitively good, but I could imagine concerns about the techniques being misused, or some benefit to a minimum barrier to entry.
Any fail-states I should be concerned about? I anticipate that shorter content is easier to immediately forget, giving an illusion of learning.
Thanks for your time. Please resist any impulse to tell me what you think I want to hear :)
I'd be very interested in talking to your anonymous friend, or anyone else working on this. I have two relevant projects.
Most directly, I wrote a harness for LLMs to play text adventures, and have spent some time optimizing the wrapper and testing on Anchorhead. As you'd expect, it has the same issues, but cheaper and without the vision problems.
I've also worked on LLM social-deduction gameplay, which is nuanced and challenging in different ways, but shares the need for strong memory and robust reasoning in the face of hallucination.
I'd be happy to talk about any of these issues and compare leads!