I agree, and I'll keep that in mind. The topic is extremely broad, though, so I don't know how much time I'll have to focus on it. I'm actually thinking of having several meetups on this, depending on people's interest.
I always forget that... thanks.
I don't have time to write a full report, but Less Wrong Montreal had a meetup on the 22nd of January that went well. Here's the handout that we used; the exercise didn't work out too well because we picked an issue that we all mostly agreed on and understood pretty well. A topic where we disagreed more would have been more interesting (afterwards I thought "free will" might have been a good one).
Thanks for pointing it out, fixed.
I run the Montreal Less Wrong meetup, which for the last few months has been structuring the content of its meetups with varying degrees of success.
This was the first meetup posted to meetup.com in an effort to find some new members. There were about 12 of us, most of whom were new and had never heard of Less Wrong before; although this was a bit more than I was expecting, the meetup was still a really good introduction to Less Wrong/rationality and was appreciated by all those present.
My strategy for the meetup was to show a concrete exercise that was useful and that gave a good idea of what Less Wrong/rationality is about. This is a handout I composed for the meetup to explain the exercise we were going to be doing. It's a five-second-level breakdown of a few mental skills for changing your mind when you're in an argument; any feedback on the steps I listed is appreciated, as no one reviewed them before I used them. People found the handout useful, and it gave a good idea of what we would be trying to accomplish.
The meetup began with us going around and introducing ourselves, and explaining how we came to find the meetup. Some general remarks about the demographics:
After a quick overview of what rationality is, people wanted to go through the handout. We read through each of the skills, several of which sparked interesting discussions. Although the conversation often went off on tangents, the tangents were very productive, as they served to explain what rationality is. They often took the form of people describing situations where they had noticed others reacting in the ways described in the handout, and discussing how someone should think in such cases.
The exercise described on the second page of the handout was not successful. I had been trying to find beliefs that are not too controversial, but that might still cause people to disagree. Feedback from the group indicated that I could have used more controversial beliefs (religion, spirituality, politics, etc.), as the feelings evoked would have been more intense and easier to notice; however, that might also have offended more people, so I'm not sure whether it would have been better. If I were to run this meetup again, I would rethink this exercise.
The meetup concluded with me giving a brief history of Less Wrong and mentioning HPMOR and the Sequences. Afterwards, I provided everyone with links to relevant Less Wrong material and HPMOR in the discussion section of the meetup group.
Let me know if you have any questions or comments; any feedback is appreciated!
I like this idea; seeing as I have a meetup report to post, I just started a monthly Meetup Report Thread. Hopefully, people will do what you describe.
That's true, those points ignore the pragmatics of a social situation in which you use the phrase "I don't know" or "There's no evidence for that". But put yourself in the shoes of the boss instead of the employee (in the example given in "I don't know"): even if you have "no information", you still have to make a decision, so remembering that you probably DO know something that can at least give you an indication of what to do is useful.
The points are also useful when the discussion is with a rationalist.
The post What Bayesianism Taught Me is similar to this one; each post has some elements that the other doesn't. Combining the two, you end up with quite a nice list.
I think "seems like a cool idea" covers that; it doesn't say anything about expected results (people could specify).
I don't see how the barriers become irrelevant just because they aren't clearly defined. There might not be a specific point where a mind becomes sentient, but that doesn't mean all living things are equally sentient (Fallacy of Gray).
I think Armstrong 4, rather than making his consideration for all living things uniform, would make himself smarter and try to find an alternate method of determining how much each living creature should be valued in his utility function.