All of Bae's Theorem's Comments + Replies

dath ilan

"God of the EA community"? The majority of my city's EA community doesn't even know who Yudkowsky is, and of the few who do most have ambivalent opinions of him.

Neo-Mohism

The primary goal of this document is to articulate my personal moral philosophy. I use the Mohism branding because it has strong parallels to said moral philosophy, but otherwise I am reinventing it from scratch.

I do think that a lot of the core tenets are widely (if subconsciously) held. As for the ones that aren't widely held, I personally think they should be. But, like any good Neo-Mohist, I'm willing to be convinced otherwise. ;)

The phrasing of this as a philosophy for others to adopt is mostly an aesthetic decision, a reframing to help me look at it more critically.

Neo-Mohism

Thanks for the feedback! 

Basing it on Mohism is more of an aesthetic decision than anything; if classical Mohism has an issue then Neo-Mohism should set out to solve it. :)

I think there's a difference between "no fixed standards" and "the ability to update standards in light of new evidence". Neo-Mohism is definitely a "strong opinions, weakly held" kind of thing. The standards it sets forth are only to be overturned by failing a test, and until then they should be treated as the best answer so far.

A Retrospective Look at the Guild of Servants: Alpha Phase

If you would like to attend a Guild mixer to meet the Council and some of the students, come join us Saturday! We expect to do this on a monthly or quarterly basis.

https://bit.ly/3lOco5O

Meetup Organizers, Our Virtual Garden is at Your Disposal

Never mind, I found your Calendly. Got us scheduled for Friday. :)

Ruby (1y): Woop! Sorry, should have put my Calendly link in the post.
Petrov Day celebration

Please note that we have added a Google Form for registration, to make sure we have enough food.

No, the best way to convince me is to show me data. Evidence I can actually update on, instead of self-reported results that may be poisoned by motivated reasoning or any number of other biases. Data I can show to people who know what they are talking about, and that they will take seriously.

Alexei (2y): Your question was: "What evidence can I show to a non-Rationalist that our particular movement..." I'm saying that for non-rationalists, that's one of the better ways to do it. They don't need the kind of data you seem to require. But if you talk about your life in a friendly, open way, that will get you far. Additionally, the "example of your own life" is data. And some people know how to process that pretty remarkably.

I see Bayesian Rationality as a methodology as much as it is a calculation. It's being aware of our own prior beliefs, the confidence intervals of those beliefs, keeping those priors as close to the base rates as possible, being cognizant of how our biases can influence our perception of all this, trying to mitigate the effects of those biases, and updating based on the strength of evidence.
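
To make the mechanics concrete, here is a minimal sketch of that kind of update in Python. The scenario and all of the numbers are invented purely for illustration; the point is that the posterior moves in proportion to the strength of the evidence.

```python
# A minimal Bayesian update: start from a base-rate prior, weigh the
# evidence by how expected it is under each hypothesis, and compute
# the posterior. All numbers here are hypothetical.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

prior = 0.01            # base rate of the hypothesis
p_e_given_h = 0.80      # probability of the evidence if H is true
p_e_given_not_h = 0.10  # probability of the evidence if H is false

print(f"prior {prior:.1%} -> posterior {posterior(prior, p_e_given_h, p_e_given_not_h):.1%}")
# An 8:1 likelihood ratio moves a 1% prior to roughly 7.5%, not to
# certainty; the size of the update tracks the strength of the evidence.
```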

I'm trying to get better at math so I can do better calculations. It's a major flaw in my technique, one I acknowledge and am trying to correct.

But as you noted earlier, non… (read more)

A strong correlation between adopting the virtues and established methods of rationality and an increased quality of life; but yeah, it's more handwavey. I don't even know what calculations could be made. That's sort of why I'm here.
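
For what it's worth, here is one hedged sketch of a calculation that could be made: correlate a self-reported measure of rationality practice with a quality-of-life measure. All data below are invented; this only shows the mechanics, not a result.

```python
# Pearson correlation between hypothetical "practice" and
# "life satisfaction" scores. Invented data, for illustration only.

def pearson_r(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

practice = [1.0, 2.0, 2.0, 3.0, 4.0, 5.0]  # hours/week of deliberate practice
life_sat = [5.0, 6.0, 5.5, 7.0, 7.5, 8.0]  # 0-10 life-satisfaction rating
print(f"r = {pearson_r(practice, life_sat):.2f}")
# Even a high r would not establish causation; it would only turn the
# "handwavey" claim into something measurable and arguable.
```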

eigen (2y): If you're not doing calculations, then you are not doing "Bayesian Rationality". Therefore, you very likely cannot explain to someone how "Bayesian Rationality" has worked out for you.

Yes, but they could all be explained by the fact that I just sat down and bothered to think about the problem, which wouldn't exactly be an amazing endorsement of rationality as a whole.

I also don't look at rationality as merely a set of tools; it's an entire worldview that emphasizes curiosity and a desire to know the truth. If it does improve lives, it might very well simply be making our thinking more robust and streamlined. If so, I wouldn't know how to falsify or quantify that.

eigen (2y): I don't understand how you are getting so many questions about your post instead of sensible replies to it. Did someone really tell you to change the question? Why would you ever do that, if what you really want to know is how people have benefited from this way of thinking? Why not say to that guy: "No, no... how about you tell me how you have benefited from Bayesian thinking, since that's what I'm interested in knowing?"

I ask the question this way to hopefully avoid stepping on toes. I'm fully open to the idea that the answer is "we have none". Also, I am primarily addressing the people who are making a claim. I am not necessarily making a claim myself.

John_Maxwell (2y): Fair enough. CFAR has some data about participants in their workshops: https://rationality.org/studies/2015-longitudinal-study BTW, I think the inventor of Cohen's d said 0.2 is a "small" effect size. I think some LW surveys have collected data on the amount people have read LW and checked to see if that was predictive of e.g. being well-calibrated on things (IIRC it wasn't). You could search for "survey [year]" on LW to find that data, and you could analyze it yourself if you want. Of course, it's hard to infer causality.

I think LW is one of the best online communities. But if reading a great online community is like reading a great book, even the best books are unlikely to produce consistent measurable changes in the life outcomes of most readers, I would guess. Supposedly education research has shown that transfer learning isn't really a thing, which could imply, for example, that reading about Bayesianism won't make you better calibrated. Specifically practicing the skill of calibration could make you better calibrated, but we don't spend a lot of time doing that. I think Bryan Caplan discusses transfer learning in his book The Case Against Education, which also talks about the uselessness of education in general. LW could be better for your human capital than a university degree and still be pretty useless.

The usefulness of reading LW has long been a debate topic on LW. Here are some related posts:
https://www.lesswrong.com/posts/LgavAYtzFQZKg95WC/extreme-rationality-it-s-not-that-great
https://www.lesswrong.com/posts/7dRGYDqA2z6Zt7Q4h/goals-for-which-less-wrong-does-and-doesn-t-help
https://www.lesswrong.com/posts/qGEqpy7J78bZh3awf/what-i-ve-learned-from-less-wrong
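
Since Cohen's d comes up above, here is a small Python sketch of how that effect size is computed, using the pooled standard deviation. The before/after scores are invented, purely to show the arithmetic; by Cohen's conventional labels, 0.2 is small, 0.5 medium, and 0.8 large.

```python
# Cohen's d for two independent samples. Invented data for illustration.
import statistics

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance, n-1 denominator
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

after = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0]   # hypothetical post-workshop scores
before = [5.9, 5.6, 6.1, 5.8, 6.0, 5.7]  # hypothetical pre-workshop scores
print(f"d = {cohens_d(after, before):.2f}")  # 0.2 small, 0.5 medium, 0.8 large
```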

I'm convinced mostly due to its effects on my own life, as stated in the opening paragraph. But I'm unsure of how to test and demonstrate that claim. My question is for my benefit as well as others.

Alexei (2y): Right, but how do you know? Are there specific stories of how you were going to make a decision X, but then you used a rationality tool Y and it saved the day?
Dojo meetup - hike!

I just realized that I work tomorrow, so we are not doing the hike. Instead, we are doing our usual 6pm meetup at the Johnson County Central Resource Library. We will do our hike next week (June 25th, 9am, Shawnee Lake Dog Park).

Episode 3 of Tsuyoku Naritai! (the 'becoming stronger podcast): Nike Timers

If I find that it does have an actual impact on the podcast's effectiveness, then I will absolutely give changing it serious consideration. Your criticism has updated me marginally in that direction, but not quite enough for me to act on it, particularly since you're the only person to mention it. Thank you for your feedback!

Against Street Epistemology

I'm sure there are Street Epistemologists who are guilty of this, but that's literally the opposite of what I encourage or practice.

Against Street Epistemology

At its core, SE is merely coaching people in asking the Fundamental Question of Rationality. As an SE-er, it's my way of Raising the Sanity Waterline. It's excellent at circumventing the Backfire Effect.

There are as many motivations for SE as there are practitioners.

Episode 1 of "Tsuyoku Naritai!" (the 'becoming stronger' podcast/YT series).

The Bayesian Conspiracy needs to be updated with Jess as a new host. :)

Mati_Roy (3y): Please go ahead! :) And let me know if you have trouble editing the wiki.
Episode 1 of "Tsuyoku Naritai!" (the 'becoming stronger' podcast/YT series).

I will be sure to include a transcript in all future episode descriptions/show notes.

Episode 1 of "Tsuyoku Naritai!" (the 'becoming stronger' podcast/YT series).

Done! Link to the transcript has been posted in the description, and also here: https://docs.google.com/document/d/1MjTM4revF1upDvO00y0v8jF8G6HUbcABtFDxVYiLyPc/edit?usp=sharing

Is there a different venue/format for the notes you had in mind?

Said Achmiz (3y): This is perfect, thanks.
Rationality Dojo

Due to the nature of assembling a bug list, there might not be a whole lot to discuss and do. If we finish significantly early, we might simply move on to Day 2: 'Yoda Timers'.

Rationality Dojo

The Kansas City Rationalists are putting together a dojo, for the purpose of improving our cognitive abilities, happiness, and efficiency. For content, we will be using the 'Hammertime' sequence. Attendees are expected to read the introduction ('Hammers and Nails') and Day 1 ('Bug Hunt'), as well as put together their bug list. The meeting will consist of meta-discussion about the content, and discussion about our experience putting together our bug lists. Bonus points if you are willing to share the bugs you found!

We will be meeting weekly, at the same time and location.

Michael Roman (3y): Did it work? Or does it need adjustment?
If Rationality can be likened to a 'Martial Art', what would be the Forms?

My IRL rationality group is preparing to test that sequence. It looks promising, although we do have some quibbles with it. If we successfully finish testing, we'll publish the details.

If Rationality can be likened to a 'Martial Art', what would be the Forms?

This may seem stupid, but I didn't even think about doing odds calibration on such a small scale. That's a great idea. Making a pinned Google Keep note now.
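
For anyone wanting to try the same thing, here is a hedged sketch of what small-scale calibration practice can look like in Python: log a probability for each everyday prediction, record the outcome, and score yourself. The predictions below are invented examples.

```python
# Score everyday probability estimates with the Brier score:
# the mean squared gap between stated probability and outcome.
# Lower is better; always answering 50% scores 0.25.

def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    return sum((p - float(happened)) ** 2 for p, happened in forecasts) / len(forecasts)

log = [
    (0.90, True),   # "I'll finish this task today": 90%, and I did
    (0.70, False),  # "the meeting will run long": 70%, it didn't
    (0.60, True),   # "it will rain this afternoon": 60%, it did
]
print(f"Brier score: {brier_score(log):.3f}")  # 0.220 for these examples
```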

Ikaxas (3y): How about something like: "Tsuyoku Naritai - The Becoming Stronger Podcast"?
Applied Rationality podcast - feedback?

I was thinking "Tsuyoku Naritai". 😊

ChristianKl (3y): That name doesn't give anybody who reads it an idea of what the podcast is about. It also doesn't contain any keywords that are good for SEO.
What is Wrong?

If I understand you correctly, I wholeheartedly agree that "Less Wrong" does not just refer to a dichotomy of "what is true" and "what is wrong" (although that is part of it, à la 'Map and Territory').

There's a reason rationality is often called "systematized winning"; while the goals you are pursuing are entirely subjective, rationality can help you decide what is most optimal in your pursuit of a given goal.

Inyuki (3y): Well, my main point was that error can be of arbitrary type: one may be in modeling what is ("Map"), another in modeling what we want to be ("Territory"), and one can think of an infinite number of types of "errors": logical, ethical, pragmatic, moral, ecological, cultural, situational; the list goes on and on. And if we think of each type of error as "suboptimality", then "less err" or "less wrong" would be etymologically equivalent to "optimize". So we're a community for optimization. And that's actually equivalent to intelligence. Whether we are seeking truth or pragmatics, the methods of rationality remain largely the same: the general mathematical methods of optimization.
Applied Rationality podcast - feedback?

Thank you for the encouragement! I hope I didn't sound like I was fishing for that. I just wanted to emphasize my desire to seek any other more optimal paths.

Yoav Ravid (3y): Didn't sound like that at all. All the support is because of how this community is :)
Applied Rationality podcast - feedback?

Good idea! Get comfortable with my own journey before using someone else's. Makes sense.

LW/SSC Mixer

Yes! The Facebook event page links to our Facebook group.

What does it mean to "believe" a thing to be true?

Just listened to that about 10 minutes ago. Good sequence. When you say "assumption", I hear "a thing that is accepted to be true".

What changes in cognition when something is accepted as true?

rthomas2 (3y): When something is accepted as true, then observations to the contrary become surprising. So, if I'm surprised to find it raining out, then I'd assumed it was going to be sunny.
Gurkenglas (3y): I mean that it's used to anticipate experiences, like when you believe that you have a dragon in your garage, expect its breath to keep the house toasty, and therefore turn off the heater in advance.
New edition of "Rationality: From AI to Zombies"

Please have a non-leather hardcover option for us vegans. :)

Rob Bensinger (3y): Yes :) I wasn't thinking real leather, though maybe synthetic leather also has signaling problems!
New edition of "Rationality: From AI to Zombies"

Please no real leather for us vegans. :)

[This comment is no longer endorsed by its author]
Rob Bensinger (3y): Not at present. Some people requested that we release higher-quality versions, so that's been on our radar, and I'd be interested to hear what kinds of variants people would and wouldn't be interested in buying. (Full-color, leather-bound, hardcover, etc.)
Can I use Less Wrong branding in youtube videos?

The least subtle example I have in mind would be to include it in the channel name; a clunky example would be 'Less Wrong Street Epistemology'. I had no intention of using the official logo, merely of making direct references.

[Insert clever intro here]

Hi, Motasaurus. I certainly hope you stick around! Don't let our disagreements drive you off.

However, on that note, I'm afraid I have to disagree. While I think you can have "better than average" epistemology and still be a Christian, perhaps even be in the top 25%, I don't believe you can aspire to be a perfect Bayesian and still be a Christian.

I would respectfully point out that the Apostle John is hardly a neutral spectator in determining whether one can be both Christian and Rational. Additionally, he certainly didn't have access to an… (read more)
[Insert clever intro here]

Also, how would one go about acquiring these CFAR techniques? Is attending a workshop mandatory? I don't quite have the discretionary funds for that. :P

ChristianKl (3y): https://www.lesswrong.com/sequences/qRxTKm7DAftSuTGvj is a write-up by one person that contains descriptions of a lot of CFAR techniques. If you start a rationality dojo, you might also email CFAR and ask whether they have guidance (or whether you can get a PDF of their handbook).
[Insert clever intro here]

One of the hallmarks of a typical dojo is that the Sensei will demonstrate techniques, and show how they are supposed to look once you have mastered them.

Is it possible that this is an optional feature, if only for a rationality dojo?

ChristianKl (3y): It's useful to have someone who has mastered a technique, but it's not required. When you are in a good group, you can also work to learn a technique together. It's also possible that different people present techniques on different days.
[Insert clever intro here]

That's very kind of you, thank you. It means a lot.

[Insert clever intro here]

Yes, that is the correct sequence of events. I was raised in a Christian household, but the first belief I truly held for myself was atheism. I am no longer a Christian or theist in any way.