lahwran

Hi! On Facebook or in person, you probably know me as Lauren C. H.

lahwran's Comments

Honoring Petrov Day on LessWrong, in 2019

At first I thought you were threatening extortion. As it stands, given that people are being challenged to uphold morality, this response is still an offer to throw that away in exchange for money, under the claim that doing so is moral because of some distant effect. I'd encourage you to follow Jai's example and simply delete your launch codes.

Honoring Petrov Day on LessWrong, in 2019

This seems extremely unprincipled of you :/

Causal Reality vs Social Reality

Agreed about the differences not being that great. I've been hearing this model for a while, and I feel like while it does describe a distinction, that distinction isn't clean in the territory.

Causal Reality vs Social Reality

I think a lot of people in the world actually live much more in a mindset where concrete physical thinking is real than it might seem! The problem as I see it is that people vary in how well calibrated their causal thinking is, and in how able they feel to hold their own beliefs about a topic without embarrassment. The "social reality" case is what you get when someone focuses most or all of their attention on interacting with people and doesn't have anything hard in their life, so they simply don't need to be calibrated about physics and can rely on others' skill in such topics.

But I don't think nearly any neuroplastic human is going to be so unfamiliar with causal reality that they can't comprehend the necessity of basic tasks. They might feel comfortable and safe and therefore simply not think about the details of the physics that implements their lives, but that's not a case of social reality being a separate layer of existence. It's more that social behavior is what you get when people don't have the emotional safety, spare time, and thinking space to explore learning about the physics of their lives.

Does that seem accurate to y'all? What do you think?

Should rationality be a movement?

I agree with this in some ways! I don't think the rationality community as it is is what the world needs most. Putting effort into being friendly and caring for each other, in ways that increase people's ability to discuss without social risk, is IMO the core thing needed for humans to become more rational right now.

IMO, the techniques themselves are relatively easy to share once you have the trust to talk about them, and merely require a lot of practice. But convincing large numbers of people that it's safe to think things through in public without weirding out their friends seems likely to require actually making it safe to think things through in public without weirding out their friends. I think that scaling a technical, deliberately crafted cultural solution for creating the emotional safety to discuss what's true, one that gets many people putting regular effort into communicating friendliness toward strangers when disagreeing, would do a lot more for humanity's rationality than scaling discussion of specific techniques.

The problem as I see it right now is that this only works if it's scaled seriously, massively. I feel like I now see why CFAR got excited about circling: it seems like you probably need emotional safety before you can discuss anything usefully. But I think circling was an interesting thing to learn from, not a general solution. I think we need to design an internet that creates emotional safety for most of its users.

Thoughts on this balance, other folks?

"The Bitter Lesson", an article about compute vs human knowledge in AI

My own thoughts on the topic of AI, as related to this:

I currently expect that the first strongly general AI will be trained very haphazardly, using lots of human knowledge in a way akin to parenting, and will still have very significant "idiot-savant" behaviors. For the first version, I expect we'll need an approach similar to DeepMind's StarCraft AI: reaching past what current tools can do individually or automatically, and hacking them together into a complex training system built for the specific purpose. However, I think the capabilities of individual components are getting pretty close at this point. If a transformer network were the only module in a system, but the training setup produced training data that required the transformer to become a general agent, I currently think it would be capable of the sort of abstracted, variable-based consequentialist planning that MIRI folks describe as dangerous.
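To make the "transformer as the only module" framing concrete, here's a minimal sketch, assuming a PyTorch-style setup. Every name here (TransformerPolicy, training_step, the dimensions) is hypothetical illustration, not anything from the AlphaStar work; the point is just that the learned component can be a single transformer, with all the generality-forcing work pushed into the harness that generates the batches:

```python
import torch
import torch.nn as nn


class TransformerPolicy(nn.Module):
    """One transformer, nothing else: observation sequences in, action logits out."""

    def __init__(self, obs_dim=128, n_actions=16, d_model=256):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_actions)

    def forward(self, obs_seq):                # obs_seq: (batch, time, obs_dim)
        h = self.encoder(self.embed(obs_seq))  # (batch, time, d_model)
        return self.head(h[:, -1])             # act from the most recent state


def training_step(policy, batch, optimizer):
    """Everything outside the transformer is plumbing: whether the data in
    `batch` demands general agency or only idiot-savant tricks is decided
    entirely by the (hypothetical) harness that produced it."""
    obs_seq, target_actions = batch
    loss = nn.functional.cross_entropy(policy(obs_seq), target_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

On this view, the open question isn't the module's architecture at all; it's whether anyone can build a training setup whose data can only be fit by something doing general, consequentialist planning.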

Discourse Norms: Moderators Must Not Bully

I strongly agree with this point. It's the core reason I have mostly stopped using LessWrong. That said, I just made a post, and being able to set my own moderation standards is kind of cool; that might make LessWrong worth using as a blog, actually.

Discourse Norms: Moderators Must Not Bully

Eliezer's problem is what you have if your friend group is getting diluted. This problem is what you have if you're trying to dilute your friend group as much as you can.

Karma-Change Notifications

Hey, cool. This is the sort of reward I need to enjoy a site enough to use it.
