This is a special post for quick takes by Raghuvar Nadig. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I'd call myself a lapsed rationalist. I have an idea I've been thinking about that I'd really like feedback on - to have it picked apart, etc. - and I strongly feel that LessWrong is a good venue for it.

As I'm going through the final edits, while also re-engaging with other posts here, I'm discovering that I keep modifying my writing to make it 'fit' LW's guidelines and norms. It hasn't been made easier by the fact that my world-lens has evolved significantly in the five-ish years since I drifted away from this modality.

Specifically, I keep second-guessing myself with stuff like "ugh, this is obvious but I should really spell it out", "this is too explicit, to the point of being condescending", "this is too philosophical", "this is trivial".

I haven't actually ever posted anything or gotten feedback here, so I'm sure it's some combination of overthinking, simply being intimidated, and being hyper-aware of the naivete in my erstwhile worldview.

My goal, really, is to get to the point where I'm reasonably confident it won't get deleted for some reason after I post.

I guess this is serving as a way to dip my toe in the water and express my frustration. Thoughts?

A whole lot depends on the idea, the inferential distance from conventional LW ideas, and what parts of it you want feedback on.

If you’re looking for “is this worth pursuing, and what aspects seem most interesting”, a summary post asking that is probably the best start.

If you're looking for "please confirm that I've solved this controversial/difficult thing", prepare for disappointment, but a distillation of the core insight is still likely the place to start.

Unless it’s truly horrific or links to sketchy outside websites/scams, it won’t be deleted. It may get downvoted, especially if it’s unclear what the claim actually is.

So I posted my paper, and it did get downvoted, with no comments, to the point I can't comment or post for a while. 

That's alright - the post is still up, and I'm not blind to the difficulty of trying to convince rationalists that love is real, biologically super important, and obviously all that actually matters for saving the world (exponentially more so because AI people are optimizing for everything else) - all without coming off as insulting or condescending. That presumption, of course, is just me rephrasing my past issues with rationalism, but it was always going to be hard to find the overlap of people who both value emotions and understand AI.

For now, I'm taking this as a challenge to articulate my idea better, so I can at least get some critique. Maybe I'll take your suggestion and try distilling it in some way.

Well, only -8 with 10 votes, so it's not pure downvoting.  I bounced off it, but didn't react badly enough to downvote.  I think the reason I bounced off is the "inferential distance" I mentioned.  I'm not sure that MORE text is the kind of articulation that would help - a more precise description of the model, including concrete examples of what it predicts/recommends and why this framing is the best way to reach those conclusions, is what I'd need to decide whether to put more effort into fully understanding it.

I'm primed to view unconditional love as a very powerful beneficial force in individual and very-small-group interactions, but I don't think it's possible to extend that to distant strangers or large groups.  I also think it's a cluster of ideas that are fine to aggregate when talking about very rich and repeated individual interactions, but likely needs to be decomposed into smaller concepts to apply generally.

I'm especially confused about how to use this idea when there's no opportunity for reciprocation or even knowledge of my attitudes or beliefs, only a very intermediated behavioral impact.

Thanks - that's fair on all levels. Where I'm coming from is an unyielding first-principles belief in the power and importance of love. It took me some life experience and introspection to acquire, and it doesn't translate well to strictly provable models. It takes a lot of iterations of examining things like "people (including very smart ones) just end up believing the world models that make them feel good about themselves", "people are panicked about AI and their beliefs are just rationalizations of their innate biases", "if my family or any social circle don't really love each other, it always comes through", and "Elon's inclination to cage fight or fly to Mars is just repressed fight or flight" to arrive at it.

I tried to justify it through a model of recurrence and self-similarity in consciousness, but clearly that's not sufficient or well articulated enough. 

So yeah, I hear you on the inferential distance from LW ideas, and on your model of "unconditional love" being more cloistered.  For what it's worth, it really isn't - maybe I should find an analogue in diffusion models, I dunno.  The negative, anti-harmonic effects, at least, are clearly visible and pervasive everywhere - there is no model that adequately captures our pandemic trauma and disunity, yet it ends up shaping everything, because we are animals and not machines, and quite good at hiding our fears and insecurities when posting on social media, being surveyed, or even being honest with ourselves.

Thank you for taking the time to reply and engage - it's an unconditionally kind act!

When it comes to your paper, I think it falls into the "too vague to be wrong" category.

If you want to convince rationalists that "love is real", your first issue is that this is a vague three-word slogan that doesn't directly make any predictions that we can check as true or false.

"I tried to justify it through a model of recurrence and self-similarity in consciousness, but clearly that's not sufficient or well articulated enough."

Justification is not how you convince rationalists. Part of what the Sequences are about is that rationalists should seek true beliefs, not justified beliefs. For that, you need to attempt to falsify the belief.

One exercise could be for you to just taboo the word 'love' and make your case without it, as that will force you to think more about what you actually mean.

Ok, this is me discarding my 'rationalist' hat, and I'm not quite sure of the rules and norms applicable to shortforms, but I simply cannot resist pointing out the sheer poetry of the situation. 

I made a post about unconditional love and it got voted down to the point that I didn't have enough 'karma' to post for a while. I'm an immigrant from India and took Sanskrit for six years - let's just say there is a core epistemic clash in how 'karma' is used on this site[1]. A (very intelligent and kind) person whose id happens to be Christian takes pity and suggests, among other things, a syntactic distancing from the term 'love'.

TMI: I'm married to a practicing Catholic - named Christian.

  1. Not complaining - I'm out of karma jail now and it's a terrific system. I'm specifically saying that the essence of 'karma', etymologically speaking, lies in its intangibility and implicit nature.

Thank you - I agree with you on all counts, and your comment about my thesis needing to be falsifiable is a helpful direction for me to focus on.

I alluded to this above - this constraint to operate within provability was specifically what led me away from rationalist thinking a few years ago. I felt that when it really mattered (Trump, SBF, existential risk, consciousness), there tended to be this edge-case Gödelian incompleteness where the models stopped working and people ended up fighting and fitting theories to justify their biases and incentives, or choosing to focus instead on the optimal temperature for heating toast.

So for the most part, I'm not very surprised. I have been re-acquainting myself over the last couple of weeks to try and speak the language better. However, it's sad to see, for instance, the thread on MIRI drama, and hard not to correlate that with the dissonance from real life, especially given the very real-life context of p(doom).

The use of 'love' and 'unconditional love' from the get-go was very intentional, partly because they seem to bring up strong priors and aversion-reflexes, and I wanted to face that head-on. But that's a great idea - to try and arrive at these conclusions without using the word.

Regardless, I'm sure my paper needs a lot of work and can be improved substantially. If you have more thoughts, or want to start a dialogue, I'd be interested. 

If you want, we can try the LessWrong dialogue feature to narrow the discussion.

Thank you!