All of Evenflair's Comments + Replies

The Liar and the Scold

This is really good! I didn't notice the fiction tag at first and thought it was real until the VR stuff. I especially liked how sharp the ending was, like the narrator is so casual about it.

purge (+1, 5d): Same here. I guess we need to keep training our discriminators.
Is "gears-level" just a synonym for "mechanistic"?

Isn't mechanistic specifically about physical properties? Could you say that an explanation of a social phenomenon is "mechanistic", even though it makes zero references to physical reality?

Mauricio (+5, 1mo): Yeah, my impression is that "mechanistic" is often used in the social sciences to refer to an idea very similar to "gears-level." E.g., as discussed in this highly-cited overview [https://www.annualreviews.org/doi/abs/10.1146/annurev.soc.012809.102632] (with emphasis added):

You could indeed; think "causal mechanism" rather than "physical mechanism".

The Archetypal Rational and Post-Rational

There's often a complication in defining post-rationality where someone says "post-rationality involves X" and then the other person says "X is compatible with rationality too" and then the cycle repeats.

Thanks for writing this post. My first couple of encounters with postrationality were exactly along these lines (where I played the part of saying "but X is part of rationality").

Unfortunately, I'm still confused. Both descriptions (of postrationality and archetypical rationality) line up about equally well with my conception of rationality, and yet the po... (read more)

Why do you need the story?

I don't have anything to add, but this phenomenon was discussed in greater detail in Explain/Worship/Ignore. https://www.lesswrong.com/posts/yxvi9RitzZDpqn6Yh/explain-worship-ignore

The Bat and Ball Problem Revisited

The first time I saw the bat and ball question, it was like there were two parts of my S1. The first one said "the answer is 0.1" and the second one said "this is a math problem, I'm invoking S2". S2 sees the math problem and searches for a formula, at which point she comes up with the algebraic solution. Then S2 pops open a notepad and executes it, even though 0.1 seems plausible.

No real thought went into any step of this. I suspect the split reaction in the first bit was due to my extensive practice at doing math problems. After enough failures, I learned to stop using intuition to do math, and "invoke S2" became an automatic response.
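
For reference, the algebra S2 ends up executing (the standard problem: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball; let b be the ball's price):

\[
b + (b + 1.00) = 1.10 \quad\Rightarrow\quad 2b = 0.10 \quad\Rightarrow\quad b = 0.05
\]

The intuitive 0.10 fails the check: the bat would then cost 1.10 and the pair 1.20.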

Why do you believe AI alignment is possible?

Humans aren't aligned once you break the abstraction of "humans" down. There's nobody I would trust to be a singleton with absolute power over me (though if I had to take my chances, I'd rather have a human than a random AI).

Has LessWrong Been Mind-Killed on the Topic of God and Religion?

I found the post long and difficult to read, and the bit I did read appeared to be uninteresting. I also strongly dislike religion (yes, I know it has good parts, but it also has a shit ton of bad). In particular, I have no desire to see more content glorifying it here, and your post appeared to be doing that from the cursory inspection I gave it. Thus, the downvote.

I didn't strong downvote because I hadn't read the entire post or given it a proper chance. A weak downvote is minor dislike, a desire to see less of that sort of thing. A strong downvote fo... (read more)

What is the link between altruism and intelligence?

The orthogonality thesis is usually used with AI, because that topic is where it actually matters, but the overarching idea applies to any mind. Making something smarter does not give it morals.

And no, I bet that the psychopaths would use their newfound powers to blend in and manipulate people better. Overt crime would drop, and subtler harm would go up. That's what happens in the real world across the real intelligence gradient.

I'm not a sociopath, but I was a sociopath-lite before transitioning (minimal emotion, sadistic streak, almost no empathy). I onc... (read more)

Ruralvisitor83 (+1, 3mo): That's exactly what I mean. The reason in this case is a comprehension of sustained reward. A monkey doesn't comprehend sustained reward: if you gave it a peanut but told it that if it waits five minutes it can have ten peanuts, it wouldn't understand and would just eat the peanut. With intelligence comes a greater understanding of potential reward, and we give in less easily to sexual, sadistic, and other impulses. This can be seen through biology, like I mentioned; we're more in control of our biological drives than any other creature. Now, what I obviously didn't say is that intelligence magically makes someone non-evil or non-psychopathic. It's just: why would a psychopath in a sort of singularity scenario, who is like a super Einstein, risk getting incriminated if he has a true understanding (way better than any of us) of what a successful singularity could bring (Kurzweilian scenarios)? I mean, the psychopath or sadist would just realise through enhanced reason (like they probably already do) that their behaviour is inherently wrong even though they enjoy it, and edit their brain so they don't want it anymore. I think the flaw in the 'orthogonality thesis' is that it assumes people (or AI) don't become non-evil through extra intelligence; I think they just become more altruistic.
What is the link between altruism and intelligence?

This is known as the orthogonality thesis, that intelligence and rationality don't dictate your values. I don't have time right now to explain the whole thing but it's talked about extensively in the sequences if you want to read more. I think it's pretty widely accepted around here as well.

Ruralvisitor83 (+1, 3mo): I don't think that's what I meant. Isn't the orthogonality thesis about AI only? Like, if we have a superintelligent AI, there's no reason its morals will be good unless we instruct it to be so. I'm talking about you and me: if our brains got a huge boost and we all became twice as smart as Einstein was, would the people who are psychopaths now stop being psychopaths then?
Viliam (+4, 3mo): My "intuition pump" is to imagine a superintelligent gigantic spider. Not some alien with human values in a spider body, but an actual spider that was 'magically' enlarged and given IQ 500.
Yoav Ravid (+3, 3mo): The Orthogonality Thesis [https://www.lesswrong.com/tag/orthogonality-thesis] tag is a good place to start.
Depositions and Rationality

I'm not sure about this. Arguing with myself can get really hostile as it is, and a lot of the OP seems to encourage an adversarial mindset.

On the other hand, I think there's definitely potential here. Generating ranges with upper and lower bound questions seems super useful, for instance.

The Opt-Out Clause

I did it, nothing happened.

The Opt-Out Clause

Well I tried it, and it didn't work... so I guess the answer is yes?

Transcript: "You Should Read HPMOR"

HPMOR got me to read the Sequences by presenting a teaser of what a rationalist could do and then offering the real me that power. This line from the OP resonated deeply:

Taken together, caring deeply about maximizing human fulfillment and improving my cognitive algorithms changed my life. I don’t know if this particular book will have this particular effect on you. For example, you might not be primarily altruistically motivated on reflection. That’s fine. I think you may still selfishly benefit from this viewpoint and skillset.

The Sequences then expanded that vision into something concrete, and did in fact completely change my life for the better.

[Book Review] "The Bell Curve" by Charles Murray

Anecdotal, but about a year ago I committed to the rationalist community for exactly the reasons described. I feel more accepted in rationalist spaces than trans spaces, even though rationalists semi-frequently argue against the standard woke line and trans spaces try to be explicitly welcoming.

My current thinking on money and low carb diets

I have this problem with white carbs and ended up cutting them out almost entirely. I do occasionally have a loaf of ciabatta or white tortillas, but I time those meals so I won't be able to eat more later.

Why do humans want to be less wrong?

People don't see desire to be rational as a desire? You mean, it's instrumental rather than terminal?

Definitely the first for me. I remember staring at the cover of the Sequences and clearly seeing that they would change me and make me stronger. It was an offer of power, and I made a conscious decision to accept it.

I suppose that insofar as humans are already rational, we probably did evolve it for survival purposes. But... survival purposes often required irrationality.

acylhalide (+1, 3mo): I got the feeling some people didn't, even if they're not explicit about it. Interesting. That's the view from the inside [https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside], though. From the outside view, you are likely a deterministic machine that was already predisposed to absorbing the Sequences once in contact with them. That desire to absorb existed before you were exposed. Although of course, on exposure your desires could change. True; I wonder what's the maximum we can deviate from this scripted irrationality.
Why do humans want to be less wrong?

Rationality makes me stronger and thus more able to achieve my goals. You would be better served by asking why humans have the true desires they do.

acylhalide (+1, 3mo): I ask that question too - but that question gets asked a lot. If anything, I am trying to expand the scope of that question. All desires have some source in the physical world. But people typically don't classify "desire to be rational" as a desire, even though at first glance, IMO, it should be. If I may ask, "being more able to achieve goals" applies at what level? Is it that you deliberately think, "Okay, if I keep pursuing more rational thought processes I'll get to my desires"? Or is it that animals evolved rationality even before they could think about rationality? Or do you have another explanation?
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Strong upvoted. I disagree that there's an active attempt at suppression (I agree with the other comments), but the last time I tried to dig into "is MIRI/CFAR a cult" it was nearly impossible to do more than verify a few minor claims.

Some of that may just have been me being a few years late, but still. It would be nice if information on something so important was easier to find, rather than hidden (even if the hiding mechanism is the product of apathy instead of malice).

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Really? I do it because it's easier to type. Maybe I'm missing some historical context here.

ESRogs (+3, 3mo): For some reason a bunch of people started referring to him as "Big Yud" on Twitter. Here's some context [https://twitter.com/esyudkowsky/status/1219327143949111296] regarding EY's feelings about it.
Creating a truly formidable Art

This is really long but I just wanted to address one tiny little tangent:

With all that said, I do think the essence really amounts to "why not… just not do drama?"

I think the answer is basically that most people — and basically all the loud or visible collectives — are highly addicted to the sensations of drama. It lands a little like "why not… just stop smoking?" Ultimately, yes, of course. But in practice I think it's trickier than you seem to think it is.

I've definitely been guilty of the rescuer role, tho I've gotten much better at avoiding the ... (read more)

Book Review: The End of Average

It sounded more to me like it was saying IQ is an abstraction that hides a lot of possibly-important complexity. Which is not the same thing as saying that IQ doesn't exist or is useless.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I didn't downvote, but I almost did because it seems like it's hard enough to reveal that kind of thing without also having to worry about social disapproval.

Shoulder Advisors 101

Seconding this; I noticed the parallels the moment OP started talking about advisors injecting comments.

Tulpas can be instantiated from fictional characters, and these are called fictives or soulbonds. And it's not about ceding control; I think that's more of a DID thing, where some alters will hide because they're traumatized and afraid.

I suspect that feeding an advisor attention (by talking to it a lot) will help it grow into a tulpa/alter. But I'm not sure, my advisors aren't even at the level described in the post. I have to invoke them deliberately, an... (read more)

Forget AGI alignment, are you aligned with yourself?

Could you please elaborate on this?

We each try our ideas for a little while, then come back together to discuss what went wrong. Most disagreements are going to be about implementation rather than vision. Or at least, vision differences would require much more divergence.

Would this mean you personally value your own life pretty highly (relative to the rest of humanity)?

Yes. I do care about people, and I'm willing to make some sacrifices if they're justified by my true desires. I'm just not willing to die. And obviously I don't want anything bad to happe... (read more)

acylhalide (+2, 3mo): Thanks for the reply. Makes sense! If your differences are not worth dying for, then you will end up finding ways to work together.
Forget AGI alignment, are you aligned with yourself?

Worst case, I lose, and the clone uses their power to contain me so I stop being a danger to them. Or they just kill me, lol.

If the power were shareable, that would greatly increase the divergence needed for a fight. Most disagreements could be easily solved by each of us taking half. Self-preservation isn't worth risking to make a few changes to the copy's plans.

With the power, probably indulge in unbridled hedonism for a while. Eventually I'd get bored tho and start trying to build with it. Hedonism is fun and destruction is easy, but creation is challenging and satisfying in a way neither of them are. Transhumanism and the stars are our destiny!

acylhalide (+2, 3mo): Could you please elaborate on this? Would this mean you personally value your own life pretty highly (relative to the rest of humanity)? Makes sense, can totally relate!
Forget AGI alignment, are you aligned with yourself?

Both of me would rush to the button as fast as possible. It doesn't much matter which copy gets there first; since we're copies, we'll take care of the slower one. And one of me pressing it is vastly better than anyone else doing so.

A diverged copy would need to be very different before I fought them. I have a strong gut belief in the idea that you should cooperate with people like you, and a copy is maximally similar. Even a diverged copy is still going to be quite similar.

I would be very wary of trying any deception or defecting because the other copy kn... (read more)

acylhalide (+2, 3mo): Thanks for replying! I generally agree with your intuition that similar people are worth cooperating with, but I also feel like when the stakes are high this can break down. Maybe I should have defined the hypothetical such that you're both co-rulers or something until one kills the other. Because the worst case in a fight is that you lose and the clone does what they want - which is already not that awful (probably), and is already guaranteed. But you may still believe you have something non-trivially better to offer than this worst case. And you may be willing to fight for it. (Just my intuitions :p) Do you have thoughts on what you'll do once you're the ruler?
Do you like excessive sugar?

I don't have hard data, but sugar is very much a superstimulus for me. If I eat something sweet like ice cream, I'll keep eating until it's gone, with no real chance of self-control. I've previously devoured 52oz tubs of ice cream or an entire plate of cookies.

Sugar, salt, and fat reliably trigger this pattern, to the point where I avoid concentrations of them unless I'm okay with eating the entire thing at once.

If I eat lots of junk food for a long time, I'll crave healthier stuff but empirically it takes a couple months.

What to read instead of news?

LessWrong. I've made it a habit to check LW before Reddit, and while I don't always find something interesting, that's usually because I've been to the site already that day.

AI takeoff story: a continuation of progress by other means

I interpreted the Medallion stuff as a hint that AGI was already loose and sucking up resources (money) to buy more compute for itself. But I'm not sure that actually makes sense, now that I think about it.

Edouard Harris (+5, 4mo): See my response [https://www.alignmentforum.org/posts/Fq8ybxtcFvKEsWmF8/ai-takeoff-story-a-continuation-of-progress-by-other-means?commentId=MuCpa3ACxLaeFYDki] to point 6 of Daniel's comment — it's rather that I'm imagining competing hedge funds (run by humans) beginning to enter the market with this sort of technology.
The Best Software For Every Need

Software: Typora. Need: Markdown editing. Other programs I've tried: Boostnote, StackEdit, VSCode, Marktext

Most markdown editors have plain text on one side and rendered text on the other. Typora has a single WYSIWYG panel. You can edit it as if it were plain markdown; for example, you can bold something by putting stars around it. But you can also edit it as if it were WYSIWYG, by pressing Ctrl+I or through a menu. More importantly, it doesn't take up a lot of screen space, and it's much more aesthetically pleasing to not have a bunch of plain text. The only ... (read more)

dr_s (+2, 4mo): Supported. I use Typora for all my creative writing; it's distraction-free, does its work great, and helps me export to FF.net and AO3 really easily.
CraigMichael (+1, 5mo): Could you use it to make something like XPath-directed changes to an XML document? Like not just regexy things, but "if this tag or attribute in a tag does or doesn't exist, add this."
The Duplicator: Instant Cloning Would Make the World Economy Explode

I know you've probably already read it but just in case you haven't: this post is basically the premise of Age of Em.

Previously discussed on LW here.

Bjartur Tómas (+2, 2mo): Now posted: https://www.lesswrong.com/posts/aNAFrGbzXddQBMDqh/moore-s-law-ai-and-the-pace-of-progress
Veedrac (+2, 5mo): No, sorry.

No. The trivial objection is that you need science to build AI and AI hardware. My real objection is that we don't know if it's going to take ten or a hundred years to build AI and I'd rather not put all our eggs in one risky basket.

Nguyen Kien (+1, 5mo): I mean we should focus on the science that is useful for building AI and AI hardware. I think there are some sciences that are useless for building AGI (like string theory, cosmology, ...).
Pedophile Problems

Strong downvoted for the clickbait. This isn't the place for that kind of nonsense.

For God's sake, Google it.

My rule of thumb is that if I'm in a conversation, I'll ask for clarification instead of googling whatever term they just used. If they don't know, or they have a question, then my first stop is Google. It does take a certain skill level to know when it's more efficient to ask someone, and knowing how to phrase questions is apparently difficult for people who didn't grow up learning how to program.

I find that this means I get the social benefits of conversation, and when I actually care about the answer, I get it without wasting time.

Outline of Galef's "Scout Mindset"

Okay, I'm sold. I'll read the whole book.

How much do variations in diet quality determine individual productivity?

The idea of a rationalist who doesn't understand that rationality/intelligence doesn't imply values convergence astounds me.

I'll be a vegan when our technology outgrows animal farms, or the personal cost to be a vegan falls below the tiny sliver of me that is vaguely dissatisfied with the status quo.

MSRayne (-3, 6mo): Not to be that guy, but the idea of someone being a rationalist but not a vegan astounds me. You do know animals are sentient, right?
How much do variations in diet quality determine individual productivity?

creatine supplementation increased the average IQ of their sample of vegetarians by 12 points

Creatine didn't improve my IQ, but it did improve my scores on a digit memory test and, more importantly, my mental stamina. I took it for a year, decided it wasn't doing anything, and quit. After about a week, I noticed that I was feeling tired after six or seven hours of programming instead of the eight or nine I had been doing. After taking creatine again, my energy returned.

Anecdotal blah blah, but maybe you should try it? It doesn't cost that much for a month's supply, and you can find simple recall tests online and do one measurement before and one after.
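
If you want to run that before/after measurement yourself, here is a minimal sketch of a digit-recall self-test (purely illustrative; the online tests the comment mentions are better controlled):

```python
import random
import time

def digit_span_trial(length):
    """Flash a random digit string, then ask the user to type it back."""
    digits = "".join(random.choice("0123456789") for _ in range(length))
    print(digits)
    time.sleep(0.5 * length)  # roughly half a second per digit to memorize
    print("\n" * 50)          # crude screen "clear"
    return input("Type the digits: ").strip() == digits

def max_span(start=4, stop=12):
    """Increase the length until the first miss; return the last length passed."""
    best = 0
    for length in range(start, stop + 1):
        if not digit_span_trial(length):
            break
        best = length
    return best

if __name__ == "__main__":
    print("Digit span this session:", max_span())
```

A single trial per length is very noisy, so averaging several sessions before and after is the bare minimum.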

ChristianKl (+4, 6mo): Are you a vegetarian?
Natália Mendonça (+7, 6mo): I'm looking for answers less like "this thing made me feel better/worse" and more like "these RCTs with a reasonable methodology showed on average a long-term X-point IQ increase/Y-point HAM-D reduction in the intervention groups, and these analogous animal studies found a similar effect," in which X and Y are numbers generally agreed to be "very large" in each context. This also seems to be the kind of question that variance component analyses would help elucidate. I do take a creatine supplement, despite expecting it not to help cognition/mood/productivity that much.
Believing vs understanding

Under what conditions is that itsy bitsy part above the water ever going to bring a customer from a "no" to a "yes"?

This has happened to me several times. For example, I use a specific markdown editor because it has a single killer feature (single-pane editing) that none of the others do. Or a few days ago, I looked at a comparison of vector editing software because the one I was using didn't have a specific feature (user-friendly perspective transforms). I've picked apps over something as simple as a dark theme, or being able to store my notes as a tree instead of a flat list. Sometimes, a single feature can be exactly what someone wants.

adamzerner (+3, 6mo): Yeah that does make sense. I guess it depends on the feature in question and how close the competition is.
Ideal College Education for an Aspiring Rationalist?

That class and then an internship at a semiconductor factory thoroughly dispelled any lingering mysticism around computers.

And the Darkness Answered

Sort of, once you become the shadow, you have to "eat the light" as it were.

That's pretty ironic, but it fits really well with my own thoughts on my inversion lately. Thank you for responding.

And the Darkness Answered

Shiloh thought he had eaten his shadow years ago, but in truth he had only scratched the surface and was ultimately still ruled by the same moral systems which we had been branded with as a child. We couldn’t see the control structures because we were immersed in them, because they were all we had ever known.

How were you able to gain the clarity to see what you were repressing? What did that feel like from the inside? I read the shadow sequence you linked, but it was heavy on poetic metaphors and light on experiential signposts.

Hivewired (+9, 6mo): From the inside, we really didn't have the clarity to see what we were repressing. The reason the inversion worked was that it didn't require us to actually know what all was being hidden away. That also makes inversion a fairly risky and high-variance strategy, because we had no idea what the person who came out of that inversion was going to be like, or what they would be willing to do. We just knew that what we were doing wasn't working, and while you can't invert stupidity to get intelligence, you can invert your way out of a morality trap you set for yourself. Inverting definitely will not get you all the way to somewhere good though, it just breaks you out of the trap. Once you're out of the trap, you still have to do the work to reincorporate the parts you have overthrown in a healthy way. Sort of, once you become the shadow, you have to "eat the light" as it were.
Rationality Yellow Belt Test Questions?
  • Something along the lines of the CRT
  • Calibration questions
  • In the Sequences, one of the beisutsukai rituals involves having to perform an on-the-fly Bayesian update mentally (and then resist being peer-pressured into a wrong answer); see the sketch after this list
  • Something to do with the planning fallacy
  • Confirmation bias - see the harry/hermione scene on the train
  • Something to do with overcomplicated plans/theories
  • "What grade will you get on this test?" -> graded on the accuracy. sort of a calibration-plus-humility question
  • something to test whether they can notice the "quiet stra
... (read more)
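
On the on-the-fly Bayesian update item, here's a minimal sketch of the computation being tested, with made-up numbers (this is just Bayes' rule, not anything specific from the beisutsukai scene):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) from a prior on H and the likelihood of E under each case."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Prior of 0.3, and evidence twice as likely under H as under not-H:
print(posterior(0.3, 0.8, 0.4))  # ~0.46
```

The mental version is easier in odds form: prior odds of 3:7 times a likelihood ratio of 2:1 gives 6:7, i.e. about 0.46.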
The homework assignment incentives, and why it's now so extreme

"Want to study together? " was code for "want to split up the problems and copy off each other?".

Reinforcing Habits

"Just start noticing how greasy they are, and how the grease gets all over your fingers and coats the inside of the bag. Notice that you don't want to eat things soaked in that much grease. Become repulsed by it, and then you won't like them either."

I tried using this on soda. I spent five minutes visualizing the fizz and sweetness of soda mixed with replayed emotions of disgust and aversion. That was half a year ago and since then, I've had one can of ginger soda. It was okay, but not great, and I haven't had any temptation since then. Notably, there i... (read more)

MSRayne (+2, 7mo): To an extent, I am intrinsically ambivalent about food, but over time (starting in childhood, actually) I have sort of unconsciously trained myself to be averse to sweetness - anything that is too sweet makes me think of rotting teeth full of cavities, and sugars being transformed into fat deposits, and all the energy in that sugar which some starving person could use more effectively than I, if only they had been the one to eat it instead - and it makes me less interested in eating the sweet thing. Oddly, this mostly shows up after I've already eaten it and makes me guilty without stopping me from eating it in the first place, though, because I don't stop and think about that when the food is actually in front of me - hence why I have multiple cavities in my teeth!
The homework assignment incentives, and why it's now so extreme

I occasionally did homework swaps, and more often let friends copy off me. Such practices were rampant at my school, and I went to a small nerd high school. Your mistake was letting anyone except the person involved know about the trade.

Viliam (+5, 7mo): Yeah, my high school involved a lot of teamwork, and the university even more so. The right thing to tell the parents is "we are going to learn together".
Are bread crusts healthier?

It depends on the kind of bread. I usually don't eat the ends of sandwich bread loaves, but the crusts of artisan bread are the best part.

I'd take the water, though I'd look for alternative solutions first.

But that's an extreme scenario. In most real-life cases, taking the water provides a minor immediate benefit and a high risk of downsides, because you're burning reputation. Remember, the winning strategy in the iterated PD is tit-for-tat, not pure cooperation or pure defection.

Beyond that, I generally prefer cooperation for its own sake, and that preference is strong enough to outweigh minor benefits, particularly when the long-term costs are unclear. Call that wishy-washy if you want -- but I'd still take the water in the desert.
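
Since tit-for-tat is doing real work in that argument, here's a minimal sketch of it in an iterated prisoner's dilemma (the payoff numbers are the classic Axelrod tournament values; the strategy and function names are just illustrative):

```python
C, D = "C", "D"
# (row, col) payoffs: the classic Axelrod tournament values.
PAYOFFS = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then mirror the opponent's previous move."""
    return C if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return D

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees only the opponent's history
        b = strategy_b(moves_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): loses one round, then punishes
```

Tit-for-tat's strength in Axelrod's tournaments came precisely from the repeated-game structure this comment points at: once interactions repeat, reputation makes defection expensive.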
