# All of Korz's Comments + Replies

I do not know about scientific studies (which does not mean much), but at least anecdotally I think the answer is a yes at least for people who are not trained/experienced in making exactly these kinds of decisions.

One thing I have heard anecdotally is that people often significantly increase the price when deciding to build/buy a house/car/vacation because they "are already spending lots of money, so who cares about adding 1% to the price here and there to get neat extras" and thus spend years/months/days of income on things which they would not have boug...

Isn't -1 inversion?

I think for quaternions, -1 corresponds both to inversion and a 180 degree rotation.

When using quaternions to describe rotations in 3D space, however, one can still represent rotations with unit-quaternions q = cos(θ/2) + sin(θ/2)·(n_x i + n_y j + n_z k), where n is a 'unit vector' distributed along the i, j, k directions and indicates the rotation axis, and θ is the 3D rotation angle. If one wishes to rotate any orientation v (same type of object as n) by q, the result is q v q⁻¹. Here, -1 corresponds to θ = 360° and is thus a full 360° turn.
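For concreteness, here is a minimal plain-Python sketch of this (the helper names are my own invention): a 180° rotation about the z-axis sends (1, 0, 0) to (-1, 0, 0), while the 360° quaternion -1 leaves every orientation unchanged.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotation_quaternion(axis, theta):
    """Unit quaternion cos(theta/2) + sin(theta/2)*(n_x i + n_y j + n_z k)."""
    nx, ny, nz = axis  # assumed to be a unit vector
    s = math.sin(theta / 2)
    return (math.cos(theta / 2), s * nx, s * ny, s * nz)

def rotate(v, q):
    """Rotate 3-vector v by unit quaternion q via q * (0, v) * q^-1."""
    q_conj = (q[0], -q[1], -q[2], -q[3])  # inverse of a unit quaternion
    w, x, y, z = qmul(qmul(q, (0.0, *v)), q_conj)
    return (x, y, z)

# 180 degrees about the z-axis sends (1, 0, 0) to (-1, 0, 0) ...
half_turn = rotation_quaternion((0, 0, 1), math.pi)
# ... while theta = 360 degrees gives the quaternion -1, which acts as the identity:
full_turn = rotation_quaternion((0, 0, 1), 2 * math.pi)
```

Note that `full_turn` comes out as (almost exactly) (-1, 0, 0, 0), yet rotating any vector by it gives the vector back, which is the double-cover phenomenon in miniature.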

I have tried to read u...

3sen2mo
Thanks for the explanation. I found this post [https://qchu.wordpress.com/2011/02/12/su2-and-the-quaternions/] that connects your explanation to an explanation of the "double cover." I believe this is how it works:

* Consider a point on the surface of a 3D sphere. Call it the "origin".
* From the perspective of this origin point, you can map every point of the sphere to a 2D coordinate. The mapping [https://en.wikipedia.org/wiki/Stereographic_projection] works like this: Imagine a 2D plane going through the middle of the sphere. Draw a straight line (in the full 3D space) from the selected origin to any other point on the sphere. Where the line crosses the plane, that's your 2D vector representation of the other point. Under this visualization, the origin point should be mapped to a 2D "point at infinity" to make the mapping smooth. This mapping gives you a one-to-one conversion between 2D coordinate systems and points on the sphere.
* You can create a new 2D coordinate system for sphere surface points using any point on the sphere as the origin. All of the resulting coordinate systems can be smoothly deformed into one another. (Points near the origin are always large, points on the opposite side of the sphere are always close to (0,0,0), and the changes are smooth as you move the origin smoothly.)
* Each choice of origin on the surface of the sphere (and therefore each 2D coordinate system) corresponds to two unit-length quaternions. You can see this as follows. Pick any choice of i,j,k values from a unit quaternion. There are now either 1 or 2 choices for what the real component of that quaternion might have been. If i,j,k alone have unit length, then there's only one choice for the real component: zero. If i,j,k alone do not have unit length, then there are two choices for the real component since either a positive or a negative value can be used to make the quaternion unit length again.
* Tak
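The "one or two choices for the real component" step is easy to check numerically. A small sketch (the function name is my own; it assumes the given i,j,k values have length at most 1):

```python
import math

def possible_real_components(i, j, k, tol=1e-12):
    """Return the real components w that make (w, i, j, k) a unit quaternion."""
    s = 1.0 - (i*i + j*j + k*k)  # w^2 must equal this remainder
    if s < tol:       # (i, j, k) already has unit length: only w = 0 works
        return [0.0]
    w = math.sqrt(s)
    return [w, -w]    # both signs restore unit length
```

For example, (i, j, k) = (1, 0, 0) allows only w = 0, while (0.6, 0, 0) allows w = 0.8 or w = -0.8, matching the two-quaternions-per-rotation picture.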

Thank you.

I really like your framing of home - it seems very close to how John Vervaeke describes it, but somehow your description made something click for me.

I wish to be annealed by this process.

I'd like to share a similar framing of a different concept: beauty. I struggled with what I should call beautiful for a while, as there seemed to be both some objectivity to it, but also loads of seemingly arbitrary subjectiveness which just didn't let me feel comfortable with feeling something to be beautiful. All the criteria I could use to call something b...

2Jarred Filmer2mo
Your description is beautiful in the sense that you use the word :) thank you for sharing

I will have to try this, thanks for pointing to a mistake I have made in my previous attempts at scheduling tasks!

One aspect which I have the feeling is also important to you (and is important to me) is that the system also has some beauty to it. I guess this is mostly because using the system should feel more rewarding than the alternative of "happen to forget about it" so that it can become a habit.

I recently read (/listened to) the shard theory of human values and I think that its model of how people decide on actions and especially how hyperbolic disco...

Regarding the transporter:

Why does "the copy is the same consciousness" imply that killing it is okay?

From these theories of consciousness, I do not see why the following would be ruled out:

• Killing a copy is equally bad as killing "the sole instance"
• It fully depends on the will of the person

Oh, right - it seems I actually drew B instead of C2. Here is the corrected C2 diagram:

2tgb5mo
Beautiful! That’s also a nice demonstration of B=C2.

Okay, I think I managed to make at least the case C1-C2 intuitive with a Venn-type drawing:

(edit: originally did not use spades for C1)

The left half is C1, the right one is C2. In C1 we actually exclude both some winning 'worlds' and some losing worlds, while C2 only excludes losing worlds.
However, due to symmetry reasons that I find hard to describe in words, but which are obvious in the diagrams, C1 is clearly advantageous and has a much better winning/losing ratio.

(note that the 'true' Venn diagram would need to be higher dimensional so that one can have e.g. aces of hearts and clubs without also having the other two. But thanks to the symmetry, the drawing should still lead to the right conclusions.)

3tgb5mo
I think your left diagram is correct but the one for C2 is off somewhat. In both, we’re conditioning on the statement that “you have an ace of spades”, so we’re exclusively looking in that top circle. Both C1 and C2 have the same exact grey shaded area. But in C2, some of the green shaded region inside that circle is also missing: the cases where you have an ace of spades but I happened to tell you about one of the other aces instead. So C2 is a subset of C1 (conditioned on being told you have the ace of spades) where only a randomly selected subset of the winning hands are chosen (1/2 of the ones with two aces, 1/3 of the ones with three, etc). But that correction doesn’t really change much since your diagram is just the combination of four disjoint diagrams, one for each of the suits. So the ratio of grey to green is right, but I find it harder to compare to C1. Either way, my main point was that C2 might have been driving our intuition that C=B, and in fact, C2=B, so our intuition isn't doing too badly.
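For anyone who wants to check the C1/C2/B relationship numerically, here is a quick Monte Carlo sketch. I am assuming the setup here (these details are my guesses about the original post): a five-card hand, "winning" = holding at least two aces, B = being told "you have an ace", C1 = being told "you have the ace of spades", and C2 = the informant naming one uniformly chosen ace from the hand, which happens to be the ace of spades.

```python
import random

random.seed(0)
ACES = ["AS", "AH", "AD", "AC"]  # spades, hearts, diamonds, clubs
DECK = ACES + [f"x{n}" for n in range(48)]  # plus 48 non-ace filler cards

wins = {"B": 0, "C1": 0, "C2": 0}
totals = {"B": 0, "C1": 0, "C2": 0}

for _ in range(200_000):
    hand = random.sample(DECK, 5)
    aces = [c for c in hand if c in ACES]
    if not aces:
        continue
    win = len(aces) >= 2
    # B: conditioned only on holding at least one ace
    totals["B"] += 1
    wins["B"] += win
    # C1: conditioned on holding the ace of spades
    if "AS" in hand:
        totals["C1"] += 1
        wins["C1"] += win
    # C2: the informant names one uniformly chosen ace from the hand
    if random.choice(aces) == "AS":
        totals["C2"] += 1
        wins["C2"] += win

p = {k: wins[k] / totals[k] for k in wins}
print(p)  # roughly: C1 ≈ 0.22, while C2 ≈ B ≈ 0.12
```

Under these assumptions the simulation shows exactly the pattern described above: C1 has a clearly better winning ratio, while C2 and B agree.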

Thanks for the attempt at giving an intuition!

Maybe the intuition here is a little clearer, since we can see that winning hands that contain an ace of spades are all reported by C1 but some are not reported by C2, while all losing hands that contain an ace of spades are reported by both C1 and C2 (since there's only one ace for C2 to choose from).

If I am not mistaken, this would at first only say that "in the situations where I have the ace of spades, then being told C1 implies higher chances than being told C2"? Ea...


Though it's unclear to me if confidence intervals suggest this notation already. If you had less chance of moving your interval, then it would already be a smaller interval, right?

Counterexample: if I estimate the size of a tree, I might come up with CI 80 % [5 m, 6 m] by eye-balling it and expect that some friend will do a stronger measurement tomorrow. In that case, CI 80 % [5 m, 6 m] still seems fine even though I expect the estimate to narrow down soon.

If the tree is instead from some medieval painting, my CI 80 % [5 m, 6 m] could still be true while I ...

I like the idea of your proposal -- communicating how solidified one's credences are should be helpful for quickly communicating on new topics (although I could imagine that one has to be quite good at dealing with probabilities for this to actually provide extra information).

Your particular proposal "CR [ <probability of change>, <min size of change>, <time spent on question> }" is unintuitive to me:

• In "80% CI [5,20]" the probability is denoted with %, while its "unit"-less in your notation
• In "80% CI [5,20]", the braces [] indi
...

After reading your sequence today, there is one additional hypothesis which came to my mind, which I would like to make the case for (note that my knowledge about ML is very limited, there is a good chance that I am only confused):

Claim: Noise favours modular solutions compared to non-modular ones.

What makes me think that? You mention in Ten experiments that "We have some theories that predict modular solutions for tasks to be on average broader in the loss function landscape than non-modular solutions" and propose to experimentally test this.
If this...

I just found the link for their summary on job-satisfaction in particular: https://80000hours.org/career-guide/job-satisfaction/

A re-interpretation that makes me perceive the lines as equal length without any problems is to label them as "some kind of magnets that hover above each other"

As this is somewhat realistic, while letting the lines integrate well with the rest of the railroad (maybe this is an artistic impression for Maglev trains?), this does not break immersion for me.

Trends of different quantities:

Generally, I agree with your points :)

I recently stumbled upon this paper "The World’s Technological Capacity to Store, Communicate, and Compute Information", which has some neat overviews regarding data storage, broadcasting and compute trends:
From a quick look at the Figures my impression is that compute and storage look very much like 'just' exponential, while there is a super-exponential figure (Fig. 4) for the total communication bandwidth (1986-2007)[1]

General

I can see two mechanisms that potentially may make it l

...

This does not quite match your question, but I want to recommend taking a look at https://80000hours.org/ if you don't already know them.

Their focus is on providing resources for impactful altruistic careers, but they still have lots of nice general advice regarding job satisfaction and how to approach the topic of career choice. There also are examples where people describe their experiences with different career paths or the content of the paths themselves and lots more.

Depending on how much you are interested in an altruistic focus for your career, you'll find a larger or smaller portion of their writing relevant, but there is a lot of good stuff in any case :)

2Tom Paine6mo
Thank you, I haven't seen this before; I'll check it out.

Compute: A very simple attempt at estimating (non-biological) computing power:

• A version of Moore's law applies to the cost of computation power, and Moore's law held true quite steadily, so we can assume exponential growth.
• The figure in this article on the growth of the semiconductor industry shows significant oscillations in the growth rate in the last 25 years, but seems totally compatible with exponential growth plus a lot of noise.

If we just naively combine these two, we get two multiplying exponential growths for the total semiconductor compu...
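In equation form: if compute per chip grows as 2^(t/T1) and chip production grows as 2^(t/T2), the product is again exponential, doubling every T with 1/T = 1/T1 + 1/T2. A tiny sketch (the doubling times below are made-up illustration values, not data):

```python
def combined_doubling_time(t1, t2):
    """Doubling time of the product of two exponentials with doubling times t1, t2."""
    # growth rates add, so the reciprocal doubling times add:
    # 1/T = 1/t1 + 1/t2
    return 1.0 / (1.0 / t1 + 1.0 / t2)

# e.g. compute-per-chip doubling every 2 years and chip output every 6 years
# gives total compute doubling every 1.5 years (made-up numbers):
T = combined_doubling_time(2.0, 6.0)
```

So naively multiplying the two trends still gives a single exponential, just a faster one, rather than a super-exponential.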

1[comment deleted]6mo
2Just Learning6mo
Thank you for your research! First of all, I don't expect the non-human parameter to give a clear power law, since we need to add humans as well. Of course, close to the singularity the impact of humans will be very small, but maybe we are not that close yet. Now for the details:

Compute:
1. Yes, Moore's law was a quite steady exponential for quite a while, but we indeed should multiply it.
2. The graph shows just a five-year period, and not the number of chips produced, but revenue. The five-year period is too small for any conclusions, and I am not sure that fluctuations in revenue are not driven mainly by market price rather than by the produced amount.

Data storage: Yes, I saw that one before; it seems more like they just drew a nice picture rather than real data.

General remarks: I agree with the point that AGI appearance can be sufficiently random. I can see two mechanisms that potentially may make it less random. First, we may need a lot of computational resources, data storage etc. to create it, and as a lab or company reaches the threshold, it happens easily with already existing algorithms. Second, we may need a lot of digitalized data to train AGI, so the transition again happens only once we have that much data. Lastly, notice that the creation of AGI is not a singularity in a mathematical sense yet. It will certainly accelerate our progress, but not to infinity, so if the data predict, for example, a singularity in 2030, it will likely mean AGI earlier than that. How trustworthy would this prediction be? Depends on the amount of data and noise. If we have just 10-20 datapoints scattered all around the graph, so you can connect the dots in any way you like - not really. If, instead, we are lucky and the control parameter happened to be something easily measurable (something such that you can get just-in-time statistics, like the number of papers on arXiv right now, so we can get really a lot of data points) and the parameter continues to change as theory pre

If one believes the orthogonality thesis (and we only need a very weak version of it), just knowing that there is an AGI trying to improve the world is not enough to predict how exactly it would reason about the more quirky aspects about human character and values. It seems to me that something that could be called "AGI-humans" is quite possible, but a more alien-to-us "total hedonistic utility maximizing AGI" also seems possible.

From how I understood arguments of Eliezer Yudkowsky here, the way that we are selecting for AI models will favour models with c...

1Michael Bright8mo
Yes, I do. Me too. But I'm adopting the term "AGI-humans" from today. ...

One concept which I think is strongly related is common knowledge (if we extend it to not only refer to beliefs but also to norms).
I think that a good part of the difference between the scenarios for answering Alec is captured by the difference between sharing personal advice and knowledge compared to sharing common norms and knowledge. The latter will give Alec a lot of information about how to work together and cooperate with "the typical member of" the AI safety community, which is important information independently of whether it would be best fo...

a proposal that is related in meaning to Liquid Breaks would be
organic breaks - I like some of the connotations (organic growth of e.g. trees has a lot of flexibility, while still following simple rules; the method (work:break ratio) can be organically adapted to the user and one's current capacity)

2bfinn1y
Interesting. (Similarly I had thought of Natural Breaks before.)

Earned Breaks - would put the focus on the emotional aspect of coupling the two lengths and would sound non-technical

3bfinn1y
I agree with the concept. It would help if it were a standard phrase, or used alliteration/rhyme/pun, to be more memorable/catchy though. A similar name I thought of a while back was Well-Earned Breaks. This would have been ideal, as it's a standard phrase in British English, meaning 'a break you deserve for working hard'. But it turns out Americans don't understand it.

Great post, thanks! I would not have guessed that brains are this impressive.

Re: algorithms of the brain, I could still imagine that the 'algorithms' we rely on for higher concepts (planning, deciding on moral values, etc.) are more wasteful with regards to resources. But your arguments certainly reshape my expectations for AI.

Some typos/a suggestion:

"The predicted wire energy is 10−19W/bit/nm" J/bit/nm
"[...] a brain-size ANN will cost almost 3kW" 3kJ
"if we assume only 10% of synapses are that large, the minimal brain synaptic volume is about 10^18 nm...

3jacob_cannell1y
Thanks! And thanks for the typo catches. I agree that the higher level software that runs on the brain (multi-step algorithms) - the mind - is more evolutionarily recent, a vastly larger search space, still evolving rapidly, etc. I added a note in the beginning to clarify this article is specifically not arguing for high efficiency or near optimality for mental software.

Interesting comparison!
I intended to write something like the following as a new comment, but you provided a great metaphor:

One aspect that I want to point to is that the jungle can be 'out to get you':

As ads become the norm in public spaces—which comprise a large part of the interaction between an individual and "society at large"—this will affect the trust-level of the people involved. A consequence of this would be that bad and manipulative ads provide a limit on how much individuals can trust that their society is friendly to them and whether behaving ...

Good points!

But what is this supposed to be specifically?

My estimate of what could plausibly be achieved in the near future is my model of "the thing that happens during childhood", maybe with better control by the individual. This would be some phases of heightened learning rate/'priority' for different parts of our mind (fear and recognizing safe/unsafe situations; calibrating and exploring perception and motion; status and norms; sexuality; sense of self and identity; ...). I assume that this 'emotional rejuvenation' could work relatively well 'f...

Thanks for writing!

One trend in longevity-enhanced cultures which I expect to become common, is for "old and worn-out souls" to trigger a kind of mental rejuvenation: Use some type of medicine/gene-activation/.. to trigger a softer version of what happens during childhood.

I see several reasons:

1. This would allow for a culturally-acceptable and powerful method for people to reinvent themselves and start anew (possibly with an agreement that the usual date is every 100th birthday), which would be a more important element to cultures suitable to longer life-s
...
3Gunnar_Zarncke1y
Mental rejuvenation would solve some of the issues. But what is this supposed to be specifically? Increased memory consolidation or some kind of amnesia? What about habits and motor memory? Without more elaboration, it amounts to hand-waving. Also, the body wouldn't adapt to the environment, e.g. new foods. Positing additional drugs for 'reprogramming the body' might help but again what would they do? You can of course stack fixes on fixes with drugs. I'm not sure how this compares to regular evolution.
3Malmesbury1y
Interesting. It's hard to imagine the new markets opened by anti-aging. That being said, if such a drug existed I would probably take it regularly even if I'm not immortal. Now I wonder if that would be possible to do without erasing your memories. Otherwise, it would defeat the point of immortality.
See the story, "Good-bye, Robinson Crusoe", by John Varley.
5dkirmani1y
This is what psychedelics do, especially high doses.

Regarding the link: Possibly you did not activate the markdown editor for your profile?

2oumuamua1y
Thank you, I had no idea markdown needed to be actively activated in the profile.

Ah, this makes sense thanks!

I wouldn't say that this is what happens with Shoulder Advisors or with the no-self experience of meditation. There are many failure modes of the brain making sense of agency and identity.

This sounds right. Maybe the cases that I am concerned about additionally contain fear responses, and purely having a non-unified or unclear sense of self is more normal/safer than I thought.

It seems I am not as worried about GPT-3 as you, but when listening to the simulated interview with simulated Elon Musk by Lsusr in the Clearer Thinking podcast episode 073 (starts at minute 102), I was quite concerned.

Parts like

“Dumbledore gave you this?” Hermione gasped.
“Yes… he didn’t really explain why. But then he couldn’t explain the rock either.”

and

But the warm glow of academic pride washed over her face all the same.

are really well-done. It feels as if your writing could have been part of HPMOR.

Thank you!

2Henry Prowbell1y
Thanks for the encouragement. Appreciate it :)

I am unsure whether I am correctly understanding your position:

• Would you agree that some of the aspects that make the ego/self different compared to shoulder advisors are the ones that I stated? (it doesn't seem to contradict the formulation 'privileged shoulder advisor' as far as I can tell)
• The 'matter of degree' question where our views differ is whether there are such things as 'shoulder advisors+' that are e.g. halfway between a pure shoulder advisor and the ego?

If I am not misunderstanding you, this is a...

3Gunnar_Zarncke1y
A, I missed a "t". "can" -> "cant". Sorry about that typo. I mostly agree with it being a matter of degree. But I want to respond to this part of your comment: I wouldn't say that this is what happens with Shoulder Advisors or with the no-self experience of meditation. There are many failure modes of the brain making sense of agency and identity. I think the default mode of society is to encourage and reinforce an interpretation around ego, identity, and agency which is stable and beneficial (at least in the sense of societal productivity, I guess there are cultures with very differt patterns that are stable but probably less scalable e.g. the Piraha [https://en.wikipedia.org/wiki/Pirah%C3%A3_people] ).

My thoughts on this are mostly from introspection. When I try to imagine a shoulder advisor in comparison to my self (note that I do not have shoulder advisors currently), there seem to be some additional properties to my self which a shoulder advisor would not have.

Trying to get at the differences, what comes up is:

• bodily sensations and urges are 'directly fed into and fuel (/delegate vote power to)' my self, but not shoulder advisors
• decisions on movement likewise are directly connected to myself, while shoulder advisors are only influencing my mental dialo
...
3Gunnar_Zarncke1y
You describe it as a matter of degree and I can't disagree with that.

Another name proposal: essentialized skill
- sounds impressive (I think. I am not a native speaker)
- the essence-part suggests some kind of deep mastery
- the ized-part suggests that it was once just a regular skill and can in principle be acquired
- the essence-part also suggests that the skill is now part of one's nature and is thus not necessarily under conscious control/supervision

I decided to propose this name before reading the comments, I also like some properties of the other proposals though.

I would guess that there is some additional machinery involved in the ego compared to shoulder advisors (this might not contradict your description of ego as privileged shoulder advisor), as tulpas seem to be quite related to shoulder advisors while being 'closer to ego' in some sense.
Probably this distinction is an important reason why shoulder advisors seem much less problematic from the standpoint of mental health.

4Gunnar_Zarncke1y
What additional machinery do you have in mind or what else makes you think that?

After having read a few GPT-3 generated texts, its type of pattern-matching babbling really reminds me of what is here described as apologist. Maybe the apologist part of the mind just does not do sufficiently model-based thinking to catch mistakes that are obvious to an explicitly model-based way of thinking ("revolutionary")?

It seems very plausible to me that there are both high-level model-based and model-free parts in the human mind. This would also match the seemingly obvious mistakes in the apologist's reasoning and explain why it is effectively impos...

I want to add a mechanism which might contribute to a weakening of institutions that is related to the 'stronger memes' described by ete (I have not thought this out properly, but I am quite confident that I am pointing at something real even if I might well be mistaken in many of the details):

In myself, and I think this is quite common, while considering my life/career options, I noticed an internal drive (think elephant from elephant in the brain) that made me focus on the highest-...

I am not sure whether my take on this is correct, so I'd be thankful if someone corrects me if I am wrong:

I think that if the goal was only 'predicting' this bit-sequence after knowing the sequence itself, one could just state probability 1 for the known sequence.

In the OP instead, we regard the bit-sequence as stemming from some sequence-generator, of which only this part of the output is known. Here, we only have limited data such that singling out a highly complex model out of model-space has to be weighed against the models' fit to the bit-sequence.

Thanks for sharing!

2benkuhn2y
Thanks, fixed!

I was also surprised to learn about this formalism at my university, as it wasn't mentioned in either the introductory or the advanced lecture on QM, but turns out to be very helpful for understanding how/when classical mechanics can be a good approximation in a QM universe.

The maths you are using looks a bit different than what I am used to, but I am somewhat confident that your uncalibrated experiment is equivalent to a suitably defined decohering quantum channel. The amplitudes that you are calculating would be transition amplitudes from the prepared initial state to the measured final state (Denoting the initial state as |i>, the final state as |f> and the time evolution operator as U, your ...

2justinpombrio3y
Thanks for all the pointers! I was, somewhat embarrassingly, unaware of the existence of that whole field.

I’m just sick of struggling through life. The inefficiencies all around me are staggering and overwhelming.

Your mileage will vary, but a train of thought that helped me change my perspective on this (and I fully endorse this shift) was to realize that my emotions were ill-calibrated:

When I considered the state of the world, my emotional reaction was mostly negative, but when I tried to compare this reaction to a world in which earth is replaced by a lifeless rock I realized that this would clearly not be an improvement. After contemplating this, I d...

Up-voted for thoroughly putting the idea into LessWrong context - I enjoyed being reminded of all the related ideas.

A thought: I am a bit surprised that one can distil a single belief network explaining a whole lot of the variance of beliefs across many people. This makes me take the idea more seriously that a large number of people regularly do have very similar beliefs (down to the argumentative structure). Remembering You Have About Five Words, this surprises me as I would expect a less reliable transmission of beliefs? (It might well be that I am just misunderstanding something)

Now reading the post for the second time, I again find it fascinating – and I think I can pinpoint my confusion more clearly now:

One aspect that sparks confusion when matched against my (mostly introspection + lesswrong-reading generated) model, is the directedness of annealing:
On the one hand, I do not see how the mechanism of free energy creates such a strong directedness as the OP describes with 'aesthetics',
on the other hand if in my mind I replace the term "high-energy-state" with "currently-active-goal-function(s)"...

4Michael Edward Johnson2y
This will be a terribly late and very incomplete reply, but regarding your question:

> Is there some mechanism that would allow for evolution to somewhat define the 'landscape' of harmonics? Is reframing the harmonics as goals compatible with the model?

Something like this seems to be pointed at in the quote:

>> Panksepp’s seven core drives (play, panic/grief, fear, rage, seeking, lust, care) might be a decent first-pass approximation for the attractors in this system.

A metaphor that I like to use here is that I see any given brain as a terribly complicated lock. Various stimuli can be thought of as keys. The right key will create harmony in the brain's harmonics. E.g., if you're hungry, a nice high-calorie food will create a blast of consonance which will ripple through many different brain systems, updating your tacit drive away from food seeking. If you aren't hungry -- it won't create this blast of consonance. It's the wrong key to unlock harmony in your brain. Under this model, the shape of the connectome is the thing that evolution has built to define the landscape of harmonics and drive adaptive behavior. The success condition is harmony. I.e., the lock is very complex, the 'key' that fits a given lock can be either simple or complex, and the success condition (harmony in the brain) is relatively simple.

I am very much impressed by the exchange in the parent-comments and cannot upvote sufficiently.

With regards to the 'mental motion':

In contrast, the model description you gave made it sound like craving was an active process that one could simply refrain from [...]

As I see it, the perspective of this (sometimes) being an active process makes sense from the global workspace theory perspective: There is a part of one's mind that actually decides on activating craving or not. (Especially if trained through meditation) it is possible to connect ...

I think the meaning behind 'identical particles' is very hard to pin down without directly using mathematical definitions*. The analogy with (secretly numbered) billiard balls gives a strong intuition for non-identical particles. There are also intuitive examples that behave more like identical particles:

For example, the intuition for symbols nicely matches identical symbol/particle behaviour:

If I represent a Helium atom with the symbol "H" and no atom with "_", the balloons interior might be described by

"H__H_H____H__H__...

That was the first thing I did when I created an account here.

Oops - I didn't notice the 'load more' option for the posts on your profile earlier, I upvoted your post now.

I have not yet written any posts myself and have only skimmed the detailed rules about karma some time ago, but I can easily imagine that the measures against spam can sometimes lead good posts from new accounts to be overlooked.

a) I liked reading your guide: You managed to include many important LW-related concepts while still keeping a hands-on feeling. This makes it a nice reference for people who do not enjoy a more technical/analytical approach. Have you considered creating a link-post on lesswrong?

b) You write:

The good news is that the virtuous cycle here also works: I've found that if one person is consistently unusually virtuous in their conversations and arguments, a little bubble of sanity spreads around that person to everyone in the vicinity over time.

This see...

1[anonymous]3y
Yes, I didn’t have niceness fields intentionally in mind when I commented, but it is definitely the same idea. That was the first thing I did when I created an account here. It got no upvotes and did not get promoted to front page, so... maybe it was just too much to digest from somebody with no karma at the time?

One could say that there is still a difference between probabilities so high/low that you can use ~1/~0 writings and probable but not THAT probable situations such as 98:2

I don't think that Eliezer would disagree with this.

As I understand it, he generally argues for following the numbers and in this post he tries to bind the reader's emotions to reality: He gives examples that make it emotionally clear that it already is in our interest to follow the numbers ('hot water need not *necessarily* burn you, but you correctly do not count on th...

Thanks for writing this post! Your writing helps me a lot in tying together other's claims and my own experiences into a more coherent model.

As Richard_Kennaway points out in their comment, the goal of insight meditation and 'enlightenment' is not necessarily the same as the goal of rationality (e.g. instrumental rationality/shaping the world's future towards a desired goal seems a part of rationality but not of 'enlightenment' as far as I can tell). I would be very interested in your opinion of how instrumental rationality r...

2Kaj_Sotala3y
I do think that this stuff is relevant for instrumental rationality as well; we'll get to the details of how once I start talking about unsatisfactoriness / craving in the next post.

My original reading was 'there was less arrogance in Einstein's answer than you might think'. After rereading Eliezer's text and the other comments again today, I cannot tell how much arrogance (regarding rationality) we should assume. I think it is worthwhile to compare Einstein not only to a strong Bayesian:

On the one hand, I agree that an impressive-but-still-human Bayesian would probably have accumulated sufficient evidence at the point of having the worked-out theory that a single experimental result against the theory is not enough...

I do not have any experience with tulpas, but my impression of giving one's models the feel of agency is that one should be very careful:

There are many people who perceive the world as being full of ghosts, spirits, demons, ..., while others (and science) do not encounter such entities. I think that perceiving one's mental models themselves as agentic is a large part of this difference (as such models can self-reinforce by triggering strong emotions)

If I model tulpas as a supercharged version of modelling other people (where the tulpa may be experienced ...

Regarding "intuitive moral sense", I would add that one's intuitions can be somewhat shaped by consciously thinking about their implications, noticing inconsistencies and settling on solutions/improvements.

For example, the realisation that I usually care about people more the better I know them made me realize that the only reason I do not care about strangers at all is the fact that I do not know them. As this collided with another intuition that refuses such a reason as arbitrary (I could have easily ended up knowing and thus caring for di...