Thank you.

I really like your framing of home - it seems very close to how John Vervaeke describes it, but somehow your description made something click for me.

I wish to be annealed by this process.

I'd like to share a similar framing of a different concept: beauty. I struggled for a while with what I should call beautiful, as there seemed to be some objectivity to it, but also loads of seemingly arbitrary subjectivity, which didn't let me feel comfortable with finding something beautiful. All the criteria I could use to call something beautiful just seemed off. A frame which helped me re-conceptualize much of my thinking about it is:

Beautiful is that which contributes to me wishing to be a part of this world.

What I really like about this framing is how compatible it is with being agentic, and that it emphasizes subjectivity without any sense of arbitrariness.

I will have to try this, thanks for pointing to a mistake I have made in my previous attempts at scheduling tasks!

One aspect which I feel is also important to you (and is important to me) is that the system itself has some beauty to it. I guess this is mostly because using the system should feel more rewarding than the alternative of happening to forget about it, so that it can become a habit.

I recently read (/listened to) the shard theory of human values and I think that its model of how people decide on actions and especially how hyperbolic discounting arises fits really well with your descriptions:

To sustain ongoing motivation, it is important to clearly feel the motivation/progress in the moment. Our motivational system does not have access to an independent world model which could tell it that the currently non-motivating task is actually something to care about, and it won't endorse the task if there is no in-the-moment expectation of relevant progress. As you describe, your approach feels more like "making the abstract knowledge of progress graspable in-the-moment to one's motivational system" rather than "trying to trick it into doing 'the right thing'".

Regarding the transporter:

Why does "the copy is the same consciousness" imply that killing it is okay?

From these theories of consciousness, I do not see why the following would be ruled out:

  • Killing a copy is just as bad as killing "the sole instance"
  • It fully depends on the will of the person

Oh, right - it seems I actually drew B instead of C2. Here is the corrected C2 diagram:

Okay, I think I managed to make at least the case C1-C2 intuitive with a Venn-type drawing:

(edit: originally did not use spades for C1)

The left half is C1, the right one is C2. In C1 we actually exclude both some winning 'worlds' and some losing worlds, while C2 only excludes losing worlds.
However, for symmetry reasons that I find hard to describe in words, but which are obvious in the diagrams, C1 is clearly advantageous and has a much better winning/losing ratio.
 
(Note that the 'true' Venn diagram would need to be higher-dimensional so that one can have e.g. the aces of hearts and clubs without also having the other two. But thanks to the symmetry, the drawing should still lead to the right conclusions.)
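Here is also a quick Monte Carlo check of the winning/losing ratio claim. It is only a sketch under assumed rules (a 5-card hand from a standard deck, "winning" = holding at least two aces, and B / C1 / C2 as I understand them), so the exact numbers depend on those assumptions.

```python
import random
from itertools import product

# Assumed rules (sketch): HAND_SIZE cards from a 52-card deck, "winning" = >= 2 aces.
# B  = being told "your hand contains an ace"
# C1 = being told "your hand contains the ace of spades"
# C2 = one ace is picked uniformly at random from your hand and named,
#      and it happens to be the ace of spades
HAND_SIZE = 5
N_TRIALS = 1_000_000

DECK = [(rank, suit) for rank, suit in product(range(13), "SHDC")]  # rank 0 = ace
ACE_OF_SPADES = (0, "S")

tallies = {"B": [0, 0], "C1": [0, 0], "C2": [0, 0]}  # [times condition holds, times it holds and we win]
rng = random.Random(0)

for _ in range(N_TRIALS):
    hand = rng.sample(DECK, HAND_SIZE)
    aces = [card for card in hand if card[0] == 0]
    win = len(aces) >= 2

    if aces:                                        # B
        tallies["B"][0] += 1
        tallies["B"][1] += win
    if ACE_OF_SPADES in hand:                       # C1
        tallies["C1"][0] += 1
        tallies["C1"][1] += win
    if aces and rng.choice(aces) == ACE_OF_SPADES:  # C2
        tallies["C2"][0] += 1
        tallies["C2"][1] += win

for name, (n, wins) in tallies.items():
    print(f"P(win | {name}) ~ {wins / n:.4f}")
```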

Thanks for the attempt at giving an intuition!

I am not sure I follow your reasoning:

Maybe the intuition here is a little clearer, since we can see that winning hands that contain an ace of spades are all reported by C1 but some are not reported by C2, while all losing hands that contain an ace of spades are reported by both C1 and C2 (since there's only one ace for C2 to choose from)

If I am not mistaken, this would at first only say that "in situations where I have the ace of spades, being told C1 implies higher chances than being told C2"? Each time I try to go from this to C1 > C2, I get stuck in a mental knot. [Edited to add:] With the diagrams below, I think I now get it: if we are in C2 and are told "You have the ace of spades", we have the same grey/losing area as in C1, but the winning worlds only had a 1/2 to 1/4 chance (one over the number of actual aces) of telling us about the ace of spades. Thus we should correspondingly reduce our belief that we are in these winning worlds. I hope this is finally correct reasoning. [end of edit]
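Trying to formalize this edit (assuming "winning" means holding at least two aces, all hands equally likely a priori, and k = the number of aces in the hand): conditioning on the report weights each world by the probability that it would produce the report,

```latex
\frac{P(\text{win}\mid \text{report } A\spadesuit)}{P(\text{lose}\mid \text{report } A\spadesuit)}
  \;=\;
  \frac{\sum_{\text{winning worlds containing } A\spadesuit} P(\text{report } A\spadesuit \mid \text{world})}
       {\sum_{\text{losing worlds containing } A\spadesuit} P(\text{report } A\spadesuit \mid \text{world})}
```

Under C1 the report probability is 1 in both sums. Under C2 it is 1/k <= 1/2 in the numerator (winning hands have k >= 2 aces) but still 1 in the denominator (a losing hand containing the ace of spades has only that one ace to name), so the C2 odds are at most half the C1 odds.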

I can only find an intuitive argument for why B ≠ C is possible: if we initially imagine being with equal probability in any of the possible worlds, then when we are told "your cards contain an ace" we can rule out a bunch of them. If we are instead told "your cards contain this ace", we have learned something different, and also something more specific. From this perspective it seems quite plausible that C > B.

Though it's unclear to me if confidence intervals suggest this notation already. If you had less chance of moving your interval, then it would already be a smaller interval, right?

Counterexample: if I estimate the size of a tree, I might come up with an 80% CI [5 m, 6 m] by eyeballing it and expect that some friend will do a more precise measurement tomorrow. In that case, the 80% CI [5 m, 6 m] still seems fine even though I expect the estimate to narrow down soon.

If the tree is instead from some medieval painting, my 80% CI [5 m, 6 m] could still be true while I do not expect any significant changes to this estimate.

 

I think that credal resilience is mostly about the expected ease of gaining (or losing??) additional information that would actually add to the current estimate.
But there is something to your statement: if I expected that someone could convince me that the tree is actually 20 m tall, this should already be included in my interval.
 

I like the idea of your proposal -- communicating how solidified one's credences are should be helpful for quick exchanges on new topics (although I could imagine that one has to be quite good at dealing with probabilities for this to actually provide extra information).

Your particular proposal "CR [<probability of change>, <min size of change>, <time spent on question>]" is unintuitive to me:

  • In "80% CI [5,20]" the probability is denoted with %, while its "unit"-less in your notation
  • In "80% CI [5,20]", the braces [] indicate an interval, while it is more of a tuple in your notation

A reformulation might be

80% CI [5,20] CR [0.1, 0.5, 1 day] --> 80% CI [5,20] 10% CR [0.5]

Things I dislike about this proposal:

  • The "CR" complicates notation. Possibly one could agree on a default time interval such as 1 day and only write the time range explicitly if it differs? Alternatively, "1 day CR" or "CR_1d" might be usable
  • the "[0.5]" still feels unintuitive as notation and is still not an interval. Maybe there is a related theoretically-motivated quantity which could be used? Possibly something like the 'expected information gain' can be translated into its influence on the [5,20] intervals (with some reasonable assumptions on the distribution)? 
    • "80% CI [5,20] ⋇0.5 @ 10% CR" might be an alternative, with the "\divideontimes" symbol (⋇) being a variant of the multiplication sign and hinting at the possible modification of the interval [5, 20]. For LaTeX-free notation, "[5, 20] x0.5" might be a suitable simplified version.

Overall, I don't yet have a good intuition for how to think about expected information gain (especially if it is someone else's expectation).
Also, it would be nice if there were a theoretical argument that one of the given numbers is redundant -- getting an impression of what all six numbers mean exactly would take me long enough that it is probably better to just have the whole sentence:

My 80% confidence interval is 5-20. I think there's 10% chance I'd change my upper or lower bound by more than 50% of the current value if I spent another ~day investigating this.

But this would of course be less of a problem with experience :)
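For what it's worth, here is a rough sketch of the "translate expected information gain into its influence on the interval" idea, under my own added assumptions that the belief is roughly Gaussian and that information is measured as entropy reduction:

```latex
% Entropy of a Gaussian belief with standard deviation \sigma, and the effect of gaining I nats:
h(\sigma) = \tfrac{1}{2}\ln\!\left(2\pi e\,\sigma^{2}\right),
\qquad
I = h(\sigma_{0}) - h(\sigma_{1}) = \ln\frac{\sigma_{0}}{\sigma_{1}}
\quad\Longrightarrow\quad
\sigma_{1} = e^{-I}\,\sigma_{0}.
```

Since the 80% CI width is proportional to σ, an expected gain of, say, I = 0.5 nats would shrink the interval by a factor of about e^{-0.5} ≈ 0.61 -- which is the kind of number the "x0.5" factor above could encode.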

After reading your sequence today, one additional hypothesis came to my mind, which I would like to make the case for (note that my knowledge of ML is very limited, so there is a good chance that I am only confused):

Claim: Noise favours modular solutions compared to non-modular ones.

What makes me think that? You mention in Ten experiments that "We have some theories that predict modular solutions for tasks to be on average broader in the loss function landscape than non-modular solutions" and propose to experimentally test this.
If this is a true property of a whole model, then it will also (typically) be the case for modules on all size scales. Plausibly, the presence of noise creates an incentive to factor the problem into sub-problems which can then be solved with modules in a more noise-robust fashion (= a broader optimum of the solution). As this holds on all size scales, we get a bias towards modularity (among other things).
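To illustrate the "noise favours broader optima" step with a toy sketch of my own (not something from the sequence): under zero-mean parameter noise, the expected loss penalty scales with the curvature at the optimum, so a sharp optimum is punished much more than a broad one even when both have the same minimum loss.

```python
import numpy as np

# Toy illustration: two 1-D "loss landscapes" with the same minimum value 0 at w = 0,
# differing only in curvature (how broad the optimum is).
def sharp(w):
    """Narrow/sharp optimum (high curvature)."""
    return 50.0 * w**2

def broad(w):
    """Broad optimum (low curvature)."""
    return 0.5 * w**2

sigma = 0.1                     # std of the assumed zero-mean weight/neuron noise
rng = np.random.default_rng(0)
noise = rng.normal(0.0, sigma, size=1_000_000)

# Expected loss under parameter noise: E[a * w^2] = a * sigma^2,
# i.e. the penalty is proportional to the curvature a.
print("E[loss], sharp optimum:", sharp(noise).mean())  # ~ 50  * 0.01 = 0.5
print("E[loss], broad optimum:", broad(noise).mean())  # ~ 0.5 * 0.01 = 0.005
```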

Is this different from the proposed modularity drivers you mention? I think so. In experiment 8 you do mention input noise, which is what made me think about this hypothesis. But I think that 'local' noise might provide a push towards modules in all parts of the model via the above mechanism, which seems different from input noise.

Some further thoughts

  • even if this effect is true, I have no idea about how strong it is
  • noisy neurons actually seem a bit related to connection costs to me, in that (for a suitable type of noise) receiving information from many inputs could become costly
  • even if true, it might not make it easier to actually train modular models. This effect should mostly "punish modular solutions less than non-modular ones" rather than actively help in training. A quick online search for "noisy neural network" indicated that these have indeed been researched and that performance does degrade. My first click mentioned the degrading performance and aimed to minimize the degradation. However, I did not see non-biological results after adding "modularity" to the search (didn't try for long, though).
  • this is now pure speculation, but when reading "large models tend towards modularity", I wondered whether there is some relation to noise. Could something like the finite bit resolution of weights lead to an effective noise that becomes significant at sufficiently large model sizes? (The answer might well be an obvious no.)

I just found the link for their summary on job-satisfaction in particular: https://80000hours.org/career-guide/job-satisfaction/ 
