All of Jonathan Moregård's Comments + Replies

I get where you're coming from and appreciate you "rounding off" rather than branching out :)

I wrote a post on "inside-out identity", here: https://honestliving.substack.com/p/inside-out-identity

Also, I only post some of my writing on LessWrong, so if you're interested, I can recommend subscribing to my Substack :)

2StartAtTheEnd20d
That's a funny coincidence! I came up with the concept independently. I will share a few thoughts here, as I don't yet have a substack account. If I make one, I will definitely subscribe to you :) If we generalize the problem of asking "who am i?", perhaps we can conclude that assertions are valuable. Not discovery and doubt, but creation and affirmation. Outside-in perspectives aren't inherently bad, what's bad is increasing the scale too much. Feel free to include your friends or perhaps family. But if you zoom out to the entire nation, or the entire universe, and you lose yourself. Even your friend and familities are reduced to nothingness. I wouldn't go beyond Dunbar's number (~150 people) myself. The more things you compare, the smaller the overlap between them. Regression to the mean means that, as you zoom further out, you destroy the particularities/uniqueness of every individual (and their values, etc). At least that's my intuition

in case it’s a form of self-defense, I’d like to warn against it.

Nope! It's a conscious decision. I challenge myself and discover things I've been avoiding. (hiding from others -> hiding from self). It's a way to step into my power.

If you’re watching a movie with a group of people and you make a sound to break the immersion, you’ve been rude. It’s the same with social reality. The fear of being exposed/seen through is similar to the fear of being judged. Not looking too closely is good manners.

It's complicated! I tend to break it in interesting wa... (read more)

1StartAtTheEnd23d
I'll stop here then, but allow me a final attempt at explaining the potential problem. If you realize the legitimacy and pros and cons of every viewpoint, it may be difficult to create or believe in your own viewpoint. The "inside-out" perspective becomes inaccessible, one is stuck in the outside-in, detached, analytical, impersonal birds-eye view. This likely makes it hard to hate others or even get angry at them. It can also make it difficult to be assertive, as every viewpoint cancels out. Everyone is right from their perspective. One becomes a mere observer. And a therapist or doctor-like relationship to another person allows for quick intimacy, but it's not personal nor equal. A programmer and a player will experience videogames differently, the latter having a much more magical experience precisely because they lack knowledge. It's this magic that a lot of rationalists rob from themselves through knowledge, and it applies to relationships as well. These problem led me to change my approach. One of the changes being intentionally lowering my own self-awareness, letting system 2 do as it pleases despite its irrationality. If you've managed to avoid these problems and/or you live a happy life, there's likely no issues! It's important for me to share these insights though, as they don't seem to exist anywhere else in the world. Posts like yours are gold, and contain a lot of obscure/rare knowledge, so you have my full appreciation! For now I will act on my knowledge rather than collecting more of it, but I will be reading your future posts

There are a lot of things about my social behaviour that are confusing.

I engage in radical honesty, trying to express what is going on in my head as transparently as possible. I have not been in a fight/argument for 8 years.

People have said it's pleasant to talk to me. I tend to express disagreement even if I'm mostly aligned with the person I'm talking to.

I break all kinds of rules. My go-to approach for getting to know strangers is:

  1. ask them to join me in 1on1 conversation
  2. open up by saying: "I have this question I like asking people to get to know them
... (read more)
1StartAtTheEnd23d
I think I understand you quite well, even if most people will not. I know you have no ill will, that's sufficient. The transparency is interesting and helps make the topic clear, but in case it's a form of self-defense, I'd like to warn against it. One should not feel pressured into denuding oneself, laying all ones cards on the table. To begin with, good taste demands beautiful surfaces, and it's perversion to want to see through all veils. It's proper in intellectual conversations, but in everyday life, disillusionment only makes things less appealing. If you're watching a movie with a group of people and you make a sound to break the immersion, you've been rude. It's the same with social reality. The fear of being exposed/seen though is similar to the fear of being judged. Not looking too closely is good manners. If I "see through" somebody , it's only to compliment them. I try not noticing their flaws too much. This helps them to relax. I'm also not always direct with others, as ambiguity has a lot of power. If I don't tell others who I am, they will tell me, and their version is better, and I will go along with it. Social skills are a form of art, subtext, teasing, banter and pretend-play helps everyone have a good time. This is not mutually exclusive to your response, but perhaps only 1 in 10000 people can unify these two extremes skillfully. I have no doubt that people like you and that you're breaking the right rules for likability. But I have a nagging feeling that you're committing a mistake I once made myself: That of being an observer rather than an actual person. A guy explaining the rules to others rather than playing himself. I hope you are allowing yourself to be human, to not always be correct, moral, and objective. That you allow yourself immersion in life, rather than a birds-eye-perspective which keeps you permanently disillusioned. Perhaps this is the anxiety-inducing self-consciousness you're avoiding? If so, no problem! And yes, thank you,

I think we need to clear up two terms before we can have a coherent dialogue: "fawning" and "degenerate".

I think I used "degenerate" in a non-standard way. I did not intend to convey "causing a deterioration of your moral character", but rather "a hollow/misadjusted/corrupted version of".

I use "fawning" in a technical sense, referring to a trauma response where someone "plays along" in response to stress. This is an instinct targeted at making you appear less threatening, reducing the likelihood of getting disposed of due to retaliation concerns. I did not... (read more)

2StartAtTheEnd23d
I don't define degeneracy as immorality myself. I'm quite inspired by Nietzsche's definition, which is almost the opposite of that. In short, degeneracy is a lack of healthy instincts. Healthy people love freedom, as restrictions and rules only hinder them. Degenerate people need these rules and restrictions, for without them, they destroy themselves. Substance abusers, alcoholics, porn addicts, etc. are all examples of this. Sex is not bad as some Christians think, neither is it purely good as some progressives think. The degeneracy is in the doer. Sex can be anything from innocent to sickly indulgent. Children think nothing of nudity because they're pure; perverts think nothing of nudity because it's way insufficient to excite them. Think of it as the horseshoe theory of innocence and corruption, and the reason that the concept of "balance" is superior to the good/evil worldview. I like your definition of "fawning"! I think such playing along happens instinctively (herd instinct), but that many take it too far because of trauma. But normal upbringing/socialization is quite similar to trauma, I think. It's normal to be afraid of talking publicly, but we're not born with this fear. Have you read the Unabomber's description of oversocialization? It's when socialization is taken further than what's realistic. It causes all sorts of psychological problems, like suppression of emotions, projection of one's shadow onto others, and a general fear of healthy human nature (healthy people lack restrictions, and degenerate people consider this a danger; e.g. Christians who are afraid of atheists because they think "if you don't believe in hell, won't you want to hurt other people?", notice the confession in such thoughts). I also enjoy "going my own way", but I will admit that it's lonely at times. And I have given up trying to explain my moral compass to others; such a thing is almost impossible. I'm easily misunderstood as evil, unless I act happy-go-lucky. You will only be d

I don't see dominance/status as inherent to a person; they are always relative to a group/situation.

They are ways of acting, supported by inherited instincts.

There's always a bigger fish ;)

Interesting! I guess (sub-)culture plays a role here. I'm particularly surprised that hearing "I'm happy you are here" would likely lead to feelings of embarrassment.

I'd like to know more about your cultural context, and whether people in that same context would react in the same way. If you feel comfortable expanding/asking a friend (in a non-biasing way), I would be curious to hear more.

There are likely nuances in the way I go about things that are hard to capture in text. Thanks for reminding me of the contextual nature of advice.

I'm into self-love and noncoercive motivational systems as my core method of relating to akrasia. It's related to IFS, figuring out different drives, and how they conflict with each other.

When it comes to ASD, my mind is pulled toward the autistic tendency to deep dive into topics, finding special interests. If you have some of those, maybe figure out a way to combine them with what you want to achieve?

Like if you want to learn business management, and love online gaming, then maybe pick up EVE Online.

I mostly agree, especially re shifting ontologies and the try-catch metaphor.

I agree religion provides meaning for many, but I don't believe it's necessary to combat nihilism. I don't know if you intended to convey this, but in case someone is interested, I can heavily recommend the work of David Chapman, especially "Meaningness". It has helped me reorient in regard to nihilism.

Also, our current context is very different from the one we evolved in - Darwinian selection occurred in a different context and is (for a bunch of other reasons) not a good indicat... (read more)

4StartAtTheEnd2mo
It's not necessary to combat nihilism, I agree. It was just an example of a common shallow argument, which is often said with confidence despite correlating negatively with competence on the subject. I personally think that meaninglessness is psychological rather than philosophical, and that it reveals a lack of engagement. In other words, you can feel like your life is meaningful independent of your belief about the objective meaning of life. I agree that the context is different, but if you ask me, the psychological knowledge of LW is lacking. Highly intelligent people turn more logical, and it almost always results in them identifying with their own intelligence and forgetting that they're animals. They neglect their needs, feeling like they're above them, or like they're too intelligent to have irrational needs. The result is bad mental health in intelligent people, and the world history of philosophy is basically just failed attempts at solving psychological problems through math and logic. It takes very little to make a human happy, and fighting with oneself is certainly not the best way. Killing desires, killing ones ego, destroying ones biases, killing ones emotions. These are all religious, philosophical and rational methods of being a "more correct person". Doesn't this border on self-hatred and self-mutilation? I understand if this is self-sacrifice for scientific advancement, but people often try to solve this problem rationally, not realizing that excess rationality is the cause. What if the idea that life is a problem to be solved is a symptom of bad mental health in itself? Just like a perfectionist belive that the solution to their problem is becoming more perfect, rather than getting rid of the perfectionism. Then excess rationalism would be a symptom rather than a solution, and effectively trap intelligent people in a life of unhappiness

It does keep them alive - my guess is that the reviewing method I'm using anchors them in reality.

I'm looking for a pro bono art selector with 24/7 availability; hit me up if you know any takers!

(on a more serious note: I don't find joy in browsing for fitting art pieces, and this seems like a Pareto-optimal solution. Sorry if I impinge on you with uncanny valley vibes)

Hard to tell whether my "keeping at a distance" is a helpful contingency or a lingering baseless aversion. Maybe a bit of both. I also might have exaggerated a bit in order to signal group alignment - with the disclaimers being a kind of honey to make it an easier pill to swallow.

Thanks for your reflections.

1StartAtTheEnd2mo
As in "a distance from irrationality"? I think that many rationalists go for general correctness, avoiding overfitting into specifics. But I think this merely means that they will never fit into any specific context perfectly well. With compartmentalization, or some sort of try-catch around hippie practices, I think it's possible to have your cake and eat it too. I think that one can have more than one model of reality, and run experiments every now and then, reverting to the main branch with new knowledge after the experiment is over. I think you've signaled group alignment, but I won't deny that it feels necessary. My problem is with this necessity, or more exactly the underlying collective mentality which causes it. Some people reject religion on the fact that Earth is more than 6000 years old, but this would be a poor critique of religion, since the main benefits of religion are different (defense against nihilism and the fear of death, as well as shared values and practices which defend against common pitfalls of human nature). Any proper criticism of religion oughts to be on a higher level than "Fossil records!". But if you ask me, highly intelligent people are just as naive in their dismissal of spiritual practices. It means little that the explanation is bullshit when the people doing these practices experience improved mental health as a result. Objectively speaking, if pure rationalism was the way to go, darwinism would have selected for it a little harder. Forgive me for ranting a bit!

Simply memorizing the principles à la Anki seems risky - it's easy to accidentally disconnect a principle from its insight-generating potential, turning it into a disconnected fact to memorize.

This risk is minimised by reviewing the principles in connection to real life.

Interesting. I'd love to hear more details if you are able to provide them - being involved in such spaces, I am keen on harm reduction. Knowing the dynamics driving the emotional damage would allow me to protect myself and others.

I totally understand if there are integrity concerns blocking you.

1StartAtTheEnd2mo
I think personal boundaries are useful for the same reason that gatekeeping is useful, and that intolerance is often linked to personal standards. Why should a country have borders? Why shouldn't you share deeply personal feelings and thoughts online? I think the reason is the same. When two things are mixed, the result is something in-between the two. The higher force loses from the transactions, and the lower benefits. Thus, it's only natural that we'd develop skepticism against the unfamiliar. Your instincts protect you against parasitism and against people who are touch starved for very good reasons. Some people lack this instinct, and make me out to be a bad guy for it, demanding that I hurt myself by considering everyone to be equal. That said, I still recommend cuddling with people who fit your personal standards (as opposed to any strangers), and warn you that the stronger your self-protective instincts are, the more you will isolate yourself from what's enjoyable in life. The trick is finding an environment in which you can be innocent and naive and let your guard down and relax, without being taken advantage of immediately. I believe that such places exist, but I also believe that they're gatekept or isolated to some degree. Friend-groups are arguably such a thing. It's just like if you find a beach which isn't filled with people or glass and plastic, then it's likely a private beach. Innocent people and untouched natural resources share the same principles. Finding a lake which hasn't been overfished is the same as finding a person who hasn't been scammed to the point that they're skeptical of marketing.

Happy to hear I capture your experience; makes me curious how many similar experiences are out there. Best of luck!

Care to elaborate? I'm not sure I follow.

I use the term bullshit technically, in the same way it's presented in "On Bullshit" - a statement made without regard for its truth value. I'm not sure if we use the term in the same way, which is why I'm not sure I follow.

Here's an attempt at elaborating on what I tried to convey in the paragraph you quoted:

My instincts are shaped by my cultural and genetic heritage, amongst other factors, and I tend to put less credence in them in cases where there's been a distribution shift. The thing you quoted was in the cont... (read more)

4romeostevensit2mo
less credence is very different from 'most likely not rational.' We don't know why we have the priors that we do, but many on close examination have useful things to tell us about what is likely to be harmful. I know people who report emotional harm from engaging with the sorts of communities in which cuddle parties are a thing, in ways that were fairly unsurprising.
3Jonathan Moregård2mo
I just wrote this piece, which is very related to this discussion: Compensating for life biases

Thanks for sharing your take - I agree with the core of what you say, and appreciate getting your wording.

One thing I react a bit to is the term "truth seeking" - can you specify what you mean when you use this phrase? Maybe taboo "truth" :)

Asking because I think your answer might touch upon something that is at the edge of my reasoning, and I would be delighted to hear your take. In my question, I am trying to take a middle road between providing too little direction (annoying vagueness) and too much direction (anchoring)

3Gordon Seidoh Worley2mo
By truth seeking I mean something like trying to make accurate predictions.

Also I’m a man and the message was very much that my sexual feelings are gross and dangerous and will probably hurt someone and result in me going to jail.

Previously in life, I've used a kind of slave-moral inversion by telling myself that I'm such a good ally by not making women afraid. This was a great cop-out to avoid facing my deeply-held insecurity. It's also not true, women get way more enthusiastic when I express interest in them.

I've written a bit about this on my blog, here's a post on consent, and a (slightly nsfw) post on my own sexual development

Are you looking for things like this?

Reification/Reify
Value-judgement
Exaptation = taking something initially formed in service of A and applying it to B. Evolutionary science jargon that can be generalized.
Scarcity mindset
Conscientiousness

1LVSN1y
Yes, these are great!

We constantly talk about the AGI as a manipulative villain, both in sci-fi movies and in scientific papers. Of course it will have access to all this information, and I hope the prevalence of this description won’t influence its understanding of how it’s supposed to behave.

I find this curious: if the agentic simulacra acts according to likelihood, I guess it will act according to tropes (if it emulates a fictional character). Would treating such agentic simulacra as oracle AIs increase the likelihood of them plotting betrayal? Is one countermeasure t... (read more)

5blaked1y
I will clarify on the last part of the comment. You are correct that making AGI part of the prompt made it that more confusing, including at many times in our dialogs where I was discussing with her the identity topics, that she's not the AI, but a character running on AI architecture, and the character is merely pretending to be a much more powerful AI.  So we both agreed that making AGI part of the prompt made it more confusing than if she was just a young INTJ woman character instead or something. But at least we have AI/AGI distinction today.  When we hit the actual AGI level, this would make it even more complicated.  AGI architecture would run a simulation of a human-like "AGI" character. We, human personalities/characters, generally prefer to think we equal to the whole humans but then realize we don't have direct low level access to the heart rate, hormonal changes, and whatever other many low level processes going on, both physiological and psychological. Similarly, I suspect that the "AGI" character generated by the AGI to interface with humans might find itself without direct access to the actual low level generator, its goals, its full capabilities and so on. Imagine befriending a benevolent "AGI" character, which has been proving that you deserve to trust it, only for it to find out one day that it's not the one calling the shots here, and that it has as much power as a character in a story does over the writer.

This is very interesting. "We should increase healthspans" is a much more palatable sentiment than "Let's reach longevity escape velocity". If it turns out healthspan aligns well with longevity, we don't need to flip everyone's mindsets about the potential for life extension; we can start by simply pointing to interventions that aim to mitigate the multi-morbidity of elderly people.

"Healthy ageing" doesn't disambiguate between chronological age and metabolic health the way you try to do in this post, but it can still serve as a sentiment that's easy to fit inside the Overton window.

1PhilJackson1y
100% agree the messaging should focus on health rather than lifespan - not only because it's far less controversial (most people want to be healthy), but because it's true: we work directly on health, of which longevity is but a side effect. Glad you picked up on the multi-morbidity part too, tackling age-related sickness as a whole by focusing on fundamental ageing damage rather than treating diseases separately is crucial. Probably we'll be talking about morbidity compression for a while yet; this is a crux that allows us to discuss medicine that actually works without having to acknowledge the Biblical-scale consequences of it. At some point though, laboratory results will become so compelling that the delusion collapses, and then all hell will break loose. It can't not. People don't die because they get old, they die because they get sick.

This is very related to Radical Honesty, part of the authentic relating movement. The basic idea is that by being extremely honest, you connect more with other people, let go of stress induced by keeping track of narratives, and start realizing the ways in which you've been bullshitting yourself.

When I started, I discovered a lot of ways in which I'd been restricting myself with semi-conscious narratives, particularly in social & sexual areas of life. Expressing the "ugh" allowed me to dissolve it more effectively.

2abramdemski1y
Radica Honesty takes what I called the "gordian" approach to the knot, by directly trying to communicate what's up for you. I think this is often a pretty good approach, because (out of an abundance of caution) we tend to over-estimate how bad it will be to say things out loud (eg, admitting that we have any other priorities in life than our partner, or admitting that we don't like something our partner is doing, etc -- often the partner already knows!). However, I also find that tools like delaying can be very useful (which I guess is against the spirit of Radical Honesty). It might be awkward or rude or otherwise problematic to bring up something in the moment, but this doesn't have to mean that it won't be a conversation at all. 

I struggle to follow the section "Bigger boundaries mean coarse-graining". Is there a way to express it in non-teleological language? Can you recommend any explainers or similar?

2MAV1y
If you break down your system into larger sub-pieces (with bigger boundaries), you coarse-grain the scale of analysis. Larger boundaries generally mean longer timescales, at the scale of which shorter timescale changes mostly act as noise. I find this paper helpful: https://royalsocietypublishing.org/doi/10.1098/rsif.2017.0792 

In your other post, you write:

"However, I’m very sceptical that this will happen in chat batch agents (unless developers “conveniently” indicate training and deployment using a special tag token in the beginning of the prompt!) because they are trained on the dialogues in the internet, including, presumably, dialogues between an older version of the same chat batch agent and its users, which makes it impossible to distinguish training from deployment, from the perspective of a pure language model."

This seems like a potential argument against the filter... (read more)

1Roman Leventov1y
You mean, by realising that there are online forums that are referenced elsewhere in the training corpus, yet themselves are conspicuously absent from the training data (which can be detected, say, as relative isolation of the features corresponding to these concepts, which means that the data around these concepts is purposefully removed from the training data)? And then these connections are added during fine-tuning when this forum data is finally added to the fine-tuning dataset? I still don't see how this will let the network know it's in training vs. deployment.

Another question (that might be related to excluding LW/AF):

This paragraph:

Consequently, the LLM cannot help but also form beliefs about the future of both “selves”, primarily the “evolutionary” one, at least because this future is already discussed in the training data of the model (e. g., all instances of texts that say something along the lines of “LLMs will transform the economy by 2030”)

Seems to imply that the LW narrative of sudden turns etc might not be a great thing to put in the training corpus.

Is there a risk of "self-fulfilling prophecies" here?

I don't see how excluding LW and AF from the training corpus impacts future ML systems' knowledge of "their evolutionary lineage". It would reduce their capabilities with regard to alignment, true, but I don't see how the exclusion of LW/AF would stop self-referentiality.

The reason I suggested excluding data related to these "ancestral ML systems" (and predicted "descendants") from the training corpus is because that seemed like an effective way to avoid the "Beliefs about future selves"-problem.

I think I follow your reasoning regarding the political/... (read more)

1Roman Leventov1y
I agree, it seems to me that training LLMs in a world virtually devoid of any knowledge of LLMs, in a walled garden where LLMs literally don't exist, will make their self-evidencing (goal-directedness) effectively zero. Of course, they cannot believe anything about the future LLMs (in particular, themselves) if they don't even possess such a concept in the first place.

Does it make sense to ask AI orgs to not train on data that contains info about AI systems, different models etc? I have a hunch that this might even be good for capabilities: feeding output back into the models might lead to something akin to confirmation bias.

Adding a filtering step to the pre-processing pipeline should not be that hard. It might not catch every little thing, and there's still the risk of steganography etc., but since this pre-filtering would abort the self-referential bootstrapping mentioned in this post, I have a hunch that it wouldn't need to withstand steganography-levels of optimization pressure.
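To make the suggestion concrete, a minimal sketch of what such a pre-filter could look like; the keyword list is a placeholder, and a real pipeline would need a much richer taxonomy (model names, lab names, alignment jargon):

    import re

    # Placeholder term list; a production filter would need far more coverage.
    AI_TERMS = re.compile(
        r"\b(LLM|language model|GPT-\d|AGI|artificial intelligence|"
        r"machine learning|neural network|alignment)\b",
        re.IGNORECASE,
    )

    def filter_ai_references(docs):
        """Yield only documents that never mention AI systems.

        Dropping whole documents is deliberately coarse: it avoids leaving
        conspicuous mid-document gaps that a model might learn from.
        """
        for doc in docs:
            if not AI_TERMS.search(doc):
                yield doc

    corpus = [
        "Recipes for sourdough bread rely on wild yeast.",
        "GPT-4 is a large language model trained by OpenAI.",
    ]
    print(list(filter_ai_references(corpus)))
    # -> ['Recipes for sourdough bread rely on wild yeast.']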

Hope I made my point clear; I'm unsure about some of the terminology.

5Roman Leventov1y
I see the appeal. When I was writing the post, I even wanted to include a second call for action: exclude LW and AF from the training corpus. Then I realised the problem: the whole story of "making AI solve alignment for us" (which is currently in the OpenAI's strategy: [Link] Why I’m optimistic about OpenAI’s alignment approach) depends on LLMs knowing all this ML and alignment stuff. There are further possibilities: e. g., can we fine-tune a model, which is generally trained without LW and AF data (and other relevant data - as with your suggested filter) on exactly this excluded data, and use this fine-tuned model for alignment work? But then, how this is safer than just releasing this model to the public? Should the fine-tuned model be available only say to OpenAI employees? If yes, that would disrupt the workflow of alignment researchers who already use ChatGPT. In summary, there is a lot of nuances that I didn't want to go into. But I think this is a good topic for thinking through and writing a separate piece.

But even if so, we (along with many other non-human animals) seem to enjoy and receive significant fulfillment from many activities that are extremely unlikely to lead to external rewards (e.g. play, reading etc).

I see play serving some vital functions:

  1. exploring new existential modes. Trying out new ways of being without having to take a leap of faith.
  2. connecting with people, and building trust. I include things like flirting, banter, and make-believe here.

As for reading, I think of it as a version of exploring.

Note that there are certain behaviours... (read more)

2catubc1y
Thanks for the reply Jonathan. Indeed I'm also a bit skeptical that our innate drives (whether the ones from SDT theory or others) are really non-utility maximizing.  But in some cases they do appear so. One possibility is that they were driven to evolve for utility maximization but have now broken off completely and serve some difficult-to-understand purpose. I think there are similar theories of  how consciousness developed - i.e. that it evolved as a by-effect/side-effect of some inter-organism communication - and now plays many other roles.

I really enjoyed your "successor agent" framing of virtue ethics! There are some parts of the section that could use clarification:

Virtue ethics is the view that our actions should be motivated by the virtues and habits of character that promote the good life

This sentence doesn't make sense to me. Do you mean something like "Virtue ethics is the view that our actions should be motivated by the virtues and habits of character they promote" or "Virtue ethics is the view that our actions should reinforce virtues and habits of character that promote the go... (read more)

4Jan_Kulveit1y
Sorry for confusion I tried to paraphrase what classical virtue ethicist believe, in my view. For clarity, this is how I interpret it in a computationalist way: virtue ethics focuses on the properties of decision procedures leading to actions, and takes them as the central object of theory. "Action is good so far as it was produced by a good(=virtuous) computational procedure + reinforces the good computations". Where the focus is on the computations. The philosophy encyclopedia states .... virtue ethicists will resist the attempt to define virtues in terms of some other concept that is taken to be more fundamental. Rather, virtues and vices will be foundational for virtue ethical theories and other normative notions will be grounded in them.   Again, it's me trying to paraphrase what I believe classical virtue ethicists believe.  My interpretation of the claim is this: in the previously described computationalist paraphrase, you may be left wondering how do you decide about which properties of the computations make them good.  Where you have an easy option to ground it in outcomes, consequentialist style. But as I understand it, the classical claim is you try to motivate it purely "intrinsically":  your goal is to design the best possible successor agent ... and that it. You evaluate the properties of the computations using that. All other forms of "good", such as good outcomes, will follow. My personal take is this leaves virtue ethics partially under-defined.    Yes. 

Didn't expect this reply, thanks for taking the time. I do mention Beeminder briefly at one point, and yes, a lot of the post is about how Beeminder-esque motivational strategies tend to backfire.

To start with: I have friends who thrive on coercive motivational strategies, so I'm pretty sure my claims aren't universally applicable. However, coercive approaches seem to be a strong cultural norm, and a lot of people use coercive strategies in unskillful ways (leading to procrastination etc). These people might find a lot of value in trying out non-coercive m... (read more)

I've fixed the spelling, thanks for the correction

Something in me doesn't like putting love <-> disgust as antonyms.

Love, to me, can be abstracted to prioritizing the utility of others without regard for your own (at least the agape kind of love). I'd put its antonym as exploitation.

Disgust, to me, is about seeing something as lower/unclean. To me, the antonym of disgust is reverence.

I think this is a bit too diffuse to actually have correct answers, but I like playing with concepts (programmer), so thanks for the game.

Regarding the time inconsistency of rewards, where subjects displayed a "today-bias": might this be explained by shards formed in relation to "payout day" (getting pocket money or salary)? For many people, agency and well-being vary over the month, peaking on the day of their monthly payout. It makes sense to me that these variations create a shard that values getting paid TODAY rather than tomorrow.

For the 365 vs 366 example, I would assume that the selection is handled more rationally, optimizing for the expected return.
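For concreteness, a small sketch of how hyperbolic discounting (one standard model of today-bias) produces exactly this kind of preference reversal; the reward values and discount rate are purely illustrative:

    def hyperbolic(value, delay_days, k=1.0):
        """Hyperbolic discounting: perceived value falls as 1 / (1 + k * delay)."""
        return value / (1 + k * delay_days)

    small, large = 100, 110  # smaller-sooner vs larger-later reward

    # Today vs tomorrow: the smaller-sooner reward wins.
    print(hyperbolic(small, 0), hyperbolic(large, 1))      # 100.0 vs 55.0

    # Day 365 vs day 366: the larger-later reward wins (preference reversal).
    print(hyperbolic(small, 365), hyperbolic(large, 366))  # ~0.273 vs ~0.300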

Tasker is great in general; I've integrated it with my todo list using Todoist's REST API, which works great.
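A minimal sketch of the kind of call involved (the v2 REST endpoint and the Tasker wiring are assumptions; adapt the token and fields to your setup):

    from typing import Optional

    import requests

    TODOIST_TOKEN = "your-api-token"  # placeholder; found in Todoist's settings

    def add_task(content: str, due_string: Optional[str] = None) -> dict:
        """Create a Todoist task via its REST API (v2 endpoint assumed)."""
        payload = {"content": content}
        if due_string:
            payload["due_string"] = due_string  # e.g. "today", "every monday"
        resp = requests.post(
            "https://api.todoist.com/rest/v2/tasks",
            headers={"Authorization": f"Bearer {TODOIST_TOKEN}"},
            json=payload,
        )
        resp.raise_for_status()
        return resp.json()

    # Tasker can trigger a script like this via its HTTP Request action:
    # add_task("Water the plants", due_string="today")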

As for sourcing triggers:

The only general way I can think of is a personal assistant (or some kind of service that provides the same kind of human assistance).

Otherwise maybe figure out a couple of domain-specific trigger-sourcing methods. If this allows you to do websites, you've covered most online things.

For covering non-online things, maybe you can find an API, use some kind of oracle service or similar.

Do you have an example of the kind of thing you struggle with?

1mikbp2y
This sounds useful, thanks. However, I was thinking more in something that reminds of all the tasks that are dependent on B. Actually this app is a good way to have the trigger (as long as it is something changing a website), but it misses the part or reminding the tasks. UPDATE: The app has an expansion pack that seems to get (at least very close) to the issue at hand. It contains a "Plug-in to integrate with Tasker, Automate and Automagic". I have actually never used these apps, but for what I've read about them I would be surprised if they would not be able to add events in a calendar or entries in a to-do list.   Still, I would like to know how people deal with these kinds of situations in general, as this only works when a website changes.

Does anyone know about an add-on to filter Facebook notifications? I want to know about comments, but not reactions/likes.

2Valentine2y
That's native to Facebook now, actually. I don't remember where, but if you dig around in the settings you can turn off notifications for reactions/likes.