I partly support the spirit behind this feature: providing more information (especially to the commenter), making readers more engaged and involved, and letting people express a reaction with more nuance than a mere upvote/downvote. I also like that, as with karma, there are options for negative (but constructive) feedback, something I mentioned here when reviewing a different social discussion platform that had only positive reactions such as "Aha!" and "clarifying".

On the other hand, I suspect (but could be wrong) that this extra information could also have the opposite effect of "anchoring" readers and biasing them towards the reactions left by others. If readers saw that a comment had received a "verified" or a "wrong" reaction, for example, they could anchor on that before reading the comment itself. Maybe the effect would be less pronounced here than in other communities, but I don't think LessWrong would be immune to it.

(Comment on the UI: when there are nested comments, it can be confusing to tell whether a reaction corresponds to the parent or the child comment.

Edit: I see Raemon already mentioned this.)

Certainly; it wasn't my intention to make it seem like an either-or. I believe there's a lot of room for imported quality teaching, and a fairly well-educated volunteer might be better at teaching than the average local teacher. I didn't find the way they taught there very effective: a lot of repeating the teacher's words, no intuition built for maths or physics… I think volunteers could certainly help with that, and also by teaching the subjects they are more proficient in than the local teachers (e.g. English). I agree there is potential to use volunteers in a variety of ways to raise the level of education, and also to try to make the changes permanent once the volunteers leave.

Strong upvote. I found almost every sentence extremely clear, conveying a transparent mental image of the argument being made. Many times I caught myself saying "YES!" or "This checks out" as I read a new point.

That might involve not working on a day you’ve decided to take off even if something urgent comes up; or deciding that something is too far out of your comfort zone to try now, even if you know that pushing further would help you grow in the long term.

I will add that, for many routine activities or personal dilemmas where short- and long-term intentions pull you in opposite directions (e.g. exercising, eating a chocolate bar), the boundaries you set internally should be explicit and unambiguous, and ideally be defined before you are faced with the choice.

This is to avoid rationalising momentary preferences (I am lazy right now + it's a bit cloudy -> "the weather is bad, it might rain, I won't enjoy running as much as if it were sunny, so I won't go for a run") that run counter to your long-term goals, where the result of defecting a single time would be unnoticeable in the long run. In these cases it can be helpful to imagine your current self in a bargaining game with your future selves, in a sort of prisoner's dilemma. If your current self defects, your future selves will be more prone to defecting as well; if you coordinate and resist temptation now, future resistance becomes more likely. In other words, it helps to establish a Schelling fence.
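To make that bargaining intuition concrete, here is a minimal toy simulation in Python. All the numbers (the daily temptation, the rate at which resolve erodes, the pre-agreed exception rate) are invented for illustration; the only point is that case-by-case rationalising compounds over time, while a bright-line rule does not.

```python
import random

random.seed(0)

DAYS = 365
TEMPTATION = 0.05  # chance each day that an excuse ("it's a bit cloudy") feels compelling

def case_by_case_policy():
    """Each defection sets a precedent, making the next defection more likely."""
    p_defect = TEMPTATION
    runs = 0
    for _ in range(DAYS):
        if random.random() < p_defect:
            p_defect = min(1.0, p_defect + 0.02)  # resolve erodes a little
        else:
            runs += 1
    return runs

def schelling_fence_policy():
    """A bright-line rule: only a pre-agreed exception (say, 5% of days) counts."""
    runs = 0
    for _ in range(DAYS):
        if random.random() < 0.05:  # genuine, pre-specified exception
            continue
        runs += 1
    return runs

print("runs, rationalising case by case:", case_by_case_policy())
print("runs, holding a Schelling fence: ", schelling_fence_policy())
```

Both policies start with the same 5% lapse rate; under these made-up parameters the fence policy completes far more runs purely because no single defection can set a precedent for the next.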

At the same time, this Schelling fence shouldn't be too restrictive or merciless towards every possible circumstance, because that would leave you demotivated and even less inclined to stick to it. One should probably experiment to find what works for them: a rule broad and general enough for 70-90% of scenarios to fall under it, while remaining merciful towards some genuinely needed exceptions.

Thank you very much for this sequence. I knew fear had a great influence on (and often impeded) my actions, but I hadn't given it such a concrete form, and especially a weapon (= excitement) to combat it, until now.

Following matto's comment, I went through the Tuning Your Cognitive Strategies exercise, spotting microthoughts and extracting the cognitive strategies and deltas between them. When evaluating a possible action, the (emotional as much as cognitive) delta "consider action X -> tiny feeling in my chest or throat -> meh, I'm not sure about X" seemed quite recurrent. Thanks to your pointers on fear and to introspecting on it, I have added "-> are you feeling fear? -> yes, I have this feeling in my chest -> is this fear helpful? -> Y, so no -> can you replace fear with excitement?" (a delta about noticing deltas) as a cognitive strategy.

The reason I can throw away fear in most situations (beware of other-optimizing) is that I have developed the mental techniques, awareness and strength to counter the negatives that fear points at.

Like many, I developed fear as a kid, in response to being criticised or rejected, at a time when I didn't have the mental tools to deal with those situations. For example, I took things too personally, thought others' reactions were about me and my identity, and failed to put myself in others' shoes and understand that when other kids criticise, it is often unfounded and just for a laugh. To protect my identity I developed aversion, a bias towards inaction, and a fear of failure and of being criticised. This propagated into demotivation, self-doubt, and underconfidence.

Now I can evaluate whether fear is an emotion worth having. Fear points at something real and valuable: the desire to do things well and be liked. But as I said, for me personally fear is something I can do away with in most situations, because I have the tools to respond better to negative feedback. If I write an article and it gets downvoted, I won't take it as a personal blow to my intrinsic worth; I will use the feedback to improve and update my strategies. In many cases, excitement can be much more useful (and motivating, leading to action) than fear: the excitement of commenting or writing on LessWrong over the fear of saying the wrong thing; the excitement of talking to or being with a girl rather than the fear of rejection.

Thank you very much for this post; I find it extremely valuable.

I also find it especially helpful for this community, because it touches on what I believe are two main sources of anxiety about existential doom that might be common among LWers:

  • Doom itself (end of life and Earth, our ill-fated plans, not being able to see our children or grandchildren grow, etc.), and
  • uncertainty over one's (in)ability to prevent the catastrophe (can I do better? Even if it's unlikely I will be the hero or make a difference, isn't it worth wagering everything on this tiny possibility? Isn't the possibility of losing status, friends, resources, time, etc. better than the alternative of not having tried our best and humanity coming to an end?)

It depends on the stage of one's career, the job/degree/path one is pursuing, and other factors, but I expect that many readers here are unusually prone to the second concern compared to outsiders, perhaps due to their familiarity with coordination failures and defections, or their intuition that there is always a level above and room for doing better/optimising… I am not sure this angst over uncertainty, even if it's just a lingering thought in the back of one's mind, can really be cleared, but Fabricated Options in particular conceptualises a response to it and says "we'll always be uncertain, but don't stress too much; it's okay".

As others have pointed out, there's a difference between a) problems to be tackled for the sake of the solution, and b) problems to be tackled for the sake (or fun) of the problem itself. Humans like challenges and puzzles, and like to solve things themselves rather than have the answers handed down to them. Global efforts to fight cancer can be inspiring, and I would guess a motivation for most medical researchers is their own involvement in that process. But if we could push a button to eliminate cancer forever, no sane person would refuse to press it.

I think we should aim to have all of a) solved as soon as possible (at least those problems above a certain threshold of importance), and to maintain b). At the same time, I suspect that the value we attach to b) also bears some relation to the importance of the solution: a theoretical problem can be much more immersive, and ultimately rewarding, when the whole of civilisation is at stake than when it's a trivial puzzle.

So I wonder how to maintain b) once the important solutions can be provided much faster and more easily by another entity or superintelligence. Maybe with fully immersive simulations that reproduce, e.g., the situation and experience of trying to find a cure for cancer, or with large-scale puzzles (such as escape rooms) that are not life-or-death (nor happiness-or-suffering).

The phases you mentioned in learning anything seem especially relevant for sports.

1.  To have a particular kind of feelings (felt senses) that represent something (control, balance, singing right, playing the piano right, everything being done)
2.  A range of intensity that we should keep that feeling sense in, in some given context (either trying to make sure we have some positive feeling, or that we avoid some negative feeling)
3.  Various strategies for keeping it within that range

Below the surface, every sport is an extremely complex endeavour for the body, and mastering one is a marvellous achievement of the mind. You realise this particularly when starting out. I had my first golf class yesterday, and it's far from the laid-back activity I thought it was. Just knowing how to grip the club correctly is a whole new world: whether the hands overlap or interlock, where the thumb points, getting the right pressure… And this is before even starting with the backswing, impact, and follow-through.

In fact, though, knowing is not the right word; it's feeling. I have been playing tennis my whole life, and as I was shown the techniques for golf I constantly compared them with those of tennis, with which golf shares many postures and motions. It is astonishing how complex and sensitive each stroke or swing is, and yet how seamlessly and almost unthinkingly it gets done once one masters it. If one tried to get each tiny detail exactly right, it seems impossible we could even hit the ball. Timothy Gallwey, in The Inner Game of Tennis, presented this same process of focusing on and being aware of your body and sensations in order to enhance these felt senses and let your mind adjust the intensity to the right felt standards.

On a different note, a failure mode of mine as a youngster, and one I'm still trying to overcome, was also related to the fear of being accused of something, but with completely different countermeasures from the example you gave; it's more like a contradictory failure mode.

My sister was often envious and critical of any dissonant action, so I became afraid of her disapproving of anything I did, at any moment. At the same time, if I made the same choices as her, she would accuse me of copying her. So I ended up trying to settle in neutral territory and almost became a yes-boy.

For example, in a restaurant I would be afraid of ordering salmon, because my sister might order it, or because even if she didn't, it might seem like I was copying her predilection for healthy food. However, I would also be afraid of overcorrecting by ordering something too unhealthy, or of asking for more food because I hadn't had enough. So I would end up ordering a middle-ground option like, say, steak.

Thank you for your explanations. My confusion was not so much about associating agency with consciousness, morality, or other human attributes, but about whether agency is judged from an inside, mechanistic point of view or from an outside, predictive point of view of the system. From the outside, it can be useful to say that "water has the goal of flowing downhill", or that "electrons have the goal of repelling electrons and attracting protons", insofar as "goal" just means "tendency". From an inside view, as you said, it's nothing like the agency we know; these are fully deterministic laws or rules. Our own agency is partly an illusion, because we too act deterministically, following the laws of physics and, more specifically, the patterns or laws of our own human behaviour. These are much more complex and harder for us to understand than the laws of gravity or electromagnetism, but reasons do exist for every single one of our actions and decisions, of course.

I find LW's definition of agency useful:

A key property of agents is that the more agentic a being is, the more you can predict its actions from its goals since its actions will be whatever will maximize the chances of achieving its goals. Agency has sometimes been contrasted with sphexishness, the blind execution of cached algorithms without regard for effectiveness.

Although, at the same time, agency and sphexishness might not be truly opposed: one refers to an outside perspective, the other to an inside perspective. We are all sphexish in a sense, but we attribute this agency property to others, and even to ourselves, because we are ignorant of many of our own rules.
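As a toy illustration of that definition (entirely my own construction, in an invented one-dimensional world), the "agentic" actor below is predictable from its goal alone, while the "sphexish" one is predictable only from its cached script:

```python
# Toy contrast between an agentic and a sphexish actor on a number line.
# The world, actions, and numbers are all invented for illustration.

ACTIONS = {"left": -1, "stay": 0, "right": +1}

def agentic_step(position, goal):
    # Pick whichever action most reduces the distance to the goal:
    # knowing the goal is enough to predict the action.
    return min(ACTIONS, key=lambda a: abs(position + ACTIONS[a] - goal))

def sphexish_step(t):
    # Blindly replay a cached routine, with no regard for the goal.
    script = ["right", "right", "left", "stay"]
    return script[t % len(script)]

goal = 5
pos_agent = pos_sphex = 0
for t in range(8):
    pos_agent += ACTIONS[agentic_step(pos_agent, goal)]
    pos_sphex += ACTIONS[sphexish_step(t)]

print("agentic actor ends at:", pos_agent)   # 5: it reaches the goal
print("sphexish actor ends at:", pos_sphex)  # 4: wherever the script lands
```

Knowing the sphexish actor's goal tells you nothing about its next move; knowing its cached script tells you everything.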

(I reply to both you and @Ericf here.) I do struggle a bit to make up my mind on whether drawing a line around agency is really important. We could say that a calculator has the 'goal' of returning the right result to the user; we don't treat a calculator as an agent, but is that because of its very nature and the way it was programmed, or is it a matter of capabilities, it being incapable of making plans and considering a number of different paths to achieve its goals?

My guess is that there is something that makes an agent, and it has to do with the ability to strategise in order to complete a task: it has to explore different alternatives and choose the ones that would best satisfy its goals, or at least have a way to modify its strategy. Am I right here? And to what extent is some sort of counterfactual thinking needed before we can ascribe this agency property to it; or is following some pre-programmed algorithm to update its strategy enough? I am not sure of the answer, nor of how much it matters.
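A minimal sketch of what I mean by exploring alternatives (again with invented actions and an invented goal; whether this mechanical enumeration already deserves the label "counterfactual thinking" is exactly what I'm unsure about):

```python
from itertools import product

# Toy planner: enumerate candidate three-step plans, simulate each one
# without acting, and commit to whichever imagined outcome best satisfies
# the goal. Everything here is invented for illustration.

ACTIONS = {"left": -1, "stay": 0, "right": +1}
GOAL = 2

def simulate(plan, start=0):
    # Imagine the outcome of a plan without actually executing it.
    pos = start
    for action in plan:
        pos += ACTIONS[action]
    return pos

candidate_plans = list(product(ACTIONS, repeat=3))  # all 27 three-step plans
best = min(candidate_plans, key=lambda plan: abs(simulate(plan) - GOAL))
print("chosen plan:", best)  # a plan whose imagined end position is 2
```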

There are some other questions I am unclear about:

  • Would having a pre-programmed algorithm/map on how to generate, prioritise and execute tasks (like for AutoGPT) limit its capacity for finding ways to achieve its goals? Would it make it impossible for it to find some solutions that a similarly powerful AI could have reached? (See the sketch after this list.)
  • Is there a point at which it is unnecessary for this planning algorithm to be specified, since the AI would have acquired the capacity to plan and execute tasks on its own?
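For concreteness, here is a minimal sketch of the kind of pre-programmed generate/prioritise/execute loop I have in mind, loosely in the style of AutoGPT. The function bodies are placeholders of my own (a real system would call a language model or external tools there), not AutoGPT's actual code; the fixed shape of the loop is the "map" my first question asks about.

```python
from collections import deque

def propose_subtasks(task, result):
    # Placeholder: a real system would ask an LLM for follow-up tasks here.
    return []

def priority(task):
    # Placeholder heuristic; hard-coding this is part of the pre-programmed map.
    return len(task)

def execute(task):
    # Placeholder: a real system would call tools or an LLM here.
    return f"result of {task!r}"

def run(objective, max_steps=10):
    queue = deque([objective])
    for _ in range(max_steps):
        if not queue:
            break
        task = max(queue, key=priority)               # prioritise
        queue.remove(task)
        result = execute(task)                        # execute
        queue.extend(propose_subtasks(task, result))  # generate
    return "done"

run("summarise today's AI news")
```

My worry in the first question is that any solution requiring a step this loop cannot represent (say, revising the priority function itself) may simply be unreachable, however capable the individual components are.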

This seems to me more like a tool AI, much like a piece of software asked to carry out a task (e.g. an Excel sheet used for calculations), but with the addition of processes or skills for creating plans and searching for solutions, which would endow it with agent-like behaviour. So, for the AutoGPT-style AI contemplated here, it appears that this agent-like behaviour would not emerge out of the AI's increased capabilities and its achievement of the general intelligence needed to reason, devise accurate models of the world and of humans, and plan; nor would it emerge out of a set of specified values. It would instead come from explicitly specified planning capabilities.

I am not sure this AutoGPT-like AI counts as an agent in the sense of conferring the advantages of a true agent AI, i.e. having a clear distinction between beliefs and values. I would still expect it to be able to produce the harmful consequences you mentioned (perhaps, as you said, starting by asking the user for permission to access their resources or private information, and then doing dangerous things with them) as it was asked to carry out more complex and less well-understood tasks, with increasingly complex processes and increasing capabilities. The level of capabilities, and the way the planning algorithms are specified, if at all, seem very relevant.
