Comments

Less Wrong Community Weekend 2022

Hi! Questions about volunteering follow:

"They will only be expected to work either before, after or during the event while joining sessions is still often feasible."

Could I get a rephrasing of that? I'm not sure whether the options of before/during/after are (or can be) exclusive, and I am also unclear on what is meant by "joining sessions is still often feasible".

I am happy to help, but I would like to know how much of the time during the event (if any) would be, basically, not the event^^

Best regards

"Stuck In The Middle With Bruce"

This sounds like a case of "wrong" perspective. (Whoa, what?! Yes, keep reading pls^^)

Like someone believing (or believing they believe) in Nihilism. I haven't thought of a good and correct counter-statement to Nihilism, except:

"You are simply wrong on all accounts, but by such a small amount that it's hard to point to, because it will sound like »You don't have a right to your own perspective«", (Of course, I also would not agree with disallowing personal opinions (as long as we ARE talking about opinions, not facts).)

Granted, I haven't tried to have that kind of discussion since I really started reading and applying the Sequences. But that may be due to my growing habit of not throwing myself into random and doomed discussions that I don't have a stake in.

But for Bruce, I think I can formulate it:

I am aware of the fact that I still don't allow myself to succeed sometimes. I have recently found that I stand before a barrier that I can summarize as a negative kind of sunk cost fallacy ("If I succeed here, I could have just done that ten years ago"), and I still haven't broken through yet.*

But... Generalizing this kind of observation to "We all have this Negativity-Agent in our brain" feels incorrect to me. It both obscures the mistake and makes it seem like there is a plan to it.

If I think "Okay, you just detected that thought-pattern that you identified as triggering a bad situation, now instead do X" I feel in control, I can see myself progress, I can do all the things.

If I think "Damn, there's Bruce again!", not only do I externalize the locus of control, I am also "creating" an entity, that can then rack up "wins" against me, making me feel less like I can "beat" them.

It's not an agent. It's a habit that I need to break. That's a very different problem!

I assume that people will say "Bruce is a metaphor". But, provided I have understood correctly, the brain is very prone to considering things as agents (e.g. nature gods, "The System", the whole bit about life being (not) fair, ...), so feeding it this narrative would seem like a bad idea.

I predict that it will be harder to get rid of the problem, once one gives it agency and/or agenthood. (Some might want an enemy to fight, but even there I take issue with externalizing the locus of control.)

[*In the spirit of "Don't tell me how flawed you are, unless you also tell me how you plan to fix it", I am reading through Fun Theory to defuse it (yes, first read, I am not procrastinating with "need to read more"):

For me it's: I don't want to do X, I want to do something enjoyable Y. And then, when I do Y, I drift into random things that often aren't all that enjoyable, but just continue the status quo. All the while, X is beginning to loom, accrue negative charge and trigger avoidance routines. But if I do X instead, I don't know how to allow myself to take breaks without sliding into the above pattern. So I intend to optimize my fun and expand the area of things that I find fun. That reorientation should help me with dosing it, too. (And yes, I do have ADHD, in case you read it out of the text and were wondering if you should point me there ^^)

Also, I recently discovered a belief (in...) that I like learning. I realized that I really don't like learning. I like understanding, but what I call "learning" has a very negative connotation, so I barely do it. I will have to discover how to effectively facilitate understanding, too. ]

Tsuyoku Naritai! (I Want To Become Stronger)

I hope that you are not still struggling with this, but for anyone else in this situation: I would think that you need to change the way you set your goals. There is loads of advice out there on this topic, but there are a few rules I can recall off the top of my head:

  • "If you formulate a goal, make it concrete, achievable, and make the path clear and if possible decrease the steps required." In your case, every one of the subgoals already had a lot of required actions, so the overarching goal of "publish a book" might be too broadly formulated.
  • "If at all possible don't use external markers for your goals." What apparently usually happens is that either you drop all your good behaviour once you cross the finish line, or your goal becomes/reveals itself to be unreachable and you feel like you can do nothing right (seriously, the extend to which this happens... incredible.), etc.
  • "Focus more on the trajectory than on the goal itself." Once you get there, you will want different things and what you have learned and acquired will just be normal. There is no permanent state of "achieving the goal", there is the path there, and then the path past it.

Very roughly speaking.

All the best.

How much should we value life?

If I may recommend a book that might make you shift your non-AI related life expectancy: Lifespan by Sinclair.

Quite the fascinating read, my takeaway would be: We might very well not need ASI to reach nigh-indefinite life extension. Accidents of course still happen, so in a non-ASI branch of this world I currently estimate my life expectancy at around 300-5000 years, provided this tech happens in my lifetime (which I think is likely) and given no cryonics/backups/...

(I would like to make it clear that the author barely talks about immortality, more about health and life span, but I suspect that this has to do with decreasing the risk of not being taken seriously. He mentions, e.g., millennia-old organisms as ones to "learn" from.)


Interestingly, the increase in the probability estimate of non-ASI-dependent immortality automatically and drastically impacts the importance of AI safety, since a) you are way more likely to be around when it hits (a bit selfish, but whatever), b) we may actually have the opportunity to take our time (not saying we should drag our feet), so the benefit of taking risks sinks even further, and c) if we get an ASI that is not perfectly aligned, we actually risk our immortality, instead of standing to gain it.


All the best to you, looking forward to meeting you all some time down the line.

(I am certain that the times and locations mentioned by HJPEV will be realized for meet-ups, provided we make it that far.)

"If You're Not a Holy Madman, You're Not Trying"

It seems to me that the agents you are considering don't have as complex a utility function as people, who seem to at least in part consider their own well-being as part of their utility function. Additionally, people usually don't have a clear idea of what their actual utility function is, so if they want to go all-in on it, they let some values fall by the wayside. AFAIK this limitation is not a requirement for an agent.


If you had your utility function fully specified, I don't think you could be considered both rational and also not a "holy madman". (This borders on my answer to the question of free will, which so far as I can tell, is a question that should not explicitly be answered, so as to not spoil it for anyone who wants to figure it out for themselves.)

Suffice it to say that optimized/optimal functioning should be a convergent instrumental goal, similar to self-preservation, and a rational agent should thereby have it as a goal. If I am not mistaken, this means that a problem in work-life balance, as you put it, is not something that an actual rational agent would tolerate, provided there are options to choose from that don't include this problem and have a similar return otherwise.

Or did I misinterpret what you wrote? I can be dense sometimes...^^

3 Levels of Rationality Verification

An idea that might be both unsustainable and potentially dangerous, but also potentially useful, is to have someone teach as a final test. Less an exam and more a project (with oversight?). Of course, these trainees could be authentic or disguised testers.

Problems with this idea (non-exhaustive):

  • Rationality doesn't necessarily make you good at teaching,
  • Teaching the basics badly is likely to have negative effects on the trainee,
  • This could potentially be gamed by reformulated regurgitation.

So... What behaves differently in the presence of Rationality? I like Brennan's idea of time pressure, though he himself demonstrates that you don't need to have finished training for it, and it doesn't really hit the mark.

Or: What requires Rationality? Given Hidden Knowledge (it may only require facts that are known, but not to them), one could present new true facts that need to be distinguished from new, well-crafted falsehoods (QM anyone? ^^). This still only provides an indication, but it may be part of the process. If they game this by studying everything, thinking for themselves, and coming to correct conclusions, I think that counts as passing the test. Maybe I am just not being creative enough, though. This test could also be performed in isolation, and since time would probably be a relevant component, it would likely not require huge amounts of resources to provide this isolation. Repeat tests could incorporate this (or seemingly incorporate it) too.

If you wanted to invest more effort, you could also specifically not isolate them, but put them in a pressured situation (again, I am being influenced by memories of a certain ceremony. But it is simply really good.) This doesn't have to be societal pressure, but this kind at least makes rash decisions less likely to be costly.

I can't really formulate the idea concretely, but: A test inspired by some of ye olden psychology experiments might provide double yield by both testing the rationality of the person in question and also disabuse them of their trust. Though I can see a lot of ways this idea could go awry.


An issue that most, if not all, of my tests run into is that they limit what could be taught, since it is still part of the test. This is a problem that should be solved, not just because it irritates me, but because it also means that random chance could more easily change the results.

This is, I think, because so far all tests check for the correct answer. This, in itself, may be the wrong approach, since we are trying to test techniques which have an impact on the whole person, not "just" their problem solving. I would, for example, hope that a crisis situation would on average benefit from the people involved being trained in rationality, not just in regards to the problem solving itself, but also the emotional response, the ability to see the larger picture, prioritization, initial reaction speed, and so on.

(Maybe having them devise a test is a good test...^^ Productive, too, on the whole.)

(I can think of at least one problem of yours that I still haven't solved, though I therefore can't say whether or not my not-solving-it is actually showing a lack of rationality[though it's likely], or rather depends on something else. Not sure if I should mention it, but since you (thankfully) protect the answer, I don't think that I need to. This, still, is asking for a correct answer though.)


That's all I can think of for now. Though I am not really satisfied... Do I need to be "at a higher level" to be able to evaluate this, since I don't fully grasp what it is that should be tested yet? Seems like either an option or a stop sign...

Timeless Identity
If there's any basis whatsoever to this notion of "continuity of consciousness"—I haven't quite given up on it yet, because I don't have anything better to cling to—then I would guess that this is how it works.

Why "cling to"? It all adds up to normality, right? What you are saying sounds like someone resisting the "winds of evidence" (in this case added complexity, I am guessing).

I tried to come up with ways to explain my observations of consciousness, but they all seem incomplete too, so far. But I don't see how that impacts your argument here. I'm not saying "stop asking". I just don't see the reason to "cling" to this "notion of continuity".

And if you think there is a reason, and I don't see it, I am somewhat worried.

Best regards

Timeless Causality

I would sooner associate experience with the arrows than the nodes, if I had to pick one or the other!  I would sooner associate consciousness with the change in a brain than with the brain itself, if I had to pick one or the other.

This also lets me keep, for at least a little while longer, the concept of a conscious mind being connected to its future Nows, and anticipating some future experiences rather than others.  Perhaps I will have to throw out this idea eventually, because I cannot seem to formulate it consistently; but for now, at least, I still cannot do without the notion of a "conditional probability".

 

I wrote a long comment, but the main question is: Why do you guess an answer here, instead of "Shut up and calculate"?

Why are/were you favouring the hypothesis? Considering what I have read so far from you, I find it more likely that I have missed something than that there is no reason, but I can't find it...

An Intuitive Explanation of Bayes's Theorem

Well, intelligence doesn't equate to skill. It's probably easier to acquire skills (like mental math) with high intelligence, but no matter the intelligence, you still need to learn them.

P(easy learning | high intelligence) may be higher than P(easy learning | not high intelligence) for a given subject (e.g. mental math), but P(mental math) is not dependent on the ease of learning [otherwise P(mental math | no easy learning) would be low], but rather on actually learning/training it: P(mental math | no learning) is pretty low.

So people who learn mental math may have different speeds or difficulty doing so; however, I would guess that it is more dependent on educational context, curiosity, or need(,...) than on ease of learning.
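To make this toy model concrete (all numbers below are made-up illustrations, not data), a quick application of the law of total probability shows that the practice rate, rather than the ease of learning, dominates P(mental math):

```python
def p_skill(p_practice: float,
            p_skill_given_practice: float = 0.9,
            p_skill_given_no_practice: float = 0.05) -> float:
    """Marginal probability of having the skill, via the law of total
    probability:
    P(skill) = P(skill|practice)*P(practice)
             + P(skill|no practice)*P(no practice)."""
    return (p_practice * p_skill_given_practice
            + (1 - p_practice) * p_skill_given_no_practice)

# Raising the practice rate moves P(skill) a lot...
low = p_skill(0.3)   # 0.305
high = p_skill(0.6)  # 0.56
assert high > low

# ...while making learning "easier" (nudging P(skill | practice) up)
# moves it comparatively little at the same practice rate.
easier = p_skill(0.3, p_skill_given_practice=0.95)  # 0.32
assert (high - low) > (easier - low)
```

Under these assumed numbers, doubling the practice rate adds about 0.26 to P(skill), while easier learning alone adds only about 0.02, which matches the claim that the conditioning that matters is on learning, not on ease of learning.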

But, if your self-assessment is correct, and my mentioned assumptions are as well, you should be able to remedy the predicament relatively easily ;)

Luna Lovegood and the Chamber of Secrets - Part 11

The finale was a specific instance of two people who were in a very unusual situation. You cannot "just" KO a powerful wizard. The whole reason that worked was the restrictions that arose from this situation.

If someone was able to KO Lord Voldemort in a confrontation in which he was allowed to use magic, I assume they would afterwards be able to perform rituals to change their mind, similar to how Bellatrix was broken. 

Also...I mean, you can just kill them at that point. Being also able to change their mind doesn't seem like that much of an additional burden. 

 

In regards to how the world looks: We already were told several ways to overthrow the Ministry of Magic. Since nobody has bothered to do so, I would assume that the same logic proposed to answer the question of "why not" in the story, also applies to this, right?

 

I would also like to add that the spells in this world are not at all balanced. As was also noted in HPMoR: The False Memory Charm should be unforgivable. And I think Obliviate is pretty close to that, too.
