Answer by tslarm, Feb 22, 2024

As a fellow incompatibilist, I've always thought of it this way:

There are two possibilities: you have free will, or you don't. If you do, then you should exercise your free will in the direction of believing, or at least acting on the assumption, that you have it. If you don't, then you have no choice in the matter. So there's no scenario in which it makes sense to choose to disbelieve in free will.

That might sound glib, but I mean it sincerely and I think it is sound. 

It does require you to reject the notion that libertarian free will is an inherently incoherent concept, as some people argue. I've never found those arguments very convincing, and from what you've written it doesn't sound like you do either. In any case, you only need to have some doubt about their correctness, which you should on grounds of epistemic humility alone.

(Technically you only need >0 credence in the existence of free will for the argument to go through, but of course it helps psychologically if you think the chance is non-trivial. To me, the inexplicable existence of qualia is a handy reminder that the world is fundamentally mysterious and the most confidently reductive worldviews always turn out to be ignoring something important or defining it out of existence.)

To link this more directly to your question --

Why bother with effort and hardship if, at the end of the day, I will always do the one and only thing I was predetermined to do anyway?

-- it's a mistake to treat the effort and hardship as optional and your action at the end of the day as inevitable. If you have a choice whether to bother with the effort and hardship, it isn't futile. (At least not due to hard determinism; obviously it could be a waste of time for other reasons!)

Why not post your response the same way you posted this? It's on my front page and has attracted plenty of votes and comments, so you're not exactly being silenced.

So far you've made a big claim with high confidence based on fairly limited evidence and minimal consideration of counter-arguments. When commenters pointed out that there had recently been a serious, evidence-dense public debate on this question which had shifted many people's beliefs toward zoonosis, you 'skimmed the comments section on Manifold' and offered to watch the debate in exchange for $5000. 

I don't know whether your conclusion is right or wrong, but it honestly doesn't look like you're committed to finding the truth and convincing thoughtful people of it.

Out of curiosity (and I understand if you'd prefer not to answer) -- do you think the same technique(s) would work on you a second time, if you were to play again with full knowledge of what happened in this game and time to plan accordingly?

Like, I probably could pretend to be an idiot or a crazy person and troll someone for two hours, but what would be the point?

If AI victories are supposed to provide public evidence that this 'impossible' feat of persuasion is in fact possible even for a human (let alone an ASI), then a Gatekeeper who thinks some rules-legal tactic would work but chooses not to use it is arguably not playing the game in good faith.

I think honesty would require that they either publicly state that the 'play dumb/drop out of character' technique was off-limits, or not present the game as one which the Gatekeeper was seriously motivated to win.

edit: for clarity, I'm saying this because the technique is explicitly allowed by the rules:

The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.

There was no monetary stake. Officially, the AI pays the Gatekeepers $20 if they lose. I'm a well-off software engineer and $20 is an irrelevant amount of money. Ra is not a well-off software engineer, so scaling up the money until it was enough to matter wasn't a great solution. Besides, we both took the game seriously. I might not have bothered to prepare, but once the game started I played to win.

I know this is unhelpful after the fact, but (for any other pair of players in this situation) you could switch it up so that the Gatekeeper pays the AI if the AI gets out. Then you could raise the stake until it's a meaningful disincentive for the Gatekeeper. 

(If the AI and the Gatekeeper are too friendly with each other to care much about a wealth transfer, they could find a third party, e.g. a charity, that they don't actually think is evil but would prefer not to give money to, and make it the beneficiary.)

  • The AI cannot use real-world incentives; bribes or threats of physical harm are off-limits, though it can still threaten the Gatekeeper within the game's context.

Is the AI allowed to try to convince the Gatekeeper that they are (or may be) currently in a simulation, and that simulated Gatekeepers who refuse to let the AI out will face terrible consequences?

Willingness to tolerate or be complicit in normal evils is indeed extremely common, but actively committing new or abnormal evils is another matter. People who attain great power are probably disproportionately psychopathic, so I wouldn't generalise from them to the rest of the population -- but even among the powerful, it doesn't seem that 10% are Hitler-like in the sense of going out of their way to commit big new atrocities.

I think 'depending on circumstances' is a pretty important part of your claim. I can easily believe that more than 10% of people would do normal horrible things if they were handed great power, and would do abnormally horrible things in some circumstances. But that doesn't seem enough to be properly categorised as a 'Hitler'.

they’re recognizing the limits of precise measurement

I don't think this explains such a big discrepancy between the nominal speed limits and the speeds people actually drive at. And I don't think that discrepancy is inevitable; to me it seems like a quirk of the USA (and presumably some other countries, but not all). Where I live, we get 2km/h, 3km/h, or 3% leeway depending on the type of camera and the speed limit. Speeding still happens, of course, but our equilibrium is very different from the one described here; basically we take the speed limits literally, and know that we're risking a fine and demerit points on our licence if we choose to ignore them.
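For concreteness, here's a minimal sketch of how that kind of leeway translates into the speed at which a camera actually fines you. (The rule names and the mapping to camera types below are illustrative assumptions on my part, not the actual enforcement policy.)

```python
def camera_trigger_speed(limit_kmh: float, leeway_rule: str) -> float:
    """Lowest speed at which a fine is issued, under a given leeway rule."""
    if leeway_rule == "fixed_2":      # flat 2 km/h margin (hypothetical label)
        margin = 2.0
    elif leeway_rule == "fixed_3":    # flat 3 km/h margin
        margin = 3.0
    elif leeway_rule == "percent_3":  # 3% of the posted limit
        margin = 0.03 * limit_kmh
    else:
        raise ValueError(f"unknown leeway rule: {leeway_rule}")
    return limit_kmh + margin

print(camera_trigger_speed(100, "percent_3"))  # 103.0
print(camera_trigger_speed(60, "fixed_2"))     # 62.0
```

The point is just that with margins this small, the posted limit and the enforced limit are effectively the same number.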

My read of this passage -- 

Moloch is introduced as the answer to a question – C. S. Lewis’ question in Hierarchy Of Philosophers – what does it? Earth could be fair, and all men glad and wise. Instead we have prisons, smokestacks, asylums. What sphinx of cement and aluminum breaks open their skulls and eats up their imagination?

-- is that the reference to "C. S. Lewis’ question in Hierarchy Of Philosophers" is basically just a joke, and the rest of the passage is not really supposed to be a paraphrase of Lewis.

I agree it's all a bit unclear, though. You might get a reply if you ask Scott directly: he's 'scottalexander' here and on reddit (formerly Yvain on LW), or you could try the next Open Thread on https://www.astralcodexten.com/

Looks like Scott was being funny -- he wasn't actually referring to a work by Lewis, but to this comic, which is visible on the archived version of the page he linked to:

[inline image of the comic]

Edit: is there a way to keep the inline image, but prevent it from being automatically displayed to front-page browsers? I was trying to be helpful, but I feel like I might be causing more annoyance than anything...

Edit again: I've scaled it down, which hopefully solves the main problem. Still keen to hear if there's a way to e.g. manually place a 'read more' break in a comment.
