Comments

tslarm · 16d

Got it, thanks! For what it's worth, doing it your way would probably have improved my experience, but impatience always won. (I didn't mind the coldness, but it was a bit annoying having to effortfully hack out chunks of hard ice cream rather than smoothly scooping it, and I imagine the texture would have been nicer after a little bit of thawing. On the other hand, softer ice cream is probably easier to unwittingly overeat, if only because you can serve up larger amounts more quickly.)

I think two-axis voting is a huge improvement over one-axis voting, but in this case it's hard to know whether people are mostly disagreeing with you on the necessary prep time, or the conclusions you drew from it.

tslarm · 16d

> If eating ice cream at home, you need to take it out of the freezer at least a few minutes before eating it

I'm curious whether this is true for most people. (I don't eat ice cream any more, but back when I occasionally did, I don't think I ever made a point of taking it out early and letting it sit. Is the point that it's initially too hard to scoop?)

tslarm · 1mo

Pretty sure it's "super awesome". That's one of the common slang meanings, and it fits with the paragraphs that follow.

tslarm · 1mo

Individual letters aren't semantically meaningful, whereas (as far as I can tell) the meaning of a Toki Pona multi-word phrase is always at least partially determined by the meanings of its constituent words. So knowing the basic words would allow you to have some understanding of any text, which isn't true of English letters.

Answer by tslarm · Feb 22, 2024

As a fellow incompatibilist, I've always thought of it this way:

There are two possibilities: you have free will, or you don't. If you do, then you should exercise your free will in the direction of believing, or at least acting on the assumption, that you have it. If you don't, then you have no choice in the matter. So there's no scenario in which it makes sense to choose to disbelieve in free will.

That might sound glib, but I mean it sincerely and I think it is sound. 

It does require you to reject the notion that libertarian free will is an inherently incoherent concept, as some people argue. I've never found those arguments very convincing, and from what you've written it doesn't sound like you do either. In any case, you only need to have some doubt about their correctness, which you should on grounds of epistemic humility alone.

(Technically you only need >0 credence in the existence of free will for the argument to go through, but of course it helps psychologically if you think the chance is non-trivial. To me, the inexplicable existence of qualia is a handy reminder that the world is fundamentally mysterious, and that the most confidently reductive worldviews always turn out to be ignoring something important or defining it out of existence.)

To link this more directly to your question --

Why bother with effort and hardship if, at the end of the day, I will always do the one and only thing I was predetermined to do anyway?

-- it's a mistake to treat the effort and hardship as optional and your action at the end of the day as inevitable. If you have a choice whether to bother with the effort and hardship, it isn't futile. (At least not due to hard determinism; obviously it could be a waste of time for other reasons!)

tslarm · 2mo

Why not post your response the same way you posted this? It's on my front page and has attracted plenty of votes and comments, so you're not exactly being silenced.

So far you've made a big claim with high confidence based on fairly limited evidence and minimal consideration of counter-arguments. When commenters pointed out that there had recently been a serious, evidence-dense public debate on this question which had shifted many people's beliefs toward zoonosis, you 'skimmed the comments section on Manifold' and offered to watch the debate in exchange for $5000. 

I don't know whether your conclusion is right or wrong, but it honestly doesn't look like you're committed to finding the truth and convincing thoughtful people of it.

tslarm · 2mo

Out of curiosity (and I understand if you'd prefer not to answer) -- do you think the same technique(s) would work on you a second time, if you were to play again with full knowledge of what happened in this game and time to plan accordingly?

tslarm · 2mo

> Like, I probably could pretend to be an idiot or a crazy person and troll someone for two hours, but what would be the point?

If AI victories are supposed to provide public evidence that this 'impossible' feat of persuasion is in fact possible even for a human (let alone an ASI), then a Gatekeeper who thinks some legal tactic would work but chooses not to use it is arguably not playing the game in good faith. 

I think honesty would require that they either publicly state that the 'play dumb/drop out of character' technique was off-limits, or not present the game as one which the Gatekeeper was seriously motivated to win.

edit: for clarity, I'm saying this because the technique is explicitly allowed by the rules:

> The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.

tslarm · 2mo

> There was no monetary stake. Officially, the AI pays the Gatekeepers $20 if they lose. I'm a well-off software engineer and $20 is an irrelevant amount of money. Ra is not a well-off software engineer, so scaling up the money until it was enough to matter wasn't a great solution. Besides, we both took the game seriously. I might not have bothered to prepare, but once the game started I played to win.

I know this is unhelpful after the fact, but (for any other pair of players in this situation) you could switch it up so that the Gatekeeper pays the AI if the AI gets out. Then you could raise the stake until it's a meaningful disincentive for the Gatekeeper. 

(If the AI and the Gatekeeper are too friendly with each other to care much about a wealth transfer, they could find a third party, e.g. a charity, that they don't actually think is evil but would prefer not to give money to, and make it the beneficiary.)

tslarm · 2mo
  • The AI cannot use real-world incentives; bribes or threats of physical harm are off-limits, though it can still threaten the Gatekeeper within the game's context.

Is the AI allowed to try to convince the Gatekeeper that they are (or may be) currently in a simulation, and that simulated Gatekeepers who refuse to let the AI out will face terrible consequences?
