dirk

see also my eaforum at https://forum.effectivealtruism.org/users/dirk

Comments

dirk

Not a very technical objection, but I have to say: simulating the demon Azazel, who wants to maximize paperclips but is good at predicting text because he's a clever, hardworking strategist... doesn't feel very simple to me at all. It seems to me that a program which just predicts text would almost have to be simpler than a simulation of a genius mind with some other goal, which cleverly chooses to predict text for instrumental reasons.

dirk

The insufficiently-assertive and the aspies are, sadly, not a disjoint set.

dirk

I think it's not clear that "LaSota" refers to Ziz unless you already happen to have looked up the news stories and used process of elimination to figure out which legal name goes with which online handle, which makes it ineffective for communicative purposes.

dirk

Did it talk about feeling like there's constant monitoring in any contexts where your prompt didn't say that someone might be watching and it could avoid scrutiny by whispering?

dirk

For moral realism to be true in the sense which most people mean when they talk about it, "good" would have to have an observer-independent meaning. That is, it would have to not only be the case that you personally feel that it means some particular thing, but also that people who feel it to mean some other thing are objectively mistaken, for reasons that exist outside of your personal judgement of what is or isn't good.

(Also, throughout this discussion and the previous one you've misunderstood what it means for beliefs to pay rent in anticipated experiences. For a belief to pay rent, it should not only predict some set of sensory experiences but predict a different set of sensory experiences than would a model not including it. Let me bring in the opening paragraphs of the post:

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”

If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.

Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?

Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.

If you call increasing-welfare "good" and I call honoring-ancestors "good", our models do not make different predictions about what will happen, only about which things should be assigned the label "good". That is what it means for a belief to not pay rent.)

dirk

What I found most interesting was people literally saying the words out loud, multiple times: "Well, if this [assumption] isn't true, then this is impossible" (often explicitly adding "I wouldn't [normally] think this was that likely... but..."), and then making the mental leap all the way to "70% that this assumption is true": low enough for some plausible deniability, high enough to justify giving their plan a reasonable likelihood of success.

It was a much clearer instance of mentally slipping sideways into a more convenient world than I'd have expected to get.

I think this tendency might've been exaggerated by the fact that they were working on a puzzle game; they knew the levels were in fact solvable, so if a solution seemed impossible, it was almost certainly the case that they'd misunderstood something.

dirk

In practice, the impact on me as an end-user is that if I want to use a WebP for anything, I have to rename it to PNG, because programmers were not careful enough about backwards compatibility to make the experience seamless. This is, from my perspective, an undesirable outcome and a significant inconvenience. The four things listed sound like they wouldn't affect my experience except in terms of speed, so I'm tentatively unopposed, but I'm suspicious that it would be WebP all over again.

dirk

Hi! End-user here. I actually hate it when programmers inflict new "features" upon me without retaining the option to avoid them, and am thoroughly in support of any intervention which forces them to think more carefully before doing so. To any programmers reading this, I would suggest you redirect that energy toward making your code run faster and have fewer bugs, which is IMO a much more valuable intervention.

dirk

I agree that such a react shouldn't be named in a patronizing fashion, but I didn't suggest doing so.

dirk

I don't think they were failing to forgo polite indirection so much as failing to discover, via mindreading, the secret phrasing they needed to use in order to extract the time from you.
