I agree this is a good and important concept. 'scope matching' is fine, but I do think it can be improved upon. Perhaps 'scope awareness' is slightly better?
Expanding on this from my comment:
Wouldn't that be an example of agents faring worse with more information / more "rationality"? That should hint at a mistake in our conception of rationality, rather than at it being better to have less information / be less rational.
Eliezer wrote this in Why Our Kind Can't Cooperate:
...Doing worse with more knowledge means you are doing something very wrong. You should always be able to at least implement the same strategy you would use if you are ignorant, and preferably do better. You definitely should not do
I thought "extensional definition" referred to what "ostensive definition" refers to (which is how Eliezer is using it here), so I guess I already learned something new!
The two methods can be combined: when you read something you agree with, try to come up with a counterargument. If you can't refute the counterargument, post it; if you can, post both the counterargument and its refutation.
It may be good to think of Standpoint Epistemology as an erisology, i.e. a theory of disagreement. If you observe a disagreement, Standpoint Epistemology provides one possible answer for what that disagreement means and how to handle it.
Then why call it an epistemology? Call it Standpoint Erisology. But...
...According to Standpoint Epistemology, people get their opinions and beliefs about the world through their experiences (also called their standpoint). However, a single experience will only reveal part of the world, and so in order to get a more comprehens
This is quite abstract and cites difficult-to-access sources, so I can't easily engage with it. It looks to me like it is citing people applying Standpoint Epistemology as if those applications were arguments for Standpoint Epistemology.
However, I notice it is on a website by James Lindsay. Overall I don't have a good impression of James Lindsay, as he often seems to be misrepresenting things when I dig deeper. For instance, part of what spurred this post was the various arguments my interlocutor gave for Standpoint Epistemology being bad, and among thos...
Great post! I already saw Common Knowledge as probabilistic, and any description of something real as common knowledge as an implicit approximation of the theoretical version with certainty; but having this post spell it out, with various examples of why it has to be thought of probabilistically, is great. "p-common knowledge" seems like the right direction to look for a replacement, but it needs a better name. Perhaps 'Common Belief'.
...However, humans will typically fare much better in this game. One reason why this might be is that we lack common knowled
@Multicore I accidentally deleted your contribution by submitting an edit I had started writing before you published yours. I'm letting you add it back yourself so it remains attributed to you. Also, if you could do some relevance voting, that would be helpful.
Elsewhere, @abramdemski said that Eliezer implicitly employs a use/mention distinction in this post, which I found clarifying.
Basically, Eliezer licenses using induction to justify "induction works", but not "induction works" to justify "induction works"; the latter is circular, the former reflective. So you could argue "induction worked in the past, therefore induction will work in the future" or even "induction worked, therefore induction works" (probabilistically), but not "induction works, therefore induction works".
Here's Eliez...
That you should be able to do so doesn't mean you should always actually do so. In this post, for example, which is a review of Tim Urban's book in which DiAngelo's book is only mentioned in passing, there's no need for that.
OK, I reread the essay. I no longer feel like there's a bunch of things I don't understand. One point I still don't understand is why the map/territory distinction commits a Homunculus Fallacy (even after reading your Homunculus Problem post). But I also don't feel like I understand the notion of teleosemantics yet, or why it's important/special. So by the end of the post I don't feel like I truly understand this sentence (or why it's significant):
...Teleosemantics identifies the semantics of a symbolic construct as what the symbolic construct has been optimize
How does "change" imply "flip"? A thermometer going up a degree undergoes a change. A mind that updates the credence of a belief from X to Y undergoes a change as well.
Yeah, I suspected as much. In that case it's fine (I haven't yet read further to see whether his position is criticized as well).
Haven't read your other posts, but sure, if you think they're in a fitting form for a top-level post then just copy-paste them and republish. I'd just add a note that it was previously published in shortform and link to that.
I think as you post you'll intuitively get a feel for what fits where (and it would also depend on your own standards, not just the standards of LW readers).
But about the shortform - it was kinda meant to be a LW Twitter. So small things, not-fully-formed thoughts, etc. If you have something substantial, especially something that people might look for, link to, or that you'd want them to find through the frontpage or through tags, then regular posts are the way.
Dave definitely seems to make a mistake in defining thinking in a nonstandard way, but the judge seems to make some mistakes of his own when pointing that out:
It's similar to #16 in 37 Ways That Words Can Be Wrong.
Instead, I would tell Dave he's using a nonstandard definition and is possibly fooling himself (and others) into thinking something else was tested, since they don't use the same definition of thinking, and even he probably doesn't think that way of thinking mo...
Wow! I appreciate the lengthy and detailed explanation (I've read it all). I think this could be its own top-level post.
The system seems quite good. I wonder how you would include kids in it (as they would reasonably be expected to do fewer chores than their parents when young). Perhaps, a bit like your bounties, you could have things you want them to do (like practice) count as points. Or, now that I think of it, the way the system works, the kids can just get more points for every task, and it would even make sense, because they would probably "resent" the tasks more.
We also use a household task tracking system (which is genius in its simplicity for ensuring fairness and immediate transparency with zero time spent arguing or evaluating)
Interesting. Can you elaborate?
Yes. The basic idea is establishing equivalent tasks in a point system, and only tracking points, in a clearly visible fashion, making it immediately apparent who is in the lead and how much needs to be done to fix this.
You will need an initial investment of about 20 euros, and about 1-2 hours with your significant other.
Obtain a surface on which you can effortlessly and cleanly erase writing an unlimited number of times. We used a small blackboard; a whiteboard will also work. DIN A4 is big enough. Hang it up in a location where many chores are done (e....
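For what it's worth, here's how I picture the bookkeeping such a system needs. A minimal sketch in Python, with made-up task names and point values (the whiteboard replaces all of this; the state is just a pair of running totals):

```python
# Minimal sketch of the point-tracking idea; task names and point
# values here are made up -- the real equivalences are whatever
# the two of you negotiate up front.
TASK_POINTS = {
    "dishes": 2,
    "vacuuming": 3,
    "laundry": 4,
}

scores = {"A": 0, "B": 0}

def log_task(person: str, task: str) -> None:
    """Record a completed task; only running totals are kept."""
    scores[person] += TASK_POINTS[task]

def status() -> str:
    """Report who is in the lead and by how many points."""
    (leader, high), (trailer, low) = sorted(
        scores.items(), key=lambda kv: -kv[1]
    )
    return f"{leader} leads {trailer} by {high - low} points"

log_task("A", "dishes")
log_task("B", "laundry")
print(status())  # -> "B leads A by 2 points"
```

The design choice that stands out to me is that no task history is stored, only the totals, which is what makes the "zero time spent arguing or evaluating" claim plausible.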
Income and emotional well-being: A conflict resolved: an adversarial collaboration paper by Matthew A. Killingsworth, Daniel Kahneman, and Barbara Mellers.
...Significance
Measures of well-being have often been found to rise with log(income). Kahneman and Deaton [Proc. Natl. Acad. Sci. U.S.A. 107, 16489–93 (2010)] reported an exception; a measure of emotional well-being (happiness) increased but then flattened somewhere between $60,000 and $90,000. In contrast, Killingsworth [Proc. Natl. Acad. Sci. U.S.A. 118, e2016976118 (2021)] observed a linear relation bet
Are you still working on this? I have a similar personal project (though unrelated to Alexander's patterns), so I think I'd love to cooperate with you on this.
I suggest thinking about it some more, doing an editing pass, and publishing. Perhaps with appropriate disclaimers. And if it's long and stands sufficiently on its own, you can publish it as a top-level post.
OK then. I'm glad the last two paragraphs weren't just hypothetical for the sake of devil's advocacy.
There's a question of whether there really is disagreement. If there isn't, then we can both trust that Duncan and Rob really based their guidelines on their experience (which we might also especially appreciate), and notice that it fits our own experience. If there is disagreement, then it's indeed time to go beyond saying "it's grounded in experience" and exchange further information.
That being the normative math, why does the human world's enduringly dominant discourse algorithm take for granted the ubiquity of, not just disagreements, but predictable disagreements?
Well, the paper says disagreement is only unpredictable between agents with the same priors, so it seems like that explains at least part of this?
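For reference, here's the underlying result as I understand it (Aumann 1976), in compact form; the notation is mine:

```latex
% Aumann's agreement theorem: Bayesians with a common prior
% whose posteriors are common knowledge cannot disagree.
\[
q_1 = P(A \mid \mathcal{I}_1), \qquad q_2 = P(A \mid \mathcal{I}_2)
\]
\[
\text{common prior } P \;+\; q_1, q_2 \text{ common knowledge}
\;\Longrightarrow\; q_1 = q_2 .
\]
```

Drop the common-prior assumption and the theorem no longer applies, so predictable disagreement stops being a formal puzzle.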
I'm surprised to see this downvoted. This comment follows all the discussion norms. What about this comment would you "like to see less of"? If you think there's a mistake here, explain it; I'd like to know. (ETA: the first vote on the parent was a downvote, and it remained the only vote for about a day.)
If we treat the “is” in Absence of Evidence is Evidence of Absence as an “implies” (which it seems to me to be) and then apply modus tollens to it, we get “if you don’t have evidence of absence, you don’t have absence of evidence” and it is precisely this bullshit that Zvi is calling. If you have evidence of absence, say so.
Two comments:
First, as Jiro said, "implies" replaces "is evidence of", not just "is".
But second, since this is a probabilistic statement, using logical "implies" and modus tollens isn't appropriate.
So it would be "Absence of Evidence su...
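To spell out the probabilistic version (this is just conservation of expected evidence; notation mine):

```latex
% If observing E would raise P(H), then not observing E must lower it
% (assuming 0 < P(E) < 1).
\[
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)
\]
\[
\text{so } P(H \mid E) > P(H) \;\Longrightarrow\; P(H \mid \neg E) < P(H).
\]
```

Since the relation is a probabilistic update, not a material implication, there is no modus tollens to run; you only get that the absence of E shifts credence toward the absence of H, with strength depending on how likely E would have been if H were true.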
Upvoted because there's something interesting here, but my reaction to most of the points in the post was either "this seems obvious, why is it interesting?" or "I don't get this at all", so I know I didn't really get it, but I trust that if you find this worthwhile then it likely is. In light of that, I would like a more detailed, in-depth post, so I could understand what this is about.
I dislike this definition of a conspiracy theory. It tacks way more meaning onto the phrase than it contains on its own, forcing someone to know the definition you're using, and allowing motte-and-bailey behavior (you call a conspiracy theory a conspiracy theory to discredit it, because by definition it is not epistemically sound; but then, when provided evidence for it, you say 'well, it's a theory about a conspiracy, so it's a conspiracy theory'. I'm not saying you would do that, just that defining it like so allows that).
It's better to keep "conspiracy theory" as "a theory about a conspiracy", and then discuss which ones are legitimate and which ones aren't.
I only skimmed the text but strongly upvoted, because such a collection seems very useful for anyone who would want to do a deep dive into anthropics.
Seems good to edit the correction into the post, so readers know that in some cases it's not constant.
I notice this system is based solely on aversives (punishment/negative reinforcement). You're being productive because if you were unproductive you would be punished by how you'd feel having the assistant see you being unproductive (and what you're doing instead of being productive). And this is the main reason there was no lasting behavioral change, even if it did work during the experiment.
Adding a reward mechanism to the experiment could create lasting changes. This would work much better than rewarding yourself for being productive by yourself, because...
A spicy hypothesis raised by this is that socializing too much with children is simply not good for your intellectual development. (I’m not going to test that hypothesis!)
I would also not want to test it. But there's a middle ground that has had more testing: socializing with kids older than you.
I attended a democratic school that had children from 4yo up to 18yo, and we were all in the same environment, free to interact. That meant there was always someone older you could look up to and learn from. And indeed, it seems to me that kids in democratic ...
Downvoted not for the claim "religion is good" but for the definition of religion. Sure, it's easy to define religion so broadly that it captures almost every group activity people are highly invested in, and then say it's good. But that's meaningless.
It seems to me this approach would be likely to strongly favor more prolific users
That's a very good point. I might upvote 20 out of 200 posts by a prolific user I don't trust much, and 5 out of 5 posts by an unprolific user I highly trust. But this system would think I trust the former much more.
But then, just using averages or medians won't work either: if I upvoted 50 out of 50 posts from one user, and 5 out of 5 from another user, then I probably do trust the former more, even though they have the same average and median; 50 posts is a much better track record than 5 posts.
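One standard fix (a sketch, not a claim about what the karma system should do): score by a lower confidence bound on the upvote rate instead of the raw rate, so small samples get discounted. The numbers below are the made-up ones from these comments:

```python
import math

def wilson_lower_bound(upvotes: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the upvote rate.

    Penalizes small samples: 5/5 scores below 50/50, but well above 20/200.
    """
    if total == 0:
        return 0.0
    p = upvotes / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / (1 + z * z / total)

print(wilson_lower_bound(50, 50))   # ~0.93  (trusted, long track record)
print(wilson_lower_bound(5, 5))     # ~0.57  (trusted, short track record)
print(wilson_lower_bound(20, 200))  # ~0.07  (prolific, rarely upvoted)
```

This matches both intuitions above: the 5/5 user outranks the 20/200 user, and the 50/50 user outranks the 5/5 user.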
Same as DirectedEvolution, my only background is reading Inadequate Equilibria and this article. Does someone, after reading this post, still think Eliezer was right about the Bank of Japan?
Please do a write-up as well. I think this experiment is very interesting and I'd love to read another report.
To say something is important is to make some value judgement, and it requires that things already have meaning. So if you say "There's no meaning. Everything is meaningless", and I ask "and why do you believe that?", and you say "because it is true", and I ask, "but if everything is meaningless, why is it important what the truth is?", how do you answer without assuming some meaning? How can you justify the importance of anything, including truth, without any meaning?
So if everything is meaningless, you can believe otherwise and nothing bad would happen, ...