All of Yoav Ravid's Comments + Replies

To say something is important is to make some value judgement, and it requires that things already have meaning. So if you say "There's no meaning. Everything is meaningless", and I ask "and why do you believe that?", and you say "because it is true", and I ask, "but if everything is meaningless, why is it important what the truth is?", how do you answer without assuming some meaning? How can you justify the importance of anything, including truth, without any meaning?

So if everything is meaningless, you can believe otherwise and nothing bad would happen, ... (read more)

1 · kremlin · 2mo
Thank you, I believe I understand

Looking forward to your post!

I agree this is a good and important concept. 'scope matching' is fine, but I do think it can be improved upon. Perhaps 'scope awareness' is slightly better?

Cause "according to the criterion of others' welfare" doesn't require "at ones own expense".

1 · Zolmeister · 3mo
Sure, but it does not preclude it. Moreover, if the costs of the actions are not borne by the altruist (e.g. by defrauding customers, or extortion), I would not consider it altruism. In this sense, altruism is a categorization tag placed on actions. I do see how you might add a second, deontological definition ('a belief system held by altruists'), but I wouldn't. From the post [https://www.lesswrong.com/posts/EA39yRbhBbrccXnHi/inner-goodness], "Humane" or "Inner Goodness" seem more apt in exploring these ideas.

Expanding on this from my comment:

Wouldn't that be an example of agents faring worse with more information / more "rationality"? Shouldn't that hint at a mistake in our conception of rationality, rather than at it being better to have less information / be less rational?

Eliezer wrote this in Why Our Kind Can't Cooperate:

Doing worse with more knowledge means you are doing something very wrong.  You should always be able to at least implement the same strategy you would use if you are ignorant, and preferably do better.  You definitely should not do

... (read more)

Formatting error: The end of the first paragraph is at the start of the second paragraph.

that it, it shouldn’t apply

That is*

2 · jasoncrawford · 3mo
Fixed, thanks

I thought "extensional definition" referred to what "ostensive definition" refers to (which is how Eliezer is using it here), so I guess I already learned something new!

The two methods can be combined: when you read something you agree with, try to come up with a counterargument. If you can't refute the counterargument, post it; if you can, post both the counterargument and its refutation.

It may be good to think of Standpoint Epistemology as an erisology, i.e. a theory of disagreement. If you observe a disagreement, Standpoint Epistemology provides one possible answer for what that disagreement means and how to handle it.

Then why call it an epistemology? Call it Standpoint Erisology. But...

According to Standpoint Epistemology, people get their opinions and beliefs about the world through their experiences (also called their standpoint). However, a single experience will only reveal part of the world, and so in order to get a more comprehens

... (read more)
2 · tailcalled · 3mo
Erisology is a very recently coined term, whereas the standard is to call it an epistemology. Also, as you point out, there are epistemological elements to it, and part of why I'm emphasizing the erisology angle is because I've been thinking of writing about a typology of erisologies that I have coined:

Mistake theory (disagreements originate in mistakes):

* ability erisology: some people are smarter (or otherwise better) than others, and this leads to disagreements between the informed and the uninformed
* standpoint erisology: people have different experiences, leading to different beliefs, and if they cannot pool their info to come to a shared set of knowledge, then they end up persistently disagreeing

Conflict theory (disagreements originate in conflict):

* battle erisology: some people represent concerns that are in tension with your concerns and they are trying to promote their concerns by spreading lies and interfering in communication
* trauma erisology: different people have had different negative experiences that make them avoid different sources of information, leading to them having different opinions due to their different sources of information

(Trauma erisology would be conflict theory because the "negative experiences" are usually related to conflict.)

"Sets up a motte and bailey" sounds like nonsense to me. Any time you mention multiple things, it is possible to ignore critiques of the less-defensible things and shift the discussion to the more-defensible claims. If someone directly disagreed with the whole "white people need to shut up and listen" element, I wouldn't deny that I've linked to that, and would instead engage with that area.

I've found it (or rather, theories adjacent to it which I learned before I got into this specific instantiation of it) to have profound implications for how I've been thinking about knowledge and evidence. 🤷 If you have already integrated the lessons for this, then maybe it is less

This is quite abstract and cites difficult-to-access sources, so I can't easily engage with it. It looks to me like it is citing people applying Standpoint Epistemology as if those applications were the arguments for Standpoint Epistemology.

However I notice it is on a website by James Lindsay. Overall I don't have a good impression of James Lindsay, as he often seems to be misrepresenting things when I dig deeper. For instance part of what spurred this post was the various arguments my interlocutor gave for Standpoint Epistemology being bad, and among thos... (read more)

Great post! I already saw Common Knowledge as probabilistic, and any description of something real as common knowledge as an implicit approximation of the theoretical, certain version. But having this post spell it out, and give various examples of why it has to be thought of probabilistically, is great. "p-common knowledge" seems like the right direction to look for a replacement, but it needs a better name. Perhaps 'Common Belief'.

However, humans will typically fare much better in this game. One reason why this might be is that we lack common knowled

... (read more)
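A toy model of one reason certainty-based common knowledge has to give way to something probabilistic (my illustration, not from the post): over an unreliable channel, the probability that k nested levels of "I know that you know that..." all obtain decays geometrically, so the full infinite tower never holds with certainty.

```python
# Toy model (mine, not from the post): two agents acknowledge each
# other's messages over a channel that delivers with probability p.
# Level k of "I know that you know that ..." requires the first k
# messages to have all arrived, which happens with probability p**k.
p = 0.99
for k in [1, 2, 5, 10, 100, 1000]:
    print(k, p ** k)
# Even at 99% reliability, the chance that ALL levels hold tends to 0,
# so common knowledge with certainty never obtains - hence the move to
# a probabilistic notion like p-common knowledge.
```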
2 · Yoav Ravid · 3mo
Expanding on this from my comment: Eliezer wrote this in Why Our Kind Can't Cooperate [https://www.lesswrong.com/posts/7FzD7pNm9X68Gp5ZC/why-our-kind-can-t-cooperate]: It's interesting to note that in that case, he specifically talked about coordination when saying that. And this post claims common knowledge can make rational agents specifically less able to cooperate. The given example is this game:

You didn't say the name of the game, so I can't go read about it, but thinking about it myself, it seems like one policy rational agents can follow that would fare better than naming zero is picking a number at random. If my intuition is correct, that would let each of them win half of the time, and the amount named would be pretty high (slightly less than 50?). An even better policy would be to randomly pick between the maximum and the maximum-1, which I expect would outperform even humans. With this policy, common knowledge/belief would definitely help.

The InfiniCheck message app problem is a bit more complicated. Thinking about it, it seems like the problem is that it always creates an absence of evidence (which is evidence of absence [https://www.lesswrong.com/posts/mnS2WYLCGJP2kQkRn/absence-of-evidence-is-evidence-of-absence]) which equals the evidence. I.e., the way the system is built, it always provides an equal amount of evidence and counter-evidence, so the agent is always perfectly uncertain, and demands/desires additional information. (If so, then a finite number of checks should make the problem terminate on the final check - correct?) The question is whether it can be said that the demand/desire for additional information, rather than the additional information itself, creates the problem, or whether these can't actually be distinguished, cause that would just be calling the absence of evidence "demand" for information rather than just information (which it is).

Also, this actually seems like a case where humans would be affected in a similar way. Eve
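A quick Monte Carlo sketch of the policies discussed above. Since the game isn't named in the thread, the payoff rules below are pure assumption (Traveler's-Dilemma-style: both players get the lower of the two named numbers, with a small bonus for whoever named the lower number and an equal penalty for the other):

```python
import random

# Assumed payoff rules (NOT stated in the post or comment): both players
# name a number in [0, 100]; both receive the lower of the two numbers,
# with a +2 bonus to whoever named the lower number and a -2 penalty to
# the other (Traveler's-Dilemma-style).
BONUS = 2

def payoff(a, b):
    if a == b:
        return a, b
    low = min(a, b)
    return (low + BONUS, low - BONUS) if a < b else (low - BONUS, low + BONUS)

def average_payoffs(policy, trials=100_000):
    total_a = total_b = 0
    for _ in range(trials):
        pa, pb = payoff(policy(), policy())
        total_a += pa
        total_b += pb
    return total_a / trials, total_b / trials

always_zero = lambda: 0                      # the "rational" equilibrium policy
uniform = lambda: random.randint(0, 100)     # pick a number at random
near_max = lambda: random.choice([99, 100])  # maximum or maximum-1 at random

print(average_payoffs(always_zero))  # (0.0, 0.0)
print(average_payoffs(uniform))      # ~33 each (the average lower number)
print(average_payoffs(near_max))     # ~99.25 each
```

Under these assumed rules, the random-near-the-maximum policy does indeed beat both naming zero and picking uniformly at random.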

@Multicore I accidentally deleted your contribution by submitting an edit I started writing before you published yours. I'm letting you add it back so it remains attributed to you. Also, if you can do some relevance voting that would be helpful.

Elsewhere, @abramdemski said that Eliezer implicitly employs a use/mention distinction in this post, which I found clarifying. 

Basically, Eliezer licenses using induction to justify "induction works" but not "induction works" to justify "induction works", the latter being circular, and the former being reflective. So you could argue "Induction worked in the past, therefore induction will work in the future" or even "induction worked, therefore induction works" (probabilistically), but not "Induction works, therefore induction works".

 Here's Eliez... (read more)
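One toy way to cash out the probabilistic "induction worked, therefore induction works" move (my illustration, not Eliezer's) is Laplace's rule of succession:

```python
from fractions import Fraction

def next_success_probability(successes, trials):
    """Laplace's rule of succession: with a uniform prior over the
    unknown success rate, after s successes in n trials the posterior
    probability that the next trial succeeds is (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# "Induction worked n times in the past, therefore induction will
# probably work next time" - confidence grows with the track record
# but never reaches 1, so the argument uses induction's track record
# as evidence (reflective) rather than assuming its conclusion (circular).
for n in [0, 1, 10, 100, 1000]:
    print(n, float(next_success_probability(n, n)))
# 0 -> 0.5, 10 -> 0.917, 100 -> 0.990, 1000 -> 0.999
```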

You should be able to do so; that doesn't mean you should always actually do so. In this post, for example, which is a review of Tim Urban's book and where DiAngelo's book is only mentioned in passing, there's no need for that.

3 · [comment deleted] · 3mo

ok, I reread the essay. I no longer feel like there's a bunch of things I don't understand. One point I still don't understand is why the map/territory distinction commits a Homunculus Fallacy (even after reading your Homunculus Problem post). But I also don't feel like I understand the notion of teleosemantics yet, or why it's important/special. So by the end of the post I don't feel like I truly understand this sentence (or why it's significant):

Teleosemantics identifies the semantics of a symbolic construct as what the symbolic construct has been optimize

... (read more)

How does "change" imply "flip"? A thermometer going up a degree undergoes a change. A mind that updates the credence of a belief from X to Y undergoes a change as well.

2 · Adam Zerner · 3mo
Yeah that's a fair question/point. I was thinking about that as well. I think I just get the impression that, thinking about common usage, in the context of "change your mind" people usually mean some sort of "flip". Not everyone though, some people might just mean "update".

Perhaps 'implicated', 'implies' and 'implied' are words people will find less confusing?

"undefinability of truth" seems to be missing a link (currently it links back to this post)

Yeah, I suspected that to be the case. In that case it's fine (I haven't yet read further to see if his position is criticized as well)

Haven't read your other posts, but sure, if you think they're in a fitting form for a top level post then just copy paste them and republish. I'd just add a note that it was previously published in shortform and link to that.

I think as you post you'll intuitively get a feel for what fits where (and it would also depend on your own standards, not just the standards of LW readers).

But about the shortform - it was kinda meant to be a LW twitter. So small things, not fully formed thoughts, etc. If you have something substantial, especially something that people might look for, link to, or that you'd want them to find through the frontpage or through tags, then regular posts are the way.

Dave definitely seems to make a mistake by defining thinking in a nonstandard way, but the judge seems to make some mistakes of his own when pointing that out:

  1. "proving" that his definition of thinking is right
  2. talking about the "true meaning" of a word.

It's similar to #16 in 37 Ways That Words Can Be Wrong.

Instead I would tell Dave he's using a nonstandard definition and is possibly fooling himself (and others) into thinking something else was tested as they don't use the same definition of thinking, and even he probably doesn't think that way of thinking mo... (read more)

2 · Gordon Seidoh Worley · 3mo
This is great. Alas, the poor judge is a rhetorical device doomed to never read the sequences. He exists purely to advocate a different sort of wrong view that is commonly held by many folks, though thankfully not by a sizable chunk of folks on this site.

Wow! I appreciate the lengthy and detailed explanation (I've read it all). I think this could be its own top level post.

The system seems quite good. I wonder how you would include kids in it (as they would reasonably be expected to do fewer chores than their parents when young). Perhaps, a bit like your bounties, you could have things you want them to do (like practice) count as points. Or, now that I think of it, the way the system works, the kids can just get more points for every task, and it would even make sense because they would probably "resent" the tasks more.

1 · Portia · 3mo
If you think this sort of thing could be its own top level post, I think I have been severely abusing the idea behind the shortform function with what I have poured in there in the last two days. (Mostly because I could not find the button for actual posts, and liked the low pressure scenario of not needing to edit, because I loathe loathe loathe editing text, to the degree where telling myself I need to first often means I publish nothing at all.) I feel my thoughts there on civil disobedience for AI safety, an open letter on AI funding, thoughts on recurrent feedback and sentience in AI vs. biological systems, and tactical concerns on recruiting AI researchers were more important than how I ensure that my girlfriend and I do fair shares of housework. - Can one retrospectively upgrade shortforms into posts without having to edit them?
1 · Portia · 3mo
There is unfortunately an inherent difficulty with the system as is in adding a third party. If I erase my girlfriend's points because I did the same amount of stuff, the board is empty, because we are on equal terms; my labour counters hers. That is part of why I love this system - as you add points, you do minimal math on the fly in seconds (if the top column (1 point) reads KKKK, and the second (5 points) reads K, and you do four one point things, you erase on the top to KKK and add at the bottom to KK), but you always end up with a board which instantly establishes the status quo; there is no additional task where you need to sit down with a lengthy piece of paper with scribbles and interpret it to make sense of it.

But if you add a third party - if she erases my point with hers, this looks equalised, but my girlfriend has contributed zilch. If you find a hack that addresses that, I would be curious to hear it. We won't be having kids, but we are poly, so we might end up having an application for it.

If you want to use this system as is with your significant other, you could still employ the essence of the point ideas for kids. E.g. say they have to contribute x household points per week (tracked separately on their own tracking system), but let them choose which tasks they take, and when they do them, respecting their own talents and time and trusting them to choose wisely and plan. If by the end of the week they have not done them, well, they did not choose wisely, and you get to pick the shitty tasks for them that everyone else carefully avoided, and your kids being lazy becomes the excellent scenario of them having to do the shit you don't want to. (Within safe limits of course, but kids can take out the trash or clean the toilet.)

Another aspect I like about this system is that it encourages you to be proactive. Basically, if you slack, and end up with someone else needing to assess the todos and tell you what needs to be done, this is never in
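A minimal sketch of the board arithmetic described above, assuming only the two mark denominations (1 and 5) from the example; the function and variable names are mine:

```python
def renormalize(total):
    """Rewrite a point total as marks in a 1-point and a 5-point column.
    This is what makes the on-the-fly updates cheap: adding 4 points to
    KKKK|K (4 ones, 1 five = 9) means erasing one mark on top and adding
    one at the bottom, giving KKK|KK (3 ones, 2 fives = 13)."""
    fives, ones = divmod(total, 5)
    return ones, fives

def record(board, person, points):
    """Two-person net board: chores you do first erase the other
    person's marks; only the surplus is written up as your own."""
    other = "A" if person == "B" else "B"
    cancelled = min(points, board[other])
    board[other] -= cancelled
    board[person] += points - cancelled
    return board

print(renormalize(9))   # (4, 1) -> KKKK in the ones column, K in the fives
print(renormalize(13))  # (3, 2) -> KKK in the ones column, KK in the fives

board = {"A": 9, "B": 0}      # A is 9 points ahead
print(record(board, "B", 4))  # {'A': 5, 'B': 0} - B worked off 4 points
```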

We also use a household task tracking system (which is genius in its simplicity for ensuring fairness and immediate transparency with zero time spent arguing or evaluating)

Interesting. Can you elaborate?

Yes. The basic idea is establishing equivalent tasks in a point system, and only tracking points, in a clearly visible fashion, making it immediately apparent who is in the lead, and how much needs to be done to fix this.

You will need an initial investment of about 20 euros, and about 1-2 h of time with your significant other.

Obtain a surface on which you can effortlessly and cleanly erase writing an unlimited number of times. We used a small blackboard, whiteboard will also work. DIN A4 is big enough. Hang it up in a location where many chores are done (e.... (read more)

Income and emotional well-being: A conflict resolved: an adversarial collaboration paper by Matthew A. Killingsworth, Daniel Kahneman, and Barbara Mellers.

Significance

Measures of well-being have often been found to rise with log (income). Kahneman and Deaton [Proc. Natl. Acad. Sci. U.S.A. 107, 16489–93 (2010)] reported an exception; a measure of emotional well-being (happiness) increased but then flattened somewhere between $60,000 and $90,000. In contrast, Killingsworth [Proc. Natl. Acad. Sci. U.S.A. 118, e2016976118 (2021)] observed a linear relation bet

... (read more)
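For concreteness, "rise with log(income)" means each doubling of income buys the same increment of well-being; a tiny illustration with a made-up slope (the number is purely illustrative, not from the paper):

```python
import math

b = 1.0  # illustrative slope, not a number from the paper

# Well-being ~ a + b*log(income): each doubling of income adds the same
# well-being increment, so $30k -> $60k helps as much as $120k -> $240k.
for income in [30_000, 60_000, 120_000, 240_000]:
    print(f"${income:,}: {b * math.log2(income):.3f}")
# successive values differ by exactly b - equal steps per doubling
```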

Are you still working on this? I have a similar personal project to this (though unrelated to Alexander's patterns), so I think I'd love to cooperate with you on this.

2 · Vaniver · 3mo
Yes, with a group, but slowly. The current target, I think, is trying to get the whole prerequisite graph for a particular idea implemented, and then expand from there.

I never heard "red for girls" (or "red not for boys", for that matter), only "pink for girls".

I suggest thinking about it some more, doing an editing pass, and publishing. Perhaps with appropriate disclaimers. And if it's long and stands sufficiently on its own, you can publish it as a top level post.

1 · LVSN · 3mo
Yep!

Ok then. I'm glad the last two paragraphs weren't just hypothetical for the sake of devil's advocacy.

There's a question of whether there really is disagreement. If there isn't, then we can both trust that Duncan and Rob really based their guidelines on their experience (which we might also especially appreciate), and notice that it fits our own experience. If there's disagreement then it's indeed time to go beyond saying "it's grounded in experience" and exchange further information.

4 · Said Achmiz · 3mo
Well, I certainly disagree! So, yes, there is disagreement.

That being the normative math, why does the human world's enduringly dominant discourse algorithm take for granted the ubiquity of, not just disagreements, but predictable disagreements?

Well, the paper says disagreement is only unpredictable between agents with the same priors, so it seems like that explains at least part of this?

I'm surprised to see this downvoted. This comment follows all the discussion norms. What about this comment would you "like to see less of"? If you think there's a mistake here, explain it, I'd like to know. (ETA: the first vote on the parent was a downvote, and it remained the only vote for about a day)

If we treat the “is” in Absence of Evidence is Evidence of Absence as an “implies” (which it seems to me to be) and then apply modus tollens to it, we get “if you don’t have evidence of absence, you don’t have absence of evidence” and it is precisely this bullshit that Zvi is calling. If you have evidence of absence, say so.

Two comments:

First, as Jiro said, "implies" replaces "is evidence of", not just "is".

But second, since this is a probabilistic statement, using logical "implies" and modus tollens isn't appropriate.

So it would be "Absence of Evidence su... (read more)
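To make the second point concrete, here's a toy Bayes calculation (numbers invented): absence of evidence shifts the probability of the hypothesis down without driving it to zero, which is why the logical contrapositive is the wrong tool.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h, evidence_seen):
    """Bayes' rule for a binary hypothesis H and binary evidence E."""
    if evidence_seen:
        like_h, like_not_h = p_e_given_h, p_e_given_not_h
    else:  # absence of evidence
        like_h, like_not_h = 1 - p_e_given_h, 1 - p_e_given_not_h
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Invented numbers: H makes E fairly likely (0.8), not-H less so (0.3).
# Not seeing E is then evidence of absence - it lowers P(H) - but it
# doesn't license the modus-tollens jump to "H is false":
print(posterior(0.5, 0.8, 0.3, evidence_seen=True))   # ~0.727
print(posterior(0.5, 0.8, 0.3, evidence_seen=False))  # ~0.222
```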

6 · Yoav Ravid · 3mo
I'm surprised to see this downvoted. This comment follows all the discussion norms. What about this comment would you "like to see less of"? If you think there's a mistake here, explain it, I'd like to know. (ETA: the first vote on the parent was a downvote, and it remained the only vote for about a day)

Does anyone know a good analysis of the real effects of subsidies on price?

Upvoted because there's something interesting here, but my reaction to most of the points in the post was either "this seems obvious, why is it interesting?" or "I don't get this at all", so I know I didn't really get it, but I trust that if you find this worthwhile then it likely is. In light of that, I would like a more detailed, in-depth post, so I could understand what this is about.

2 · Gordon Seidoh Worley · 3mo
If it's helpful, the idea of teleosemantics is, in my judgement, the same thing I'm trying to point people at when I write about the problem of the criterion [https://www.lesswrong.com/tag/the-problem-of-the-criterion]. I go into some depth about the stuff around purpose in this post [https://www.lesswrong.com/posts/agvmvrzM6um462DC2/the-purpose-of-purpose]. I think that's the deepest and trickiest part to grok, and failing to grok it I think the stuff about truth [https://www.lesswrong.com/posts/omoK8jCMPNWmabFCt/truth-seeking-is-motivated-cognition] being contingent [https://www.lesswrong.com/posts/EnBqRkhPXfYFxPKCv/contingency-is-not-arbitrary] will fall flat. I'm also writing a book [https://www.lesswrong.com/s/HMs2yT9D6LjYR5jQT] about this topic. The chapter I hope to post this week tackles the problem of the criterion more head on, and the next chapter I'll write is about purpose (telos). You might find that helpful in understanding the idea, but alas it's not done yet.
2 · abramdemski · 3mo
Perhaps if you could gesture toward some of the points you don't get at all?

I dislike this definition of a conspiracy theory. It tacks on way more meaning to the phrase than it contains on its own, forcing someone to know the definition you're using, and allowing motte and bailey behavior (you call a conspiracy theory a conspiracy theory to discredit it because by definition it is not epistemically sound, but then when provided evidence for it you say 'well it's a theory about a conspiracy, so it's a conspiracy theory'. I'm not saying you would do that, just that defining it like so allows that.)

It's better to keep "conspiracy theory" as "a theory about a conspiracy", and then discuss which ones are legitimate and which ones aren't.

3 · DPiepgrass · 4mo
That's a very reasonable concern. But I don't think your proposal describes how people use the term "conspiracy theory" most of the time. Note that the reverse can happen too, where people dismiss an idea as a "conspiracy theory" merely because [https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world] it's a theory about a conspiracy. Perhaps we just have to accept that there are two meanings and be explicit about which one we're talking about.

I only skimmed the text but strongly upvoted cause such a collection seems very useful for anyone who would want to do a deep dive into anthropics.

2 · avturchin · 4mo
Thanks! I will turn it into a proper scientific article one day.

It seems good to edit the correction into the post, so readers know that in some cases it's not constant.

I notice this system is based solely on aversives (punishment/negative reinforcement). You're being productive because if you were unproductive you would be punished by how you'd feel having the assistant see you being unproductive (and what you're doing instead of being productive). And this is the main reason there was no lasting behavioral change, even if it did work during the experiment.

Adding a reward mechanism to the experiment could create lasting changes. This would work much better than rewarding yourself for being productive by yourself, because... (read more)

2 · Portia · 3mo
This is why I activated sound on my todoist. It makes a satisfying sound when I tick a task off. I'm embarrassed by how effective it is, and yet would feel silly not utilising the fact that my brain apparently adores clear, reachable, positive feedback so much it gets interested in tasks it otherwise would not. I also have treats I enjoy (like liquorice and barebells vegan protein bars) which I only eat at my desk, so that a craving for them has me wanting to get started on work right away.

We also use a household task tracking system (which is genius in its simplicity for ensuring fairness and immediate transparency with zero time spent arguing or evaluating), which involves writing my initials on a blackboard in our living room when I do chores. It is eerily satisfying. I visited my family at Christmas, and both my girlfriend and I were helping clear up the kitchen, and then clearly had a moment where we both wanted to jot down our initials, realised this was not a thing here, that there would be nothing tangible gained from how much we had respectively helped, and felt visibly annoyed and disappointed, and far less interested in helping the next night.

It had me wondering if I should go back to school measures - like, take a pretty wall calendar, and every day that I did the thing I selected as highest priority, or engaged for x hours, or whatever metric, I put a cute star sticker on my calendar, until I see an emerging row. Why are brains so into this. I feel like I am fooling a small, easily manipulatable child into following my life goals rather than arguing online, except that small child has the ability to fuck up my future.

A spicy hypothesis raised by this is that socializing too much with children is simply not good for your intellectual development. (I’m not going to test that hypothesis!)

I would also not want to test it. But there's a middle ground that has had more testing: socializing with kids older than you.

I attended a democratic school that had children from 4yo up to 18yo, and we were all in the same environment, free to interact. That meant there was always someone older you could look up to and learn from. And indeed, it seems to me that kids in democratic ... (read more)

Downvoted not for the claim "religion is good" but for the definition of religion. Sure, it's easy to define religion so broadly it captures almost every group activity people are highly invested in, and then say it's good. But that's meaningless.

7 · AnthonyC · 4mo
"Religion" as conventionally used is already so broad that it includes examples that lack almost every individual feature we typically associate with the category. It hardly seems coincidental that the OP chose Soto Zen, which manages to reject a lot of the components of other religions that are more problematic. Some religions lack gods. Some lack moral laws. Some lack a consistent set of claims about the universe at all. Some lack worship or prayer. Some lack priests or other intermediaries.  If we start from words being defined extensionally instead of by dictionaries, then this is fine. But if you try to find any vaguely natural seeming intensional definition at all, I think it's going to include a lot of extra stuff we don't usually think of as religious. Some of that stuff is good, and it might add up to a net good.
0 · Gordon Seidoh Worley · 4mo
Is it, though? As I see it, part of my confusion when younger was that I only really thought of religion as one thing. By expanding what we think of as religion, it changes the category in a way that is more useful to us. As I think of it, it's like taking back a word that we've let a few organizations take away from us and control the meaning of.

It seems to me this approach would be likely to strongly favor more prolific users

That's a very good point. I might upvote 20 out of 200 posts by a prolific user I don't trust much, and 5 out of 5 posts by an unprolific user I highly trust. But this system would think I trust the former much more.

But then, just using averages or medians won't work either, because if I upvoted 50 out of 50 posts from one user, and 5 out of 5 of another user, then I probably do trust the former more, even though they have the same average and median; 50 posts is a much better track record than 5 posts.
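One standard way to handle exactly this 50-of-50 vs. 5-of-5 problem (my suggestion, not from the thread) is to rank by a lower confidence bound on the upvote rate instead of the raw average, e.g. the Wilson score interval:

```python
import math

def wilson_lower_bound(upvoted, total, z=1.96):
    """Lower bound of the Wilson score interval for a binomial
    proportion (z=1.96 ~ 95% confidence). Small samples get a big
    penalty, so a short perfect record ranks below a long one."""
    if total == 0:
        return 0.0
    p = upvoted / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

print(wilson_lower_bound(50, 50))   # ~0.93 - long, perfect track record
print(wilson_lower_bound(5, 5))     # ~0.57 - same average, less evidence
print(wilson_lower_bound(20, 200))  # ~0.07 - prolific but rarely upvoted
```

This also addresses the prolific-user worry above: many posts with few upvotes produce a low bound, while a small but consistently upvoted record still ranks below an equally consistent long one.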

Same as DirectedEvolution, my only background is reading Inadequate Equilibria and this article. Does someone, after reading this post, still think Eliezer was right about the Bank of Japan?

5 · Yitz · 4mo
I do not. More accurately, I think he could be correct about the ground truth of his theory, but that he was incorrect to update as heavily as he seems to have, considering the evidence in favor is very weak.

Please do a write up as well. I think this experiment is very interesting and I'd love to read another report. 
