I would have expected this post to mention Rand's excellent intuitive grasp of extortion-resistant decision theories!
...There was a different look in Mr. Thompson's eyes when he drew back, as if cornered, yet looked straight at Galt and said slowly, "Without me, you couldn't get out of this room, right now."
Galt smiled. "True."
"You wouldn't be able to produce anything. You could be left here to starve."
"True."
"Well, don't you see?" The loudness of homey joviality came back into Mr. Thompson's voice as if the hint given and received were now to be safely eva
Thanks. The thing that threw me off is that the ingredients label for the coffee-flavored Postum variant includes "natural coffee flavor". I can't quickly find reliable information about what "natural coffee flavor" means: a blog post from another beverage maker reports that natural coffee flavor "may be extracted from a variety of plants like chicory, garlic, and yes, sometimes coffee beans" but that the author "can't guarantee that the flavor company I buy natural coffee flavor from didn't extract one of the flavor compounds from coffee beans". I'm surpr...
The mechanisms are complementary: the drug attracts people into acquiring a taste for something that's not naturally tasty—but once acquired, people still have positive associations with the taste. (I've been drinking iced Postum every morning, despite objective reports that Postum doesn't actually taste good.)
The statement not saying much is essential for getting an impressively comprehensive list of signatories: the more you say, the more likely it is that someone whom you want to sign will disagree.
Relatedly, when we made DontDoxScottAlexander.com, we tried not to wade into a bigger fight about the NYT and other news sites, nor to make it an endorsement of Scott and everything he's ever written/done. It just focused on the issue of not deanonymizing bloggers when revealing their identity is a threat to their careers or personal safety and there isn't a strong ethical reason to do so. I know more high-profile people signed it because the wording was conservative in this manner.
I miss coffee. I used to have iced coffee in the morning a lot, but I've been scared of caffeine ever since my insomnia scare of January 2021. (I've been to psych prison for sleep deprivation madness twice, and I never want that to happen again.) Yes, I know about decaf, but they don't get all of the caffeine out, and if you haven't been using, your tolerance is all gone and you're super-sensitive to even small doses; I feel safer just abstaining altogether.
I was catching up with Seinfeld on Netflix out of '90s nostalgia, and in one episode they mentioned ...
We should distinguish between possible worlds that have low realityfluid due to implausible physics (like FTL travel, which only occurs in simulations, because causal universes don't look like that), and those that have low realityfluid due to implausibly fine-tuned sociology/psychology/&c. (which mostly occur in simulations rather than the basement, because the fine-tuning is most parsimoniously explained in terms of what Powers in the basement are interested in simulating).
This thought occasioned by how I'm really enjoying Hello, Tomorrow! on Apple T...
I complained at the time about you substituting a word definition without acknowledging it, which I think you're doing again here.
Bloom specifically used the phrase "Platonic ideal Art of Discourse"! When someone talks about Platonic ideals of discourse, I think it's a pretty reasonable reading on my part to infer that they're talking about simple principles of ideal reasoning with wide interpersonal appeal, like the laws of probability theory, or "Clarifying questions aren't attacks", or "A debate in which one side gets unlimited time, but the other si...
According to my subjective æsthetic whims, it's cute and funny to imagine the protagonist as not remembering both authors' names, in accordance with only being a casual formal epistemology fan. (The mentions of casual fandom, family archives, not reading all of chapter 3, &c. make this a short story that happens to be told in the second person, rather than "you" referring to the reader.)
seems like it is missing something in terms of making predictions about any world
I mean, you're right, but that's not what I was going for with that sentence. Suppose we were talking about a tiny philosophical "world" of opaque variables, rather than the real physical universe in all its richness and complexity. If you're just drawing samples from the original joint distribution, both networks will tell you exactly what you should predict to see. But if we suppose that there are "further facts" about some underlying mechanisms that generate that distrib...
I mean, it's true as a special case of minimum description length epistemology favoring simpler models. Chapter 18 of the Koller and Friedman book has a section about the Bayesian score for model comparison, which has a positive term involving the mutual information between variables and their parents (rewarding "fit"), and a negative term for the number of parameters (penalizing complexity).
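(Sketching the score from memory, so treat the notation here as a paraphrase of the book's rather than a quotation:)

$$\operatorname{score}_{\mathrm{BIC}}(\mathcal{G} : \mathcal{D}) \;=\; M \sum_i \mathbf{I}_{\hat{P}}\!\big(X_i ; \operatorname{Pa}^{\mathcal{G}}_{X_i}\big) \;-\; M \sum_i \mathbf{H}_{\hat{P}}(X_i) \;-\; \frac{\log M}{2} \operatorname{Dim}[\mathcal{G}]$$

where $M$ is the number of samples, $\hat{P}$ is the empirical distribution, and $\operatorname{Dim}[\mathcal{G}]$ counts the independent parameters. The entropy term doesn't depend on the graph structure, so comparing two candidate networks comes down to the mutual-information "fit" terms versus the complexity penalty.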
What's less clear to me (Wentworth likely knows more) is how closely that kind of formal model comparison corresponds to my intuitive sense of causality. The first net...
My model is a hybrid system [...] I think this gets you the properties that you're saying would be good [about Rule of Law]
This alleged "hybrid system" doesn't get you the benefits of rule of law, because the distinguishing feature of the rule of law is that the law is not an optimizer. As Yudkowsky explains in "Free to Optimize", the function of the legal system is "to provide a predictable environment in which people can optimize their own futures." In a free country (as contrasted to an authoritarian dictatorship), a good citizen is someone who pays ...
I think I mostly meant "answer" in the sense of "reply" (to my complaint about rate-limiting Achmiz being an outrage, rather than to a narrower question); sorry for the ambiguity.
I have a lot of extremely strong disagreements with this, but they can wait three months.
I think the last three months are a pretty definitive demonstration that talking about "norms" is toxic and we should almost never do it. I'm not interested, at all, in "norms." (The two posts I wrote about them were "defensive" in nature, arguing that one proposed norm was bad as stated, and expressing skepticism about the project of norms lists.)
I'm interested in probability theory, decision theory, psychology, math, and AI. Let's talk about those things, not "norms." If anyone dislikes a comment about probability theory, decision theory, psychology, math,...
I also think it makes sense to have a pretty strong bias against talking about what the "norms" of a space are, instead of asking about what thing is true, or what thing is optimal under various game-theoretic considerations.
That said, there is definitely a real thing that the "norms" of a space are talking about. Different spaces share different assumptions. There is value in coordinating on shared meaning of words and shared meaning of gestures of social punishment and reward. It seems quite important to help people orient around how people in a sp...
Thanks, to clarify: I don't intend to make a "how dare the moderators moderate Less Wrong" objection. Rather, the objection is, "How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma." (That's why the grandparent specifies "long-time, well-regarded", "many highly-upvoted contributions", "We were here first", &c.) I'm saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though y...
We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?
I endorse much of Oliver's replies, and I'm mostly burnt out from this convo at the moment so can't do the follow-through here I'd ideally like. But, it seemed important to publicly state some thoughts here before the moment passed:
Yes, the bar for banning or permanently limiting the speech of a longterm member in Said's reference class is very high, and I'd treat it very differently from moderating a troll, crank, or confus...
Hmm, I am still not fully sure about the question (your original comment said "I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here", which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.
There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him ...
I continue to be disgusted with this arbitrary moderator harassment of a long-time, well-regarded user, apparently on the pretext that some people don't like his writing style.
Achmiz is not a spammer or a troll, and has made many highly-upvoted contributions. If someone doesn't like Achmiz's comments, they're free to downvote (just as I am free to upvote). If someone doesn't want to receive comments from Achmiz, they're free to use already-existing site functionality to block him from commenting on their own posts. If someone doesn't like his three-year-o...
I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here.
Sure, but... I think I don't know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more.
Some quick thoughts:
I think it is strawmanning [but] Zack's post still has some useful frames and it's reasonable for it to be fairly upvoted. [...] I think the amount of strawmanning here is just not bad enough
Why do you think it's strawmanning, though? What, specifically, do you think I got wrong? This seems like a question you should be able to answer!
As I've explained, I think that strawmanning accusations should be accompanied by an explanation of how the text that the critic published materially misrepresents the text that the original author published. In a later comment,...
I'm not sure what other user you're referring to besides Achmiz—it looks like there's supposed to be another word between "about" and "and" in your first sentence, and between "about" and "could" in the last sentence of your second paragraph, but it's not rendering correctly in my browser? Weird.
Anyway, I think the pattern you describe could be generated by a philosophical difference about where the burden of interpretive labor rests. A commenter who thinks that authors have a duty to be clear (and therefore asks clarifying questions, or makes attempted cr...
Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.
I think “duty to be clear” skips over the hard part, which is that “being clear” is a two-place relation. It doesn't make sense to say whether a post is clear or unclear in itself, only whom it is clear or unclear to.
To use a trivial example: well-taught physics 201 is clear to you if you've had the prerequisite physics classes or are a physics savant, but not to laymen. Poorly taught physics 201 is clear to a ...
I don't think we can make that many moderation calls on users this established that there [sic] this controversial without causing some pretty bad things to happen.
Indeed. I would encourage you to ask yourself whether the number referred to by "that many" is greater than zero.
I did list "actually just encourage people to use the ban tool more" is an option. [...] If you actually want to advocate for that over a Said-specific-rate-limit, I'm open to that (my model of you thinks that's worse).
Well, I'm glad you're telling actual-me this rather than using your model of me. I count the fact that your model of me is so egregiously poor (despite our having had a number of interactions over the years) as a case study in favor of Said's interaction style (of just asking people things, instead of falsely imagining that you can model them).
Yes...
We already let authors write their own moderation guidelines! It's a blank text box!
Because it's a blank text box, it's not convenient for commenters to read it in detail every time; I expect almost nobody does, which makes these guidelines impractical to follow.
With two standard options, color-coded or something, it becomes actually practical, so the distinction between blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.
a high level goal of "users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so".
We already have a user-level personal ban feature! (Said doesn't like it, but he can't do anything about it.) Why isn't the solution here just, "Users who don't want to receive comments from Said ban him from their own posts"? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you s...
Did we read the same verdict? The verdict says that the end of the ban is conditional on the users in question "credibly commit[ting] to changing their behavior in a fairly significant way", "accept[ing] some kind of tech solution that limits their engagement in some reliable way that doesn't depend on their continued behavior", or "be[ing] banned from commenting on other people's posts".
The first is a restriction on variety of speech. (I don't see what other kind of behavioral change the mods would insist on—or even could insist on, given the textual natu...
The tech solution I'm currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I'm leaning towards either "3 comments per post" or "3 comments per post per day". (My ideal world, for Said, is something like "3 comments per post to start, but, if nothing controversial happens and he's not ruining the vibe, he gets to comment more without limit." But that's fairly difficult to operationalize and a lot of dev-time for a custom feature limiting one or two particular users).
I do have a high level goal of "users who want ...
Don't apologize; please either take your time, or feel free to just not reply at all; I am also very time-poor at the moment.
I condemn the restrictions on Said Achmiz's speech in the strongest possible terms. I will likely have more to say soon, but I think the outcome will be better if I take some time to choose my words carefully.
Huh, I happened to glance at the moderation page, and the ban was still there; I guess I must have forgotten to click "Submit" when I tried to remove it the other month? It should be fixed now, ChristianKI.
Thanks for writing this!! There's a number of places where I don't think you've correctly understood my position, but I really appreciate the engagement with the text I published: if you didn't get what I "really meant", I'm happy to do more work to try to clarify.
TEACH, so that B ends up believing X if X is right and Y if Y is right.
CONVINCE, so that B ends up believing X.
EXPOUND, so that the audience ends up believing X.
I'm unhappy with the absence of an audience-focused analogue of TEACH. In the following, I'll use TEACH to refer to making someone...
(Considering the general problem of how forum moderation should work, rather than my specific guilt or innocence in the dispute at hand) I think positing non-truth-tracking motivations (which can be more general than "malice or antipathy") makes sense, and that there is a real problem here: namely, that what I called "the culture of unilateral criticism and many-to-many discourse" in the great-grandparent grants a structural advantage to people who have more time to burn arguing on the internet, analogously to how adversarial court systems grant a structur...
I think this is the consensus view around LW [that AI can't help with alignment research] but haven't seen anything I found persuasive as a defense
I thought it was an argument from inaccessible information: we know how to specify rewards for "Win a Go game", "Predict the next token", or "A human approved this output"; we don't know how to specify rewards for "Actually good alignment research".
I'm imagining that the counterargument might be that earlier weak alignment techniques (and the generation/verification gap) might be enough to bootstrap later, more automated alignment techniques?
Yeah, I don't find "we can't verify good alignment research" nearly as persuasive as other people around here:
My current guess is that we will continue to see small 10-person teams push the cutting-edge forward in AI
Possible counterevidence (10 months later)?—the GPT-4 contributors page lists almost 300 names.[1]
Methodology: I copied text from the contributors page (down to just before it says "We also acknowledge and thank every OpenAI team member"), used some quick Emacs keyboard macros to munge out the section headers and non-name text (like "[topic] lead"), deduplicated and counted in Python (and subtracted one for a munging error I spotted after the fa
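(The deduplicate-and-count step is trivial; a sketch with hypothetical placeholder names, since the real munged list isn't reproduced here:)

```python
# Hypothetical sketch of the deduplicate-and-count step.
# `names` stands in for the output of the Emacs munging pass,
# one contributor name per line; these names are placeholders.
names = [
    "Alice Example",
    "Bob Example",
    "Alice Example",  # same person listed under two section headers
    "Carol Example",
]
unique_names = sorted(set(names))
# Subtract one for a known munging error, as described above.
count = len(unique_names) - 1
print(count)  # → 2
```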
I don't understand the motivation for defining "okay" as 20% max value. The cosmic endowment, and the space of things that could be done with it, is very large compared to anything we can imagine. If we're going to be talking about a subjective "okay" standard, what makes 20% okay, but 0.00002% not-okay?
I would expect 0.00002% (e.g., in scenarios where AI "'pension[s] us off,' giv[ing] us [a percentage] in exchange for being parents and tak[ing] the rest of the galaxy for verself", as mentioned in "Creating Friendly AI" (2001)) to subjectively feel great. ...
I agree that it often makes sense to write "This seems X to me" rather than "This is X" to indicate uncertainty or that the people I'm talking to are likely to disagree.
you even think that me saying "treat these statements differently" is me generically trying to forbid you from saying one of them.
Thanks for clarifying that you're not generically trying to forbid me from saying one of them. I appreciate it.
When you shot from the hip with your "this is insane" comment at me, you were [...] culpably negligent
Yes, I again agree that that was a bad com...
That's not what I meant. I affirm Vaniver's interpretation ("Zack's worry is that [...] establishing the rule with user-chosen values [...] will mean there's nothing stopping someone from deciding that criticism has to be above 8 and below 6").
(In my culture, it's important that I say "That's not what I meant" rather than "That's a strawman", because the former is agnostic about who is "at fault". In my culture, there's a much stronger duty on writers to write clearly than there is on readers to maintain uncertainty about the author's intent; if I'm unhapp...
There's absolutely nothing that Zack is currently accomplishing that couldn't have been accomplished if he'd first written a comment to Rob saying "did you mean X?" [...] Acting like a refusal to employ that bare minimum of social grace is a virtue is bullshit
It's not that I think refusing to employ the bare minimum of social grace is a virtue. It's that I wasn't aware—in fact, am still not aware—that confirming interpretations with the original author before publishing a critical essay constitutes the bare minimum of social grace. The idea that it's so...
how he confidently dismisses ANNs
I don't think this is a fair reading of Yudkowsky. He was dismissing people who were impressed by the analogy between ANNs and the brain. I'm pretty sure it wasn't supposed to be a positive claim that ANNs wouldn't work. Rather, it's that one couldn't justifiably believe that they'd work just from the brain analogy, and that if they did work, that would be bad news for what he then called Friendliness (because he was hoping to discover and wield a "clean" theory of intelligence, as contrasted to evolution or gradient des...
One of Yudkowsky's claims in the post you link is:
It's hard to build a flying machine if the only thing you understand about flight is that somehow birds magically fly. What you need is a concept of aerodynamic lift, so that you can see how something can fly even if it isn't exactly like a bird.
This is a claim that lack of the correct mechanistic theory is a formidable barrier to capabilities, not just alignment, and it underestimates the amount of empirical understanding available on which to base an empirical approach.
It's true that ...
I asked GPT-4 to write a "highly critical comment" in response to this post. It replied:
...Zack M. Davis' blog post, "Maybe Lying Can't Exist?!", is a prime example of overcomplicating a simple concept with unnecessarily convoluted language and philosophical ramblings. Davis spends an inordinate amount of time discussing sender-receiver games, formal models, and the evolution of communication, only to conclude that deception is "ontologically parasitic." This statement is not only pretentious but also blatantly obvious. Of course, lying depends on the exist
I asked GPT-4 to write a "highly critical comment" in response to this post. It replied:
...I find this blog post both convoluted and problematic in its attempt to explore the concept of deception as cooperation. The author, Zack M. Davis, delves into the intricacies of information theory and signaling games, yet the essence of the argument is lost in a sea of technical jargon and excessive parenthetical digressions.
The premise that deception can be understood as a cooperative effort between sender and receiver is intriguing, but the author fails to clearly
The problem isn't the first speaker wanting to persuade third parties; the problem is the second person "moving the goalposts" there.
I mean, I agree that I have soapbox-like tendencies (I often have an agenda, and my contributions to our discourse often reflect my agenda), but I thought I've been meeting the commonsense relevance standard—being an Alighieri scholar who only brings it up when there happens to be a legitimate Alighieri angle on the topic, and not just randomly derailing other people's discussions.
I could be persuaded that I've been getting this wrong, but, again, I'm going to need more specific examples (of how some particular post I made misses the relevance standard) be...
Note that in the original footnote in my post, "on the same side" is a hyperlink going to a comment by Val
Thanks for pointing this out. (I read Val's comment while writing my post, but unfortunately neglected to add the hyperlink when pasting the text of the footnote into my draft.) I have now edited the link into my post.
the goal isn't to trick people into thinking your disagreements are small, it's to make typical disagreements feel less like battles between warring armies
I think the fact that disagreements often feel like battles between warring ...
I'm definitely doing #2. I can see your case that the paragraph starting with "But there's a reason for that" is doing #4. But ... I'm not convinced that this kind of "frame manipulation" is particularly bad?
If someone is unhappy with the post's attempt to "grab the frame" (by acting as if my conception of rationalist is the correct one), I'm happy to explain why I did that in the comments. Do I have to disclaim it in the post? That just seems like it would be worse writing.
I think it's significant that the "blatant lying" example was an in-person conversation, rather than a published blog post. I think I'm much more prone to exaggerate in real-time conversations (especially emotionally-heated conversations) than I am in published writing that I have time to edit.
Thanks for chiming in; this is encouraging to hear. I'm imagining the pretty emotional debate you're thinking of is the one on "My Dating Plan ala Geoffrey Miller" in July 2020? Interestingly, I think my behavior there was much ruder than anything Duncan's objected to from me, so I think your reaction is evidence that there's a lot of interpersonal variation in how much "softening" different people think is desirable or necessary.
fairly frequently does a "Write a reply to a person's post as if it's a rebuttal to the post, which mostly goes off and talks about an unrelated problem/frame that Zack cares about
Would it help if we distinguished between a "reply" (in which a commentator explains the thoughts that they had in reaction to a post, often critical or otherwise negative thoughts) and a "rebuttal" (in which the commentator directly contradicts the original post, such that the original post and the rebuttal can't "both be right")? I often write replies that are not rebuttals, but I think this is fine.
Everyone sometimes issues replies that are not rebuttals, but there is an expectation that replies will meet some threshold of relevance. Injecting "your comment reminds me of the medieval poet Dante Alighieri" into a random conversation would generally be considered off-topic, even if the speaker genuinely was reminded of him. Other participants in the conversation might suspect this speaker of being obsessed with Alighieri, and they might worry that he was trying to subvert the conversation by changing it to a topic no one but him was interested in. They...
I do think Zack should acknowledge his judgment here has not been good and the result is not living up to the standards that flow fairly naturally from the sequences
Sorry, I'm going to need more specific examples of me allegedly "lying by exaggeration/overconfidence" before I acknowledge such a thing. I'm eager to admit my mistakes, when I've been persuaded that I've made a mistake. If we're talking specifically about my 4 December 2021 comment that started with "This is insane", I agree that it was a very bad comment that I regret very much. If we're t...
Thanks for telling me (strong-upvoted). That makes sense as a reason for you to be furious with me. As the grandparent says, I owe you a bigger apology than my previous apology, which appears below.
I hereby apologize for my blog comment of 4 December 2021, on an earlier revision of "Basics of Rationalist Discourse". In addition to the reasons that it was a bad comment in context that I listed in my previous apology, it was also a bad comment for failing to acknowledge that the text of the post contained a paragraph addressing the comment's main objection, ...
In your view, is there an important difference between frame control, and the author having a particular frame that they use in a particular essay?
I'm proud of this blog post. I think it's a good blog post that clearly explains my ideas in a way that's engaging to read. If someone wants to talk about my motivations for writing this post and why I chose the analogies I did, I'm happy to have that discussion in the comment section, like we're doing now.
But it seems to me that a blog post that talked about my objections to Bensinger's Goodwill element, withou...
Thanks for your thoughts. (Strong-upvoted.)
the essay where I finally gave up [...] This was not an aberrant event. [...] one more straw on the camel's back
Yes, that December 2021 incident was over the line. I'm sorry. In retrospect, I wish I hadn't done that—but if I had taken a few more moments to think, I would have been able to see it without retrospect. That was really stupid of me, and it made things worse for both of us.
You're also correct to notice that the bad behavior that I don't endorse on reflection can be seen as a more extreme version of ...
(I think I would have noticed if that paragraph had been in the December 2021 version, but if you say it was, I'll take your word for it—which would imply that my December 2021 behavior was even worse than I've already admitted; I owe you a much bigger apology in that case.)
It was. That's why I was (and remain) so furious with you (Edit: and also am by default highly mistrustful of your summaries of others' positions).
I'll agree that the "physicist motors" analogy in particular rests on the "one who studies" definition, although I think a lot of the points I make in this essay don't particularly depend on the analogy and could easily be written up separately.
I guess you could view the "foreign policy" motivating this post as being driven by two motives: first, I'd rather not waste precious time (in the year 2023, when a lot of us have more important things to do) fighting over the "rationalist" brand name; if someone else who also cares about thinking well, thinks that ...
so when I see the brand name being used to market a particular set of discourse norms without a clear explanation of how these norms are derived from the law, that bothers me enough to quickly write an essay or two about it
Seems great to me! I share your intuition that Goodwill seems a bit odd to include. I think it's right to push back on proposed norms like these and talk about how justified they are, and I hope my list can be the start of a conversation like that rather than the end.
I do have an intuition that Goodwill, or something similar to Goodwill,...
Then it would appear that we're in a conflict over a shared resource: I want to post "Zack-type" things on Less Wrong—including long-form criticism of other posts on Less Wrong—and (assuming I'm reading your comment correctly; feel free to correct me if not) it seems like you want me to not do that.
It looks like we can't both get what we want at the same time. That's a very unfortunate situation for us to be in. If you have any suggestions for Pareto improvements, I'm listening. I'm not sure what else I can say.
We don't need to rely on Huemer's gloss; the distaste for map-territory distinctions and reasoning under uncertainty being too subjective can also be seen in the source material. Consider this line from Atlas Shrugged:
(Psychologically, Rand is totally in the right in that people very often do use such language to evade res...