All of lsusr's Comments + Replies

The secret is out. Ben's secret identity is Ben Pace.

I think the Dialogue feature is really good. I like using it, and I think it nudges community behavior in a good direction. Well done, Lightcone team.

habryka (4 karma, 9d):
Thank you! I also am very excited about it, though sadly adoption hasn't been amazing. Would love to see more people organically produce dialogues!

How do you know that this approach doesn't miss entire categories of error?

JenniferRM (7 karma, 7d):
I do NOT know that "the subjective feeling of being right" is an adequate approach to purge all error. Also, I think that hypotheses are often wrong, but they motivate new careful systematic observation, and that this "useful wrongness" is often a core part of a larger OODA loop of guessing and checking ideas in the course of learning and discovery. My claim is that "the subjective feeling of being right" is a tool whose absence works to disqualify at least some wrongnesses as "maybe true, maybe false, but not confidently and clearly known to be true in that way that feels very very hard to get wrong".

Prime numbers fall out of simple definitions, and I know in my bones that five is prime. There are very few things that I know with as much certainty as this, but I'm pretty sure that being vividly and reliably shown to be wrong about this would require me to rebuild my metaphysics and epistemics in radical ways. I've been wrong a lot, but the things I was wrong about were not like my mental state(s) around "5 is prime".

And in science, seeking reliable generalities about the physical world, there's another sort of qualitative difference that is similar. For example, I grew up in northern California, and I've seen so many Sequoia sempervirens that I can often "just look" and "simply know" that that is the kind of tree I'm seeing. If I visit other biomes, the feeling of "looking at a forest and NOT knowing the names of >80% of the plants I can see" is kind of pleasantly disorienting... there is so much to learn in other biomes! (I've only ever seen one Metasequoia glyptostroboides that was planted as a specimen at the entrance to a park, and probably can't recognize them, but my understanding is that they just don't look like a coastal redwood or even grow very well where coastal redwoods naturally grow. My confidence for Sequoiadendron giganteum is in between. There could hypothetically be a fourth kind of redwood that is rare. Or it might be that half the coas... (read more)

The points you bring up are subtle and complex. I think a dialogue would be a better way to explore them rather than a comment thread. I've PM'd you.

I tried that too. It didn't work on my first ~1 hour attempt.

I want to express appreciation for a feature the Lightcone team implemented a long time ago: Blocking all posts tagged "AI Alignment" keeps this website usable for me.

I will bet at odds 10:1 (favorable to you) that I will not let the AI out

I too am confident enough as gatekeeper that I'm willing to offer similar odds. My minimum and maximum bets are my $10,000 USD vs your $1,000 USD.

I was wondering how long it would take for someone to ask these questions. I will paraphrase a little.

How does rhetorical aikido differ from well-established Socratic-style dialogue?

Socratic-style dialogue is a very broad umbrella. Pretty much any question-focused dialogue qualifies. A public schoolteacher asking a class of students "What do you think?" is both "Socratic" and ineffective at penetrating delusion.

The approach gestured at here is entirely within the domain of "Socratic"-style dialogue. However, it is far more specific. The techniques I prac... (read more)

UnderTruth (3 karma, 8d):
Thank you for your reply and further explanation. Your examples are helpful, and on thinking about them, I'm led to wonder how these & other "techniques" serve the distinct goals of "Trying to arrive at The True Answer", "Trying to show this person that they have incoherent beliefs, because they have failed to properly examine them", and "Trying to converse in a manner that will engage this person, so that it has some real, hopefully positive, effect for them" -- and possibly others.

Thanks. ❤️

I stole that line from Eric Raymond who stole it from Zen.

I skipped two years of math in grade school. That saved me two years of class time, but the class was still too easy. That's because the speed of the class was the same. Smart kids don't just know more. They learn much faster.

For smart students to learn math at an appropriate speed, it's not enough to skip grades. They need an accelerated program.

Personal counterfactual: I was smarter than my peers and didn't skip any grades.

Result: I didn't physically play with or date the other students.

Exceptions: I did play football and did Boy Scouts, but those were both after-school activities. Moreover, neither of them was strictly segregated by age. Football was weight-based, and Boy Scouts lumped everyone from 11 to 17 into the same troop.

Putting students in the same math class based on age (ignoring intelligence) is like putting students on the same football team based on age (ignoring size).

Different people have different preferences regarding translation. Personally, I'm okay with you translating anything I write here as long as you include a link back to my original here on Less Wrong.

I don't believe this website has any official English-only policy. However, English is the primary language used here. I recommend you just post it in Russian, but include a short note in English at the top explaining something like "This is a Russian translation of …. The original can be found at …."

The video can be summarized by these two lines at timestamp 5:39.

Justin: How do you feel genuine love towards those that cause—you know—monumental suffering for others?

Lsusr: How can you not? They're human beings.

I use the word "love" but, as you noted, that word has many definitions. It would be less ambiguous if I were to say "compassion".

That's funny. When I read lc's username I think "that username looks similar to 'lsusr'" too.

I don't plan to read David Chapman's writings. His website is titled "Meta-rationality". When I'm teaching rationality, one of the first things I have to do is repeatedly tell students to stop being meta.

Empiricism is about reality. "Meta" is at least one step away from reality, and therefore at least one step farther from empiricism.

AnthonyC (2 karma, 2mo):
Telling people to stop being meta is very important, but I think you may be misunderstanding the way in which Chapman is using the term. AFAICT it's really more about being able to step back from your own viewpoint and assumptions and effectively apply a mental toolbox and different mental stances to a problem that isn't trivial or already-solved. Personally I've found it has helped keep me from going too meta in a lot of cases, by re-orienting my thinking to what's needed.
Mo Putera (1 karma, 2mo):
Chapman's old work programming Pengi with Phil Agre at the MIT AI Lab seems to suggest otherwise, but I respect your decision to not read his writings, since they mirror mine after attempting to and failing to grok him.

The first paragraph was supposed to be sarcastic satire.

I meant side-comments. I never use them myself, but people often use them to comment on my posts. When they do, the comments tend to be constructive, especially compared to blockquotes.

Raemon (2 karma, 2mo):
Ah cool. That was my best guess but wasn't sure.

Another improvement I didn't notice until right now is the "respond to a part of the original post" feature. I feel like it nudges comments away from nitpicking.

Raemon (2 karma, 2mo):
I didn't quite parse that – which UI element are you referring to?

TL;DR: I don't think it matters much.

This question is a rounding error compared to a much bigger problem in civic planning: car-centric cities are expensive and provide a worse quality of life than traditional, walkable cities. They're not even natural. They only exist as a result of government intervention. For a more detailed dive into this subject, I recommend the Not Just Bikes YouTube channel.

I'm glad you enjoyed it.

The way I think about things, if the person I'm talking with is smiling, laughing, and generally having a good time, then that's what's important.

In a more recent video, I've tried out a toga instead.

Hm... your new student seems like an interesting person to talk to. Mind asking if he'd be interested in a chat with someone else his age?

I've sent you his Discord information via PM. (After obtaining permission, of course.)

Say with a straight face that student loans help the economy, and the power of social cognition will make it so.

XD

Yep. In a debate competition, you can win with arguments that are obviously untrue to anyone who knows what you're talking about, which is why I'm much less interested in traditional debate these days. (Not to discour... (read more)

Lyrongolem (4 karma, 2mo):
Thank you very much! I think I'll enjoy the chat. Just sent him the friend request. Oh, and, my discord is the same as my lesswrong btw.

YES! Hahhahahaa... it's quite dumb. The information you can reasonably convey in 4 minutes is so short that even when your case is common sense it's hard to actually prove your point. I can bring up a variety of commonsense and economic arguments for why student loan forgiveness inflates prices, but my opponents can basically just say 'nu-uh' the entire debate, citing some random article saying it... somehow creates 1.2 million jobs? I sometimes wish I could just throw a book at them and say 'read the damn research!'

But then, I should talk, I'm equally guilty. On the affirmative side I decided to go all in on an emotional appeal to the starving children of bankrupt parents, and when my opponents brought up the obvious objection (rising tuition prices due to overcharge) I decided to sneakily claim that forgiveness wasn't an actual subsidy and thus doesn't raise prices. I also told the judge, verbatim, that my opponents were 'misrepresenting their own evidence' by claiming that forgiveness was a subsidy. I even invited the judge to examine the evidence himself, saying that it was on our side (it wasn't). Seeming reasonable won us that debate, even though I most definitely was not being reasonable.

“The Dark Side of the Force is a pathway to many abilities some consider to be unnatural.”

But hey, it's fine. This is debate, and the only crime is to lose. We went undefeated again. Long live the dark arts!

Thank you for checking my numbers.

Many readers appeared to dislike my example post. IIRC, prior to mentioning it here, its karma (excluding my auto hard upvote) was close to zero, despite it having about 40 votes.

Which makes you feel like it's improving how you think?

I'm learning how to film, light and edit video. I'm learning how to speak better too, and getting a better understanding about how the media ecosystem works.

Making videos is harder than writing, which means I learn more from it.

nim (3 karma, 3mo):
Ah, that makes perfect sense. On the other side, watching videos is often easier than reading, so I often feel like I learn more from the latter =)

Here's part of a comment on one of my posts. The comment negatively impacted my desire to post deviant ideas on LessWrong.

Bullshit. If your desire to censor something is due to an assessment of how much harm it does, then it doesn't matter how open-minded you are. It's not a variable that goes into the calculation.

I happen to not care that much about the object-level question anymore (at least as it pertains to LessWrong), but on a meta level, this kind of argument should be beneath LessWrong. It's actively framing any concern for unrestricted speech as

... (read more)

I think I'm less open to weird ideas on LW than I used to be, and more likely to go "seems wrong, okay, next". Probably this is partly a me thing, and I'm not sure it's bad - as I gain knowledge, wisdom and experience, surely we'd expect me to become better at discerning whether a thing is worth paying attention to? (Which doesn't mean I am better, but like. Just because I'm dismissing more ideas, doesn't mean I'm incorrectly dismissing more ideas.)

But my guess is it's also partly a LW thing. It seems to me that compared to 2013, there are more weird ideas... (read more)

Rafael Harth (2 karma, 2mo):
You don't have to justify your updates to me (and also, I agree that the comment I wrote was too combative, and I'm sorry), but I want to respond to this because the context of this reply implies that I'm against weird ideas. I vehemently dispute this. My main point was that it's possible to argue for censorship for genuine reasons (rather than because one is closed-minded). I didn't advocate for censoring anything, and I don't think I'm in the habit of downvoting things because they're weird, at all.

This may sound unbelievable or seem like a warped framing, but I honestly felt like I was going against censorship by writing that comment. Like as a description of my emotional state while writing it, that was absolutely how I felt. Because I viewed (and still view) your post as a character attack on people-who-think-that-sometimes-censorship-is-justified, and one that's primarily based on an emotional appeal rather than a consequentialist argument. And well, you're a very high prestige person. Posts like this, if they get no pushback, make it extremely emotionally difficult to argue for a pro-censorship position regardless of the topic. So even though I acknowledge the irony, it genuinely did feel like you were effectively censoring pro-censorship arguments, even if that wasn't the intent.

I guess you could debate whether or not censoring pro-censorship views is pro or anti censorship. But regardless, I think it's bad. It's not impossible for reality to construct a situation in which censorship is necessary. In fact, I think such situations already exist; if someone posts a trick that genuinely accelerates AI capabilities by 5 years, I want that to be censored. (Almost all examples I'd think of would relate to AI or viruses.) The probability that something in this class happens on LW is not high, but it's high enough that we need to be able to talk about this without people feeling like they're impure for suggesting it.
MondSemmel (2 karma, 2mo):
Hi there, lsusr! I read the post & comment which you linked, and indeed felt that the critical comment was too combative. (As a counterexample, I like this criticism of EY for how civil it is.) That being said, I think I understand the sentiment behind its tone: the commenter saw your post make a bunch of strong claims, felt that these claims were wrong and/or insufficiently supported by sources, and wrote the critical comment in a moment of annoyance.

To give a concrete example, "We do not censor other people more conventional-minded than ourselves." is an interesting but highly controversial claim. Both because hardly anything in the world has a 100% correlation, and because it leads to unintuitive logical implications like "two people cannot simultaneously want to censor one another". Anyway, given that the post began with a controversial claim, I expected the rest of the post to support this initial claim with lots of sources and arguments. Instead, you took the claim further and built on it. That's a valid way to write, but it puts the essay in an awkward spot with readers that disagree with the initial claim. For this reason, I'm also a bit confused about the purpose of the essay: was it meant to be a libertarian manifesto, or an attempt to convince readers, or what? EDIT: Also, the majority of LW readers are not libertarians. What reaction did you expect to receive from them?

If I were to make a suggestion, the essay might have worked better if it had been a dialogue between a pro-liberty and a pro-censorship character. Why? Firstly, if readers feel like an argument is insufficiently supported, they can criticize or yell at the character, rather than at you. And secondly, such a dialogue would've required making a stronger case in favor of censorship, and it would've given the censorship character the opportunity to push back against claims by the liberty character. This would've forestalled having readers make similar counterarguments. (Also see Scott's
gilch (2 karma, 3mo):
Hmm, is LessWrong really so intolerant of being reminded of the existence of "deviant ideas"? Social Dark Matter was pretty well received, with 248 karma, and was posted quite recently. The much older KOLMOGOROV COMPLICITY AND THE PARABLE OF LIGHTNING opened with a quote from the same Paul Graham essay you linked to (What You Can’t Say). I was not personally offended by your example post and upvoted it just now. I probably at least wouldn't have downvoted it had I seen it earlier, but I hadn't.

Thanks for watching out! Your comment thoroughly passes any reasonable cost-benefit expected value calculation. That post is a useful, concise resource.

I actually did run into (what I think are) vitamin deficiency issues initially. I began taking a daily multivitamin (that includes vitamin B12, among other things), and the problems went away. I also drink a bit of milk that seems to be tolerably-sourced.

Answer by lsusr, Dec 06, 2023

First of all, I appreciate all the work the LessWrong / Lightcone team does for this website.

The Good

  • I was skeptical of the agree/disagree voting. After using it, I think it was a very good decision. Well done.
  • I haven't used the dialogue feature yet, but I have plans to try it out.
  • Everything just works. Spam is approximately zero. The garden is gardened so well I can take it for granted.
  • I love how much you guys experiment. I assume the reason you don't do more is just engineering capacity.

And yet…

Maybe there's a lot of boiling feelings out there

... (read more)

I wonder what fraction of the weirdest writers here feel the same way. I can't remember the last time I've read something on LessWrong and thought to myself, "What a strange, daring, radical idea. It might even be true. I'm scared of what the implications might be." I miss that.

I thought Genesmith's latest post fully qualified as that! 

I totally didn't think adult gene editing was possible, and had dismissed it. It seems like a huge deal if true, and it's the kind of thing I don't expect would have been highlighted anywhere else.

mike_hawke (1 karma, 2mo):
Do you remember any examples from back in the day?
lsusr (2 karma, 2mo):
Another improvement I didn't notice until right now is the "respond to a part of the original post" feature. I feel like it nudges comments away from nitpicking.
MondSemmel (2 karma, 3mo):
There are also writers with a very large reach. A recommendation I saw was to post where most of the people and hence most of the potential readers are, i.e. on the biggest social media sites. If you're trying to have impact as a writer, the reachable audience on LW is much smaller. (Though of course there are other ways of having a bigger impact than just reaching more readers.)
nim (1 karma, 3mo):
I enjoy your content here and would like to continue reading you as you grow into your next platforms. YouTube grows your audience in the immediate term, among people who have the tech and time to consume videos. However, text is the lowest common denominator for human communication across longer time scales. Text handles copying and archiving in ways that I don't think we can promise for video on a scale of hundreds of years, let alone thousands. Text handles search with an ease that we can only approximate for video by transcribing it. Transcription is tractable with AI, but still requires investment of additional resources, and yields a text of lower quality and intentionality than an essay crafted directly by its own author.

Plenty of people spend time in situations where they can read text but not listen to audio, and plenty of people spend time in situations where they can listen to audio but not read text. Compare the experience of listening to an essay via text-to-speech to the experience of reading a YouTube video's auto-generated transcript. Which makes you feel like it's improving how you think?
MondSemmel (6 karma, 3mo):
The post about not paying one's taxes was pretty out there and had plenty of interesting discussion, but now it's been voted down to the negatives. I wish it was a bit higher (at 0-ish karma, say), which might've happened if people could disagree-vote on it. But yes, overall this criticism seems true, and important.
Yoav Ravid (2 karma, 3mo):
One thing that could help is to be able to have automatic crossposting from your YouTube channel like you can currently have from a blog. It would be even more powerful if it generated a transcript automatically (though that's currently difficult and expensive).

Over the years, I feel like I've gotten fewer "yes and" comments and more "we don't want you to say that" comments. This might be because my writing has changed, but I think what's really going on is that this happens to every community as it gets older. What was once radical eventually congeals into dogma.

This is the part I'm most frustrated with. It used to be you could say some wild stuff on this site and people would take you seriously. Now there's a chorus of people who go "eww, gross" if you go too far past what they think should be acceptable. Le... (read more)

Thank you!

What word would you use in your native language?

mrfox (5 karma, 3mo):
I reflected a bit and I think there isn't one. So it wasn't about the English after all :D. For me it's in two (somewhat intersecting) dichotomies:

  • Being there for them, but giving them space to grow
  • Being patient and not expecting unreasonable things of them, but not being condescending and meeting them on eye level ("auf Augenhöhe begegnen", don't know if that translates, but it's about meeting as equals. [Even if you're not technically, like squatting down to talk to a child on "their" terms, taking them seriously.])

Fascinating. You're one of the names on Less Wrong that I associate with positive, constructive dialogue. We may have a scissor statement here.

The reason my tone was much more aggressive than normal is that I knew I'd be too conflict averse to respond to this post unless I do it immediately, while still feeling annoyed. (You've posted similar things before and so far I've never responded.) But I stand by all the points I made.

The main difference between this post and Graham's post is that Graham just points out one phenomenon, namely that people with conventional beliefs tend to have less of an issue stating their true opinion. That seems straightforwardly true. In fact, I have several opinions ... (read more)

I appreciate your earnest attempt to understand what I'm writing. I don't think "weirdos/normies" nor "Critical thinkers/uncritical thinkers" quite point at what I'm trying to point at with "independent/conventional".

"Independent/dependent" is about whether what other people think influences you to reach the same conclusions as other people. "Weirdos/normies" is about whether you reach the same conclusions as other people. In other words, "weirdos/normies" is correlation. "Independent/dependent" is causation in a specific direction. Independent tends to co... (read more)

Q Home (1 karma, 3mo):
I still don't see it. Don't see a causal mechanism that would cause it. Even if we replace "independent-minded" with "independent-minded and valuing independent-mindedness for everyone". I have the same problems with it as Ninety-Three and Rafael Harth. To give my own example: algorithms in social media could be a little too good at radicalizing and connecting people with crazy opinions, such as flat earth. A person censoring such algorithms/their output could be motivated by the desire to make people more independent-minded.

I think the value of a general point can only stem from re-evaluating specific opinions. Therefore, sooner or later the conversation has to tackle specific opinions. If "derailment" is impossible to avoid, then "derailment" is a part of the general point. Or there are more important points to be discussed. For example, if you can't explain General Relativity to cave people, maybe you should explain "science" and "language" first — and maybe those tangents are actually more valuable than General Relativity.

I dislike Graham's essay for the same reason: when Graham does introduce some general opinions ("morality is like fashion", "censuring is motivated by the fear of free-thinking", "there's no prize for figuring out quickly", "a statement can't be worse than false"), they're not discussed critically, with examples. Re:say looks weird to me. Invisible opponents are allowed to say only one sentence and each sentence gets a lengthy "answer" with more opinions.

Your comment is not a censure of me.

I didn't feel the need to distinguish between censorship of ideas and censorship of independent-minded people, because censorship of ideas censors the independent-minded.

give enough examples to know what kind of exceptions to look for

I deliberately avoided examples for the same reason Paul Graham's What You Can't Say deliberately avoids giving any specific examples: because either my examples would be mild and weak (and therefore poor illustrations) or they'd be so shocking (to most people) they'd derail the whole conversation.

Dagon (4 karma, 3mo):
Without examples, I have trouble understanding "censorship of independent-minded people". It's probably not formal censorship (but maybe it is - most common media disallows some words and ideas). There's a big difference between "negative reactions to beliefs that many/most find unpleasant, even if partially true" and "negative reactions to ideas that contradict common values, with no real truth value". They're not the same motives, and not the same mechanisms for the idea-haver to refine their beliefs.

In many groups, especially public ones, even non-committal exploration of these ideas is disallowed, because at least some observers will misinterpret the discussion as advocacy or motivated attempts to move the Overton window. In these cases, the restriction is distributed enough that there's no clear way to have the discussion with the right folks. Meaning your use of "we" and framing the title as advice is confusing.

Another way of framing my confusion/disagreement is that I think "independent-minded" and "conventional-minded" are not very good categories, and the model of opposition is not very useful. Different types of heresy have different groups opposing them for different reasons.

Did you read the Paul Graham article I linked? Do you disagree with it too?

Rafael Harth (6 karma, 3mo):
I hadn't, but did now. I don't disagree with anything in it.

Independent-mindedness is multi-dimensional. You can be more independent-minded in one domain than another.

If you are predicting that two people will never try to censor each other in the same domain, that also happens. If your theory is somehow compatible with that, then it sounds like there are a lot of epicycles in this "independent-mindedness" construct that ought to be explained rather than presented as self-evident.

I made my November 20, 2023 08:58:05 UTC post between the dip and the recovery.

November 20, 2023 19:54:45 UTC

Result: Microsoft has gained approximately $100B in market capitalization.
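For anyone checking that figure, the arithmetic is just price change per share times shares outstanding. Below is a minimal sketch; the share count is an approximation for Microsoft in late 2023, and the two price endpoints are illustrative placeholders rather than actual quotes.

    # Market-cap change = (price change per share) x (shares outstanding).
    # The share count is approximate; the two prices are illustrative
    # placeholders, not actual quotes from that day.
    SHARES_OUTSTANDING = 7.4e9        # roughly 7.4 billion MSFT shares

    price_at_dip = 365.00             # hypothetical price near the dip
    price_after_recovery = 378.50     # hypothetical price after the recovery

    market_cap_change = (price_after_recovery - price_at_dip) * SHARES_OUTSTANDING
    print(f"Market cap change: ${market_cap_change / 1e9:.0f}B")  # -> ~$100B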

RHollerith (4 karma, 3mo):
Can you explain why you think that "Microsoft has gained approximately $100B in market capitalization"? I see a big dip in stock price late Thursday, followed by a recovery to exactly the start price 2 hours later.

November 20, 2023 08:58:05 UTC

If my phone wasn't broken right now I'd be creating a Robinhood (or whatever) account so I can long Microsoft. Ideally I'd buy shares, but calls (options to buy) are fine.

Why? Because after the disaster at OpenAI, Satya Nadella just hired Sam Altman to work for Microsoft directly.

gwern (7 karma, 3mo):
I agree that I think MS is undervalued now. The current gain in the stock is roughly equivalent to MS simply absorbing OA LLC's valuation for free, but that's an extremely myopic way to incorporate OA: most of the expected value of the OA LLC was past the cap, in the long tail of high payoffs, so "OA 2" should be worth much more to MS than 'OA 1'.
lsusr (2 karma, 3mo):
November 20, 2023 19:54:45 UTC Result: Microsoft has gained approximately $100B in market capitalization.

My deontological terminal value isn't to causally win. It's for FDT agents to acausally lose. Either I win, or the FDT agents abandon FDT. (Which proves that FDT is an exploitable decision theory.)

I'm not sure I see the pathological case of the problem statement: an agent has utility function of 'Do worst possible action to agents who exactly implement (Specific Decision Theory)' as a problem either. Do you have a specific idea how you would get past this?

There's a Daoist answer: Don't legibly and universally precommit to a decision theory.

But the expl... (read more)

MinusGix (1 karma, 5mo):
I assume what you're going for with your conflation of the two decisions is this, though you aren't entirely clear on what you mean:

  • Some agent starts with some (potentially broken in various manners, like bad heuristics or unable to consider certain impacts) decision theory, because there's no magical a priori decision algorithm
  • So the agent is using that DT to decide how to make better decisions that get more of what it wants
  • CDT would modify into Son-of-CDT typically at this step
  • The agent is deciding whether it should use FDT.
  • It is 'good enough' that it can predict that if it decides to just completely replace itself with FDT it will get punched by your agent, or it will have to pay to avoid being punched.
  • So it doesn't completely swap out to FDT, even if it is strictly better in all problems that aren't dependent on your decision theory
  • But it can still follow FDT to generate actions it should take, which won't get it punished by you?

Aside: I'm not sure there's a strong definite boundary between 'swapping to FDT' (your 'use FDT') and taking FDT's outputs to get actions that you should take. Ex: If I keep my original decision loop but it just consistently outputs 'FDT is best to use', is that swapping to FDT according to you? Does if (true) { FDT() } else { CDT() } count as FDT or not? (Obviously you can construct a class of agents which have different levels that they consider this at, though)

But you're whatever agent you are. You are automatically committed to whatever decision theory you implement. I can construct a similar scenario for any DT: 'I value punishing agents that swap themselves to being DecisionTheory.' Or just 'I value punishing agents that use DecisionTheory.' Am I misunderstanding what you mean? How do you avoid legibly being committed to a decision theory, when that's how you decide to take actions in the first place? Inject a bunch of randomness so others can't analyze your algorithm? Make your internals absurdly intric

Correct. The last time I was negotiating with a self-described FDT agent I did it anyway. 😛

My utility function is "make functional decision theorists look stupid", which I satisfy by blackmailing them. Either they cave, which means I win, or they don't cave, which demonstrates that FDT is stupid.

MinusGix (1 karma, 5mo):
If your original agent is replacing themselves as a threat to FDT, because they want FDT to pay up, then FDT rightly ignores it. Thus the original agent, which just wants paperclips or whatever, has no reason to threaten FDT. If we postulate a different scenario where your original agent literally terminally values messing over FDT, then FDT would pay up (if FDT actually believes it isn't a threat). Similarly, if part of your values has you valuing turning metal into paperclips and I value metal being anything-but-paperclips, I/FDT would pay you to avoid turning metal into paperclips. If you had different values - even opposite ones along various axes - then FDT just trades with you. However FDT tries to close off the incentives for strategic alterations of values, even by proxy, to threaten. So I see this as a non-issue.

I'm not sure I see the pathological case of the problem statement: an agent has utility function of 'Do worst possible action to agents who exactly implement (Specific Decision Theory)' as a problem either. You can construct an instance for any decision theory. Do you have a specific idea how you would get past this? FDT would obviously modify itself if it can use that to get around the detection (and the results are important enough to not just eat the cost).
quetzal_rainbow (1 karma, 5mo):
I definitely would like to hear the details! (I mean, of last particular case)
quetzal_rainbow (2 karma, 5mo):
"Saying you are gonna do it anyway in the hope that the FDT agent yields" and "doing it anyway" are two very different things.

There's a couple different ways of exploiting an FDT agent. One method is to notice that FDT agents have implicitly precommitted to FDT (rather than the theorist's intended terminal value function). It's therefore possible to contrive scenarios in which those two objectives diverge.

Another method is to modify your own value function such that "make functional decision theorists look stupid" becomes a terminal value. After you do that, you can blackmail them with impunity.

FDT is a reasonable heuristic, but it's not secure against pathological hostile action.
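To make the shape of that argument concrete, here is a minimal toy model in Python (my own sketch, not anything from this thread); the payoff numbers and the "spite" bonus are hypothetical.

    # Toy blackmail game. A blackmailer decides whether to issue a threat
    # against a target whose policy is fixed in advance (as FDT's is).
    # "cave" = the target pays the demand; "refuse" = the blackmailer must
    # carry out the threat. All numbers are hypothetical.
    DEMAND = 10     # what the target pays if it caves
    CARRY_OUT = 2   # cost to the blackmailer of carrying out the threat
    HARM = 50       # harm to the target if the threat is carried out
    SPITE = 5       # extra utility a "make FDT look stupid" blackmailer
                    # gets from punishing an agent that refuses

    def payoffs(target_policy, spiteful):
        """Return (blackmailer payoff, target payoff) if the threat is issued."""
        if target_policy == "cave":
            return DEMAND, -DEMAND
        # Target refuses: the blackmailer pays to carry out the threat,
        # but a spiteful blackmailer values the punishment itself.
        return -CARRY_OUT + (SPITE if spiteful else 0), -HARM

    for spiteful in (False, True):
        for policy in ("cave", "refuse"):
            blackmailer, target = payoffs(policy, spiteful)
            threatens = blackmailer > 0  # threaten only if it beats doing nothing (0)
            print(f"spiteful={spiteful!s:5}  target={policy:6}  "
                  f"blackmailer={blackmailer:+}  target={target:+}  "
                  f"threatens={threatens}")

Against a money-motivated blackmailer, a standing policy of refusal makes the threat unprofitable, so it is never issued; that is the case FDT handles well. Against a blackmailer whose values were modified so that punishing refusers is itself rewarding, the threat gets issued anyway, and the refusing agent simply eats the harm (or has to cave), which is the pathological case described above.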

quetzal_rainbow (4 karma, 5mo):
"Modifying your utility function" is called threat-by-proxy and FDT agents ignore it, so you are disincentivized to do this.

I'm not sure if this is the right course of action. I'm just thinking about the impact of different voting systems on group behavior. I definitely don't want to change anything important without considering negative impacts.

But I suspect that strong downvotes might quietly contribute to LW being more group thinky.

Consider a situation where a post strongly offends a small number of LW regulars, but is generally approved of by the median reader. A small number of regulars hard downvote the post, resulting in a suppression of the undesirable idea.

I think this... (read more)

Kaj_Sotala (8 karma, 4mo):
I believe that this is actually part of the design intent of strongvotes - to help make sure that LW rewards the kind of content that long-time regulars appreciate, avoiding an "Eternal September" scenario where an influx of new users starts upvoting the kind of content you might find anywhere else on the Internet and driving the old regulars out, until the thing that originally made LW unique is lost.

Proposal: Remove strong downvotes (or limit their power to -3). Keep regular upvotes, regular downvotes, and strong upvotes.

Variant: strong downvoting a post blocks that user's posts from appearing on your feed.

Raemon (6 karma, 5mo):
Say more about what you want from option 1?

The decline of dueling coincided with firearms getting much more reliable. Duels should have the possibility of death, but should not (usually) be "to the death".

lc (4 karma, 6mo):
Possibly, but I don't really buy it. Dueling declined first in the northern United States, and then was ended in the south only after public opinion changed, not before. It persisted in places like Peru until well into the mid twentieth century, when people surely weren't using flintlock pistols. There are also studies like this one (https://www.sciencedirect.com/science/article/abs/pii/S0147596720300378) that claim that the decline of dueling was pretty closely connected to either economic development or the presence of the federal government (as measured by post offices).
dr_s (5 karma, 6mo):
True, but to be fair, wasn't this the point of duelling pistols? Much like fencing swords aren't like real swords, duelling pistols weren't crafted to be the most accurate or damaging firearms possible. And they came in pairs so the duellists would both have the same exact standing. Of course, if you have to impose artificial restrictions on your supposed deadly combat to make it less deadly, you might as well simply remove the death element altogether and make it a game of paintball.

Great digest, as always. My favorite parts were the link to the US census policy explanation and the reminder that most people don't distinguish between choices and mandates.

Your comment made me upvote. I think LW is exactly the right place for this sort of writing. It's got error corrections, empiricism, high epistemic standards, ontological bucketing, a willingness to admit when one is wrong, and maps.

Note to future readers: This thread was in response to my original post, in which I mistakenly switched the $0 and $100.

I mixed up the $100 and $0 in the original post. This is now fixed.

You're right. Fixed. Thanks.
