Taymon Beal
Posts


$1,000 Bounty for Pro-BLM Policy Analysis (5y)

Comments

The Illustrated Petrov Day Ceremony
Taymon Beal · 21d

I think that either omitting the don't-read-the-citations-aloud stage direction or making it easier to follow (with a uniform italic-text-is-silent convention) would be fine, and I don't have a strong opinion as to which is better. But before Boston made the change I'm now suggesting, what tended to happen was that people inconsistently read or didn't read the citations aloud, and this was confusing and distracting.

The Illustrated Petrov Day Ceremony
Taymon Beal · 21d

This is good and I approve of it.

A few random notes and nitpicks:

  • I believe the first Petrov Day was in Boston in 2013, not 2014.
  • "More than 20 people"? 20 seems to me like far too many; I never do a table with more than 11. (If you have exactly 11 people you have to put them all at one table, because you need at least six to do it properly, because that's how many Children there are at the end. But if I had 22 people I might split them into three groups rather than two; I haven't yet had to actually decide this.)
  • Boston significantly reduced the incidence of people reading the quote citations out loud by putting them in italic text, just like the stage directions, and then including a uniform "don't read italic text out loud" stage direction.
  • The version of the ceremony on the site includes the inaccurate account of the Arkhipov incident made up by Noam Chomsky. You can see Boston's corrected-after-fact-checking version starting on page 30 of this doc.
  • I have also been repeatedly told that the story in the ceremony of the Black Death's effect on human progress is wrong, but haven't changed it because I don't really understand what's wrong with it and don't have an alternative lined up.
  • Petrov received the Dresden Peace Prize, not the International Peace Prize, which was long defunct by 2013.
  • Hitler's rise to power in Germany started in 1919 and was complete by 1934, so can't really be said to have occurred "in 1939". (I just replaced this with "in the 1920s".)
  • I still think the gag of duplicating the "preserving knowledge required redundancy" section is hilarious and should be included :-P
The noncentral fallacy - the worst argument in the world?
Taymon Beal · 22d

Domain seems to have expired, so I bought it and got it working again.

The title is reasonable
Taymon Beal · 25d

(Epistemic status: Not fully baked. Posting this because I haven't seen anyone else say it[1], and if I try to get it perfect I probably won't manage to post it at all, but it's likely that this is wrong in at least one important respect.)

For the past week or so I've been privately bemoaning to friends that the state of the discourse around IABIED (and therefore on the AI safety questions that it's about) has seemed unusually cursed on all sides, with arguments going in circles and it being disappointingly hard to figure out what the key disagreements are and what I should believe conditional on what.

I think maybe one possible cause of this (not necessarily the most important) is that IABIED is sort of two different things: it's a collection of arguments to be considered on the merits, and it's an attempt to influence the global AI discourse in a particular object-level direction. It seems like people coming at it from these two perspectives are talking past each other, and specifically in ways that lead each side to question the other's competence and good faith.

If you're looking at IABIED as an argumentative disputation under rationalist debate norms, then it leaves a fair amount to be desired.[2] A number of key assumptions are at least arguably left implicit; you can argue that the arguments are clear enough, by some arbitrary standard, but it would have been better to make them even clearer. And while it's not possible to address every possible counterargument, the book should try hard to address the smartest counterarguments to its position, not just those held by the greatest number of not-necessarily-informed people. People should not hesitate to point out these weaknesses, because poking holes in each other's arguments is how we reach the truth. The worst part, though, is that when you point this out, proponents don't eagerly accept feedback and try to modulate their messaging to point more precisely at the truth; instead, they argue that they should be held to a lower epistemic standard and/or that the hole-pokers should have a higher bar for hole-poking. This is really, really not a good look! If you behaved like that on LessWrong or the EA Forum, people would update some amount towards the proposition that you're full of shit and they shouldn't trust you. And since a published book is more formal and higher-exposure than a forum post, that means you should be more epistemically careful. Opponents are therefore liable to conclude that proponents have turned their brains off and are just doing tribal yelling, with a thin veneer of verbal sophistication applied on top for the sake of social convention.

If you're looking at IABIED as an information op, then it's doing a pretty good job balancing a significant and frankly kind of unfair number of constraints on what a book has to do and how it has to work. In particular, it bends over backwards to accommodate the notoriously nitpicky epistemic culture of rationalists and EAs, despite these not being the most important audiences. Further hedging is counterproductive, because in order to be useful, the book needs to make its point forcefully enough to overcome readers' bias towards inaction. The world is in trouble because most actors really, really want to believe that the situation doesn't require them to do anything costly. If you tell them a bunch of nuanced hedgey things, those biases will act on your message in their brains and turn it into something like "there's a bunch of expert disagreement, we don't know things for sure, but probably whatever you were going to do anyway is fine". Note that this is not about "truth vs. propaganda"; basically every serious person agrees that some kind of costly action is or will be required, so if you say that the book overstates its case, or other things that people will predictably read as "the world's not on fire", they will thereby end up with a less accurate picture of the world, according to what you yourself believe. And yet opponents insist upon doing precisely this! If you actually believe that inaction is appropriate, then so be it, but we know perfectly well that most of you don't believe that and are directionally supportive of making AI governance and policy more pro-safety. So saying things that will predictably lull people further asleep is just a massive own-goal by your own values; there's no rationalist virtue in speaking to the audience that you feel ought to exist instead of the one that actually does. Proponents are therefore liable to conclude that opponents either just don't care about the real-world stakes, or are so dangerously naive as to be a liability to their own side.

[1] Though it's likely someone did and I just didn't see it.

[2] I've been traveling, haven't made it all the way through the book yet, and am largely going by the reviews. I'm hoping to finish it this week, and if the book's content turns out to be relevantly different from what I'm currently expecting, I'll come back and post a correction.

ACX Ballots Everywhere: 2025 Greater Boston Municipal Primaries
Taymon Beal · 1mo

We're in the room now and can let people in.

Maximizing Communication, not Traffic
Taymon Beal · 9mo

You don't think the GitHub thing is about reducing server load? That would be my guess.

Parable of the vanilla ice cream curse (and how it would prevent a car from starting!)
Taymon Beal · 10mo

This is addressed in the FAQ linked at the top of the page. TL;DR: The author insists that the gist of the story is true, but acknowledges that he glossed over a lot of intermediate debugging steps, including accounting for the return time.

MIRI 2024 Mission and Strategy Update
Taymon Beal · 2y

Does that logic apply to crawlers that don't try to post or vote, as in the public-opinion-research use case? The reason to block those is just that they drain your resources, so sophisticated measures to feed them fake data would be counterproductive.

MIRI 2024 Mission and Strategy Update
Taymon Beal · 2y

I didn't downvote (I'm just now seeing this for the first time), but the above comment left me confused about why you believe a number of things:

  • What methodology do you think MIRI used to ascertain that the Time piece was impactful, and why do you think that methodology isn't vulnerable to bots or other kinds of attacks?
  • Why would social media platforms go to the trouble of feeding fake data to bots instead of just blocking them? What would they hope to gain thereby?
  • What does any of this have to do with the Social Science One incident?
  • In general, what's your threat model? How are the intelligence agencies involved? What are they trying to do?
  • Who are you even arguing with? Is there a particular group of EAsphere people who you think are doing public opinion research in a way that doesn't make sense?

Also, I think a lot of us don't take claims like "I've been researching this matter professionally for years" seriously because they're too vaguely worded; you might want to be a bit more specific about what kind of work you've done.

Brighter Than Today Versions
Taymon Beal · 2y

For people in Boston, I made a straw poll to gauge community sentiment on this question: https://forms.gle/5BJEG5fJWTza14eL9
