Taymon Beal


- [Event] Petrov Day in Boston (199 Harvard Street, Cambridge, MA, USA), Sep 26th
- [Event] Boston SSC Meetup (MIT Building 1, Massachusetts Avenue, Cambridge, MA, USA), Oct 27th
- [Event] Boston SSC Meetup (MIT Building 35, Massachusetts Avenue, Cambridge, MA, USA), Sep 22nd


$1,000 Bounty for Pro-BLM Policy Analysis

Cross-posting from Facebook:

Any policy goal that is obviously part of BLM's platform, or that you can convince me is, counts. Police reform is the obvious one but I'm open to other possibilities.

It's fine for "heretics" to make suggestions, at least here on LW where they're somewhat less likely to attract unwanted attention. Efficacy is the thing I'm interested in, with the understanding that the results are ultimately to be judged according to the BLM moral framework, not the EA/utilitarian one.

Small/limited returns are okay if they're the best that can be done. Time preference is moderately high (because that matches my assessment of the BLM moral framework) but still limited.

Suggestions from non-Americans are fine.

Reality-Revealing and Reality-Masking Puzzles
It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes.

I don't have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community's trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it well you have to allocate a lot of skill points to it, which means allocating them away from the organization's core competencies. I've reached the point where I no longer find even gross failures of this kind surprising.

(I think you already appreciate this but it seemed worth saying explicitly in public anyway.)

This looks like a duplicate.

Nash equilibriums can be arbitrarily bad

Nit: I think this game is more standardly referred to in the literature as the "traveler's dilemma" (Google seems to return no relevant hits for "almost free lunches" apart from this post).

Book review: The Sleepwalkers by Arthur Koestler

Irresponsible and probably wrong narrative: Ptolemy and Simplicius and other pre-modern scientists generally believed in something like naive realism, i.e., that the models (as we now call them) that they were building were supposed to be the way things really worked, because this is the normal way for humans to think about things when they aren't suffering from hypoxia from going up too many meta-levels, so to speak. Then Copernicus came along, kickstarting the Scientific Revolution and with it the beginnings of science-vs.-religion conflict, spurring many politically-motivated clever arguments about Deep Philosophical Issues. Somewhere during that process somebody came up with scientific anti-realism, and it gained traction because it was politically workable as a compromise position, being sufficiently nonthreatening to both sides that they were content to let it be. Except for Galileo, who thought it was bullshit and refused to play along, which (in conjunction with his general penchant for pissing people off, plus the political environment having changed since Copernicus due to the Counter-Reformation) got him locked up.

Book review: The Sleepwalkers by Arthur Koestler

Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.

Book review: The Sleepwalkers by Arthur Koestler

This essay argues against the idea of "saving the phenomenon", and suggests that the early astronomers mostly did believe that their models were literally true. Which rings true to me; the idea of "it doesn't matter if it's real or not" comes across as suspiciously modern.

What LessWrong/Rationality/EA chat-servers exist that newcomers can join?

For EAs and people interested in discussing EA, I recommend the EA Corner Discord server, which I moderate along with several other community members. For a while there was a proliferation of several different EA Discords, but the community has now essentially standardized on EA Corner and the other servers are no longer very active. Nor is there an open EA chatroom with comparable levels of activity on any other platform, to the best of my knowledge.

I feel that we've generally done a good job of balancing access needs associated with different levels of community engagement. A number of longtime EAs with significant blogosphere presences hang out here, but the culture is also generally newcomer-friendly. Discussion topics range from 101 stuff to open research questions. Speaking only for myself, I generally strive to maintain civic/public moderation norms as much as possible.

Also you can get a pretty color for your username if you donate 10% or do direct work.

LW Update 2019-03-12 -- Bugfixes, small features

The Slate Star Codex sidebar is now using localStartTime to display upcoming meetups, fixing a longstanding off-by-one bug affecting displayed dates.
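For illustration, here is a minimal sketch of the class of off-by-one bug described above (this is hypothetical code, not LessWrong's actual implementation; the `localStartTime` field and the fixed EST offset are assumptions): an evening event stored as a UTC timestamp lands on the next calendar day if you read the date straight off the UTC value instead of converting to the event's local time first.

```javascript
// A meetup at 9:00 PM EST (UTC-5) on Sep 21 is stored as 02:00 UTC on Sep 22.
const startUtc = new Date(Date.UTC(2019, 8, 22, 2, 0)); // 2019-09-22T02:00Z

// Naive: take the date portion of the UTC timestamp -> shows "Sep 22",
// one day later than what a Boston attendee expects.
const naiveDay = startUtc.toISOString().slice(0, 10); // "2019-09-22"

// Fixed: shift by the event's local UTC offset before reading the date.
// In a real fix this offset would come from a field like localStartTime
// rather than being hardcoded.
const offsetHours = -5; // EST (assumed for this sketch)
const local = new Date(startUtc.getTime() + offsetHours * 3600 * 1000);
const localDay = local.toISOString().slice(0, 10); // "2019-09-21"
```

Reading the date in the event's local timezone recovers the correct day; the naive UTC read is what produces the off-by-one displayed dates.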
