All of Taymon Beal's Comments + Replies

Exterminating humans might be on the to-do list of a Friendly AI

This strikes me as a purely semantic question regarding what goals are consistent with an agent qualifying as "friendly".

RomanS (2mo): I think that regardless of how we define "Friendly", an advanced enough Friendly AGI might sometimes take actions that will be perceived as hostile by some humans (or even all humans). This makes it much harder to distinguish the actions of:
  • a rogue AGI
  • a Friendly AGI that failed to preserve its Friendliness
  • a Friendly AGI that remains Friendly
Charlie Sanders (3mo): Robin’s sense of honor would probably prevent him from litigating this, but that absolutely would not hold up in court.
Petrov Day 2021: Mutually Assured Destruction?

Correction: The annual Petrov Day celebration in Boston has never used the button.

habryka (4mo): Fixed!
Takeaways from one year of lockdown

I've talked to some people who locked down pretty hard pretty early; I'm not confident in my understanding but this is what I currently believe.

I think characterizing the initial response as over-the-top, as opposed to sensible in the face of uncertainty, is somewhat the product of hindsight bias. In the early days of the pandemic, nobody knew how bad it was going to be. It was not implausible that the official case fatality rate for healthy young people was a massive underestimate.

I don't think our community is "hyper-altruistic" in the Strangers Drowning... (read more)

[comment deleted] (10mo)
mingyuan (1y): This is my favorite take/summary. Author endorses.
We Need Browsers as Platforms

Docker is not a security boundary.

hamnox (1y): It is not, but I do see people treating it that way.
Manifesto of the Silent Minority

Eh, if you read the raw results most are pretty innocuous.

Industrial literacy

Not at the scale that would be required to power the entire grid that way. At least, not yet. This is of course just one study (h/t Vox via Robert Wiblin) but provides at least a rough picture of the scale of the problem.

Davidmanheim (1y): This is disingenuous, I think. Of course they don't exist at the necessary scale yet, because the market is small. If the market grew, and was profitable, scaling would be possible. Rare earths aren't rare enough to be a real constraint, we'd just need to mine more of them. The only thing needed would be to make more of things we know how to make. (And no, that wouldn't happen, because the new tech being developed would get developed far faster, and used instead.)
MikkW (1y): Sorry, I didn't mean to imply that there isn't a current practical problem with solar/wind; my reason for my previous post is that I read Christian's statement as implying that it is fundamentally physically impossible to rely on solar without being chronically exposed to outages, which simply isn't true. It is true that we still need to develop our technology and infrastructure to accommodate the dynamics that exist with solar power. To be clear, I am in favour of using nuclear power for precisely this reason, although it also seems that the problems with renewables will be taken care of by the free market fairly quickly as renewables make up a larger proportion of our energy consumption.
$1,000 Bounty for Pro-BLM Policy Analysis

Cross-posting from Facebook:

Any policy goal that is obviously part of BLM's platform, or that you can convince me is, counts. Police reform is the obvious one but I'm open to other possibilities.

It's fine for "heretics" to make suggestions, at least here on LW where they're somewhat less likely to attract unwanted attention. Efficacy is the thing I'm interested in, with the understanding that the results are ultimately to be judged according to the BLM moral framework, not the EA/utilitarian one.

Small/limited returns are okay if they're the best that can b

... (read more)
Reality-Revealing and Reality-Masking Puzzles
It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes.

I don't have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community's trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it we... (read more)

habryka (2y): Cool, will transfer ownership and make this a draft.

This looks like a duplicate.

habryka (3y): I will move it back to drafts.
Nash equilibriums can be arbitrarily bad

Nit: I think this game is more standardly referred to in the literature as the "traveler's dilemma" (Google seems to return no relevant hits for "almost free lunches" apart from this post).

Stuart_Armstrong (3y): That's useful; I added a link to the other game in the main text (as far as I can tell, I came up with this independently).
Book review: The Sleepwalkers by Arthur Koestler

Irresponsible and probably wrong narrative: Ptolemy and Simplicius and other pre-modern scientists generally believed in something like naive realism, i.e., that the models (as we now call them) that they were building were supposed to be the way things really worked, because this is the normal way for humans to think about things when they aren't suffering from hypoxia from going up too many meta-levels, so to speak. Then Copernicus came along, kickstarting the Scientific Revolution and with it the beginnings of science-vs.-religion conflict, spurrin... (read more)

Raemon (3y): Thanks, that at least sounds like a plausible narrative and I understand what you meant better.
Book review: The Sleepwalkers by Arthur Koestler

Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.

Raemon (3y): I'm not sure I parsed this comment thread; wondering if you could explain in a bit more detail what you think happened?
Book review: The Sleepwalkers by Arthur Koestler

This essay argues against the idea of "saving the phenomenon", and suggests that the early astronomers mostly did believe that their models were literally true. Which rings true to me; the idea of "it doesn't matter if it's real or not" comes across as suspiciously modern.

Richard_Ngo (3y): Hmm, interesting. It doesn't discuss the Galileo affair, which seems like the most important case where the distinction is relevant. Nevertheless, in light of this, "geocentric models with epicycles had always been in the former category" is too strong and I'll amend it accordingly.
What LessWrong/Rationality/EA chat-servers exist that newcomers can join?

For EAs and people interested in discussing EA, I recommend the EA Corner Discord server, which I moderate along with several other community members. For a while there was a proliferation of several different EA Discords, but the community has now essentially standardized on EA Corner and the other servers are no longer very active. Nor is there an open EA chatroom with comparable levels of activity on any other platform, to the best of my knowledge.

I feel that we've generally done a good job of balancing access needs associated with different levels... (read more)

LW Update 2019-03-12 -- Bugfixes, small features

The Slate Star Codex sidebar is now using localStartTime to display upcoming meetups, fixing a longstanding off-by-one bug affecting displayed dates.
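(For illustration, a minimal sketch of the kind of off-by-one this fixes. The field names here are hypothetical rather than taken from the actual API, but the mechanism is the same: deriving a calendar date from a UTC timestamp in the viewer's timezone can land on the wrong day, while a zone-less local start time cannot.)

```typescript
// Hypothetical event: 7pm local time in Boston on March 13, stored in UTC.
const utcStart = new Date("2019-03-13T23:00:00Z"); // 19:00 EDT = 23:00 UTC

// Old behavior (sketch): derive the displayed date from the UTC timestamp.
// A viewer whose browser reports, say, Asia/Tokyo sees "3/14/2019".
const fromUtc = utcStart.toLocaleDateString("en-US", { timeZone: "Asia/Tokyo" });

// New behavior (sketch): use a zone-less local start time, so every viewer
// sees the date the organizers actually advertised.
const localStartTime = "2019-03-13T19:00:00";
const fromLocal = localStartTime.slice(0, 10); // "2019-03-13"

console.log(fromUtc, fromLocal);
```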

LW2.0 Mailing List for Breaking API Changes

You probably want to configure this such that anyone can read and subscribe but only you can post.

Open Thread January 2019

I don't feel like much has changed in terms of evaluating it. Except that the silliness of the part about cryptocurrency is harder to deny now that the bubble has popped.

Norms of Membership for Voluntary Groups

I linked this article in the EA Discord that I moderate, and made the following comments:

Posting this in #server-meta because it helps clarify a lot of what I, at least, have struggled to express about how I see this server as being supposed to work.
Specifically, I feel pretty strongly that it should be run on civic/public norms. This is a contrast to a lot of other rationalsphere Discords, which I think often at least claim to be running on guest norms, though I don’t have a super-solid understanding of the social dynamics involved.
The standard failure mo
... (read more)
LW Update 2018-11-22 – Abridged Comments

I fear that this system doesn't actually provide the benefits of a breadth-first search, because you can't really read half a comment. If I scroll down a comment page without uncollapsing it, I don't feel like I got much of a picture of what anyone actually said, and also repeatedly seeing what people are saying cut off midsentence is really cognitively distracting.

Reddit (and I think other sites, but on Reddit I know I've experienced this) makes threads skimmable by showing a relatively small number of comments, rather than a small sni... (read more)

Raemon (3y): Hmm, nod. The original version of the truncation did actually do something more similar to that, but it came with a different set of technical challenges and annoyances, and at the time it had seemed to me that the truncation system would be less annoying. (I thought "not being able to see comments at all", and thus not knowing what the thread structure even looked like, would be worse.) I am curious whether the various people who've expressed dislike of the abridgment would feel fine with a version that shows fewer comments rather than less of each comment.
LW Update 2018-11-22 – Abridged Comments

You don't currently expand comments that are positioned below the clicked comment but not descendants of it.

Raemon (3y): Oh, yeah, that makes sense.
LW Update 2018-11-22 – Abridged Comments

Idea: If somebody has expanded several comments, there's a good chance they want to read the whole thread, so maybe expand all of them.

Raemon (3y): Clicking on a comment expands all comments below it. We don’t currently expand comments above it because that changes your screen position, which can be disorienting, although I could imagine changing my mind about that.
Speculative Evopsych, Ep. 1

Would you mind saying in non-metaphorical terms what you thought the point was? I think this would help produce a better picture of how hard it would have been to make the same point in a less inflammatory way.

Ecosystems, and organisms in them, generally don't care about stuff that can't be turned into power-within-the-ecosystem. Box two exists, but unless the members of box one can utilize box two for e.g. information/computation/communication, it doesn't matter to anyone in box one.

Other places where this applies:

  • Highly competitive industries won't care about externalities or the long-term future. Externalities and the future are in box two. They might not even be modeled.
  • Young people have a personal interest in making their life better when they're olde
... (read more)
Rationality Is Not Systematized Winning

There's an argument to be made that even if you're not an altruist, that "societal default" only works if the next fifty years play out more-or-less the same way the last fifty years did; if things change radically (e.g., if most jobs are automated away), then following the default path might leave you badly screwed. Of course, people are likely to have differing opinions on how likely that is.

Kaj_Sotala (3y): Does LW-style rationality give you any major advantage in figuring out what to do as a consequence of major automation, though?
Modes of Petrov Day

No, we didn't participate in this in Boston. Our Petrov Day is this Wednesday, the actual anniversary of the Petrov incident.

Modes of Petrov Day

Some disconnected thoughts:

In Boston we're planning Normal Mode. (We rejected Hardcore Mode in previous years, in part because it was a serious problem for people who underwent significant inconvenience to be able to attend.)

I'm good at DevOps and might be able to help the Seattle folks make their app more available if they need it.

I happened to give a eulogy of sorts for Stanislav Petrov last year.

I'm currently going through the latest version of the ritual book and looking for things to nitpick, since I know that a few points (notably the ... (read more)

Raemon (3y): The Seattle and UK groups apparently celebrated today, but with a somewhat different scenario (instead of going home quietly, they each had cakes, and if they got nuked they had to burn their cake instead of eating it; at least this is my present understanding).
Berkeley REACH Supporters Update: September 2018

Thanks for this update!

I have a question as a donor that I regret not thinking of during the fundraising push. Could you identify a few possible future outcomes, whose success or failure could be measured within a year, and which, if achieved, would indicate that REACH was probably producing significant value from an EA perspective (as opposed to from a community-having-nice-things perspective)? And could you offer probability estimates on those outcomes being achieved?

I certainly understand if this would be overly time-consuming, but I'd feel comfortable... (read more)

stardust (3y): Just now saw the edit. This seems plausibly already the case? There were no EA meetups in Berkeley before I started REACH. Most of the EA meetups draw 8-25 people and it's not always the same people (there's a core set of 4 or so who are usually there). This I'm not sure about; happy to hear any anecdotes if this is the case for anyone! I would say at least 5 people have become significantly more involved in the community than they would've in the counterfactual. Some of them are at least applying for jobs in EA priority areas.

I am not very good at making up numbers in this way and have stopped trying. I am not a superforecaster :) So I'm not going to make any actual predictions, but I'll give some categories where I see potential for impact.

First, let me give an overview of what has been achieved so far based on the metrics I have access to:

  • number of visits (909 logged visits between 4/26 and 8/30, with about 300 unique visitors -- this is a lossy collection strategy, not everyone who comes in actually signs in, I'd guess at most 80% of visits get logged, but ag
... (read more)
Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why

Then I think the post should have waited until those arguments were up, so that the discussion could be about their merits. The problem is the "hyping it up to Be An Internet Event", as Ray put it in a different subthread; since the thing you're hyping up is so inflammatory, we're left in the position of having arguments about it without knowing what the real case for it is.

… since the thing you’re hyping up is so inflammatory, we’re left in the position of having arguments about it without knowing what the real case for it is.

Are we, though? Must we have arguments about it? What reason is there for us not to say something like, “this raises red flags but we’ll consider and discuss it properly after it takes place; make sure to document it properly and exhaustively, to signal to us all that you are acting in good faith”, and then say no more for now?

Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why

I think it's an antisocial move to put forth a predictably inflammatory thesis (e.g., that an esteemed community member is a pseudo-intellectual not worth reading) and then preemptively refuse to defend it. If the thesis is right, then it would be good for us to be convinced of it, but that won't happen if we don't get to hear the real arguments in favor. And if it's wrong, then it should be put to bed before it creates a lot of unproductive social conflict, but that also won't happen as long as people can claim that we haven'... (read more)

Evan_Gaensbauer (3y): I was clarifying my intentions, but my friend, who is the main draw for the event, does intend to defend the thesis that Scott is a pseudo-intellectual not worth reading. The real arguments will be up in a little over a week, though.
Arkanj3l (3y): Meetup hasn't happened yet. Should the refutations be given on your time frame?
Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why

Unless a comment was edited or deleted before I got the chance to read it, nobody but you has used the word "violence" in this thread. So I don't understand how an argument about the definition of "violence" is in any way relevant.

Evan_Gaensbauer (3y): I was contrasting it with Ben's use of the word 'peaceful,' and making some background assumptions as to what the context for using the word was (Said remarked on the odd diction). Apparently those assumptions were wrong.
Last Chance to Fund the Berkeley REACH

Hmmm. Do you think that's a bug, or a feature?

LessWrong seems like a bit of a weird example since CFAR's senior leadership were among the people pushing for it in the first place. IIRC even people working at EA meta-orgs have encountered difficulties and uncertainty trying to personally fund projects through the org.

Last Chance to Fund the Berkeley REACH

I've just pledged $40 per month.

I could afford to pay more. I'd do so if I ever actually visited REACH, but I live thousands of miles away (and did give a small donation when I visited for the pre-EA Global party, and will continue to do so if I ever come back). I'd also pay more if I were more convinced that it was a good EA cause, but the path from ingroup reinforcement to global impact is speculative and full of moral hazard and I'm still thinking about it.

My pledge represents a bet that REACH will ultimately make a difference in my ... (read more)

Last Chance to Fund the Berkeley REACH

This is a problem I've been thinking about for a while in a broader EA context.

It's claimed fairly widely that EA needs a lot more smallish projects, including ones that aren't immediately legible enough to be fundable by large institutional donors (e.g., because the expected value depends on assessments of the competence and value alignment of the person running the project, which the large institutional funders can't assess). It's also claimed (e.g., by Nick Beckstead of OpenPhil at EA Global San Francisco 2017) that smallish earn... (read more)

Raemon (4y): I think to some extent we have this, but it depends upon local trust networks that aren't always legible to newer people. (For example, LessWrong is funded through CFAR.)
Using the LessWrong API to query for events

Re: local events: Although I haven't checked this with Scott, my default assumption for the SSC sidebar is that keeping it free of clutter and noise is of the highest importance. As such, I'm only including individual events that a human actually took explicit action to advertise, to prevent the inclusion of "weekly" events from groups that have since flaked or died out.

(This is also why the displayed text only includes the date and Google-normalized location, to prevent users from defacing the sidebar with arbitrary text.)

LW proper may have different priorities. Might be worth considering design options here for indicating how active a group is.

habryka (4y): Yeah, it would be great to somehow know how active a group is. Though it's definitely non-trivial to make that happen. Right now all groups were created in the last few months, so you know at least that if they are dead, they haven't been dead for long. My model is that showing a group that is actually in the same city but might or might not be active is more important than showing far away events that are guaranteed to be real. In both cases the user will be disappointed, but I think your hit rate is going to be higher for the groups, since there are so many more of them, and most events don't get advertised. I don't think this means you have to show more clutter or anything; as long as you can get the user's rough location (via IP if necessary) you should be able to show nearby groups with nothing else but the location.
Using the LessWrong API to query for events

So correct me if I'm wrong here, but the way timezones seem to work is that, when creating an event, you specify a "local" time, then the app translates that time from whatever it thinks your browser's time zone is into UTC and saves it in the database. When somebody else views the event, the app translates the time in the database from UTC to whatever it thinks their browser's time zone is and displays that.
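If I've got that right, a minimal sketch of the round trip (plain Date handling in TypeScript/JavaScript; the actual field names in the app may differ) looks like this:

```typescript
// Creating an event: the organizer enters a wall-clock time, which the browser
// interprets in whatever timezone it thinks it's in, and the app stores it as UTC.
const enteredByOrganizer = new Date("2018-07-21T18:30:00"); // parsed in the creator's browser zone
const storedInDb = enteredByOrganizer.toISOString();        // e.g. "2018-07-21T22:30:00.000Z" if created from UTC-4

// Viewing the event: the stored UTC time is converted into whatever timezone
// the *viewer's* browser reports, which may be neither the creator's zone
// nor the timezone the event actually takes place in.
const shownToViewer = new Date(storedInDb).toLocaleString();
console.log(storedInDb, shownToViewer);
```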

I suppose this will at least sometimes work okay in practice, but if somebody creates an event in a time zone other than the ... (read more)

habryka (4y): Yep, that is how it works. This seemed to make the most sense for potential remote events, i.e. when people want to Skype into an event. I think it could totally make sense to instead display the time in the local timezone, but it's not obviously better to me and would require a bunch of engineering effort. But if someone else feels strongly about it, I would definitely merge a PR and help getting them set up with the codebase and everything to make that change happen. Also if enough people feel that the other way would be better, then I can get around to fixing it. I do think that in general, focusing on listing events over groups was a mistake. When I get around to fixing our UI, I think the community page should feature groups a lot more prominently than events, which also mostly sidesteps this problem.
Using the LessWrong API to query for events

Also, two other questions:

  • Is there any way to link the new event form to have a type box prechecked? How hard is this to implement in Vulcan?
  • How do time zones of events work?
habryka (4y): It would be pretty trivial to implement individual prefilled options via the URL. Solving the problem generally though might turn out to be harder (i.e. generally allowing you to prefill forms based on URL parameters and available fields). Time zones are currently all saved in the DB as UTC+0. So that's what your queries should translate to, though I haven't needed to test this extensively. On a higher level: I think it would be more valuable to have a sidebar of nearby local groups instead of nearby events. Most groups don't get around to publishing all of their events on LessWrong, and while this is something I want to fix in the long-run, right now we only have a single upcoming SSC meetup listed, even though I know of many SSC groups that have weekly meetups and are listed.
Using the LessWrong API to query for events

Thanks. I'd originally written up a wishlist of server-side functionality here, but at this point I'm thinking maybe I'll just do the sorting and filtering on the client, since this endpoint seems able to provide a superset of what I'm looking for. It's less efficient and definitely an evil hack, but it means not needing server-side code changes.

I'll note that filter: "SSC" doesn't work in the GraphiQL page; events that don't match the filter still get returned.

More generally, the way the API works now basi... (read more)

habryka (4y): Ah, sorry. I made a small error in the query above. Here is the corrected one (I wrote "filter" instead of "filters"): https://www.lesswrong.com/graphiql?query=%7BPostsList(terms%3A%20%7Bview%3A%20%22nearbyEvents%22%2C%20filters%3A%20%22SSC%22%2C%20lat%3A%200%2C%20lng%3A%200%7D)%20%7B%0A%20%20_id%0A%20%20createdAt%0A%20%20title%0A%20%20mongoLocation%0A%20%20location%0A%20%20types%0A%7D%7D
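(Decoded from that URL for readability; the only change from the earlier query is filters in place of filter.)

```typescript
// The corrected query from the GraphiQL link above, decoded from the URL.
// Note the parameter name is `filters`, not `filter`.
const query = `{
  PostsList(terms: {view: "nearbyEvents", filters: "SSC", lat: 0, lng: 0}) {
    _id
    createdAt
    title
    mongoLocation
    location
    types
  }
}`;
```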
Taymon Beal (4y): Also, two other questions:
  • Is there any way to link the new event form to have a type box prechecked? How hard is this to implement in Vulcan?
  • How do time zones of events work?
Meta-tations on Moderation: Towards Public Archipelago

I think I agree that if you see the development of explicit new norms as the primary point, then Facebook doesn't really work and you need something like this. I guess I got excited because I was hoping that you'd solved the "audience is inclined towards nitpicking" and "the people I most want to hear from will have been prefiltered out" problems, and now it looks more like those aren't going to change.

Raemon (4y): My expectation is that the new rules will result in less nitpicking (since authors will have a number of tools to say 'sorry, this comment doesn't seem to be pulling its weight'), although you may have to learn which authors enforce which sorts of norms to figure it out. I'm not 100% sure which things are prefiltering out the people you care about, so am not sure whether this will make a difference.
Meta-tations on Moderation: Towards Public Archipelago

I guess there's an inherent tradeoff between archipelago and the ability to shape the culture of the community. The status quo on LW 2.0 leans too far towards the latter for my tastes; the rationalist community is big and diverse and different people want different things, and the culture of LW 2.0 feels optimized for what you and Ben want, which diverges often enough from what I want that I'd rather post on Facebook to avoid dealing with that set of selection effects. Whether you should care about this depends on how many other people are in a s... (read more)

Raemon (4y): Quick note: this isn't what I mean by archipelago (see other comment).
ESRogs (4y): Jeff copies those comments by hand. Source: some Facebook thread that I can't find right now. EDIT: Looks like I am wrong: https://www.jefftk.com/p/external-comment-integration.
habryka (4y): I've been thinking about that, though I am somewhat worried about the legality of that integration (it's unclear whether you can copy people's content like that without their direct consent, or what would count as consent), and also think it removes most of the levers to shape the culture of a community. For example, it seems clear to me that the rationality community could not have formed its culture on Facebook, though it might be able to preserve its culture on Facebook. The forces towards standard online discussion norms on Facebook are quite high (for example, you can't display the moderation norms easily accessible below a comment, you can't reduce the attention a comment gets by downvoting it, you can't collapse a comment by default, etc.)
Meta-tations on Moderation: Towards Public Archipelago

Thanks for articulating why Facebook is a safer and more pleasant place to comment than LW. I tried to post pretty much this on a previous thread but wasn't able to actually articulate the phenomenon so didn't say anything.

That being said, I still feel like I'd rather just post on Facebook.

There are two specific problems with Facebook as a community forum that I'm aware of. The first is that the built-in archiving and discovery tools are abysmal, because that's not the primary use case for the platform. Fortunately, we know there's... (read more)

Wei_Dai (3y): I'm curious which worldviews and approaches you saw as over-represented, and which are the ones you most wanted to hear from, and whether anything has changed since you wrote this comment. Are your friends here now? If not, why?
Other than that, Facebook seems to have the whole "archipelago" thing pretty much solved.

I actually think there's a deep sense in which Facebook has not accomplished archipelago, and an additional deep sense in which they have not accomplished public archipelago.

Archipelago doesn't just mean "you've filter-bubbled yourself such that you only hang out with likeminded people." It means you've filtered yourself and then used that filtering to enforce norms that you wouldn't be able to enforce otherwise, allo... (read more)

There are two specific problems with Facebook as a community forum that I'm aware of. The first is that the built-in archiving and discovery tools are abysmal, because that's not the primary use case for the platform. Fortunately, we know there's a technical solution to this, because Jeff Kaufman implemented it on his blog.

I don't understand this response. That there exists a solution doesn't mean that there exists a solution that 1) is easy to use and 2) that people will actually use. One of the many advantages of hosting a conver... (read more)

Arbital postmortem

Thanks for the informative writeup.

I already said all of this on Facebook, but just to reiterate:

  • I believed from the first announcement, and continue to believe, that much of the value of Arbital as it exists is in the software itself. (By comparison, if Wikipedia stopped existing, MediaWiki would still be important and valuable.)
  • I, personally, want my own Arbital instance that I can use to write about EA donation opportunities. (I think Malcolm Ocean has said he wants one too.)
  • If and when it gets open sourced under any of the usual open source licenses, I will contribute documentation, automation scripts, and/or settings cleanup as needed to make it self-hostable.