All of Taymon Beal's Comments + Replies

Does that logic apply to crawlers that don't try to post or vote, as in the public-opinion-research use case? The reason to block those is just that they drain your resources, so sophisticated measures to feed them fake data would be counterproductive.

2trevor1mo
The crawlers still have to scroll, which is one of the best sources of human biodata related to various topics (only for people with massive sample sizes). On platforms where they aren't scrolling much (e.g. YouTube), they're clicking. Although researching public opinion theoretically isn't quite as aggressive as researching and steering public opinion simultaneously, researching public opinion is still central to building capabilities, especially in the current paradigm, where many tasks related to data analysis and experimental research are being automated and scaled.

I didn't downvote (I'm just now seeing this for the first time), but the above comment left me confused about why you believe a number of things:

  • What methodology do you think MIRI used to ascertain that the Time piece was impactful, and why do you think that methodology isn't vulnerable to bots or other kinds of attacks?
  • Why would social media platforms go to the trouble of feeding fake data to bots instead of just blocking them? What would they hope to gain thereby?
  • What does any of this have to do with the Social Science One incident?
  • In general, what's you
... (read more)
2trevor2mo
Ah, yes, I thought that methodology wasn't vulnerable to bots or other kinds of attacks because I was wrong. Oops. Glad I asked. For the other stuff, I've explained it pretty well in the past, but you're right that I did an inadequate job covering it here. Blocking bots is basically giving constructive feedback to the people running the botnet (since they can tell when bots are blocked) on how to run botnets without detection; it's critical to conceal every instance of a detected bot for as long as possible, which is why things like shadowbanning and vote fuzzing are critical for security on modern social media platforms. This might explain why amateurish bots are so prevalent; state-level attackers can easily run both competent and incompetent botnets simultaneously and learn valuable lessons about the platform's security system from both types (there are other good explanations, though). Not sure how I missed such a large hole in my explanation in the original comment, but still, glad I asked.

For people in Boston, I made a straw poll to gauge community sentiment on this question: https://forms.gle/5BJEG5fJWTza14eL9

I assume this is referring to the ancient fable "The Ant and the Grasshopper", which is about what we would today call time preference. In the original, the high-time-preference grasshopper starves because it didn't spend the summer stockpiling food for winter, while the low-time-preference ant survives because it did. Of course, alternate interpretations have been common since then.

1Gerald Monroe1y
Yes, because the grasshopper in an alternate history does the ant strategy and gets eaten by a bird before it can enjoy surviving the winter. Or it huddles in misery all winter, eating stored food, only to die of old age right before spring.
2Gretta Duleba1y
Yes, that's correct, I was referring to the fable. I should probably have included a broader hint about that.

Boston

Saturday, December 17; doors open at 6:30, Solstice starts at 7:15
69 Morrison Ave., Somerville, MA 02144

RSVPs appreciated for planning purposes: https://www.facebook.com/events/3403227779922411

Let us know in advance if you need to park onsite (it's accessible by public transportation). We're up a flight of stairs.

6sid1y
I think the date is Dec 17 per the Facebook event?

As someone who was very unhappy with last year's implementation and said so (though not in the public thread), I think this is an improvement and I'm happy to see it. In previous years, I didn't get a code, but if I'd had one I would have very seriously considered using it; this year, I see no reason to do that.

I do think that, if real value gets destroyed as a result of this, then the ethical responsibility for that loss of value lies primarily with the LW team, and only secondarily with whoever actually pushed the button. So if the button got pushed and ... (read more)

I'm glad you're happier with this year's version!

I'm not sure I'd say primarily/secondarily, probably I'd guess more at 50-50 (that might be the Shapley attribution?) between LessWrong and the pusher, if someone pushes the button. But overall agree LW gets a bunch of culpability.
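The 50-50 guess does match a direct Shapley computation: if the harm occurs only when both the platform (providing the button) and the pusher act, each party's average marginal contribution over the orderings is half the total. A brief illustrative sketch of that two-player model (hypothetical names and a unit harm, purely for illustration):

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orderings."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Harm of 1 occurs only if both the platform and the pusher act.
harm = lambda coalition: 1.0 if {"LW", "pusher"} <= coalition else 0.0
print(shapley_values(["LW", "pusher"], harm))  # {'LW': 0.5, 'pusher': 0.5}
```

With only two players whose actions are both necessary, the symmetry makes the even split immediate; the computation generalizes to messier attributions.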

So this wound up going poorly for me for various reasons. I ultimately ended up not doing the fast, and have been convinced that I’m not going to be able to in the future either, barring unanticipated changes in my mental-health situation. Other people are going to be in a different situation and that seems fine. But there are a couple community-level things that I feel ought to be expressed publicly somewhere, and this is where they're apparently allowed, so:

First, it's not a great situation if there are like three rationalist holidays and one of them is ... (read more)

9David Hornbein2y
The solution to your first problem may not be easy, but it is obvious: those who want community holidays with different emphasis and/or more variety of holidays can create those holidays. The culture belongs to those who put in the work to create it, both in practice and in justice. This goes double if you're correct that "we currently don’t have enough rationalist holidays and people are desperate for more" (which I have no independent opinion on).

This strikes me as a purely semantic question regarding what goals are consistent with an agent qualifying as "friendly".

2RomanS2y
I think that regardless of how we define "Friendly", an advanced enough Friendly AGI might sometimes take actions that will be perceived as hostile by some humans (or even all humans). This makes it much harder to distinguish the actions of:

  • a rogue AGI
  • a Friendly AGI that failed to preserve its Friendliness
  • a Friendly AGI that remains Friendly
4Charlie Sanders2y
Robin’s sense of honor would probably prevent him from litigating this, but that absolutely would not hold up in court.

Correction: The annual Petrov Day celebration in Boston has never used the button.

2habryka2y
Fixed!

I've talked to some people who locked down pretty hard pretty early; I'm not confident in my understanding but this is what I currently believe.

I think characterizing the initial response as over-the-top, as opposed to sensible in the face of uncertainty, is somewhat the product of hindsight bias. In the early days of the pandemic, nobody knew how bad it was going to be. It was not implausible that the official case fatality rate for healthy young people was a massive underestimate.

I don't think our community is "hyper-altruistic" in the Strangers Drowning... (read more)

1[comment deleted]3y
4mingyuan3y
This is my favorite take/summary. Author endorses.

Docker is not a security boundary.

2hamnox3y
It is not, but I do see people treating it that way.

Eh, if you read the raw results most are pretty innocuous.

Not at the scale that would be required to power the entire grid that way. At least, not yet. This is of course just one study (h/t Vox via Robert Wiblin) but provides at least a rough picture of the scale of the problem.

7Davidmanheim3y
This is disingenuous, I think. Of course they don't exist at the necessary scale yet, because the market is small. If the market grew, and was profitable, scaling would be possible. Rare earths aren't rare enough to be a real constraint; we'd just need to mine more of them. The only thing needed would be to make more of things we know how to make. (And no, that wouldn't happen, because the new tech being developed would get developed far faster, and used instead.)
0MikkW3y
Sorry, I didn't mean to imply that there isn't a current practical problem with solar / wind; my reason for my previous post is that I read Christian's statement as implying that it is fundamentally physically impossible to rely on solar without being chronically exposed to outages, which simply isn't true. It is true, though, that we still need to develop our technology and infrastructure to accommodate the dynamics of solar power. To be clear, I am in favour of using nuclear power for precisely this reason, although it also seems that the problems with renewables will be taken care of by the free market fairly quickly as renewables make up a larger proportion of our energy consumption.

Cross-posting from Facebook:

Any policy goal that is obviously part of BLM's platform, or that you can convince me is, counts. Police reform is the obvious one but I'm open to other possibilities.

It's fine for "heretics" to make suggestions, at least here on LW where they're somewhat less likely to attract unwanted attention. Efficacy is the thing I'm interested in, with the understanding that the results are ultimately to be judged according to the BLM moral framework, not the EA/utilitarian one.

Small/limited returns are okay if they're the best that can b

... (read more)
It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes.

I don't have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community's trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it we... (read more)

2habryka4y
Cool, will transfer ownership and make this a draft

This looks like a duplicate.

2habryka5y
I will move it back to drafts

Nit: I think this game is more standardly referred to in the literature as the "traveler's dilemma" (Google seems to return no relevant hits for "almost free lunches" apart from this post).

2Stuart_Armstrong5y
That's useful; I added a link to the other game in the main text (as far as I can tell, I came up with this independently).

Irresponsible and probably wrong narrative: Ptolemy and Simplicius and other pre-modern scientists generally believed in something like naive realism, i.e., that the models (as we now call them) that they were building were supposed to be the way things really worked, because this is the normal way for humans to think about things when they aren't suffering from hypoxia from going up too many meta-levels, so to speak. Then Copernicus came along, kickstarting the Scientific Revolution and with it the beginnings of science-vs.-religion conflict, spurrin... (read more)

2Raemon5y
Thanks, that at least sounds like a plausible narrative and I understand what you meant better.

Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.

6Raemon5y
I'm not sure I parsed this comment thread, wondering if you could explain in a bit more detail what you think happened?

This essay argues against the idea of "saving the phenomenon", and suggests that the early astronomers mostly did believe that their models were literally true. Which rings true to me; the idea of "it doesn't matter if it's real or not" comes across as suspiciously modern.

8Richard_Ngo5y
Hmm, interesting. It doesn't discuss the Galileo affair, which seems like the most important case where the distinction is relevant. Nevertheless, in light of this, "geocentric models with epicycles had always been in the former category" is too strong and I'll amend it accordingly.

For EAs and people interested in discussing EA, I recommend the EA Corner Discord server, which I moderate along with several other community members. For a while there was a proliferation of several different EA Discords, but the community has now essentially standardized on EA Corner and the other servers are no longer very active. Nor is there an open EA chatroom with comparable levels of activity on any other platform, to the best of my knowledge.

I feel that we've generally done a good job of balancing access needs associated with different levels... (read more)

The Slate Star Codex sidebar is now using localStartTime to display upcoming meetups, fixing a longstanding off-by-one bug affecting displayed dates.

You probably want to configure this such that anyone can read and subscribe but only you can post.

I don't feel like much has changed in terms of evaluating it. Except that the silliness of the part about cryptocurrency is harder to deny now that the bubble has popped.

I linked this article in the EA Discord that I moderate, and made the following comments:

Posting this in #server-meta because it helps clarify a lot of what I, at least, have struggled to express about how I see this server as being supposed to work.
Specifically, I feel pretty strongly that it should be run on civic/public norms. This is a contrast to a lot of other rationalsphere Discords, which I think often at least claim to be running on guest norms, though I don’t have a super-solid understanding of the social dynamics involved.
The standard failure mo
... (read more)

I fear that this system doesn't actually provide the benefits of a breadth-first search, because you can't really read half a comment. If I scroll down a comment page without uncollapsing it, I don't feel like I got much of a picture of what anyone actually said, and also repeatedly seeing what people are saying cut off midsentence is really cognitively distracting.

Reddit (and I think other sites, but on Reddit I know I've experienced this) makes threads skimmable by showing a relatively small number of comments, rather than a small sni... (read more)

8Raemon5y
Hmm, nod. The original version of the truncation did actually do something more similar to that, but it came with a different set of technical challenges and annoyances, and at the time it had seemed to me that the truncation system would be less annoying. (I thought "not being able to see comments at all", and thus not knowing what the thread structure even looked like, would be worse.) I am curious whether the various people who've expressed dislike of the abridgment would feel fine with a version that shows fewer comments rather than less of each comment.

You don't currently expand comments that are positioned below the clicked comment but not descendants of it.

4Raemon5y
Oh, yeah that makes sense.

Idea: If somebody has expanded several comments, there's a good chance they want to read the whole thread, so maybe expand all of them.

4Raemon5y
Clicking on a comment expands all comments below it. We don't currently expand comments above it because that changes your screen position, which can be disorienting, although I could imagine changing my mind about that.

Would you mind saying in non-metaphorical terms what you thought the point was? I think this would help produce a better picture of how hard it would have been to make the same point in a less inflammatory way.

Ecosystems, and organisms in them, generally don't care about stuff that can't be turned into power-within-the-ecosystem. Box two exists, but unless the members of box one can utilize box two for e.g. information/computation/communication, it doesn't matter to anyone in box one.

Other places where this applies:

  • Highly competitive industries won't care about externalities or the long-term future. Externalities and the future are in box two. They might not even be modeled.
  • Young people have a personal interest in making their life better when they're olde
... (read more)

There's an argument to be made that even if you're not an altruist, that "societal default" only works if the next fifty years play out more-or-less the same way the last fifty years did; if things change radically (e.g., if most jobs are automated away), then following the default path might leave you badly screwed. Of course, people are likely to have differing opinions on how likely that is.

4Kaj_Sotala5y
Does LW-style rationality give you any major advantage in figuring out what to do as a consequence of major automation, though?

No, we didn't participate in this in Boston. Our Petrov Day is this Wednesday, the actual anniversary of the Petrov incident.

Some disconnected thoughts:

In Boston we're planning Normal Mode. (We rejected Hardcore Mode in previous years, in part because it was a serious problem for people who underwent significant inconvenience to be able to attend.)

I'm good at DevOps and might be able to help the Seattle folks make their app more available if they need it.

I happened to give a eulogy of sorts for Stanislav Petrov last year.

I'm currently going through the latest version of the ritual book and looking for things to nitpick, since I know that a few points (notably the ... (read more)

9Raemon5y
The Seattle and UK groups apparently celebrated today, but with a somewhat different scenario (instead of going home quietly, they each had cakes, and if they got nuked they had to burn their cake instead of eating it. At least this is my present understanding).

Thanks for this update!

I have a question as a donor, that I regret not thinking of during the fundraising push. Could you identify a few possible future outcomes, that success or failure on could be measured within a year, that if achieved would indicate that REACH was probably producing significant value from an EA perspective (as opposed to from a community-having-nice-things perspective)? And could you offer probability estimates on those outcomes being achieved?

I certainly understand if this would be overly time-consuming, but I'd feel comfortable... (read more)

7stardust5y
Just now saw the edit. This seems plausibly already the case? There were no EA meetups in Berkeley before I started REACH. Most of the EA meetups draw 8-25 people and it's not always the same people (there's a core set of 4 or so who are usually there). This I'm not sure about, happy to hear any anecdotes if this is the case for anyone! I would say at least 5 people have become significantly more involved in the community than they would've in the counterfactual. Some of them are at least applying for jobs in EA priority areas.

I am not very good at making up numbers in this way and have stopped trying. I am not a superforecaster :) So I'm not going to make any actual predictions, but I'll give some categories where I see potential for impact.

First, let me give an overview of what has been achieved so far based on the metrics I have access to:

  • number of visits (909 logged visits between 4/26 and 8/30, with about 300 unique visitors -- this is a lossy collection strategy, not everyone who comes in actually signs in, I'd guess at most 80% of visits get logged, but ag
... (read more)

Then I think the post should have waited until those arguments were up, so that the discussion could be about their merits. The problem is the "hyping it up to Be An Internet Event", as Ray put it in a different subthread; since the thing you're hyping up is so inflammatory, we're left in the position of having arguments about it without knowing what the real case for it is.

… since the thing you’re hyping up is so inflammatory, we’re left in the position of having arguments about it without knowing what the real case for it is.

Are we, though? Must we have arguments about it? What reason is there for us not to say something like, “this raises red flags but we’ll consider and discuss it properly after it takes place; make sure to document it properly and exhaustively, to signal to us all that you are acting in good faith”, and then say no more for now?

I think it's an antisocial move to put forth a predictably inflammatory thesis (e.g., that an esteemed community member is a pseudo-intellectual not worth reading) and then preemptively refuse to defend it. If the thesis is right, then it would be good for us to be convinced of it, but that won't happen if we don't get to hear the real arguments in favor. And if it's wrong, then it should be put to bed before it creates a lot of unproductive social conflict, but that also won't happen as long as people can claim that we haven'... (read more)

2Evan_Gaensbauer5y
I was clarifying my intentions, but my friend, who is the main draw for the event, does intend to defend the thesis that Scott is a pseudo-intellectual not worth reading. The real arguments will be up in a little over a week, though.
3Arkanj3l5y
Meetup hasn't happened yet. Should the refutations be given on your time frame?

Unless a comment was edited or deleted before I got the chance to read it, nobody but you has used the word "violence" in this thread. So I don't understand how an argument about the definition of "violence" is in any way relevant.

2Evan_Gaensbauer5y
I was contrasting it with Ben's use of the word 'peaceful,' and making some background assumptions as to what the context for using the word was (Said remarked on the odd diction). Apparently those assumptions were wrong.

Hmmm. Do you think that's a bug, or a feature?

LessWrong seems like a bit of a weird example since CFAR's senior leadership were among the people pushing for it in the first place. IIRC even people working at EA meta-orgs have encountered difficulties and uncertainty trying to personally fund projects through the org.

I've just pledged $40 per month.

I could afford to pay more. I'd do so if I ever actually visited REACH, but I live thousands of miles away (and did give a small donation when I visited for the pre-EA Global party, and will continue to do so if I ever come back). I'd also pay more if I were more convinced that it was a good EA cause, but the path from ingroup reinforcement to global impact is speculative and full of moral hazard and I'm still thinking about it.

My pledge represents a bet that REACH will ultimately make a difference in my ... (read more)

This is a problem I've been thinking about for awhile in a broader EA context.

It's claimed fairly widely that EA needs a lot more smallish projects, including ones that aren't immediately legible enough to be fundable by large institutional donors (e.g., because the expected value depends on assessments of the competence and value alignment of the person running the project, which the large institutional funders can't assess). It's also claimed (e.g., by Nick Beckstead of OpenPhil at EA Global San Francisco 2017) that smallish earn... (read more)

9Raemon6y
I think to some extent we have this, but it depends upon local trust networks that aren't always legible to newer people. (For example, LessWrong is funded through CFAR)

Re: local events: Although I haven't checked this with Scott, my default assumption for the SSC sidebar is that keeping it free of clutter and noise is of the highest importance. As such, I'm only including individual events that a human actually took explicit action to advertise, to prevent the inclusion of "weekly" events from groups that have since flaked or died out.

(This is also why the displayed text only includes the date and Google-normalized location, to prevent users from defacing the sidebar with arbitrary text.)

LW proper may have different priorities. Might be worth considering design options here for indicating how active a group is.

7habryka6y
Yeah, it would be great to somehow know how active a group is. Though it's definitely non-trivial to make that happen. Right now all groups were created in the last few months, so you know at least that if they are dead, they haven't been dead for long. My model is that showing a group that is actually in the same city but might or might not be active is more important than showing far away events that are guaranteed to be real. In both cases the user will be disappointed, but I think your hit rate is going to be higher for the groups, since there are so many more of them, and most events don't get advertised. I don't think this means you have to show more clutter or anything, as long as you can get the user's rough location (via IP if necessary) you should be able to show nearby groups with nothing else but the location.

So correct me if I'm wrong here, but the way timezones seem to work is that, when creating an event, you specify a "local" time, then the app translates that time from whatever it thinks your browser's time zone is into UTC and saves it in the database. When somebody else views the event, the app translates the time in the database from UTC to whatever it thinks their browser's time zone is and displays that.

I suppose this will at least sometimes work okay in practice, but if somebody creates an event in a time zone other than the ... (read more)
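The round-trip described above can be sketched in a few lines (hypothetical function names; the real app infers the browser's zone automatically rather than taking it as an explicit parameter):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def save_event_time(local_time: datetime, creator_zone: str) -> datetime:
    """Interpret a naive wall-clock time in the creator's zone; store as UTC."""
    aware = local_time.replace(tzinfo=ZoneInfo(creator_zone))
    return aware.astimezone(timezone.utc)

def display_event_time(stored_utc: datetime, viewer_zone: str) -> datetime:
    """Translate the stored UTC time into the viewer's zone for display."""
    return stored_utc.astimezone(ZoneInfo(viewer_zone))

# A 7pm Boston event is stored as 23:00 UTC (EDT is UTC-4 in September)...
stored = save_event_time(datetime(2018, 9, 26, 19, 0), "America/New_York")
print(stored.isoformat())  # 2018-09-26T23:00:00+00:00
# ...and shown to a west-coast viewer as 4pm (PDT is UTC-7).
shown = display_event_time(stored, "America/Los_Angeles")
print(shown.isoformat())   # 2018-09-26T16:00:00-07:00
```

The failure mode in question is the first step: if the browser's detected zone differs from the event's actual zone, the wrong wall-clock time gets baked into the stored UTC value.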

9habryka6y
Yep, that is how it works. This seemed to make the most sense for potential remote events, i.e. when people want to Skype into an event. I think it could totally make sense to instead display the time in the local timezone, but it's not obviously better to me and would require a bunch of engineering effort. But if someone else feels strongly about it, I would definitely merge a PR and help getting them set up with the codebase and everything to make that change happen. Also if enough people feel that the other way would be better, then I can get around to fixing it. I do think that in general, focusing on listing events over groups was a mistake. When I get around to fixing our UI, I think the community page should feature groups a lot more prominently than events, which also mostly sidesteps this problem.

Also, two other questions:

  • Is there any way to link the new event form to have a type box prechecked? How hard is this to implement in Vulcan?
  • How do time zones of events work?
2habryka6y
It would be pretty trivial to implement individual prefilled options via the URL. Solving the problem generally though might turn out to be harder (i.e. generally allowing you to prefill forms based on URL parameters and available fields). Time zones are currently all saved in the DB as UTC+0. So that's what your queries should translate to, though I haven't needed to test this extensively. On a higher level: I think it would be more valuable to have a sidebar of nearby local groups instead of nearby events. Most groups don't get around to publishing all of their events on LessWrong, and while this is something I want to fix in the long-run, right now we only have a single upcoming SSC meetup listed, even though I know of many SSC groups that have weekly meetups and are listed.

Thanks. I'd originally written up a wishlist of server-side functionality here, but at this point I'm thinking maybe I'll just do the sorting and filtering on the client, since this endpoint seems able to provide a superset of what I'm looking for. It's less efficient and definitely an evil hack, but it means not needing server-side code changes.

I'll note that filter: "SSC" doesn't work in the GraphiQL page; events that don't match the filter still get returned.

More generally, the way the API works now basi... (read more)

2habryka6y
Ah, sorry. I made a small error in the query above. Here is the corrected one (I wrote "filter" instead of "filters"): https://www.lesswrong.com/graphiql?query=%7BPostsList(terms%3A%20%7Bview%3A%20%22nearbyEvents%22%2C%20filters%3A%20%22SSC%22%2C%20lat%3A%200%2C%20lng%3A%200%7D)%20%7B%0A%20%20_id%0A%20%20createdAt%0A%20%20title%0A%20%20mongoLocation%0A%20%20location%0A%20%20types%0A%7D%7D
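Decoded from the URL above, the corrected query reads (a transcription for readability, not new content):

```
{PostsList(terms: {view: "nearbyEvents", filters: "SSC", lat: 0, lng: 0}) {
  _id
  createdAt
  title
  mongoLocation
  location
  types
}}
```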
1Taymon Beal6y
Also, two other questions: * Is there any way to link the new event form to have a type box prechecked? How hard is this to implement in Vulcan? * How do time zones of events work?

I think I agree that if you see the development of explicit new norms as the primary point, then Facebook doesn't really work and you need something like this. I guess I got excited because I was hoping that you'd solved the "audience is inclined towards nitpicking" and "the people I most want to hear from will have been prefiltered out" problems, and now it looks more like those aren't going to change.

4Raemon6y
My expectation is that the new rules will result in less nitpicking (since authors will have a number of tools to say "sorry, this comment doesn't seem to be pulling its weight"), although you may have to learn which authors enforce which sorts of norms to figure it out. I'm not 100% sure which things are prefiltering out the people you care about, so am not sure whether this will make a difference.

I guess there's an inherent tradeoff between archipelago and the ability to shape the culture of the community. The status quo on LW 2.0 leans too far towards the latter for my tastes; the rationalist community is big and diverse and different people want different things, and the culture of LW 2.0 feels optimized for what you and Ben want, which diverges often enough from what I want that I'd rather post on Facebook to avoid dealing with that set of selection effects. Whether you should care about this depends on how many other people are in a s... (read more)

2Raemon6y
Quick note: this isn't what I mean by archipelago (see other comment)
4ESRogs6y
Jeff copies those comments by hand. Source: some facebook thread that I can't find right now. EDIT: Looks like I am wrong: https://www.jefftk.com/p/external-comment-integration.
9habryka6y
I've been thinking about that, though I am somewhat worried about the legality of that integration (it's unclear whether you can copy people's content like that without their direct consent, or what would count as consent), and also think it removes most of the levers to shape the culture of a community. For example, it seems clear to me that the rationality community could not have formed its culture on Facebook, though it might be able to preserve its culture on Facebook. The forces towards standard online discussion norms on Facebook are quite high (for example, you can't display the moderation norms easily accessible below a comment, you can't reduce the attention a comment gets by downvoting it, you can't collapse a comment by default, etc.)

Thanks for articulating why Facebook is a safer and more pleasant place to comment than LW. I tried to post pretty much this on a previous thread but wasn't able to actually articulate the phenomenon so didn't say anything.

That being said, I still feel like I'd rather just post on Facebook.

There are two specific problems with Facebook as a community forum that I'm aware of. The first is that the built-in archiving and discovery tools are abysmal, because that's not the primary use case for the platform. Fortunately, we know there's... (read more)

2Wei Dai5y
I'm curious which worldviews and approaches you saw as over-represented, and which are the ones you most wanted to hear from, and whether anything has changed since you wrote this comment. Are your friends here now? If not, why?
Other than that, Facebook seems to have the whole "archipelago" thing pretty much solved.

I actually think there's a deep sense in which Facebook has not accomplished archipelago, and an additional deep sense in which they have not accomplished public archipelago.

Archipelago doesn't just mean "you've filter-bubbled yourself such that you only hang out with likeminded people." It means you've filtered yourself and then used that filtering to enforce norms that you wouldn't be able to enforce otherwise, allo... (read more)

There are two specific problems with Facebook as a community forum that I'm aware of. The first is that the built-in archiving and discovery tools are abysmal, because that's not the primary use case for the platform. Fortunately, we know there's a technical solution to this, because Jeff Kaufman implemented it on his blog.

I don't understand this response. That there exists a solution doesn't mean that there exists a solution that 1) is easy to use and 2) that people will actually use. One of the many advantages of hosting a conver... (read more)
