lincolnquirk

Dating profiles from first principles: heterosexual male profile design

Regardless of the precise mechanism, Tinder almost certainly shows more attractive people more often. If it didn't, it would have a retention problem: lots of people swipe on Tinder to fantasize about matching with hot people, and they wouldn't see enough hot people to keep them going. Most likely, Tinder has settled on a ratio of "hot people" to "people in your league" to show you, tuned to keep you swiping.

Given that this incentive exists, and that Tinder et al. would likely act on it, it makes sense to make your profile more broadly attractive so you get shown to more people.
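
For illustration only, here is a minimal sketch of the kind of ratio-mixing speculated about above. Tinder's actual algorithm is not public; every name and number here is invented.

```python
import random

def build_swipe_deck(hot_profiles, in_league_profiles,
                     hot_ratio=0.25, deck_size=20):
    """Hypothetical feed mixer: blend a fixed fraction of very
    attractive profiles into each deck to keep users swiping.
    Purely illustrative, not Tinder's real ranking system."""
    n_hot = round(deck_size * hot_ratio)
    deck = random.sample(hot_profiles, n_hot)
    deck += random.sample(in_league_profiles, deck_size - n_hot)
    random.shuffle(deck)
    return deck

# Example: 5 of the 20 cards shown would be "out of your league".
deck = build_swipe_deck([f"hot{i}" for i in range(50)],
                        [f"peer{i}" for i in range(200)])
```

If something like this is running, raising your profile's general attractiveness increases how often you land in other users' "hot" slots, which is the leverage the comment above points at.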

Book Review: A Pattern Language by Christopher Alexander

Use the table of contents / "summary of the language" section.

For your project I would recommend skipping ahead to pattern 28, reading from there, and skipping any patterns that don't seem relevant.

How to think about and deal with OpenAI

Yes: a far higher percentage of OpenAI staff reads this forum than of the other orgs you mentioned. In some sense OpenAI is friends with LW, in a way that is not true for the others.

How to think about and deal with OpenAI

What should be done instead of a public forum? I don't necessarily think there needs to be a "conspiracy", but I do think it's a heck of a lot better to have one-on-one meetings with people to convince them of things. At my company, when sensitive things need to be decided or acted on, a bunch of Slack DMs fly around until one person is clearly the owner of the problem; they end up in charge of having the necessary private conversations (and keeping stakeholders in the loop). Could this work with LW and OpenAI? I'm not sure.

How to think about and deal with OpenAI

Ineffective, because the people arguing on the forum lack knowledge of the situation. They don't understand OpenAI's incentive structure, plans, etc., so any plans they put forward will in all likelihood be useless to OpenAI.

Risky, because (some combination of):

  • it is emotionally difficult to hear that one of your friends is plotting against you (and OpenAI is made up of humans, many of whom came out of this community)
    • it's especially hard if your friend is misinformed and plotting against you; and I think it likely that the OpenAI people believe Yudkowsky/LW commentators are misinformed or at least under-informed (and they are probably right about this)
  • to manage that emotional situation, you may want to declare war back on them, cut off contact, etc.; any of these actions, if adopted as internal policy, would damage the future relationship between OpenAI and the LW world
  • OpenAI has already had a ton of PR issues over the last few years, so they probably have a well-developed muscle for dealing internally with bad PR, which this would fall under. If so, that muscle probably looks like internal announcements along the lines of "ignore those people / stop listening to them, they don't understand what we do, we're managing all these concerns and those people are over-indexing on them anyway"
  • the evaporative cooling effect may eject some people who were already on the fence about leaving, but those who remain will be more committed to the original mission, more "anti-LW", and less inclined to listen to us in the future
  • hearing bad arguments makes one more resistant to similar (but better) arguments in the future

I want to state for the record that I think OpenAI is sincerely trying to make the world a better place, and I appreciate their efforts. I don't have a settled opinion on the sign of their impact so far.

How to think about and deal with OpenAI

I'd like to put in my vote for "this should not be discussed in public forums". Whatever is happening, the public forum debate will have no impact on it; but it does create the circumstances for a culture war that seems quite bad.

Common knowledge about Leverage Research 1.0

When I learned it from Geoff in 2011, they were recommending yEd Graph Editor. The process, roughly: write things you do or want to do as nodes, then connect them with "achieves or helps to achieve" edges (e.g., going to work achieves making money, which achieves other things you want).
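
As a rough text analogue of that diagram: the "go to work" → "make money" edge is from the comment above; the downstream goals and the helper function are my own hypothetical additions.

```python
# Goal-factoring graph: nodes are actions/goals, directed edges mean
# "achieves or helps to achieve". Only the first edge comes from the
# comment; the rest is filled in hypothetically.
achieves = {
    "go to work": ["make money"],
    "make money": ["pay rent", "fund hobbies"],  # hypothetical downstream goals
}

def trace(node, graph, depth=0):
    """Print everything a node ultimately helps to achieve."""
    print("  " * depth + node)
    for goal in graph.get(node, []):
        trace(goal, graph, depth + 1)

trace("go to work", achieves)
```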

Common knowledge about Leverage Research 1.0

I believe this. Aversion factoring is a separate insight from goal factoring.

What should one's policy regarding dental xrays be?

XKCD's radiation chart (https://xkcd.com/radiation/) puts a dental X-ray at 5 μSv: half the average daily background dose (10 μSv), and 1/8 of a cross-country flight (40 μSv). To me this means the radiation exposure is quite irrelevant in the grand scheme of things.
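
A quick sanity check of those ratios, using only the figures quoted from the chart:

```python
# Doses in microsieverts (μSv), from https://xkcd.com/radiation/
dental_xray = 5
daily_background = 10
cross_country_flight = 40

print(dental_xray / daily_background)      # 0.5   (half a day of background)
print(dental_xray / cross_country_flight)  # 0.125 (1/8 of one flight)
```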

If this were false, it would presumably be because dental X-rays are especially harmful in some way that isn't just "because of radiation".

Outline of Galef's "Scout Mindset"

I haven't read Scout Mindset yet, but I've listened to Julia's podcast interviews about it, and I have read the other books Rob mentions in that paragraph.

The reason I nodded when Rob wrote that is that Julia's memetics are better: her ideas are written in a way that sticks in the mind, and thus spread more easily. I don't think any of those other sources are bad -- in fact I got more from them than I expect to from Scout Mindset -- but Scout Mindset is more practically oriented (and optimized for today's political climate) in a way those other books are not.

It also operates at a different, earlier level in the "EA Funnel": the level at which you can make people realize that more is possible. Those other books already require someone to be asking "how can I Do Good Better?" before they'll pick them up.
