Larks

Comments

Larks

A bit dated, but have you read Robin's 2007 paper on the subject?

Prediction markets are low volume speculative markets whose prices offer informative forecasts on particular policy topics. Observers worry that traders may attempt to mislead decision makers by manipulating prices. We adapt a Kyle-style market microstructure model to this case, adding a manipulator with an additional quadratic preference regarding the price. In this model, when other traders are uncertain about the manipulator’s target price, the mean target price has no effect on prices, and increases in the variance of the target price can increase average price accuracy, by increasing the returns to informed trading and thereby incentives for traders to become informed.
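For intuition, here is a minimal sketch of the single-period Kyle (1985) setup the abstract builds on; the notation and the manipulator's quadratic term are my paraphrase rather than the paper's exact specification. The asset's value is $v \sim N(p_0, \Sigma_0)$; an informed trader who observes $v$ submits demand $x$; noise traders submit $u \sim N(0, \sigma_u^2)$; and the market maker sees only the total order flow $y = x + u$ and sets the price $p = E[v \mid y]$. The linear equilibrium is

$$x = \beta (v - p_0), \qquad p = p_0 + \lambda y, \qquad \beta = \frac{\sigma_u}{\sqrt{\Sigma_0}}, \qquad \lambda = \frac{\sqrt{\Sigma_0}}{2\sigma_u},$$

with the informed trader's expected profit $\tfrac{1}{2}\sigma_u\sqrt{\Sigma_0}$ increasing in the amount of noise trading. A manipulator who additionally values something like $-k(p - T)^2$ for a target $T$ that other traders are uncertain about trades in a way the market maker cannot distinguish from noise, so roughly speaking they act like extra $\sigma_u$: informed trading becomes more profitable, more traders have an incentive to become informed, and average price accuracy can rise, which is the result the abstract reports.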

Larks

Yes, sorry for being unclear. I meant to suggest that this argument implied that 'accelerate agents and decelerate planners' could be the desirable form of differential progress.

This post seems like it was quite influential. This is basically a trivial review to allow the post to be voted on.

Larks

I agree in general, but think the force of this is weaker in this specific instance because NonLinear seems like a really small org. Most of the issues raised seem to be associated with in-person work, and I would be surprised if NonLinear ever went above 10 in-person employees. So at most this seems like one order of magnitude of difference. Clearly the case is different for major corporations or orgs that directly interact with many more people.

Larks

I think there will be some degree to which clearly demonstrating that false accusations were made will ripple out into the social graph naturally (even with the anonymization), and will have consequences. I also think there are some ways to privately reach out to some smaller subset of people who might have a particularly good reason to know about this. 

If this is an acceptable resolution, why didn't you just let the problems with NonLinear ripple out into the social graph naturally?

Larks

If most firms have these clauses, one firm doesn't, and most people don't realise this, it seems possible that most people would end up with a less accurate impression of the firms' relative merits than if all firms had been subject to the same evidence-filtering effects.

In particular, it seems like this might matter for Wave if most of their hiring is from non-EA/LW people who are comparing them against random other normal companies.

Larks

I would typically aim for mid-December, in time for the American charitable giving season.

Larks

After writing an annual review of AI safety organisations for six years, I intend to stop this year. I'm sharing this in case someone else wants to take it on in my stead.

Reasons

  • It is very time-consuming and I am busy.
  • I have a lot of conflicts of interest now.
  • The space is much better funded by large donors than when I started. As a small donor, it seems like you either donate to:
    • A large org that OP/FTX/etc. support, in which case funging is ~total and you can probably just support any of them (see the toy example after this list).
    • A large org that OP/FTX/etc. reject, in which case there is a high chance you are wrong.
    • A small org OP/FTX/etc. haven't heard of, in which case I probably can't help you either.
  • Part of my motivation was to ensure I stayed involved in the community, but that is no longer a concern.
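To spell out the funging point with made-up numbers: suppose a large funder plans to top an org up to a $1m budget. If you give the org $100k, the funder gives $900k rather than $1m, so your donation effectively frees up $100k of the funder's money for whatever their marginal grant would otherwise have been. Conditional on an org being backstopped this way, your counterfactual impact is roughly the large funder's marginal use of funds, regardless of which backstopped org you picked.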

Hopefully it was helpful to people over the years. If you have any questions, feel free to reach out.
