Ofer

Send me anonymous feedback: https://docs.google.com/forms/d/e/1FAIpQLScLKiFJbQiuRYBhrBbVYUo_c6Xf0f8DN_blbfpJ-2Ml39g1zA/viewform

Any type of feedback is welcome, including arguments that a post/comment I wrote is net negative.


Some quick info about me:

I have a background in computer science (BSc+MSc; my MSc thesis was in NLP and ML, though not in deep learning).

You can also find me on the EA Forum.

Feel free to reach out by sending me a PM. (Update: I've turned off email notifications for private messages. If you send me a time-sensitive PM, consider also pinging me about it via the anonymous feedback link above.)

Comments

I'm interested in hearing what you think the counterfactuals to impact shares/retroactive funding in general are, and why they are better.

The alternative to launching an impact market is to not launch an impact market. Consider the set of interventions that get funded if and only if an impact market is launched. Those are interventions that no classical EA funder decides to fund in a world without impact markets, so they seem unusually likely to be net-negative. Should we move EA funding towards those interventions just because there's a chance that they'll end up being extremely beneficial? (That is the expected result of launching a naive impact market.)

I expect prosocial projects to still be launched primarily for prosocial reasons, and funding to be a way of enabling them to happen and publicly allocating credit. People who are only optimizing for money and don't care about externalities have better ways available to pursue their goals, and I don't expect that to change.

It seems that according to your model, it's useful to classify (some) humans as either:

(1) humans who are only optimizing for money, power and status; and don't care about externalities.

(2) humans who are working on prosocial projects primarily for prosocial reasons.

If your model is true, how come the genes that cause humans to be type (1) did not completely displace the genes that cause humans to be type (2) throughout human evolution?

According to my model (without claiming originality): Humans generally tend to have prosocial motivations, and people who work on projects that appear prosocial tend to believe they are doing it for prosocial reasons. But usually, their decisions are aligned with maximizing money/power/status (while believing that their decisions are purely due to prosocial motives).

Also, according to my model, it is often very hard to judge whether a given intervention for mitigating x-risks is net-positive or net-negative (due to an abundance of crucial considerations). So subconscious optimizations for money/power/status can easily end up being extremely harmful.

If you describe the problem as "this encourages swinging for the fences and ignoring negative impact", impact shares suffer from it much less than many parts of effective altruism. Probably below average. Impact shares at least have some quantification and feedback loop, which is more than I can say for the constant discussion of long tails, hits based giving, and scalability.

But a feedback signal can be net-negative if it creates bad incentives (e.g. an incentive to treat an extremely harmful outcome that a project could end up causing as if that potential outcome were neutral).
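As a toy illustration (the numbers below are made up and the variable names are mine): consider a project with a small chance of a large benefit and a large chance of net harm. A funder that only rewards realized upside effectively scores the harmful outcome as zero, so the project looks attractive even though it is net-negative in expectation:

```python
# Toy numbers for a hypothetical risky project (illustrative assumptions, not data).
p_good, v_good = 0.10, +100.0   # 10% chance of a large benefit
p_bad,  v_bad  = 0.90, -30.0    # 90% chance of net harm

# The true expected value counts the harm.
true_ev = p_good * v_good + p_bad * v_bad      # = -17.0 (net-negative)

# A funder that only pays for realized upside implicitly replaces v_bad with 0,
# so the project's expected payout looks positive.
payout_ev = p_good * v_good + p_bad * 0.0      # = +10.0

print(true_ev, payout_ev)
```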

(To be clear, my comment was not about the funding of your specific project but rather about the general funding approach that is referred to in the title of the OP.)

How do you avoid the problem of incentivizing risky, net-negative projects (that have a chance of ending up being beneficial)?

You wrote:

Ultimately we decided that impact shares are no worse than the current startup equity model, and that works pretty well. “No worse than startup equity” was a theme in much of our decision-making around this system.

If the idea is to use EA funding and fund things related to anthropogenic x-risks, then we probably shouldn't use a mechanism that yields similar incentives as "the current startup equity model".


The smooth graphs seem like good evidence that there are much smoother underlying changes in the model, and that the abruptness of the change is about behavior or evaluation rather than what gradient descent is learning.

If we're trying to predict abrupt changes in the accuracy of output token sequences, the per-token log-likelihood can be a useful signal. What's the analogous signal when we're talking about abrupt changes in a model's ability to deceptively conceal capabilities, hack GPU firmware, etc.? What log-likelihood plots can we use to predict those types of abrupt changes in behavior?
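For concreteness, here is a minimal sketch of the kind of smooth signal I have in mind for the output-token case (PyTorch; the function names and shapes are my own, not from any particular eval suite). Exact-match accuracy over a sequence is all-or-nothing, while the mean per-token log-likelihood of the reference answer tends to change smoothly with scale:

```python
import torch
import torch.nn.functional as F

def per_token_log_likelihood(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (seq_len, vocab_size) model outputs; targets: (seq_len,) reference token ids.
    Returns the log-probability the model assigns to each reference token."""
    log_probs = F.log_softmax(logits, dim=-1)                       # normalize over the vocabulary
    return log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # pick out the target tokens

def exact_match(logits: torch.Tensor, targets: torch.Tensor) -> float:
    # 1.0 only if every token is the argmax: a sharp, all-or-nothing metric.
    return float((logits.argmax(dim=-1) == targets).all())

def mean_log_likelihood(logits: torch.Tensor, targets: torch.Tensor) -> float:
    # Can improve smoothly as the model gets better, even while exact_match stays at 0.
    return float(per_token_log_likelihood(logits, targets).mean())
```

The question above is what plays the role of mean_log_likelihood for capabilities like deception or firmware exploitation, where there is no reference output sequence to score.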

Does everyone who works at OpenAI sign a non-disparagement agreement? (Including those who work on governance/policy?)

Yes. To be clear, the point here is that OpenAI's behavior in that situation seems similar to how for-profit companies sometimes try to capture regulators by paying the regulators' family members. (See 30 seconds from this John Oliver monologue as evidence that such tactics are not rare in the for-profit world.)

Another bit of evidence about OpenAI that I think is worth mentioning in this context: OPP recommended a grant of $30M to OpenAI in a deal that involved OPP's then-CEO becoming a board member of OpenAI. OPP hoped that this would allow them to make OpenAI improve its approach to safety and governance. Later, OpenAI appointed both the CEO's fiancée and the fiancée's sibling to VP positions.


Sorry, that text does appear in the linked page (in an image).


The Partnership may never make a profit

I couldn't find this quote in the page that you were supposedly quoting from. The only Google result for it is this post. Am I missing something?

[This comment is no longer endorsed by its author]