Example in California:

I OBJECT to the use of my personal information, including my information on Facebook, to train, fine-tune, or otherwise improve AI.

I assert that my information on Facebook includes sensitive personal information as defined by the California Consumer Privacy Act: I have had discussions about my religious or philosophical beliefs on Facebook.

I therefore exercise my right to limit the disclosure of my sensitive personal information.

Despite any precautions by Meta, adversaries may later discover "jailbreaks" or other adversarial prompts that reveal my sensitive personal information. Therefore, I request that Meta not use my personal information to train, fine-tune, or otherwise improve AI.

I expect this objection request to be handled with due care, confidentiality, and information security.

Or you could have an LLM write it for you.

Example prompt:

Meta, Inc wants to train AI on my personal data. Its notice is as follows:

> You have the right to object to Meta using the information you’ve shared on our Products and services to develop and improve AI at Meta. You can submit this form to exercise that right.
> AI at Meta is our collection of generative AI features and experiences, like Meta AI and AI Creative Tools, along with the models that power them.
> Information you’ve shared on our Products and services could be things like:
> - Posts
> - Photos and their captions
> - The messages you send to an AI
> We do not use the content of your private messages with friends and family to train our AIs.
> We’ll review objection requests in accordance with relevant data protection laws. If your request is honored, it will be applied going forward.
> We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our Products and services. For example, this could happen if you or your information:
> - Appear anywhere in an image shared on our Products or services by someone who uses them
> - Are mentioned in posts or captions that someone else shares on our Products and services
> To learn more about the other rights you have related to information you’ve shared on Meta Products and services, visit our Privacy Policy. 

I live in [JURISDICTION]. Could you take on the role of a Dangerous Professional, as Patrick McKenzie would say, and help me draft an objection request under [JURISDICTION] law?

Example in UK / EU:

I OBJECT to the use of my personal data, including my data on Facebook, to train, fine-tune, or otherwise improve AI.

Against legitimate interest: I assert that Meta's processing of my personal data to train, fine-tune, or otherwise improve AI (hereafter: "to train AI") would violate the requirements of legitimate interest under the GDPR as follows:

  • Failing the "necessity" prong: OpenAI and Anthropic have successfully trained highly capable AI models without the use of my personal data. My personal data is therefore unnecessary to the training of AI by Meta.

  • Failing the "reasonable expectations" prong: much of my data on Facebook predates 30 November 2022, when OpenAI released ChatGPT, so I could not reasonably have expected that my personal data would be used to train AI. I had therefore been using Facebook with the reasonable expectation that my data would not be used to train AI.

I expect this objection request to be handled with due care, confidentiality, and information security.

Re: opting out of Facebook training AI on your data:

Fill in the form like a Dangerous Professional, as Patrick McKenzie would put it.

Gorton, G. (2018), *Financial Crises*, is a survey article. I thought its explanation of banking and financial crises as information shocks was enlightening.

Banking and financial crises as information shocks

Money, or bank notes (or similar on-demand debt liabilities of a bank), needs to be information-insensitive (and thus interchangeable: $1 at Bank A == $1 at Bank B) to facilitate exchange. Otherwise, uninformed agents (anyone who is not a banking professional) face adverse selection and haircuts on the bank debt they hold, and so withdraw from banking-facilitated commerce.

Banking crises are triggered by common-knowledge information releases (e.g. news broadcasts) that are unexpectedly bad (actual news worse than forecast by more than some threshold) (cites Gorton, 1988), which make "money at those particular banks" information-sensitive ($1 at those particular banks might not be worth $1).

Financial crises are slightly more general: the debt liabilities of companies in general go from being information-insensitive (interchangeable: usable as general collateral, with holders indifferent between different companies' debt) to information-sensitive (a particular company's debt might not be worth face value).

Banking crises

  • Frequency:

    • 147 banking crises in 1970–2011 (cites Laeven & Valencia, 2012)
  • Business cycle peaks and credit booms are only loosely linked to banking crises:

    • Only about 30% of banking crises were preceded by a credit boom (cites Laeven & Valencia, 2012)
    • Most credit booms are not followed by a banking crisis (cites Gorton & Ordonez, 2018)
      • Good booms (not followed by a banking crisis): Positive and lasting shock to Total Factor Productivity (TFP) and Labour Productivity (LP)
      • Bad booms (followed by a banking crisis): Positive but transient shock to TFP and LP

> If we had the ability to create one machine capable of centrally planning our current world economy, how much processing power/memory would it need to have? Interested in some Fermi estimates.

To which I would reply, this is AI-complete, at which point the AI would solve the problem by taking control of the future. That’s way easier than actually solving the Socialist Calculation Debate.


As a data point, Byrne Hobart argues in *Amazon sees like a state* that Amazon approximately solves the economic calculation problem (ECP) from the Socialist Calculation Debate when it sets prices for the goods on its online marketplace.

US's 2022 GDP of $25 462B is 116x Amazon's 2022 online store revenue of $220B.

Assuming  scaling[1], it would take 500 000x the compute Amazon uses (for its own marketplace, excluding AWS) to approximately solve the ECP for the US economy (to the same level of approximation as "Amazon approximately solves the ECP"[2]).

In practice, I expect Amazon's "approximate solution" to be much more like [3], so maybe a factor of merely 800x.
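The ratio, and how sensitively the compute multiplier depends on the assumed scaling law, can be checked directly. A minimal sketch (my own arithmetic; the exponents k below are illustrative assumptions, not taken from the post):

```python
us_gdp_2022_bn = 25_462        # US 2022 GDP, $B (figure from the post)
amazon_store_2022_bn = 220     # Amazon 2022 online-store revenue, $B (excl. AWS)
ratio = us_gdp_2022_bn / amazon_store_2022_bn
print(round(ratio))            # 116

# Under a hypothetical polynomial cost n^k in economy size n, the compute
# multiplier is ratio^k. Each assumed k gives a very different answer:
for k in (1.5, 2, 2.5, 3):
    print(f"k={k}: {ratio ** k:,.0f}x the compute")
```

Note how strongly the conclusion depends on the assumed exponent: moving k by 0.5 shifts the answer by about two orders of magnitude.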

  1. ^

    The economic calculation problem (ECP) is a convex optimisation problem (convexity of utility functions) which can be solved by linear programming in .

  2. ^

    Which is not a vastly superhuman level of approximate solution? For example, Amazon is no longer growing fast enough to double every 4 years. Its marketplace also doesn't particularly incentivise R&D, while I think a god-like AI would incentivise R&D.

  3. ^

    I just made it up.

Ah, increasing the number of researchers is simply increasing  in . I didn't realize that!

Minor comment on one small paragraph:

> Price's Law says that half of the contributions in a field come from the square root of the number of contributors. In other words, productivity increases linearly as the number of contributors increases exponentially. Therefore, as the number of AI safety researchers increases exponentially, we might expect the total productivity of the AI safety community to increase linearly.

I think Price's law is false, but I don't know what law should replace it. I'll look at the literature on the rate of scientific progress (e.g. Cowen & Southwood (2019)) to see if I can find any relationship between the number of researchers and research productivity.
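The "linear productivity from exponential headcount" reading of Price's law can be made concrete. A toy sketch (my own derivation, not from the quoted post), which also assumes the top N contributors of an N²-person field produce about as much as a whole N-person field would:

```python
import math

# Under Price's law, the sqrt(N) most prolific contributors produce half the
# output. With the extra assumption above, T(N^2) = 2 * T(N), whose solution
# is T(N) proportional to log N: exponential growth in contributors buys
# only linear growth in total productivity.
def total_output(n_contributors, c=1.0):
    return c * math.log(n_contributors)

for n in (10, 100, 10_000, 10 ** 8):  # each N is the square of the previous
    print(n, round(total_output(n), 2))
# Each squaring of the contributor count doubles total output.
```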

Price's law is a poor fit; Lotka's law is a better fit

The most prominent citation for Price's law, Nicholls (1988), says that Price's law is a poor fit (section 4: Validity of the Price Law):

> Little empirical investigation of the Price law has been carried out to date [4,14]. Glänzel and Schubert [12] have reported some empirical results. They analyzed Lotka's Chemical Abstracts data and found that the most prolific √n authors contributed less than 20% of the total number of papers. They also refer, but without details, to the examination of "several dozens" of other empirical data sets and conclude that "in the usually studied populations of scientists, even the most productive authors are not productive enough to fulfill the requirements of Price's conjecture" [12]. Some incidental results of scientometric studies suggest that about 15% of the authors will be necessary to generate 50% of the papers [16,17].

> To further examine the empirical validity of Price's hypothesis, 50 data sets were collected and analyzed here. ... the contribution of the most prolific √n group of authors fell considerably short of the [50% of the papers] predicted by Price. ... The actual proportion of all authors necessary to generate at least 50% of the papers was found to be much larger than √n. Table 2 summarizes these results. In some cases, ..., more than half of the total number of papers is generated by those authors contributing only a single paper each. The absolute and relative size of this group for various population sizes is given in Table 3. All the empirical results referred to here are consistent; and, unfortunately, there seems little reason to suppose that further empirical results would offer any support for the Price law.

Nicholls (1988) continues, saying that Lotka's law (the number of authors with n publications is proportional to n⁻ᵃ) has good empirical support, and finds that the exponent a that best fits the sciences and humanities differs from the one that best fits the social sciences.

A different paper, Chung & Cox (1990), also finds that Price's law is a poor fit, while Lotka's law with exponent a between 1.95 and 3.26 is a good fit in finance.

(Allison, Price, Griffith, Moravcsik & Stewart (1976) discuss the mathematical relationship between Price's law and Lotka's law: neither implies the other, nor are they contradictory.)
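Nicholls's finding can be reproduced numerically. A toy check of Price's law against a Lotka's-law population (my own sketch, with assumed population parameters): if the number of authors with n papers is proportional to n⁻ᵃ, what share of all papers do the √N most prolific authors actually produce?

```python
def price_share(a, n_authors=10_000, n_max=1_000):
    """Share of papers produced by the sqrt(N) most prolific authors,
    assuming author counts follow Lotka's law with exponent a."""
    weights = [n ** (-a) for n in range(1, n_max + 1)]
    scale = n_authors / sum(weights)
    counts = [w * scale for w in weights]  # counts[n-1] = authors with n papers
    total_papers = sum(n * c for n, c in enumerate(counts, start=1))
    quota = n_authors ** 0.5               # the sqrt(N) most prolific authors
    papers = 0.0
    for n in range(n_max, 0, -1):          # take authors from the top down
        taken = min(counts[n - 1], quota)
        papers += n * taken
        quota -= taken
        if quota <= 0:
            break
    return papers / total_papers

print(f"{price_share(2.0):.2f}")  # well under the 0.5 Price's law predicts
```

Consistent with Nicholls: for plausible Lotka exponents, the top √N authors fall well short of half the papers, and the shortfall worsens as a grows.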

Later edits:

Porby, in his post *Why I think strong general AI is coming soon*, mentions a tangentially related idea: core researchers contribute much more insight than newer researchers, and new researchers need a lot of time to become core researchers.

In Porby's model, research productivity in year t may be proportional to the number of researchers in year t − k, where k is the number of years a new researcher needs to become a core researcher.
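A minimal sketch of this lagged-productivity idea (an assumed functional form for illustration, not porby's exact model): output in year t tracks headcount k years earlier, so a recent hiring boom doesn't show up in research output until the lag has passed.

```python
def productivity(headcount_by_year, year, lag=5, output_per_core=1.0):
    """Research output in `year`, proportional to headcount `lag` years ago."""
    return output_per_core * headcount_by_year.get(year - lag, 0)

# Hypothetical headcounts: a field growing 10x every five years.
headcount = {2010: 10, 2015: 100, 2020: 1000}
print(productivity(headcount, 2020, lag=5))  # reflects 2015's 100, not 2020's 1000
```

With lag=5, the 2020 output reflects the 100 researchers of 2015; the 1000 hired by 2020 only show up in 2025's output.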
