I have signed no contracts or agreements whose existence I cannot mention.
They thought they found in numbers, more than in fire, earth, or water, many resemblances to things which are and become; thus such and such an attribute of numbers is justice, another is soul and mind, another is opportunity, and so on; and again they saw in numbers the attributes and ratios of the musical scales. Since, then, all other things seemed in their whole nature to be assimilated to numbers, while numbers seemed to be the first things in the whole of nature, they supposed the elements of numbers to be the elements of all things, and the whole heaven to be a musical scale and a number.
The pardon example does not seem at all targeted; the constitution doesn’t even say that weird shit needs to happen before it can be used, and my impression (though I haven’t done a review of the literature) is that much of the time it’s used for nepotism and cronyism, so that one’s friends, family, and political allies don’t have to obey the laws. Recently it’s been used as a defense for the president himself to avoid laws and justice.
Yes, comparatively it’s less dumb than things of the form “the president can decide whether there’s an emergency x and then under those circumstances they get a whole bunch more power”, but it’s still a great tax on the principles of equality under the law and the rule of law.
That this power was used on Nixon is notable because it was the president helping their political allies; it is clearly, on its face, a bad thing to set a precedent that once president, one is not subject to the laws anymore, because other presidents will bail you out.
It is interesting how the US Constitution has contingencies for "things can be weird, so the US president can do weird things in response". For example the history of US pardons (including the pardon of Nixon) was surprising to me. I wonder how much AI specs and RSPs should include this kind of targeted exception to regular process.
This seems like exactly the wrong lesson to take? If you’re the president, you want the ability to do weird things when things get weird, but if you’re creating a system to contain the president, you really don’t want to give them the ability to do weird things when things get weird; indeed, suspending laws by citing emergencies and other exceptions is a standard play when turning a democracy into a dictatorship.
Given that our RSPs and RLHF constitutions are meant to be systems to contain AI labs and models, it does not seem good to have “if things are going weird, do tf you want” clauses. If nothing else, things will get weird; that is all but guaranteed, so if you have such a clause, the whole framework becomes just “do tf you want”.
I also just really don’t know how you can look at Trump and say “wow, people sure were real smart when they gave the president all those emergency powers and stuff, weren’t they?”
That was not in fact your original point. Before you said,
rationality thinks wisdom is intelligence
I don't think either EAs or rationalists think about or in terms of wisdom at all.
EA is not philosophically unified enough to have an answer to the question "what is wisdom", and rationalists (at their best) would just respond with "Of what decision relevance is this 'wisdom' thing you talk about?" or "Standard definitions seem to conflate wisdom with several things which seem relatively distinct, e.g. knowing what you want, knowing about the world, and knowing how to achieve what you want".
I would otherwise agree with you, but I think the AI alignment ecosystem has been burnt many times in the past by giving a bunch of money to people who said they cared about safety without asking enough questions about whether they actually believed “AI may kill everyone, and that is at or near my number 1 priority”.
It matters whether you grow rice (needs irrigation funded and controlled by a larger group, and coordination of when you drain your fields) or wheat (more suitable for family farms), as studies have shown (including quasi-experimental work).
The "quasi-experimental" study is, in my opinion, very silly. Here are their descriptions of the tests of cultural differences they ran:
To test individualism:
Participants drew circles to represent the self and their friends in a diagram of their social network. Later, we measured the size of the self circle and the average of the friend circles.
To test loyalty/nepotism:
The task asks people to imagine going into a business deal with a friend, who then lies during the deal, which causes the participant to make less money in the deal. Participants can punish the friend for their dishonesty by paying a small amount of money to delete money from their bank account (paying 0–100 RMB to delete 0–1000 RMB [US$148]). In another scenario, the friend is honest, and they can reward the friend by paying to add money to their bank account. Crucially, participants completed two identical scenarios with a stranger. Thus, participants completed four scenarios in total (reward/punish friend/stranger). We analyzed whether participants treated the friend better than the stranger, even though the friend and the stranger acted the same. Treating the friend better could be seen positively as loyalty or negatively as nepotism.
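The friend-vs-stranger contrast they describe amounts to a simple difference score. A minimal sketch of my own reconstruction (the function and variable names are my assumptions, not the paper's code):

```python
# Hypothetical reconstruction of the loyalty/nepotism measure:
# how much better a participant treats a friend than a stranger
# who behaved identically across the four scenarios.

def net_treatment(reward_paid, punish_paid):
    # RMB the participant paid to add money, minus RMB paid to delete money.
    return reward_paid - punish_paid

def loyalty_score(friend_reward, friend_punish,
                  stranger_reward, stranger_punish):
    # Positive score => the friend is treated better than the stranger,
    # which the authors read as loyalty (or, less charitably, nepotism).
    return (net_treatment(friend_reward, friend_punish)
            - net_treatment(stranger_reward, stranger_punish))

# e.g. paying 50 RMB to reward a friend but 60 RMB to punish a stranger
print(loyalty_score(50, 30, 20, 60))  # (50-30) - (20-60) = 60
```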
To measure categorical vs relational thought style:
We measured holistic thought with the 14-item picture version of the triad task. In each triad, farmers saw a target object, such as the rabbit in Fig. 2. Then they chose one of two other objects to pair with it, such as a dog or carrot. One pairing belongs to the same abstract category (rabbits and dogs are mammals), and one pairing shares a functional relationship (rabbits eat carrots). We calculated the percentage of relational pairings as a measure of holistic thought.
I'd expect (and Claude agrees) that there have been extensive & rigorous studies of each of these properties in the field of psychometrics, which makes their use of these ad-hoc tests unnecessary & strange. If one were charitable, one could say that they wanted to use tests which had previously been correlated with rice-dominated/wheat-dominated farming areas, or that the relevant psychometric tests perhaps weren't translated into Chinese.
I don't know the methodology by which your statement was drafted (I don't think you even mention the statement directly in this post), but I will say that I think the correct methodology here is not to first come up with a statement and then email it out and ask a bunch of researchers whether they'd sign it. That is a recipe for exactly what you're encountering here: the researchers you're reaching out to have a very different perspective on public communication, and on how they'd like to represent their beliefs, than you do in your role as de-facto public relations manager in this project.
That is a good thing! We would like our researchers to primarily be thinking on simulacra level 1, and that means they will sign very different statements than what may seem optimal from the media's perspective.
However, as you point out, it can also be a bad thing, and decrease the PR manager's ability to... well... manage.
That is why I believe the solution is to first email the main researchers you'd like to sign the statement and ask what sorts of statements they would be willing to sign: what properties must the statement have in order for them to be comfortable putting their name below it? Then you create a statement which you know will have at least some base of support among researchers. You should expect a reasonable amount of iteration & compromise here, and should not expect to create, on your first try, a statement which everyone you want to sign will sign.
I will also say that it seems likely this is how the CAIS statement was drafted. They spent (if I remember right) quite a while workshopping it. That statement (to my knowledge) did not just appear out of the aether in its current state; it took work and compromise to get it.
Again, I don't know whether you actually fell for the trap I mention, but it seems likely given the pushback you're getting ¯\_(ツ)_/¯, but if you could...
On the subject of “capturing dark energy” (I don’t think this technically captures any energy previously existing), my favorite proposal is to connect distant galaxies together with strings, and use the resulting tension to turn a turbine. In principle your limit would then only be the strength and length of your intergalactic rope.
See “Mining Energy in an Expanding Universe” for what I think is the earliest proposal for this idea.
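As a rough order-of-magnitude sketch of the rope-and-turbine idea (all numbers below are my own illustrative assumptions, not from the paper): a galaxy receding with the Hubble flow pulls the rope out at v = H0·d, so a rope under tension T delivers mechanical power P = T·v.

```python
# Back-of-the-envelope for the intergalactic-rope generator.
# Figures are illustrative assumptions, not from the cited paper.

H0 = 2.3e-18    # Hubble constant in 1/s (~70 km/s per megaparsec)
MPC = 3.086e22  # one megaparsec in metres

d = 100 * MPC   # assumed distance to the anchor galaxy
T = 1e9         # assumed rope tension in newtons (set by rope strength)

v = H0 * d      # speed at which the rope pays out (Hubble recession)
P = T * v       # mechanical power delivered to the turbine, in watts

print(f"recession speed ~ {v:.1e} m/s, power ~ {P:.1e} W")
```

With these made-up numbers the yield is a few petawatts, though as the comment above notes, the real limit is the strength and length you can achieve for the rope.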
Well Orpheus apparently agrees with me, so you probably understood the original comment better than I did!