Oliver Kuperman
Karma: 3560
Comments

The Case for An AI Safety Political Party in the US
Oliver Kuperman2h10

I mean, the Green New Deal was widely influential? Sure, it did not pass, but I think it’s pretty easy to argue the Green Party had an effect on US environmental policy. Did you read the section on that, which included a study demonstrating that Green Party candidates entering competitive elections had an effect on Democratic Party platforms? It’s not an airtight case, but little is in politics, and it’s not a stretch to think the Democratic Party’s strong support for environmental policies is at least partially due to the influence of the US Green Party.

On the issue of politicization, that is one of the reasons I propose a third party with generally moderate positions. However, even if the issue does get politicized before the party is able to influence policy, I don’t see how that is worse than the status quo. I’d rather have one party support AI safety and another party oppose it than have both parties ignore the issue.

On reposting, I am not sure what I am supposed to do when nobody leaves any comments. The point of this essay was not so much to convince people as to figure out why people have not vigorously pursued this option before. This is the biggest platform for serious discussion of AI safety, and the essay is written for an audience that already takes AI somewhat seriously, so I do not know what else to do with it.

Anyway, thanks for the feedback.

Reply
The Case for An AI Safety Political Party in the US
Oliver Kuperman2h10

Thanks for the response! I agree that “reasonable chance of success” is a somewhat vague claim that people might read as a guarantee of immediate electoral success. However, a major point of this essay is that a third party doesn’t need to actually win any elections to have a substantial impact (although I think the ability to get a sitting senator to run under your ticket is not the best predictor of electoral success). I agree that if one narrowly focuses on long-term electoral success, this project loses a ton of its value, but I reject that framing.

Ross Perot never won, and I think few people would deny his campaign had a major impact on US policy, or at least on public discourse. RFK Jr. also did not win, but he got to take over HHS by leveraging his political support. The US Green Party never won, but I think their influence has still been substantial, given the Democratic Party’s embrace of environmental policies.

I think reposting this on a weekend may be a better idea, as people will have more time to read it, but I agree the post might be too long. Once you have read more of it, could you tell me which parts you think I should cut?

Also, do you have any other ideas about why this post has been downvoted so heavily? I can see why its length might keep it from getting upvotes, but the subject seems somewhat novel and topical and covers a wide range of possibilities. Are there any blatant writing flaws you see?

Reply
Should I Divest from AI?
Oliver Kuperman7mo10

Could you link me to his work? If he is correct, it seems a little bit counterintuitive. 

Reply
Should I Divest from AI?
Oliver Kuperman7mo10

Given your response, it seems like there should be a stronger push toward AI divestment from within the LessWrong and EA communities. Assuming that many members are heavily invested in index funds like the S&P 500, millions of dollars from the LessWrong community alone are going into the stock of companies pursuing AI capabilities research (Microsoft, Google, and Nvidia alone make up more than 10% of the index’s market cap), which is not an intuitively negligible effect in my view. One could rationalize this by saying the excess gains could be used to fund AI safety, but you seem to disagree with this (I am uncertain myself, given my lack of experience with AI safety non-profits).
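For concreteness, here is a minimal back-of-the-envelope sketch of that claim. Only the ~10% figure for Microsoft, Google, and Nvidia comes from the comment above; the community's total index holdings are a purely hypothetical number used for illustration.

```python
# Back-of-the-envelope estimate of AI exposure via an S&P 500 index fund.
# The 10% share is the comment's estimate for Microsoft + Google + Nvidia combined;
# the total holdings figure is a purely hypothetical assumption.

ai_share_of_index = 0.10          # assumed combined index weight of AI-capabilities firms
community_index_holdings = 50e6   # hypothetical total community S&P 500 holdings, in dollars

# Dollars effectively allocated to AI-capabilities firms through the index.
ai_exposure = community_index_holdings * ai_share_of_index
print(f"Implied dollars allocated to AI-capabilities firms: ${ai_exposure:,.0f}")
```

Under these assumed numbers the implied exposure is on the order of $5 million, which is the scale the comment is pointing at; the real figure depends entirely on actual community holdings.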

Reply
Why isn't AI containment the primary AI safety strategy?
Oliver Kuperman7mo30

Very interesting paper, thanks for sharing! I agree with several of the limitations suggested in the paper, such as the correlation between the number of uses of the oracle AI and catastrophic risk, the analogy of AI to a nuclear power plant (obviously with the former having potentially much worse consequences), and the disincentives for corporations to cooperate with containment safety measures. However, one area I would like to question you on is the potential danger of superintelligence. It’s referred to throughout the paper but never really explicitly explained.

I agree that superintelligent AI, as opposed to human-level AI, should probably be avoided, but if we design the containment system well enough, I would like to know how having a superintelligent AI in a box would really be that dangerous. Sure, the superintelligent AI could theoretically make subtle suggestions that end up changing the world (à la the toothpaste example you use), and exploit other strategies we are not aware of, but even in the worst case I feel that still buys us valuable time to solve alignment.

Regarding open-weight models, I agree that at some point regulation has to be put in place to prevent unsafe AI development (possibly at an international level). This may not be very feasible, but regardless, I view comprehensive alignment as unlikely to be achieved before 2030, so I feel containment is still the best safety strategy to pursue if existential-risk mitigation is our primary concern.

Reply
My Critique of Effective Altruism
Oliver Kuperman1y*30

As a full-throated defender of pulling the lever (given traditional assumptions such as the lack of an audience, complete knowledge of each outcome, and the productivity of the people on the tracks), I see numerous issues with your proposals:

1.) Vague alternative: You seem to be pushing toward some form of virtue ethics/basic intuitionism, but there are numerous problems with this approach. Besides determining whose basic intuitions count and whose don't, or which virtues are important, there are very real problems when these virtues conflict. For instance, imagine you are walking at night and trying to cross a street. The light is red, but no cars are around. Do you jaywalk? In this circumstance, one is forced to make a decision which pits two virtues/intuitions against each other. The beauty of utilitarianism is that it allows us to choose in these circumstances.

2.) Subjective Morality: Yes, utilitarianism may not be "objective" in the sense that there is no intrinsic reason to value human flourishing, but I believe utilitarianism to be the viewpoint which most closely conforms to what most people value. To illustrate why this matters, I take an example from Alex O'Connor. Imagine you need to decide what color to paint a room. Nobody has very strong opinions, but most people in your household prefer the color blue. Yes, blue might not be "objectively" the best, but if most of the people in your household like blue the most, there is little reason not to choose it. We are all individually going to seek what we value, so we might as well collectively agree to a system which reflects the preferences of most people.

3.) Altruism in Disguise:

Another thing to notice is that virtue ethics can be a form of effective altruism when practiced in specific ways. In general, bettering yourself as a person by becoming more rational, less biased, etc., will in fact make the world a better place, and giving time to forming meaningful relationships, engaging in leisure, etc. can actually increase productivity in the long run.

You also seem to advocate for fundamental changes in society, changes I am not sure I would agree with, but if your proposed changes are indeed the best way to increase the general happiness of the population, pursuing them would be, by definition, the goal of the EA movement. I think a lot of people look at the recent stuff with SBF and AI research and come to think the EA movement is only concerned with lofty existential-risk scenarios, but there is a lot more to it than that.

 

Edit:
Almost forgot this, but for citation: Alex O'Connor (in this video) formulated the blue room example. We use it differently (he uses it to argue against objective morality), but he verbalized it.

Reply
Posts:

The Case for An AI Safety Political Party in the US (-3 points, 9h, 4 comments)
Should I Divest from AI? [Question] (6 points, 7mo, 4 comments)
Why isn't AI containment the primary AI safety strategy? [Question] (1 point, 7mo, 3 comments)
Why be moral if we can't measure how moral we are? Is it even possible to measure morality? [Question] (-2 points, 1y, 0 comments)
Thoughts on the relative economic benefits of polyamorous relationships? [Question] (-1 point, 1y, 4 comments)