All of Chris Cooper's Comments + Replies

Dumb Dichotomies in Ethics, Part 1: Intentions vs Consequences

>> Is it possible that philosophers just don’t know about the concept? Maybe it is so peculiar to math and econ that “expected value” hasn’t made its way into the philosophical mainstream.

I believe the concept of expected value is familiar to philosophers and is captured in the doctrine of rule utilitarianism: we should live by rules that can be expected to maximize happiness, not judge individual actions by whether they in fact maximize happiness. (Of course, there are many other ethical doctrines.)

Thus, it's a morally good rule to live by tha...

aaronb50 (3mo, 1 point): I don't think rule utilitarianism, as generally understood, is the same as expected consequences. Perhaps in practice their guidance generally coincides, but the former is fundamentally about social coordination to produce the best consequences and the latter is not. Hypothetically, you can imagine a situation in which someone is nearly certain that breaking one of these rules, just this once, would improve the world. Rule consequentialism says they should not break the rule; expected consequences says they should.
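
For concreteness, here is a minimal sketch (with invented probabilities and payoffs, not from the thread) of how the two criteria can come apart in the "just this once" case:

```python
# Hypothetical one-off decision: break a generally good rule this once?
# All probabilities and payoffs below are invented for illustration.

def expected_value(outcomes):
    """Expected value: sum of probability * payoff over possible outcomes."""
    return sum(p * v for p, v in outcomes)

keep_rule = [(1.0, 10)]                  # keeping the rule: certain, modest benefit
break_rule = [(0.95, 20), (0.05, -50)]   # nearly certain to help, small chance of harm

ev_keep = expected_value(keep_rule)      # 10.0
ev_break = expected_value(break_rule)    # 0.95*20 + 0.05*(-50) = 16.5

# Expected-consequences reasoning compares the acts: 16.5 > 10, so break the rule.
# Rule consequentialism evaluates the *rule*, not the one-off act, so it says
# keep the rule even here.
print(ev_keep, ev_break)
```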
just_browsing's Shortform

>>Each block has a reference code. If you paste that reference code elsewhere, the same block appears

>>It's hard to reliably right-click the tiny bullet point (necessary for grabbing the block reference)

I never need to do this. If you type "(())" [no quotes] at the destination point and then start typing text from the block you're referencing, blocks containing that text will appear in a window. Keep typing until you can see the desired block, then click on it to insert it.

If you type the trigger "/block", the menu that appears contains four fun things you can do with blocks.

just_browsing (3mo, 1 point): Ah, that is a big timesaver. Thanks!
Call for volunteers: assessing Kurzweil, 2019

I'm guessing you want respondents to put in serious research - you're not looking for people's unreflective attitudes - sorry, intuitions?

Stuart_Armstrong (1y, 2 points): I'm looking for people's intuitions on the meaning of the predictions (what was Kurzweil saying?). For most predictions, the research needed beyond that is small.
AI Safety Reading Group


* The question addressing Gwern's post about Tool AIs wanting to be Agent AIs.

When Søren posed the question, he identified the agent/tool contrast with the contrast between centralized and distributed processing, and Eric denied that they are the same contrast. He then went on to discuss the centralized/distributed contrast, which he regards as of no particular significance: in any system, even within a neural network, different processes are conditionally activated according to the task at hand and don't use the whole network. These different proces...
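
To make the conditional-activation point concrete, here is a toy sketch (my own illustration, assuming a simple hard-gating setup rather than anything from Eric's talk): a gate routes each input to one sub-network, so the rest of the network stays inactive for that input.

```python
import numpy as np

# Toy conditional activation: a gate picks one "expert" sub-network per
# input, so most of the network is unused on any given task.
rng = np.random.default_rng(0)

n_experts, d_in, d_out = 4, 8, 3
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate = rng.normal(size=(d_in, n_experts))

def forward(x):
    scores = x @ gate                 # gate scores each expert for this input
    active = int(np.argmax(scores))   # only the top-scoring expert runs
    return x @ experts[active], active

x = rng.normal(size=d_in)
y, which = forward(x)
print(f"expert {which} handled this input; the other {n_experts - 1} stayed inactive")
```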

NaiveTortoise (2y, 2 points): Thanks!
TAISU - Technical AI Safety Unconference

I'm filling in the booking form now. I intend to stay for the four days.

Rationality Café No. 6 - The Sequences, Part 1; Section B Repeat

Similar question: I'm at Pinkman's - packed. How will you make yourself known? Giant paperclip on table?