LESSWRONG

Chris Cooper · 18190

Comments (sorted by newest)

Dumb Dichotomies in Ethics, Part 1: Intentions vs Consequences
Chris Cooper · 5y · 20

> Is it possible that philosophers just don’t know about the concept? Maybe it is so peculiar to math and econ that “expected value” hasn’t made its way into the philosophical mainstream.

I believe the concept of expected value is familiar to philosophers and is captured in the doctrine of rule utilitarianism: we should live by rules that can be expected to maximize happiness, not judge individual actions by whether they in fact maximize happiness. (Of course, there are many other ethical doctrines.)

Thus, it's a morally good rule to live by that you should bring puppies in from the cold – while taking normal care not to cause traffic accidents and not to distract children playing near the road, and provided that you're reasonably sure that the puppy hasn't got rabies, etc. – the list of caveats is open-ended.
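The rule-utilitarian idea can be made concrete with a toy calculation (every probability and utility below is invented purely for illustration): a rule is endorsed when the probability-weighted sum of the values of its outcomes is positive, even though some individual acts under the rule turn out badly.

```python
# Hypothetical figures for the puppy rule, chosen only to illustrate
# the expected-value calculation; they are not from any real analysis.
outcomes = [
    (0.95, 10),    # puppy saved, nothing goes wrong
    (0.04, -5),    # minor mishap (a scratch, a distracted child)
    (0.01, -100),  # rare disaster (traffic accident, rabid puppy)
]

# Expected value: sum of probability x utility over outcomes.
expected_value = sum(p * u for p, u in outcomes)

# The rule comes out positive on average, so a rule utilitarian adopts it
# despite the rare cases where following it in fact makes things worse.
print(expected_value)
```

The open-ended list of caveats in the comment corresponds to refining the outcome list: each caveat prunes a low-probability, high-cost outcome and so raises the rule's expected value.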

After writing the above I checked the Stanford Encyclopedia of Philosophy. There are a huge number of variants of consequentialism but here's one relevant snippet:

https://plato.stanford.edu/entries/consequentialism-rule/#FulVerParRulCon

Rule-consequentialist decision procedure: At least normally, agents should decide what to do by applying rules whose acceptance will produce the best consequences, rules such as “Don’t harm innocent others”, “Don’t steal or vandalize others’ property”, “Don’t break your promises”, “Don’t lie”, “Pay special attention to the needs of your family and friends”, “Do good for others generally”.

This is enough to show that expected value is familiar in ethics. 
 

just_browsing's Shortform
Chris Cooper · 5y · 10

> Each block has a reference code. If you paste that reference code elsewhere, the same block appears.

> It's hard to reliably right-click the tiny bullet point (necessary for grabbing the block reference)

I never need to do this. If you type "(())" [no quotes] at the destination point and then start typing text from the block you're referencing, blocks containing that text will appear in a window. Keep typing until you can see the desired block, then click it to insert it.

If you type the trigger "/block", the menu that appears contains four fun things you can do with blocks.

Call for volunteers: assessing Kurzweil, 2019
Chris Cooper · 5y · 40

I'm guessing you want respondents to put in serious research - you're not looking for people's unreflective attitudes - sorry, intuitions?

AI Safety Reading Group
Chris Cooper · 6y · 50


* The question addressing Gwern's post about Tool AIs wanting to be Agent AIs.

When Søren posed the question, he identified the agent / tool contrast with the contrast between centralized and distributed processing, and Eric denied that they are the same contrast. He then went on to discuss the centralized / distributed contrast, which he regards as of no particular significance. In any system, even within a neural network, different processes are conditionally activated according to the task at hand and don't use the whole network. These different processes within the system can be construed as different services.

Although there is mixing and overlapping of processes within the human brain, this is a design flaw rather than a desirable feature.

I thought there was some mutual misunderstanding here. I didn't find the tool / agent distinction being addressed in our discussion.

* The question addressing his optimism about progress without theoretical breakthroughs (related to NNs/DL).

Regarding breakthroughs versus incremental progress: Eric reiterated his belief that we are likely to see improvements in doing particular tasks but a system that – in his examples – is good at counting leaves on a tree is not going to be good at navigating a Mars rover, even if both are produced by the same advanced learning algorithm. I couldn't identify any crisp arguments to support this.

Largest open collection quotes about AI
Chris Cooper · 6y · 32

This is a fine job, Dmitry.

If it's possible to edit your post, it would be a good idea to link to your improved spreadsheet at

https://docs.google.com/spreadsheets/d/19edstyZBkWu26PoB5LpmZR3iVKCrFENcjruTj7zCe5k/edit?fbclid=IwAR1_Lnqjv1IIgRUmGIs1McvSLs8g34IhAIb9ykST2VbxOs8d7golsBD1NUM#gid=1448563947


TAISU - Technical AI Safety Unconference
Chris Cooper · 6y · 50

I'm filling in the booking form now. I intend to stay for the four days.

Chris Cooper

Rationality Café No. 6 - The Sequences, Part 1; Section B Repeat
Chris Cooper · 7y · 10

Similar question: I'm at Pinkman's - packed. How will you make yourself known? Giant paperclip on table?

Waterfall Diagram
Chris Cooper · 9y* · 10

The banner reading "Your proposal has been submitted" lingers into subsequent editing processes, causing confusion.

Waterfall Diagram
Chris Cooper · 9y* · 10

This page is screwed.

A panel within the first diagram reads:

 Posterior Odds   
      3:4

It's impossible to see where this comes from. Revise it to read:

 Posterior Odds
   20% × 90% : 80% × 30%
 =      0.18 : 0.24
 =         3 : 4
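The arithmetic in the revised panel can be checked mechanically, using the figures given above (prior odds 20% : 80%, likelihoods 90% and 30%):

```python
from math import gcd

# Posterior odds = prior odds x likelihood ratio, term by term.
left = 0.20 * 0.90    # 0.18
right = 0.80 * 0.30   # 0.24

# Reduce 0.18 : 0.24 to lowest integer terms (scale to ints, divide by gcd).
a, b = round(left * 100), round(right * 100)
g = gcd(a, b)
print(f"{a // g}:{b // g}")  # 3:4
```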
Posts

Beyond intelligence: why wisdom matters in AI systems · 2mo · 6

Wikitag Contributions

Bayes' rule · 9y · (+10/-11)