LESSWRONG

lukeprog

Sequences
Rationality and Philosophy
The Science of Winning at Life
No-Nonsense Metaethics
Comments

nikola's Shortform
lukeprog · 6mo · 50

Are you able to report the median AGI timeline for ~all METR employees? Or are you just saying that the "more than half" is how many responded to the survey question?

AI #13: Potential Algorithmic Improvements
lukeprog · 2y · 71

no one is currently hard at work drafting concrete legislative or regulatory language

I'd like readers to know that fortunately, this hasn't been true for a while now. But yes, such efforts continue to be undersupplied with talent.

AI #12: The Quest for Sane Regulations
lukeprog · 2y · 20

Where is the Arnold Kling quote from?

Shut Up and Divide?
lukeprog · 3y · 160

I haven't read the other comments here and I know this post is >10yrs old, but…

For me, (what I'll now call) effective-altruism-like values are mostly second-order, in the sense that a lot of my revealed behavior shows that a lot of the time I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed rational side of my brain wins out and I do something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don't really detect in myself a symmetrical second-order want to NOT want to help strangers. So that's one thing that "Shut up and multiply" has over "shut up and divide," at least for me.

That said, I realize now that I'm often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor's occasional desire to help strangers and suggest they generalize it, but I don't symmetrically appeal to their clearer and more common disinterest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that's a more complicated conversation.

Humans are very reliable agents
lukeprog · 3y · 40

Somewhat related: Estimating the Brittleness of AI.

Clem's Memo
lukeprog · 3y · 90

See also e.g. Stimson's memo to Truman of April 25, 1945.

Ideal governance (for companies, countries and more)
lukeprog · 3y · 40

Some other literature off the top of my head:

  • Collective Reflective Equilibrium in Practice
  • Not "ideal," but exploring what's possible: Legal Systems Very Different from Ours
  • There's a pretty large literature on various forms of "deliberative democracy," e.g. see here and here
  • I would guess there have been interesting discussions of ideal governance in the context of DAOs
Epistemic Legibility
lukeprog · 3y · 290

Lots of overlap between this concept and what Open Phil calls reasoning transparency.

List of Probability Calibration Exercises
lukeprog · 3y · 50

The Open Philanthropy and 80,000 Hours links are for the same app, just at different URLs.

Forecasting Newsletter: December 2021
lukeprog · 4y · 20

On Foretell moving to ARLIS… There's no way you could've known this, but as it happens Foretell is moving from one Open Phil grantee (CSET) to another (UMD ARLIS). TBC I wasn't involved in the decision for Foretell to make that transition, but it seems fine to me, and Foretell is essentially becoming another part of the project I funded at ARLIS.

Posts
Features that make a report especially helpful to me · 3y · 40 points · 0 comments
Preliminary thoughts on moral weight · 7y · 93 points · 49 comments
Quick thoughts on empathic metaethics · 8y · 29 points · 0 comments
MIRI's 2014 Summer Matching Challenge · 11y · 26 points · 32 comments
Will AGI surprise the world? · 11y · 24 points · 129 comments
Some alternatives to “Friendly AI” · 11y · 30 points · 44 comments
An onion strategy for AGI discussion · 11y · 23 points · 12 comments
Can noise have power? · 11y · 19 points · 42 comments
Calling all MIRI supporters for unique May 6 giving opportunity! · 11y · 34 points · 48 comments
Is my view contrarian? · 11y · 33 points · 96 comments
Wikitag Contributions
Timeless Decision Theory · 11y
Highly Advanced Epistemology 101 For Beginners · 13y · (+25)
Highly Advanced Epistemology 101 For Beginners · 13y · (+47)
Rationality and Philosophy · 13y · (+58)
Acausal Trade · 13y · (+58/-74)
Highly Advanced Epistemology 101 For Beginners · 13y · (+19)
Timeless Decision Theory · 13y · (+6/-6)
Predictionbook · 13y · (+16)
Predictionbook · 13y · (+8)
AI Takeoff · 13y · (+12/-10)