LESSWRONG
All Posts
Sorted by Magic (New & Upvoted)
Timeframe: All time · Filtered by: All Posts
24 · Rational Animations is looking for an AI Safety scriptwriter, a lead community manager, and other roles. · Writer · 4h · 0 comments
115 · I still think it's very unlikely we're observing alien aircraft · dynomight · 1d · 44 comments
18 · Distilling Singular Learning Theory · Liam Carroll · 4h · 0 comments
168 · Lightcone Infrastructure/LessWrong is looking for funding · habryka · 2d · 22 comments
44 [Ω] · AXRP Episode 22 - Shard Theory with Quintin Pope · DanielFilan · 18h · 1 comment
38 · AI #16: AI in the UK · Zvi · 1d · 11 comments
6 · Scaffolded LLMs: Less Obvious Concerns · Stephen Fowler · 3h · 1 comment
132 · The Dial of Progress · Zvi · 3d · 95 comments
145 · UFO Betting: Put Up or Shut Up · RatsWrongAboutUAP · 3d · 99 comments
19 · Leveling Up Or Leveling Off? Understanding The Science Behind Skill Plateaus · lynettebye · 13h · 3 comments
29 · Matt Taibbi's COVID reporting · ChristianKl · 1d · 23 comments
77 · My guess for why I was wrong about US housing · romeostevensit · 3d · 11 comments
20 · Developing a technology with safety in mind: Lessons from the Wright Brothers · jasoncrawford · 16h · 3 comments
5 · DSLT 1. The RLCT Measures the Effective Dimension of Singular Models · Liam Carroll · 4h · 0 comments
5 · [Linkpost] Mapping Brains with Language Models: A Survey · Bogdan Ionut Cirstea · 4h · 0 comments
23 · Why "AI alignment" would better be renamed into "Artificial Intention research" · chaosmage · 1d · 12 comments
171 [Ω] · Inference-Time Intervention: Eliciting Truthful Answers from a Language Model · likenneth · 5d · 3 comments
122 [Ω] · ARC is hiring theoretical researchers · paulfchristiano, Jacob_Hilton, Mark Xu · 4d · 10 comments
47 [Ω] · Instrumental Convergence? [Draft] · J. Dmitri Gallow · 2d · 11 comments
33 · Why libertarians are advocating for regulation on AI · RobertM · 2d · 12 comments
349 · The ants and the grasshopper · Richard_Ngo · 7d · 31 comments
13 · Philosophical Cyborg (Part 2)...or, The Good Successor · ukc10014 · 1d · 0 comments
14 · human intelligence may be alignment-limited · bhauth · 15h · 3 comments
63 [Ω] · MetaAI: less is less for alignment. · Cleo Nardo · 3d · 10 comments
9 · [Linkpost] World first as UK hosts inaugural AUKUS AI and autonomy trial · NinaR · 1d · 0 comments
37 · On the Apple Vision Pro · Zvi · 2d · 12 comments
29 · Looking Back On Ads · jefftk · 1d · 9 comments
27 · Philosophical Cyborg (Part 1) · ukc10014, Roman Leventov, NicholasKees · 2d · 4 comments
164 · Updates and Reflections on Optimal Exercise after Nearly a Decade · romeostevensit · 8d · 31 comments
70 · Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted · David Chee · 4d · 14 comments
57 [Ω] · TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI · Andrew_Critch · 3d · 1 comment
206 · The Base Rate Times, news through prediction markets · vandemonian · 3d · 38 comments
34 · Anthropic | Charting a Path to AI Accountability · Gabriel Mukobi · 2d · 1 comment
314 · Things I Learned by Spending Five Thousand Hours In Non-EA Charities · jenn · 15d · 31 comments
372 [Ω] · Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures · Dan H · 17d · 71 comments
7 [Q] · Does anyone's full-time job include reading and understanding all the most-promising formal AI alignment work? · NicholasKross · 11h · 2 comments
181 · Launching Lightspeed Grants (Apply by July 6th) · habryka · 9d · 22 comments
19 · Progress links and tweets, 2023-06-14 · jasoncrawford · 2d · 1 comment
5 · Dreaming of Utility · Mariven · 8h · 0 comments
53 [Ω] · Introduction to Towards Causal Foundations of Safe AGI · tom4everitt, Lewis Hammond, Francis Rhys Ward, RyanCarey, James Fox, mattmacdermott, sbenthall · 4d · 4 comments
1 · Press the happiness button! · Spiarrow · 20h · 3 comments
136 [Ω] · What will GPT-2030 look like? · jsteinhardt · 9d · 37 comments
31 · Multiple stages of fallacy - justifications and non-justifications for the multiple stage fallacy · AronT · 3d · 2 comments
74 · Ethodynamics of Omelas · dr_s · 6d · 16 comments
39 [Ω] · Contingency: A Conceptual Tool from Evolutionary Biology for Alignment · clem_acs · 4d · 0 comments
2 · A more effective Elevator Pitch for AI risk · Iknownothing · 1d · 0 comments
136 [Ω] · Algorithmic Improvement Is Probably Faster Than Scaling Now · johnswentworth · 10d · 17 comments
38 · If you are too stressed, walk away from the front lines · Neil · 4d · 12 comments
92 [Ω] · Takeaways from the Mechanistic Interpretability Challenges · scasper · 8d · 5 comments
34 · Aura as a proprioceptive glitch · pchvykov · 4d · 4 comments