Ought
Dakara, v1.1.0, Dec 30th 2024 GMT
• Applied to Ought will host a factored cognition “Lab Meeting” by jungofthewon 2y ago
• Applied to Rant on Problem Factorization for Alignment by Multicore 2y ago
• Applied to Prize for Alignment Research Tasks by stuhlmueller 3y ago
• Applied to Elicit: Language Models as Research Assistants by stuhlmueller 3y ago
• Applied to Supervise Process, not Outcomes by stuhlmueller 3y ago
• Applied to Forecasting Thread: Existential Risk by MichaelA 4y ago
• Applied to GPT-3 and the future of knowledge work by plex 4y ago
• Applied to Beta test GPT-3 based research assistant by Multicore 4y ago
• Applied to Automating reasoning about the future at Ought by Ben Pace 4y ago
• Applied to The Majority Is Always Wrong by Raemon 4y ago
• Applied to Current AI Safety Roles for Software Engineers by Multicore 4y ago
• Applied to Factored Cognition by Ben Pace 5y ago
• Applied to Solving Math Problems by Relay by Ben Pace 5y ago
• Applied to The Stack Overflow of Factored Cognition by Ben Pace 5y ago
• Applied to [AN #86]: Improving debate and factored cognition through human experiments by Ben Pace 5y ago
• Applied to Update on Ought's experiments on factored evaluation of arguments by Ben Pace 5y ago
• Applied to Ought: why it matters and ways to help by Ben Pace 5y ago
Ben Pace, v1.0.0, Jul 22nd 2020 GMT (+90)
Ought is an AI alignment research non-profit focused on the problem of Factored Cognition.
• Created by Ben Pace 5y ago