
jungofthewon

coo @ ought.org. by default please assume i am uncertain about everything i say unless i say otherwise :)

Comments
Survey of NLP Researchers: NLP is contributing to AGI progress; major catastrophe plausible
jungofthewon · 3y

Sure! Prior to this survey I would have thought:

  1. Fewer NLP researchers would have taken AGI seriously, identified understanding its risks as a significant priority, and considered a major catastrophe plausible. 
    1. I found it particularly interesting that underrepresented researcher groups were more concerned (though this is less surprising in hindsight, especially given the diversity of interpretations of "catastrophe"). I wonder how well the alignment community is doing with outreach to those groups. 
  2. There would be more scaling maximalists (as the survey respondents themselves also predicted)

I was also encouraged that the majority of people thought the majority of research is crap.

...Though I'm not sure how that math works out exactly. Unless people are aware that they themselves are publishing crap :P

(My understanding of) What Everyone in Technical Alignment is Doing and Why
jungofthewon · 3y

All good, thanks for clarifying.

Survey of NLP Researchers: NLP is contributing to AGI progress; major catastrophe plausible
jungofthewon · 3y

This was really interesting, thanks for running and sharing! Overall this was a positive update for me. 

"Results are here"

I think this just links to PhilPapers, not your survey results?

(My understanding of) What Everyone in Technical Alignment is Doing and Why
jungofthewon · 3y

"and Ought either builds AGI or strongly influences the organization that builds AGI."

"Strongly influences the organization that builds AGI" applies to all alignment research initiatives, right? Alignment researchers at e.g. DeepMind have less of an uphill battle, but they still have to convince the rest of DeepMind to adopt their work. 

Common misconceptions about OpenAI
jungofthewon · 3y

I also appreciated reading this.

Deliberate Grieving
jungofthewon · 3y

I found this post beautiful and somber in a sacred way.  Thank you.

How to do theoretical research, a personal perspective
jungofthewon · 3y

This was really helpful and fun to read. I'm sure it was nontrivial to get to this level of articulation and clarity. Thanks for taking the time to package it for everyone else to benefit from. 

Rant on Problem Factorization for Alignment
jungofthewon · 3y

If anyone has questions for Ought specifically, we're happy to answer them as part of our AMA on Tuesday.

Rant on Problem Factorization for Alignment
jungofthewon · 3y

I think we could play an endless and uninteresting game of "find a real-world example for / against factorization."

To me, the more interesting discussion is around building better systems for updating on alignment research progress:

  1. What would it look like for this research community to effectively update on results and progress? 
  2. What can we borrow from other academic disciplines? E.g. what would "preregistration" look like? 
  3. What are the ways more structure and standardization would be limiting / taking us further from truth? 
  4. What does the "institutional memory" system look like? 
  5. How do we coordinate the work of different alignment researchers and groups to maximize information value?
Supervise Process, not Outcomes
jungofthewon · 3y

Thanks for that pointer. It's always helpful to have analogies in other domains to take inspiration from.

Posts

Ought will host a factored cognition “Lab Meeting” (Ω · 3y · 35 karma · 1 comment)
Elicit: Language Models as Research Assistants (Ω · 3y · 71 karma · 6 comments)
Supervise Process, not Outcomes (Ω · 3y · 145 karma · 9 comments)
Beta test GPT-3 based research assistant (5y · 34 karma · 2 comments)
Automating reasoning about the future at Ought (5y · 17 karma · 0 comments)
Brainstorming positive visions of AI (Question · 5y · 52 karma · 25 comments)