JakubK

Comments
Inflection.ai is a major AGI lab
JakubK · 2y · 70

Relevant tweet/quote from Mustafa Suleyman, the co-founder and CEO:

Powerful AI systems are inevitable. Strict licensing and regulation is also inevitable. The key thing from here is getting the safest and most widely beneficial versions of both.

Best introductory overviews of AGI safety?
JakubK · 2y · 20

Thanks for writing and sharing this. I've added it to the doc.

Open Problems in AI X-Risk [PAIS #5]
JakubK · 2y · 10

What happened to black swan and tail risk robustness (section 2.1 in "Unsolved Problems in ML Safety")?

All AGI Safety questions welcome (especially basic ones) [May 2023]
JakubK · 2y · 40

It's hard to say. This CLR article lists some advantages that artificial systems have over humans. Also see this section of 80k's interview with Richard Ngo:

Rob Wiblin: One other thing I’ve heard, that I’m not sure what the implication is: signals in the human brain — just because of limitations and the engineering of neurons and synapses and so on — tend to move pretty slowly through space, much less than the speed of electrons moving down a wire. So in a sense, our signal propagation is quite gradual and our reaction times are really slow compared to what computers can manage. Is that right?

Richard Ngo: That’s right. But I think this effect is probably a little overrated as a factor for overall intelligence differences between AIs and humans, just because it does take quite a long time to run a very large neural network. So if our neural networks just keep getting bigger at a significant pace, then it may be the case that for quite a while, most cutting-edge neural networks are actually going to take a pretty long time to go from the inputs to the outputs, just because you’re going to have to pass it through so many different neurons.

Rob Wiblin: Stages, so to speak.

Richard Ngo: Yeah, exactly. So I do expect that in the longer term there’s going to be a significant advantage for neural networks in terms of thinking time compared with the human brain. But it’s not actually clear how big that advantage is now or in the foreseeable future, just because it’s really hard to run a neural network with hundreds of billions of parameters on the types of chips that we have now or are going to have in the coming years.

All AGI Safety questions welcome (especially basic ones) [May 2023]
JakubK · 2y · 30

The cyborgism post might be relevant:

Executive summary: This post proposes a strategy for safely accelerating alignment research. The plan is to set up human-in-the-loop systems which empower human agency rather than outsource it, and to use those systems to differentially accelerate progress on alignment. 

  1. Introduction: An explanation of the context and motivation for this agenda.
  2. Automated Research Assistants: A discussion of why the paradigm of training AI systems to behave as autonomous agents is both counterproductive and dangerous.
  3. Becoming a Cyborg: A proposal for an alternative approach/frame, which focuses on a particular type of human-in-the-loop system I am calling a “cyborg”.
  4. Failure Modes: An analysis of how this agenda could either fail to help or actively cause harm by accelerating AI research more broadly.
  5. Testimony of a Cyborg: A personal account of how Janus uses GPT as a part of their workflow, and how it relates to the cyborgism approach to intelligence augmentation.
How MATS addresses “mass movement building” concerns
JakubK · 2y · 10

Does current AI hype cause many people to work on AGI capabilities? Different areas of AI research differ significantly in their contributions to AGI.

AI policy ideas: Reading list
JakubK · 2y · 70
  • An AI Policy Tool for Today: Ambitiously Invest in NIST (Anthropic 2023)
  • National Security Addition to the NIST AI RMF (Special Competitive Studies Project 2023)
  • Existential risk and rapid technological change - a thematic study for UNDRR (Stauffer et al. 2023), especially section 4.3 ("30 actions to reduce existential risk")
  • Crafting Legislation to Prevent AI-Based Extinction: Submission of Evidence to the Science and Technology Select Committee’s Inquiry on the Governance of AI (Cohen and Osborne 2023)
  • Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files (Korinek 2021)
A decade of lurking, a month of posting
JakubK · 2y · 43

I've grown increasingly alarmed and disappointed by the number of highly-upvoted and well-received posts on AI, alignment, and the nature of intelligent systems, which seem fundamentally confused about certain things.

Can you elaborate on how all these linked pieces are "fundamentally confused"? I'd like to see a detailed list of your objections. It's probably best to make a separate post for each one.

GPT-4 solves Gary Marcus-induced flubs
JakubK · 2y · 10

That was arguably the hardest task, because it involved multi-step reasoning. Notably, I didn't even notice that GPT-4's response was wrong.

GPT-4 solves Gary Marcus-induced flubs
JakubK · 2y · 10

I believe that Marcus' point is that there are classes of problems that tend to be hard for LLMs (biological reasoning, physical reasoning, social reasoning, practical reasoning, object and individual tracking, non sequiturs). The argument is that problems in these classes will continue to be hard.

Yeah, this is the part that seems increasingly implausible to me. If there is a "class of problems that tend to be hard ... [and] will continue to be hard," then someone should be able to build a benchmark that models consistently struggle with over time.
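As a minimal sketch of what that could look like, here is a toy harness in Python. The tasks, pass-checkers, and dummy model are hypothetical placeholders, not a real benchmark; the idea is just that you fix a task suite once and re-score successive models on it over time.

```python
# Toy benchmark harness: fix a task suite, score successive models on it,
# and watch whether scores stay persistently low. Tasks and checkers here
# are hypothetical placeholders, not a real benchmark.

from typing import Callable

# Each task pairs a prompt with a checker that accepts or rejects an answer.
TASKS = [
    ("If I put a book in a bag and carry the bag upstairs, where is the book?",
     lambda ans: "upstairs" in ans.lower()),
    ("Alice is taller than Bob. Bob is taller than Carol. Who is shortest?",
     lambda ans: "carol" in ans.lower()),
]

def score_model(generate: Callable[[str], str]) -> float:
    """Return the fraction of tasks that generate(prompt) answers acceptably."""
    passed = sum(check(generate(prompt)) for prompt, check in TASKS)
    return passed / len(TASKS)

# Usage: plug each new model release in as `generate` and compare scores.
def dummy_model(prompt: str) -> str:
    return "It is Carol, and the book is upstairs."

print(f"score: {score_model(dummy_model):.2f}")
```

If the "persistently hard class" claim were right, a suite like this should stay near floor across model generations, which is exactly the pattern the comment above doubts will hold.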

Posts
  • Averting Catastrophe: Decision Theory for COVID-19, Climate Change, and Potential Disasters of All Kinds (10 karma · 2y · 0 comments)
  • Notes on "the hot mess theory of AI misalignment" (16 karma · 2y · 0 comments)
  • GPT-4 solves Gary Marcus-induced flubs (56 karma · 2y · 29 comments)
  • Next steps after AGISF at UMich (10 karma · 2y · 0 comments)
  • List of technical AI safety exercises and projects (41 karma · 2y · 5 comments)
  • 6-paragraph AI risk intro for MAISI (11 karma · 2y · 0 comments)
  • Big list of AI safety videos (11 karma · 3y · 2 comments)
  • Summary of 80k's AI problem profile (7 karma · 3y · 0 comments)
  • New AI risk intro from Vox [link post] (5 karma · 3y · 1 comment)
  • Best introductory overviews of AGI safety? [Question] (21 karma · 3y · 9 comments)