MaxRa

Comments

What will 2040 probably look like assuming no singularity?

Very cool prompt and list. Does anybody have predictions about the level of international conflict over AI and the level of "freaking out about AI" in 2040, given the AI improvements that Daniel is sketching out?

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

Good point relating it to markets. I don't think I understand Acemoglu and Robinson's perspective well enough here; the relationship between state, society, and markets is the biggest question mark I left the book with. I think A&R don't necessarily mean only individual liberty when talking about the power of society, but the general influence of everything that falls into the "civil society" cluster.

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

I was reminded of the central metaphor of Acemoglu and Robinson's "The Narrow Corridor" as a RAAP candidate:

  • civil society wants to be able to control the government, and undermines it if it can't
  • the government wants to become more powerful
  • successful societies inhabit a narrow corridor in which strengthening governments are strongly coupled with strengthening civil societies (toy sketch of this coupling below)
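
To make the coupling concrete, here's a toy sketch of the corridor as a coupled dynamical system. The functional form, growth rates, and penalty constants are entirely made up by me for illustration; nothing here is from A&R's book or the RAAP post:

```python
# Toy "narrow corridor" dynamics (my own illustrative assumptions):
# each side grows, but whichever side pulls ahead erodes the other.
def step(state_power: float, society_power: float, growth: float = 0.05):
    """One time step of the (made-up) state/society power dynamics."""
    gap = state_power - society_power
    # A dominant state pushes toward despotism (society loses ground);
    # a dominant society pushes toward state absence. Inside the
    # corridor (small gap), both strengthen together.
    new_state = state_power + growth * state_power - 0.1 * max(-gap, 0.0)
    new_society = society_power + growth * society_power - 0.1 * max(gap, 0.0)
    return new_state, new_society

s, c = 1.0, 1.0  # balanced start: stays in the corridor and both grow
for _ in range(50):
    s, c = step(s, c)
print(f"state={s:.2f}, society={c:.2f}")
```

Starting the toy model with a large imbalance instead makes the weaker side shrink, which is the agent-agnostic "shackled Leviathan" dynamic I had in mind.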

 

My AGI Threat Model: Misaligned Model-Based RL Agent

So rather than escaping and setting up shop on some hacked server somewhere, I expect the most likely scenario to be something like "The AI is engaging and witty and sympathetic and charismatic [...]"

(I'm new to thinking about this and would find responses and pointers really helpful.) In my head this scenario felt unrealistic because I expect transformative-ish AI applications to come along before highly sophisticated AIs start socially manipulating their designers. Just for the sake of illustration, I was thinking of things like stock-investment AIs, product-design AIs, military-strategy AIs, companionship AIs, and question-answering AIs, which all seem to have the potential to throw major curveballs. The associated incidents would update safety culture enough to make the classic "AGI argues itself out of a box" scenario unlikely. So I would worry more about scenarios where companies or governments feel their hands are tied and have to allow the use of, or rely on, potentially transformative AI systems.

Full-time AGI Safety!

Congrats, that's great news! :) I'd love to read your proposal; I'll shoot you a mail.

Conservatism in neocortex-like AGIs

Thanks, I find your neocortex-like AGI approach really illuminating.

Random thought:

(I think you also need to somehow set up the system so that "do nothing" is the automatically-acceptable default operation when every possibility is unpalatable.)

I was wondering whether this is necessarily the best "everything is unpalatable" policy. I could imagine that the best fallback option might instead be something like "preserve your options while gathering information, strategizing, and communicating with relevant other agents", assuming that this is not unpalatable too. I guess we may not yet trust the AGI enough for this, and option preservation might cause much more harm than doing nothing. But I still wonder whether there are cases in which every option is unpalatable but doing nothing is clearly worse.
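To illustrate what I have in mind, here's a hypothetical sketch of acceptability-gated action selection with a swappable fallback. The function names, the threshold rule, and the "preserve options" action are my own illustrative assumptions, not the post's design:

```python
# Hypothetical sketch of acceptability-gated action selection
# (my own illustration, not the post's proposal).
def choose_action(candidates, palatability, threshold, fallback="do_nothing"):
    """Pick the most palatable action that clears the acceptability
    threshold; otherwise return a designated default action."""
    acceptable = [a for a in candidates if palatability[a] >= threshold]
    if acceptable:
        return max(acceptable, key=lambda a: palatability[a])
    # Every option is unpalatable: my question is whether this slot
    # should hold "do_nothing" or "preserve_options_and_gather_info".
    return fallback

palatability = {"act_now": -2.0, "preserve_options": -0.5, "do_nothing": -1.0}
print(choose_action(list(palatability), palatability, threshold=0.0))
# -> "do_nothing" (the fallback), even though "preserve_options"
#    scores higher in this made-up example
```

Framed this way, my question is just about which action should occupy the fallback slot when nothing clears the threshold.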

Rationality and Geoguessr

Thanks for sharing, just played my first round and it was a lot of fun! 

AI Winter Is Coming - How to profit from it?

Why not make bets here? I expect many people would be willing to bet against an AI winter. Winning would additionally give you some social credit. I'd be interested in seeing some concrete proposals.

AI Research Considerations for Human Existential Safety (ARCHES)

Really enjoyed reading this. The section on "AI pollution" leading to a loss of control over the development of prepotent AI really interested me.

Avoiding [the risk of uncoordinated development of Misaligned Prepotent AI] calls for well-deliberated and respected assessments of the capabilities of publicly available algorithms and hardware, accounting for whether those capabilities have the potential to be combined to yield MPAI technology. Otherwise, the world could essentially accrue “AI-pollution” that might eventually precipitate or constitute MPAI.

  • I wonder how realistic it is to predict this, e.g. would you basically need the knowledge to build it to have a good sense of that potential?
  • I also thought the idea of AI orgs dropping all their work once the potential for this concentrates in another org is relevant here. Are there concrete plans for when this happens?
  • Are there discussions about when AI orgs might want to stop publishing things? I only know of MIRI, but would they advise others like OpenAI or DeepMind to follow their example?

Predictive coding = RL + SL + Bayes + MPC

Thanks a lot for the elaboration!

in particular I still can't really put myself in the head of Friston, Clark, etc. so as to write a version of this that's in their language and speaks to their perspective.

Just a side note: one of my profs is part of the Bayesian CogSci crowd and was fairly frustrated with and critical of both Friston and Clark. We read one of Friston's papers in our journal club and came away thinking that Friston is reinventing a lot of wheels and using odd terms for known concepts.

For me, this paper by Sam Gershman helped a lot in understanding Friston's ideas, and this one by Laurence Aitchison and Máté Lengyel was useful, too. 

I would say that the generative models are a consortium of thousands of glued-together mini-generative-models

Cool, I like that idea. I previously thought of the models as fairly separate and bulky entities; that sounds much more plausible.
