AntonTimmer

Comments

DeepMind alignment team opinions on AGI ruin arguments

I misused the definition of a pivotal act, which made my comment confusing. My bad!

I understood the phrase "pivotal act" more in the spirit of an out-of-distribution effort. To rephrase it more clearly: do "you" think an out-of-distribution effort is needed right now? For example, sacrificing the long term (20 years) for the short term (5 years), or going for high-risk, high-reward strategies.

Or should we stay on our current trajectory, since it maximizes our chances of winning? (Which, as far as I can tell, is "your" opinion.)

DeepMind alignment team opinions on AGI ruin arguments

As far as I can tell, the major disagreements are about us having a plan and about taking a pivotal act. There seems to be general "consensus" (Unclear, Mostly Agree, Agree) about what the problems are and what an AGI might look like. Since no pivotal act is needed, either you think we will be able to tackle this problem with the resources we have and will have, or you have (way) longer timelines (let's assume Eliezer's timeline is 2032 for argument's sake), or you expect the world to make a major shift in priorities concerning AGI.

Am I correct in assuming this, or am I missing some alternatives?

All AGI safety questions welcome (especially basic ones) [monthly thread]

This seems to boil down to the "AI in the box" problem. People are convinced that keeping an AI trapped is not possible. There is a tag you can look up (AI Boxing), or you can just read up here.

Church vs. Taskforce

Reading this 13 years later is quite interesting when you think about how far the LW and EA communities have come.

[$20K in Prizes] AI Safety Arguments Competition

"If AGI systems can become as smart as humans, imagine what one human/organization could do by just replicating this AGI."

A Quick Guide to Confronting Doom

I feel the same. I think there are just a lot of problems one could try to solve that would increase the good in the world. The difference between alignment and the rest seems to be that the probability of humans going extinct is much higher.

Ideal governance (for companies, countries and more)

Curtis Yarvin might be interesting. There are two posts I would recommend: one on a general theory of collaboration and one about Monarchism. He gives an interesting perspective.