Recently, the moderators of LessWrong have decided to change the site's policies on LLM usage. The essence of the policy can be summarized by the following excerpt:

> With all that in mind, our new policy is this:
>
> * "LLM output" includes all of:
>   * text...
Introduction: The next presidential election represents a significant opportunity for advocates of AI safety to influence government policy. Depending on the timeline for the development of artificial general intelligence (AGI), it may also be one of the last U.S. elections capable of meaningfully shaping the long-term trajectory of AI governance....
Introduction: Artificial Intelligence is advancing rapidly, raising significant concerns about its safe development and deployment. Despite widespread public concern about AI, there is a notable absence of a sustained political movement dedicated to addressing these issues. While certain organizations and individuals are engaged in shaping AI policy, these efforts have...
In recent years, AI has been all the rage in the stock market, and there is no reason to expect that to slow down. With the picture of AGI on the horizon becoming clearer, faster and smarter models being released, and more and more investment being poured into AI...
Introduction: When discussing AI safety, alignment (ensuring AI systems pursue human-approved goals) is often the primary focus. However, containment, which restricts an AI's ability to exert influence beyond controlled environments, is, in my opinion, a more intuitive and less complex approach. This post will outline common objections to AI containment, explain why they...
Imagine that you believe that the purpose of your life is to be the best marathon runner in the world, or even just an above average marathon runner. Despite the fact that you are committed to the mission, imagine that running is not particularly enjoyable to you, and your training...