All-around excellent back and forth, I thought, and a good look back at what the Biden admin was thinking about the future of AI. An excerpt: > [Ben Buchanan, Biden AI adviser:] What we're saying is: We were building a foundation for something that was coming that was not...
https://x.com/hamandcheese/status/1858897287268725080 > "The annual report of the US-China Economic and Security Review Commission is now live. 🚨 > > Its top recommendation is for Congress and the DoD to fund a Manhattan Project-like program to race to AGI."
I am an artist. “Eleven evil wizard schoolgirls in an archduke's library, dressed in red and black Asmodean schoolgirl uniforms, perched on armchairs and sofas”[1] Sigh, at least it's not more catgirls. I don't even draw them well. I stretched briefly before starting this one, my arms reaching as far...
Surprised I couldn't find this anywhere on LessWrong, so I thought I'd add it. It seems like there would be some alignment implications of LLM behavior changing over time, at the least gaining a bit more context. Someone else I spoke to about this immediately deflated it with regard to some sort...
There is some discussion on the forum about using AI to detect whether or not something is a deepfake, and perhaps some trust that anti-deepfake bots will be better resourced, etc., in this arms race. But could we give ourselves a bit of breathing room here? Could it be incredibly...
Hello, I have been engaged with EA for about four years, first at university and then in ops. I am now trying to contribute to AGI Alignment non-technically, and learning about it so I can be the best support. I am in that phase of emotionally confronting the seemingly likely drastic changes of the next few...