I think there is a major flaw in the setup of "ForumMagnum" (the software that LessWrong and the EA Forum run on) that causes us to lose, or scare away, many great safety researchers and authors:
It's actually ridiculous: you can double-downvote multiple new posts even WITHOUT opening them! For example here (I tried it on one post and then removed the double downvote; please be careful, or just take my word that it is sadly possible to ruin multiple new posts each day this way). UPDATE: My bad, what I found is actually double-downvoting a particular tag to remove it from a post, but the problem remains: sadistic people (or malicious bots/AI agents) can open new posts and double-downvote them en masse without reading them at all! https://www.lesswrong.com/w/ai?sortedBy=new
If someone in a bad mood gives your new post a "double downvote" because of a typo in the first paragraph, or because a cat stepped on their mouse, then even if you solved alignment, people will ignore the post sitting at "-1" karma. We will scare that genius away and probably create a supervillain instead.
Why not at least ask people why they downvote? It would really help authors improve their posts. I think some people downvote without reading because of a bad title or another easy-to-fix thing.
Sadly, most people ignore posts sitting at "-1" karma.
If someone downvotes (especially double-downvotes), the UI should ask "Why?", offer some common reasons, and allow sending anonymous or public feedback (a comment).
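A minimal sketch of how such a prompt could be wired up, purely illustrative (the types and function names below are hypothetical and are not part of ForumMagnum):

```typescript
// Hypothetical sketch of the proposed flow; none of these names exist in
// ForumMagnum -- they only illustrate the idea of asking "Why?" on a downvote.

type DownvoteReason =
  | "unclear title"
  | "too long"
  | "factual error"
  | "off-topic"
  | "low effort"
  | "other";

interface DownvoteFeedback {
  postId: string;
  strong: boolean;            // true for a "double" (strong) downvote
  reason?: DownvoteReason;    // picked from a short list of common reasons
  comment?: string;           // optional free-text feedback for the author
  anonymous: boolean;         // author sees the feedback but not the voter
}

// Called by the UI after the vote is registered; the prompt stays skippable
// so voting remains cheap, but strong downvotes nudge harder for a reason.
async function promptForDownvoteReason(
  postId: string,
  strong: boolean,
  askUser: (required: boolean) => Promise<Partial<DownvoteFeedback>>
): Promise<DownvoteFeedback> {
  const answer = await askUser(/* required= */ strong);
  return {
    postId,
    strong,
    anonymous: answer.anonymous ?? true,
    reason: answer.reason,
    comment: answer.comment,
  };
}
```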
It sometimes feels like some people get sadistic pleasure out of downvoting everything and everyone.
For example, X lets you "downvote" in a more civilized way: by commenting, unfollowing, muting, or flagging if something is really egregious.
I, for one, almost stopped writing here because of anonymous double downvotes on long articles that took days to write (usually, if you randomly get a downvote early instead of an upvote and your post sits at "-1" karma, no one else will open or read it). I have no idea what most of those anonymous double downvoters didn't like.
Some of my articles take 40 minutes to read, so it could be anything; downvotes give me zero information and just demotivate me more and more.
I suspect it's often something in the title or the first paragraph. That was the case with one of my posts where I politely asked downvoters to at least comment on why they downvoted (the post somehow got 20 downvotes from 7 people, because a commenter catastrophized that my polite request would destroy the voting system, and his followers rage-downvoted me :-) It's not his fault and it wasn't his intention, but it's strange and majorly demotivating, as you can imagine).
Thank you for reading!
usually, if you randomly get a downvote early instead of an upvote and your post sits at "-1" karma, no one else will open or read it
I will say that I often do read posts downvoted to -1. I will also say that much of the time the downvote is deserved, despite how noisy a signal it may be.
Some of my articles take 40 minutes to read, so it could be anything; downvotes give me zero information and just demotivate me more and more.
I think you should try writing shorter posts, both for your sake (so you get more targeted information) and for the readers' sake.
sadistic people (or malicious bots/AI agents) can open new posts and double-downvote them en masse without reading them at all!
We do alt-account detection and mass-voting detection. I am quite confident we would reliably catch any attempts at this, and that this hasn't been happening so far.
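For readers wondering what "mass-voting detection" could look like in practice, here is a toy heuristic, not the actual ForumMagnum logic (all names below are hypothetical): flag a voter who downvotes many distinct authors' new posts within a short window.

```typescript
// Toy heuristic, not the real detection code: flag a user who casts many
// downvotes on distinct authors' new posts within a short time window.

interface VoteEvent {
  voterId: string;
  postAuthorId: string;
  postAgeHours: number;   // how old the post was when the vote was cast
  power: number;          // negative for downvotes, e.g. -2 for a strong one
  timestampMs: number;
}

function looksLikeMassDownvoting(
  votes: VoteEvent[],
  windowMs = 60 * 60 * 1000,  // one hour
  threshold = 10              // downvotes on distinct authors in the window
): boolean {
  const now = Date.now();
  const recentDownvotes = votes.filter(
    v => v.power < 0 && v.postAgeHours < 24 && now - v.timestampMs < windowMs
  );
  const distinctAuthors = new Set(recentDownvotes.map(v => v.postAuthorId));
  return distinctAuthors.size >= threshold;
}
```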
Why not at least ask people why they downvote? It would really help authors improve their posts. I think some people downvote without reading because of a bad title or another easy-to-fix thing.
Because this would cause people to basically not downvote things, drastically reducing the signal-to-noise ratio of the site.
Some ideas, please steelman them:
The elephant in the room: even if the current major AI companies align their AIs, there will be hackers (who can create viruses with an agentic AI component to steal money), rogue states (which can use AI agents to spread propaganda and to spy), and militaries (AI agents in drones and for hacking infrastructure). So we need to align the world, not just the models:
Imagine an agentic AI botnet starts to spread across user computers and GPUs. I call it the agentic explosion; it's probably going to happen before the "intelligence-agency" explosion (intelligence on its own cannot explode: without GPUs, an LLM is a static geometric shape, a bunch of vectors). Right now we are hopelessly unprepared. We won't have time to create "agentic AI antiviruses".
Force GPU and OS providers to update their firmware and software to at least have robust, updatable blacklists of bad (agentic?) AI models, and robust whitelists in case unaligned models become so numerous that blacklists are useless (a minimal sketch of such a check is at the end of these ideas).
We can force NVIDIA to replace agentic GPUs with non-agentic ones. Ideally those non-agentic GPUs would be like sandboxes that run an LLM internally and can only emit text or images as safe output. They probably shouldn't connect to the Internet or use tools, or at least we should be able to limit that if we need to.
This way NVIDIA will have skin in the game and be directly responsible for the safety of the AI models that run on its GPUs.
The same way Apple feels responsible for the App Store and the apps in it, and doesn't let malware through.
NVIDIA will want this because, like the App Store, it could potentially take a 15-30% cut from OpenAI and other commercial models, while free models remain free (like the free apps in the App Store).
Replacing GPUs could double NVIDIA's business, so they might even lobby for this themselves. All companies and CEOs want money and have obligations to shareholders to increase the company's market capitalization. We must make AI safety profitable. Companies that don't promote AI safety should go bankrupt or be outlawed.
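To make the blacklist/whitelist idea above a bit more concrete, here is a minimal sketch of a hypothetical firmware- or driver-level check that hashes model weights before loading them. No such NVIDIA or OS API exists today; the names below are assumptions that only illustrate the shape of the idea:

```typescript
// Illustrative sketch only: a driver/firmware-level check that hashes model
// weights and refuses to load anything on a signed blocklist (or, in the
// stricter variant, anything missing from an allowlist).

import { createHash } from "crypto";
import { readFileSync } from "fs";

type Policy = "blocklist" | "allowlist";

function hashModelWeights(path: string): string {
  // In reality the weights would be streamed and the lists cryptographically
  // signed and regularly updated; a single SHA-256 keeps the sketch short.
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

function mayLoadModel(
  weightsPath: string,
  knownBadHashes: Set<string>,
  knownGoodHashes: Set<string>,
  policy: Policy
): boolean {
  const digest = hashModelWeights(weightsPath);
  if (policy === "allowlist") {
    // If unaligned models proliferate, only explicitly vetted models run.
    return knownGoodHashes.has(digest);
  }
  // Default: refuse only models that vendors/regulators have flagged.
  return !knownBadHashes.has(digest);
}
```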
AI safety is one of the most critical issues of our time, and sometimes the most innovative ideas come from unorthodox or even "crazy" thinking. I’d love to hear bold, unconventional, half-baked or well-developed ideas for improving AI safety. You can also share ideas you heard from others.
Let’s throw out all the ideas—big and small—and see where we can take them together.
Feel free to share as many as you want! No idea is too wild, and this could be a great opportunity for collaborative development. We might just find the next breakthrough by exploring ideas we’ve been hesitant to share.
A quick request: Let’s keep this space constructive—downvote only if there’s clear trolling or spam, and be supportive of half-baked ideas. The goal is to unlock creativity, not judge premature thoughts.
Looking forward to hearing your thoughts and ideas!
P.S. AI is moving fast; the last similar discussion was a month ago and was well received, so let's try again and see how the ideas have changed.