News of the new models percolates slowly through the US government and beyond.
A well-fleshed-out scenario, but this kind of assumption is always a dealbreaker for me.
Why would the government not be aware of the development of the mightiest technology and weapon ever created if "we" are aware of it?
Could you please elaborate on why you chose the "stupid and uninformed government" scenario, instead of the more plausible one where the government knows exactly what is going on at every step of the process and is the driving force behind it?
For the majority of human history we lived in a scarcity market for food. We searched for what tasted good, but there was never enough to fill the void. Only the truly elite could afford to import enough food to reach the point of excess.
Humanity: ~300,000 years. Agriculture: ~12,000 years. We have been hunter-gatherers for the vast majority of human history.
Yes, but as I wrote in my answer to habryka (see below), I am not talking about the present moment. I am concerned with the (near) future. At the breakneck speed at which AI is moving, it won't be long before it is hopeless to figure out whether content is AI-generated or not.
So my point, and rhetorical question, is this: AI is not going to go away. Everyone(!) will use it, all day, every day. So instead of trying to come up with arbitrary formulas for how much AI-generated content a post can or cannot contain, how can we use AI to the absolute limit to increase the quality of posts and make LessWrong even better than it already is?
I know the extremely hard work that a lot of people put into writing their posts, and that the moderators are doing a fantastic job at keeping the standards very high, all of which is much appreciated. Bravo!
But I assume that this policy change is forward-looking, and that is what I am talking about: the future. We are at the beginning of something truly spectacular that has already yielded results in certain domains that are nothing less than mind-blowing. Text generation is one of the fields that has seen extreme progress in just a few years. If this progress continues (a likely assumption), text generation will very soon be as good as or better than the best human writers in pretty much any field.
How do you as moderators expect to keep up with this progress if you want to keep the forum "AI-free"? Is there anything more concrete than a mere policy change that could be done to nudge people into NOT posting AI-generated content? IMHO, LessWrong is a competition in clever ideas and smartness, and I think it is fair to assume that if you can get help from AI to reach "Yudkowsky-level" smartness, you will use it no matter what. It's just like athletes using PEDs to get an edge. Winning >> Policies
I understand the motive behind the policy change, but it's unenforceable and carries no sanctions. In 12-24 months I guess it will be very difficult (if not impossible) to detect AI spamming. The floodgates are open, and you can only appeal to people's willingness to have a real human-to-human conversation. But perhaps those conversations are not as interesting as talking to an AI? Those who seek peer validation for their cleverness will use all available tools in doing so, no matter what the policy is.
I unfortunately believe that such policy changes are futile. I agree that right now it's possible (though not 100% reliable by any means) to detect a sh*tpost, at least within a domain I know fairly well. But remember that we are just at the beginning of Q2 2025. Where will we be with this in Q2 2026, or Q2 2027?
There is no other defense against the oncoming AI forum slaughter than people finding it more valuable to express their own true opinions and ideas than to copy-paste or let an agent talk for them.
No policy change is needed, a mindset change is.
Oh, I mean "required" as in: to get a degree in a certain subject, you need to write a thesis as your rite of passage.
Yes, you are right. Adapt or die. AI can be a wonderful tool for learning, but as it is used right now, where everyone has to claim they don't use it, it's beyond silly. I guess there will be some kind of reckoning soon.
Yes, sometimes they are slow, other times they are fast. A private effort to build a nuke or go to the moon in the time frames they did would not have been possible. AFAIK, everyone agrees with the assumption that Chinese AI development is government-directed, but for some very strange reason people like to think that US AI development is directed by a group of quirky nerds who want to save the world and just happen to have gotten their hands on a MASSIVE amount of compute (worth billions upon billions of dollars). Imagine when the government gets to hear what these nerds have been up to in a couple of years...
IF there is any truth to how important the race to AGI/ASI is to win,
THEN governments are the key players in that race.