I received an automatic rejection for an article. I have been writing on LessWrong since the
beginning.
I did use AI to help write the article, which is why it was rejected, but I also spent considerable time working on it. I also used AI to help write an article I published in January, which now has 156 karma.
https://www.lesswrong.com/posts/kLvhBSwjWD9wjejWn/precedents-for-the-unprecedented-historical-analogies-for-1
This new article is about why AI is going to destroy most jobs, and as
I wrote in it, "This essay was written with help from AI. If I could
not use AI productively to improve it, that would undermine either my
argument or my claim to expertise."
Finally, I had a stroke two years ago and have come to rely on AI when
writing. Please allow me to publish this article.
James Miller
Professor of Economics, Smith College
Man, I am really sorry about the stroke.
The current rule is that you can use heavily AI-assisted writing; you just need to put it into an LLM content block:
Like this
We don't evaluate content within LLM content blocks for LLM writing. You can give it a title that indicates substantial co-authorship.
That doesn't seem to work. I put it in an LLM block, but now it says I already published the article, although I can't find it on the website. So unless there is a delay, something has gone wrong.
Ah, yeah, I don't think we are handling edits gracefully, and will look into how we can improve that at a process level, but also, at least for your latest post I am seeing this:
[screenshot: the post with an empty LLM content block at the top]
I.e., you inserted an LLM content block at the top but didn't actually wrap anything with it, so it wouldn't end up being picked up by our systems.
The problems you are running into might actually be a great fit for our LLM-assistant integration. We have infrastructure so that an LLM can insert and edit arbitrary content in posts (both inside and outside of LLM content blocks), so this might let you deal with this stuff much more easily. You can click this button in the editor to open a Claude chat with our suggested prompt (though pasting the prompt into many other models should also work):
[screenshot of the editor button that opens the LLM-assistant chat]
Yes, I did that. Then I got some more help from ChatGPT, made the LLM block first, and put the text into it. I got something saying I had already submitted (published?) the article. My plan is to change the title and try again tomorrow.
I'd recommend simply making a new post; we currently don't have infrastructure set up for automatically re-evaluating posts that have previously been rejected.
The new rule and the logic behind it are here: New LessWrong Editor! (Also, an update to our LLM policy.)
The new rule is based on some fairly subtle but important considerations around epistemic pollution from letting LLMs think for you. There's also, I think, an issue with decoupling the signal of good writing from the content of good ideas. It's already pretty hard to find the good ideas on LW even when you can use bad writing as a filter.
Like Habryka said, there's an easy route within the rules: put it all in an LLM block. I also recommend that you describe what you did pretty thoroughly at the top, so people know what role you played in generating the ideas and refining them through the writing process. (You might even describe it more thoroughly in a collapsible block or at the bottom for those who want the gory details. I do; I think LLM-assisted writing can range from almost entirely using human judgment on the ideas, all the way to letting the LLM create and judge the ideas/claims - which is bad since LLMs have bad metacognitive skills relative to humans IMO.)
Then let the readers decide! (Unfortunately, low vote totals might just mean that few people clicked on it, rather than that many people read it and objected to the LLM assistance.)
I tried the LLM block route and it didn't work because (I think) the system thought I had already submitted the article. I will change the title and try again tomorrow. The LLM block route is not easy for someone who isn't a programmer and doesn't know what was meant by it, although ChatGPT helped me figure it out. I think AI+Humans outperform humans at metacognitive skills, certainly for humans who have some brain damage (as I do).
I agree that AI+human is better than human alone, including for metacognitive skills - IF it's used skillfully. And that includes people without brain damage (sorry for the stroke and glad you're still able to engage intellectually!).
LLMs can be nearly as good for checking your thinking as another expert human, and better than non-expert humans - but only if you prompt them carefully for generating a variety of audience-relevant pushback and counterarguments, then make your own judgment about which are valuable/valid.
(Since you didn't mention including the precise description of your methods, I'm once again going to strongly encourage you to do so. I expect the piece to go largely unread if it just says "LLM written" without explanation. We've got too much to read and have to make judgments somehow!)
I asked ChatGPT to tell me how I use it to help write papers, and this is what it output: You use the system as a constrained collaborator embedded at specific points in the writing process, not as an end-to-end author.
The control mechanism is simple: you never accept output that violates your constraints. Selection, rejection, and repeated tightening replace reliance on any single response.
That makes sense. I do think that LLM-assisted writing can be very good if it's used carefully in a process like that. I just looked and saw that you wrote Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks. I was blown away by how comprehensive and thorough that was. It makes sense that you could only pull that off by using LLMs as collaborators; I doubt you could've written such a piece without that help (nobody else has, even though it's a valuable contribution).
Except for point 1 in the recipe above, "idea generation". Putting in a thesis and then having an LLM come up with ideas for how to support it sounds like exactly how you confuse yourself and everyone else. It's asking for sycophancy and good-sounding but ultimately wrong arguments. At the least you'd want to do a round of adversarial critique right away, rather than investing in writing a whole article based on a decent-sounding argument that might be wrong.
This is very different from academic practice, in which the whole goal is to create a decent argument even if it might be wrong. That's an advocacy system like law, in which you assume that some other researcher will spend just as much time debunking your argument if it's wrong.
But that's a much worse system than asking everyone to act as their own critic before asking everyone else to read their arguments.
Which is the point of the LW credo: write to inform, not to persuade.
I think it's worth looking at the new guidelines and the discussion.
Should we be concerned that the invasion of Ukraine will cause massive starvation among poor people? Food prices are already high. Russia and Ukraine are major exporters of food. Russia produces key inputs for fertilizer. China is reducing fertilizer exports. I fear that the US might restrict exports of food to keep domestic US food prices from rising.
The invasion itself doesn't reduce Russian fertilizer exports. The better question seems to be whether we should be concerned that the sanctions will cause massive starvation among poor people.
It looks like Western nations are sacrificing the lives of poor Africans, who will starve for lack of food, in order to send a stronger signal against Russia.
Edit: While I still think that the sanctions are a bigger deal, since Russia exports more food and fertilizer, the war itself does reduce Ukrainian food exports as well.
The invasion of Ukraine might cause a famine because of restrictions on food and energy exports from Russia and Ukraine, reduced planting in Ukraine, and reduced fertilizer production. Below are some steps that could be taken to mitigate the famine:
Anything else that can be done? I'm not sure it's optimal to make reducing famine a significant goal for the United States, since the government might respond with price controls that make the famine worse.
Today is Mountain Day at Smith College, where I work. Once during the fall semester the President of Smith College will pick a day and announce in the morning that all classes and appointments are cancelled. Students love Mountain Day, far more than they would if the day had been announced in advance. I suspect we under-invest in fun surprises in our society.
Today I started taking Rapamycin for anti-aging purposes. I'm taking 3mg once a week. I will likely increase the dosage if I don't have negative symptoms. I got it through Anderson Longevity Clinic in Rhode Island. They required me to have one in-person visit, but I can do the rest virtually. My regular doctor and an online doctor refused to give me a prescription. I will update if I have any symptoms.
A human-made post-singularity AI would surpass the intellectual capabilities of ETs maybe 30 seconds after it did ours.
No, ETs have likely lived millions of years with post-singularity AI, and to the extent they aren't themselves AI, they have upgraded their cognitive capacity significantly.
In the now-deleted discussion about Sam Altman's talk to the SSC Online Meetup, there was strong disagreement about what Sam Altman might have said about UFOs. If you go to 17:14 of this discussion that Altman had with Tyler Cowen, you hear Altman briefly ask Cowen about UFOs. Altman says that UFOs have "gotten a lot and simultaneously not nearly enough attention."
Eric Weinstein & Michael Shermer: An honest dialogue about UFOs seems to me to cover the UFO topic well. The publicly available videos aren't great evidence for UFOs, but all the information we have about how parts of the US military claim to see UFOs is very hard to bring together into a coherent scenario that makes sense.
What is your estimate of the probability that UFOs (whatever they are) will cause human extinction? Or prevent it?
I think that if UFOs are aliens, they on net increase our chance of survival. I mostly think Eliezer is right about AI risks; if the aliens are here, they clearly have the capacity to kill us but are not doing so, and they would likely not want us to create a paperclip maximizer. They might stop us from creating a paperclip maximizer by killing us, but then we would have been dead anyway if the aliens didn't exist. It's also possible that the aliens will save us by preventing us from creating a paperclip maximizer.
It's extremely weird that atomic weapons have not been used in anger since WW II, and we know that humanity got lucky on several occasions. UFOs seem to like to be around ships that have nuclear weapons and power, so I assign some non-trivial probability to aliens having saved us from nuclear war.
As to the probability assessment, this is my first attempt, so don't put a lot of weight on it: if there are no aliens, a 75% chance (my guess; I don't know Eliezer's) that we destroy ourselves. I put UFOs being aliens at 40%, and, conditional on that, say a 30% chance they would save us from killing ourselves and a 3% chance they would choose to destroy us in a situation in which we wouldn't have done it to ourselves.
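Spelling out the arithmetic these numbers imply (a minimal sketch of my own; I'm assuming the 30% trims only the self-destruction branch and the 3% applies to the branch where we'd otherwise survive):

```python
# Rough arithmetic for the estimate above. The branch assignments for the
# 30% and 3% figures are interpretive assumptions, not from the comment.
p_self_doom = 0.75  # chance we destroy ourselves, absent aliens
p_aliens = 0.40     # chance UFOs are aliens
p_saved = 0.30      # given aliens: chance they avert our self-destruction
p_killed = 0.03     # given aliens: chance they destroy us when we wouldn't have

# Given aliens: self-destruction goes through 70% of the time, and a 3%
# destruction risk is added to the branch where we'd otherwise survive.
p_doom_given_aliens = p_self_doom * (1 - p_saved) + (1 - p_self_doom) * p_killed

p_doom = (1 - p_aliens) * p_self_doom + p_aliens * p_doom_given_aliens
print(f"{p_doom:.3f}")  # 0.663, versus 0.750 if aliens are out of the picture
```

So on these numbers, the possibility that UFOs are aliens shaves roughly nine percentage points off the chance of extinction.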
Seems reasonable.
I think that if UFOs are not aliens but some other weird thing, like non-organic life forms with limited intelligence or glitches in the matrix, it will be bad for us. I expect that non-alien weird UFOs have a higher probability than alien UFOs.
Such life forms might either have the capability to kill humans en masse in situations we can't rationally predict, or humans will learn how to use them and create new weapons that can travel instantly through space and cause new types of damage.
It's extremely weird that atomic weapons have not been used in anger since WW II, and we know that humanity got lucky on several occasions. UFOs seem to like to be around ships that have nuclear weapons and power, so I assign some non-trivial probability to aliens having saved us from nuclear war.
Bird watchers also tend to see more birds.
I'd imagine there are more sensors and eyeballs looking at the skies around high-security facilities, and thus more UAP sightings.
What is the name of this bias?
The Interpretability Paradox in AGI Development
The ease or difficulty of interpretability, the ability to understand and analyze the inner workings of AGI, may drastically affect humanity's survival odds. The worst-case scenario might arise if interpretability proves too challenging for humans but not for powerful AGIs.
In a recent podcast, academic economists Robin Hanson and I discussed AGI risks from a social science perspective, focusing on a future with numerous competing AGIs not aligned with human values. Drawing on human analogies, Hanson considered the inherent difficulty of forming a coalition where a group unites to eliminate others to seize their resources. A crucial coordination challenge is ensuring that, once successful, coalition members won't betray each other, as occurred during the French Revolution.
Consider a human coalition that agrees to kill everyone over 80 to redistribute their resources. Coalition members might promise that this is a one-time event, but such a promise isn't credible: a coalition that has already violated property-rights norms once can always redraw the line. It would likely be safer for everyone not to violate those norms for short-term gains.
In a future with numerous unaligned AGIs, some coalition might calculate it would be better off eliminating everyone outside the coalition. However, they would have the same fear that once this process starts, it would be hard to stop. As a result, it might be safer to respect property rights and markets, competing like corporations do.
A key distinction between humans and AGIs could be AGIs' potential for superior coordination. AGIs in a coalition could potentially modify their code so that, after the coalition has violently taken over, no member would ever want to turn on the others. An AGI coalition thus wouldn't have to fear that a revolution it starts would ever eat its own. This possibility raises a vital question: will AGIs possess the interpretability required to achieve such feats?
The best case for AGI risk is if we solve interpretability before creating AGIs strong enough to take over. The worst case might be if interpretability remains impossible for us but becomes achievable for powerful AGIs. In this situation, AGIs could form binding coalitions with one another, leaving humans out of the loop, partly because we can't become reliable coalition partners and our biological needs involve maintaining Earth in conditions suboptimal for AGI operations. This outcome creates a paradox: if we cannot develop interpretable AGIs, perhaps we should focus on making them exceptionally difficult to interpret, even for themselves. In this case, future powerful AGIs might prevent the creation of interpretable AGIs because such AGIs would have a coordination advantage and thus be a threat to the uninterpretable AGIs.
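To make the "binding coalition" idea concrete, here is a minimal sketch in the spirit of program equilibrium (my own illustration; the agent names and the all-or-nothing source check are simplifying assumptions). If agents are fully interpretable to each other, each can condition cooperation on the other's actual code, so the commitment never to betray is verified rather than merely promised:

```python
# Minimal sketch: mutual interpretability as a commitment device.
import inspect

def mirror_cooperator(opponent_source: str) -> str:
    """Cooperate only with an agent running this exact, inspectable strategy."""
    my_source = inspect.getsource(mirror_cooperator)
    return "cooperate" if opponent_source == my_source else "defect"

def defector(opponent_source: str) -> str:
    """Betray regardless of what the opponent's code says."""
    return "defect"

def play(agent_a, agent_b):
    # Full interpretability: each agent reads the other's source before moving.
    return (agent_a(inspect.getsource(agent_b)),
            agent_b(inspect.getsource(agent_a)))

print(play(mirror_cooperator, mirror_cooperator))  # ('cooperate', 'cooperate')
print(play(mirror_cooperator, defector))           # ('defect', 'defect')
```

Humans can't expose their own "source code" for this kind of check, which is one sense in which we can't become reliable coalition partners; and an AGI that is opaque even to itself can't pass the check either, which is the flip side the paradox turns on.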
A human-made post-singularity AI would surpass the intellectual capabilities of ETs maybe 30 seconds after it did ours.
My guess is that aliens have either solved the alignment issue and are post-singularity themselves, or will stop us from having a singularity. I think any civilization capable of building spaceships will have explored AI, but I could just lack the imagination to consider otherwise.