Most people on this website are unaligned.
A lot of the top AI people are very unaligned.
While it's probably true that copyright/patent/IP law generally does, in effect, help "preserve the livelihood of intellectual property creators," it's a mistake IMO to see this as anything more than instrumental: IP protections preserve the incentive to create art/inventions/technology that, absent a temporary monopoly, would be financially unprofitable to produce. Additionally, this view ignores art consumers, who outnumber artists by several orders of magnitude. It seems unfair to orient so much of the discussion of AI art's effects around the much smaller group of people who currently create art.
I think you've got this precisely backwards. The concept of IP laws as such only makes sense in a deontological framework where the fruits of intellectual labor belong to the individual who produced them. Otherwise, instead of complicated rules about temporary monopolies and intellectual property, the government would simply allow any use that could be proven in court to be net positive in utility, regardless of the wishes of the original creator.
Whether or not you think this is a bad idea, I think it's clear that society at large doesn't agree with the framework you've proposed for evaluating IP and copyright.
Am I the only person who thinks AI art still looks terrible? I see all these posts talking about how amazing AI art is and sharing pictures, and they just look... bad?
Write semi-convincingly from the perspective of a non-mainstream political ideology, religion, philosophy, or aesthetic theory. The token weights are too skewed towards the training data.
This is something I've noticed GPT-3 isn't able to do. Someone once pointed out to me that GPT-3 couldn't convincingly complete their own sentence prompts, because it didn't have that person's philosophy as a background assumption.
I don't know how to put that in terms of numbers, since I couldn't really state the observation in concrete terms either.
When Dath Ilan kicks off their singularity, all the Illuminati factions (keepers, prediction market engineers, secret philosopher kings) who actually run things behind the scenes will murder each other in an orgy of violence, fracturing into tiny subgroups as each of them tries to optimize control over the superintelligence. To do otherwise would be foolish. Binding arbitration cannot hold across a sufficient power/intelligence/resource gap unless submitting to binding arbitration is part of that party's terminal values.
"This is to help you, yes you, stop spinning stories where everyone is competent and things are done for sensible reasons.."
I'll take that and throw it right back your way. You will never be able to predict the actions of authority figures if you assume them to be incompetent instead of malicious. When malice is the best-fit curve for the data, you should update your model. The purpose of school shooter interventions is to exercise authority and keep people afraid, not to prevent school shootings. The same goes for NPIs. Paxlovid is illegal because its legality would result in a decrease in power for authorities.
Regarding Dragon Magazine: it would often publish Dungeons and Dragons content that was produced more hurriedly and was of slightly lower quality. This led to it being treated as a sort of pseudo-third-party or beta source of monsters and player options.
People in online communities would frequently describe options as being "from Dragon Magazine" or "Dragon content" in order to forewarn others that the material may not have been given a thorough editing/game-balance pass. As such, that phrase was very prevalent in online forums for D&D discussion, which, as I understand it, show up a lot in the training data.