Who says you contribute to the pool at the same rate you'd contribute to your own children? Surely other people in the pool would have different priorities than you, wouldn't they? What if there are N people in the pool and you contribute 1/5N to the children in each pool?
Add to that the fact that maybe you only have one standout chromosome, and you could easily see a situation where genetic analysis of the population in your family + your pool shows a sudden disappearance of 90% of your genes with a proliferation of 5% of your genes. Is that equivalent t...
Maximizing the amount of your genetic material in the (near) future is my null hypothesis. I don't think it's totally accurate, but in the absence of a good understanding of which parts of our genetic material produce the non-quantifiable traits we care about (things like the shape of one's smile, personality, taste in food, overall "mood"), I expect people to be reluctant to trade off genetic density at rates greater than ~25-60%.
The alternative extreme hypothesis would be a "parent" who wants to maximize their "children's" traits to the point where they'd prefer 0% genetic inheritance if the resultant child would be superior in some respect.
That comparison misses something crucial: the density of genetic material passed on. Each generation represents a dilution of the first parent's genetic material with non-kin, but also the potential for increased numbers of descendants at each generation. By the time your family would be producing your great-grandkids, there could be two dozen or more of your direct descendants.
With chromosomal selection you're trading off a massive amount of genetic saturation: essentially getting the percentage genetic inheritance of a great...
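The dilution-versus-fan-out arithmetic above can be sketched as a toy calculation. The function name, the branching factor, and the assumption that each generation halves your expected genomic share are my illustrative assumptions, not anything from the original comment:

```python
# Toy model: at generation g, a direct descendant is expected to carry
# roughly 1/2**g of your genome, while the number of descendants can
# grow by some branching factor (children per person) each generation.

def descendant_genetics(generations, children_per_person):
    """Return (descendant_count, fraction_each, total_genome_equivalents)."""
    count = children_per_person ** generations
    fraction_each = 0.5 ** generations       # expected share of your genome
    total = count * fraction_each            # summed "genome-equivalents"
    return count, fraction_each, total

# Great-grandkids (g=3), assuming 3 children per person per generation:
count, frac, total = descendant_genetics(3, 3)
print(count, frac, total)  # -> 27 0.125 3.375
```

So with three children per person, by the great-grandchild generation you could indeed have over two dozen direct descendants, each carrying only about an eighth of your genome, but summing to more than three "copies" of your genetic material in aggregate.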
Regarding Dragon Magazine: it would often publish content for Dungeons and Dragons that was more hurried and of slightly lower quality. This led to it being treated as a sort of pseudo-third-party or beta source of monsters and player options.
People in online communities would frequently describe options as being "from Dragon Magazine" or "Dragon content" in order to forewarn people of content that may not have been given a thorough editing/game-balance pass. As such, that phrase was very prevalent in online forums for D&D discussion, which, as I understand it, show up a lot in the training data.
...While it's probably true that copyright/patent/IP law in general helps "preserve the livelihood of intellectual property creators," it's a mistake IMO to see this as more than merely instrumental in preserving incentives for more art/inventions/technology which, but for a temporary monopoly (IP protections), would be financially unprofitable to create. Additionally, this view ignores art consumers, who outnumber artists by several orders of magnitude. It seems unfair to orient so much of the discussion of AI art's effects on the smaller group of
Actually, you've got it backwards. So-called intellectual property lacks the typical attributes of property:
– exclusivity: if I take it from you, you don’t have it anymore
– enforceability: it’s not trivial to even find out my “art was stolen”
– independence: I can violate your IP by accident even if I have never seen any of your works (typical for patents); this can't happen with proper property
– clear definition: you usually don’t need courts to decide whether I actually took your car or not.
Besides that, IP is in direct conflict with proper property rights ...
Am I the only person who thinks AI art still looks terrible? I see all these posts talking about how amazing AI art is and sharing pictures and they just look...bad?
Some people feel this way, but I've run this test, and with good prompts that play to AI's strengths, most people just can't tell. Also, people don't cherry-pick results enough; some images are just excellent, even if the modal image is a good bit jank.
Write semi-convincingly from the perspective of a non-mainstream political ideology, religion, philosophy, or aesthetic theory. The token weights are too skewed towards the training data.
This is something I've noticed GPT-3 isn't able to do, after someone pointed out to me that GPT-3 wasn't able to convincingly complete their own sentence prompts because it didn't have that person's philosophy as a background assumption.
I don't know how to put that in terms of numbers, since I couldn't really state the observation in concrete terms either.
When Dath Ilan kicks off their singularity, all the Illuminati factions (keepers, prediction market engineers, secret philosopher kings) who actually run things behind the scenes will murder each other in an orgy of violence, fracturing into tiny subgroups as each of them tries to optimize control over the superintelligence. To do otherwise would be foolish. Binding arbitration cannot hold across a sufficient power/intelligence/resource gap unless submitting to binding arbitration is part of that party's terminal values.
"This is to help you, yes you, stop spinning stories where everyone is competent and things are done for sensible reasons."
I'll take that and throw it right back your way. You will never be able to predict the actions of authority figures if you assume them to be incompetent instead of malicious. When malice is the best-fit curve for the data, you should update your model. The purpose of school shooter interventions is to exercise authority and keep people afraid, not to prevent school shootings. Same for NPIs. Paxlovid is illegal because its legality would result in a decrease in power for authorities.
Thoughtfulness, pro-sociality, and conscientiousness have no bearing on people's ability to produce aligned AI.
They do have an effect on people's willingness to not build AI in the first place, but the purpose of working at Meta, OpenAI, and Google is to produce AI. No one who is thoughtful, pro-social, and conscientious is going to decide not to produce AI while working at those companies and still keep their job.
Hence, discouraging those sorts of people from working at those companies produces no net increase in P(doom).
If you want to avoid building unaligned AI, you should avoid building AI.