All of Lycaos King's Comments + Replies

Thoughtfulness, pro-sociality, and conscientiousness have no bearing on people's ability to produce aligned AI. 

They do have an effect on people's willingness not to build AI in the first place, but the purpose of working at Meta, OpenAI, and Google is to produce AI. No one who is thoughtful, pro-social, and conscientious is going to decide not to produce AI while working at those companies and still keep their job.

Hence, discouraging those sorts of people from working at those companies causes no net increase in P(doom).

If you want to avoid building unaligned AI, you should avoid building AI.

Who says you contribute to the pool at the same rate you'd contribute to your own children? Surely other people in the pool would have different priorities than you, wouldn't they? What if there are N people in the pool and you contribute only 1/(5N) to each of the pool's children?

Add that to the fact, that maybe you only have one standout chromosome, and you could easily see a situation where genetic analysis of the population in your family + your pool shows a sudden disappearance of 90% of your genes with a proliferation of 5% of your genes. Is that equivalent t... (read more)

Well, it depends on how large the pool is, but unless other members of the pool have significantly higher polygenic scores than you, it's pretty likely you'll have roughly equal contributions. I suppose I'd have to do the math to see exactly how big an influence that would have.

Interesting hypothesis. This matches fairly well with my own observations, though that might just be because there is no way for a parent to have only a quarter of their DNA in any of their children. One interesting test might be to see whether grandparents favor grandchildren with more of their DNA over ones with less, since there can be variance among grandchildren but not among children.

Maximizing the amount of your genetic material in the (near) future is my null hypothesis. I don't think it's totally accurate, but in the absence of a good understanding of which parts of our genetic material produce the non-quantifiable traits we care about (things like the shape of one's smile, personality, taste in food, overall "mood"), I expect people to be reluctant to trade off genetic density at rates greater than ~25-60%.

The alternative extreme hypothesis would be a "parent" who wants to maximize their "children's" traits to the point where they'd prefer 0% genetic inheritance if the resultant child would be superior in some respect.

I've thought about this more and I don't think the downside you're pointing at exists unless the members of the pool have significantly fewer children than you do. Suppose there are N members in the pool and you contribute 1/N to the children in each pool. Then the next generation will still have the same amount of your DNA as they would if you conceived normally unless you have more children than other people in the pool. Genetic density doesn't really matter if your goal is maximizing the amount of your DNA out there. Also, if you DO care about maximizing the amount of your DNA out there, have you considered donating to a sperm bank?
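The arithmetic in this reply can be sketched out. This is a minimal sketch under assumptions of my own (not spelled out by the commenter): each "normal" child inherits 1/2 of a parent's genome, while each pool-conceived child draws half its genome evenly from the N pool members (so 1/(2N) from each) and the other half from a non-pool partner.

```python
# Hedged sketch of the pool-vs-normal-conception arithmetic above.
# Assumptions (mine): a normal child carries 1/2 of your genome; a pool
# child carries 1/(2*N) of it, with the other half coming from a
# non-pool partner.

def genome_equivalents_normal(children_per_person: int) -> float:
    """Total copies of your genome across your own children."""
    return children_per_person * 0.5

def genome_equivalents_pool(pool_size: int, children_per_member: int) -> float:
    """Total copies of your genome across all children of the pool."""
    total_children = pool_size * children_per_member
    return total_children * (1 / (2 * pool_size))

# With every pool member having the same number of children, the pool
# size N cancels out: N * c * 1/(2N) == c/2, the same as conceiving
# normally.
for n in (2, 5, 10):
    print(n, genome_equivalents_normal(3), genome_equivalents_pool(n, 3))
```

Under these assumptions the totals match for any pool size, which is the comment's point: the dilution per child is exactly offset by your presence in everyone else's children.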

That comparison misses something crucial, which is the density of genetic material passed on. Each generation represents a dilution of the first parent's genetic material with non-kin, but also has the potential for increased numbers of descendants at each generation. By the time your family would be producing your great-grandkids, they'd have the potential to have 2 dozen or more of your direct descendants.

With chromosomal selection you're trading off a massive amount of genetic saturation: essentially getting the percentage genetic inheritance of a great... (read more)
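The dilution-versus-proliferation tradeoff described here can also be put in rough numbers. This is a sketch under an assumption of my own: each line of descendants has a fixed number of children per generation, and the average fraction of your genome per descendant halves each generation.

```python
# Hedged sketch of dilution vs. proliferation across generations.
# Assumption (mine): `children_per_gen` children per descendant per
# generation, and each generation halves the average genome fraction.

def descendants_and_genome(generations: int, children_per_gen: int):
    count = children_per_gen ** generations   # number of descendants
    fraction_each = 0.5 ** generations        # of your genome, on average
    return count, count * fraction_each       # total genome-equivalents

# Great-grandchildren are generation 3: with 3 children per generation,
# that's 27 descendants ("2 dozen or more"), each carrying ~1/8 of your
# genome.
print(descendants_and_genome(3, 3))
```

So by the great-grandchild generation, proliferation can more than make up for per-descendant dilution, which is the potential the comment points at.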

That’s only a downside if the other people in the pool don’t plan on having kids. Also, is your goal really just to maximize the amount of your genetic material in the future? If so just focus on cloning yourself as much as possible. I personally like some things about myself and dislike other things. The whole point of embryo selection or any other genetic engineering is to have kids that are better than me. I’d like them to have some of the traits I admire about myself, but other than that I’m not very picky.

Regarding DragonMagazine: it would often publish Dungeons & Dragons content that was more hurried and of slightly lower quality. This led to it being treated as a sort of pseudo-third-party or beta source of monsters and player options.

People in online communities would frequently describe options as being "from Dragon Magazine" or "Dragon content" to forewarn others that the content may not have been given a thorough editing/game-balance pass. As such, that phrase was very prevalent in online forums for D&D discussion, which, as I understand it, show up a lot in the training data.

Would it have often been rendered as "DragonMagazine" with no space, though?  Searching the web for that string turns up very little.

Most people on this website are unaligned.

A lot of the top AI people are very unaligned. 

If anyone on this website had a decent chance of gaining capabilities that would rival or exceed those of the global superpowers, then spending lots of money/effort on a research program to align them would be warranted.

Do you dislike sports and think they're dumb? If so, you are in at least one small way unaligned with most of the human race. Just one example.

We sort of know this already; people are corruptible, etc. There are lots of things individual humans want that would be bad if a superintelligence wanted them too.
6 · the gears to ascension · 4mo
Any chance you'd be willing to go into more detail? It sounds like you're saying unaligned relative to the human baseline. I don't actually think I disagree a priori; I do think people who seek to have high agency have a tendency to end up misaligned with those around them and harming them, for basically exactly the same reasons as any AI that seeks to have high agency. It's not consistent, though, as far as I can tell; some people successfully decide (reach internal consensus) to have high agency towards creating moral good, and then proceed to successfully apply that decision to the world. The only way to know if this is happening is to do it oneself, of course, and that's not always easy. Nobody can force you to be moral, so it's up to you to do it, and there are a lot of ways one can mess it up, notably by accepting instructions or a worldview that sound moral but aren't. Claims that someone knows what's moral often come packaged with claims of exactly the type you and I are making here; "you're wrong and should change", after all, is a key way people get people to do things.

While it's probably true that copyright/patent/IP law generally in effect helps "preserve the livelihood of intellectual property creators," it's a mistake IMO to see this as more than merely instrumental in preserving incentives for more art/inventions/technology which, but for a temporary monopoly (IP protections), would be financially unprofitable to create. Additionally, this view ignores art consumers, who out-number artists by several orders of magnitude. It seems unfair to orient so much of the discussion of AI art's effects on the smaller group of

... (read more)
2 · Andrew Currall · 3mo
Yes, this is 100% backwards. The purpose of copyright law is to incentivise the production of art so that consumers of art can benefit from it. It incidentally protects artists' livelihoods, but that is absolutely not its main purpose. We only want to protect the livelihood of artists because humans enjoy consuming art; the consumption is the ultimate point. We don't have laws protecting the livelihood of people who throw porridge at brick walls, because we don't value that activity. We also don't have laws protecting the livelihood of people who read novels, because while lots of people enjoy doing that, other people don't value the activity. If we can get art produced without humans involved, that is 100% a win for society. In the short term it puts a few people out of work, which is unfortunate, but short-lived. The fact that AI art is vastly more efficiently produced than human art is a good thing that we should be embracing.

Actually, you have it backwards. So-called intellectual property lacks the typical attributes of property:

– exclusivity: if I take it from you, you don’t have it anymore

– enforceability: it's not trivial even to find out that my "art was stolen"

– independence: I can violate your IP by accident even if I have never seen any of your works (typical for patents); this can't happen with proper property

– clear definition: you usually don’t need courts to decide whether I actually took your car or not.

Besides that, IP is in direct conflict with proper property rights ... (read more)

Am I the only person who thinks AI art still looks terrible? I see all these posts talking about how amazing AI art is and sharing pictures and they just look...bad? 

Some people feel this way, but I've run this test, and most people just can't tell when the prompts are good ones that play to AI's strengths. Also, people don't cherry-pick results enough; some images are just excellent, even if the modal image is a good bit janky.

Write semi-convincingly from the perspective of a non-mainstream political ideology, religion, philosophy, or aesthetic theory. The token weights are too skewed towards the training data.

This is something I've noticed GPT-3 isn't able to do, after someone pointed out to me that GPT-3 wasn't able to convincingly complete their own sentence prompts because it didn't have that person's philosophy as a background assumption.

I don't know how to put that in terms of numbers, since I couldn't really state the observation in concrete terms either.

Have you tried doing this with longer prompts, excerpted from some philosopher's work or something? I've found it can do surprisingly well at matching tone on longer coherent prompts.
4 · Lone Pine · 10mo
You should state specific philosophies. I don't know what counts as mainstream enough.

When Dath Ilan kicks off their singularity, all the Illuminati factions (keepers, prediction market engineers, secret philosopher kings) who actually run things behind the scenes will murder each other in an orgy of violence, fracturing into tiny subgroups as each of them tries to optimize control over the superintelligence. To do otherwise would be foolish. Binding arbitration cannot hold across a sufficient power/intelligence/resource gap unless submitting to binding arbitration is part of that party's terminal values.

"This is to help you, yes you, stop spinning stories where everyone is competent and things are done for sensible reasons.."

I'll take that and throw it right back your way. You will never be able to predict the actions of authority figures if you assume them to be incompetent instead of malicious. When malice is the best-fit curve for the data, you should update your model. The purpose of school shooter interventions is to exercise authority and keep people afraid, not to prevent school shootings. Same for NPIs. Paxlovid is illegal because its legality would result in a decrease in power for authorities.

I think that Zvi has a pretty decent track record for predicting the actions of authority figures.