cousin_it

Wait, but you can't just talk about compensating content creators without looking at the other side of the picture. Imagine a business that sells a not-very-good product at too high a price. It pays Google for clever ad targeting and finds some willing buyers (who end up dissatisfied). The existence of such businesses is a net negative for the world, and it's enabled by ad targeting. And this might not be an edge case: depending on who you ask, most online ads might be for stuff you'd regret buying.

If the AI can rewrite its own code, it can replace itself with a no-op program, right? Or even if it can't, maybe it can choose/commit to do nothing. So this approach hinges on what counts as "shutdown" to the AI.

I don't know if we have enough expertise in psychology to give such advice correctly, or if such expertise even exists today. But for me personally, it was important to realize that anger is a sign of weakness. I should have a lot of strength and courage, but minimize signs of anger or any kind of wild lashing out. It feels like the best way to carry myself, both in friendly arguments, and in actual conflicts.

Yeah, it would have to be at least 3 individuals mating. And there would be some weird dynamics: the individual that feels less fit than the partners would have a weaker incentive to mate, because its genes would be less likely to continue. Then the other partners would have to offer some bribe, maybe by taking on more parental investment. Then maybe some individuals would pretend to be less fit, to receive the bribe. It's tricky to think about; maybe it's already been researched somewhere?

Cochran had a post saying that if you take a bunch of different genomes and make a new one by choosing the majority allele at each locus, you might end up creating a person smarter/healthier/etc. than anyone who ever lived, because most of the bad alleles would be gone. But to me it seems a bit weird: if the algorithm is so simple and the benefit is so huge, why hasn't nature found it?
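The "majority allele at each locus" construction is simple enough to sketch. This is a toy illustration under loudly labeled assumptions: genomes are modeled as flat sequences of alleles of equal length, and `modal_genome` is a name I made up, not anything from Cochran's post.

```python
import random
from collections import Counter

def modal_genome(genomes):
    """Build a consensus genome by taking the most common allele at each
    locus across the input genomes. Ties are broken arbitrarily."""
    return [Counter(alleles).most_common(1)[0][0] for alleles in zip(*genomes)]

# Toy example: 5 genomes over 6 loci, where 'A' is the common allele and
# 'a' is a rare (say, mildly deleterious) variant carried at 20% frequency.
random.seed(0)
genomes = [['a' if random.random() < 0.2 else 'A' for _ in range(6)]
           for _ in range(5)]
consensus = modal_genome(genomes)
# Because each rare allele is unlikely to reach majority at its locus, the
# consensus genome is mostly 'A', even though most individual genomes
# carry at least one 'a' somewhere.
```

The intuition the code makes concrete: rare bad variants almost never win a majority vote at their locus, so they drop out of the consensus, while every individual genome is likely to carry a few of them.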

Coming back to this idea again after a long time, I recently heard a funny argument against morality-based vegetarianism: no animal ever showed the slightest moral scruple against eating humans, so why is it wrong for us to eat animals? I go back and forth on whether this "Stirnerian view" makes sense or not.

Here's a debate protocol that I'd like to try. Both participants independently write statements of up to 10K words and send them to each other at the same time. (This can be done through an intermediary, to make sure both statements are sent before either is received.) Then they take a day to revise their statements, fixing the uncovered weak points and preemptively attacking the other's weak points, and send them to each other again. This continues for multiple rounds, until both participants feel they have expressed their position well and don't need to revise more, reaching a kind of Nash equilibrium. Then the final revisions of both statements are released to the public, side by side.

Note that in this kind of debate the participants don't try to change each other's mind. They just try to write something that will eventually sway the public. But they know that if they write wrong stuff that the other side can easily disprove, they won't sway the public. So only the best arguments remain, within the size limit.
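The intermediary in the protocol above exists only to guarantee that neither side sees the other's statement before committing to their own. As a side note, that role can also be played by a standard cryptographic commit-reveal scheme, which isn't part of the original proposal; here is a minimal sketch of one round, with `commit` and `reveal_ok` as illustrative helper names:

```python
import hashlib
import secrets

def commit(statement: str):
    """Return (commitment, opening). Publishing the commitment first binds
    the author to the statement without revealing it."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + statement).encode()).hexdigest()
    return digest, (nonce, statement)

def reveal_ok(commitment: str, opening) -> bool:
    """Check that a revealed (nonce, statement) pair matches the commitment."""
    nonce, statement = opening
    return hashlib.sha256((nonce + statement).encode()).hexdigest() == commitment

# One round: both sides publish commitments, then both reveal.
c_a, open_a = commit("Alice's statement for this round...")
c_b, open_b = commit("Bob's statement for this round...")
assert reveal_ok(c_a, open_a) and reveal_ok(c_b, open_b)
```

The random nonce prevents a participant from guessing the other's short statement and checking it against the hash; either mechanism (intermediary or commitments) gives the simultaneity the protocol needs.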

I think ideas like Nash equilibrium get their importance from predictive power: do they correctly predict what will happen in the real-world situation modeled by the game? For example, biological situations settle on game-theoretic equilibria even though the "players" aren't thinking at all.

In your particular game, saying "Nash equilibrium" doesn't really narrow down what will happen, as there are equilibria at all temperatures from 30 to 99.3. The 99 equilibrium in particular seems pretty brittle: if Alice breaks it unilaterally on round 1, then Bob notices that and joins in on round 2, neither of them ends up punished, and they get 98.6 from then on.

More generally, in any game like this where everyone's interests are perfectly aligned, I'd expect cooperation to happen. The nastiness of game theory really comes from the fact that some players can benefit by screwing over others. The game in your post doesn't have that, so any nastiness in such a game is probably an analysis artifact.

I don't see any group of people on LW running around criticizing every new idea. Most criticism on LW is civil, and most of it is helpful at least in part. And the small proportion that isn't helpful at all is still useful to me as a test: can I stop myself from overreacting to it?