justinpombrio

Will you fly to the Sahel with a backpack full of antibiotics?

I imagine you suggesting this, a bunch of people nodding along in agreement, then no one doing it because of personal safety and because it's weird.

People often study on their own for the GREs or for the actuarial exams. In both cases the results are taken seriously, there are a ton of prep materials, and I think the exams are funded by a flat fee charged to test takers (which works thanks to the magic of economies of scale).

Person who doesn't know squat about poker here. The first example was clear: they disagreed about whether to fold or call, and it was right on the edge. Especially since where the other player was from was relevant.

It was long for someone who doesn't know poker though.

Wow, this got heated fast. Partly my fault. My assumptions were unwarranted and my model therefore unrealistic. Sorry.

I think we've been talking past each other. Some clarifications on my position:

  • I'm not suggesting that one only reclines if one is given permission to do so from the person behind them. I'm suggesting cooperation on an act that is controlled by one person but affects two people. If reclining is a minor convenience to the person in front, but would crush the legs of the person behind, it does not happen. If the person in front has a back problem and the person in back is short, reclining does happen.
  • None of the blame goes toward other passengers. The blame all goes to the airlines. If you want to recline but don't get to, that's the airline's fault. If you don't want the person in front of you to recline but they do, that's the airline's fault. They should make better seat arrangements. I would preferentially fly on an airline that didn't stuff me in like cattle. I'm all for protesting with you about this.

If you disagree with this, would you agree that if airplanes were naturally occurring rather than being engineered, then the decision of whether to recline should be a conversation between the two people it affects? If so, what breaks the symmetry between the two affected people, when the situation is engineered by the airline?

EDIT: Or, to get at my emotional crux, if my very long legs would be smooshed if you were to recline, and reclining was a minor convenience for you, would you say "ok, I won't recline, but let's be angry at the airline for putting us in this situation", or "nope I'm reclining anyways, blame the airline"?

EDIT: I no longer endorse this model.

Say that flights are on average 80% full, 20% of passengers are tall and will be miserable for the whole flight if they're reclined into, 50% of passengers want to recline, and planes are shaped like donuts so that every seat has a seat behind it.

If passengers behave like you, then 8% of passengers are miserable in exchange for 50% of passengers getting to recline. If passengers instead ask before reclining (or unrecline if asked to), then 0% of passengers are miserable and 42% get to recline. The passengers pick between these two situations.
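Here's a quick back-of-the-envelope sketch of where those numbers come from: a minimal Python calculation using only the toy percentages above (the variable names are mine, not anything from the original comment):

```python
# Toy model from above: donut-shaped plane, so every seat has a seat behind it.
p_occupied = 0.80  # flights are on average 80% full
p_tall     = 0.20  # fraction of passengers miserable if reclined into
p_recline  = 0.50  # fraction of passengers who want to recline

# Situation 1: everyone reclines without asking.
# A passenger is miserable if they are tall, the seat in front of them is
# occupied, and that person reclines.
miserable_1 = p_tall * p_occupied * p_recline       # = 0.08
recliners_1 = p_recline                              # = 0.50

# Situation 2: recliners ask first (or unrecline when asked), so no one is
# miserable; a would-be recliner is blocked only when the seat behind them
# is occupied by a tall passenger.
miserable_2 = 0.0
recliners_2 = p_recline * (1 - p_occupied * p_tall)  # = 0.42

print(f"Situation 1: {miserable_1:.0%} miserable, {recliners_1:.0%} recline")
print(f"Situation 2: {miserable_2:.0%} miserable, {recliners_2:.0%} recline")
```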

The second situation is better than the first. Should airlines not allow seats to recline, or increase spacing between seats by (say) 12% and thus increase ticket prices by (say) 8%, because passengers like you insist on choosing the first situation over the second?

An internet search suggests that extra legroom tends to cost $20-$100, typically in between, which matches what I remember seeing. Have you seen it for $10? If so I need to pay more attention next time and shell out the $10!

From the point of view of a powerful AI, Earth is infested with many nests of humans that could do damage to important things. At the very least it makes sense to permanently neuter their ability to do that.

That's a positive outcome, as long as said humans aren't unduly harmed, and "doing damage to important things" doesn't include say, eating plants or scaring bunnies by walking by them.

An AI that maximises total group reward function because it cares only for its own reward function, which is defined as “maximise total group reward function” appears aligned right up until it isn’t.

The AI does not aim to maximize its reward function! The AI is trained on a reward function, and then (by hypothesis) becomes intelligent enough to act as an inner optimizer that optimizes for heuristics which yielded high reward in its (earlier) training environment. The aim is to produce a training environment such that the heuristics the inner optimizer tries to maximize tend towards altruism.

What is altruistic supposed to mean here?

What does it mean that humans are altruistic? It's a statement about our own messy utility function, which we (the inner optimizers) try to maximize, that was built from heuristics that worked well in our ancestral environment, like "salt and sugar are great" and "big eyes are adorable". Our altruism is a weird, biased, messy thing: we care more about kittens than pigs because they're cuter, and we care a lot more about things we see than things we don't see.

Likewise, whatever heuristics work well in the AI training environment are likely to be weird and biased. But wouldn't a training environment that is designed to reward altruism be likely to yield a lot of heuristics that work in our favor? "Don't kill all the agents", for example, is a very simple heuristic with very predictable reward, that the AI should learn early and strongly, in the same way that evolution taught humans "be afraid of decomposing human bodies".

You're saying that there's basically no way this AI is going to learn any altruism. But humans did. What is it about this AI training environment that makes it worse than our ancestral environment for learning altruism?

The answer to all of these considerations is that we would be relying on the training to develop a (semi-)aligned AI before it realizes it has learned how to manipulate the environment, or breaks free. Once one of those things happens, its inner values are frozen in place, so they had better be good enough at that point.

What I'm not getting is that humans are frequently altruistic, and it seems like, if we designed a multi-agent environment entirely around rewarding altruism, we should get at least as much altruism as humans have? I should admit that I would consider The Super Happy People to be a success story...
