Back when you were underestimating Covid, how much did you hear from epidemiologists? Either directly or filtered through media coverage?
I was going to give an answer about how "taking the outside view" should work, but I realized I needed this information first.
I don't think it invalidates the claim that "Without the minimum wage law, lots of people would probably be paid significantly less." (I believe that's one of the claims you were referring to. Let me know if I misinterpreted your post.)
I don't have a whole lot of time to research economies around the world, but I checked out a couple sources with varying perspectives (two struck me as neutral, two as libertarian). One of the libertarian ones made no effort to understand or explain the phenomenon, but all three others agreed that these countries rely on strong unions to keep wages up.
IMO, that means you're both partially right. As you said, some countries can and do function without minimum wages - it's clearly possible. But as the original poster said, if a country has minimum wage laws, removing those laws will in fact tend to reduce wages.
Some countries without minimum wages still have well-paid workers. Other countries without minimum wages have sweatshops. I think that market forces push towards the "sweatshop" end of the scale (for the reasons described by the original poster), and unions are one of the biggest things pushing back.
Most of the researchers are aware of that limitation. Either they address it directly, or they design the experiment to work around it, inferring mental state from actions just as you suggest.
My point here isn't necessarily that you're wrong, but that you can make a stronger point by acknowledging and addressing the existing literature. Explain why you've settled on suicidal behavior as the best available indicator, as opposed to vocalizations and mannerisms.
This is important because, as gbear605 pointed out, most farms restrict animals' ability to attempt suicide. If suicide attempts are your main criterion, that seems likely to skew your results. (The same is true of several other obvious indicators of dissatisfaction, such as escape attempts.)
I'm afraid I don't have time to write out my own views on this topic, but I think it's important to note that several researchers have looked into the question of whether animals experience emotion. I think your post would be a lot stronger if you addressed and/or cited some of this research.
I do want to add - separately - that superrational agents (not sure about EDT) can solve this problem in a roundabout way.
Imagine if some prankster erased the "1" and "2" from the signs in rooms A1 and A2, leaving just "A" in both cases. Now everyone has less information and makes better decisions. And in the real contest, (super)rational agents could achieve the same effect by keeping their eyes closed. Simply say "tails," maximize expected value, and leave the room never knowing which one it was.
None of which should be necessary. (Super)rational agents should win even after looking at the sign. They should be able to eliminate a possibility and still guess "tails." A flaw must exist somewhere in the argument for "heads," and even if I haven't found that flaw, a perfect logician would spot it no problem.
Oh right, I see where you're coming from. When I said "you can't control their vote" I was missing the point, because as far as superrational agents are concerned, they do control each other's votes. And in that case, it sure seems like they'll go for the $2, earning less money overall.
It occurs to me that if team 4 didn't exist, but teams 1-3 were still equally likely, then "heads" actually would be the better option. If everyone guesses "heads," two teams are right, and they take home $4. If everyone guesses "tails," team 3 takes home $3 and that's it. On average, this maximizes winnings.
Except this isn't the same situation at all. With team 4 eliminated from the get-go, the remaining teams can do even better than $4 or $3. Teammates in room A2 know for a fact that the coin landed heads, and they automatically earn $1. Teammates in room A1 are no longer responsible for their teammates' decisions, so they go for the $3. Thus teams 1 and 2 both take home $1 while team 3 takes home $3, for a total of $5.
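Spelled out as a quick sanity check (this is just my own arithmetic in code form, not anything from the original contest):

```python
# Checking the "$5" total, assuming team 4 is removed from the start.
# The team-1 and team-2 players in room A2 know the coin was heads, so
# each earns $1. Everyone in room A1 guesses "tails," so team 3 earns
# its $3 while teams 1 and 2 earn nothing there.
a2_winnings = 2 * 1  # teams 1 and 2: one correct "heads" guess each
a1_winnings = 3      # team 3: one correct "tails" guess
print(a2_winnings + a1_winnings)  # 5
```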
Maybe that's the difference. Even if you know for a fact that you aren't on team 4, you also aren't in a world where team 4 was eliminated from the start. The team still needs to factor into your calculations... somehow. Maybe it means your teammate isn't really making the same decision you are? But it's perfectly symmetrical information. Maybe you don't get to eliminate team 4 unless your teammate does? But the proof is right in front of you. Maybe the information isn't symmetrical because your teammate could be in room B?
I don't know. I feel like there's an answer in here somewhere, but I've spent several hours on this post and I have other things to do today.
I'm going to rephrase this using as many integers as possible because humans are better at reasoning about those. I know I personally am.
Instead of randomness, we have four teams that perform this experiment. Teams 1 and 2 represent the first flip landing on heads. Team 3 is tails then heads, and team 4 is tails then tails. No one knows which team they've been assigned to.
Also, instead of earning $1 or $3 for both participants, a correct guess earns that same amount once. They still share finances so this shouldn't affect anyone's reasoning; I just don't want to have to double it.
Team 1 makes 2 guesses. Each "heads" guess earns $1, each "tails" guess earns nothing.
Team 2 makes 2 guesses. Each "heads" guess earns $1, each "tails" guess earns nothing.
Team 3 makes 1 guess. Guessing "heads" earns nothing, guessing "tails" earns $3.
Team 4 makes 1 guess. Guessing "heads" earns nothing, guessing "tails" earns $3.
If absolutely everyone guesses "heads," teams 1 and 2 will earn $4 between them. If absolutely everyone guesses "tails," teams 3 and 4 will earn $6 between them. So far, this matches up.
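For anyone who wants to verify those totals, here's a minimal sketch of the payoff table (my own framing in code; the tuple layout is just a convenience, not part of the original problem):

```python
# Each team maps to (number of guesses, $ per "heads" guess, $ per "tails" guess).
teams = {
    1: (2, 1, 0),  # first flip heads: two guesses, $1 per "heads"
    2: (2, 1, 0),  # same as team 1
    3: (1, 0, 3),  # tails then heads: one guess, $3 for "tails"
    4: (1, 0, 3),  # tails then tails: one guess, $3 for "tails"
}

def total_winnings(guess):
    # Total earned across all four teams if every player makes this guess.
    return sum(
        n * (heads_pay if guess == "heads" else tails_pay)
        for n, heads_pay, tails_pay in teams.values()
    )

print(total_winnings("heads"))  # 4
print(total_winnings("tails"))  # 6
```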
Now let's look at how many people were sent to each room.
Three people visit room A1: one from team 1, one from team 2, and one from team 3. 2/3 of them are there because the first "flip" was heads.
Three people visit room A2: one from team 1, one from team 2, and one from team 4. 2/3 of them are there because the first "flip" was heads.
Two people visit room B: one from team 3 and one from team 4. They don't matter.
The three visitors to A1 know they aren't on team 4, thus they can subtract that team's entire winnings from their calculations, leaving $4 vs. $3.
The three visitors to A2 know they aren't on team 3, thus they can subtract that team's entire winnings from their calculations, leaving $4 vs. $3.
Do you see the error? Took me a bit.
If you're in room A1, you need to subtract more than just team 4's winnings. You need to subtract half of team 1 and team 2's winnings. Teams 1 and 2 each have someone in room A2, and you can't control their vote. Thus:
Three people visit room A1: one from team 1, one from team 2, and one from team 3. If all three guess "heads" they earn $2 in all. If all three guess "tails" they earn $3 in all.
Three people visit room A2: one from team 1, one from team 2, and one from team 4. If all three guess "heads" they earn $2 in all. If all three guess "tails" they earn $3 in all.
Guessing "tails" remains the best way to maximize expected value.
The lesson here isn't so much about EDT agents as it is about humans and probabilities. I didn't write this post because I'm amazing and you're a bad math student; I wrote it because without it, I wouldn't have been able to figure this out either.
Whenever this sort of thing comes up, try to rephrase the problem. Instead of 85%, imagine 100 people in a room, with 85 on the left and 15 on the right. Instead of truly random experiments, imagine the many-worlds interpretation, where each outcome is guaranteed to come up in a different branch. (And try to have an integer number of branches, each representing an equal fraction.) Or use multiple teams like I did above.
It is a stretch, which is why it needed to be explained.
And yes, it would kind of make him immune to dying... in cases where he could be accidentally rescued. Cases like a first year student's spell locking a door, which an investigator could easily dispel when trying to investigate.
Oh, and I guess once it was established, the other time travel scenes would have had to be written differently. Or at the very least, the author would have needed to clarify that "while Draco's murder plot was flimsy enough that the simplest timeline was the timeline in which it failed, Quirrell's murder plot was bulletproof enough that the simplest outcome was for it to succeed." Because authors write the rules, they can get away with a lot of nonsense. But in this kind of story, they do need to acknowledge and (try to) explain any inconsistencies.
And here's the line I was referring to:
"The earlier experiment had measured whether Transfiguring a long diamond rod into a shorter diamond rod would allow it to lift a suspended heavy weight as it contracted, i.e., could you Transfigure against tension, which you in fact could." (Chapter 28, foreshadowing the nanotube, which may or may not have been what you were talking about)
I don't mind the occasional protagonist who makes their own trouble. I agree it would be annoying if all protagonists were like that (and I agree that Harry is annoying in general), but there's room in the world for stories like this.
Now that you mention it, your first example does sound like a Deus ex Machina. Except that
the story already established that the simplest possible time loop is preferred, and it's entirely possible that if Harry hadn't gotten out to pass a note, someone would have gone back in time to investigate his death, and inadvertently caused a paradox by unlocking the door.
This wouldn't have had to be a long explanation or full-blown lecture, just enough to confirm this interpretation. But since it wasn't confirmed and there are multiple valid interpretations of the mechanics, it does come across as a bit of a "get out of jail free" moment.
I... don't understand your second example. I think that part of the story works just fine. Harry's solution was plausible, and even foreshadowed
in chapter 28 when he used transfiguration to apply force.
It's been a while since I read it, but off the top of my head I can't recall any blatant cases of Deus ex Machina. I'd ask for concrete examples, but I don't think it would be useful. I'm sure you could provide an example, and in turn I'd point out reasons why it doesn't count as Deus ex Machina. We'd argue about how well the solution was explained, and whether enough clues were presented far enough in advance to count as valid foreshadowing, and ultimately it would come down to opinion.
Instead, I can go ahead and answer your question. Eliezer definitely meant to teach useful lessons. Not everything Harry does is meant to be a good example (I mean, even Eliezer knows better than to write a completely perfect character), which is probably why he gets into all that trouble. But whenever a character goes into Lecture Mode while solving a problem, it's meant to be both useful and accurate.
Wait a minute, are you talking about Lecture Mode when you say "Deus ex Machina"? I can kind of see that: the situation seems hopeless and then someone (usually Harry) gives a long explanation and suddenly the problem is solved. Thing is, these lectures don't pull the solution out of nowhere. The relevant story details were established beforehand, and the lecture just puts them together. (Or at least, that was the author's intent. As I said, it comes down to opinion.)