I suspect that, as with many things in politics, the main issue here is domestic politics more than foreign affairs.
If you've ever compared election results between single-member and multi-member systems, you'll have noticed a trend: even if, by first-preference count, a minor party seems to best represent a significant chunk of the population, you can expect them to pick up on the order of zero seats unless they're geographically concentrated.
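As a toy illustration of the mechanism (all numbers invented), here's a minimal simulation of a minor party whose ~15% support is spread evenly across 100 single-member plurality districts; it reliably wins zero seats, while a proportional allocation of the same votes would give it about 15:

```python
import random

random.seed(0)
DISTRICTS = 100
# Assumed nationwide first-preference shares: two major parties and a
# geographically dispersed minor party at 15%.
SHARES = {"major_a": 0.44, "major_b": 0.41, "minor": 0.15}

seats = {party: 0 for party in SHARES}
for _ in range(DISTRICTS):
    # Support varies only slightly by district: even spread, no strongholds.
    local = {party: share + random.gauss(0, 0.03)
             for party, share in SHARES.items()}
    seats[max(local, key=local.get)] += 1

print("Single-member plurality:", seats)  # minor party: 0 seats
print("Proportional allocation:", {party: round(share * DISTRICTS)
                                   for party, share in SHARES.items()})
```

Give the minor party a few strongholds instead of an even spread and the plurality result changes completely, which is the geographic-concentration caveat above.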
Similarly, if we're not going to abandon democratic principles, we should probably have the consent of the majority in an area before we perform an experiment on them. The problem with this is that even if, world- or country-wide, there's a quorum of people who would consent to a given experiment, it's highly unlikely that they all live in the same place.
While something like a Schengen area might in principle alleviate some of these concerns, it introduces two main additional ones:
1) Does your experiment actually improve society? Or does it just attract the types of people who improve society themselves?
2) Most people aren't big fans of being told they have to move cities/countries to continue living their lifestyle. I suspect that LessWrong users as a cohort undervalue stability relative to the rest of the population.
It's worth noting that factory farming isn't just coincidentally out of the limelight; in some (many?) areas it's illegal to document. https://en.m.wikipedia.org/wiki/Ag-gag
While many of these laws seem somewhat reasonable on the surface, since they're billed as strengthening trespass law, in practice you can't gather video evidence of a moral crime taking place on private property without at least some form of trespass.
I think a different use of MI (mechanistic interpretability) is warranted here. While I highly doubt our ability to differentiate whether a value system is meshing well with someone for "good" or "bad" reasons, it seems more plausible to me that you could measure the reversibility of a value system.
The distinguishing feature of a trap here isn't so much its badness as the fact that it's irreversible. If you used interpretability techniques to check whether someone could be reprogrammed away from a belief, you'd avoid a lot of tricky situations.
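To gesture at what that measurement could look like (a loose sketch; everything here is a stand-in: the representations are synthetic and the probe setup is invented), one way to operationalise "reversibility" is to train a probe for a value system on internal representations, then measure how large a steering intervention it takes before the probe's verdict flips. Cheap reversals suggest flexibility; expensive ones suggest a trap:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
DIM = 64

# Synthetic stand-in representations: samples "holding" vs "lacking" a value
# system, separated along an invented belief direction.
belief_dir = rng.normal(size=DIM)
belief_dir /= np.linalg.norm(belief_dir)
holds = rng.normal(size=(200, DIM)) + 2.0 * belief_dir
lacks = rng.normal(size=(200, DIM)) - 2.0 * belief_dir

# Probe that detects whether the value system is present.
probe = LogisticRegression().fit(
    np.vstack([holds, lacks]), np.array([1] * 200 + [0] * 200)
)

def reversal_cost(x, direction, probe, step=0.1, max_steps=200):
    """Smallest steering magnitude along `direction` that flips the probe."""
    for k in range(max_steps):
        if probe.predict((x - k * step * direction)[None])[0] == 0:
            return k * step
    return float("inf")

costs = [reversal_cost(x, belief_dir, probe) for x in holds[:20]]
print("Mean intervention needed to reverse:", round(float(np.mean(costs)), 2))
```

A high mean cost on real representations would be the irreversibility signal; note that none of this tells you whether the value system is good, only how stuck it is.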
Apologies for the late reply.
With a bit over 600k 0–3-year-olds in swim lessons at the time of the linked report, and around 1.2 million children in that age range in Australia, I'd estimate at least half of kids below 4 have taken swim lessons. So quite common, but not to the extent that I had thought.
Notably, swim lessons for young children are highly subsidized by most states, with many offering a fixed number of free lessons.
A bit later, in primary school, the majority of kids are given free swim lessons at their local public pool, though.
Are child swim lessons common in America? Over here, free swim lessons are now provided for children, and mandatory swim lessons are provided as part of primary school. My understanding is that it's made a relatively large dent in the rate of child drowning injury.
In particular, once your child is proficient at swimming, you can get lessons on swimming in plain clothes, in case of a trip or fall, or in case another kid needs rescuing.
A transplant seems unnecessary if there's any realistic chance of probe technology advancing. Surely it'd be possible to grow the same neurones in a wet lab, use brain probes to connect them to a living person, and keep the tinkering inside someone's head to a minimum.
(Putting aside the profound ethical issues) In that case, neuronal material could even be swapped out on the fly if one batch is proving ineffective for a given task (or, a new batch could have old signals replayed to it to get it up to speed).
Is there something I'm missing on the neuroscience end? I'm not at all familiar with the field.
I think there's a difference between consequences and suffering (as written in the OP) though.
If a child plays too many video games you might take away their Switch, and while that might decrease their utility, I'd hardly describe it as suffering in any meaningful sense.
Similarly, in the real world, people generally get quite low utility from physical violence. It's either an act of impulse not particularly sensitive to the severity of punishment (as in people with anger management issues), or an act of very low utility. It's therefore easy to imagine that the optimal punishment for crime might be a decrease in access to some goods, and separation from broader society to decrease the probability of future impulsive acts harming anyone.
This is the closest I got, by probing ChatGPT for details on Muhammad's conquests while seeming very inclined towards divine inspiration.
I probably could've done a better job if I were a Muslim (ex or otherwise), and I imagine it might've been more receptive in Arabic.
I think a big part of the problem is that people fundamentally misunderstand what the funnel is. The way to get people into a field isn't rousing arguments; it's cool results, accessible entry-level research, and opportunity.
As a kid, I didn't go into pure mathematics because someone convinced me it was a good use of my time; it was because I saw cool videos about mathematical theorems and decided it looked fun. I didn't move into applied maths because someone convinced me, but because there was interesting, non-trivial modelling that I could pick up and work on. And I didn't move into the trading industry because someone convinced me that options liquidity is the primary measure of a civilization's virtue; it was because nobody else would hire me in Australia, but a trading firm offered me a shit tonne of money.
Doing interesting work is itself an important part of the recruitment funnel, keeping some easy problems on hand for grads is another important part, and (imo) diversifying the industry out of like two cities (London and San Francisco) would be a great way to remove a thin wedge from the top of the funnel.
Some people are going to go into whatever field they think is maximum utility, but I reckon they're the exception. Most scientists are fundamentally amoral people who will go into whatever they find interesting and whatever they can get work in. I've seen people change fields from climate research into weapons manufacturing because the opportunity wasn't there, and ML Safety is squandering most of the world's talent.
Am I alone in not seeing any positive value whatsoever in humanity, or specific human beings, being reconstructed? If anything, it just seems to increase the S-risk of humanlike creatures being tortured by this ASI.
As for more abstract human values, I'm not remotely convinced either:
a) That we could convince such a more technologically advanced civilization to update towards our values,
or
b) That they would interpret them in a way that's meaningful to me, and not actively contra my interests.