Due to constant MWI branching, delayed observation produces more exact copies of the observer who do not yet know the result of the coin toss.
Applying the Self-Sampling Assumption (SSA) to all of these exact copies, we get that I am more likely to be in the winning branch.
This isn't how you should be doing anthropics in MWI, though. You need to weight each copy by the Born probability of its branch, which makes this effect disappear. It all adds up to normality!
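To make the Born-weighting point concrete, here is a toy numeric sketch. The branch counts and Born weights are made-up illustration numbers, not anything from the original exchange: suppose the winning branch has split into many more exact copies than the losing branch, but both outcomes carry equal Born measure.

```python
# Toy model of the coin toss under MWI.
# Hypothetical numbers: the winning branch has fragmented into 1000
# exact copies, the losing branch into 10, while each outcome carries
# Born weight 0.5 (a fair coin).
win_copies, lose_copies = 1000, 10
born_win, born_lose = 0.5, 0.5

# Naive SSA: sample uniformly over all exact copies, ignoring measure.
p_win_naive = win_copies / (win_copies + lose_copies)

# Born-weighted SSA: each copy carries (branch weight / copies in branch),
# so the total weight of each outcome is just its Born weight.
p_win_born = born_win / (born_win + born_lose)

print(round(p_win_naive, 3))  # copy-counting inflates the winning branch
print(p_win_born)             # Born weighting restores the fair-coin 0.5
```

Under naive copy-counting the winning branch dominates (about 0.99 here), while Born weighting gives back 0.5 regardless of how finely either branch has fragmented.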
As another outside observer, I also got the impression that the Duncan conflict was the most significant of the ones leading up to the ban, since he wrote a giant post advocating for banning Said, left the site in a huff shortly thereafter, and seems to be the main example, by your lights, of a top contributor who said they stopped posting because of Said.
Well, that would be a rather unnatural conspiracy! IMO you can basically think of law, property rights, etc. as being about people getting together to make agreements for their mutual benefit, which can take the form of ganging up on some subgroup, depending on how natural a Schelling point it is to do that, how well the victims can coordinate, etc. "AIs ganging up on humans" does actually seem like a relatively natural Schelling point where the victims would be pretty unable to respond? Especially if there are systematic differences between the values of a typical human and a typical AI, which would make ganging up more attractive. Such Schelling points can also arise in periods of turbulence where one system is replaced by another, e.g. colonialism or the industrial revolution. It seems plausible that AIs coming to power will feature such changes (unless you think property rights and capitalism as devised by humans are the optimal coordination methods devisable by AIs?).
which is far, far safer and easier to coordinate than trying to completely disempower all non-lawyers and take everything from them
But it would probably be a lot less dangerous if lawyers outnumbered non-lawyers by several million, were much smarter, thought faster, had military supremacy, etc. etc. etc.
The truth is, capitalism and property rights have existed for 5000 years and have been fairly robust to about 5 orders of magnitude increase in population
During which time many less-powerful human and non-human populations were in fact destroyed or substantially harmed and disempowered by the people who did well at that system?
I haven't fully read through your paper, but from the parts I have read, it sounds like it might be similar to the neural tangent kernel applied to the case of ReLU networks.
OK I see, didn't get the connection there.
humanity has a bad track record at that
People do devote some effort to things like preserving endangered species, things of historical significance that are no longer immediately useful, etc. If AIs devoted a similar fraction of their resources to humans that would be enough to preserve our existence.
So why would AIs be more willing to do that?
He spells out possible reasons in the paragraph immediately following your quote: "Pretraining of LLMs on human data or weakly successful efforts at value alignment might plausibly seed a level of value alignment that's comparable to how humans likely wouldn't hypothetically want to let an already existing sapient octopus civilization go extinct". If you disagree, you should respond to those. Most people on LW are already aware that ASIs would need some positive motivation to preserve human existence.
The default on Polymarket is no expiration. It goes to 1 day only after you actively choose to set an expiration date.
Hmmm you're right, guess I misremembered.
I mean, sportsbooks can shut down your account if you're too consistently successful, so that seems like a major downside for "serious gamblers". This is also a way in which they're pretty blatantly more predatory than prediction markets.