interstice · Comments (sorted by newest)
Prediction markets are sub-optimal betting vehicles
interstice · 10d · 20

> most serious gamblers prefer the stable source of liquidity and wide variety of markets

I mean, sportsbooks can shut down your account if you're too consistently successful, so that seems like a major downside for "serious gamblers". This is also a way in which they're pretty blatantly more predatory than PMs.

Evolution favors the ability to change subjective probabilities in MWI + Experimental test
interstice · 19d · 75

> Due to constant MWI branching, delayed observation produces more exact copies which don’t know yet the result of the coin toss.

> Applying Self-Sampling Assumption (SSA) to all exact copies, we get that I am more likely to be in the winning branch.

This isn't how you should be doing anthropics in MWI though. You need to weigh each copy by the Born probability of its branch, which will make this effect disappear. It all adds up to normality!
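To make the disagreement concrete, here is a toy sketch (the branch weights and copy counts are my own illustrative assumptions, not from the post) contrasting naive copy-counting SSA with Born-weighted anthropics:

```python
# Toy model: a quantum coin whose branches have Born weights 0.3 ("win")
# and 0.7 ("lose"). Suppose environment-induced branching then produces
# 10 identical not-yet-informed observer copies in the win branch and
# 2 in the lose branch. (Made-up numbers, purely for illustration.)
born = {"win": 0.3, "lose": 0.7}
copies = {"win": 10, "lose": 2}

# Naive SSA over exact copies: sample a copy uniformly.
ssa_p_win = copies["win"] / (copies["win"] + copies["lose"])  # 10/12 ≈ 0.833

# Born-weighted anthropics: a branch's measure is divided among its
# copies, so the copy counts cancel and the original probability returns.
per_copy_weight = {b: born[b] / copies[b] for b in born}
born_p_win = copies["win"] * per_copy_weight["win"]  # ≈ 0.3

print(ssa_p_win, born_p_win)
```

However many copies each branch spawns, the Born-weighted probability of being in the win branch stays at the original 0.3, which is the "adds up to normality" point.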

Banning Said Achmiz (and broader thoughts on moderation)
interstice · 21d · 120

As another outside observer, I also got the impression that the Duncan conflict was the most significant of the ones leading up to the ban: he wrote a giant post advocating for banning Said, left the site in a huff shortly thereafter, and seems to be the main example of a top contributor, by your lights, who said they stopped posting because of Said.

Rauno's Shortform
interstice · 23d · 20

I've always liked Adam Cadre's website as a mix of blogging and longer-form content.

The Problem
interstice · 1mo · 20

Well, that would be a rather unnatural conspiracy! IMO you can basically think of law, property rights, etc. as people getting together to make agreements for their mutual benefit, which can take the form of ganging up on some subgroup depending on how natural a Schelling point that is, how well the victims can coordinate, and so on. "AIs ganging up on humans" does actually seem like a relatively natural Schelling point where the victims would be pretty unable to respond, especially if there are systematic differences between the values of a typical human and a typical AI, which would make ganging up more attractive. Such Schelling points can also arise in periods of turbulence where one system is replaced by another, e.g. colonialism or the industrial revolution. It seems plausible that AIs coming to power will feature such changes (unless you think property rights and capitalism as devised by humans are the optimum among coordination methods devisable by AIs?).

The Problem
interstice · 1mo · 63

> which is far, far safer and easier to coordinate than trying to completely disempower all non-lawyers and take everything from them

But it would probably be a lot less dangerous if lawyers outnumbered non-lawyers by several million, were much smarter, thought faster, had military supremacy, etc. etc. etc.

> The truth is, capitalism and property rights have existed for 5000 years and have been fairly robust to about 5 orders of magnitude increase in population

During which time many less-powerful human and non-human populations were in fact destroyed or substantially harmed and disempowered by the people who did well at that system?

It turns out that DNNs are remarkably interpretable.
interstice · 1mo · 20

I haven't fully read through your paper, but from the parts I have read, it sounds like it might be similar to the neural tangent kernel applied to the case of ReLU networks.

Permanent Disempowerment is the Baseline
interstice · 1mo · 62

OK I see, didn't get the connection there.

> humanity has a bad track record at that

People do devote some effort to things like preserving endangered species, things of historical significance that are no longer immediately useful, etc. If AIs devoted a similar fraction of their resources to humans that would be enough to preserve our existence.

Permanent Disempowerment is the Baseline
interstice · 1mo · 52

> So why would AIs be more willing to do that?

He spells out possible reasons in the paragraph immediately following your quote: "Pretraining of LLMs on human data or weakly successful efforts at value alignment might plausibly seed a level of value alignment that's comparable to how humans likely wouldn't hypothetically want to let an already existing sapient octopus civilization go extinct". If you disagree you should respond to those. Most people on LW are already aware that ASIs would need some positive motivation to preserve human existence.

Many prediction markets would be better off as batched auctions
interstice · 1mo · 20

> The default on Polymarket is no expiration. It is only after actively choosing to have an expiration date that it goes to 1 day

Hmmm you're right, guess I misremembered.

Posts

Future Fund Worldview Prize · 3 years ago
49 · Alignment Might Never Be Solved, By Humans or AI · 3y · 6 comments
15 · Will Values and Competition Decouple? · 3y · 11 comments
9 · Kolmogorov's AI Forecast [Question] · 3y · 1 comment
41 · Tao, Kontsevich & others on HLAI in Math · 3y · 5 comments
36 · What's the Relationship Between "Human Values" and the Brain's Reward System? [Question] · 3y · 17 comments
18 · Consciousness: A Compression-Based Approach · 3y · 14 comments
15 · Algorithmic Measure of Emergence v2.0 · 4y · 2 comments
5 · Advancing Mathematics By Guiding Human Intuition With AI · 4y · 0 comments
33 · NTK/GP Models of Neural Nets Can't Learn Features [Ω] · 4y · 33 comments