glauberdebona

Computer scientist and engineer, amateur philosopher and chess player.

Comments
Amplifying the Computational No-Coincidence Conjecture
glauberdebona · 6mo · 10

Correcting myself: "doubly" just added above.

Amplifying the Computational No-Coincidence Conjecture
glauberdebona · 6mo · 10

For the time-complexity of the verifier to grow only linearly with m = O(log(1/ε)), we also need to assume that the complexity of the reduction f_m is O(|C|^γ) for a fixed γ, regardless of m, which I admit is quite optimistic. If the time-complexity of the reduction f_m is instead doubly exponential in m, the upper bound on the verifier's time-complexity becomes exponential in 1/ε, even with the stronger version of Reduction-Regularity.
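As a rough illustration (my own back-of-the-envelope arithmetic, not a claim from the post), the two assumptions on the reduction's cost lead to very different totals over m = Θ(log(1/ε)) amplification steps:

```latex
% Fixed-degree reductions: each of the m steps costs O(|C|^\gamma),
% so the total reduction cost is
m \cdot O(|C|^{\gamma}) = O\!\left(\log(1/\varepsilon) \cdot |C|^{\gamma}\right),
% i.e., linear in \log(1/\varepsilon).

% Doubly exponential reductions: step m alone costs O(2^{2^{m}}); with
% m = \Theta(\log(1/\varepsilon)) we have 2^{m} = \Theta(1/\varepsilon), hence
O\!\left(2^{2^{m}}\right) = 2^{\Theta(1/\varepsilon)},
% i.e., exponential in 1/\varepsilon.
```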

Amplifying the Computational No-Coincidence Conjecture
glauberdebona · 6mo · 10

By "unconditionally", do you mean without an additional assumption, such as Reduction-Regularity?

I actually have a result in the other direction (in an older version of this post). That is, with a slightly stronger assumption, we could derive a better upper bound (in terms of 1/ε) for the time-complexity of the verifier. The stronger assumption is just a stronger version of Reduction-Regularity, requiring Prob(f(C) ∈ V̄₀ | f(C) ∈ P̄) ≥ δ -- that is, V̄₀ replaces L̄ in the inequality. It could still lead to a bound exponential in 1/ε if the complexity of the reduction f_m increases exponentially with m. In the version presented in this post, the exponential bound relies on the complexity of f_m being O(|C|^γ) for a fixed γ, regardless of m, which is admittedly optimistic. If we combine this assumption with the stronger version of Reduction-Regularity, then I think the upper bound on the verifier's time-complexity would grow only linearly with m = O(log(1/ε)), which is a huge improvement (I can add something in the appendix about this if you are interested in the details). In the end, I opted to present the weakest assumption here, even though it corresponds to a worse complexity bound.

A computational no-coincidence principle
glauberdebona · 6mo · 10

Your comment seems to imply that the conjecture's "99%" refers to circuits C for which P(C) is false. Otherwise, it would be impossible for a V to miss 100% of random reversible circuits without missing the circuits C for which P(C) is true. In the conjecture, should "random reversible circuits C" be read as "random reversible circuits C for which P(C) is false"? Indeed, it does not change much, but I might have to correct my presentation of the conjecture here.

A computational no-coincidence principle
glauberdebona · 6mo · 10

I think I have an idea to make that 99% closer to 1, but its development grew too long for a comment here, so I made a post about it.

Sleeping Beauty: an Accuracy-based Approach
glauberdebona · 7mo · 20

Thanks for the feedback! The summary here and the abstract in the draft paper have been updated; I hope it is clearer now.

Open Thread Winter 2024/2025
glauberdebona · 7mo · 60

Hi everyone,

I’m Glauber De Bona, a computer scientist and engineer. I left an assistant professor position to focus on independent research related to AGI and AI alignment (topics that led me to this forum). My current interests include self-location beliefs, particularly the Sleeping Beauty problem.

I’m also interested in philosophy, especially formal reasoning -- logic, Bayesianism, and decision theory. Outside of research, I’m an amateur chess player.

Posts

8 · Amplifying the Computational No-Coincidence Conjecture · Ω · 6mo · 6
7 · Sleeping Beauty: an Accuracy-based Approach · Ω · 7mo · 2