I have a potential category of questions that could fit on Metaculus and work as an "AGI fire alarm." The questions are of the form "After an AI system achieves task X, how many years will it take for world output to double?"
Yes, the value of minimizing response time is a well-studied area in human-computer interaction: https://www.nngroup.com/articles/response-times-3-important-limits/
I'm curious what cards people have paid to put in your deck so far. Can you share, if the buyers don't mind?
Ralph Merkle's DAO Democracy addresses the intensity of preferences because constituents "vote" only by reporting their own overall happiness level. Everything else is handled by conditional prediction markets (as in futarchy) that aim to maximize the constituents' future happiness. This means that if some issue is very important to a voter, it will have a greater impact on their reported happiness, which in turn will have a greater impact on which proposals get passed.
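To make the mechanism concrete, here is a minimal toy sketch of that decision rule: for each proposal, two conditional markets predict future reported happiness if it passes versus if it fails, and a proposal passes when the "pass" prediction is higher. All names and prices below are hypothetical illustrations, not from Merkle's paper.

```python
# Toy sketch of a DAO-Democracy-style decision rule (hypothetical data).
def decide(markets):
    """Pass each proposal whose conditional market predicts higher
    future reported happiness if it passes than if it fails."""
    return [name for name, (if_pass, if_fail) in markets.items()
            if if_pass > if_fail]

# Hypothetical market prices: predicted average self-reported happiness
# conditional on each proposal passing vs. failing.
markets = {
    "fund_parks": (0.72, 0.70),  # market expects higher happiness if passed
    "raise_fees": (0.61, 0.66),  # market expects lower happiness if passed
}

print(decide(markets))  # → ['fund_parks']
```

In the real proposal the "prices" would come from traders betting on the future happiness measurements, so issues a voter cares deeply about move their reported happiness, and hence the market forecasts, more.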
For reference: section 40 of Reframing Superintelligence: Comprehensive AI Services as General Intelligence.
Has this new congruency-based approach made you less, equally, or more productive than what you were doing before? And how long have you been doing it?
Is losing weight one of your goals with this?
Like you said, since it hasn't been studied, you're not going to find anything conclusive about it, but it may be a good idea to skip the fast once a month (e.g., three weeks where you do 88-hour fasts, then one week where you don't fast at all).
I object to the demonstration because it rests on the false assumption that there's a fixed amount of value (candy, money) to be distributed, and that by participating in capitalism you're playing a zero-sum game. Most games played under capitalism are positive-sum -- you can make more candy.
Do you have a source for the 80% figure?
I agree that this is a really important concept. Two related ideas are asymmetric risk and barbell strategies, both of which Nassim Nicholas Taleb writes about a lot.