Strong upvote for a healthy dose of bro humor which isn't that common on LW. We need more "people I want to have a beer with" represented in our community :D.
That's interesting. Can you elaborate?
None: None of the above; TAI is probably created in the USA, and what Asia thinks isn't directly relevant. I say there's a 40% chance of this.
I would say it might still be relevant in this case. For example, under some game-theoretic interpretations, China might conclude that a nuclear first strike is a rational move if the US creates the first TAI and China suspects that will give its adversaries an unbeatable advantage. An Asian AI risk hub might successfully convince the Chinese leadership not to do that if it has information that the US TAI is built in a way that prevents it from being used solely in the interest of its country of origin.
Not sure about anti-gay laws in Singapore, but from what I gathered from recent trends, the LGBT situation is starting to improve there and in East Asia in general. OTOH the anti-drug attitudes are still super strong (for example, you can still get the death penalty for dealing harder drugs), so I presume that's an even bigger deal-breaker, given the number of people who experiment with drugs in the broader rationalist community.
Not to mention some pretty brutal anti-drug laws.
What would be the consequence for Russia's nuclear strategy of Belarus joining the western military alliance? Let's say that in the near future Belarus joins NATO and gives the US a free hand in installing any offensive or defensive (ABM) nuclear weapon systems on Belarusian territory. Would this dramatically increase Russian fear of a successful nuclear first strike by the US?
Excellent question! I was thinking about it myself lately, especially after the GPT-3 release. IMHO, it is really hard to say, as it is not clear which commercial entity will bring us over the finish line, and whether there will be an investment opportunity at the right moment. It is also quite possible that even the first company to do it might bungle its advantage, so investing there might be the wrong move (this seems to be a common pattern in the history of technology).

My idea is just to play it safe and save as much money as possible until there is a clear sign we have arrived at the AGI level (when AI completely surpasses humans on Winograd schemas, for example), and if there is no FOOM, try to find the companies that are most focused on the practical applications where you get the biggest bang for the buck.

But honestly, at the point where AGI is widely available, it's quite possible that the biggest opportunity is just learning to utilize it properly. If you have access to AGI, you can just ask it yourself: "How do I benefit from AGI given my current circumstances?" and it will probably give you the best answer.
We haven’t managed to eliminate romantic travails
Ah! Then it isn't a utopia by my definition :-) .
Love it. It is almost like an anti-Black Mirror episode where humans are actually non-stupid.
It would be useful to mention examples of contemporary ideas that could be analogues of heliocentrism in its time. I would suggest String Theory as one possible candidate. The part where the Geocentrist challenges the Heliocentrist to provide some proof while the Heliocentrist desperately tries to explain away the lack of experimental evidence kinda reminds me of the debates between string theorists and their sceptics. (That doesn't mean String Theory is true; there just seems to be a similar state of uncertainty.)
This is great. Thanks for posting it. I will try to use this example and see if I can find some people who would be willing to do the same. Do you know of any new remote group that is recruiting members?