The regulation you mention sounds very drastic & clumsy to my ears. I'd suggest starting by proposing something more widely acceptable, such as regulating highly effective self modifying software that lacks security safeguards.
Basing ethical worth on qualia sounds very close to dualism, to my ears. I think instead the question must rest on a detailed understanding of the components of the program in question, & the degree of similarity to the computational components of our brains.
Excellent point. We essentially have 4 quadrants of computational systems:
Good point. In my understanding it could go either way, but I'm open to the idea that the worst disasters are less than 50% likely, given a nuclear war.
Good point. Unless of course one is more likely to be born into universes with high human populations than universes with low human populations, because there are more 'brains available to be born into'. Hard to say.
In general, whenever Reason makes you feel paralyzed, remember that Reason has many things to say. Thousands of people in history have been convinced by trains of thought of the form 'X is unavoidable, everything is about X, you are screwed'. Many pairs of those trains of thought contradict each other. This pattern is all over the history of philosophy, religion, & politics.
Future hazards deserve more research funding, yes, but remember that the future is not certain.
What's the status of this meetup, CitizenTen? Did you hear back?
I have similar needs. I use a spreadsheet, populated by a Google Form that I open from a shortcut on my phone's main menu. I find it rewarding to have the spreadsheet display secondary metrics & graphs too.
Other popular alternatives include Habitica & habitdaily.app (iPhone only). I'm still looking for a perfect solution, but my current tools are pretty good for my needs.
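The comments above don't specify which secondary metrics the spreadsheet computes, but as a hypothetical sketch, here are two common ones derived from a habit log: the current streak and the completion rate over a date range. The `log` data and function names are illustrative, not anyone's actual setup.

```python
from datetime import date, timedelta

# Hypothetical habit log: one entry per day the habit was completed
# (e.g. rows submitted through a Google Form into a spreadsheet).
log = [
    date(2023, 5, 1),
    date(2023, 5, 2),
    date(2023, 5, 4),
    date(2023, 5, 5),
    date(2023, 5, 6),
]

def current_streak(days, today):
    """Count consecutive completed days ending at `today`."""
    done = set(days)
    streak = 0
    while today in done:
        streak += 1
        today -= timedelta(days=1)
    return streak

def completion_rate(days, start, end):
    """Fraction of days in [start, end] with a completed entry."""
    total = (end - start).days + 1
    completed = sum(1 for d in set(days) if start <= d <= end)
    return completed / total

print(current_streak(log, date(2023, 5, 6)))  # 3 (May 4-6, May 3 missed)
print(completion_rate(log, date(2023, 5, 1), date(2023, 5, 6)))  # 5 of 6 days
```

The same arithmetic works as spreadsheet formulas, of course; the point is just that a raw date log is enough to drive streaks, rates, and graphs.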
I'm not sure either. Might only be needed for the operating fees.
Agreed. We might refer to them as 'leaderless orgs' or 'staffless networks'.