That one is interesting! Where do you get the probabilities from?
Could you do it as a text post with short explanations of each?
Yes, I think it would help avoid confusion.
Have a sufficient financial safety net
I think this condition is important only if I am going to leave my full-time job and switch to unpaid AI Safety projects. For some people (who have financial security), this may be the case. Many, including myself, do not have this security. That does not mean I can't do any projects until I have enough funds to survive. Rather, it means I can only do part-time projects (for me, that was organising mentoring programs and leading an AI Safety Camp project). Meanwhile, I still think applying to roles that seem a good fit for me makes quite a lot of sense - I would rather spend 40 hours/week working on AI Safety than at a regular job. Maybe it should be something like 80% projects, 20% applying (the numbers are arbitrary).
I feel that the percentage of people who can afford to forgo paid work and only do AI Safety projects until AGI arrives is not that high. It would also be nice to have a strategy and recommendations for what a person can do for AI Safety with 10 hours/week, or 5, or even 1. I think the bar for doing something useful is quite low - even with 5 minutes/week, one can e.g. repost material on social networks.
Thank you very much for catching the mistake! I checked, and you are completely right.
I don't think they have passed it in the full sense. Before LLMs, there was a 5-minute Turing test, and some chatbots passed it. I think 5 minutes is not enough. I bet that if you give me 10 hours, any currently existing LLM, and a human - with communication only via text - I will be able to figure out which is which (assuming both try hard to persuade me of their humanity). I don't think an LLM can yet come up with a consistent, non-contradictory life story. It would be an interesting experiment :)
Do you mean similarity at the outer level (e.g. the Turing test) or at the inner level (e.g. the neural network structure should resemble the brain structure)?
If the first - would it mean that when an AI passes the Turing test, it is sentient?
If the second - what are the criteria for similarity? Full brain emulation, or something less complicated?
I see that in the very first post in this series, you write that you think more children is a good thing, and that all the subsequent posts will be about how, not why. It would be nice to link to that first post here, and maybe briefly summarize it as well.
I think more children, as an abstract concept (a spherical cow), is a good thing. But in a universe where the choice is between "more children" and "more immigration", the latter might be the better way. Of course, I am biased, since I am an immigrant myself. But it seems that immediately gaining an educated young adult - without spending money on raising a child, and perhaps spending a little on their integration - can be a better choice. It's better for society, and it's better for the immigrant.
In the distant future (assuming no AGI), when the Earth's population starts to decrease, we will need measures to support fertility. Right now, I think, we need measures to support immigration.