Public attention on AI x-risk has skyrocketed. I don’t expect this to wane anytime soon. This has a few potentially negative implications for those of us pursuing technical solutions to the problem:
Since those of us in the field will no doubt read more x-risk-related media and talk about it more with non-technical people (e.g. family, friends), we are more likely to be swayed toward particular research directions that may not be as important. Some mechanisms for this:
While some may thrive under this added pressure, for many of us it may become an extra mental burden that hurts both productivity and mental health.
That’s not to say that someone working on technical solutions should shut out mainstream media completely. There are some upsides:
What I’m advocating for is merely to be extra mindful of how the increase in public attention on AI x-risk manifests in your life, and to adjust accordingly.
For what it's worth, I plan to make the following adjustments:
I actually never realised that you could upweight how likely posts are to appear on “Latest posts” based on tags. This is a really nice feature, props to the LW team. Still, I do have an inclination to read the title of every post and then read the ones I’m immediately attracted to (not necessarily the ones I’d endorse reading after careful reflection).