For my own epistemics, I would honestly be somewhat reluctant to trust the results of AI analysis that somebody else designed, although it could be a useful data point.
A helpful exercise could be to list the five most plausible candidates for "unprecedented overreach and institutional erosion" in the Trump era, and for each make the case that there is comparable precedent from the Biden era (or from other presidents, though that risks expanding the time window too much for a fair comparison). As someone not very involved in politics, I have heard that Biden's decision to forgive student loan debt was egregiously unconstitutional. I would be interested in similar parallels to the Trump administration pressuring and going after Fed officials.
As I posted in a reply to OP, the EU is specifically working on measures to enable age verification in a convenient, privacy-preserving, and accurate manner (an open-source app that checks the biometric data on your ID). More generally, parents being genuinely concerned about their children's safety seems so well established to me that it makes sense to assume they are being genuine in this discussion as well.
As another piece of counter-evidence, Jonathan Haidt is probably the best-known scientific advocate for social media restrictions for kids, and his work spans much other research on children's welfare (such as "children should be allowed to play unsupervised outside"). To be fair, he is also critical of social media's impact on democracy, but he proposes other measures to deal with that, iirc. An interesting test of your hypothesis would be to look up which measure for age verification he supports.
I think your proposal is, at a high level, relatively close to how the EU plans to implement age verification (decentralized, via one app reading the data from your ID and then sharing just the old-enough/too-young binary variable with the platform).
You can read more here.
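To illustrate the general shape of such a scheme (this is my own toy sketch, not the EU's actual design; the key handling, function names, and the use of an HMAC as a stand-in for real digital signatures are all simplifying assumptions), the point is that the platform only ever receives a signed boolean, never the birthdate:

```python
# Toy sketch of a privacy-preserving age check: the verification app reads the
# date of birth locally, derives only an "old enough" flag, and hands the
# platform a signed attestation of that flag. The birthdate never leaves the device.
import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-key-held-by-the-verification-app"  # hypothetical; real schemes use asymmetric signatures

def issue_attestation(date_of_birth: date, min_age: int = 18) -> dict:
    """Runs on the user's device; only the boolean result is shared."""
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    claim = {"old_enough": age >= min_age, "min_age": min_age}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_accepts(attestation: dict) -> bool:
    """Runs on the platform; it can verify authenticity without ever seeing the birthdate."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["tag"]) and attestation["claim"]["old_enough"]

if __name__ == "__main__":
    print(platform_accepts(issue_attestation(date(2012, 5, 1))))  # False: under 18
    print(platform_accepts(issue_attestation(date(1990, 5, 1))))  # True: over 18
```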
As an aside, you mention that the laws in Australia favor large incumbents, but I think an important nuance is that the law only applies to very large platforms in the first place. I still think your conclusion that more privacy-preserving measures would be preferable holds, for what it's worth.
My apologies, I have now removed the term. It was also inappropriate to infer so much about your personality from just this post.
[Note: Edited a very rude word out]
I realize this is a harsh thing to say, but given your focus on truth-seeking and the general agreement with the sentiments expressed, I think it's okay to say.
You write that you "instead started seeing them as NPCs to manipulate for fun." and "while I still held any amount of respect for them." These are feelings that mature people with a kind character shouldn't have, and they sound borderline sociopathic. Most people don't enjoy manipulating strangers who did nothing to intentionally hurt them, and yet in your story you did. That should prompt reflection on whether your apparent ranking function, which prioritizes truth-seeking, is missing other important facets of life, like being a kind person.
It is to your credit that you have noticed this and feel somewhat bad about it, but this is just the first step toward actually updating the other parts of your world model. A question you should ask yourself: do you respect highly intelligent and competent criminals or dictators, or do you hold them in a similar degree of contempt as you do the attendees of these meetups?
FWIW, Daniel Kokotajlo has commented in the past:
> If there was an org devoted to attempting to replicate important papers relevant to AI safety, I'd probably donate at least $100k to it this year, fwiw, and perhaps more on subsequent years depending on situation. Seems like an important institution to have. (This is not a promise ofc, I'd want to make sure the people knew what they were doing etc., but yeah)
but I think ideally these would have been done by a MATS scholar, or even better by an eager beginner on a career transition grant who wants to demonstrate their abilities so they can get into MATS later.
A problem here is that, I believe, this is on the face of it not quite aligned with MATS scholars' career incentives, as replicating existing research does not feel like the kind of project that would really advance their prospects of getting hired. At least when I was involved in hiring, I would not have counted this as strong evidence of, or training for, strong research skills (sorry for being part of the problem). On the other hand, it is totally plausible to incorporate replication of existing research into a larger research program investigating related issues (e.g., Ryan's experiment about time horizons without CoTs could fit well within a larger work investigating time horizons in general).
This may look different for the "eager beginners", or something like AI Safety Camp could be a good venue for pure replications.
On closer inspection, I believe this does not add much towards understanding the described people's psychology.
Although the described reactions seem accurate, the analogy seems weak and the post jumps too quickly to unflattering conclusions about the outgroup. In particular, being forcibly moved by a company to another location is an extremely radical action given our current social norms, so people can be expected to be indignant.
On the other hand, organizations imposing large but longer-term changes on societies without asking is the norm, such as introducing social media or the internet.
I don't quite think so? My impression is that the criticism is that LW is too much of an echo chamber, in that people just express agreement with each other too much, but that is probably mostly not because people are being nice but because folks just outright have very similar beliefs.
What is your opinion on this possible objection:
Even granting that it would be good if many more people worked on AI safety questions, practically there is still a bottleneck around (paid) positions, as the field is funding-constrained. Thus, even if you scale these training programs now, the immediate question for most folks is what they will do after finishing the program, and if there is no near-term job where they can work on AI safety, they will probably lose most of the acquired skills and/or drop out of the field again, thus nullifying the effort put into training them.