The issue:
* I love LessWrong, and largely credit the information I've found on it with shaping the way I think and view the world.
* At the same time, I think it's quite unfriendly to newcomers. I, for one, found it difficult to develop a coherent picture of the AI safety problem in all of its dimensions; it took me several months of reading through old threads to understand, for example, why certain people held certain beliefs.
  * I think this is the case because much of the discussion here happens "on the cutting edge," so to speak; most users are actively exploring new lines of thought built on complex world models, and are not necessarily revisiting or referencing the years of prior knowledge and belief that underpin their thinking.
* I'm interested in seeing how this process could be made more efficient; a more streamlined "onboarding" process for newcomers could save a lot of valuable time and help people become impactful sooner.
* A potential solution is to develop a curated "101 curriculum" for AI safety (I'm sure many already exist), but I think interfacing with the forum directly is better because:
  * A) It feels like you are part of an active discussion between experts, rather than a student in a class. This personally enhanced my sense of agency around AI safety and big problems more generally.
  * B) Reading especially prescient discussions from years before the AI boom greatly boosted my respect for particular thinkers and the broader community.
What features could improve the UX? I have a few rough ideas:
* Encourage and facilitate the creation of "current beliefs" pages for each user, where they can detail their present-day positions on certain subjects (AI timelines are a relatively simple example), recount their intellectual journey and past updates, reference particularly impactful posts and threads, etc.
  * This would have been, and will be, super useful for me - there are many thinkers I respect transitively, wit