Here's my vote for Epistemic Roguelikes. It seems like a riskier path, but with a lot more upside.
The usual answer to this question is to do whichever one you're personally the most excited to work on. If the question of what LW people would like happens to be relevant to that, I will also chime in alongside Drake Morrison that epistemic roguelikes sound really cool.
I vote that you work on horizon modelling and automated risk model updates with your pal technicalities
Understand is a puzzle game that is basically what you describe as an Epistemic Roguelike, in that you deduce a different ruleset for every set of levels. It's not an actual roguelike, though: it's purely focused on the puzzle of figuring out the rules, which are confined to a simple grid of shapes.
I would say that, as something exploring a relatively unplumbed space of content, Epistemic Roguelikes are more likely to be interesting in an era when AI can produce average copies of any existing content but still struggles with new concepts.
Correspondingly, I think D&D.Sci might be less useful now that you could plausibly automate a large chunk of the process of creating scenarios? That's my impression from checking out the posts, although I haven't actually completed one.
I find myself, for the first time in a while, with enough energy and stability to attempt nontrivial projects outside my dayjob. Regarding the next ~10 months, I’ve narrowed my options to two general approaches; since LW readers are the expected beneficiaries of both, I’d like the LessWrong hivemind’s help choosing between them.
The first option is making more D&D.Sci scenarios, running them on a more consistent schedule, crossposting them to more platforms, and getting more adventurous about their form and content. The second is creating Epistemic Roguelikes, a new[1] genre of rationalist videogame about deducing and applying a newly-randomized ruleset each run.
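To make the "newly-randomized ruleset each run" idea concrete, here's a minimal sketch of what such a game's core loop might look like. Everything here (the rule pool, the combat framing, the function names) is hypothetical illustration, not a description of any existing design: each run secretly samples a subset of rules, and the player's only way to learn them is by probing the world and observing outcomes.

```python
import random

# Hypothetical pool of candidate rules; each run activates a hidden subset.
RULE_POOL = [
    ("fire beats ice",   lambda a, d: a == "fire" and d == "ice"),
    ("ice beats plant",  lambda a, d: a == "ice" and d == "plant"),
    ("plant beats fire", lambda a, d: a == "plant" and d == "fire"),
    ("fire beats plant", lambda a, d: a == "fire" and d == "plant"),
]

def new_run(seed=None):
    """Randomize the hidden ruleset for this run (never shown to the player)."""
    rng = random.Random(seed)
    return rng.sample(RULE_POOL, k=2)

def attack_wins(ruleset, attacker, defender):
    """The only feedback the player gets: did the attack succeed?"""
    return any(rule(attacker, defender) for _, rule in ruleset)

# One run: the player experiments to deduce which rules are active.
ruleset = new_run()
for attacker, defender in [("fire", "ice"), ("ice", "plant"), ("plant", "fire")]:
    print(attacker, "vs", defender, "->", attack_wins(ruleset, attacker, defender))
```

The epistemic content lives in the gap between `RULE_POOL` (the player's hypothesis space) and the hidden sample: skilled play means designing probes that distinguish rulesets efficiently, not memorizing a fixed solution.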
Prima facie, prioritizing D&D.Sci this year (and leaving more speculative aspirations to be done next year if at all) seems like the obvious move, since:
However:
Any thoughts would be appreciated.
As far as I know; please prove me wrong!
I tried a handful of them on chatgpt-thinking; it handled tough-but-straightforward ones like the original better than the average human player did at the time, but fumbled easy-but-tricky ones like these two.
I’m pretty bearish on AI by LW standards, so I actually don’t think this is likely, but the possibility perturbs me.