Relevant SSC: Setting the Default
Can second the not-driving-a-car commute thing. A long commute by bus I used to have amounted to 5 km of walking going to and from the bus stops, with optional podcast listening, and an hour of focused book-reading time every day. It made a big extra dent in my schedule, but walking and book-reading are both things I'd want to be doing regularly in any case.
Given that the rules partially exist to keep outsiders with guidebooks from barging in and ruining the party, probably not very good ones. I guess someone might write a somewhat tongue-in-cheek anthropology book like Kate Fox's Watching the English, but absorbing that would require a sort of relaxed attitude, and reading it with a rigid "I must obey the precepts to succeed" mindset probably wouldn't end well. Productively learning this sort of stuff from books instead of social immersion is its own kind of extra hard mode, whose nature is very rarely explicated because book-learning unwritten rules is taboo.
What's your general career plan here? If you just want to learn academic results and apply them by, e.g., becoming a data scientist (not an actual scientist, you can tell because there's "scientist" in the name), you should be fine. With basically anything up to a master's degree followed by going off to work in industry, you can be completely oblivious. Are you planning on going into something like math, where you can basically be a crazy hermit and still do groundbreaking stuff? Again, you can just go do you. The point where you really need to know the local culture is if you're trying to build a regular academic career, where you are employed as a researcher in an academic institution, are publishing frequently in peer-reviewed journals, and are trying to get on a tenure track for professorship. So, is this specifically what you're after?
There might not really be good answers to this. Most rationality stuff consists of meta-level practices to apply to object-level activities, and "daily/routine practice" is very much something on the object level. The idea that there's a practice regimen for rationality that looks like the existing school curriculums we've all been trained to expect feels related to the failed idea (see also) that we could use the existing school curriculum model to teach critical thinking.
So the boring advice might be: have an object-level craft of the sort you might study for a university degree (medicine, law, engineering, science, pie-making) that you are learning. Try to get very good at it. Study rationality techniques as tools to help you get very good at the object-level craft. Skipping the object-level craft is like trying to go from Kegan stage 3 to Kegan stage 5, which doesn't work if you skip stage 4.
Still worse than a computer, since a tape can't take feedback on which words you've already learned well. It only works if your learning rates for the different words are what the tape maker expected.
Also, this won't work for the endgame of spaced repetition, where a well-practiced card might pop up a year after it was last reviewed. The long-lived cards are going to be a very eclectic mix. Then again, school courses usually don't expect you to retain the material from each course past the duration of the course, so this isn't that much of a shortcoming for education.
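To illustrate why a fixed tape can't replicate this, here's a minimal sketch of adaptive review scheduling, loosely modeled on the SM-2 family of algorithms (the exact constants here are illustrative assumptions, not any particular program's values). The point is that each card's next interval depends on the learner's own grades for that card, so two cards that start identically diverge as soon as the feedback differs:

```python
# Minimal sketch of adaptive spaced-repetition scheduling, loosely based on
# SM-2-style algorithms. Constants (0.1 ease step, 1.3 floor) are illustrative.

def next_interval(interval_days, ease, grade):
    """Return (new_interval_days, new_ease) after one review.

    grade: 0 = forgot, 1 = hard, 2 = good, 3 = easy.
    """
    if grade == 0:
        # Lapse: restart the card, but remember it was hard (lower ease).
        return 1, max(1.3, ease - 0.2)
    # Grades above/below "good" nudge the ease factor up/down.
    new_ease = max(1.3, ease + (grade - 2) * 0.1)
    return round(interval_days * new_ease), new_ease

# Two cards start with identical state but get different feedback.
easy_card = (1, 2.5)
hard_card = (1, 2.5)
for _ in range(4):
    easy_card = next_interval(*easy_card, grade=3)
    hard_card = next_interval(*hard_card, grade=1)

# The easy card's interval grows much faster than the hard card's,
# which is exactly the per-learner divergence a fixed tape can't encode.
print(easy_card[0], hard_card[0])
```

After four reviews the easy card is already scheduled out roughly two months while the hard card sits at a few weeks; a tape would have to commit to one schedule for both.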
The Black Mirror episode "White Christmas" isn't explicitly based on Hanson's stuff, but it has a very similar premise.
We're already drowning in inert content, and I don't see how adding more would help. We've had a way to get something like the martial art of rationality since ancient Athens: structured interaction with an actual human mentor who knows how to engage with the surrounding world and can teach and train other people face to face. This thing isn't mechanizable, like arithmetic or algebra is, so simple interactive programs are not going to be much better than a regular book. It also isn't a non-mechanizable but still clearly delimited skill like wood-carving or playing tennis, where you can at least say you're unquestionably doing the thing when going it alone, even if you'd do better with some professional training. What you're trying to teach is the human ability to observe an unexpected situation, make sense of it, and respond sensibly to it at a level above baseline adult competency, and the one way we know how to teach that is to have someone competent in the thing you're trying to learn whom you can interact with.
Like, yeah, maybe this will help, but I can't help feeling that people are compulsively eating ice and this is like planning an ice shavings machine for your kitchen instead of getting an appointment for having your blood work done.
"What can we know about what happens to other people when they practice meditation" is a different (and important) question from "what is the best mindset for personally making progress with the practice of meditation" though.
The problem is that we think statements have a somewhat straightforward relation to reality because we can generally make sense of them quite easily. In reality, that ease comes from a lot of hidden work our brains do, being smart on the spot every time they need to fit a given sentence to a given state of reality. Nobody really appreciated this until people started trying to build AIs that do anything similar and repeatedly ended up with systems that had no ability to distinguish between the realistically plausible and incoherent nonsense.
I'm not really sure how to communicate this effectively beyond gesturing at the sorry history of the artificial intelligence research program from the 1950s onwards despite thousands of extremely clever people putting their minds to it. The sequences ESrogs suggests in the sibling reply also deal with stuff like this.
Your first problem is that you need a theory of just how statements relate to the state of the world. Have you read Wittgenstein's Philosophical Investigations?
Overall, this basically sounds like analytic philosophy plus 1970s-style AI. Lots of people have probably figured this would be a nice thing to have, but once you drop out of the everyday understanding of language and try to get to the bottom of what's really going on, you end up in the same morass that AI research and modern philosophy are stuck in.