I’ve played around with Anki a bit, but never used it seriously because I was never sure what I wanted to memorize versus what I could just look up when needed.
I wonder if it might be better to look at it a different way: using a note-taking tool to leverage forgetting rather than remembering. That is, you could use it to take notes and then start reviewing cards more seriously when you’re going to take a test. Afterwards, you might slack off and forget things, but you’d still have your notes.
After all, we write things down so we don’t have to remember them.
Such a tool would be unopinionated about remembering things. You could start out taking notes, optimize some of them for memorization, take more notes, and so on. The important thing is persistence. Is this really a note-taking system you’ll keep using?
Teaching people to use such a tool would fall under “learning how to learn.” Ideally you would want them to take their own notes, see how useful it is for studying for a test, and get in the habit of using them for other classes. If not, at least they would know that such tools exist.
Back when I was in school, I remember there was a teacher who had us keep a journal, probably for similar reasons. Maybe that got some people to start keeping a diary, who knows? For myself, I got in the habit of taking notes in class, but I found that I rarely went back to them; it was write-only. I kept doing it anyway, because I thought the act of taking notes helped a bit with remembering the material.
You talked about rest, but have you looked into stretches, alternating hot and cold water baths for your wrists, ice packs, and so on? I had a different problem (tendonitis) and these helped.
This isn't my area of expertise, but I found this quote in an article about anticipating climate change in the Netherlands to be food for thought:
If we turn the Netherlands into a fort, we will need to build gigantic dikes, but also, and perhaps more importantly, gigantic pumping stations. This is essential, because at some point we will need to pump all of the water from the Rhine, Meuse, Scheldt and Ems – which by that time will be lower than sea level – over those enormous dikes. The energy costs will be higher – but that is not the only problem, because when the enormous pumping stations pump out the fresh water, the heavier salt water will seep in under the ground. You can get rid of the water, but not the salt, which is disastrous for agriculture in its current form. Instead of a fort, it may make more sense to talk about a semi-porous bath tub.
The South Bay infill wouldn't be the same - much smaller, creeks instead of rivers (though flooding is still a concern), and probably no agriculture. But I wonder what other engineering problems are swept under the rug by assuming that "modern engineering is well up to the task"? Thinking about such questions from a very high-level view often misses important details.
This is just spitballing, but it seems like it would be prudent to build up the new land higher than the new anticipated sea level. And the very expensive land around the infill might actually end up downhill, below sea level. Which might make drainage interesting.
Here's a nice introduction to causal inference in a machine learning context:
ML beyond Curve Fitting: An Intro to Causal Inference and do-Calculus
Here's an earlier paper by Judea Pearl:
Bayesianism and Causality, or, Why I am Only a Half-Bayesian
Hmm. I don't know anything about Galleani, but wanting to inspire the masses to action via "propaganda of the deed" seems incompatible with directly terrorizing the masses? (Excuses about "collateral damage" aside.)
It seems like this might have something to do with tribalism: who do the terrorists consider "us" versus "them"?
I'm not sure this will help in your case, but the usual framework for using causality in calculations seems to be that you have a DAG representing the causal connections between variables (without probabilities) plus statistical data. From this combination, some things can be calculated that couldn't be inferred from the statistical data alone.
The causal graph usually can't be inferred from the data. However, some statistical tests can disprove a proposed graph. For example, the graph might imply that certain variables are (conditionally) independent, and you can check that against the data.
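To make that concrete, here's a toy simulation (my own illustration, not from the framework itself): a hypothetical chain X → Y → Z implies that X and Z are correlated overall, but independent once you condition on Y. If the partial correlation came out far from zero, that would be evidence against the chain graph.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical causal chain X -> Y -> Z (linear Gaussian, for illustration only)
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
z = -1.5 * y + rng.normal(size=n)

# Marginally, X and Z are strongly correlated (dependence flows through Y)...
corr_xz = np.corrcoef(x, z)[0, 1]

# ...but the graph implies X is independent of Z given Y. A cheap check:
# regress Y out of both X and Z, then correlate the residuals.
beta_x = np.cov(x, y)[0, 1] / np.var(y)
beta_z = np.cov(z, y)[0, 1] / np.var(y)
partial_corr = np.corrcoef(x - beta_x * y, z - beta_z * y)[0, 1]

print(f"marginal corr(X, Z): {corr_xz:.2f}")   # clearly nonzero
print(f"partial corr given Y: {partial_corr:.2f}")  # near zero, as the graph predicts
```

If the data instead showed a large partial correlation given Y, the chain graph would be falsified, even though no amount of data could have singled out this graph in the first place.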
Surveys are really hard to design correctly.
Remember, these were true/false questions, so 50% means no knowledge at all.
This isn't apparent from the data. A score of 50% could mean that nobody knows the answer and everyone is guessing randomly. Or it could mean that 50% of survey-takers know the right answer and the other 50% confidently believe the wrong one. Or something in between. Without more information, we can't tell these scenarios apart.
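A toy simulation makes the ambiguity concrete (the population size and split are made up for illustration): two very different populations produce the same aggregate score on a true/false question.

```python
import random

random.seed(1)
n = 100_000  # hypothetical number of survey-takers, one true/false question

# Scenario A: nobody knows the answer; everyone guesses with a coin flip.
score_guessing = sum(random.random() < 0.5 for _ in range(n)) / n

# Scenario B: half know the right answer, half confidently pick the wrong one.
# No guessing at all, yet the aggregate score comes out the same.
score_split = (n // 2) / n

print(f"all guessing:   {score_guessing:.2f}")  # ~0.50
print(f"50/50 split:    {score_split:.2f}")     # 0.50
```

Both scenarios land at 50%, so the headline number alone can't distinguish "total ignorance" from "widespread confident misconception."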
I'd also argue that three of the questions were ambiguous or uncertain:
Part of test-taking ability seems to be selectively ignoring ambiguity if you think the people who designed the test weren't testing for that edge case.