I'm curious whether it's possible to turn a resource, like a math textbook, into flashcards in such a way that you memorize the flashcards themselves, enabling a purely mental flashcard review.
Somewhat related to this: I would be very happy if Anki flashcards, or some higher-level flashcard representation (one that could yield an Anki deck, e.g. via a genanki workflow), were more consistently extracted from textbooks and made available. I've wished for a reality in which, on data- or code-sharing platforms (e.g. Zenodo or GitHub), the majority of textbook authors and researchers shared their .tex files (for others to use, possibly with Anki) or any Anki cards associated with their work.
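As a sketch of what the extraction step might look like, suppose (hypothetically) that authors marked cards in their .tex source with a `\flashcard{front}{back}` macro; the pairs could then be pulled out with a regex before being handed to genanki. The macro name and the no-nested-braces assumption are both illustrative, not a real convention:

```python
import re

# Hypothetical convention: \flashcard{front}{back} embedded in a .tex file.
# This naive regex assumes neither argument contains nested braces.
CARD_RE = re.compile(r"\\flashcard\{([^{}]*)\}\{([^{}]*)\}")

def extract_cards(tex_source: str) -> list[tuple[str, str]]:
    """Return (front, back) pairs found in the LaTeX source."""
    return CARD_RE.findall(tex_source)

tex = r"""
\section{Derivatives}
\flashcard{Derivative of $\sin x$}{$\cos x$}
\flashcard{Derivative of $e^x$}{$e^x$}
"""

cards = extract_cards(tex)
```

From there, each pair could become a genanki.Note in a genanki.Deck, with the result written out as an .apkg file via genanki.Package.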
Thank you for sharing some thoughts. Some comments:
I've done something similar with my CLAUDE.md, including a line close to what you've written, but also the line "Please search for and consider other solutions beyond those I've suggested before writing any code." It seems likely to me that there are better ways to communicate this notion to Claude, though, and I've done less experimentation than I would like.
Upvoted on the basis of clarity, useful / mentoring tone, and the value of the suggestions. Thank you for coming back to this.
In a first-pass read, there is not much I would add, save for mentioning that I’d expect (1)-(4) to change from what they are now were they to actually be implemented in some capacity, given the complexities (jurisdictional resources, public desire, participation, etc…).
I have The Myth of the Rational Voter on my shelf, unread!
If I have any sufficiently useful or interesting ideas or comments regarding your remarks, I will add them here.
Agreed. This post captures the fact that, time and again, benchmarks in AI once perceived as insurmountable have been surpassed. Those not fully cognizant of the situation have been iteratively surprised. People, for reasons I cannot fully work out, will continue to engage in motivated reasoning against current and expected near-term AI capabilities and/or economic value, with some part of the evidence-downplaying consisting of shifting AGI definitions or capability-thresholds-to-impress (see moving goalposts). On a related note, your post also brings to mind the apologue of the boiling frog, lately w.r.t. scaling curves.
Although this is an old question, I want to provide an answer, since this is a topic that I am interested in and believe matters for GCR and X-Risk reduction, though it seems quite plausible that this field will radically transform under different levels of AI capability.
First, if the author of this post has updated their beliefs about the field of decline and collapse, or has compiled a resource list of their own, I would appreciate their sharing these, so I may engage with them.
Of note, I have not fully read some of these books and other resources, but I have at least skimmed all of them. There are resources I am not including, since I feel they are not worth the opportunity cost of the reader's time.
The following are not in priority order, but I have provided simple ratings of one to three ✯ indicating how valuable I believe each book is for thinking about collapse. Ratings come after titles so as not to prime the reader.
Books:
Web-Links:
Some further considerations
Anna's previous comment used the term "proto-model" and alluded to the greater dearth of formalization in this field. It is worth adding here that "this field" (which, at times, I have referred to as "cliodynamics", "studies in complex societies", "historical dynamics", "studies in collapse", "civilizational dynamics"), is a collection of different academic disciplines, each of which has different levels of quantitative rigor.
Many who have entertained theorizing about human societies and their rise and fall (even the notions of "rise" and "fall" are somewhat dubious) have seldom incorporated quantitative measures or models, though I have still found their work valuable.
The authors in the anthology How Worlds Collapse seem not to interact or collaborate much with those who study global catastrophic risk (e.g., those who would cite books such as X-Risk or Global Catastrophic Risks), which seems a loss for both fields. Those studying GCR and/or X-Risks have more readily (or seemingly so) adopted models and mathematics, a good canonical paper for the latter being Classifying Global Catastrophic Risks (2018), while those in the field of collapse are typically more ready to consider patterns across historical complex societies and the psychological dynamics relevant to the recovery of complex societies under forces of calamity.
Anyway, best wishes in your studies of human societies and their dynamics, including their decline.
This covers Toynbee, Spengler, Gobineau, and other historical figures in the field of "collapse" or "complex societies", including Peter Turchin. ↩︎
From their sites: Seshat: Global History Databank was founded in 2011 to bring together the most current and comprehensive body of knowledge about human history in one place. The huge potential of this knowledge for testing theories about political and economic development has been largely untapped. Our unique Databank systematically collects what is currently known about the social and political organization of human societies and how civilizations have evolved over time. This massive collection of historical information allows us and others to rigorously test different hypotheses about the rise and fall of large-scale societies across the globe and human history. ↩︎
I expect to post additional comments on this thread, but for now, w.r.t.
Sometimes the preferences people report or even try to demonstrate are better modeled as a political strategy and response to coercion, than as an honest report of intrinsic preferences.
has the author of this post read Private Truths, Public Lies: The Social Consequences of Preference Falsification (Kuran, 1997)? I've read the book but have not yet written a review, so I cannot comment too critically on its value in the present conversation, but I believe the author should at minimum check it out or skim its table of contents. To pull a better overview (from Goodreads) than I can provide offhand:
Preference falsification, according to the economist Timur Kuran, is the act of misrepresenting one's wants under perceived social pressures. It happens frequently in everyday life, such as when we tell the host of a dinner party that we are enjoying the food when we actually find it bland. In Private Truths, Public Lies Kuran argues convincingly that the phenomenon not only is ubiquitous but has huge social and political consequences. Drawing on diverse intellectual traditions, including those rooted in economics, psychology, sociology, and political science, Kuran provides a unified theory of how preference falsification shapes collective decisions, orients structural change, sustains social stability, distorts human knowledge, and conceals political possibilities.
A common effect of preference falsification is the preservation of widely disliked structures. Another is the conferment of an aura of stability on structures vulnerable to sudden collapse. When the support of a policy, tradition, or regime is largely contrived, a minor event may activate a bandwagon that generates massive yet unanticipated change.
In distorting public opinion, preference falsification also corrupts public discourse and, hence, human knowledge. So structures held in place by preference falsification may, if the condition lasts long enough, achieve increasingly genuine acceptance. The book demonstrates how human knowledge and social structures co-evolve in complex and imperfectly predictable ways, without any guarantee of social efficiency.
Private Truths, Public Lies uses its theoretical argument to illuminate an array of puzzling social phenomena. They include the unexpected fall of communism, the paucity, until recently, of open opposition to affirmative action in the United States, and the durability of the beliefs that have sustained India's caste system.
Thank you for the typo-linting.
To provide a better response to your first question than the one I’ve provided below, I would need to ask him to explain more than he has already.
From what he has remarked, the first several meetings were very stressful (for most people, this would, of course, be the case!) but soon he adjusted and developed a routine for his meetings.
While the routine could go off course depending on the responsiveness of the individual(s) present (one staffer kept nodding yes, had no questions, and then 20 minutes later remarked that they would “take into account” what had been said; another remarked that the US simply needed to advance AI as much as possible, and that safety measures that stifled this were not to be prioritized; these statements are paraphrased from my friend), I get the sense that in most instances he has been able to first provide adequate context on his organization and on the broader situation with AI (I am not sure which of these two comes first or how long each is discussed).
Concerning his description of the AI context, I am not sure how dynamic it is; I think he mentioned querying the staffers on their familiarity, and his impression was that most staffers listened well and thought critically about his remarks.
After the aforementioned descriptions, my friend begins discussing measures that can be taken in support of AI Safety.
He mentioned that he tries to steer framings away from those invoking thoughts of arms races or weaponization, and instead focuses on uncontrollability “race to the bottom” scenarios, since the former framings, as presented to staffers by others, have in his experience sometimes downplayed concerns about catastrophe and increased focus on further expanding AI capabilities to “outcompete China”.
My friend’s framing strategies seem appropriate and he is a good orator, but I did not come away from our conversation with the nuanced suggestions I wanted, as I’ve not thought enough about which AI-risk framings and safety proposals to have ready for staffers, and as I believe talking to staffers qualifies as an instance of the proverb “an ounce of practice outweighs a pound of precept”.
I am someone who has participated informally in self-experimentation and in quantified-self activities.
Reading work by Gwern (e.g. this piece) and by niplav (e.g. this piece), alongside writing in the fields of medicine (e.g. this paper) and psychology (e.g. this paper), has been both illuminating and motivating.
I am trying to formalize my data-recording for SE/QS and have found that the following approaches have not "stuck":
- .csv, .txt, or .json files

One approach that has given me optimism and feels relatively more entertaining (i.e. that seems to "stick" better) has been the following: once per week or month I generate a LaTeX PDF document (using Python to generate the document, with configuration and command-line arguments) with checkboxes next to categorical variables I want to track (e.g. whether I have consumed at least 2 liters of water; these are loaded from a config.json file), and at the end of the day I fill in the boxes with a pencil to record what I did. The goal is to eventually be able to scan the PDF sheets and automatically extract the data for analysis.

Here is a section of an example sheet: here. Here are some of the variables: □ Reading for at least an hour and a half. □ Studying Anki cards at least once. □ Bathing at least once. □ Making at least one comment on an on-line forum. □ Consuming at least a coffee’s worth of caffeine. □ Going to bed before 1:00 AM EST. □ Drinking at least 2 liters of water. □ Meditating for at least 15 minutes.
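A minimal sketch of the sheet-generation step, assuming a hypothetical config.json schema with a "variables" list (my actual configuration and command-line handling are more involved), might look like:

```python
import json

def render_sheet(config_json: str) -> str:
    """Render a LaTeX checklist body from a JSON list of variable names.

    Assumes a hypothetical schema {"variables": ["Reading for ...", ...]}
    and that the surrounding preamble loads the amssymb package, which
    provides the \\Box symbol used for the empty checkboxes.
    """
    variables = json.loads(config_json)["variables"]
    lines = [r"\begin{itemize}"]
    for v in variables:
        # One hand-fillable checkbox item per tracked variable.
        lines.append(rf"  \item[$\Box$] {v}")
    lines.append(r"\end{itemize}")
    return "\n".join(lines)

config = json.dumps({"variables": [
    "Studying Anki cards at least once.",
    "Drinking at least 2 liters of water.",
]})
print(render_sheet(config))
```

The emitted itemize block would be wrapped in a full LaTeX preamble and compiled to PDF once per week or month; the scan-and-extract step would then only need to locate each \Box and decide whether it is filled.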