Re aliens - Fair enough. Some very simple alien, perhaps the Vulcan equivalent of a flatworm, may be well within our capability to understand. Is that really what we're interested in?
Re machine learning - The data for machine learning is generally some huge corpus. The question is whether we're even capable of understanding the data in something like the manner the algorithm does. My intuition says no, but it's an open question.
I'd like to add two pieces of evidence in favor of the weak unlearnability hypothesis:
(1) Humpback whales have songs that can go on for days. Despite decades of study, we don't really understand what they're saying.
(2) The output of machine learning algorithms (e.g. Google's Deep Dream) can be exceedingly counterintuitive to humans.
Whales are our distant cousins and humans created machine learning. We might reasonably suppose that actual aliens, with several billion years of completely independent evolution, might be much harder to understand.
We actually do pretty much the opposite of that in the U.S. Student loans have a Federal guarantee, so the incentive is to sign people up for as much education as possible. If they succeed, great. If they fail, they'll be paying off the loans until they die at which time Uncle Sam will pay the balance. With compounding interest, the ones who fail are the most profitable.
we don’t have a step-by-step checklist to follow in order to use informal mathematical arguments
If we did, the checklist would define a form and the mathematical arguments would become formal.
Terence Tao uses the term post-rigorous to describe the sort of argument you're talking about. It's one of three stages. In the pre-rigorous stage, concepts are fuzzy and expressed inexactly. In the rigorous stage, concepts are precisely defined in a formal manner. In the post-rigorous stage, concepts are expressed in a fuzzy and inexact way f... (read more)
the current generation of physicists seems to have lost the way in some important (but hard to pin down) sense
My impression of physics (1) post-1970-or-so is that it's lost the balance between theory and experiment that makes science productive. Hypotheses like "superstring theory" or "dark matter" are extremely difficult to test by experiment (through no fault of the physicists' own). Physicists have tried to make up for it with improvements in theory, but without experiments bringing discipline to the process it doesn't quite work.
In one s... (read more)
I think most of this is just aging, and is normal. I associate that "challenge the world as hard as you can" mentality with testosterone and with teenage boys (who are very high in testosterone). It's a good mindset to have when you're starting out and need to make a place for yourself in the world.
At 29, you have (hopefully) established yourself a bit but are still young enough to be attractive to women. Your instincts are probably telling you (through the medium of lowered testosterone) that it's time to settle down and raise some kids. Circle of life and all that.
I should mention that, like many people who were raised religious and lost their faith, I miss it. It was comforting to believe that the world was in good hands and that it all could work out in the end. I had friends at church. Many of them were attractive females.
Losing my religion felt less like an act of will and more like figuring out the answer to a math problem. It wasn't something I wanted, rather the opposite. I fought it for a while, but there's no cure for enlightenment. I've tried to go back to church, but it just doesn't work when you don't believe in it. I no longer see God there, just some schmuck wearing felt.
I guess this can take a pretty nasty and irrational form, but I see this continuous with other benign community bonding rituals and pro-social behavior (like Petrov day or the solstice).
I agree, I just think that community bonding rituals have such a strong tendency to lead to ingroup-vs-outgroup conflicts that I am much more skeptical of the whole idea than you seem to be.
Part of this is my perception that generally neither group is entirely right about every issue, and therefore no group I pick will have my wholehearted support. This is acce... (read more)
I think most people on LW fall into one of two groups:
Just for context, I'd like to point out that the SAT has been revised and renormed since 1994 (twice IIRC). Current test scores are not straightforwardly comparable to the scores discussed in the book and in the post.
One of the most important decisions in war is when to stop. Humans evolved fear to solve this problem; there's a point at which soldiers will de-escalate the conflict (i.e. flee the battlefield rather than stay and die). However, signalling fear makes you a target so people don't discuss it candidly. I am concerned that military leaders may, in the calm of the office, design AI that has no provisions for de-escalating conflict; this seems very likely to lead to nuclear war.
Perhaps. OTOH, even the Atari 2600 was already a consumer-grade mass-market product; gene sequencing is only now getting there.
To be honest, there are a few other times and places where technological progress has been even faster, like Japan between 1865 and 1945 or Shenzhen between 1975 and 2020. Nevertheless, such meteoric rises are a vanishingly small part of human history. There are lots of places and industries where the last 40 years have seen only very modest improvements, quite a few where the trend has been one of modest decline, and ... (read more)
Moore's Law had processing power doubling every 18 months to two years for decades; the Atari 2600 of my youth had 128 bytes of RAM; the comparably-priced machine I'm typing this on has 8 billion. No other technology has ever improved by seven orders of magnitude in four decades AFAIK. The economic shifts that came with that made California (and more specifically the Bay Area) what it is today, and my point was that California is highly atypical.
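The arithmetic in the paragraph above can be checked on the back of an envelope (the 8 GB figure and the 40-year window are my rough numbers, not exact specs):

```python
import math

# Atari 2600 (1977): 128 bytes of RAM; a comparably-priced machine today: ~8 GB.
old_ram = 128
new_ram = 8_000_000_000

ratio = new_ram / old_ram
orders = math.log10(ratio)

# Doublings implied by Moore's Law over ~40 years at one doubling per 2 years,
# and at the faster 18-month cadence.
slow = 2 ** (40 / 2)
fast = 2 ** (40 / 1.5)

print(f"RAM ratio: {ratio:.2e} (~{orders:.1f} orders of magnitude)")
print(f"2-year doubling over 40 years:   {slow:.2e}x")
print(f"18-month doubling over 40 years: {fast:.2e}x")
```

The observed RAM growth (~7.8 orders of magnitude) lands between the two doubling cadences, consistent with the "18 months to two years" range.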
On the other hand, I totally agree with the view that progress has overall slowed down. ... (read more)
It seems like many disagreements ultimately stem from different estimates about the options available. Examples:
I basically agree. A heuristic lets System 1 function without invoking (the much slower) System 2. We need heuristics to get through the day; we couldn't function if we had to reason out every single behavior we implement. A bias is a heuristic when it's dysfunctional, resulting in a poorly-chosen System 1 behavior when System 2 could give a significantly better outcome.
One barrier to rationality is that updating one's heuristics is effortful and often kind of annoying, so we always have some outdated heuristics. The quicker things change, the worse it gets. Too much trust in one's heuristics risks biased behavior; too little yields indecisiveness.
I think we need to distinguish between some related things here:
I can think of a few skills that, while not "rationality" in themselves, make it much easier to reason effectively. Numeracy is one. The innumerate can't really see the difference between a million, a billion, a trillion, and a godzillion.
It helps to have, in memory, a set of references to compare to. For example, there are about a third of a billion people in the United States. Therefore a billion dollars is roughly $3 each, a trillion dollars is roughly $3,000 each, and a million dollars is roughly nothing (0.3 cents) each.
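The per-capita reference numbers above are easy to verify (333 million is my round figure for the US population):

```python
# Rough US population: about a third of a billion people.
us_population = 333_000_000

for label, amount in [("$1 million", 1e6),
                      ("$1 billion", 1e9),
                      ("$1 trillion", 1e12)]:
    per_person = amount / us_population
    print(f"{label}: about ${per_person:,.2f} per person")
```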
A working knowledge of history is also helpful, as is a rough understanding of manufacturing.
Varying the problem helps, as does varying your approach to the problem. Studying math generally involves many years of working progressively more complex problems. But this is different from a "kata", which is a set of moves rigorously repeated in a specific order and invariant manner*.
Psychologically speaking, a kata functions to take a set of moves that the student consciously understands and build muscle memories that can execute the moves effectively at the sub-second timescale of a fight. Reasoning uses different cognitive systems, althou... (read more)
Perhaps, but it would surprise me if you don't have hundreds of common sudoku patterns in your memory. Not entire puzzles, but heuristics for solving limited parts of the puzzle. That's how humans learn. We do pattern recognition whenever possible and fall back on reason when we're stumped. "Learning" substantially consists of developing the heuristics that allow you to perform without reason (which is slow and error-prone).
Math problems are like "katas" for rationality. The difference is that, once you've solved a problem once with rationality, you can solve it again much more easily from memory without engaging your rational facilities again. Therefore you don't get the benefit from repeating the same exercises again and again.
There have been dozens of stories like that; George W Bush got elected on the strength of his education "reforms". Long-term experience justifies a strong belief (confidence over 90%) that the results will ultimately turn out to be due to a combination of selection bias (cherry-picking) and test fraud. The links are just examples; I've been offhandedly following education research and reform for decades. There's a lot more evidence where that came from, and it tells a very consistent story.
Education simply isn't a green field - the space ... (read more)
The evidence indicates that throwing more effort/money at how we do education does not improve IQ scores (for which SAT scores are a thinly-veiled proxy, despite the cosmetic changes made to the SAT methodology every decade or so) or student outcomes. Attempts to rethink education have failed. And IQ is generally useful enough that it is strongly correlated with outcomes we want.
If you're used to the tech sector with rapid change every decade, moving into the human services sector is going to be a very depressing experience. The low-... (read more)
Pretty much. If an intervention is well outside of the set of experiences of your population, there's probably a reason for that. Perhaps it's just too new, but it's likely that it's inconsistent with the way the culture usually functions (its values as actually implemented) and/or has fairly obvious side effects.
The simplest and most useful answer is that heritability tells you the amount of variation that environmental factors don't control*. Traits with very high heritability** are generally going to be worse targets for intervention than traits with low heritability.
*In the range of environments over which the data was collected. The heritability of a trait as measured in Somalia or North Korea may be much lower than as measured in America. You can interpret this as meaning that there is much more hope for useful intervention in Somalia... (read more)
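The "variation that environmental factors don't control" reading can be made concrete with a toy simulation (the variance numbers below are invented purely for illustration):

```python
import random

random.seed(0)
n = 100_000

# Toy model: trait = genetic component + environmental component.
# Heritability here is the share of trait variance attributable to genes.
def simulate(genetic_sd, env_sd):
    genes = [random.gauss(0, genetic_sd) for _ in range(n)]
    env = [random.gauss(0, env_sd) for _ in range(n)]
    trait = [g + e for g, e in zip(genes, env)]
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return var(genes) / var(trait)

# Uniform environments (e.g. a rich country): heritability looks high.
print(f"narrow range of environments: h2 = {simulate(1.0, 0.5):.2f}")
# Widely varying environments (famine vs plenty): heritability drops.
print(f"wide range of environments:   h2 = {simulate(1.0, 2.0):.2f}")
```

Same genes in both runs; only the spread of environments changes, which is why the same trait can be highly heritable in one country and not in another.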
True, but "high and stable heritability" across hundreds (perhaps thousands) of attempted interventions is a pretty good description of the real-world results of education research and practice. See Freddie DeBoer's "Education Doesn't Work" for a brief treatment or Kathryn Paige Harden's The Genetic Lottery for a book-length version.
So how should Armenia have retained Nagorno-Karabakh?
Use the Iraqi playbook. In the kinetic phase of the war, Armenia is probably hopeless. So make only a token show of resistance.
Before Azerbaijan takes over NK, scatter weapons caches to your co-ethnics. Train NK locals as insurgents. Make sure your border is permeable to insurgents; give them a place to rest, recover, and prepare.
Don't let Azerbaijan consolidate its control. Use ambushes, snipers, and IEDs to discourage Azerbaijani troops from leaving their compounds.... (read more)
The other reason, as noted by Clausewitz, is that the enemy leader is the only person who can order their army to surrender. If you kill them, victory gets much harder to achieve.
In the first case you cite, you've misidentified your enemy. You're not fighting the nation, you're fighting some subset of it. The usual response is to identify a significant subset that opposes your enemy subset and supply them weapons. Be careful - a lot of Afghan anti-American insurgents started out as US-funded anti-Soviet insurgents. The enemy of your enemy often stops being your friend when your first enemy has fled.
For the second case - the enemy is probably not stupid or politically naive (they're leading a country, after... (read more)
Once the enemy's tanks are rolling, the war will be decided in a matter of days or weeks -- no time to go about changing the cultural attitudes of an entire population!
Contemporary war happens in two phases. The first phase involves tanks and planes and lasts days or weeks. The second phase involves putting boots on the ground and asserting the victor's will over the victim. As you may imagine, the second phase involves a lot of human rights abuses.
America is great at the first phase, but is generally unwilling to admit that the second ph... (read more)
Whoops, this was meant as a response to the post, not ChristianKI's comment.
I think you may overestimate how much control over an enemy's internal politics you can reasonably expect. The enemy is going to be as hardened as possible against your influence and will assuredly establish strong social norms against yielding to your influence, for values of "strong" that look like "succumbing to enemy pressure is treason, punishable by death". Nations pull together in war.
Until roughly 1980, US corporations did lots of (paid) training. Some still do; McDonalds operates Hamburger University. They found that a lot of new hires left the company soon after training - the companies couldn't capture the value of the training very well. Because of that they shifted toward hiring college graduates (pre-trained for general skills, if not for company specifics (which don't travel well anyway)) and, later, unpaid internships.
IQ tests are designed to produce a bell curve with a mean at 100 and a standard deviation of 15. That's inherent to the definition of IQ. Actual implementations aren't perfect, but they're not far off.
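That definition is just a linear rescaling of standardized scores; a minimal sketch (the raw scores are invented, and real tests map percentiles onto the normal curve rather than using a simple z-score like this):

```python
import statistics

# Hypothetical raw test scores from a norming sample.
raw_scores = [31, 42, 47, 50, 53, 55, 58, 61, 66, 74]

mean = statistics.mean(raw_scores)
sd = statistics.pstdev(raw_scores)

# IQ by definition: rescale so the mean is 100 and one SD is 15 points.
def iq(raw):
    return 100 + 15 * (raw - mean) / sd

for raw in (mean, mean + sd, mean - 2 * sd):
    print(f"raw {raw:.1f} -> IQ {iq(raw):.0f}")
```

By construction the sample mean maps to 100, one SD above it to 115, and two SDs below it to 70, whatever the raw-score units are.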
This isn't really my field, and I see your point. The poster asked for other studies so I linked a study I'd recently seen. It's less about me endorsing the study than about trying to provide an entry point into the relevant literature.
Can Super Smart Leaders Suffer From Too Much of a Good Thing? The Curvilinear Effect of Intelligence on Perceived Leadership Behavior and references therein.
Fair enough. I'm a chemist by training, so I described what I know.
Actually, when these theories are in competition researching phlogiston looks exactly like researching the new chemistry. What I mean is that even scientists holding on to the phlogiston theory will be aware of the results that favor the new chemistry and will design experiments specifically so that the results expected by one theory will be easily distinguishable from the predictions of the other theory. As evidence piles up, both theories will be modified by their adherents to explain the experimental results; the worse theory will require mo... (read more)
I'm suggesting there's a common denominator which all morally relevant agents are inherently cognizant of.
This naturally raises the question of whether people who don't agree with you are not moral agents or are somehow so confused or deceitful that they have abandoned their inherent truth. I've heard the second version stated seriously in my Bible-belt childhood; it didn't impress me then. The first just seems ... odd (and also raises the question of whether the non-morally-relevant will eventually outcompete the moral, leading to their extinc... (read more)
Indeed. A certain coronavirus has recently achieved remarkable gains in Darwinian terms, but this is not generally considered a moral triumph. Quite the opposite, as a dislike for disease is a near-universal human value.
It is often tempting to use near-universal human values as a substitute for objective values, and sometimes it works. However, such values are not always internally consistent because humanity isn't. Values such as disease prevention came into conflict with other values such as prosperity during the pandemic, with so... (read more)
I think a simpler way to state the objection is to say that "value" and "meaning" are transitive verbs. I can value money; Steve can value cars; Mike can value himself. It's not clear what it would even mean for objective reality to value something. Similarly, a subject may "mean" a referent to an interpreter, but nothing can just "mean" or even "mean something" without an implicit interpreter, and "objective reality" doesn't seem to be the sort of thing that can interpret.
I think sociopaths are likely underrepresented in the physical sciences. Sociopaths' defining method is the creation of social realities for others to inhabit, and it's very hard to use that when you're in the lab mucking with vacuum systems or running rats through mazes or whatever. Sociopaths are much more likely to be attracted to business or politics, with a few in the humanities. What sociopaths there are in science probably gravitate toward positions where they have control over tangible resources (e.g. grants).
OTOH, Aspergians like myself seem to be overrepresented in the physical sciences, partly because the relative distance from social constructs appeals to us.
I think "slacker" would be a better word than Rao's "loser" for this group. Their chief characteristic is that they don't work very hard because there's little benefit for them if they do. "Loser" seems needlessly pejorative - their actions are reasonable given their situations and risk tolerance (usually risk-averse). "Slacker" seems to define them better.
If you believe you attempt to do good because you truly like to do good, you're either a saint or you don't really know yourself well. Could be either, but I know which way I'd bet it.
This isn't a case where we need more research. This is a case where we have over a century of credible data(1) and the strongest theoretical constructs in psychology or any other social science. We just ignore the answers we have because nobody likes them. We'd rather believe that effort matters more than genetics.
(1) The US military in WWI and WWII tested tens of millions of men from broad swaths of society.
In the U.S., things people want are no longer gated by IQ scores because the Supreme Court has ruled that doing so violates the Civil Rights Act (Griggs v. Duke Power). Prior to 1971 IQ scores were commonly used in hiring decisions; my mother got the highest score her employer had ever seen and was fast-tracked to management.
Small quibble - general intelligence varies by age, and IQ tests are age-adjusted. But that's a small clarification of your basic claim, which is supported by the data as I understand it.
It's designed to be a normal distribution, but actual implementations don't work out exactly that way. For starters, the distribution is skewed leftward (a longer low-score tail) because brain damage is a thing and brain augmentation isn't (yet).
I would avoid athletic metaphors for IQ. People naturally tend to assume brains work like muscles. Muscles get stronger with exercise. Brains do not get smarter with "exercise" (study/puzzles/classes/etc). The data is clear - it just doesn't work that way.
That's true, but in the actual case society(1) wants to maximize equality of outcomes among its members, and we've spent decades looking for a method that will provide that outcome, and nothing we've come up with works(2). You might think we "ought" to be doing that, but the judgement is now between "we should continue to pursue this value, knowing that it's never worked before and we have no reason to believe that it will start working any time soon" and "we should pursue other values that seem to be achievable" - which is a very different judgement... (read more)