All of alkexr's Comments + Replies

I'm not an expert either, and I won't try to end the F-35 debate in a few sentences. I maintain my position that the original argument was sloppy. "F-35 isn't the best for specific wars X, Y and Z, therefore it wasn't a competent military decision" is a non sequitur. "Experts X, Y and Z believe that the F-35 wasn't a competent decision" would be better in this case, because that seems to be the real reason why you believe what you believe.

Generally, in security, threat modelling is important. There's the saying "Generals always fight the last war", which is about a common mistake in militaries: not doing enough threat modeling and not investing in the technology that would actually help against the important threats. There are forces at play where established military units aren't looking for new ways of acting. Pilots want planes that are flown by pilots. Defense contractors want to produce weapons that match their competencies. I do see the question of whether a military is able to think well about future threats and then invest money into building technology to counter those threats as an important aspect of competency. This is not me just copying the position from someone else; I have a model fed by what I read, and which I apply. The argument I made also seems to be made by military generals: the F-35 program is about aircraft we don't need. The main threat related to countering China is defending Taiwan (and hopefully in a way where deterrence prevents the war from happening in the first place).

EDIT: If you were to argue that the Navy already holds the correct position here because Michael Gilday is advocating it: if there were a hypercompetent faction in the military, that group should have no problem exerting its power to get defense contractors to produce the weapons that high military leaders consider desirable to develop.

F-35s aren't the crucial component to winning the kind of wars fought in Iraq or Afghanistan. They also aren't the kind of weapon that is important for defending Taiwan. They are just what the air force culture wants, rather than the choice of a hypercompetent military.

I mostly agree with your perception of state (or something) competence, but this seems to me like a sloppy argument? True, the US does have to prepare for the most likely wars, but it also has to be prepared for all the other wars that don't happen precisely because it was prepared, a.k.a. deterrence... (read more)

I didn't just talk about Taiwan, I also talked about Afghanistan and Iraq. Those were wars that the US military essentially lost. The US military failed to create the kind of innovation that it would have needed to pursue those conflicts successfully. The F-35 also doesn't help with the Ukraine war. A key alternative to the F-35 plan would have been unmanned aircraft for the same job. What wars do you think the F-35 deters? When it comes to military matters, the beliefs I have come from reading some articles and interviews. I wouldn't be surprised if there are other people here with a lot more domain knowledge. Evaluating whether or not the military spends its money well is generally hard, as a lot of the relevant information is secret. Palmer Luckey from Anduril, who would know, seems to say that there was a severe underinvestment in autonomous vehicles. Alex Karp from Palantir also speaks about underinvestment by the military in AI.
  • You believe that there is a strong evolutionary pressure to create powerful networks of individuals that are very good at protecting their interests and surviving in competition with other similar networks
  • You believe that these networks utilize information warfare to such an extent that they have adapted by cutting themselves off from most information channels, and are extremely skeptical of what anyone else believes
  • You believe that this policy is a better adaptation to this environment than what anyone else could come up with
  • These networks have adapted by
... (read more)
My model actually considers information warfare to have mostly become an issue recently (10-20 years), and these institutions evolved before that. Mainly, information warfare is worth considering because: 1) it is highly relevant to AI governance, as no matter what your model of government elites looks like, the modern information warfare environment strongly indicates that they will (at least initially) see the concept of a machine god as some sort of 21st-century-style ploy; 2) although there are serious falsifiability problems that limit the expected value of researching potential high-competence decisionmaking and institutional structure within intelligence agencies, I'm arguing that the expected value is not very low, because the evidence for incompetence is also weak (albeit less weak), and that evidence of incompetence all the way up is also an active information battleground (e.g. the news articles about Trump and the nuclear chain of command during the election dispute and Jan 6th).
  1. You're saying that these hypothetical elites are hypercompetent to such a hollywoodical degree that normal human constraints that apply to everyone else don't apply to them, because of "out of distribution" reasons. It seems to me that "out of distribution" here stands in as a synonym for magic.
  2. You're saying that these hypothetical elites are controlling the world thanks to their hypercompetence, but are completely oblivious to the idea that they themselves could lose control to an AI that they know to be hypercompetent relative to them.
  3. It seems to me that lie
... (read more)
I think that "hypercompetent" was a poor choice of words on my part, since the crux of the post is that it's difficult to evaluate the competence of opaque systems. It's actually the other way around: existing as an inner regime means surviving many years of evolutionary pressure from being targeted by all the rich, powerful, and high-IQ people in any civilization or sphere of influence (and the model describes separate inner regimes in the US, China, and Russia which are in conflict, not a single force controlling all of civilization). That is an extremely wide variety of evolutionary pressure (which can shape people's development), because any large country has an extremely diverse variety of rich, powerful, and/or high-IQ people. The elites I'm describing are extremely in tune with the idea that it's worthwhile for foreign intelligence agencies to shape information warfare policies to heavily prioritize targeting members of the inner regime. Therefore, it's worthwhile for them to cut themselves off from popular media entirely, and to be extremely skeptical of anything that unprotected people believe.

I definitely think that molochian races-to-the-bottom are a big element here. I hope that the people in charge aren't all nihilistic moral relativists, even though that seems like the kind of thing that would happen. You definitely sensed/smelled real anxiety! The more real exposure people get to dangerous forces like intelligence agencies, the more they realize that it makes sense to be scared. I definitely think that the prospect of EA/LW stepping on the toes of powerful people, and being destroyed or damaged as a result, is scary. Specific strings of text in specific places can absolutely push all of history off course down into a chasm! As you might imagine, writing this post feels a bit tough for a bunch of reasons. It's a sensitive topic, there are lots of complicated legal issues to consider, and it's generally a bit weird to write publicly about an a

I immediately recognize the pattern that's playing out in this post and in the comments. I've seen it so many times, in so many forms.

Some people know the "game" and the "not-game", because they learned the lesson the hard way. They nod along, because to them it's obvious.

Some people only know the "game". They think the argument is about "game" vs "game-but-with-some-quirks", and object because those quirks don't seem important.

Some people only know the "not-game". They think the argument is about "not-game" vs "not-game-but-with-some-quirks", and ob... (read more)

(Meta: writing this in separate comment to enable voting / agreement / discussion separately)

If you want to make the case for tactical nuclear deployment not happening (which I hope is the likely outcome), I want to see a model of how you see things developing differently

I'll list a few possible timelines. I don't think any of these is particularly likely, but they are plausible, and together with many other similar courses of events they account for significant chunks of probability mass.

  1. Discontinuity in power in Russia.
  2. Internal turmoil or collapse in Rus
... (read more)
I admit I am biased since I am Korean, but I see the Korean War as an obvious model for the war in Ukraine. From June 1950 to June 1951, the situation developed rapidly, with a wildly moving front. I think the war in Ukraine is now in this phase. From July 1951 to July 1953, ceasefire negotiations were ongoing while the war was ongoing, while the front barely moved, and while lots and lots of soldiers were dying. For two years. With declassified Soviet papers, we now know why it took two years, and that even two years was lucky. The Korean War was a proxy war. It was a war between the US and the USSR, but North Korea, South Korea, the US, and China were fighting, and the USSR was not! Stalin was in favor of a war where others were fighting and the USSR was not. A ceasefire was achieved in 1953, after Stalin was dead. Similarly, the war in Ukraine is a proxy war between the US and Russia, but Russia and Ukraine are fighting, and the US is not! I think the US is in favor of a war where others are fighting and the US is not. With no Stalin whose death could end it, I fear the war will continue indefinitely.

On Nord Stream sabotage: 

  1. Looks like sabotage. Accidents very rarely look like this. (Very high confidence.)
  2. Probably by state actors. It seems like a task that requires significant resources and planning. Also there was plenty of military presence in the area, it's just hard to believe that non-state actors could perform something like this unnoticed. (Medium confidence.)
  3. It wasn't an ally of Germany. There is always a chance that you get caught / leave evidence, and after attacking the critical infrastructure of an ally no one will have a reason to tru
... (read more)
I will have to steal the term "proof by lack of imagination"! I have a slightly lower confidence in "no ally of Germany". Let's, hypothetically, say it was Poland. The Polish government opposed (very publicly) the making of Nord Stream in the first place. They have (very publicly) continued to criticize it and use language of the sort that Germany is doing something immoral by buying the gas. So, if they hypothetically had done it, and were caught, they could simply say "Yes. We have been telling you to do this publicly for a decade. We grew tired of waiting." (Then lean hard into a hopefully muted enough reaction from the USA/UK to move on. Plus, it's not like it would get them kicked out of the EU; I think that requires a unanimous vote, and while they remain in the EU I am not sure what Germany could really do to punish them.) Obviously sabotage and diplomatic pressure are different, but I think most people put them closer together than you might expect. (Legitimate diplomatic pressure could, for example, involve withdrawing money of greater value than the damage of the sabotage.)
The US has a habit of attacking critical German infrastructure. Until Snowden, nobody in the German government believed it, but afterward, we had to accept it. In this case, the information could be restricted so that fewer people know that the US attacked the infrastructure, so it's less likely that anybody finds out. Biden was also willing to publicly threaten to prevent Nord Stream 2 from being created. When it comes to taking actions to keep trust, threatening to act against Nord Stream 2 destroys trust.

Thus I claim we don't know whether people see dreams.

That's a pretty bold claim just a few sentences after claiming to have aphantasia.

Some of my dreams have no visuals at all, just a vague awareness of the setting and plot points. Others are as vivid and detailed as waking experience (or even more, honestly), at least as far as vision is concerned. Dreams can fall anywhere on a spectrum between these extremes, and sometimes they can even be a mixture (e.g. a visual experience of the place and an awareness of characters in that place that don't appear visually).

Yes, people do see dreams. I'm fairly certain I can tell the difference.

I guess there is a difference between whether all people see dreams and whether any person sees dreams. I don't mean to claim that dreams don't happen, but to ask whether sight is involved for all people. I guess a person who has had a non-visual dream can be certain it is not the case that every dream has been seen. I imagine a questionnaire with options like "A) I see my dreams in color, B) I see my dreams in black and white" would have different results if an option like "C) I don't see in my dreams" were present. The possible misdirection would be to not include C, based on a guess, rooted in a preconception of what dreams are, that nobody would report it.

Yes, I'm aware of all that, and I agree with your premises, but your argument doesn't prove what you think it does. Let's try to reductio it ad absurdum, and turn the same argument against the possibility of fast technological or scientific feedback cycles. 

If you live in a technologically backwards society (think bronze age), you can't become more advanced technologically yourself, because you'll starve spending your time trying to do science. The technology of society (including agriculture, communication, tools, etc.) needs to progress as a whole. ... (read more)

It seems pretty likely that moral and social progress are just inherently harder problems, given that you can't [...] have fast feedback cycles from reality (like you do when trying to make scientific, technological and industrial progress).

We can't? Have we tried? Have you tried? Is there some law of physics I'm missing? What would a real, genuine attempt to do just that even look like? Would you recognize it if it was done right in front of you?

Martin Sustrik:
The thing you are missing, I think, is the nature of the common knowledge which underpins society. Thanks to how it works, people can't achieve moral/societal progress individually. If you live in a violent society you can't get less violent by yourself. If you do, you'd get killed. If you live in a corrupt society you can't get less corrupt all by yourself. If you do, you'd be at a disadvantage to all the corrupt people. Society can progress only as a whole; thus the limit on the speed of progress is determined by the speed at which the majority is able to change their attitude (get less violent, less corrupt, etc.). And given how unlikely an average person is to change their attitude, social progress may move one funeral at a time.

There are multiple meanings of "progress" afoot here. Tabooing the word, my reading of your point is "moving toward any specific imagined future state of the world we all agree is good, is good; therefore moving forward is good".

(Another non-native having a go at it...)

When your advice both ways seems fine,
Calibrate, then make it rhyme.

Words of wisdom aren't so wise
Unless it's clear when it applies.
Timeless wisdom dies, in time
Unless you also make it rhyme.

more transparent to outsiders

There is the danger of it being more transparency-illuding instead. (Yeah, I just invented that term, but what did I mean by it?)

My gut feeling is that attracting more attention to a metric, no matter how good, will inevitably Goodhart it.

That is a good gut feeling to have, and Goodhart certainly does need to be invoked in the discussion. But the proposal is about using a different metric with a (perhaps) higher level of attention directed towards it, not just directing more attention to the same metric. Different metrics create different incentive landscapes to optimizers (LessWrongers, in this case), and not all incentive landscapes are equal relative to the goal of a Good LessWro... (read more)

The way this topic appears to me is that there are different tasks or considerations that require different levels of conscientiousness for the optimal solution. In this frame, one should just always apply the appropriate level of conscientiousness in every context, and the trait conscientiousness is just a bias people have in one direction or the other that one should try to eliminate.

This frame is useful, because it opens up the possibility to do things like "assess required conscientiousness for task", "become aware of bias", "reduce bias", etc. But it ... (read more)

Most people who hire practice psychographic nepotism — hiring people with personality traits like their own to do things like they do. Since those people are almost always managers, you get conscientiousness all the way down unless there's some kind of Human Resources black swan event. Of course, which traits are more beneficial will depend to some extent on the job. But what I rarely see are managers who say "I want to hire someone who complements my skill set." For example, conscientiousness doesn't seem to be correlated with innovation at the group level, so why add more of it unless you're hiring someone to do routine things well? I wouldn't be surprised if it's possible with all Big 5 traits to practice being more of the opposite of whatever your trait is and adopt it on an as-beneficial basis (e.g. have an extroverted person be more introverted, or an agreeable person be more disagreeable, in circumstances where one is the better strategy). That may be a strategy worth trying. I've never asked a highly conscientious person "Could you try coming to work looking a bit more disheveled? Try to stay out of Excel unless it's really really really important. Ohh, and leave some crumbs and an empty soda can on your desk when you go home for the day."

I think it's empirical observation.

The world doesn't just happen to behave in a certain way. The probability that all examples point in a single direction without some actual mechanism causing it is negligible.
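That intuition can be given a number. Under a null model where each of n independent examples is equally likely to point either way (a made-up fifty-fifty assumption, purely for illustration), the probability that all of them align is 2·(1/2)^n:

```python
# Probability that n independent 50/50 "examples" all point the same way,
# under a null model with no common mechanism behind them.
def p_all_aligned(n: int) -> float:
    # Two possible shared directions, each reached with probability (1/2)^n.
    return 2 * 0.5 ** n

for n in (5, 10, 20):
    print(n, p_all_aligned(n))
```

Already at 20 examples the null probability is about one in half a million, which is the sense in which a consistently aligned pattern demands an actual mechanism.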

I ended up using mathematical language because I found it really difficult to articulate my intuitions. My intuition told me that something like this had to be true mathematically, but the fact that you don't seem to know about it makes me consider this significantly less likely.

If we have a collection of variables {v}, and V = max(v), then V is positively correlated in practice with most U expressed simply in terms of the variables.

Yes, but V also happens to be very strongly correlated with most U that are e... (read more)

You have a true goal, U. Then you take the set of all potential proxies that have an observed correlation with U; let's call this set S. By Goodhart's law, this set has the property that any V ∈ S will with probability 1 be uncorrelated with U outside the observed domain.

Then you can take the set S' = {2U − V : V ∈ S}. This set will have the property that any V' ∈ S' will with probability 1 be uncorrelated with U outside the observed domain. This is Goodhart's law, and it still applies.

Your claim is that ... (read more)

[This comment is no longer endorsed by its author]
If we have a collection of variables {v}, and V = max(v), then V is positively correlated in practice with most U expressed simply in terms of the variables.

I've seen Goodhart's law as an observation or a fact of human society - you seem to have a mathematical version of it in mind. Is there a reference for that?

Your V' is correlated with U, and that's cheating for all practical purposes. The premise of Goodhart's law is that you can't measure your true goal well. That's why you need a proxy in the first place.

If you select a proxy at random with the only condition that it's correlated with your true goal in the domain of your past experiences, Goodhart's law claims that it will almost certainly not be correlated near the optimum. Emphasis on "only condition". If you specify further conditions, like, say, that your proxy is your true goal, then, wel... (read more)

V and V' are symmetric; indeed, you can define V as 2U-V'. Given U, they are as well defined as each other.
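The empirical version of the claim under discussion (a proxy V = max(v) that correlates with a simple U over the observed domain, yet decouples in the region an optimizer of V actually reaches) can be checked numerically. A minimal sketch, with U = mean(v) standing in for "most U expressed simply in terms of the variables":

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_samples = 10, 200_000
v = rng.normal(size=(n_samples, n_vars))

V = v.max(axis=1)   # the proxy: max over the variables
U = v.mean(axis=1)  # one "simple" function of the same variables

r_all = np.corrcoef(V, U)[0, 1]            # correlation over the whole domain
top = V > np.quantile(V, 0.99)             # where an optimizer of V ends up
r_top = np.corrcoef(V[top], U[top])[0, 1]  # correlation near the proxy's optimum

print(f"corr(V, U) overall:     {r_all:.2f}")
print(f"corr(V, U), top 1% V:   {r_top:.2f}")
```

The overall correlation is substantial, while the correlation inside the top percentile of V mostly evaporates: the "uncorrelated near the optimum" behavior being argued over.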

Some frames worth considering:

  • Strong Prune, weak Babble among LessWrongers
  • Conversation failing to evolve past the low-hanging fruit
  • People being reluctant to express thoughts that might make their account look stupid in a way that's visible to the entire internet
  • Everyone can participate, and as the number of people involved in a conversation increases it becomes more and more difficult to track all positions
  • Even lurkers like me can attempt to participate, and it's costly in terms of conversational effort to figure out what background knowledge someone is mi
... (read more)
Adam Zerner:
Yeah those frames all make sense. And I like the idea of a follow up post summarizing the takeaways from the comment section, as well as giving the process a name.

The first layer of internal visual experience I have when reading is a degree of synesthesia (letters have colors). Most of the time I'm not aware that this is happening. It does make recalling writing easier (I sometimes deduce missing letters, words or numbers from the color).

Then there is the "internal blackboard", which I use for equations or formulas. I use conscious effort to make the equation appear as a visual experience (in its written form). I can then manipulate this image as if the individual symbols or symbol groups were physical objects that ... (read more)

Absence of evidence of X is evidence of absence of X.

A claim about the absence of evidence of X is evidence of:

  • the speaker's belief of the listeners' belief in X,
  • absence of evidence of NOT X,
  • the speaker's intention of changing the listeners' belief in X.

No paradox to resolve here.
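The first line is just a Bayes update: if X makes evidence E more likely than not-X does, then observing no E must lower the probability of X. A toy calculation with made-up illustrative numbers:

```python
# Absence of evidence E is evidence of absence of X whenever
# P(E | X) > P(E | not X).
p_x = 0.5      # prior P(X)
p_e_x = 0.8    # P(E | X)
p_e_not = 0.1  # P(E | not X)

# Posterior P(X | no E) via Bayes' rule.
p_x_no_e = p_x * (1 - p_e_x) / (
    p_x * (1 - p_e_x) + (1 - p_x) * (1 - p_e_not)
)
print(p_x_no_e)  # below the 0.5 prior, so "no evidence" counted against X
```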

Non sequitur. Buying isn't the inverse operation of selling. Both cost positive amounts of time and both have risks you may not have thought of. But it probably is a good idea to go back in time and unsell your soul. Except that going back in time is probably a bad idea too. Never mind. It's probably a good investment to turn your attention to somewhere other than the soul market.

These rituals are inefficient in cases where there is mutual trust between all participants. But sticking to formality is a great Schelling fence against those trying to gain an advantage by exploiting unwitting bureaucrats.

The basis of the original post isn't existential threats, but narratives - ways of organizing the exponential complexity of all the events in the world into a comparatively simple story-like structure.

Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take

Memetic tribes are only tangentially relevant here. I didn't really intend to present any argument, just a set of narratives present in some other communities you probably haven't encountered.

The above narratives seem to be extremely focused into a tiny part of narrative-space, and it's actually a fairly good representation of what makes LessWrong a memetic tribe. I will try to give some examples of narratives that are... fundamentally different, from the outside view; or weird and stupid, from the inside view. (I'll also try to do some translation between conceptual frameworks.) Some of these narratives you already know - just look around the political spectrum, and notice what narratives people live in. There are also some narratives I find b... (read more)

I don't understand your post. Why are memetic tribes relevant to the discussion of potential existential risks, which is the basis of the original post? Is your argument that every community has some sort of shared existential threat, which contradicts the existential threats of other communities? It seems to me the point of a rationalist community should be to find the greatest existential threats and focus on finding solutions.

You'd also have to consider the long-term effects on the incentive landscape of e.g. establishing the precedent of companies getting $4B deals in case of a pandemic regardless of whether their vaccine works or not. In general, doing things the reasonable way has the downside of incentivizing bad actors to extract any free energy you put into the system by being reasonable until you're potentially no better off than the way Delenda Est Club is handling the situation right now. In any case, I don't see any long-term systemic effects even being considered here, so I'd be surprised if the suggestions didn't have some significant fallout further down the line.

Lockdown incentivized politicians to establish positions on a lockdown, which has led to people having strong opinions about it. Even assuming no damage from further polarization, we have a roughly 50% chance of having an anti-lockdown government when the next pandemic hits, with a 10% chance of this new incentive being the deciding factor in not enacting a lockdown (or failing to implement it). Even if we assume that only 10% of the effects of this polarization is the result of the lockdown actually happening, with a 1% yearly chance of a pandemic more da... (read more)

I mean, a lot of it I think has to do with the lockdown rules fairly obviously being written by the dumbest people in the room. As a couple of examples, here where I am they shut down all "nonessential" jobs, and it rapidly became clear that they had no idea what was actually essential and no idea what actually spread the virus. Specifically: Automotive repair shops were shut down entirely for months. It's as if they had no conception that all those "essential" transport jobs to get food back to the stores actually require vehicle maintenance. It wasn't until shipping started to take a hit that they actually listened to complaints. "Non-essential" rural workers taking advantage of the down time to catch up on maintenance were having jack-booted thugs show up on their property (in the middle of nowhere, with no workers who didn't live on-premises) and order them to cease working and go sit inside their homes, because somehow that would make everyone safer. Never mind that these people's only possible exposure would have been coming directly from the aforementioned jackboots. Logging and mining operations that go weeks on end with little to no outside contact were ordered to shut down and send all their people home, despite the fact that those people were almost certainly at less risk of exposure working in a remote region than back in the city or town. I expect it'll be less an anti-lockdown backlash than an anti-idiot backlash. But people may have a hard time differentiating the two.

I live in a social environment where expressing opinions or otherwise giving information about myself could have negative consequences, ranging from mild inconvenience to serious discrimination. I have no intention to hide my real identity from those who know the account, but I do want to hide my account from those who know my real identity (and aren't close friends). I use this name for most online activity.

I was going to write an answer but this sums up my thought process perfectly.

I've been aware for a while now that having enough awareness to notice being trapped is not enough to step outside the pattern, but I can't step outside this pattern. I also believe that admitting that there is no substitute for practice isn't going to be causally linked to me actually practicing (due to a special case of the same trap), so I'll just go on staying trapped for now I guess.

Being self-sufficient and robust as a national economy is accepting a competitive disadvantage relative to a global just-in-time supply chain in times of prosperity in exchange for a competitive advantage during a crisis. Selection pressures will push economies accepting this tradeoff towards being actively interested in a world with more crises.

You ascribe too much agency to the great hulking amoebas that human societies are.

Question: how does postrationality and instrumental rationality relate to each other? To me it appears that you are simply arguing for instrumental rationality over epistemic rationality, or am I missing something?

Not speaking for gworley, but I bet instrumental rationality can be more attainable than epistemic rationality even if it is less valuable.
Instrumental rationality informs epistemics, which then informs instrumental rationality. Which then informs epistemics. And like this all the way down.
However, if this is really what 'postrationality' is about, then I think it remains safe to say that it is a poisonous and harmful philosophy that has no place on LW or in the rationality project.

It feels like calling someone's philosophy poisonous and harmful doesn't advance the conversation, regardless of its truth value, and this proves the point of the main post well.

If a philosophy is poisonous and harmful, I think it commendable and necessary to say so.

Two points:

  1. Advancing the conversation is not the only reason I would write such a thing, but actually it serves a different purpose: protecting other readers of this site from forming a false belief that there's some kind of consensus here that this philosophy is not poisonous and harmful. Now the reader is aware that there is at least debate on the topic.

  2. It doesn't prove the OP's point at all. The OP was about beliefs (and "making sense of the world"). But I can have the belief "postrationality is poisonous and harmful" without having to post a comm

... (read more)

Being able to speak is probably more important than being as smart as a human. Cultural / memetic evolution is orders of magnitude faster than biological evolution, but its ability to function depends on having a memory better than mortal minds. Speech gives some limited non-mortal memory, as do writing, the printing press, and the internet. These inventions enable more efficient evolution. AI will ramp up evolution to even higher speeds, since external memory will be replaced with internal 1) lossless and 2) intelligent memory. As such I am unconvinced that th... (read more)

People not being able to come up with any idea other than that diseases are a curse of the gods is strong evidence not for diseases being a curse of the gods, but for the ignorance of those people. The most likely answer to that question is either something no one will think of for centuries to come, or simply that the model of separating objects into "sorts of things" is not useful for deciphering the mysteries of the universe, despite being an evolutionary advantage on the ancestral savanna.

It's problematic when applied to the universe, because "universe" is a very broad category. If you are going to say it is some specific thing chosen from an even broader category, then you have to explain why that thing and not something else -- the more specific your model of the universe, the more bits of information are unaccounted for.
So, I don't think that I would have the same kind of intuition about diseases and curses as I do about mathematical objects and existence, even if I didn't know any possible cause of disease except for curse. But of course my introspection about that could be wrong. I don't think that I am separating objects into "sorts of things". It is more like I am asking the question "what does it mean to be a thing?" and answering it "to be a thing is to be a mathematical object".

You might have gone too far with speculation - your theory can be tested. If your model was true, I would expect a correlation between, say, the ability to learn ball sports and the ability to solve mathematical problems. It is not immediately obvious how to run such an experiment, though.

I think that's good, isn't it? :-D Maybe…? I think it's more complicated than I read this implying. But yes, I expect the abilities to learn to be somewhat correlated, even if the actualized skills aren't. Part of the challenge is that math reasoning seems to coopt parts of the mind that normally get used for other things. So instead of mentally rehearsing a physical movement in a way that's connected to how your body can actually move and feel, the mind mentally rehearses the behavior (!) of some abstract mathematical object in ways that don't necessarily map onto anything your physical body can do. I suspect that closeness to physical doability is one of the main differences between "pure" mathematical thinking and engineering-style thinking, especially engineering that's involved with physical materials (e.g., mechanical, electrical — as opposed to software). And yes, this is testable, because it suggests that engineers will tend to have developed more physical coordination than mathematicians relative to their starting points. (This is still tricky to test, because people aren't randomly sorted into mathematicians vs. engineers, so their starting abilities with learning physical coordination might be different. But if we can figure out a way to test this claim, I'd be delighted to look at what the truth has to say about this!)
Sports/math is an obvious thing to check, but I'm not sure whether it quite gets at the thing Val is pointing at. I'd guess that there are a few clusters of behaviors and adaptations for different types of movement. I think predicting where a ball will end up doesn't require... I'm not sure I have a better word than "reasoning". In the Distinctions in Types of Thought sense, my guess is that for babies first learning how to move, their brain is doing something Effortful, which hasn't been cached down to the level of S1 intuition. But they're probably not doing something sequential. You can get better at it just by throwing more data at the learning algorithm. Things like math have more to do with the skill of carving up surprising data into new chunks, and the ability to make new predictions with sequential reasoning. My understanding is that "everything good-associated tends to be correlated with everything else good", a la wealth/height/g-factor, so I think I expect sports/math to be at least somewhat correlated. But I think especially good ball players are probably maxed out on a different adaptation-to-execute than especially good math-problem-solvers. I do agree that it'd be really good to formulate the movement/social distinction hypothesis into something that makes some concrete predictions, and/or to delve into some of the surrounding literature a bit more. (I'd be interested in a review of Where Mathematics Comes From.)