David J Higgs

Comments
Fake thinking and real thinking
David J Higgs · 1mo · 10

I'm glad this kind of content exists on LessWrong: writing that doesn't shy away from an explicit focus on personal virtue, in a "how algorithms feel from the inside" kind of way. I used to devour anything I could find written by C.S. Lewis as a still-religious teenager, because I felt a certain quality of thought and feeling emanating from each page. I felt the sincerity of effort orienting towards truth, beauty, and goodness.

Unfortunately, much of his worldview turned out to be importantly wrong, though in that respect his writings are hardly alone among those of genuine historical truth-seekers. I hope that, like this post, my future thinking can manage to orient in the direction that Lewis was among the first to bring to my attention.

Fake thinking and real thinking
David J Higgs · 1mo · 20

This strikes me as a good sort of constructive feedback, but one that didn't apply in my case, and I'll try to explain why. Thinking real instead of fake seems like a canonical example of a rationality skill that is especially contingent on emotions and subjective experience, and intervening at that level is extremely tricky and fraught.

In my case, the copious examples, explanations of why the examples are relevant, pointers to ways of avoiding the bad/getting at the good, etc. mostly seemed helpful in conveying the right pre-rational mental patterns for allowing the relevant rational thoughts to occur (either by getting out of the way or by participating in the generation of rational thought directly).

It was also simply enjoyable throughout, in a way that harked back a little to when I would read C.S. Lewis as a still-religious teenager. Not imitation of Lewis's writing, but rather pointing in the direction Lewis was trying to point (toward truth, beauty, and goodness). This last element seems potentially load-bearing, in that I don't know whether I'd have found the continuous details helpful if I hadn't found them intellectually pleasant in this particular way.

Four ways learning Econ makes people dumber re: future AI
David J Higgs · 2mo · 100

I'm guessing "species" is there mainly to emphasize that we are NOT talking about (mere) tool AI, and maybe also to marginally increase the clickbait value for Twitter/X purposes.

Hyperbolic model fits METR capabilities estimate worse than exponential model
David J Higgs · 2mo · 20

Don't forget intellectual charity, which might actually be the feature that most distinguishes LW from other smart online communities.

Why Should I Assume CCP AGI is Worse Than USG AGI?
David J Higgs · 6mo · 31

Counter-counterpoint: big groups like bureaucracies are not composed of randomly selected individuals from their respective countries. I strongly doubt that, say, 100 randomly selected Google employees (the largest plausible bureaucracy that might develop AGI in the very near-term future?) would answer extremely similarly to 100 randomly selected Americans.

Of course, in a moderately near-term or median future, something like a Manhattan Project for AI could produce an AGI. This would still not be identical to 100 random Americans, but averaging across the US security & intelligence apparatus, the politically appointed portion of the current US executive administration, and the leadership plus relevant employee influence from a (mandatory?) collaboration of US frontier labs would be significantly closer on average. I think it would at least be closer to average Americans than a CCP Centralized AGI Project would be to average Chinese people, although I admit I'm not very knowledgeable about the gap between Chinese leadership and average Chinese people beyond basics like (somewhat) widespread VPN usage.

Thoughts on AI 2027
David J Higgs · 6mo · 31

If you haven't already, you should consider reading the Timelines Forecast and Takeoff Forecast research supplements linked on the AI 2027 website. But I think there are a good half dozen (not necessarily independent) reasons for thinking that even if AI capabilities start to take off in short-timeline futures, other parts of the overall economy/society aren't likely to change nearly as quickly or as massively.

To name a few:

- The jagged capabilities frontier in AI, which already exists and will likely widen.
- Moravec's Paradox.
- The internal model/external model gap.
- The lack of compute available for experimentation, training, synthetic data creation, and deployment.
- The gap in ease of obtaining training data for tasks like Whole Brain Emulation versus software development & AI research.
- The fact that diffusion/use of publicly available model capabilities is relatively slow, for reasons of both human psychology & economic efficiency.

Basically, the fact that the most pivotal moments of AI 2027 are written as occurring mostly within 2027, rather than, say, across 2029-2034, means that it's possible for substantial recursive self-improvement (RSI) of AI capabilities to occur before substantial transformations occur in society overall. I think the most likely way AI 2027 is wrong on this matter is that a much slower "intelligence explosion" occurs, not that it underestimates the speed of the societal impacts occurring simultaneously. The reasons for thinking this are basically taking scaling seriously, plus priors (which are informed by things like the Industrial Revolution).
