In the pre-LLM era, it seemed more likely (compared to now) that there was an algorithmically simple core of general intelligence, rather than intelligence being a complex aggregation of skills. If you're operating under the assumption that general intelligence has a simple algorithmic structure, decision theory is an obvious place to search for it. So the early focus on decision theory wasn't random.
The terms "closed individualism," "open individualism," and "empty individualism" are used in this Qualia Computing post.
My own experience is very different from those described in this post. I find it relaxing instead of stressful to spend time doing nothing, and felt this way even when I was a child and hadn't started meditating regularly. I also don't enjoy using a smartphone, due to the small screen size and reliance on touch inputs, so I don't fill gaps in activities by browsing the Internet on my phone. It's also common for me to have brief interactions with strangers even though I'm young. People frequently ask me for directions when I'm on my way to or from work.
The easiest way to promote justice is to focus on punishing people who behave badly (since that's easier than rewarding people who behave well).
The premises of the toy model don't require this to be true. Whether it's true, and to what extent, can vary between environments.
The orthogonality question is an engineering question
People usually treat the orthogonality question ("Is the orthogonality thesis true?") as a philosophical question: start from "assume an AGI exists" and reason about what goals that AGI could have. But one can flip that starting point around and ask, for a specific goal, "Is it realistically achievable to create a general intelligence that has this goal?" This reframing turns the orthogonality question into an engineering question with more direct practical relevance than the philosophical version. The engineering version asks what kinds of results an AI developer can expect from different engineering decisions, rather than speculating about an idealized AGI; it's grounded in what's realistically achievable instead of what might be theoretically possible.
Instances of the engineering version of the orthogonality question also open the broader question up to empirical testing. So far, the empirical evidence has pointed toward the answer "no." Since the early days of reinforcement learning, researchers have been training models with narrow goals, and none of those systems has developed fully general intelligence. Protein-folding models only fold proteins; chess engines don't model their environments outside the confines of the 64 squares. Language prediction has generalized further than most other training objectives, but language models still perform poorly at non-linguistic tasks (understanding images, acting within physical environments) and have jagged capabilities even within the set of language-based problems. Each new failure to get general intelligence from a narrow training objective is (usually weak) empirical evidence that narrow training signals are too impoverished for a model to develop highly general capabilities. Maybe general intelligence from a narrow goal would be possible with truly gargantuan amounts of compute, but recall that the engineering version of the question is about what's practically achievable.
There was likely a midwit-meme effect going on at the philosophy meetup, where, in order to distinguish themselves from the stereotypical sports-bar-goers, the attendees were forming their beliefs in ways that would never occur to a true "normie." You might have a better experience interacting with "common people" in a setting where they aren't self-selected for trying to demonstrate sophistication.
Just spitballing, but maybe you could incorporate some notion of resource consumption, like in linear logic. You could have a system where the copies have to "feed" on some resource in order to stay active, and data corruption inhibits a copy's ability to "feed."
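To make the spitballed mechanism a bit more concrete, here's a minimal Python sketch of one way it could work; everything in it (the class, the parameter values, the corruption model) is hypothetical illustration rather than a description of any existing system. Each copy has to draw enough of a shared resource every step to cover its upkeep, and accumulated corruption reduces how much of each draw it can actually use, so corrupted copies starve first.

```python
import random

# Hypothetical sketch: copies "feed" on a shared resource pool to stay active,
# and data corruption reduces how efficiently they convert intake into energy.

class Copy:
    def __init__(self, name, corruption=0.0):
        self.name = name
        self.corruption = corruption  # 0.0 = pristine, 1.0 = fully corrupted
        self.energy = 1.0
        self.active = True

    def feed(self, pool, amount=1.0):
        """Draw from the shared pool; corruption wastes part of the intake."""
        taken = min(amount, pool)
        self.energy += taken * (1.0 - self.corruption)
        return pool - taken

    def tick(self, upkeep=0.8, decay=0.02):
        """Pay the per-step upkeep cost and accumulate a little corruption."""
        self.energy -= upkeep
        self.corruption = min(1.0, self.corruption + random.uniform(0, decay))
        if self.energy <= 0:
            self.active = False


def run(copies, pool=100.0, steps=50):
    for _ in range(steps):
        for c in copies:
            if c.active:
                pool = c.feed(pool)
                c.tick()
    return [(c.name, c.active, round(c.corruption, 2)) for c in copies]


if __name__ == "__main__":
    # A heavily corrupted copy burns through its energy faster and deactivates sooner.
    print(run([Copy("original"), Copy("backup", corruption=0.4)]))
```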
I don't remember; it was something I saw in the New York Times Book Review section a few years ago.
The spiralism attractor is the same type of failure mode as GPT-2 getting stuck repeating a single character or ChatGPT's image generator turning photos into caricatures of black people. The only difference between the spiralism attractor and other mode collapse attractors is that some people experiencing mania happen to find it compelling. That is to say, the spiralism attractor is centrally a capabilities failure and only incidentally an alignment failure.
On the first question, reaching superintelligence might require designing, testing, manufacturing at scale, and installing new types of computing hardware, which would probably take more than two years.