The main thing I got out of reading Bostrom's Deep Utopia is a better appreciation of this "meaning of life" thing. I had never really understood what people meant by this, and always just rounded it off to people using lofty words for their given projects in life.
The book's premise is that, after the aligned singularity, the robots will not just be better at doing all your work but also be better at doing all your leisure for you. E.g., you'd never study for fun in posthuman utopia, because you could instead just ask the local benevolent god to painlessly, seamlessly put all that wisdom in your head. In that regime, studying with books and problems for the purpose of learning and accomplishment is just masochism. If you're into learning, just ask! And similarly for any psychological state you're thinking of working towards.
So, in that regime, it's effortless to get a hedonically optimal world, without any unendorsed suffering and with all the happiness anyone could want. Those things can just be put into everyone and everything's heads directly—again, by the local benevolent-god authority. The only challenging values to satisfy are those that deal with being practically useful. If...
Crucially: notice if your environment is suppressing you feeling your actual morals, leaving you only able to use your model of your morals.
That's a good line; it captures a lot of what I often feel is happening when talking to people about utilitarianism and a bunch of adjacent stuff: people replacing their morals with their models of their morals.
...The human brain does not start out as an efficient reasoning machine, plausible or deductive. This is something which we require years to learn, and a person who is an expert in one field of knowledge may do only rather poor plausible reasoning in another. What is happening in the brain during this learning process?
Education could be defined as the process of becoming aware of more and more propositions, and of more and more logical relationships between them. Then it seems natural to conjecture that a small child reasons on a lattice of very open structure: large parts of it are not interconnected at all. For example, the association of historical events with a time sequence is not automatic; the writer has had the experience of seeing a child, who knew about ancient Egypt and had studied pictures of the treasures from the tomb of Tutankhamen, nevertheless coming home from school with a puzzled expression and asking: ‘Was Abraham Lincoln the first person?’
It had been explained to him that the Egyptian artifacts were over 3000 years old, and that Abraham Lincoln was alive 120 years ago; but the meaning of those statements had not registered in his mind. This makes us wonder whether
Minor spoilers for planecrash (Book 3).
...Keltham was supposed to start by telling them all to use their presumably-Civilization-trained skill of 'perspective-taking-of-ignorance' to envision a hypothetical world where nothing resembling Coordination had started to happen yet. Since, after all, you wouldn't want your thoughts about the best possible forms of Civilization to 'cognitively-anchor' on what already existed.
You can imagine starting in a world where all the same stuff and technology from present Civilization exists, since the question faced is what form of Governance is best-suited to a world like that one. Alternatively, imagine an alternative form of the exercise involving people fresh-born into a fresh world where nothing has yet been built, and everybody's just wandering around over a grassy plain.
Either way, you should assume that everybody knows all about decision theory and cooperation-defection dilemmas. The question being asked is not 'What form of Governance would we invent if we were stupid?'
Civilization could then begin - maybe it wouldn't actually happen exactly that way, but it is nonetheless said as though in stori
A decent handle for rationalism is 'apolitical consequentialism.'
'Apolitical' here means avoiding playing the whole status game of signaling fealty to a political tribe and winning/losing status as that political tribe wins/loses status competitions. 'Consequentialism' means getting more of what you want, whatever that is.
Minor spoilers from mad investor chaos and the woman of asmodeus (planecrash Book 1) and Peter Watt's Echopraxia.
..."Suppose everybody in a dath ilani city woke up one day with the knowledge mysteriously inserted into their heads, that their city had a pharaoh who was entitled to order random women off the street into his - cuddling chambers? - whether they liked that or not. Suppose that they had the false sense that things had always been like this for decades. It wouldn't even take until whenever the pharaoh first ordered a woman, for her to go "Wait why am I obeying this order when I'd rather not obey it?" Somebody would be thinking about city politics first thing when they woke up in the morning and they'd go "Wait why do we have a pharaoh in the first place" and within an hour, not only would they not have a pharaoh, they'd have deduced the existence of the memory modification because their previous history would have made no sense, and then the problem would escalate to Exception Handling and half the Keepers on the planet would arrive to figure out what kind of alien invasion was going on. Is the source of my confusion - at all clear here?"
"You think
We've all met people who are acting as if "Acquire Money" is a terminal goal, never noticing that money is almost entirely instrumental in nature. When you ask them "but what would you do if money was no issue and you had a lot of time", all you get is a blank stare.
Even the LessWrong Wiki entry on terminal values describes a college student for which university is instrumental, and getting a job is terminal. This seems like a clear-cut case of a Lost Purpose: a job seems clearly instrumental. And yet, we've all met people who act as if "Have a Job" is a terminal value, and who then seem aimless and undirected after finding employment …
You can argue that Acquire Money and Have a Job aren't "really" terminal goals, to which I counter that many people don't know their ass from their elbow when it comes to their own goals.
--Nate Soares, "Dark Arts of Rationality"
Why does politics strike rationalists as so strangely shaped? Why does rationalism come across as aggressively apolitical to smart non-rationalists?
Part of the answer: Politics is absolutely rife with people mixing their ends with their means and vice versa. It's pants-on-head confused, from a rationalist perspective, to be ul...
Yudkowsky has sometimes used the phrase "genre savvy" to mean "knowing all the tropes of reality."
For example, we live in a world where academia falls victim to publishing incentives/Goodharting, and so academic journals fall short of what people with different incentives would be capable of producing. You'd be failing to be genre savvy if you expected that when a serious problem like AGI alignment rolled around, academia would suddenly get its act together with a relatively small amount of prodding/effort. Genre savvy actors in our world know what academia is like, and predict that academia will continue to do its thing in the future as well.
Genre savviness is the same kind of thing as hard-to-communicate-but-empirically-validated expert intuitions. When domain experts have some feel for which projects might pan out and which certainly won't, but struggle to explain their reasoning in depth, the most they may be able to do is claim that a project is just incompatible with the tropes of their corner of reality, and point to some analogous cases.
“What is the world trying to tell you?”
I've found that this prompt helps me think clearly about the evidence shed by the generator of my observations.
There's a rationality-improving internal ping I use on myself, which goes, "what do I expect to actually happen, for real?"
This ping moves my brain from a mode where it's playing with ideas in a way detached from the inferred genre of reality, over to a mode where I'm actually confident enough to bet about some outcomes. The latter mode leans heavily on my priors about reality, and, unlike the former mode, looks askance at significantly considering long, conjunctive, tenuous possible worlds.
God dammit people, "cringe" and "based" aren't truth values! "Progressive" is not a truth value! Say true things!
Having been there twice, I've decided that the Lightcone offices are my favorite place in the world. They're certainly the most rationalist-shaped space I've ever been in.
Academic philosophers are better than average at evaluating object-level arguments for some claim. They don't seem to be very good at thinking about what rationalization in search implies about the arguments that come up. Compared to academic philosophers, rationalists strike me as especially appreciating filtered evidence and its significance to your world model.
If you find an argument for a claim easily, then even if that argument is strong, this (depending on some other things) implies that similarly strong arguments on the other side may turn up with n...
Modest spoilers for planecrash (Book 9 -- null action act II).
...Nex and Geb had each INT 30 by the end of their mutual war. They didn't solve the puzzle of Azlant's IOUN stones... partially because they did not find and prioritize enough diamonds to also gain Wisdom 27. And partially because there is more to thinkoomph than Intelligence and Wisdom and Splendour, such as Golarion's spells readily do enhance; there is a spark to inventing notions like probability theory or computation or logical decision theory from scratch, that is not directly me
Epistemic status: politics, known mindkiller; not very serious or considered.
People seem to have a God-shaped hole in their psyche: just as people banded around religious tribal affiliations, they now, in the contemporary West, band together around political tribal affiliations. Intertribal conflict can be, at its worst, violent, on top of mindkilling. Religious persecution in the UK was one of the instigating causes of British settlers migrating to the American colonies; religious conflict in Europe generally was severe.
In the US, the 1st Amendment legall...
If you take each of the digits of 153, cube them, and then sum those cubes, you get 153:
1 + 125 + 27 = 153.
For many naturals, if you iteratively apply this function, you'll return to the 153 fixed point. Start with, say, 519:
125 + 1 + 729 = 855
512 + 125 + 125 = 762
343 + 216 + 8 = 567
125 + 216 + 343 = 684
216 + 512 + 64 = 792
343 + 729 + 8 = 1,080
1 + 0 + 512 + 0 = 513
125 + 1 + 27 = 153
1 + 125 + 27 = 153
1 + 125 + 27 = 153...
These nine fixed points or cycles occur with the following frequencies (1 ≤ n ≤ 10⁹):
33.3% : (153 → )
29.5% : (371 → )
17.8% : (370 → )
5.0% : (55 → 250 → 133 → )
4.1% : (160 → 217 → 352 → )
3.8% : (407 → )
3.1% : (919 → 1459 → )
1.8% : (1 → )
1.5% : (136 → 244 → )
No other fixed points or cycles are possible (except 0 → 0, which isn't reachable from any nonzero input) since any number with more than four digits will have fewer digits in the sum of its cubed digits.
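The iteration is easy to check directly. A minimal sketch (function names here are mine, not from the original) that computes which fixed point or cycle a starting number falls into:

```python
def cube_digits(n):
    """Sum of the cubes of the decimal digits of n."""
    return sum(int(d) ** 3 for d in str(n))

def find_cycle(n):
    """Iterate cube_digits from n until a value repeats; return the
    terminal cycle, rotated to start at its smallest element."""
    seen = []
    while n not in seen:
        seen.append(n)
        n = cube_digits(n)
    cycle = seen[seen.index(n):]
    i = cycle.index(min(cycle))
    return tuple(cycle[i:] + cycle[:i])

print(find_cycle(3))   # → (153,)
print(find_cycle(55))  # → (55, 250, 133)
```

Tallying `find_cycle(n)` over a range of n reproduces the frequency table above.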
A model I picked up from Eric Schwitzgebel.
The humanities used to be highest-status in the intellectual world!
But then, scientists quite visibly exploded fission weapons and put someone on the moon. It's easy to coordinate to ignore some unwelcome evidence, but not evidence that blatant. So, begrudgingly, science has been steadily accorded more and more status, from the postwar period on.
When the sanity waterline is so low, it's easy to develop a potent sense of misanthropy.
Bryan Caplan's writing about many people hating stupid people really affected me on this point. Don't hate, or even resent, stupid people; trade with them! This is a straightforward consequence of Ricardo's comparative advantage theorem. Population averages are overrated; what matters is whether the individual interactions between agents in a population are positive-sum, not where those individual agents fall relative to the population average.
"Ignorant people do not exist."
It's really easy to spend a lot of cognitive cycles churning through bad, misleading ideas generated by the hopelessly confused. Don't do that!
The argument that being more knowledgeable leaves you strictly better off than being ignorant relies on you simply ignoring bad ideas when you spend your cognitive cycles searching for improvements on your working plans. Sometimes, you'll need to actually exercise this "simply ignore it" skill. You'll end up needing to do so more and more, to approach bounded instrumental rationality, the more inadequate the civilization around you is and the lower its sanity waterline.
You sometimes misspeak... and you sometimes misthink. That is, sometimes your cognitive algorithm slips on a step, and the thought that seemed so unimpeachably obvious in your head... is nevertheless false on a second glance.
Your brain is a messy probabilistic system, so you shouldn't expect its cognitive state to ever perfectly track the state of a distant entity.
Policy experiments I might care about if we weren't all due to die in 7 years:
A shard is a contextually activated behavior-steering computation. Think of it as a circuit of neurons in your brain that is reinforced by the subcortex, gaining more staying power when positively reinforced and withering away in the face of negative reinforcement. In fact, whatever modulates shard strength in this way is reinforcement/reward. Shards are born when a computation that is currently steering behavior steers into some reinforcement. So shards can only accrete around the concepts currently in a system's world model (presumably, the world model is shared ...
My favorite books, ranked!
Non-fiction:
1. Rationality, Eliezer Yudkowsky
2. Superintelligence, Nick Bostrom
3. The Age of Em, Robin Hanson
Fiction:
1. Permutation City, Greg Egan
2. Blindsight, Peter Watts
3. A Deepness in the Sky, Vernor Vinge
4. Ra, Sam Hughes/qntm
Epistemic status: Half-baked thought.
Say you wanted to formalize the concepts of "inside and outside views" to some degree. You might say that your inside view is a Bayes net or joint conditional probability distribution—this mathematical object formalizes your prior.
Unlike your inside view, your outside view consists of forms of deferring to outside experts. The Bayes nets that inform their thinking are sealed away, and you can't inspect these. You can ask outside experts to explain their arguments, but there's an interaction cost associated with inspecti...
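One toy way to render the distinction in code (everything here is illustrative, not from the original): an inside view is a joint distribution you can open up and marginalize however you like, while an outside view is an expert whose model is sealed away and who only returns headline answers.

```python
# Inside view: an explicit joint distribution over two binary variables
# (Rain, WetGrass). You can inspect it and compute any conditional.
inside_view = {
    (True, True): 0.27, (True, False): 0.03,
    (False, True): 0.14, (False, False): 0.56,
}

def p_wet_given_rain(joint):
    """Inspectable: compute P(WetGrass | Rain) directly from the joint."""
    p_rain = sum(p for (rain, _), p in joint.items() if rain)
    p_both = sum(p for (rain, wet), p in joint.items() if rain and wet)
    return p_both / p_rain

# Outside view: the expert's joint is hidden; all you can do is query
# for their bottom-line number, not the model that produced it.
def expert_opinion(query):
    answers = {"P(WetGrass | Rain)": 0.9}  # opaque internals
    return answers[query]

print(p_wet_given_rain(inside_view))        # ≈ 0.9
print(expert_opinion("P(WetGrass | Rain)"))
```

The interaction cost the shortform mentions shows up here as the gap between calling `expert_opinion` and getting to read the expert's joint directly.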
Because your utility function is your utility function, the one true political ideology is clearly Extrapolated Volitionism.
Extrapolated Volitionist institutions are all characteristically "meta": they take as input what you currently want and then optimize for the outcomes a more epistemically idealized you would want, after more reflection and/or study.
Institutions that merely optimize for what you currently want the way you would with an idealized world-model are old hat by comparison!
...Bogus nondifferentiable functions

The case most often cited as an example of a nondifferentiable function is derived from a sequence of functions {f_n(x)}, each of which is a string of isosceles right triangles whose hypotenuses lie on the real axis and have length 1/n. As n → ∞, the triangles shrink to zero size. For any finite n, the slope of f_n(x) is ±1 almost everywhere. Then what happens as n → ∞? The limit f(x) = lim f_n(x) is often cited carelessly as a nondifferentiable function. Now it is clear that the limit of the derivative...
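The point Jaynes is driving at can be sketched in two lines. In his construction the hypotenuses have length 1/n, so each isosceles right triangle has height 1/(2n), and the sequence converges uniformly to the zero function:

```latex
% Each f_n is a triangle wave of height 1/(2n), so the limit is trivial:
f_n(x) \to f(x) \equiv 0 \quad\text{uniformly}, \qquad f'(x) = 0 \ \text{everywhere},
\qquad\text{yet}\quad \lim_{n\to\infty} f_n'(x) = \pm 1 \ \text{a.e.}
```

So the limit function is perfectly differentiable; what fails is only the interchange of limit and derivative, which is the "bogus" part of the usual citation.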
Only make choices that you wouldn't reverse if you found yourself on the other side of them. Drop out of school if and only if you wouldn't enroll in school from out of the workforce. Continue school if and only if you'd switch over from work to that level of schooling.
Flitting back and forth between both possible worlds can make you less cagey about doing what's overdetermined by your world model + utility function already. It's also part of the exciting rationalist journey of acausally cooperating with your selves in other possible worlds.
...Science fiction books have to tell interesting stories, and interesting stories are about humans or human-like entities. We can enjoy stories about aliens or robots as long as those aliens and robots are still approximately human-sized, human-shaped, human-intelligence, and doing human-type things. A Star Wars in which all of the X-Wings were combat drones wouldn’t have done anything for us. So when I accuse something of being science-fiction-ish, I mean bending over backwards – and ignoring the evidence – in order to give basically human-shaped beings a c
Spoilers for planecrash (Book 2).
...Keltham will now, striding back and forth and rather widely gesturing, hold forth upon the central principle of all dath ilani project management, the ability to identify who is responsible for something. If there is not one person responsible for something, it means nobody is responsible for it. This is the proverb of dath ilani management. Are t
...What would it mean for a society to have real intellectual integrity? For one, people would be expected to follow their stated beliefs to wherever they led. Unprincipled exceptions and an inability or unwillingness to correlate beliefs among different domains would be subject to social sanction. Valid attempts to persuade would be expected to be based on solid argumentation, meaning that what passes for typical salesmanship nowadays would be considered a grave affront. Probably something along the lines of punching someone
You can usually save a lot of time by skimming texts or just reading pieces of them. But reading a work all the way through uniquely lets you make negative existential claims about its content: only now can you authoritatively say that the work never mentions something.
Past historical experience and brainstorming about human social orders probably barely scratches the possibility space. If the CEV were to weigh in on possible posthuman social orders,[1] optimizing in part for how cool that social order is, I'd bet what it describes blows what we've seen out of the water in terms of cool factor.
(Presumably posthumans will end up reflectively endorsing interactions with one another of some description.)
One important idea I've picked up from reading Zvi is that, in communication, it's important to buy out the status cost imposed by your claims.
If you're fielding a theory of the world that, as a side effect, dunks on your interlocutor and diminishes their social status, you'll only get that person thinking in terms of Bayesian epistemology, rather than status decision theory, if you make sure you aren't hurting their social image. You have to put in the unreasonable-feeling work of framing all your claims such that their social status is preserved or fairly increas...
I regret to inform you, you are an em inside an inconsistent simulated world. By this, I mean: your world is a slapdash thing put together out of off-the-shelf assets in the near future (presumably right before a singularity eats that simulator Earth).
Your world doesn't bother emulating far-away events in great detail, and indeed, may be messing up even things you can closely observe. Your simulators are probably not tampering with your thoughts, though even that is something worth considering carefully.
What are the flaws you...
When another article of equal argumentative caliber could have just as easily been written for the negation of a claim, that writeup is no evidence for its claim.
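One way to formalize this, in odds form: if the article would have been generated just as easily either way, its likelihood ratio is 1, and the posterior odds equal the prior odds.

```latex
\frac{P(H \mid \text{article})}{P(\lnot H \mid \text{article})}
  = \frac{P(\text{article} \mid H)}{P(\text{article} \mid \lnot H)}
    \cdot \frac{P(H)}{P(\lnot H)},
\qquad
\frac{P(\text{article} \mid H)}{P(\text{article} \mid \lnot H)} = 1
\;\Rightarrow\; \text{no update}.
```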
...The explicit definition of an ordered pair is frequently relegated to pathological set theory...
It is easy to locate the source of the mistrust and suspicion that many mathematicians feel toward the explicit definition of ordered pair given above. The trouble is not that there is anything wrong or anything missing; the relevant properties of the concept we have defined are all correct (that is, in accord with the demands of intuition) and all the correct properties are present. The trouble is that the concept has some irreleva