David Udell's Shortform

This is a special post for quick takes by David Udell. Only they can create top-level comments.

The main thing I got out of reading Bostrom's Deep Utopia is a better appreciation of this "meaning of life" thing. I had never really understood what people meant by this, and always just rounded it off to people using lofty words for their given projects in life.

The book's premise is that, after the aligned singularity, the robots will not just be better at doing all your work but also be better at doing all your leisure for you. E.g., you'd never study for fun in posthuman utopia, because you could instead just ask the local benevolent god to painlessly, seamlessly put all that wisdom in your head. In that regime, studying with books and problems for the purpose of learning and accomplishment is just masochism. If you're into learning, just ask! And similarly for any psychological state you're thinking of working towards.

So, in that regime, it's effortless to get a hedonically optimal world, without any unendorsed suffering and with all the happiness anyone could want. Those things can just be put into everyone and everything's heads directly—again, by the local benevolent-god authority. The only challenging values to satisfy are those that deal with being practically useful. If... (read more)

6Garrett Baker
Many who believe in God derive meaning, despite God theoretically being able to do anything they can do but better, from the fact that He chose not to do the tasks they are good at, and left them tasks to try to accomplish. It's common for such people to believe that this meaning would disappear if God disappeared, but whenever such a person does come to no longer believe in God, they often continue to see meaning in their life[1]. Now atheists worry about building God because it may destroy all meaning to our actions. I expect we'll adapt. (Edit: That is to say, I don't think you've adequately described what the "meaning of life" is if you're worried about it going away in the situation you describe.) ---------------------------------------- 1. If anything, they're more right than wrong; there has been much written about the "meaning crisis" we're in, possibly attributable to greater levels of atheism. ↩︎
2JBlack
I'm pretty sure that I would study for fun in the posthuman utopia, because I both value and enjoy studying and a utopia that can't carry those values through seems like a pretty shallow imitation of a utopia. There won't be a local benevolent god to put that wisdom into my head, because I will be a local benevolent god with more knowledge than most others around. I'll be studying things that have only recently been explored, or that nobody has yet discovered. Otherwise again, what sort of shallow imitation of a posthuman utopia is this?
2tailcalled
The tricky part is, on the margin I would probably use various shortcuts, and it's not clear where those shortcuts end short of just getting knowledge beamed into my head. I already use LLMs to tell me facts, explain things I'm unfamiliar with, handle tedious calculations/coding, generate simulated data/brainstorming and summarize things. Not much, because LLMs are pretty bad, but I do use them for this and I would use them more on the margin.
1Nate Showell
The concept of "the meaning of life" still seems like a category error to me. It's an attempt to apply a system of categorization used for tools, one in which they are categorized by the purpose for which they are used, to something that isn't a tool: a human life. It's a holdover from theistic worldviews in which God created humans for some unknown purpose.   The lesson I draw instead from the knowledge-uploading thought experiment -- where having knowledge instantly zapped into your head seems less worthwhile acquiring it more slowly yourself -- is that to some extent, human values simply are masochistic. Hedonic maximization is not what most people want, even with all else being equal. This goes beyond simply valuing the pride of accomplishing difficult tasks, as such as the sense of accomplishment one would get from studying on one's own, above other forms of pleasure. In the setting of this thought experiment, if you wanted the sense of accomplishment, you could get that zapped into your brain too, but much like getting knowledge zapped into your brain instead of studying yourself, automatically getting a sense of accomplishment would be of lesser value. The suffering of studying for yourself is part of what makes us evaluate it as worthwhile.

Use your actual morals, not your model of your morals.

Crucially: notice if your environment is suppressing you feeling your actual morals, leaving you only able to use your model of your morals.

That's a good line; it captures a lot of what I often feel is happening when talking to people about utilitarianism and a bunch of adjacent stuff (people replacing their morals with their models of their morals).

0Vladimir_Nesov
Detailed or non-intuitive actual morals don't exist to be found and used, they can only be built with great care. None have been built so far, as no single human has lived for even 3000 years. Human condition curses all moral insight with goodhart. What remains is scaling Pareto projects of locally ordinary humanism.

The human brain does not start out as an efficient reasoning machine, plausible or deductive. This is something which we require years to learn, and a person who is an expert in one field of knowledge may do only rather poor plausible reasoning in another. What is happening in the brain during this learning process?

Education could be defined as the process of becoming aware of more and more propositions, and of more and more logical relationships between them. Then it seems natural to conjecture that a small child reasons on a lattice of very open structure: large parts of it are not interconnected at all. For example, the association of historical events with a time sequence is not automatic; the writer has had the experience of seeing a child, who knew about ancient Egypt and had studied pictures of the treasures from the tomb of Tutankhamen, nevertheless coming home from school with a puzzled expression and asking: ‘Was Abraham Lincoln the first person?’

It had been explained to him that the Egyptian artifacts were over 3000 years old, and that Abraham Lincoln was alive 120 years ago; but the meaning of those statements had not registered in his mind. This makes us wonder whether

... (read more)

Minor spoilers for planecrash (Book 3).

Keltham's Governance Lecture

Keltham was supposed to start by telling them all to use their presumably-Civilization-trained skill of 'perspective-taking-of-ignorance' to envision a hypothetical world where nothing resembling Coordination had started to happen yet.  Since, after all, you wouldn't want your thoughts about the best possible forms of Civilization to 'cognitively-anchor' on what already existed.

You can imagine starting in a world where all the same stuff and technology from present Civilization exists, since the question faced is what form of Governance is best-suited to a world like that one.  Alternatively, imagine an alternative form of the exercise involving people fresh-born into a fresh world where nothing has yet been built, and everybody's just wandering around over a grassy plain.

Either way, you should assume that everybody knows all about decision theory and cooperation-defection dilemmas.  The question being asked is not 'What form of Governance would we invent if we were stupid?'

Civilization could then begin - maybe it wouldn't actually happen exactly that way, but it is nonetheless said as though in stori

... (read more)

A decent handle for rationalism is 'apolitical consequentialism.'

'Apolitical' here means avoiding playing the whole status game of signaling fealty to a political tribe and winning/losing status as that political tribe wins/loses status competitions. 'Consequentialism' means getting more of what you want, whatever that is.

4LVSN
I think having answers for political questions is compatible with and required by rationalism. Instead of 'apolitical' consequentialism I would advise any of the following, which mean approximately the same things as each other:
• politically subficial consequentialism (as opposed to politically superficial consequentialism; instead of judging things on whether they appear to be in line with a political faction, which is superficial, rationalists aspire to have deeper and more justified standards for solving political questions)
• politically impartial consequentialism
• politically meritocratic consequentialism
• politically individuated consequentialism
• politically open-minded consequentialism
• politically human consequentialism (politics which aim to be good by the metric of human values, shared as much as possible by everyone, regardless of politics)
• politically omniscient consequentialism (politics which aim to be good by the metric of values that humans would have if they had full, maximally objection-solved information on every topic, especially topics of practical philosophy)
3David Udell
I agree that rationalism involves the (advanced rationalist) skills of instrumentally routing through relevant political challenges to accomplish your goals … but I'm not sure any of those proposed labels captures that well. I like "apolitical" because it unequivocally states that you're not trying to slogan-monger for a political tribe, and are naively, completely, loudly, and explicitly opting out of that status competition and not secretly fighting for the semantic high-ground in some underhanded way (which is more typical political behavior, and is thus expected). "Meritocratic," "humanist," "humanitarian," and maybe "open-minded" are all shot for that purpose, as they've been abused by political tribes in the ongoing culture war (and in previous culture wars, too; our era probably isn't too special in this regard) and connote allegiance to some political tribes over others. What I really want is an adjective that says "I'm completely tapping out of that game."
7lc
The problem is that whenever well-meaning people come up with such an adjective, the people who are, in fact, not "completely tapping out of that game" quickly begin to abuse it until it loses meaning.  Generally speaking, tribalized people have an incentive to be seen as unaffiliated as possible. Being seen as a rational, neutral observer lends your perspective more credibility.
4Rana Dexsin
“apolitical” has indeed been turned into a slur around “you're just trying to hide that you hate change” or “you're just trying to hide the evil influences on you” (or something else vaguely like those) in a number of places.

Minor spoilers from mad investor chaos and the woman of asmodeus (planecrash Book 1) and Peter Watts's Echopraxia.

"Suppose everybody in a dath ilani city woke up one day with the knowledge mysteriously inserted into their heads, that their city had a pharaoh who was entitled to order random women off the street into his - cuddling chambers? - whether they liked that or not.  Suppose that they had the false sense that things had always been like this for decades.  It wouldn't even take until whenever the pharaoh first ordered a woman, for her to go "Wait why am I obeying this order when I'd rather not obey it?"  Somebody would be thinking about city politics first thing when they woke up in the morning and they'd go "Wait why we do we have a pharaoh in the first place" and within an hour, not only would they not have a pharaoh, they'd have deduced the existence of the memory modification because their previous history would have made no sense, and then the problem would escalate to Exception Handling and half the Keepers on the planet would arrive to figure out what kind of alien invasion was going on.  Is the source of my confusion - at all clear here?"

"You think

... (read more)
1MackGopherSena
[edited]
1David Udell
I don't get the relevance of the scenario. Is the idea that there might be many such other rooms with people like me, and that I want to coordinate with them (to what end?) using the Schelling points in the night sky? I might identify Schelling points using what celestial objects seem to jump out to me on first glance, and see which door of the two that suggests -- reasoning that others will reason similarly. I don't get what we'd be coordinating to do here, though.

We've all met people who are acting as if "Acquire Money" is a terminal goal, never noticing that money is almost entirely instrumental in nature. When you ask them "but what would you do if money was no issue and you had a lot of time", all you get is a blank stare.

Even the LessWrong Wiki entry on terminal values describes a college student for which university is instrumental, and getting a job is terminal. This seems like a clear-cut case of a Lost Purpose: a job seems clearly instrumental. And yet, we've all met people who act as if "Have a Job" is a terminal value, and who then seem aimless and undirected after finding employment …

You can argue that Acquire Money and Have a Job aren't "really" terminal goals, to which I counter that many people don't know their ass from their elbow when it comes to their own goals.

--Nate Soares, "Dark Arts of Rationality"

Why does politics strike rationalists as so strangely shaped? Why does rationalism come across as aggressively apolitical to smart non-rationalists?

Part of the answer: Politics is absolutely rife with people mixing their ends with their means and vice versa. It's pants-on-head confused, from a rationalist perspective, to be ul... (read more)

4Dagon
I often wonder if this framing (with which I mostly agree) is an example of typical mind fallacy.  The assumption that many humans are capable of distinguishing terminal from instrumental goals, or of having terminal goals more abstract than "comfort and procreation", is not all that supported by evidence. In other words, politicized debates DO rub you the wrong way, but on two dimensions - first, that you're losing, because you're approaching them from a different motive than your opponents.  And second that it reveals not just a misalignment with fellow humans in terminal goals, but an alien-ness in the type of terminal goals you find reasonable.

Yudkowsky has sometimes used the phrase "genre savvy" to mean "knowing all the tropes of reality."

For example, we live in a world where academia falls victim to publishing incentives/Goodharting, and so academic journals fall short of what people with different incentives would be capable of producing. You'd be failing to be genre savvy if you expected that when a serious problem like AGI alignment rolled around, academia would suddenly get its act together with a relatively small amount of prodding/effort. Genre savvy actors in our world know what academia is like, and predict that academia will continue to do its thing in the future as well.

Genre savviness is the same kind of thing as hard-to-communicate-but-empirically-validated expert intuitions. When domain experts have some feel for which projects might pan out and which certainly won't, but struggle to explain their reasoning in depth, the most they may be able to do is claim that a given project is just incompatible with the tropes of their corner of reality, and point to some other cases.

3Viliam
How is "genre savviness" different from "outside view" or "reference class forecasting"?
1David Udell
I think they're all the same thing: recognizing patterns in how a class of phenomena pan out.

“What is the world trying to tell you?”

I've found that this prompt helps me think clearly about the evidence shed by the generator of my observations.

There's a rationality-improving internal ping I use on myself, which goes, "what do I expect to actually happen, for real?"

This ping moves my brain from a mode where it's playing with ideas in a way detached from the inferred genre of reality, over to a mode where I'm actually confident enough to bet about some outcomes. The latter mode leans heavily on my priors about reality, and, unlike the former mode, looks askance at significantly considering long, conjunctive, tenuous possible worlds.

God dammit people, "cringe" and "based" aren't truth values! "Progressive" is not a truth value! Say true things!

7lc
Based.
4David Udell
I've noticed that people are really innately good at sentiment classification, and, by comparison, crap at natural language inference. In a typical conversation with ordinary educated people, people will do a lot of the former relative to the latter. My theory of this is that, with sentiment classification and generation, we're usually talking in order to credibly signal and countersignal our competence, virtuous features, and/or group membership, and that humanity has been fine-tuned to succeed at this social maneuvering task. At this point, it comes naturally. Success at the object-level-reasoning task was less crucial for individuals in the ancestral environment, and so people, typically, aren't naturally expert at it. What a bad situation to be in, when our species' survival hinges on our competence at object-level reasoning.

Having been there twice, I've decided that the Lightcone offices are my favorite place in the world.  They're certainly the most rationalist-shaped space I've ever been in.

Academic philosophers are better than average at evaluating object-level arguments for some claim. They don't seem to be very good at thinking about what rationalization in search implies about the arguments that come up. Compared to academic philosophers, rationalists strike me as especially appreciating filtered evidence and its significance to your world model.

If you find an argument for a claim easily, then even if that argument is strong, this (depending on some other things) implies that similarly strong arguments on the other side may turn up with n... (read more)

Modest spoilers for planecrash (Book 9 -- null action act II).

Nex and Geb had each INT 30 by the end of their mutual war.  They didn't solve the puzzle of Azlant's IOUN stones... partially because they did not find and prioritize enough diamonds to also gain Wisdom 27.  And partially because there is more to thinkoomph than Intelligence and Wisdom and Splendour, such as Golarion's spells readily do enhance; there is a spark to inventing notions like probability theory or computation or logical decision theory from scratch, that is not directly me

... (read more)

Epistemic status: politics, known mindkiller; not very serious or considered.

People seem to have a God-shaped hole in their psyche: just as people banded around religious tribal affiliations, they now, in the contemporary West, band together around political tribal affiliations. Intertribal conflict can be, at its worst, violent, on top of mindkilling. Religious persecution in England was one of the instigating causes of British settlers migrating to the American colonies; religious conflict in Europe generally was severe.

In the US, the 1st Amendment legall... (read more)

153

If you take each of the digits of 153, cube them, and then sum those cubes, you get 153:

1 + 125 + 27 = 153.

For many naturals, if you iteratively apply this function, you'll return to the 153 fixed point. Start with, say, 519:

125 + 1 + 729 = 855

512 + 125 + 125 = 762

343 + 216 + 8 = 567

125 + 216 + 343 = 684

216 + 512 + 64 = 792

343 + 729 + 8 = 1,080

1 + 0 + 512 + 0 = 513

125 + 1 + 27 = 153

1 + 125 + 27 = 153

1 + 125 + 27 = 153...

These nine fixed points or cycles occur with the following frequencies (1 <= n <= 10e9):
33.3% : (153 → )
29.5% : (371 → )
17.8% : (370 → )
 5.0% : (55 → 250 → 133 → )
 4.1% : (160 → 217 → 352 → )
 3.8% : (407 → )
 3.1% : (919 → 1459 → )
 1.8% : (1 → )
 1.5% : (136 → 244 → )

No other fixed points or cycles are possible (except 0 → 0, which isn't reachable from any nonzero input): a d-digit number maps to at most 729d, so any number with five or more digits loses digits under the map, every trajectory eventually falls into the range 1–2,916, and that finite range can be checked exhaustively.
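A minimal Python sketch of that check, for anyone who wants to reproduce it (the function names are mine, and it tallies a smaller range than the bound above, for speed):

```python
from collections import Counter

def cube_digit_sum(n: int) -> int:
    """Sum of the cubes of n's decimal digits."""
    return sum(int(d) ** 3 for d in str(n))

def attractor(n: int) -> int:
    """Iterate cube_digit_sum until a value repeats; return the smallest
    member of the cycle reached (153 for that fixed point, 55 for the
    55 -> 250 -> 133 cycle, and so on)."""
    seen = []
    while n not in seen:
        seen.append(n)
        n = cube_digit_sum(n)
    return min(seen[seen.index(n):])

# Tally which attractor each starting value falls into.
N = 100_000
counts = Counter(attractor(n) for n in range(1, N + 1))
for rep, count in counts.most_common():
    print(f"{rep}: {100 * count / N:.1f}%")
```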

A model I picked up from Eric Schwitzgebel.

The humanities used to be highest-status in the intellectual world!

But then, scientists quite visibly exploded fission weapons and put someone on the moon. It's easy to coordinate to ignore some unwelcome evidence, but not evidence that blatant. So, begrudgingly, science has been steadily accorded more and more status, from the postwar period on.

When the sanity waterline is so low, it's easy to develop a potent sense of misanthropy.

Bryan Caplan's writing about many people hating stupid people really affected me on this point. Don't hate, or even resent, stupid people; trade with them! This is a straightforward consequence of Ricardo's comparative advantage theorem. Population averages are overrated; what matters is whether the individual interactions between agents in a population are positive-sum, not where those individual agents fall relative to the population average.

"Ignorant people do not exist."

It's really easy to spend a lot of cognitive cycles churning through bad, misleading ideas generated by the hopelessly confused. Don't do that!

The argument that being more knowledgeable leaves you strictly better off than being ignorant relies on you simply ignoring bad ideas when you spend your cognitive cycles searching for improvements on your working plans. Sometimes, you'll need to actually exercise this "simply ignore it" skill. You'll end up needing to do so more and more, to approach bounded instrumental rationality, the more inadequate the civilization around you is and the lower its sanity waterline.

2David Udell
I hereby confer on you, reader, the shroud of epistemic shielding from predictably misleading statements. It confers irrevocable, invokable protection from having to think about predictably confused claims ever again. Take those cognitive cycles saved, and spend them well!

You sometimes misspeak... and you sometimes misthink. That is, sometimes your cognitive algorithm misfires, and the thought that seemed so unimpeachably obvious in your head... is nevertheless false on a second glance.

Your brain is a messy probabilistic system, so you shouldn't expect its cognitive state to ever perfectly track the state of a distant entity.

1Morpheus
I find this funny. I don't know about your brain, but mine sometimes produces something closely resembling noise similar to dreams (admittedly more often in the morning when sleep deprived).
1David Udell
Note that a "distant entity" can be a computation that took place in a different part of your brain! Your thoughts therefore can't perfectly track other thoughts elsewhere in your head -- your whole brain is at all noisy, and so will sometimes distort the information being passed around inside itself.

Policy experiments I might care about if we weren't all due to die in 7 years:

  1. Prediction markets generally, but especially policy prediction markets at the corporate- and U.S. state- levels. The goal would be to try this route to raising the sanity waterline in the political domain (and elsewhere) by incentivizing everyone's becoming more of a policy wonk and less of a tribalist.
  2. Open borders experiments of various kinds in various U.S. states, precluding roads to citizenship or state benefits for migrant workers, and leaving open the possibility of mass de
... (read more)

Become consequentialist enough, and it'll wrap back around to being a bit deontological.

4Daniel Kokotajlo
"The rules say we must be consequentialists, but all the best people are deontologists, and virtue ethics is what actually works." --Yudkowsky, IIRC.
3Daniel Kokotajlo
I think this quote stuck with me because in addition to being funny and wise I think it's actually true, or close enough to true.

A shard is a contextually activated behavior-steering computation. Think of it as a circuit of neurons in your brain that is reinforced by the subcortex, gaining more staying power when positively reinforced and withering away in the face of negative reinforcement. In fact, whatever modulates shard strength in this way is reinforcement/reward. Shards are born when a computation that is currently steering steers into some reinforcement. So shards can only accrete around the concepts currently in a system's world model (presumably, the world model is shared ... (read more)

5Thomas Kwa
I'm pretty skeptical that sophisticated game theory happens between shards in the brain, and also that coalitions between shards are how value preservation in an AI will happen (rather than there being a single consequentialist shard, or many shards that merge into a consequentialist, or something I haven't thought of). To the extent that shard theory makes such claims, they seem to be interesting testable predictions.

My favorite books, ranked!

Non-fiction:

1. Rationality, Eliezer Yudkowsky

2. Superintelligence, Nick Bostrom

3. The Age of Em, Robin Hanson

Fiction:

1. Permutation City, Greg Egan

2. Blindsight, Peter Watts

3. A Deepness in the Sky, Vernor Vinge

4. Ra, Sam Hughes/qntm

Epistemic status: Half-baked thought.

Say you wanted to formalize the concepts of "inside and outside views" to some degree. You might say that your inside view is a Bayes net or joint conditional probability distribution—this mathematical object formalizes your prior.

Unlike your inside view, your outside view consists of forms of deferring to outside experts. The Bayes nets that inform their thinking are sealed away, and you can't inspect these. You can ask outside experts to explain their arguments, but there's an interaction cost associated with inspecti... (read more)
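To make the formal object concrete — a toy sketch, with variables and numbers invented purely for illustration — an "inside view" in this sense is just a joint distribution, factored along a small Bayes net:

```python
# Two binary variables, Rain -> WetGround. The "inside view" is the full
# joint P(Rain, WetGround), specified by a prior on Rain and a conditional
# probability table for WetGround given Rain.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.1, False: 0.9}}

def joint(rain: bool, wet: bool) -> float:
    """P(Rain = rain, WetGround = wet) under the factored prior."""
    return P_rain[rain] * P_wet_given_rain[rain][wet]

# Conditioning on an observation is just renormalizing the slice of the
# joint consistent with it.
p_wet = sum(joint(r, True) for r in (True, False))
p_rain_given_wet = joint(True, True) / p_wet
print(p_rain_given_wet)  # ~0.69
```

Deferring to an outside expert, by contrast, means taking their reported conclusions on board without being able to inspect the tables that produced them.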

Because your utility function is your utility function, the one true political ideology is clearly Extrapolated Volitionism.

Extrapolated Volitionist institutions are all characteristically "meta": they take as input what you currently want and then optimize for the outcomes a more epistemically idealized you would want, after more reflection and/or study.

Institutions that merely optimize for what you currently want the way you would with an idealized world-model are old hat by comparison!

1TAG
Since when was politics about just one person?
2David Udell
A multiagent Extrapolated Volitionist institution is something that computes and optimizes for a Convergent Extrapolated Volition, if a CEV exists. Really, though, the above Extrapolated Volitionist institutions do take other people into consideration. They either give everyone the Schelling weight of one vote in a moral parliament, or they take into consideration the epistemic credibility of other bettors as evinced by their staked wealth, or other things like that. Sometimes the relevant interpersonal parameters can be varied, and the institutional designs don't weigh in on that question. The ideological emphasis is squarely on individual considered preferences -- that is the core insight of the outlook. "Have everyone get strictly better outcomes by their lights, probably in ways that surprise them but would be endorsed by them after reflection and/or study."

Bogus nondifferentiable functions

The case most often cited as an example of a nondifferentiable function is derived from a sequence of functions {f_n(x)}, each of which is a string of isosceles right triangles whose hypotenuses lie on the real axis and have length 1/n. As n → ∞, the triangles shrink to zero size. For any finite n, the slope of f_n(x) is ±1 almost everywhere. Then what happens as n → ∞? The limit f(x) = lim f_n(x) is often cited carelessly as a nondifferentiable function. Now it is clear that the limit of the derivativ

... (read more)
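(A gloss on where the excerpt is heading — my own, assuming hypotenuses of length 1/n as reconstructed above: the limit function is perfectly smooth, and only a careless interchange of limit and derivative manufactures a "nondifferentiable function.")

```latex
% Each isosceles right triangle with hypotenuse 1/n on the axis has apex height 1/(2n), so
\[
  \sup_x \lvert f_n(x) \rvert = \tfrac{1}{2n} \longrightarrow 0,
  \qquad
  f(x) \equiv \lim_{n \to \infty} f_n(x) = 0, \qquad f'(x) = 0 \ \text{everywhere}.
\]
% Meanwhile |f_n'(x)| = 1 almost everywhere for every finite n, so the slopes never
% approach 0: the limit of the derivatives is not the derivative of the limit.
```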
2David Udell

Back and Forth

Only make choices that you would not make in reverse, if things were the other way around. Drop out of school if and only if you wouldn't enroll in school from out of the workforce. Continue school if and only if you'd switch over from work to that level of schooling.

Flitting back and forth between both possible worlds can make you less cagey about doing what's overdetermined by your world model + utility function already. It's also part of the exciting rationalist journey of acausally cooperating with your selves in other possible worlds.

4JBlack
It's probably a useful mental technique to consider from both directions, but also consider that choices that appear symmetric at first glance may not actually be symmetric. There are often significant transition costs that may differ in each direction, as well as path dependencies that are not immediately obvious. As such, I completely disagree with the first paragraph of the post, but agree with the general principle of considering such decisions from both directions and thank you for posting it.

Ten seconds of optimization is infinitely better than zero seconds of optimization.

7Raemon
Literal zero seconds of optimization is pretty rare tho (among humans). Your freewheeling impulses come pretty pre-optimized.

Science fiction books have to tell interesting stories, and interesting stories are about humans or human-like entities. We can enjoy stories about aliens or robots as long as those aliens and robots are still approximately human-sized, human-shaped, human-intelligence, and doing human-type things. A Star Wars in which all of the X-Wings were combat drones wouldn’t have done anything for us. So when I accuse something of being science-fiction-ish, I mean bending over backwards – and ignoring the evidence – in order to give basically human-shaped beings a c

... (read more)

Spoilers for planecrash (Book 2).

"Basic project management principles, an angry rant by Keltham of dath ilan, section one:  How to have anybody having responsibility for anything."

Keltham will now, striding back and forth and rather widely gesturing, hold forth upon the central principle of all dath ilani project management, the ability to identify who is responsible for something.  If there is not one person responsible for something, it means nobody is responsible for it.  This is the proverb of dath ilani management.  Are t

... (read more)
5Richard_Kennaway
Thanks for posting this extract. I find the glowfic format a bit wearing to read, for some reason, and it is these nuggets that I read Planecrash for, when I do. (Although I had no such problem with HPMOR, which I read avidly all the way through.)

What would it mean for a society to have real intellectual integrity?  For one, people would be expected to follow their stated beliefs to wherever they led.  Unprincipled exceptions and an inability or unwillingness to correlate beliefs among different domains would be subject to social sanction.  Valid attempts to persuade would be expected to be based on solid argumentation, meaning that what passes for typical salesmanship nowadays would be considered a grave affront.  Probably something along the lines of punching someone

... (read more)
5David Udell
Cf. "there are no atheists in a foxhole." Under stress, it's easy to slip sideways into a world model where things are going better, where you don't have to confront quite so many large looming problems. This is a completely natural human response to facing down difficult situations, especially when brooding over those situations over long periods of time. Similar sideways tugs can come from (overlapping categories) social incentives to endorse a sacred belief of some kind, or to not blaspheme, or to affirm the ingroup attire when life leaves you surrounded by a particular ingroup, or to believe what makes you or people like you look good/high status. Epistemic dignity is about seeing "slipping sideways" as beneath you. Living in reality is instrumentally beneficial, period. There's no good reason to ever allow yourself to not live in reality. Once you can see something, even dimly, there's absolutely no sense in hiding from that observation's implications. Those subtle mental motions by which we disappear observations we know that we won't like down the memory hole … epistemic dignity is about coming to always and everywhere violently reject these hidings-from-yourself, as a matter of principle. We don't actually have a choice in the matter -- there's no free parameter of intellectual virtue here, that you can form a subjective opinion on. That slipping sideways is undignified is written in the very mathematics of inference itself.
1David Udell
Minor spoilers for mad investor chaos and the woman of asmodeus (planecrash Book 1).

You can usually save a lot of time by skimming texts or just reading pieces of them. But reading a work all the way through uniquely lets you make negative existential claims about its content: only now can you authoritatively say that the work never mentions something.

1TLW
If you allow the assumption that your mental model of what was said matches what was said, then you don't necessarily need to read all the way through to authoritatively say that the work never mentions something, merely enough that you have confidence in your model. If you don't allow the assumption that your mental model of what was said matches what was said, then reading all the way through is insufficient to authoritatively say that the work never mentions something. (There is a third option here: that your mental model suddenly becomes much better when you finish reading the last word of an argument.)

Past historical experience and brainstorming about human social orders probably barely scratches the possibility space. If the CEV were to weigh in on possible posthuman social orders,[1] optimizing in part for how cool that social order is, I'd bet what it describes blows what we've seen out of the water in terms of cool factor.

  1. ^

    (Presumably posthumans will end up reflectively endorsing interactions with one another of some description.)

One important idea I've picked up from reading Zvi is that, in communication, it's important to buy out the status cost imposed by your claims.

If you're fielding a theory of the world that, as a side effect, dunks on your interlocutor and diminishes their social status, you can work to get that person to think in terms of Bayesian epistemology and not decision theory if you make sure you aren't hurting their social image. You have to put in the unreasonable-feeling work of framing all your claims such that their social status is preserved or fairly increas... (read more)

An Inconsistent Simulated World

I regret to inform you, you are an em inside an inconsistent simulated world. By this, I mean: your world is a slapdash thing put together out of off-the-shelf assets in the near future (presumably right before a singularity eats that simulator Earth).

Your world doesn't bother emulating far-away events in great detail, and indeed, may be messing up even things you can closely observe. Your simulators are probably not tampering with your thoughts, though even that is something worth considering carefully.

What are the flaws you... (read more)

When another article of equal argumentative caliber could have just as easily been written for the negation of a claim, that writeup is no evidence for its claim.

The explicit definition of an ordered pair ((a, b) = {{a}, {a, b}}) is frequently relegated to pathological set theory...

It is easy to locate the source of the mistrust and suspicion that many mathematicians feel toward the explicit definition of ordered pair given above. The trouble is not that there is anything wrong or anything missing; the relevant properties of the concept we have defined are all correct (that is, in accord with the demands of intuition) and all the correct properties are present. The trouble is that the concept has some irreleva

... (read more)
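(Not part of the quote, but for reference: the set-theoretic encoding above is engineered to deliver exactly one property, the characteristic property of ordered pairs.)

```latex
% Kuratowski's definition and the single property it is built to satisfy:
\[
  (a, b) := \{\{a\}, \{a, b\}\},
  \qquad
  (a, b) = (c, d) \iff a = c \ \text{and} \ b = d .
\]
% Any other feature of the encoding, e.g. that \{a\} is an element of (a, b),
% is one of the irrelevant artifacts the passage goes on to describe.
```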