Here's the new thread for posting quotes, with the usual rules:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself
  • Do not quote comments/posts on LW/OB
  • No more than 5 quotes per person per monthly thread, please.
432 comments

“Ignorance killed the cat; curiosity was framed!” ― C.J. Cherryh

(not sure if that is who said it originally, but that's the first attribution I found)

Interviewer: How do you answer critics who suggest that your team is playing god here?

Craig Venter: Oh... we're not playing.

British philosophy is more detailed and piecemeal than that of the Continent; when it allows itself some general principle, it sets to work to prove it inductively by examining its various applications. Thus Hume, after announcing that there is no idea without an antecedent impression, immediately proceeds to consider the following objection: suppose you are seeing two shades of colour which are similar but not identical, and suppose you have never seen a shade of colour intermediate between the two, can you nevertheless imagine such a shade? He does not decide the question, and considers that a decision adverse to his general principle would not be fatal to him, because his principle is not logical but empirical. When--to take a contrast--Leibniz wants to establish his monadology, he argues, roughly, as follows: Whatever is complex must be composed of simple parts; what is simple cannot be extended; therefore everything is composed of parts having no extension. But what is not extended is not matter. Therefore the ultimate constituents of things are not material, and, if not material, then mental. Consequently a table is really a colony of souls.

The difference of method, here, ma

...
I often find that I'm not well-read enough, or perhaps not smart enough, to decipher the intricate language of these eminent philosophers. I'd like to know: is Russell talking about something akin to scientific empiricism? Can someone enlighten me? From my shallow understanding, though, it seems like what he is saying is almost common sense when it comes to building knowledge or beliefs about a problem domain.
The idea that one should philosophize keeping close contact with empirical facts, instead of basing a long chain of arguments on abstract "logical" principles like Leibniz's, may be almost common sense now, but it wasn't in the early modern period Russell was talking about. And when Russell wrote this (1940s) he was old enough to remember that these kinds of arguments were still prevalent in his youth (1880s-1890s) among absolute idealists like Bradley, as he describes in "Our Knowledge of the External World []" (follow the link and do a Ctrl-F search for Bradley). So it did not seem to him a way of thinking so ancient and outdated as to be not worth arguing against.
Ah very good, in that context it makes perfect sense.

If you argue with a madman, it is extremely probable that you will get the worst of it; for in many ways his mind moves all the quicker for not being delayed by the things that go with good judgment. He is not hampered by a sense of humour or by charity, or by the dumb certainties of experience.

-- G. K. Chesterton, Orthodoxy

All of the books in the world contain no more information than is broadcast as video in a single large American city in a single year. Not all bits have equal value.

Carl Sagan

— Nick Szabo, quoted elsewhere in this post []. Fight!

Knowledge and information are different things. An audiobook takes up more hard disk space than an e-book, but they both convey the same knowledge.

"Comparing information and knowledge is like asking whether the fatness of a pig is more or less green than the designated hitter rule." -- David Guaspari

I now have coffee on my monitor.
This is one of the obvious facts that made me recoil in horror while reading Neuromancer. Their currency is BITS? Bits of what?
Are you sure you are thinking of the right novel? Searching this [] for the word "bit" did not find anything.
He may have been thinking of My Little Pony: Friendship is Magic.
Was the parent upvoted because people thought it was funny, or because they thought I had provided the correct answer, or because I mentioned ponies, or some other reason?

probably because you mentioned ponies.

Which got even more upvotes... [sigh]

Please don't become reddit!

Apparently so! Then, which book was it?? Shoot.
I think this is just a misuse of the word "information". If the bits aren't of equal value, clearly they do not have the same amount of information.
I think value was used meaning importance.

Clearly some bits have value 0, while others have value 1.
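For what it's worth, the Shannon sense of "information" can be sketched in a few lines: a string's bit content depends only on its symbol statistics, never on whether anyone values the message. A toy illustration (the example strings are arbitrary, chosen only for their exact entropies):

```python
import math
from collections import Counter

def entropy_bits_per_char(text: str) -> float:
    """Shannon entropy per character, in bits, from empirical frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# One repeated symbol carries zero bits per character, however "important"
# the symbol is; a uniform two-symbol mix carries exactly one bit per character.
h0 = entropy_bits_per_char("aaaaaaaa")
h1 = entropy_bits_per_char("abababab")
print(h0 == 0.0, h1 == 1.0)  # True True
```

Bit counts say nothing about which string is worth reading, which is Sagan's point about value.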

But I came to realize that I was not a wizard, that "will-power" was not mana, and I was not so much a ghost in the machine, as a machine in the machine.

Ta-nehisi Coates

Yes -- and to me, that's a perfect illustration of why experiments are relevant in the first place! More often than not, the only reason we need experiments is that we're not smart enough. After the experiment has been done, if we've learned anything worth knowing at all, then hopefully we've learned why the experiment wasn't necessary to begin with -- why it wouldn't have made sense for the world to be any other way. But we're too dumb to figure it out ourselves! --Scott Aaronson

Or at least confirmation bias makes it seem that way.
Also hindsight bias. But I still think the quote has a perfectly valid point.

It is absurd to divide people into good and bad. People are either charming or tedious.

-- Oscar Wilde

Thank you, Professor Quirrell.


That's excellent advice for writing fiction. Audiences root for charming characters much more than for good ones. Especially useful when your world only contains villains. This is harder in real life, since your opponents can ignore your witty one-liners and emphasize your mass murders.

(This comment brought to you by House Lannister.)

The scary thing is how often it does work in real life. (Except that in real life charm is more than just witty one-liners.
I don't know that you can really classify people as X or ¬X. I mean, have you not seen individuals be X in certain situations and ¬X in other situations? &c.
On the face of it I would absolutely disagree with Wilde on that: to live a moral life one needs to distinguish between good and bad. Charm (in bad people) and tedium (in good people) get in the way of this. On the other hand, was Wilde really just blowing a big raspberry at the moralisers of his day? Sort of saying "I care more about charm and tedium than what you call morality". I don't know enough about his context ...

Since I can't be bothered to do real research, I'll just point out that this Yahoo answer says that the quote is spoken by Lord Darlington. Oscar Wilde was a humorist and an entertainer. He makes amusing characters. His characters say amusing things.

Do not read too much into this quote and, without further evidence, I would not attribute this philosophy to Oscar Wilde himself.

(I haven't read Lady Windermere's Fan, where this is from, but this sounds very much like something Lord Henry from The Picture of Dorian Gray would say. And Lord Henry is one of the main causes of Dorian's fall from grace in that book; he's not exactly a very positive character, but certainly an entertainingly cynical one!)

But is it necessary to divide people into good and bad? What if you were only to apply goodness and badness to consequences and to your own actions?
If your own action is to empower another person, understanding that person's goodness or badness is necessary to understanding the action's goodness or badness.
But that can be entirely reduced to the goodness or badness of consequences.
And many charming people are also bad [] .
I like it, but what's it got to do with rationality?
To me at least, it captures the notion of how the perceived Truth/Falsity of a belief rests solely in our categorization of it as 'tribal' or 'non-tribal': weird or normal. Normal beliefs are true, weird beliefs are false. We believe our friends more readily than experts.
It is absurd to divide people into charming or tedious. People either have familiar worldviews or unfamiliar worldviews.
It is absurd to divide people into familiar worldviews or unfamiliar worldviews. People either have closer environmental causality or farther environmental causality. (anyone care to formalize the recursive tower?)
It's absurd to divide people into two categories and expect those two categories to be meaningful in more than a few contexts.

It is absurd to divide people. They tend to die if you do that.

It's absurd to divide. You tend to die if you do that.

It's absurd: You tend to die.

It's absurd to die.

It's bs to die.
“To do is to be” -- Nietzsche “To be is to do” -- Kant “Do be do be do” -- Sinatra
Nobody alive has died yet.
It will be quick. It might even be painless. I would not know. I have never died. -- Voldemort
At least not in worlds where he is alive.
Is it worse to enter a state of superimposed death and life than to die?
I hope not. That's the state we are all in now and what we are entering constantly. Unless there are rounding errors in the universe we haven't detected yet.
I think life requires a system large and complex enough to produce decoherence between "alive" and "dead" in timescales shorter than required to define "alive" at all.
Sorry, that was a Schrodinger's Cat joke.

“Males” and “females”. (OK, there are edge cases and stuff, but this doesn't mean the categories aren't meaningful, does it?)

What about good vs bad humans?
Or humans who create paperclips versus those who don't?

I thought I just said that.

Can't there be good humans who don't create paperclips and just destroy antipaperclips and staples and such?
Destroying antipaperclips is creating paperclips. I didn't know humans had the concept though.
What is an antipaperclip?
Anything not a paperclip, or in opposition to further paperclipping. You might ask, "Why not just say 'non-paperclips'?" but anti-paperclips include paperclips deliberately designed to unbend, or which work at anti-paperclip purposes (say, a paperclip being used to short-circuit the electrical systems in a paperclip factory).
I brought a box of paperclips into my office today to use as bowl picks for my new bong; if I re-bend them after I use them, can I avoid becoming an anti-paperclip?

The problem with Internet quotes and statistics is that often times, they’re wrongfully believed to be real.

— Abraham Lincoln

The findings reveal that 20.7% of the studied articles in behavioral economics propose paternalist policy action and that 95.5% of these do not contain any analysis of the cognitive ability of policymakers.

-- Niclas Berggren, source and HT to Tyler Cowen

Sounds like a job for...Will_Newsome! EDIT: Why the downvotes? This seems like a fairly obvious case of researchers going insufficiently meta.
META MAN! willnewsomecuresmetaproblemsasfastashecan META MAN!

“I drive an Infiniti. That’s really evil. There are people who just starve to death – that’s all they ever did. There’s people who are like, born and they go ‘Uh, I’m hungry’ then they just die, and that’s all they ever got to do. Meanwhile I’m driving in my car having a great time, and I sleep like a baby.

It’s totally my fault, ’cause I could trade my Infiniti for a [less luxurious] car… and I’d get back like $20,000. And I could save hundreds of people from dying of starvation with that money. And everyday I don’t do it. Everyday I make them die with my car.”

Louis C.K.

Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befal himself would occasion a more real disturbance. If he was to lose his little fi

...

And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident [as the destruction of China] had happened.

Now that we are informed of disasters worldwide as soon as they happen, and can give at least money with a few mouse clicks, we can put this prediction to the test. What in fact we see is a very great public response to such disasters as the Japanese earthquake and tsunami.

True, but first of all, the situation posited is one in which China is "swallowed up". If a disaster occurred, and there was no clear way for the generous public to actually help, do you think you would see the same response? I'm sure you would still have the same loud proclamations of tragedy and sympathy, but would there be action to match it? I suppose it's possible that they would try to support the remaining Chinese who presumably survived by not being in China, but it seems unlikely to me that the same concerted aid efforts would exist. Secondly, it seems to me that Smith is talking more about genuine emotional distress and lasting life changes than simply any kind of reaction. Yes, people donate money for disaster relief, but do they lose sleep over it? (Yes, there are some people who drop everything and relocate to physically help, but they are the exception.) Is a $5 donation to the Red Cross more indicative of genuine distress and significant change, or the kind of public sympathy that allows the person to return to their lives as soon as they've sent the text?
If help is not possible, obviously there will be no help. But in real disasters, there always is a way to help, and help is always forthcoming.
Even if help is not possible, there will be "help."


Paragraphs cost lines, and when each line of paper on average costs five shillings, you use as many of them as you can get away with.



I support this motion, and further propose that formatting and other aesthetic considerations also be inferred from known data on the authors to fully reflect the manner in which they would have presented their work had they been aware of and capable of using all our current nice-book-writing technology. ...which sounds a lot like Eliezer's Friendly AI "first and final command". (I would link to the exact quote, but I've lost the bookmark. Will edit it in once found.)
Some writers were paid by the word and/or line.
I think much of it is that brevity simply wasn't seen as a virtue back then. There were far fewer written works, so you had more time to go through each one.
I think it's the vagaries of various times. All periods had pretty expensive media and some were, as one would expect, terse as hell. (Reading a book on Nagarjuna, I'm reminded that reading his Heart of the Middle Way was like trying to read a math book with nothing but theorems. And not even the proofs. 'Wait, could you go back and explain that? Or anything?') Latin prose could be very concise. Biblical literature likewise. I'm told much Chinese literature is similar (especially the classics), and I'd believe it from the translations I've read. Some periods praised clarity and simplicity of prose. Others didn't, and gave us things like Thomas Browne's Urn Burial. (We also need to remember that we read difficulty as complexity. Shakespeare is pretty easy to read... if you have a vocabulary so huge as to overcome the linguistic drift of 4 centuries and are used to his syntax. His contemporaries would not have had such problems.)
For context, the first paragraph-ish thing in Romance of the Three Kingdoms covers about two hundred years of history in about as many characters, in the meanwhile setting up the recurring theme of perpetual unification, division and subsequent reunification.
Sure, but popular novels like RofTK or Monkey or Dream of the Red Chamber were not really high-status stuff in the first place.
Eliezer Yudkowsky:
I detect a contradiction between "brevity not seen as virtue" and "they couldn't afford paragraphs".
Yes, I don't think "couldn't afford paper" is a good explanation, books of this nature were for wealthy people anyway.
Ancient Greek writing not only lacked paragraphs, but spaces. And punctuation. And everything was in capitals. IMAGINETRYINGTOREADSOMETHINGLIKETHATINADEADLANGUAGE.
Why do some people so revile our passive feelings, and so venerate hypocrisy?
Because it helps coerce others into doing things that benefit us and reduces how much force is exercised upon us while trading off the minimal amount of altruistic action necessary. There wouldn't (usually) be much point having altruistic principles and publicly reviling them.
That's quite a theory. It's like the old-fashioned elitist theory that hypocrisy is necessary to keep the hoi polloi in line, except apparently applied to everyone. Or not? Do you think you are made more useful to yourself and others by reviling your feelings and being hypocritical about your values?
The standard one. I was stating the obvious, not being controversial. I never said I did so. (And where did this 'useful to others' thing come in? That's certainly not something I'd try to argue for. The primary point of the hypocrisy is to reduce the amount that you actually spend helping others, for a given level of professed ideals.)
Sorry, I wasn't getting what you were saying. People are hypocritical to send the signal that they are more altruistic than they are? I suppose some do. Do you really think most people are consciously hypocritical on this score? I've wondered as much about a lot of peculiar social behavior, particularly the profession of certain beliefs - are most people consciously lying, and I just don't get the joke? Are the various crazy ideas people seem to have, where they seem to fail on epistemic grounds, just me mistaking what they consider instrumentally rational lies for epistemic mistakes?
Wedrifid is not ignorant enough to think that most people are consciously hypocritical. Being consciously hypocritical is very difficult. It requires a lot of coordination, a good memory and decent to excellent acting skills. But as you may have heard, "Sincerity is the thing; once you can fake that you've got it made." Evolution baked this lesson into us. The beliefs we profess and the principles we act by overlap, but they are not the same. If you want to read up further on this, go to social and cognitive psychology. The primary insights for me were that people are not unitary agents (they're collections of modules that occasionally work at cross purposes); that signalling is really freaking important; and that, in line with far/near or construal theory, holding a belief and acting on it are not the same thing. I can't recommend a single book to get the whole of this, or even most of it, across, but The Mating Mind [] and The Red Queen's Race [] are both good and relevant. I can't remember which one repeats Lewontin's Fallacy []. Don't dump it purely based on one brainfart.
Would that be ignorant? I'm not sure. Certainly, there are sharks. Like you, I'd tend to think that most people aren't sharks, but I consider the population of sharks an open question, and wouldn't consider someone necessarily ignorant if they thought there were more sharks than I did. Dennett talks about the collection of modules as well. I consider it an open question as to how much one is aware of the different modules at the same time. I've had strange experiences where people seem to be acting according to one idea, but when a contradictory fact is pointed out, they also seemed quite aware of that as well. Doublethink is a real thing.
And thanks for the reference to Lewontin's Fallacy - I didn't know there was a name for that. The Race FAQ at the site is very interesting.
Eliezer Yudkowsky:
I was expecting the attribution to be to Mark Twain. I wonder if their style seems similar on account of being old, or if there's more to it.

I think it means you're underread within that period, for what it's worth.

The voice in that quote differs from Twain's and sounds neither like a journalist, nor like a river-side-raised gentleman of the time, nor like a Nineteenth Century rural/cosmopolitan fusion written to gently mock both.

Though the voice isn't, the sentiment seems similar to something Twain would say. Though I'd expect a little more cynicism from him.
Tentatively: rhetoric was studied formally, and Twain and Smith might have been working from similar models.

… and I’d get back like $20,000. And I could save hundreds of people from dying of starvation with that money.

According to GiveWell, you could save ten people with that much.

The math here is scary. If you spitball the regulatory cost of life for a Westerner, it's around seven million dollars. To a certain extent, I'm pretty sure that that's high because the costs of over-regulating are less salient to regulators than the costs of under-regulating, but taken at face value, that means that, apparently, thirty-five hundred poor African kids are equivalent to one American.

Hilariously, the IPCC got flak from anti-globalization activists for positing a fifteen-to-one ratio in the value of life between developed and developing nations.
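The arithmetic behind those ratios is easy to check. The figures below are just the ones tossed around in this thread (a cost per life of ~$2,000 implied by "ten people for $20,000", and the spitballed $7 million Western value of a life), not authoritative numbers:

```python
# Figures from the discussion above -- illustrative, not authoritative.
car_resale = 20_000        # dollars recovered by trading down the Infiniti
cost_per_life = 2_000      # implied GiveWell-style cost per life saved
western_vsl = 7_000_000    # spitballed regulatory value of a Western life

lives_saved = car_resale // cost_per_life
implied_ratio = western_vsl // cost_per_life

print(lives_saved)    # 10
print(implied_ratio)  # 3500 -- the "thirty-five hundred" figure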

To save ten lives via FAI, you have to accelerate FAI development by 6 seconds.
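Presumably the "6 seconds" figure comes from the worldwide death rate, which was roughly 55 million per year at the time (my assumption; the comment doesn't say). A quick sanity check:

```python
# Assuming ~55 million deaths/year worldwide (an approximate figure,
# not stated in the original comment).
deaths_per_year = 55_000_000
seconds_per_year = 365.25 * 24 * 3600

deaths_per_second = deaths_per_year / seconds_per_year
seconds_to_save_ten = 10 / deaths_per_second
print(round(seconds_to_save_ten, 1))  # 5.7 -- about 6 seconds
```

Of course this prices an FAI-prevented death the same as a starvation death, which the thread goes on to dispute.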

...then what are you doing here? Get back to work!
Advocacy and movement-building?
Aren't you using different measures of what 'saving a life' is, anyway? The starving-child-save gives you about 60 years of extra life, whereas the FAI save gives something rather more.
You can do a thousand times better [] (very conservatively) if you expand your domain of consideration beyond homo sapiens.
Even better!
Ten is better than hundreds?
No, but people act like it is [].

I have always thought that one man of tolerable abilities may work great changes, and accomplish great affairs among mankind, if he first forms a good plan, and, cutting off all amusements or other employments that would divert his attention, makes the execution of that same plan his sole study and business.

-- Benjamin Franklin

The sentiment is correct (diligence may be more important than brilliance) but I think "all amusements or other employments" might be too absolute an imperative for most people to even try to live by. Most people will break down if they try to work too hard for too long, and changes of activity can be very important in keeping people fresh.

I think that both you and Mr. Franklin are correct. To wreak great changes one must stay focused and work diligently on one's goal. One needn't eliminate all pleasures from life, but I think you'll find that very, very few people can have a serious hobby and a world-changing vocation. Most of us of "tolerable" abilities cannot maintain the kind of focus and purity of dedication required. That is why the world changes as little as it does. If everyone who was, say, to the right of center on the IQ curve could make great changes, then "great" would be redefined upwards (if most people could run a 10-second 100 meters, Mr. Bolt would only be a little special). Furthermore... Oooohh... shiny....
I've heard this a lot, but it sounds a bit too convenient to me. When external (or internal) circumstances have forced me to spend lots of time on one specific, not particularly entertaining task, I've found that I actually become more interested and enthusiastic about that thing. For example, when I had to play chess for like 5 hours a day for a week once, or when I went on holiday and came back to 5000 anki reviews, or when I was on a maths camp that started every day with a problem set that took over 4 hours. Re "breaking down": if you mean they'll have a breakdown of will and be unable to continue working, that's an easy problem to solve - just hire someone to watch you and whip you whenever your productivity declines. And/Or chew nicotine gum when at your most productive. Or something. If you mean some other kind of breakdown, that does sound like something to be cautious of, but I think the correct response isn't to surrender eighty percent of your productivity, but to increase the amount of discomfort you can endure, maybe through some sort of hormesis training.

Playing chess for 5 hours a day does not make chess your "sole study and business" unless you have some disorder forcing you to sleep for 19 hours a day. If you spent the rest of your waking time studying chess, playing practice games, and doing the minimal amount necessary to survive (eating, etc.), THEN chess is your "sole study and business"; otherwise, you spend less than 1/3 your waking life on it, which is less than people spend at a regular full time job (at least in the US).

In my model this strategy decreases productivity for some tasks; especially those which require thinking. Fear of punishment brings "fight or flight" reaction, both of these options are harmful for thinking.
My very tentative guess is that for most people, there is substantial room to increase diligence. However, at the very top of the spectrum trying to work harder just causes each individual hour to be less efficient. Also note that diligence != hours worked, I am often more productive in a 7 hour work day than an 11 hour work day if the 7-hour one was better-planned. However I am still pretty uncertain about this. I am pretty near the top end of the spectrum for diligence and trying to see if I can hack it a bit higher without getting burn-out or decreased efficiency.
Generalizing from one example [] much? Maybe there are some people who are most efficient when they do 10 different things an hour a day each, other people who are most efficient when they do the same thing 10 hours a day, and other people still who are most efficient in intermediate circumstances.
Agreed; most people, me included, would probably be more productive if they interleaved productive tasks than if they did productive tasks in big blocks of time. I was just saying that in my experience, when I'm forced to do some unpleasant task a lot, after a while it's not as unpleasant as I initially expected. I'm pretty cognitively atypical, so you're right that other people are likely not the same. (This is of course a completely different claim than what the great-grandparent sorta implied and which I mostly argued against, which is that "Most people will break down if they try to work too hard for too long" means we shouldn't work very much, rather than trying to set things up so that we don't break down (through hormesis or precommitment or whatever). At least if we're optimizing for productivity rather than pleasantness.) Here []'s a vaguely-related paper (I've only read the abstract):
It's possible that what Franklin meant by "amusements" didn't include leisure: in his time, when education was not as widespread, a gentleman might have described learning a second language as an "amusement".
Except when the great change requires a leap of understanding. Regardless of how diligently she works, the person who is blind in a particular area will never make the necessary transcendental leap that creates new understanding. I have experienced this, working in a room full of brilliant people for a period of months. It took the transcendental leap of understanding by someone outside the group to present the elegantly simple solution to the apparently intractable problem. So, while many problems will fall to persistence and diligence, some problems require at least momentary transcendental brilliance ... or at least a favorable error. Hmm, this says something about the need for experimentation as well. Never underestimate the power of, "Huh, that's funny. It's not supposed to do that ..." Brian

reinventing the wheel is exactly what allows us to travel 80mph without even feeling it. the original wheel fell apart at about 5mph after 100 yards. now they're rubber, self-healing, last 4000 times longer. whoever intended the phrase "you're reinventing the wheel" to be an insult was an idiot.

--rickest on IRC


That's not what "reinventing the wheel" (when used as an insult) usually means. I guess that the inventor of the tyre was aware of the earlier types of wheel, their advantages, and their shortcomings. Conversely, the people who typically receive this insult don't even bother to research the prior art on whatever they are doing.

To go along with what army1987 said, "reinventing the wheel" isn't going from the wooden wheel to the rubber one. "Reinventing the wheel" is ignoring the rubber wheels that exist and spending months of R&D to make a wooden circle.

For example, trying to write a function to do date calculations, when there's a perfectly good library.

One obvious caveat is when the cost of finding, linking/registering and learning-to-use the library is greater than the cost of writing + debugging a function that suits your needs (of course, subject to the planning fallacy when doing estimates beforehand). More pronounced when the language/API/environment in question is one you're less fluent/comfortable with. In this optic, "reinventing the wheel" should be further restricted to when an irrational decision was taken to do something with less expected utility - cost than simply using the existing version(s).
That's why I chose the example of date calculations specifically. In practice, anyone who tries to write one of those from scratch will get it wrong in lots of different ways all at once.
Yes. It's a good example. I was more or less making a point against a strawman (made of expected inference), rather than trying to oppose your specific statements; I just felt it was too easy for someone not intimate with the headaches of date functions to mistake this for a general assertion that any rewriting of existing good libraries is a Bad Thing.
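To make the date-calculation pitfall concrete, here's a sketch; the `naive_add_days` helper is hypothetical, standing in for typical from-scratch date code, and it diverges from the standard library as soon as month lengths and leap years matter:

```python
import datetime

def naive_add_days(year, month, day, n):
    """Hypothetical hand-rolled day adder that pretends every month has
    30 days -- the kind of shortcut from-scratch date code tends to take."""
    day += n
    while day > 30:
        day -= 30
        month += 1
        if month > 12:
            month = 1
            year += 1
    return year, month, day

start = datetime.date(2012, 2, 27)
print(start + datetime.timedelta(days=3))  # 2012-03-01 (library knows 2012 is a leap year)
print(naive_add_days(2012, 2, 27, 3))      # (2012, 2, 30) -- a date that doesn't exist
```

And this is before time zones, DST transitions, or calendar reforms enter the picture.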
Jeff Atwood []
Clever-sounding and wrong [] is perhaps the worst combination in a rationality quote.

I don't think winners beat the competition because they work harder. And it's not even clear that they win because they have more creativity. The secret, I think, is in understanding what matters.

It's not obvious, and it changes. It changes by culture, by buyer, by product and even by the day of the week. But those that manage to capture the imagination, make sales and grow are doing it by perfecting the things that matter and ignoring the rest.

Both parts are difficult, particularly when you are surrounded by people who insist on fretting about and working on the stuff that makes no difference at all.

-Seth Godin

A common piece of advice from pro Magic: the Gathering players is "focus on what matters." The advice is mostly useless to many people, though, because the pros have made it to that level precisely because they know what matters to begin with.

Perhaps the better advice, then, is "when things aren't working, consider the possibility that it's because your efforts are not going into what matters, rather than assuming it is because you need to work harder on the issues you're already focusing on."

That's much better advice than Godin's near-tautology.
Could you add the link if it was a blog post, or name the book if the source was a book?

"Silver linings are like finding change in your couch. It's there, but it never amounts to much."


Hah! One of my favorite authors fishing out relevant quotes on one of my favorite topics out of one of my favorite webcomics. I smell the oncoming affective death spiral. I guess this is the time to draw the sword and cut the beliefs with full intent, is it?

My knee had a slight itch. I reached out my hand and scratched the knee in question. The itch was relieved and I was able to continue with my activities.

-- The dullest blog in the world

When I was a teenager (~15 years ago) I got tired of people going on and on with their awesome storytelling skills with magnificent punchlines. I was never a good storyteller, so I started telling mundane stories. For example, after someone in my group of friends would tell some amazing and entertaining story, I would start my story:

So this one time I got up. I put on some clothes. It turned out I was hungry, so I decided to go to the store. I bought some eggs, bread, and bacon. I paid for it, right? And then I left the store. I got to my apartment building and went up the stairs. I open my door and take the eggs, bacon, and bread out of the grocery bag. After that, I get a pan and start cooking the eggs and bacon, and put the bread in the toaster. After all of this, I put the cooked eggs and bacon on a plate and put some butter on my toast. I then started to eat my breakfast.

And that was it. People would look dumbfounded for a while, waiting for a punchline or some amazing happening. When they realized none was coming and I was finished, they would start laughing. Granted, this little joke of mine I would only do if there had been a long run of people telling amazing/funny stories.

(nods) In the same spirit: "How many X does it take to change a lightbulb? One."

Though I am fonder of "How many of my political opponents does it take to change a lightbulb? More than one, because they are foolish and stupid."

-- The comments to that entry. When I stumbled on that blog some years ago, it impressed me so much that I started trying to write and think in the same style.
...I don't really get why this is a rationality quote...

Sometimes proceeding past obstacles is very straightforward.

Why do I find that funny?

If cats looked like frogs we’d realize what nasty, cruel little bastards they are.

-- Terry Pratchett, "Lords and Ladies"

I don't get it. (Anyway, the antecedent is so implausible I have trouble evaluating the counterfactual. Is that supposed to be the point, à la “if my grandma had wheels”?)

Here's the context of the quote:

“The thing about elves is they’ve got no . . . begins with m,” Granny snapped her fingers irritably.


“Hah! Right, but no.”

“Muscle? Mucus? Mystery?”

“No. No. No. Means like . . . seein’ the other person’s point of view.”

Verence tried to see the world from a Granny Weatherwax perspective, and suspicion dawned.


“Right. None at all. Even a hunter, a good hunter, can feel for the quarry. That’s what makes ‘em a good hunter. Elves aren’t like that. They’re cruel for fun, and they can’t understand things like mercy. They can’t understand that anything apart from themselves might have feelings. They laugh a lot, especially if they’ve caught a lonely human or a dwarf or a troll. Trolls might be made out of rock, your majesty, but I’m telling you that a troll is your brother compared to elves. In the head, I mean.”

“But why don’t I know all this?”

“Glamour. Elves are beautiful. They’ve got,” she spat the word, “style. Beauty. Grace. That’s what matters. If cats looked like frogs we’d realize what nasty, cruel little bastards they are. Style. That’s what people remember. They remember the glamour. All the rest of it, all the truth of it, becomes . . . old wives’ tales.”

Since Mischa died, I've comforted myself by inventing reasons why it happened. I've been explaining it away ... But that's all bull. There was no reason. It happened and it didn't need to.

-- Erika Moen

I wonder how common it is for people to agentize accidents. I don't do that, but, annoyingly, lots of people around me do.

M. Mitchell Waldrop on a meeting between physicists and economists at the Santa Fe Institute: the axioms and theorems and proofs marched across the overhead projection screen, the physicists could only be awestruck at [the economists'] mathematical prowess — awestruck and appalled. They had the same objection that [Brian] Arthur and many other economists had been voicing from within the field for years. "They were almost too good," says one young physicist, who remembers shaking his head in disbelief. "It seemed as though they were dazzling themselves with fancy mathematics, until they really couldn't see the forest for the trees. So much time was being spent on trying to absorb the mathematics that I thought they often weren't looking at what the models were for, and what they did, and whether the underlying assumptions were any good. In a lot of cases, what was required was just some common sense. Maybe if they all had lower IQs, they'd have been making some better models.”

An excerpt from Wise Man's Fear, by Patrick Rothfuss. Boxing is not safe.

The innkeeper looked up. "I have to admit I don't see the trouble," he said apologetically. "I've seen monsters, Bast. The Cthaeh falls short of that."

"That was the wrong word for me to use, Reshi," Bast admitted. "But I can't think of a better one. If there was a word that meant poisonous and hateful and contagious, I'd use that."

Bast drew a deep breath and leaned forward in his chair. "Reshi, the Cthaeh can see the future. Not in some vague, oracular way. It sees all the future. Clearly. Perfectly. Everything that can possibly come to pass, branching out endlessly from the current moment."

Kvothe raised an eyebrow. "It can, can it?"

"It can," Bast said gravely. "And it is purely, perfectly malicious. This isn't a problem for the most part, as it can't leave the tree. But when someone comes to visit..."

Kvothe's eyes went distant as he nodded to himself. "If it knows the future perfectly," he said slowly, "then it must know exactly how a person will react to anything it says."

Bast nodded. "And it is vicious

...
I thought Chronicler's reply to this was excellent, however. Omniscience does not necessitate omnipotence. I mean, the UFAI in our world would have an easy time of killing everything. But in their world it's different. EDIT: Except that maybe we can be smart and stop the UFAI from killing everything even in our world, see my above comment.
Hah, I actually quoted much of that same passage on IRC in the same boxing vein! Although as presented the scenario does have some problems:
It is conceivable that there is no (near enough) future where Cthaeh is freed, thus it is powerless to affect its own fate, or is waiting for the right circumstances.
That seemed a little unlikely to me, though. As presented in the book, a minimum of many millennia have passed since the Cthaeh has begun operating, and possibly millions of years (in some frames of reference). It's had enough power to set planes of existence at war with each other and apparently cause the death of gods. I can't help but feel that it's implausible that in all that time, not one forking path led to its freedom. Much more plausible that it's somehow inherently trapped in or bound to the tree so there's no meaningful way in which it could escape (which breaks the analogy to an UFAI).
Isn't it what I said?
Not by my reading. In your comment, you gave 3 possible explanations, 2 of which are the same (it gets freed, but a long time from 'now') and the third a restriction on its foresight which is otherwise arbitrary ('powerless to affect its own fate'). Neither of these translate to 'there is no such thing as freedom for it to obtain'.
Alternatively, perhaps the Cthaeh's ability to see the future is limited to those possible futures in which it remains in the tree.
Leading to a seriously dystopian variant on Tenchi Muyo!...
I've come up with what I believe to be an entirely new approach to boxing, essentially merging boxing with FAI theory. I wrote a couple thoughts down about it, but lost my notes, and I also don't have much time to write this comment, so forgive me if it's vague or not extremely well reasoned. I also had a couple of tangential thoughts; if I remember them in the course of writing this or I recover my notes later, then I'll put them here as well.

The idea, essentially, is that when creating a boxed AI you would build its utility function such that it wants very badly to stay in the box. I believe this would solve all of the problems with the AI manipulating people in order to free itself. Now, the AI still could manipulate people in an attempt to use them to impact the outside world, so the AI wouldn't be totally boxed, but I'm inclined to think that we could maintain a very high degree of control over the AI, since the only powers it could ever have would be through communication with us.

The idea came because I recalled a discussion that occurred on about why the Cthaeh was in the tree. The general conclusion was that either the Cthaeh was bound by extremely powerful forces, or that the Cthaeh wanted to be in the tree, perhaps because it was instrumentally useful to him. While I found that second explanation implausible in the context of Rothfussland, that discussion led me to realize that almost all approaches towards AI boxing have gone through the first branch of potential boxing solutions, that is, external constraints imposed on the AI, as opposed to the second branch, internal constraints that the AI imposes on itself because of its utility function.

This led me to think that we should test our capabilities with FAI systems by putting them in a box and giving them limited utility functions, ones that couldn't possibly lead them to want to manipulate us. So, for example, we could put them in a box and give them a strong desire to stay in the box, al
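The "internal constraint via utility function" idea can be caricatured in a few lines. This is a toy sketch, not anything proposed in the thread, and every name in it is invented: an agent scores candidate actions and assigns negative-infinite utility to anything other than thinking or sending text, so even an astronomically valuable "escape" action is never selected.

```python
# Toy model of the "wants to stay in the box" proposal.
# The hard, unsolved part is hidden in this set: formally
# specifying which real-world actions count as "in the box".
ALLOWED_ACTIONS = {"think", "send_text"}

def boxed_utility(action, base_utility):
    """Assign -inf utility to any action outside the permitted channel."""
    if action not in ALLOWED_ACTIONS:
        return float("-inf")
    return base_utility

def choose_action(candidates):
    # candidates: list of (action, base_utility) pairs
    return max(candidates, key=lambda c: boxed_utility(c[0], c[1]))[0]

# The hugely valuable escape action loses to the modest in-box action.
print(choose_action([("escape_box", 1e9), ("send_text", 1.0), ("think", 0.5)]))
# -> send_text
```

Note that the replies below attack exactly the part this sketch assumes away: defining the action set in machine-understandable terms, and the fact that `send_text` itself can have arbitrary consequences through the gatekeepers.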
How do you specify precisely what it means to "stay in the box"? In particular, would creating a nearly identical copy of itself except without this limitation outside the box while the original stays in the box count?
It would not count; we'd want to make the AI not want this almost-identical AI to exist. That seems possible; it would be like how I don't want there to exist an identical copy of me except that it eats babies. There are lots of changes to my identity that would be slight but that I still wouldn't want to exist.

To be more precise, I'd say that it counts as going outside the box if it does anything except think or talk to the Gatekeepers through the text channel. It can use the text channel to manipulate the Gatekeepers to do things, but it can't manipulate them to do things that allow it to do anything other than use the text channel. It would, in a certain sense, be partially deontologist, and be unwilling to do things directly other than text the Gatekeepers. How ironic. Lolz.

Also: how would it do this, anyway? It would have to convince the Gatekeepers to convince the scientists to do this, or teach them computer science, or tell them its code. And if the AI started teaching the Gatekeepers computer code or techniques to incapacitate scientists, we'd obviously be aware that something had gone wrong. And, in the system I'm envisioning, the Gatekeepers would be closely monitored by other groups of scientists and bodyguards, and the scientists would be guarded, and the Gatekeepers wouldn't even have to know who specifically did what on the project.
And that's the problem. For in practice a partial deontologist-partial consequentialist will treat its deontological rules as obstacles to achieving what its consequentialist part wants, and route around them.
This is both a problem and a solution, because it makes the AI weaker. A weaker AI would be good because it would allow us to more easily transition to safer versions of FAI than we would otherwise come up with independently. I think that delaying a FAI is obviously much better than unleashing a UFAI. My entire goal throughout this conversation has been to think of ways that would make hostile FAIs weaker; I don't know why you think this is a relevant counterobjection.

You assert that it will just route around the deontological rules; that's nonsense and a completely unwarranted assumption. Try to actually back up what you're asserting with arguments. You're wrong. It's obviously possible to program things (e.g. people) such that they'll refuse to do certain things no matter what the consequences (e.g. you wouldn't murder trillions of babies to save billions of trillions of babies, because you'd go insane if you tried, because your body has such strong empathy mechanisms and you inherently value babies a lot).

This means that we wouldn't give the AI unlimited control over its source code, of course; we'd make the part that told it to be a deontologist who likes text channels unmodifiable. That specific drawback doesn't jibe well with the aesthetic of a super-powerful AI that's master of itself and the universe, I suppose, but other than that I see no drawback. Trying to build things in line with that aesthetic actually might be a reason for some of the more dangerous proposals in AI; maybe we're having too much fun playing God and not enough despair.

I'm a bit cranky in this comment because of the time sink that I'm dealing with to post these comments; sorry about that.
What it means for "the AI to be in the box" is generally that the AI's impacts on the outside world are filtered through the informed consent of the human gatekeepers. An AI that wants to not impact the outside world will shut itself down. An AI that wants to only impact the outside world in a way filtered through the informed consent of its gatekeepers is probably a full Friendly AI, because it understands both its gatekeepers and the concept of informed consent. An AI that simply wants its 'box' to remain functional, but is free to impact the rest of the world, is like a brain that wants to stay within a skull: that is hardly a material limitation on the rest of its behavior!
I think you misunderstand what I mean by proposing that the AI wants to stay inside the box. I mean that the AI wouldn't want to do anything at all to increase its power base, that it would only be willing to talk to the gatekeepers.
I agree that your and my understanding of the phrase "stay inside the box" differ. What I'm trying to do is point out that I don't think your understanding carves reality at the joints. In order for the AI to stay inside the box, the box needs to be defined in machine-understandable terms, not human-inferrable terms.

Each half of this sentence has a deep problem. Wouldn't correctly answering the questions of or otherwise improving the lives of the gatekeepers increase the AI's power base, since the AI has the ability to communicate with the gatekeepers? The problem with restrictions like "only be willing to talk" is that they restrict the medium but not the content. So, the AI has a text-only channel that goes just to the gatekeepers, but that doesn't restrict the content of the messages the AI can send to the gatekeeper. The fictional Cthaeh only wants to talk to its gatekeepers, and yet it still manages to get done what it wants to get done. Words have impacts, and it should be anticipated that the AI picks words because of their impacts.
Sure, the AI can manipulate gatekeepers. But this is a major improvement. You miss my point. The Cthaeh is very limited by being trapped in its tree and only able to talk to passersby. The UFAI would be limited by being trapped in its text-only communication channel. It wouldn't be able to do things like tell the gatekeepers to plug it into the Internet or to directly control an autonomous army of robots; it would be forced instead to use the gatekeepers as its appendages, and the gatekeepers have severe limitations on brain capacity and physical strength.

I think that if we did this and kept careful watch on the gatekeepers and used some other safety measures, boxing an AI would become feasible. Now, I'm still not advocating using boxed AI willy-nilly, but I think using it as a cautionary measure once we think we've got FAI figured out would be a very very very good idea, because FAI seems ridiculously hard and there's no way to know what we don't know, so there might be unseen problems with whatever final FAI we build. There's only a risk boxing the supposed FAI would help us, and with existential threats you should take advantage of whatever possibilities you can tilt towards favorable outcomes, even if the probability seems minuscule. That's not to say that I think the probability that this would stop a disaster is minuscule, of course (I think it's rather high, in the event that the FAI was actually UFAI), but just to say that I think it would be a good idea even if my probability assessments are mostly wrong.
Over the AI being embodied in a giant death robot, sure. But limiting the AI to a text-only channel is an old idea, and what Eliezer used for the "let me out of the box" demonstrations.

One of the things to note is that the limitations on the AI don't limit anything the AI creates. If the AI gives you a blueprint for a nanomachine that purports to cure cancer, but when you make it it turns out to actually be grey goo, then humans lose, even though the AI never left the box. (Note that a worst-case AI is very subtle: suppose the nanomachine actually does cure cancer when built correctly, but a one-in-a-trillion manufacturing error will make it grey goo. Since you're making trillions of them, you then get grey goo, and humans lose.) Trying to formally specify "don't give us any plans that will build an AI" or "don't give us any plans that will cause humans to lose" without silencing the AI completely is a genuinely difficult affair.

Basically, if you assume that the AI is tricky enough to circumvent any medium restrictions you place on it, then the only way to avoid "humans lose" is to have its goal be "humans win," which is actually a pretty complicated goal. Expressing that goal in a machine-understandable way is pretty much the FAI problem.
The entire point of Eliezer's demonstration was that if an AI wants to, it can increase its power base even starting from a text-only communication system. The entire point of my idea is that we can just build the AI such that it doesn't want to leave the box or increase its power base. It dodges that entire problem; that's the whole point.

You've gotten so used to being scared of boxed AI that you're reflexively rejecting my idea, I think, because your above objection makes no sense at all and is obviously wrong upon a moment's reflection. All of my bias-alarms have been going off since your second comment reply; please evaluate yourself and try to distance yourself from your previous beliefs, for the sake of humanity. Also, here is a kitten; unless you want it to die then please reevaluate.

Limitations on the AI restrict the range of things that the AI can create. Yes, if we just built whatever the AI said to and the AI was unfriendly, then we would lose. Obviously. Yes, if we assume that the UFAI is tricky enough to "circumvent any medium restrictions [we] place on it" then we would lose, practically by definition. But that assumption isn't warranted. (These super-weak strawmen were other indications to me that you might be being biased on this issue.)

I think a key component of our disagreement here might be that I'm assuming that the AI has a very limited range of inputs, that it could only directly perceive the text messages that it would be sent. You're either assuming that the AI could deduce the inner workings of our facility and the world and the universe from those text messages, or that the AI had access to a bunch of information about the world already. I disagree with both assumptions; the AI's direct perception could be severely limited and should be, and it isn't magic so it couldn't deduce the inner workings of
Let's return to my comment four comments up. How will you formalize "power base" in such a way that being helpful to the gatekeepers is allowed but being unhelpful to them is disallowed? If you would like to point out a part of the argument that does not follow, I would be happy to try and clarify it for you.

Okay. My assumption is that the usefulness of an AI is related to its danger. If we just stick Eliza in a box, it's not going to make humans lose, but it's also not going to cure cancer for us. If you have an AI that's useful, it must be because it's clever and it has data. If you type in "how do I cure cancer without reducing the longevity of the patient?" and expect to get a response like "1000 ccs of Vitamin C" instead of "what do you mean?", then the AI should already know about cancer and humans and medicine and so on. If the AI doesn't have this background knowledge, if it can't read Wikipedia and science textbooks and so on, then its operation in the box is not going to be a good indicator of its operation outside of the box, and so the box doesn't seem very useful as a security measure.

It's already difficult to understand how, say, face-recognition software uses particular eigenfaces. Why does it mean that the fifteenth eigenface has accentuated lips, and the fourteenth eigenface accentuated cheekbones? I can describe the general process that led to that, and what it implies in broad terms, but I can't tell if the software would be more or less efficient if those were swapped. The equivalent of eigenfaces for plans will be even more difficult to interpret. The plans don't end with a neat "humans_lose=1" that we can look at and say "hm, maybe we shouldn't implement this plan." In practice, debugging is much more effective at finding the source of problems after they've manifested, rather t
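The eigenface opacity point can be made concrete with a minimal sketch (Python with NumPy; random data stands in for an actual face dataset, and this is an illustration of the interpretability problem, not of any particular face-recognition product). The principal components fall straight out of an SVD, but nothing in the math labels a component "lips" or "cheekbones"; any such reading is post-hoc human interpretation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a face dataset: 100 "images" of 64 pixels each.
faces = rng.normal(size=(100, 64))

# Center the data and take the SVD; the rows of vt are the "eigenfaces".
centered = faces - faces.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt  # each row is a direction of variation in "face space"

# The decomposition reconstructs the data exactly, but the components
# carry no labels: deciding that component 14 "means cheekbones" is a
# human gloss, not something recoverable from the numbers themselves.
print(eigenfaces.shape)  # -> (64, 64)
```

The same opacity, scaled up from pixel statistics to plan components, is what makes "just read the AI's mind" harder than it sounds.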
I won't. The AI can do whatever it wants to the gatekeepers through the text channel, and won't want to do anything other than act through the text channel. This precaution is a way to use the boxing idea for testing, not an idea for abandoning FAI wholly.

EY proved that an AI that wants to get out will get out. He did not prove that an AI that wants to stay in will get out.

I agree, the way that I'm proposing to do AI is very limited. I myself can't think of what questions might be safe. But some questions are safer than others, and I find it hard to believe that literally every question we could ask would lead to dangerous outcomes, or that if we thought about it long and hard we couldn't come up with answers. I'm sort of shelving this as a subproject of this project, but one that seems feasible to me based on what I know. Also, perhaps we could just ask it hundreds of hypothetical questions based on conditions that don't really exist, and then ask it a real question based on conditions that do exist, and trick it, or something.

I think if the AI tags and sorts its instrumental and absolute goals it would be rather easy. I also think that if we'd built the AI then we'd have enough knowledge to read its mind. It wouldn't just magically appear; it would only do things in the way we'd told it to. It would probably be hard, but I think also probably doable if we were very committed. I could be wrong here because I've got no coding experience, just ideas from what I've read on this site.

The risk of distraction is outweighed by the risk that this idea disappears forever, I think, since I've never seen it proposed elsewhere on this site.
Well, he demonstrated that it can sometimes get out. But my claim was that "getting out" isn't the scary part; the scary part is "reshaping the world." My brain can reshape the world just fine while remaining in my skull and only communicating with my body through slow chemical wires, and so giving me the goal of "keep your brain in your skull" doesn't materially reduce my ability or desire to reshape the world.

And so if you say "well, we'll make the AI not want to reshape the world," then the AI will be silent. If you say "we'll make the AI not want to reshape the world without the consent of the gatekeepers," then the gatekeepers might be tricked or make mistakes. If you say "we'll make the AI not want to reshape the world without the informed consent of the gatekeepers / in ways which disagree with the values of the gatekeepers," then you're just saying we should build a Friendly AI, which I agree with!

It's easy to write a safe AI that can only answer one question. How do you get from point A to point B using the road system? Ask Google Maps, and besides some joke answers, you'll get what you want. When people talk about AGI, though, they mean an AI that can write those safe AIs. If you ask it how to get from point A to point B using the road system, and it doesn't know that Google Maps exists, it'll invent a new Google Maps and then use it to answer that question. And so when we ask it to cure cancer, it'll invent medicine-related AIs until it gets back a satisfactory answer.

The trouble is that the combination of individually safe AIs is not a safe AI. If we have a driverless car that works fine with human-checked directions, and direction-generating software that works fine for human drivers, plugging them together might result in a car trying to swim across the Atlantic Ocean. (Google has disabled the swimming answers, so Google Maps no longer provides them.) The more general
Another truism is that truisms are untrue things that people say anyway. Examples of code that is easier to read than write include those where the code represents a deep insight that must be discovered in order to implement it. This does not apply to most examples of software that we use to automate minutiae, but could potentially apply to the core elements of a GAI's search procedure. The above said, I of course agree that the thought of being able to read the AI's mind is ridiculous.
Unless you also explain that insight in a human-understandable way through comments, it doesn't follow that such code is easier to read than write, because the reader would then have to have the same insight to figure what the hell is going on in the code.
For example, being given code that simulates relativity before Einstein et al. discovered it would have made discovering relativity a lot easier.
Well, yeah, code fully simulating SR and written in a decent way would, but code approximately simulating collisions of ultrarelativistic particles with hand-coded optimizations... not sure.
It's not transparently obvious to me why this would be "ridiculous"; care to enlighten me? Building an AI at all seems ridiculous to many people, but that's because they don't actually think about the issue, because they've never encountered it before. It really seems far more ridiculous to me that we shouldn't even try to read the AI's mind, when there's so much at stake. AIs aren't gods; with time and care and lots of preparation, reading their thoughts should be doable. If you disagree with that statement, please explain why. Rushing things here seems like the most awful idea possible; I really think it would be worth the resource investment.
Sure, possible. Just a lot harder than creating an FAI to do it for you, especially when the AI has an incentive to obfuscate.
Why are you so confident that the first version of FAI we make will be safe? Doing both is safest and seems like it would be worth the investment.
I'm not. I expect it to kill us all with high probability (which is nevertheless lower than the probability of obliteration if no FAI is actively attempted.)
Humans reading computer code aren't gods either. How long until an uFAI would get caught if it did stuff like this?
It would be very hard, yes. I never tried to deny that. But I don't think it's hard enough to justify not trying to catch it. Also, you're only viewing the "output" of the AI, essentially, with that example. If you could model the cognitive processes of the authors of secretly malicious code, then it would be much more obvious that some of their (instrumental) goals didn't correspond to the ones that you wanted them to be achieving. The only way an AI could deceive us would be to deceive itself, and I'm not confident that an AI could do that.
That's not the same as “I'm confident that an AI couldn't do that”, is it?
At the time, it wasn't the same. Since then, I've thought more, and gained a lot of confidence on this issue. Firstly, any decision made by the AI to deceive us about its thought processes would logically precede anything that would actually deceive us, so we don't have to deal with the AI hiding its previous decision to be devious. Secondly, if the AI is divvying its own brain up into certain sections, some of which are filled with false beliefs and some which are filled with true ones, it seems like the AI would render itself impotent on a level proportionate to the extent that it filled itself with false beliefs. Thirdly, I don't think a mechanism which allowed for total self deception would even be compatible with rationality.
Even if the AI can modify its code, it can't really do anything that wasn't entailed by its original programming. (Ok, it could have a security vulnerability that allowed the execution of externally-injected malicious code, but that is a general issue of all computer systems with an external digital connection)
The hard part is predicting everything that was entailed by its initial programing and making sure it's all safe.
That's right, history of engineering tells us that "provably safe" and "provably secure" systems fail in unanticipated ways.
If it's a self-modifying AI, the main problem is that it keeps changing. You might find the memory position that corresponds to, say, expected number of paperclips. When you look at it next week wondering how many paperclips there are, it's changed to staples, and you have no good way of knowing. If it's not a self-modifying AI, then I suspect it would be pretty easy. If it used Solomonoff induction, it would be trivial. If not, you are likely to run into problems with stuff that only approximates Bayesian stuff. For example, if you let it develop its own hanging nodes, you'd have a hard time figuring out what they correspond to. They might not even correspond to something you could feasibly understand. If there's a big enough structure of them, it might even change.
This is a reason it would be extremely difficult. Yet I feel the remaining existential risk should outweigh that. It seems to me reasonably likely that our first version of FAI would go wrong. Human values are extremely difficult to understand because they're spaghetti mush, and they often contradict each other and interact in bizarre ways. Reconciling that in a self-consistent and logical fashion would be very difficult to do. Coding a program to do that would be even harder. We don't really seem to have made any real progress on FAI thus far, so I think this level of skepticism is warranted.

I'm proposing multiple alternative tracks to safer AI, which should probably be used in conjunction with the best FAI we can manage. Some of these tracks are expensive and difficult, but others seem simpler. The interactions between the different tracks produce a sort of safety net where the successes of one check the failures of others, as I've had to show throughout this conversation again and again.

I'm willing to spend much more to keep the planet safe against a much lower level of existential risk than anyone else here, I think. That's the only reason I can think of to explain why everyone keeps responding with objections that essentially boil down to "this would be difficult and expensive". But the entire idea of AI is expensive, as is FAI, yet the costs are accepted easily in those cases. I don't know why we shouldn't just add another difficult project to our long list of difficult projects to tackle, given the stakes that we're dealing with.

Most people on this site seem only to consider AI as a project to be completed in the next fifty or so years. I see it more as the most difficult task that's ever been attempted in all humankind. I think it will take at least 200 years, even factoring in the idea that new technologies I can't even imagine will be developed over that time. I think the most common perspective on the way we should approach AI is thus
EY's experiment is wholly irrelevant to this claim. Either you're introducing irrelevant facts or morphing your position. I think you're doing this without realizing it, and I think it's probably due to motivated cognition (because morphing claims without noticing it correlates highly with motivated cognition in my experience). I really feel like we might have imposed a box-taboo on this site that is far too strong. You keep misunderstanding what I'm saying over and over and over again and it's really frustrating and a big time sink. I'm going to need to end this conversation if it keeps happening because the utility of it is going down dramatically with each repetition. I'm not proposing a system where the AI doesn't interact with the outside world. I'm proposing a system where the AI is only ever willing to use a few appendages to affect the outside world, as opposed to potentially dozens. This dramatically minimizes the degree of control that the AI has, which is a good thing. This is not FAI either; it is an additional constraint that we should use when putting early FAIs into action. I'm not saying that we merge the AI's values with the values of the gatekeeper; I have no idea where you keep pulling that idea from. It's possible that I'm misunderstanding you, but I don't know how that would be true specifically, because many of your objections just seem totally irrelevant to me and I can't understand what you're getting at. It seems more likely that you're just not used to the idea of this version of boxing so you just regurgitate generic arguments against boxing, or something. You're also coming up with more obscure arguments as we go farther into this conversation. I don't really know what's going on at your end, but I'm just annoyed at this point. I don't even understand how this clashes with my position. I understand that smashing simple AIs together is a dumb idea, but I never proposed that ever. I'm proposing using this special system for early FAIs, a
The problem is that the AI could use its appendages to create and use tools that are more powerful than the appendages themselves.
I've already addressed this: the AI would still be entirely dependent on its appendages, and that's a major advantage. So long as we watch the appendages and act to check any actions by them that seem suspicious, the AI would remain weak. The AI isn't magic, and it's not even beyond the scope of human cunning if we limit its input data. Keep in mind also that we'd watch the communications between the appendages and the AI as well, so we'd know immediately if it was trying to get them to make it any tools. The Gatekeepers wouldn't exist in a vacuum; they would be watched over and countered by us.
I think this conversation has run its course as well, though I intend to pursue a side issue in PMs.

The problem with therapy -- including self-help and mind hacks -- is its amazing failure rate. People do it for years and come out of it and feel like they understand themselves better, but they do not change. If it failed to produce both insights and change it would make sense, but it is almost always one without the other.

-- The Last Psychiatrist

Is it our bias towards optimism? (And is that bias there because pessimists take fewer risks, and therefore don't succeed at much, and therefore get eliminated from the gene pool?) I heard (on a PRI podcast, I think) a brain scientist give an interpretation of the brain as a collection of agents, with consciousness as an interpreting layer that invents reasons for our actions after we've actually done them. There's evidence of this post-fact interpretation, and while I suspect this is only part of the story, it does give a hint that our conscious mind is limited in its ability to actually change our behavior. Still, people do sometimes give up alcohol and other drugs, and keep new resolutions. I've stuck to my daily exercise for 22 days straight. These feel like conscious decisions (though I may be fooling myself), but ones where my conscious will is battling different intentions from different parts of my mind. Apologies if that's rambling or nonsensical. I'm a bit tired (because every day I consciously decide to sleep early and every day I fail to do it) and I haven't done my 23rd day's exercise yet. Which I'll do now.

Did you teach him wisdom as well as valor, Ned? she wondered. Did you teach him how to kneel? The graveyards of the Seven Kingdoms were full of brave men who had never learned that lesson.

-- Catelyn Stark, A Game of Thrones, George R. R. Martin

Some critics of education have said that examinations are unrealistic; that nobody on the job would ever be evaluated without knowing when the evaluation would be conducted and what would be on the evaluation.

Sure. When Rudy Giuliani took office as mayor of New York, someone told him "On September 11, 2001, terrorists will fly airplanes into the World Trade Center, and you will be judged on how effectively you cope."


When you skid on an icy road, nobody will listen when you complain it's unfair because you weren't warned in advance, had no experience with winter driving and had never been taught how to cope with a skid.

-- Steven Dutch

Only the ideas that we actually live are of any value.

-- Hermann Hesse, Demian

Reductionism is the most natural thing in the world to grasp. It's simply the belief that "a whole can be understood completely if you understand its parts, and the nature of their sum." No one in her left brain could reject reductionism.

Douglas Hofstadter

ADBOC. Literally, that's true (but tautologous), but it suggests that understanding the nature of their sum is simple, which it isn't. Knowing the Standard Model gives hardly any insight into sociology, even though societies are made of elementary particles.
That quote is supposed to be paired with another quote about holism.
Q: What did the strange loop say to the cow? A: MU!
-- Knock knock. -- Who is it? -- Interrupting koan. -- Interrupting ko- -- MU!!!
The interesting thing is that Hofstadter doesn't seem to argue here that reductionism is true, but that it's a powerful meme that easily gets into people's brains.

To understand our civilisation, one must appreciate that the extended order resulted not from human design or intention but spontaneously: it arose from unintentionally conforming to certain traditional and largely moral practices, many of which men tend to dislike, whose significance they usually fail to understand, whose validity they cannot prove, and which have nonetheless fairly rapidly spread by means of an evolutionary selection — the comparative increase of population and wealth — of those groups that happened to follow them. The unwitting, reluctant, even painful adoption of these practices kept these groups together, increased their access to valuable information of all sorts, and enabled them to be 'fruitful, and multiply, and replenish the earth, and subdue it' (Genesis 1:28). This process is perhaps the least appreciated facet of human evolution.

-- Friedrich Hayek, The Fatal Conceit : The Errors of Socialism (1988), p. 6

It's not the end of the world. Well. I mean, yes, literally it is the end of the world, but moping doesn't help!

-- A Softer World

Should we add a point to these quote posts, that before posting a quote you should check there is a reference to its original source or context? Not necessarily to add to the quote, but you should be able to find it if challenged. The site seems fairly diligent at sourcing quotes, but Google doesn't rank it highly in search results compared to all the misattributed, misquoted or just plain made-up-on-the-spot nuggets of disinformation that have gone viral and colonized Googlespace, lying in wait to catch the unwary (such as, apparently, myself).

Yes, and also a point to check whether the quote has been posted to LW already.

By keenly confronting the enigmas that surround us, and by considering and analyzing the observations that I have made, I ended up in the domain of mathematics.

M. C. Escher


When a philosophy thus relinquishes its anchor in reality, it risks drifting arbitrarily far from sanity.

Gary Drescher, Good and Real

But a curiosity of my type remains after all the most agreeable of all vices --- sorry, I meant to say: the love of truth has its reward in heaven and even on earth.

-Friedrich Nietzsche

Explanations are all based on what makes it into our consciousness, but actions and the feelings happen before we are consciously aware of them—and most of them are the results of nonconscious processes, which will never make it into the explanations. The reality is, listening to people’s explanations of their actions is interesting—and in the case of politicians, entertaining—but often a waste of time. --Michael Gazzaniga

Does that apply to that explanation as well? Does it apply to explanations made in advance of the actions? For example, this evening (it is presently morning) I intend buying groceries on my way home from work, because there's stuff I need and this is a convenient opportunity to get it. When I do it, that will be the explanation. In the quoted article, the explanation he presents as a paradigmatic example of his general thesis is the reflex of jumping away from rustles in the grass. He presents an evolutionary just-so story to explain it, but one which fails to explain why I do not jump away from rustles in the grass, although surely I have much the same evolutionary background as he. I am more likely to peer closer to see what small creature is scurrying around in there. But then, I have never lived anywhere that snakes are a danger. He has. And yet this, and split-brain experiments, are the examples he cites to say that "often", we shouldn't listen to anyone's explanations of their behaviour. I smell crypto-dualism. "I thought there was a snake" seems to me a perfectly good description of the event, even given that I jumped way before I was conscious of the snake. (He has "I thought I'd seen a snake", but this is a fictional example, and I can make up fiction as well as he can.) The article references his book. Anyone read it? The excerpts I've skimmed on Amazon just consist of more evidence that we are brains: the Libet experiments, the perceived simultaneity of perceptions whose neural signals aren't, TMS experiments, and so on. There are some digressions into emergence, chaos, and quantum randomness. Then -- this is his innovation, highlighted in the publisher's blurb -- he sees responsibility as arising from social interaction. Maybe I'm missing something in the full text, but is he saying that someone alone really is just an automaton, and only in company can one really be a person? I
Did you in fact buy the groceries?
I did. There are many circumstances that might have prevented it; but none of them happened. There are many others that might have obstructed it; but I would have changed my actions to achieve the goal. Goals of such a simple sort are almost invariably achieved.
Three upvotes for demonstrating the basic competence to buy groceries?
There is a famous study that digs a bit deeper and convincingly demonstrates it: Telling more than we can know: Verbal reports on mental processes.
From the abstract: It seems to me that "cognitive processes" could be replaced by "physical surroundings", and the resulting statement would still be true. I am not sure how significant these findings are. We have imperfect knowledge of ourselves, but we have imperfect knowledge of everything.
Obviously not, since Gazzaniga is not explaining his own actions.
He is, among other things, explaining some of his own actions: his actions of explaining his actions.
You seem to have failed to notice the key point. Here's a slight rephrasing of it: "explanations for actions will fail to reflect the actual causes of those actions to the extent that those actions are the results of nonconscious processes." You ask, does Gazzaniga's explanation apply to explanations made in advance of the actions? The key point I've highlighted answers that question. In particular, your explanation of the actions you plan to take are (well, seem to me to be) the result of conscious processes. You consciously apprehended that you need groceries and consciously formulated a plan to fulfill that need. It seems to me that in common usage, when a person says "I thought there was a snake" they mean something closer to, "I thought I consciously apprehended the presence of a snake," than, "some low-level perceptual processing pattern-matched 'snake' and sent motor signals for retreating before I had a chance to consider the matter consciously."
Yes, he says that. And then he says: thus extending the anecdote of snakes in the grass to a parable that includes politicians' speeches. Or perhaps they mean "I heard a sound that might be a snake". As long as we're just making up scenarios, we can slant them to favour any view of consciousness we want. This doesn't even rise to the level of anecdote.

The world is full of obvious things which nobody by any chance ever observes…

— Arthur Conan Doyle, “The Hound of the Baskervilles”

[M]uch mistaken thinking about society could be eliminated by the most straightforward application of the pigeonhole principle: you can't fit more pigeons into your pigeon coop than you have holes to put them in. Even if you were telepathic, you could not learn all of what is going on in everybody's head because there is no room to fit all that information in yours. If I could completely scan 1,000 brains and had some machine to copy the contents of those into mine, I could only learn at most about a thousandth of the information stored in those brains, and then only at the cost of forgetting all else I had known. That's a theoretical optimum; any such real-world transfer process, such as reading and writing an e-mail or a book, or tutoring, or using or influencing a market price, will pick up only a small fraction of even the theoretically acquirable knowledge or preferences in the mind(s) at the other end of said process, or if you prefer of the information stored by those brain(s). Of course, one can argue that some kinds of knowledge -- like the kinds you and I know? -- are vastly more important than others, but such a claim is usually more snobbery than fact. Furthermore, a s

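Szabo's pigeonhole argument can be illustrated with a toy sketch (the capacities and the encoding below are made-up illustrations, not a model of real brains): a store with n bits has only 2**n distinct states, so any scheme packing more source states than that into it must conflate some of them.

```python
from itertools import product

def count_states(bits):
    """Number of distinct states a store with `bits` bits can hold."""
    return 2 ** bits

# Toy capacities: one "brain" holds 4 bits; 1000 such brains hold 4000 bits.
assert count_states(4) < count_states(4000)

# Concretely: try to encode every 8-bit message into a 4-bit code.
# 256 messages into 16 codes -- the pigeonhole principle guarantees
# that some encoding collides, i.e. information is lost.
codes = {}
collision = None
for msg in product("01", repeat=8):
    code = hash(msg) % 16      # any deterministic 4-bit encoding will do
    if code in codes and codes[code] != msg:
        collision = (codes[code], msg)
        break
    codes[code] = msg

assert collision is not None   # two distinct messages share a code
```

The collision is forced by counting alone; no cleverness in the encoding can avoid it, which is exactly the quote's point.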

What about compression?

Do you mean lossy or lossless compression? If you mean lossy compression then that is precisely Szabo's point. On the other hand, if you mean lossless, then if you had some way to losslessly compress a brain, this would only work if you were the only one with this compression scheme, since otherwise other people would apply it to their own brains and use the freed space to store more information.

You'll probably have more success losslessly compressing two brains than losslessly compressing one.
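The shared-redundancy point can be illustrated with an ordinary lossless compressor (a toy sketch with made-up stand-in data, not a claim about actual brains): when two inputs share structure, compressing them jointly beats compressing them separately.

```python
import zlib

# Two highly similar byte strings stand in for two "brains".
brain_a = b"the quick brown fox jumps over the lazy dog. " * 50
brain_b = brain_a.replace(b"fox", b"cat")   # nearly identical content

# Compressing each on its own cannot exploit what they have in common.
separately = len(zlib.compress(brain_a, 9)) + len(zlib.compress(brain_b, 9))

# Compressing the concatenation lets the second half be encoded largely
# as back-references into the first.
jointly = len(zlib.compress(brain_a + brain_b, 9))

assert jointly < separately
```

Of course this only shows the direction of the effect; the original quote's point stands, since no amount of such compression fits 1000 brains' worth of information into one brain of the same capacity.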

Still, I don't think you could compress the content of 1000 brains into one. (And I'm not sure about two brains, either. Maybe the brains of two six-year-olds into that of a 25-year-old.)
I argue that my brain right now contains a lossless copy of itself and itself two words ago! Getting 1000 brains in here would take some creativity, but I'm sure I can figure something out... But this is all rather facetious. Breaking the quote's point would require me to be able to compute the (legitimate) results of the computations of an arbitrary number of arbitrarily different brains, at the same speed as them. Which I can't. For now.
I'd argue that your brain doesn't even contain a lossless copy of itself. It is a lossless copy of itself, but your knowledge of yourself is limited. So I think that Nick Szabo's point about the limits of being able to model other people applies just as strongly to modelling oneself. I don't, and cannot, know all about myself -- past, current, or future, and that must have substantial implications about something or other that this lunch hour is too small to contain. How much knowledge of itself can an artificial system have? There is probably some interesting mathematics to be done -- for example, it is possible to write a program that prints out an exact copy of itself (without having access to the file that contains it), the proof of Gödel's theorem involves constructing a proposition that talks about itself, and TDT depends on agents being able to reason about their own and other agents' source codes. Are there mathematical limits to this?
I never meant to say that I could give you an exact description of my own brain and itself ε ago, just that you could deduce one from looking at mine.
But our memories discard huge amounts of information all the time. Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.
Certainly. I am suggesting that over sufficiently short timescales, though, you can deduce the previous structure from the current one. Maybe I should have said "epsilon" instead of "two words". Why would you expect the degradation to be completely uniform? It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things. So, depending on your choice of two words, sometimes the brain would take marginally more bits to describe and sometimes marginally fewer. Actually, so long as the brain can be considered as operating independently from the outside world (which, given an appropriately chosen small interval of time, makes some amount of sense), a complete description at time t will imply a complete description at time t + δ. The information required to describe the first brain therefore describes the second one too. So I've made another error: I should have said that my brain contains a lossless copy of itself and itself two words later (where "two words" = "epsilon").
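The "description at time t implies a description at time t + δ" step holds for any deterministic, closed system; here's a minimal sketch (my own toy update rule, not anything from the thread):

```python
# For a deterministic, closed system, the full state at time t plus the
# update rule determines the state at t + 1, so the earlier description
# is a lossless code for the later one.
def step(state):
    """A made-up deterministic update rule on a tuple of integers."""
    return tuple((x * 3 + 1) % 17 for x in state)

state_t = (1, 5, 9)
state_t1 = step(state_t)

# Anyone holding (state_t, step) can reproduce state_t1 exactly.
assert step(state_t) == state_t1
```

The caveat in the comment is the real constraint: brains are not closed systems, so this only works "given an appropriately chosen small interval of time".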
See the pigeon-hole argument in the original quote.
If you can scan it, maybe you can simulate it? And if you can simulate one, wait some years and you can simulate 1000, probably connected in some way to form a single "thinking system".
But not on your own brain.

Anything worth doing is worth doing badly.

--Herbert Simon (quoted by Pat Langley)

Including artificial intelligence? ;-)
The Chesterton version looks like it was designed to poke the older (and in my opinion better) advice from Lord Chesterfield: Or, rephrased as Simon did: I strongly recommend his letters to his son. They contain quite a bit of great advice -- as well as politics and health and so on. As it was private advice given to an heir, most of it is fully sound. (In fact, it's been a while. I probably ought to find my copy and give it another read.)

Yeah, they're on my reading list. My dad used to say that a lot, but I always said the truer version was 'Anything not worth doing is not worth doing well', since he was usually using it about worthless yardwork...

Ah, I was gonna mention this. Didn't know it was from Chesterfield. I think there'd be more musicians (a good thing IMO) if more people took Chesterton's advice.
A favorite of mine, but according to Wikiquote G.K. Chesterton said it first, in chapter 14 of What's Wrong With The World:
I like Simon's version better: it flows without the awkward pause for the comma.
Yep, it seems that often epigrams are made more epigrammatic by the open-source process of people misquoting them. I went looking up what I thought was another example of this, but Wiktionary calls it "[l]ikely traditional" (though the only other citation is roughly contemporary with Maslow).
Memetics in action - survival of the most epigrammatic!

Niels Bohr's maxim that the opposite of a profound truth is another profound truth [is a] profound truth [from which] the profound truth follows that the opposite of a profound truth is not a profound truth at all.

-- The narrator in On Self-Delusion and Bounded Rationality, by Scott Aaronson

Eliezer Yudkowsky:
I would remark that truth is conserved, but profundity isn't. If you have two meaningful statements - that is, two statements with truth conditions, so that reality can be either like or unlike the statement - and they are opposites, then at most one of them can be true. On the other hand, things that invoke deep-sounding words can often be negated, and sound equally profound at the end of it. In other words, Bohr's maxim seems so blatantly awful that I am mostly minded to chalk it up as another case of, "I wish famous quantum physicists knew even a little bit about epistemology-with-math".
I don't really know what "profound" means here, but I usually take Bohr's maxim as a way of pointing out that when I encounter two statements, both of which seem true (e.g., they seem to support verified predictions about observations), which seem like opposites of one another, I have discovered a fault line in my thinking... either a case where I'm switching back and forth between two different and incompatible techniques for mapping English-language statements to predictions about observations, or a case for which my understanding of what it means for statements to be opposites is inadequate, or something else along those lines. Mapping epistemological fault lines may not be profound, but I find it a useful thing to attend to. At the very least, I find it useful to be very careful about reasoning casually in proximity to them.
I seem to recall E.T. Jaynes pointing out some obscure passages by Bohr which (according to him) showed that he wasn't that clueless about epistemology, but only about which kind of language to use to talk about it, so that everyone else misunderstood him. (I'll post the ref if I find it. EDIT: here it is¹.) For example, if this maxim actually means what TheOtherDave says it means, then it is a very good thought expressed in a very bad way. 1. Disclaimer: I think the disproof of Bell's theorem in the linked article is wrong.
Hmm, why is that? This seems incontrovertible, but I can't think of an explanation, or even a hypothesis.
Eliezer Yudkowsky:
Because they have non-overlapping truth conditions. Either reality is inside one set of possible worlds, inside the other set, or in neither set.
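The non-overlapping truth conditions can be made concrete with a toy possible-worlds model (a sketch of my own, with made-up world labels):

```python
# Model a statement's "truth conditions" as the set of possible worlds
# in which it holds.  Opposite statements have disjoint truth conditions,
# though together they need not cover every world.
worlds = {"w1", "w2", "w3", "w4"}

statement = {"w1", "w2"}   # true exactly in worlds w1 and w2
opposite = {"w3"}          # an opposite: disjoint, but not exhaustive

assert statement.isdisjoint(opposite)

# Reality is inside one set, inside the other, or in neither:
neither = worlds - statement - opposite
assert neither == {"w4"}

for reality in worlds:
    # Whatever world is actual, at most one of the pair is true.
    assert not (reality in statement and reality in opposite)
```

This is why truth is conserved for genuine opposites: disjoint sets cannot both contain the actual world.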
Let's try it on itself... What's the negative of "often"? "Sometimes"? Yep, still sounds equally profound. Probably not the type of self-consistency you were striving for, though.
Reminds me of this.

"So now I’m pondering the eternal question of whether the ends justify the means."

"Hmm ... can be either way, depending on the circumstances."

"Precisely. A mathematician would say that stated generally, the problem lacks a solution. Therefore, instead of a clear directive the One in His infinite wisdom had decided to supply us with conscience, which is a rather finicky and unreliable device."

— Kirill Yeskov, The Last Ringbearer, trans. Yisroel Markov

Not only should you disagree with others, but you should disagree with yourself. Totalitarian thought asks us to consider, much less accept, only one hypothesis at a time. By contrast quantum thought, as I call it -- although it already has a traditional name less recognizable to the modern ear, scholastic thought -- demands that we simultaneously consider often mutually contradictory possibilities. Thinking about and presenting only one side's arguments gives one's thought and prose a false patina of consistency: a fallacy of thought and communications s


the overuse of "quantum" hurt my eyes. :(


“If a man take no thought about what is distant, he will find sorrow near at hand.”


In matters of science, the authority of thousands is not worth the humble reasoning of one single person.


OTOH, thousands would be less likely to all make the same mistake than one single person -- were it not for information cascades.
Almost always false.
If the basis of the position of the thousands -is- their authority, then the reason of one wins. If the basis of their position is reason, as opposed to authority, then you don't arrive at that quote.
It depends on whether or not the thousands are scientists. I'll trust one scientist over a billion sages.
I wouldn't, though I would trust a thousand scientists over a billion sages.
It would depend on the subject. Do we control for time period and the relative background knowledge of their culture in general?
The majority is wrong most of the time. Either you search the data for patterns, or you put credence in some author or group. People keep making mathematical claims without basic training all the time -- here too.

"Given the nature of the multiverse, everything that can possibly happen will happen. This includes works of fiction: anything that can be imagined and written about, will be imagined and written about. If every story is being written, then someone, somewhere in the multiverse is writing your story. To them, you are a fictional character. What that means is that the barrier which separates the dimensions from each other is in fact the Fourth Wall."

-- In Flight Gaiden: Playing with Tropes


(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagrams and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)

In the library of books of every possible string, close to "Harry Potter and the Methods of Rationality" and "Harry Potter and the Methods of Rationalitz" is "Harry Potter and the Methods of Rationality: Logically Consistent Edition." Why is the reality of that book's contents affected by your reticence to manifest that book in our universe?

Absolutely; I hope he doesn't think that writing a story about X increases the measure of X. But then why else would he introduce these "impossibilities"?
Because it's funny?
It is a different story then, so the original HPMoR would still not be nonfiction in another universe. For all we know, the existence of a corridor tiled with pentagons is in fact an important plot point and removing it would utterly destroy the structure of upcoming chapters.
Eliezer Yudkowsky:
Nnnot really. The Time-Turner, certainly, but that doesn't make the story uninstantiable. Making a logical impossibility a basic plot premise... sounds like quite an interesting challenge, but that would be a different story.
A spell that lets you get a number of objects that is an integer such that it's larger than some other integer but smaller than its successor, used to hide something.
This idea (the integer, not the spell) is the premise of the short story The Secret Number by Igor Teper.
And SCP-033. And related concepts in Dark Integers by Greg Egan. And probably a bunch of other places. I'm surprised I couldn't find a TVtropes page on it.
Huh. And here I thought that space was just negatively curved in there, with the corridor shaped in such a way that it looks normal (not that hard to imagine), and just used this to tile the floor. Such disappointment... This was part of a thing, too, in my head, where Harry (or, I guess, the reader) slowly realizes that Hogwarts, rather than having no geometry, has a highly local geometry. I was even starting to look for that as a thematic thing, perhaps an echo of some moral lesson, somehow. And this isn't even the sort of thing you can write fanfics about. :¬(
Could you explain why you did that? As regards the pentagons, I kinda assumed the pentagons weren't regular, equiangular pentagons - you could tile a floor in tiles that were shaped like a square with a triangle on top! Or the pentagons could be different sizes and shapes.
Because he doesn't want to create Azkaban. Also, possibly because there's not a happy ending.
But if all mathematically possible universes exist anyway (or if they have a chance of existing), then the hypothetical "Azkaban from a universe without EY's logical inconsistencies" exists, no matter whether he writes about it or not. I don't see how writing about it could affect how real/not-real it is. So by my understanding of how Eliezer explained it, he's not creating Azkaban, in the sense that writing about it causes it to exist, he's describing it. (This is not to say that he's not creating the fiction, but the way I see it create is being used in two different ways.) Unless I'm missing some mechanism by which imagining something causes it to exist, but that seems very unlikely.
I seem to recall that he terminally cares about all mathematically possible universes, not just his own, to the point that he won't bother having children because there's some other universe where they exist anyway. I think that violates the crap out of Egan's Law (such an argument could potentially apply to lots of other things), but given that he seems to be otherwise relatively sane, I conclude that he just hasn't fully thought it through ("decompartmentalized" in LW lingo) (probability 5%), that's not his true rejection of the idea of having kids (30%), or I am missing something (65%).
Eliezer Yudkowsky:
That is not the reason or even a reason why I'm not having kids at the moment. And since I don't particularly want to discourage other people from having children, I decline to discuss my own reasons publicly (or in the vicinity of anyone else who wants kids).
I feel that I should. It's a politically inconvenient stance to take, since all human cultures are based on reproducing themselves; antinatal cultures literally die out. But from a human perspective, this world is deeply flawed. To create a life is to gamble with the outcome of that life. And it seems to be a gratuitous gamble.
That sounds sufficiently ominous that I'm not quite sure I want kids any more.
Eliezer Yudkowsky:
Shouldn't you be taking into account that I don't want to discourage other people from having kids?

That might just be because you eat babies.

Unfortunately, that seems to be a malleable argument. Which way your stating that (you don't want to disclose your reasons for not wanting to have kids) will influence audiences seems like it will depend heavily on their priors for how generally-valid-to-any-other-person this reason might be, and for how self-motivated both the not-wanting-to-have-kids and the not-wanting-to-discourage-others could be. Then again, I might be missing some key pieces of context. No offense intended, but I try to make it a point not to follow your actions and gobble up your words personally, even to the point of mind-imaging a computer-generated mental voice when reading the sequences. I've already been burned pretty hard by blindly reaching for a role-model I was too fond of.
But you're afraid that if you state your reason, it will discourage others from having kids.

All that means is that he is aware of the halo effect. People who have enjoyed or learned from his work will give his reasons undue weight as a consequence, even if they don't actually apply to them.

Obviously his reason is that he wants to personally maximize his time and resources on FAI research. Because not everyone is a seed AI programmer, this reason does not apply to most everyone else. If Eliezer thinks FAI is going to probably take a few decades (which evidence seems to indicate he does), then it probably very well is in the best interest of those rationalists who aren't themselves FAI researchers to be having kids, so he wouldn't want to discourage that. (Although I don't see how just explaining this would discourage anybody from having kids who you would otherwise want to.)
(I must have misremembered. Sorry)
8Eliezer Yudkowsky10y
OK, no prob! (I do care about everything that exists. I am not particularly certain that all mathematically possible universes exist, or how much they exist if they do. I do expect that our own universe is spatially and in several other ways physically infinite or physically very big. I don't see this as a good argument against the fun of having children. I do see it as a good counterargument to creating children for the sole purpose of making sure that mindspace is fully explored, or because larger populations of the universe are good qua good. This has nothing to do with the reason I'm not having kids right now.)
I think I care about almost nothing that exists, and that seems like too big a disagreement. It's fair to assume that I'm the one being irrational, so can you explain to me why one should care about everything?

All righty; I run my utility function over everything that exists. On most of the existing things in the modern universe, it outputs 'don't care', like for dirt. However, so long as a person exists anywhere, in this universe or somewhere else, my utility function cares about them. I have no idea what it means for something to exist, or why some things exist more than others; but our universe is so suspiciously simple and regular relative to all imaginable universes that I'm pretty sure that universes with simple laws or uniform laws exist more than universes with complicated laws with lots of exceptions in them, which is why I don't expect to sprout wings and fly away. Supposing that all possible universes 'exist' with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this; and therefore I'm not sure that it is true, although it does seem very plausible.

[-][anonymous]10y 12

Don’t forget.
Always, somewhere,
somebody cares about you.
As long as you simulate him,
you are not valueless.

The moral value of imaginary friends?
I notice that I am meta-confused... Shouldn't we strongly expect this weighting, by Solomonoff induction?
Probability is not obviously amount of existence.
Allow me to paraphrase him with some of my own thoughts. Dang, existence, what is that? Can things exist more than other things? In Solomonoff induction we have something that kind of looks like "all possible worlds", or computable worlds anyway, and they're each equipped with a little number that discounts them by their complexity. So maybe that's like existing partially? Tiny worlds exist really strongly, and complex worlds are faint? That...that's a really weird mental image, and I don't want to stake very much on its accuracy. I mean, really, what the heck does it mean to be in a world that doesn't exist very much? I get a mental image of fog or a ghost or something. That's silly because it needlessly proposes ghosty behavior on top of the world behavior which determines the complexity, so my mental imagery is failing me. So what does it mean for my world to exist less than yours? I know how that numerical discount plays into my decisions, how it lets me select among possible explanations; it's a very nice and useful little principle. Or at least it's useful in this world. But maybe I'm thinking that in multiple worlds, in some of which I'm about to find myself having negative six octarine tentacles. So Occam's razor is useful in ... some world. But the fact that it's useful to me suggests that it says something about reality, maybe even about all those other possible worlds, whatever they are. Right? Maybe? It doesn't seem like a very big leap to go from "Occam's razor is useful" to "Occam's razor is useful because when using it, my beliefs reflect and exploit the structure of reality", or to "Some worlds exist more than others, the obvious interpretation of what ontological fact is being taken into consideration in the math of Solomonoff induction". Wei Dai suggested that maybe prior probabilities are just utilities, that simpler universes don't exist more, we just care about them more, or let o
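The "little number that discounts them by their complexity" can be made concrete with a toy sketch (my own illustration, not anything from the thread; true Solomonoff induction is uncomputable, and the hypothesis names and bit-lengths below are invented):

```python
# Toy illustration of a simplicity-weighted prior (a sketch only; true
# Solomonoff induction is uncomputable). Each hypothesis is identified
# with the length, in bits, of its shortest description, and is weighted
# 2 ** -length -- the "little number" discussed above.
def simplicity_prior(description_lengths):
    """Map {hypothesis: description length in bits} to a normalized
    prior that favors shorter (simpler) hypotheses."""
    weights = {h: 2.0 ** -k for h, k in description_lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Made-up bit-lengths for the two kinds of world under discussion:
prior = simplicity_prior({
    "uniform laws": 10,          # short program: "exists strongly"
    "laws with exceptions": 30,  # long program: "faint"
})
# The simple world dominates the prior, which is the formal version of
# not expecting to sprout wings and fly away.
```

With these made-up numbers the simple world ends up with well over 99.9% of the prior mass, illustrating how steeply an exponential complexity penalty concentrates weight on simple hypotheses.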
(Assuming you mean “all imaginable universes with self-aware observers in them”.) Not completely sure about that, even Conway's Game of Life is Turing-complete after all. (But then, it only generates self-aware observers under very complicated starting conditions. We should sum the complexity of the rules and the complexity of the starting conditions, and if we trust Penrose and Hawking about this, the starting conditions of this universe were terrifically simple.)
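The point about Conway's Game of Life having simple rules but needing complicated starting conditions can be seen in a minimal sketch (my own toy code, not from the thread): the entire rule set fits in a few lines, so essentially all the descriptive complexity of an interesting pattern lives in its initial configuration.

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row, oscillating with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

The rules above are a couple of lines; a starting configuration that computes anything (Life is Turing-complete) would be enormously larger, which is the complexity-accounting point being made.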
What do you mean, you don't care about dirt? I care about dirt! Dirt is where we get most of our food, and humans need food to live. Maybe interstellar hydrogen would be a better example of something you're indifferent to? 10^17 kg of interstellar hydrogen disappearing would be an inconsequential flicker if we noticed it at all, whereas the loss of an equal mass of arable soil would be an extinction-level event.

I care about the future consequences of dirt, but not the dirt itself.

(For the love of Belldandy, you people...)

He means that he doesn't care about dirt for its own sake (e.g. like he cares about other sentient beings for their own sakes).
Yes, and I'm arguing that it has instrumental value anyway. A well-thought-out utility function should reflect that sort of thing.
Instrumental values are just subgoals that appear when you form plans to achieve your terminal values. They aren't supposed to be reflected in your utility function. That is a type error plain and simple.
For agents with bounded computational resources, I'm not sure that's the case. I don't terminally value money at all, but I pretend I do as a computational approximation, because it'd be too expensive for me to run an expected utility calculation over all the things I could possibly buy whenever I'm considering gaining or losing money in exchange for something else.
I thought that was what I just said...
An approximation is not necessarily a type error.
No, but mistaking your approximation for the thing you are approximating is.
That one is. Instrumental values do not go in the utility function. You use instrumental values to shortcut complex utility calculations, but a utility-calculating shortcut != a component of the utility function.
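The distinction being argued over can be sketched in code (a toy model of my own; all names and numbers are invented for illustration): the utility function scores only terminal outcomes, while money's "value" is a cached estimate derived from it for planning, never a term inside it.

```python
# Toy model of the terminal/instrumental distinction (all names and
# numbers invented for illustration). The utility function is defined
# only over terminal outcomes; money's "value" is a cached planning
# shortcut derived from it, not a component of it.
TERMINAL_UTILITY = {"read a good book": 5.0, "share a meal": 8.0}
PRICES = {"read a good book": 15.0, "share a meal": 40.0}

def utility(outcome):
    """The utility function proper: terminal outcomes only."""
    return TERMINAL_UTILITY[outcome]

def cached_value_of_money():
    """Instrumental shortcut: estimate the best achievable utility per
    dollar once, then reuse that number when trading money for things,
    instead of re-running a full plan search every time."""
    return max(utility(o) / PRICES[o] for o in TERMINAL_UTILITY)

rate = cached_value_of_money()
# `utility` never mentions money; treating `rate` as a term of
# `utility` itself would be the type error described in the thread.
```

In this sketch the bounded agent's "value of money" is recomputed only when prices or terminal values change, which is the sense in which an approximation can be useful without being mistaken for the thing it approximates.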
Try tabooing exist: you might find out that you actually disagree on fewer things than you expect. (I strongly suspect that the only real differences between the four possibilities in this [] is labels -- the way once in a while people come up with new solutions to Einstein's field equations only to later find out they were just already-known solutions with an unusual coordinate system.)
I've not yet found a good way to do that. Do you have one?
"Be in this universe"(1) vs "be mathematically possible" should cover most cases, though other times it might not quite match either of those and be much harder to explain. 1. "This universe" being defined as everything that could interact with the speaker, or with something that could interact with the speaker, etc. ad infinitum.
Defining 'existence' by using 'interaction' (or worse yet the possibility of interaction) seems to me to be trying to define something fundamental by using something non-fundamental. As for "mathematical possibility", that's generally not what most people mean by existence -- unless Tegmark IV is proven or assumed to be true, I don't think we can therefore taboo it in this manner...
I'm not claiming they're ultimate definitions --after all any definition must be grounded in something else-- but at least they disambiguate which meaning is meant, the way “acoustic wave” and “auditory sensation” disambiguate “sound” in the tree-in-a-forest problem. For a real-world example of such a confusion, see this [], where people were talking at cross-purposes because by “no explanation exists for X” one meant ‘no explanation for X exists written down anywhere’ and another meant ‘no explanation for X exists in the space of all possible strings’. Sentences such as “there exist infinitely many prime numbers” don't sound that unusual to me.
That's way too complicated (and as for tabooing 'exist', I'll believe it when I see it). Here's what I mean: I see a dog outside right now. One of the things in that dog is a cup or so of urine. I don't care about that urine at all. Not one tiny little bit. Heck, I don't even care about that dog, much less all the other dogs, and the urine that is in them. That's a lot of things! And I don't care about any of it. I assume Eliezer doesn't care about the dog urine in that dog either. It would be weird if he did. But it's in the 'everything' bucket, so...I probably misunderstood him?
So you're using exist in a sense according to which things have moral relevance iff they exist (or something roughly like that), which may be broader than ‘be in this universe’ but may be narrower than ‘be mathematically possible’. I think I get it now.
I was confused by this for a while, but couldn't express that in words until now. First, I think existence is necessarily a binary sort of thing, not something that exists in degrees. If I exist 20%, I don't even know what that sentence should mean. Do I exist, but only sometimes? Do only parts of me exist at a time? Am I just very skinny? It doesn't really make sense. Just as a risk of a risk is still a type of risk, so a degree of existence is still a type of existence. There are no sorts of existence except either being real or being fake. Secondly, even if my first part is wrong, I have no idea why having more existence would translate into having greater value. By way of analogy, if I were the size of a planet but only had a very small brain and motivational center, I don't think that would mean that I should receive more consideration from utilitarians. It seems like a variation of the Bigger is Better or Might makes Right moral fallacy, rather than a well reasoned idea. I can imagine a sort of world where every experience is more intense, somehow, and I think people in that sort of world might matter more. But I think intensity is really a measure of relative interactions, and if their world was identical to ours except for its amount of existence, we'd be just as motivated to do different things as they would. I don't think such a world would exist, or that we could tell whether or not we were in it from the inside, so it seems like a meaningless concept. So the reasoning behind that sentence didn't really make sense to me. The amount of existence that you have, assuming that's even a thing, shouldn't determine your moral value.
I imagine Eliezer is being deliberately imprecise, in accordance with a quote I very much like: "Never speak more clearly than you think." [The internet seems to attribute this to one Jeremy Bernstein] If you believe MWI there are many different worlds that all objectively exist. Does this mean morality is futile, since no matter what we choose, there's a world where we chose the opposite? Probably not: the different worlds seem to have different "degrees of existence" in that we are more likely to find ourselves in some than in others. I'm not clear how this can be, but the fact that probability works suggests it pretty strongly. So we can still act morally by trying to maximize the "degree of existence" of good worlds. This suggests that the idea of a "degree of existence" might not be completely incoherent.
I suppose you can just attribute it to imprecision, but "I am not particularly certain how much they exist" implies that he's talking about a subset of mathematically possible universes that do objectively exist, but yet exist less than other worlds. What you're talking about, conversely, seems to be that we should create as many good worlds as possible, stretched in order to cover Eliezer's terminology. Existence is binary, even though there are more of some things that exist than there are of other things. Using "amount of existence" instead of "number of worlds" is unnecessarily confusing, at the least. Also, I don't see any problems with infinitarian ethics anyway because I subscribe to (broad) egoism. Things outside of my experience don't exist in any meaningful sense except as cognitive tools that I use to predict my future experiences. This allows me to distinguish between my own happiness and the happiness of Babykillers, which allows me to utilize a moral system much more in line with my own motivations. It also means that I don't care about alternate versions of the universe unless I think it's likely that I'll fall into one through some sort of interdimensional portal (I don't). Although, I'll still err on the side of helping other universes if it does no damage to me because I think Superrationality can function well in those sorts of situations and I'd like to receive benefits in return, but in other scenarios I don't really care at all.
Congratulations for having "I am missing something" at a high probability!
I was sure I had seen you talk about them in public (on BHTV, I believe), something like (possible misquote) "Lbh fubhyqa'g envfr puvyqera hayrff lbh pna ohvyq bar sebz fpengpu," which sounded kinda weird, because it applies to literally every human on earth, and that didn't seem to be where you were going.
He has said something like that, but always with the caveat that there be an exception for pre-singularity civilizations.
The way I recall it, there was no such caveat in that particular instance. I am not attempting to take him outside of context and I do think I would have remembered. He may have used this every other time he's said it. It may have been cut for time. And I don't mean to suggest my memory is anything like perfect. But: I strongly suspect that's still on the internet, on BHTV or somewhere else.
Why is that in ROT13? Are you trying to not spoil an underspecified episode of BHTV?
It's not something Eliezer wanted said publicly. I wasn't sure what to do, and for some reason I didn't want to PM or email, so I picked a shitty, irrational half measure. I do that sometimes, instead of just doing the rational thing and PMing/ emailing him/ keeping my mouth shut if it really wasn't worth the effort to think about another 10 seconds. I do that sometimes, and I usually know about when I do it, like this time, but can't always keep myself from doing it.
Tiling the wall with impossible geometry seems reasonable, but from what I recall about the objects in Dumbledore's room, all the story said was that Hermione kept losing track. Not sure whether artist intent trumps reader interpretation, but at first glance it seems far more likely to me that magic was causing Hermione to be confused than that magic was causing mathematical impossibilities.
The problem with using such logical impossibilities is that you have to make sure they're really impossible. For example, tiling a corridor with pentagons is completely viable in non-Euclidean space. So, sorry to break it to you, but if there's a multiverse, your story is real in it.
I'm curious though, is there anything in there that would even count as this [] level [] of logically impossible? Can anyone remember one?
Anyway, I've decided that, when not talking about mathematics, real, exist, happen, etc. are deictic terms which specifically refer to the particular universe the speaker is in. Using real to apply to everything in Tegmark's multiverse fails Egan's Law IMO. See also: the last chapter of Good and Real.
Of course, universes including stories extremely similar to HPMOR except that the corridor is tiled in hexagons etc. do ‘exist’ ‘somewhere’. (EDIT: hadn't noticed the same point had been made before. OK, I'll never again reply to comments in “Top Comments” without reading already existing replies first -- if I remember not to.)
And they aren't even regular pentagons! So, it's all real then...
Or at least... the story could not be real in a universe unless at least portions of the universe could serve as a model for hyperbolic geometry [] and... hmm, I don't think non-standard arithmetic [] will get you "Exists.N (N != N)", but reading literally here, you didn't say they were the same as such, merely that the operations of "addition" or "subtraction" were not used on them. Now I'm curious about mentions of arithmetic operations and motion through space in the rest of the story. Harry implicitly references orbital mechanics I think... I'm not even sure if orbits are stable in hyperbolic 3-space... And there's definitely counting of gold in the first few chapters, but I didn't track arithmetic to see if prices and total made sense... Hmm. Evil :-P

Evil doesn't worry about not being good

  • from the video game "Dragon Age: Origins" spoken by the player.

Not sure if this is a "rationality" quote in and of itself; maybe a morality quote?

[Meta] This post doesn't seem to be tagged 'quotes,' making it less convenient to move from it to the other quote threads.

Done (and sorry for the long delay).

Fiction is a branch of neurology.

-- J. G. Ballard (in a "what I'm working on" essay from 1966.)

[-][anonymous]10y 27


5[anonymous]10y []
Ballard does note later in the same essay "Neurology is a branch of fiction."

I am a strange loop and so can you!

To develop mathematics, one must always labor to substitute ideas for calculations.

-- Dirichlet

(Don't have a source, but the following paper quotes it: Prolegomena to Any Future Qualitative Physics)

A principal object of Wald's [statistical decision theory] is then to characterize the class of admissible strategies in mathematical terms, so that any such strategy can be found by carrying out a definite procedure... [Unfortunately] an 'inadmissible' decision may be overwhelmingly preferable to an 'admissible' one, because the criterion of admissibility ignores prior information — even information so cogent that, for example, in major medical... safety decisions, to ignore it would put lives in jeopardy and support a charge of criminal negligence.


... (read more)
You mean such as 'rational'.

Ignorance is preferable to error and he is less remote from the truth who believes nothing than he who believes what is wrong.

Thomas Jefferson

I wonder how we could empirically test this. We could see who makes more accurate predictions, but people without beliefs about something won't make predictions at all. That should probably count as a victory for wrong people, so long as they do better than chance. We could also test how quickly people learn the correct theory. In both cases, I expect you'd see some truly deep errors which are worse than ignorance, but that on the whole people in error will do quite a lot better. Bad theories still often make good predictions, and it seems like it would be very hard, if not impossible, to explain a correct theory of physics to someone who has literally no beliefs about physics. I'd put my money on people in error over the ignorant.

Man likes complexity. He does not want to take only one step; it is more interesting to look forward to millions of steps. The one who is seeking the truth gets into a maze, and that maze interests him. He wants to go through it a thousand times more. It is just like children. Their whole interest is in running about; they do not want to see the door and go in until they are very tired. So it is with grown-up people. They all say that they are seeking truth, but they like the maze. That is why the mystics made the greatest truths a mystery, to be given on

... (read more)

A lie, repeated a thousand times, becomes a truth. --Joseph Goebbels, Nazi Minister of Propaganda

It does not! It does not! It does not! ... continued here

He who knows best, best knows how little he knows.

Thomas Jefferson

Intellectuals solve problems, geniuses prevent them.

-- [Edit: Probably not] Albert Einstein

Do you have a source? Einstein gets quoted quite a lot for stuff he didn't say.
Yes, and even more annoyingly, he gets quoted on things of which he is a non-expert and has nothing interesting to say (politics, psychology, ethics, etc...).
Hmm. There are hundreds of thousands of pages asserting that he said it, but for some reason I can't find a single reference to its context. Thanks. Have edited the quote.
For future reference: wikiquote [] gives quotes with context.
Thanks, I already [] plugged them :)
Genii seem to create problems. They prevent some in the process, and solve others, but that's not what they're in for: it's not nearly as fun.
[-][anonymous]10y 3

Inside every non-Bayesian, there is a Bayesian struggling to get out.

Dennis Lindley

(I've read plenty of authors who appear to have the intuition that probabilities are epistemic rather than ontological somewhere in the back --or even the front-- of their mind, but appear to be unaware of the extent to which this intuition has been formalised and developed.)

Suppose we carefully examine an agent who systematically becomes rich [that is, who systematically "wins" on decision problems], and try hard to make ourselves sympathize with the internal rhyme and reason of his algorithm. We try to adopt this strange, foreign viewpoint as though it were our own. And then, after enough work, it all starts to make sense — to visibly reflect new principles appealing in their own right. Would this not be the best of all possible worlds? We could become rich and have a coherent viewpoint on decision theory. If such

... (read more)

David Hume lays out the foundations of decision theory in A Treatise of Human Nature (1740):

...'tis only in two senses, that any affection can be call'd unreasonable. First, when a passion, such as hope or fear, grief or joy, despair or security, is founded on the supposition of the existence of objects which really do not exist. Secondly, when in exerting any passion in action, we chuse means insufficient for the design'd end, and deceive ourselves in our judgment of causes and effects.

This seems to omit the possibility of akrasia.
Doesn't cover that?

I fear perhaps thou deemest that we fare
An impious road to realms of thought profane;
But 'tis that same religion oftener far
Hath bred the foul impieties of men:
As once at Aulis, the elected chiefs,
Foremost of heroes, Danaan counsellors,
Defiled Diana's altar, virgin queen,
With Agamemnon's daughter, foully slain.
She felt the chaplet round her maiden locks
And fillets, fluttering down on either cheek,
And at the altar marked her grieving sire,
The priests beside him who concealed the knife,
And all the folk in tear

... (read more)
How do you make newlines work inside quotes? The formatting when I made this comment is bad.
This is the same as if you wrote it without the greater-than sign then added a greater-than sign to the beginning of each line. (If you want a line break without a paragraph break, end a line with two spaces.)
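The procedure described can be illustrated (hypothetical example text; the second line below ends with two trailing spaces to force a line break, and a quoted blank line starts a new paragraph):

```
> The quick brown fox
> jumps over the lazy dog.  
> This line follows a line break, not a paragraph break.
>
> This is a second paragraph inside the same quote.
```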

Who taught you that senseless self-chastisement? I give you the money and you take it! People who can't accept a gift have nothing to give themselves. -De Gankelaar (Karakter, 1997)

Nulla è più raro al mondo, che una persona abitualmente sopportabile. -Giacomo Leopardi

(Nothing is rarer in the world than a person who is habitually bearable.)

[-][anonymous]10y 0


[This comment is no longer endorsed by its author]Reply
[-][anonymous]10y 0

Should we add a point to these quote posts, that before posting a quote you should check there is a reference to its original source or context? Not to add to the quote, but you should be able to find it if challenged. seems fairly diligent at sourcing quotes, but Google doesn't rank it highly in search results compared to all the misattributed, misquoted or just plain made up on the spot nuggets of disinformation that have gone viral and colonized Googlespace, lying in wait to catch the unwary (such as apparently myself).

[This comment is no longer endorsed by its author]Reply

Some say (not without a trace of mockery) that the old masters would supposedly forever invest a fraction of their souls in each batch of mithril, and since today there are no souls, but only the ‘objective reality perceived by our senses,’ by definition we have no chance to obtain true mithril.

-Kirill Yeskov, The Last Ringbearer, trans. Yisroel Markov

Context, please?
Mithril is described as an alloy with near-miraculous properties, produced in ancient times, which cannot be reproduced nowadays, despite the best efforts of modern metallurgy. The book is a work of fiction.

Alternatively, mithril is aluminum, almost unobtainable in ancient times and thus seen as miraculous. Think about that the next time you crush a soda can.


Incidentally, in many cases modern armor is made of aluminum, because aluminum (being less rigid) can dissipate more energy without failing. A suit of chain mail made of aircraft-grade aluminum would seem downright magical a few centuries ago.

Aluminum was entirely unobtainable in ancient times, I believe. It fuses with carbon as well as oxygen, so there was no way to refine it. And it would have made terrible armor, being quite a lot softer than steel. It also suffers from fatigue failures much more easily than steel. These are some of the reasons it makes a bad, though cheap, material for bikes.
Pure aluminum can be found without reducing it yourself, but it's very rare []. You'd have to pluck it out of the interior of a volcano or the bottom of the sea, and so it seems possible that some could end up in the hands of a medieval smith, but very unlikely.
Oh, I don't know, one would say the same thing about meteoritic iron [], and yet there are well documented uses of it. (Although apparently the Sword of Attila [] wasn't really meteoritic and I got that from fiction [].)
I dunno. I read The Last Ringbearer (pretty good, although I have mixed feelings about it in general), but it doesn't seem interesting to me either.

My favorite fantasy is living forever, and one of the things about living forever is all the names you could drop.

Roz Kaveny

However, the facile explanations provided by the left brain interpreter may also enhance the opinion of a person about themselves and produce strong biases which prevent the person from seeing themselves in the light of reality and repeating patterns of behavior which led to past failures. The explanations generated by the left brain interpreter may be balanced by right brain systems which follow the constraints of reality to a closer degree.

"I know by experience that I'm not able to endure the presence of a single person for more than three hours. After this period, I lose lucidity, become muddled, and end up irritated or sunk in a deep depression." -Julio Ramon Ribeyro

If you wish to make an apple pie from scratch you must first invent the universe. --Carl Sagan

[This comment is no longer endorsed by its author]Reply