A dramatic understatement -- I found this to be far superior to WAitW, as it's
more concrete and offers reasonable advice to its readers. By being more
systematic, it strikes me as a better illustration of WAitW than the actual
WAitW article.
1FiftyTwo11y
WAitW = "Worst argument in the world", yes? The acronym is unclear.
0Nic_Smith11y
Yes.
7Viliam_Bur11y
Thank you for linking to this article; I enjoy seeing other people making my
point better than I could. Here are some additional thoughts:
My first thought after reading the linked article was "pick your battles",
especially as expressed in Paul Graham's "What You Can't Say
[http://paulgraham.com/say.html]". It sounds like the exact opposite of
"Atheism+ [http://freethoughtblogs.com/carrier/archives/2412/]", and yet they
both seem to make a lot of sense... well, how is that possible?
More generally: You hold opinions X and Y, and they are both important to you.
Another person agrees with X and disagrees with Y. When should you treat this
person as an ally, and when should you treat them as an enemy? Let's suppose
that both X and Y are your core values, so you can't decide by "which one is
more important to you".
Seems to me that when X is endangered, then each proponent of X is a gift, and
you don't look a gift horse in the mouth. On the other hand, when X is safe --
it may still be a minority belief, but it gains momentum irreversibly -- it is
strategic to associate X with Y as much as possible, to transfer some momentum
to Y; declaring "X but not Y" people as the "enemies of the true X (which
includes Y)" is the obvious way to do it. You can afford to alienate the few "X
but not Y" people if X will win without them too.
This was a strategic analysis in general, but now let's look at these specific X
and Y; namely: What could possibly be wrong about asking people to be
compassionate and reasonable, and not to be a bully???
Well, it depends on your specific definitions of "compassionate", "reasonable"
and "bully". Yes, the devil is in the details. As long as the vocal people are
allowed to redefine these words to mean exactly what they need them to mean in a
given moment, and especially if "surely, you said A, but we all know that's just
a code for B" arguments are accepted, it becomes possible to relabel any dissent
as bullying, and to ostracize given people not because...
6[anonymous]11y
Also this comment by Kaj_Sotala:
4John_Maxwell11y
Has it occurred to anyone that worst-argument-in-the-world type thinking is
probably a result of the affect heuristic
[http://lesswrong.com/lw/lg/the_affect_heuristic/]?
After reading David Burns's "Feeling Good" and scoring in the severe range on its depression test, I tried the exercises in the book. Though I still struggle with them, they have helped me tremendously and lowered my score on the test after only a week. I cannot attribute the change solely to the exercises, since I have also been more strict about my meditation regimen (15 min in the evening). I think the exercises would be very interesting to this community, and maybe I will write a dedicated discussion post.
With my newfound optimism/hope/energy I am much more motivated to start exercising again in the coming days, maybe start a programming project, and take up quantifying myself again.
Feeling Good also helped me a lot. I think I self-diagnosed with moderate
depression using its test and then got much better after reading the first
chapters of it.
5[anonymous]11y
Write a main post! Summarizing a widely acclaimed book about a
rationality-related topic of interest to many LessWrongers surely constitutes
worthy subject matter.
3Metus11y
I am going to write it in discussion. If the moderators feel it belongs in main
they can move it.
0[anonymous]11y
Is there any precedent for the moderators doing such a thing?
0Alicorn11y
Some, but not much.
1FiftyTwo11y
Very interesting, is it available online anywhere?
1[anonymous]11y
It's available on Library Genesis [http://libgen.org]
0Metus11y
No, you will have to buy it or go to your local library. Of course, those are
only the legal options. Alternatively, wait until I publish my post, then you
will be willing to buy the book. Yes, it is that good.
0coffeespoons11y
I'd be very interested in reading it!
0Richard_Kennaway11y
I look forward to it.
0Vaniver11y
Awesome! I look forward to reading any post you make about it.
I'm thinking about a fantasy setting that I expect to set stories in in the future, and I have a cryptography problem.
Specifically, there are no computers in this setting (ruling out things like supercomplicated RSA). And all the adults share bodies (generally, one body has two people in it). One's asleep (insensate, not forming memories about what's going on, and not in any sort of control over the body) and one's awake (in control, forming memories, experiencing what's going on) at any given time. There is not necessarily any visible sign when one party falls asleep and the other wakes, although there are fakeable correlates (basically, acting like you just appeared wherever you are). It does not follow a rigid schedule, although there is an approximate maximum period of time someone can stay awake for, and there are (also fakeable) symptoms of tiredness. Persons who share bodies still have distinct legal and social existences, so if one commits a crime, the other is entitled to walk free while awake as long as they come back before sleeping - but how do they prove it?
There are likely to be three levels of security, with one being "asking", the second being a sort ... (read more)
All personalities are given a pair of esoteric stimuli. Through reinforcement/punishment, one personality is conditioned to have a positive physiological reaction to Stimulus A and a negative physiological reaction to Stimulus B. The other personality is given the converse.
The stimuli are all drawn from a common pool of images like "bear", "hat" or "bicycle", so one half of a stimulus pair may be "a bear in a hat on a bicycle". There's a canonical set of stimuli, like a huge deck of cards, with all possible combinations, all of which are numbered. The numbers for my stimulus pair are tattooed on my body in some obscure location, like the sole of my foot.
If I need to prove my identity, I show my tattoo to the authority figure. It will read something like "1184/0346". They pick out either image 1184 (bear in a hat on a bicycle) or image 0346 (a sword in a hill being struck by lightning), and show it to me. My immediate response will be either arousal or disgust, and they will know which personality I am.
Is this a realistic cultural adaptation? In most human societies, if you are
stuck working or living with someone, your social existence is somewhat shared. A
person from your clan doing something bad is also bad for your own reputation.
If someone from your family committed a crime, some legal traditions would hold
you responsible. It seems much more plausible that society would treat the
two people living in the same body legally at least like a married couple, or
like brothers were treated in some past traditions.
Given your constraints and assuming no cheap and easy test for distinguishing
them, of all historical examples I can think of, only modern Western culture
with its hyper-individualist liberalism would bother with the impracticality of
treating the two people as fully distinct individuals. And even then they
would have to grant a family-like, if not legal-guardian-like, relationship for
the issue of making medical decisions. Not sharing your place of residence and
ownership over it would be impractical, though perhaps there would be a strong
norm of not going into the other guy's part of the house.
Also, as a minor note, the culture would probably develop a norm of some sort of
marker (perhaps clothes, jewellery, or face paint) to show which of the two
persons is currently in control. The distinction would be more or less universal
rather than individualized, so even strangers could tell these were two different
persons. Think more "Ah, I see your patron god is the first twin Jahu. Your
cohabitor was here yesterday." instead of "Aha, James always wears his leather
jacket, you must be Harry!". Using the wrong marker would probably be at least as
taboo as cross-dressing was in some past cultures.
4Alicorn11y
I'm not trying to get too much into the cultural details here - certainly
cultures vary in the setting. Some of them do treat cohabiting like it's on par
with marriage, and even arrange it through families (which makes sense: if we
want to share grandchildren, we arrange for our kids to get married if they're
the opposite sex, but if they're the same sex nonfantasy humans are out of
grandchildren-sharing luck. In comes cohabitation!) But importantly, cohabitors
cannot talk to each other. There is no way for them to socially pressure each
other outside of self-destructive attacks or sternly written letters. You could
hold someone responsible for what their cohabitor did, but this would only deter
people who were compassionate enough to care about the fate of someone they
cannot ever interact with - and, if they picked each other instead of being
arranged, chose on the basis of not particularly desiring to ever interact with
them again. (You don't pick your friends as cohabitors: you pick people whose
company you don't care for with comparable danger tolerances and cosmetic
features you want to include when you have your bodies conglomerated.)
Also, they don't sleep, so "place of residence" dissolves for most people. They
have typical hangouts, storage lockers, clubhouses and favorite restaurants and
rental kitchens - but why bother maintaining an entire house? You don't need a
secure place in which to sleep; your cohabitor will look after your body while
you're unconscious. Medical decisions are also made a lot simpler by the magic
system, although they don't completely go away and there's probably some plot to
be had there.
Most people would probably adopt cosmetic markers, but how required these would
be would certainly vary; I think your expectation here would be a reasonable way
for one society to operate but too sweeping for all. This isn't how we treat
identical twins, who, while uncommon, are still a known feature of the real
world. I look a lot like my sist
4NancyLebovitz11y
Cohabitors could also pressure each other with rewards, and with threatening to
withhold rewards.
I'm not sure about the lack of residences. A storage locker isn't the same thing
as having your stuff conveniently arranged for use.
2Alicorn11y
Well, houses are at least a great deal more optional. I'm imagining them as
something of a status symbol.
0NancyLebovitz11y
How much of a status symbol would a home be? Only the poorest don't have a home?
A home is a middle-class sort of thing? Only the rich? Only the very rich?
0Alicorn11y
Again, would vary from culture to culture within the setting.
2A1987dM11y
IIRC, in some cultures (e.g. mid-20th-century Italy) they did the opposite, i.e.
they dressed their twin children identically.
5Kindly11y
Each personality owns a bracelet with a combination lock. To prove you're you,
you unlock your bracelet. This is basically the password system, but localized,
and now you just have to worry about making combination locks tamper-proof.
0Alicorn11y
Unfortunately, physical locks interact very badly with the magic system. (In
brief: "Lockedness" is a thing. If you are about average at magic, it's a thing
you can move from one thing you're touching that is locked to another thing you
are touching that can be locked but isn't.)
0bogdanb11y
Since it’s the only thing I know about the magic system, I suggest looking
closely into what it means that X can be Y. (By “looking closely” I mean
“exercise your authorial authority”.) Then tie the procedure to something that
can’t be moved to anything that prisoners have around, other than the actual
testing thing.
But the thing that keeps returning to my mind is that in our world we do
quarantine innocent people if they carry dangerous enough diseases. I think
you’d need a pretty high rate of evil-twinniness for a society not to take the
easy way out and do the same. Even a very trustworthy person can fail to return
to prison (?) by accident.
Anyway, I think pen-and-paper cryptography is your best guess, unless
“encryptedness” and related properties are things that can be moved. Neal
Stephenson’s Cryptonomicon has an example of a protocol that uses a deck of
cards. (Which is imaginary but possible AFAIK.)
5[anonymous]11y
It's not imaginary; the protocol is described in one of the appendices, and I've
implemented it once.
0bogdanb11y
Cool! Do you remember the “performance” of the protocol? (That is, how much work
it takes to exchange how much information, in approximate human-scale terms, and
its approximate security in usual cryptographic language.)
5Paul Crowley11y
Sadly, Bruce Schneier's "Solitaire" [http://www.schneier.com/solitaire.html] is
broken [http://www.ciphergoth.org/crypto/solitaire/]. That break was one of the
things that got me into crypto!
0TimS11y
Can you explain how broken it is to this layperson?
Warning: What follows likely has major technical errors - basically all I know
about cryptography I learned from Cryptonomicon.
From the description, the random numbers are not evenly generated, so that what
should have a 1 in 26 chance of happening instead has roughly a 1 in 22.5 chance.
And the output is heavily biased.
How much does that matter? We can easily decrypt Enigma with brute force right
now. Is the difference in the amount of computing power to brute force Solitaire
all that much different from what is expected?
In other words, encryptions with 256-bit keys are harder to crack than 128-bit
keys. But is the problem with Solitaire 20-years-safe vs. 10-years-safe, or is
it 20-years-safe vs. 12-months-safe?
1Alicorn11y
Yeah... I guess as long as I'm postulating accomplices, I might as well
postulate accomplices who'd kidnap their jailed friend's cohabitor and wait
until they are forced to sleep by sheer exhaustion.
0[anonymous]11y
Is there a risk that any authentication scheme could be bypassed by transferring
the "Autenticatedness" from someone else, or does the magic system forbid that
somehow?
In any case, some kind of magical version of the bracelet lock sounds like a
good idea, if you can think of one.
0Alicorn11y
Transferring authenticatedness doesn't work, so that's not going to be an issue.
I can't think of a way to magic up the bracelet to work like this,
unfortunately.
-3[anonymous]11y
Couldn't they just each memorise a six digit number and recite it on demand?
5RolfAndreassen11y
The first thing that occurs to me is to decentralise the database, which
incidentally is rather a computer-ish concept. Each person designates two or
more Keyphrase Holders, with a separate password for each. For low-security
situations, they have to give their passphrase to one KH; for maximum security,
they have to convince all of them. Ten or a dozen passwords should not be beyond
anyone's memorisation capabilities in a world without shiny Internet
distractions, and the KH can write them down - this gives you a lot of different
DSP-DPs instead of one big one. Any given KH may be suborned or have his
database broken into, but by the time you get up to a dozen or so that is
unlikely.
Obviously this works best if you don't have to physically drag the KH to the
prison cell, or whatever, before you let the innocent one out.
2Randaly11y
To make this easier to memorize and more secure, you could have there be a much
larger number of KHs. Their job is to be KHs; their identities are kept secret
even from each other. Each KH has a certain property about the person's password
that they learn - e.g. its length, the number of vowels, the number of times the
letter "a" appears minus the number of times some other letter appears, etc. However,
they don't know the password itself; they only know the person's answer to the
question. When a person wants to be released, a certain number of KH's, randomly
selected, large enough that correct guesses or collaboration is unlikely, and
all wearing hoods, are summoned to the person's cell to figure out their
identity.
You'd need to ensure that, following an incorrect guess, the same KH isn't used
again- or that the innocent person picks a new password. (Propagating password
changes would be terrible- it would make sense to have very severe punishments
for claiming to be another person. The first time would be standard jail
processing- everybody innocent would need to go down a line of KH's and tell
them their name and the answer. This also highlights the main weakness of any
possible system- the need to have verified who is who when dealing with the
initial passwords, since criminals would presumably immediately go to sleep
following crimes, or claim to have just woken up.)
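A minimal sketch of how that property-per-Keyphrase-Holder idea could work; the specific properties, the passphrase, and the number of holders summoned are all made up for illustration:

```python
import random

# Hypothetical properties a Keyphrase Holder might be assigned to check.
properties = {
    "length":        lambda s: len(s),
    "vowels":        lambda s: sum(c in "aeiou" for c in s),
    "first_letter":  lambda s: s[0],
    "double_letter": lambda s: any(a == b for a, b in zip(s, s[1:])),
}

def enroll(passphrase):
    """Each KH records only their own property's answer, never the passphrase."""
    return {name: check(passphrase) for name, check in properties.items()}

def verify(claimed, records, k=3):
    """Summon k randomly chosen KHs; each checks only their stored answer."""
    chosen = random.sample(list(properties), k)
    return all(properties[name](claimed) == records[name] for name in chosen)

records = enroll("swordfish and lightning")        # done once, at "standard jail processing"
print(verify("swordfish and lightning", records))  # True
print(verify("a wrong guess", records))            # almost always False
```

The weakness noted above shows up directly: each holder's stored answer leaks a little information about the passphrase, so after an incorrect guess you would want fresh holders or a fresh passphrase.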
4Emile11y
Give everybody training in a particular skill during their childhood: Juggling,
acrobatics, calligraphy, drawing, playing a particular instrument - or even
something more esoteric like doing figures with a Yoyo or a Diabolo, or doing
pool tricks, or tricks with a soccer ball; anyway, something requiring a good
amount of motor skills and training; and also make sure that no cohabitor pair
has skills that are too similar (like calligraphy and drawing, or acrobatics and
soccer tricks, or the violin and the bass).
Then have a taboo against learning those skills outside the "official" (or
religious) context in childhood (for example: being seen training for them is a
crime, the props can't be found outside special temples, etc.).
4TheOtherDave11y
Physiological correlates to anxiety in response to known personality-specific
trauma?
3Salutator11y
Can they use quill and parchment?
If so, the usual public key algorithms could be encoded into something like a
tax form, i.e. something like "...51. Subtract the number on line 50 from the
number on line 49 and write the result in here:__ ...500. The warden should also
have calculated the number on line 499. Burn this parchment."
Of course there would have to be lots of error checks. ("If line 60 doesn't
match line 50 you screwed up. If so, redo everything from line 50 on.")
To make it practical, each warden/non-prisoner-pair would do a Diffie-Hellman
exchange only once. That part would take a day or two. After establishing a
shared secret the daily authentication would be done by a hash, which probably
could be done in half an hour or less.
Of course most people would have no clue why those forms work, they would just
blindly follow the instructions, which for each line would be doable with
primary school math.
The wardens would probably spend large parts of their shifts precalculating
hashes for prisoners still asleep, so that several prisoners could do their
get-out work at the same time. Or maybe they would do the crypto only once a
month or so and normally just tell the non-prisoners their passwords for the
next day every time they come in.
1Alicorn11y
I don't think that I understand how this works, which has a meta-level
drawback...
4khafra11y
You might have better expository skills than Salutator, and people love learning
esoteric things about mysterious professions in the midst of fiction.
Diffie-Hellman relies on certain properties of math in prime modulus groups, but
understanding those properties isn't necessary just to do DH. It only takes
primary-school level math abilities to follow the example on Wikipedia
[http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange] (and note
that, if nobody has computers, you don't need a 2048-bit modulus.)
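For concreteness, here is a minimal sketch of that exchange, using the tiny textbook values (p = 23, g = 5); the two secret numbers are made up, and a pencil-and-paper version would just do these modular exponentiations by hand on the "tax form":

```python
# Toy Diffie-Hellman key exchange with a deliberately tiny modulus.
p, g = 23, 5              # public: a small prime and a generator

a = 6                     # warden's private number (made up)
b = 15                    # non-prisoner's private number (made up)

A = pow(g, a, p)          # warden announces g^a mod p  -> 8
B = pow(g, b, p)          # non-prisoner announces g^b mod p -> 19

# Each side combines the other's public value with its own secret.
shared_warden   = pow(B, a, p)
shared_prisoner = pow(A, b, p)

assert shared_warden == shared_prisoner   # both get 2, the shared secret
```

With numbers this small an eavesdropper can recover the secrets by trial, which is why the point about modulus size still matters even without computers; the setting would presumably settle for numbers large enough to defeat pencil-and-paper attackers rather than 2048-bit ones.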
3Mitchell_Porter11y
Everyone is born with a true name that they intuitively know but can't say, and
they also have a unique soul-color. And there are special glow-stones that you
can think your true-name at, which will then glow the same as the soul-color of
the person with that name.
5Alicorn11y
I'd rather not solve the problem by adding magic that doesn't fit into the
existing system. Especially suspiciously convenient magic.
2Pentashagon11y
You need to think about one-way functions (hashes) and trapdoor one-way
functions (public key algorithms). There are some additional issues that arise
like nonces to thwart replay attacks and the level of protection individuals can
be expected to give to secret keys.
Also, even without explicit mathematics the universe will presumably have a
concept of entropy and conservation of something, even if it's just conservation
of magical energy. If you can come up with a plausible problem that magic can
solve given a lot of expended magical energy but can be solved much more easily
with the knowledge of a secret, then you can build a challenge-response identity
proof so long as it's not easy to steal the secret by watching the
demonstration. If additionally it's very hard to derive the secret from the
demonstration of its knowledge you probably have the power of a public key
system.
Not all the following problems require magic to implement, and many of them
actually benefit from not having a knowledge of mathematics and algorithms,
since most of these are not cryptographically secure.
* Have each person construct an elaborate puzzle out of oddly shaped objects
that can be packed into a small finite volume in only one way (the knapsack
problem)
* Each person constructs a (large) set of sticks (or metal rods, or whatever)
of varying lengths, of which a subset adds up to a standard length like a
meter (the subset sum problem; a toy sketch follows after this list)
* Society forms a hierarchical tree of secret handshakes so that each person
only has to remember, say, 100 secret handshakes and the tree only has to be
log_100 (N) tall so the courts can just subpoena a logarithmic number of
individuals to verify handshakes between any two arbitrary people. Obviously
any one of your 100 acquaintances can impersonate you, so two or more
distinct trees would at least require collusion.
* Any magical item that only functions for its "owner".
* A magical "hash function", like a petronus o
2saturn11y
Maybe you could adapt this implicit memory-based authentication scheme
[http://www.extremetech.com/extreme/133067-unbreakable-crypto-store-a-30-character-password-in-your-brains-subconscious-memory]
into a board game format similar to Mastermind
[http://en.wikipedia.org/wiki/Mastermind_%28board_game%29].
2gwern11y
Recognition memory is actually even cooler than implicit memory, I thought, and
can contain quite a bit of information (as far as I could tell, working through
Shannon's theorem): http://www.gwern.net/Spaced%20repetition#fn63
Dunno how it would work in this setting, though, unless the personalities share
visual recognition.
0Alicorn11y
If I do something in this approximate neighborhood, I think I'll go with the
hypnotism idea, since it's easier both to understand and to handwave about.
2Emile11y
A few possibilities:
A clockwork Analytical engine / Enigma machine, that does something equivalent
to public key verification (though I assume you don't want that kind of machine
either).
In each city is a temple of the Sigils, in which are stored the Sigils of
people, in public view. The Sigils are like intricate signatures drawn on clay
tablets; but they are made on a special clay, Sigil Clay, that dries in about a
minute, and changes color depending on the pressure you apply to it, the heat
(depending on whether you're touching it with a stylus or with your fingers),
and how dry it is. Sigil Priests know hundreds of drawing techniques, and when
an alternate pair is created, each person will be taught a few techniques to
apply to his drawing, with no overlap between the alternates (so it should be
quite hard for someone to reproduce his alternate's Sigil). Being able to draw
one's Sigil is generally considered a proof of identity, and since only the
Sigil Priests know how to make Sigil Clay, one has little opportunity to
practice drawing someone else's Sigil (not to mention that it's of course
considered a grave crime).
For the prisoner's case, why not have the "day" persona return to prison to
sleep and give a new short passphrase (randomly generated with a special set of
dice) to the guard, and when he wakes up and wants to get out, he must give the
same passphrase (if he gets it wrong, he is lightly punished and must wait at
least 30 minutes before trying again).
This is a weird and interesting premise!
0Alicorn11y
The passphrase idea you describe is probably fine for minimum and even medium
security, it's just vulnerable to eavesdropping and message-passing by third
parties if the prisoner has friends.
2[anonymous]11y
So basically the Cherubs in Homestuck.
0Alicorn11y
I barely got ten pages into Homestuck, so I wouldn't know.
2[anonymous]11y
Calliope/Caliborn share the same body. Each is "asleep" while the other is
"awake", and they have a pair of ankle-shackles of which magically only one can
open. They also have disjoint skillsets; due to some kind of brain trauma,
Caliborn is incapable of drawing, while Calliope is pretty good: example
[http://images.wikia.com/mspaintadventures/images/0/09/Uu_Artwork.png]
Caliborn circumvents this latter restriction by biting off his own leg.
2Randaly11y
Why use cryptography? If I understand the problem statement correctly, there's a
simpler solution. When a prisoner wants to go to sleep, they signal and a guard
walks over and renders them unconscious, presumably using drugs. Since we know
that nobody would go to sleep outside of jail, you can figure out who is who by
counting the number of times they've been sedated.
(This is vulnerable to troubles telling who is who at the start, but so is any
knowledge-based method. This is also vulnerable to people falling asleep
outside, but so is any knowledge based method. It's also fairly dangerous, given
that most drugs capable of rendering somebody unconscious are dangerous;
however, giving guards some training and then handwaving away or saying the
society isn't concerned by the (minimal) danger sounds reasonable. It assumes
certain things about going to sleep and drugs that may not be true in this
universe, but it at least sounds reasonable- and this is fiction.)
4Alicorn11y
Sedatives would cause physical sleep, and the reason people share bodies in this
world is because having your body be asleep will cause your soul to be eaten by
insubstantial demons. Sleeping-while-someone-else-pilots-your-body is safe in
large part because it cuts off interventions regarding your soul from outside
sources - demons, drugs, magic, etc.
Also, this method relies on cooperative criminals, not just cooperative
cohabitors-with-criminals. The criminal has an incentive to make being in jail
really inconvenient for their cohabitor - by, for instance, not notifying anyone
before going to sleep. They're already in jail, so making their cohabitor mad at
them has limited power to make their situation worse, but if the guards wind up
having to imprison the cohabitor too to be safe, the cohabitor might work on
ways to get out.
3Emile11y
I suppose reallocating cohabitors (say, criminals with criminals) is out of the
question?
3Alicorn11y
Moving one person in with another person is already very magically challenging;
this might not be strictly impossible but your average community would not have
access to even one person who could do it. Perhaps this would be a good last
resort on a national level for anyone with a demonstrated propensity to actually
escape, or whose escape would be particularly dreadful.
1shminux11y
Is handwriting style per person or per body?
9Alicorn11y
Per person, but most people in ordinary day-to-day life will have plenty of
opportunity to observe and practice mimicking their cohabitor's handwriting if
they feel like working on that - they can't talk to each other directly, so they
leave notes ("watch out for our left foot, it's still tender, I dropped
something on it", "so how are you doing, what are you up to", "we're pregnant").
2gwern11y
So handwriting is secure between a pair; then all you need is some sort of
authentication. Why not use a very simple random number generator? Each member
of a pair knows it, of course, and they occasionally set up fresh seeds. Each
day is one iteration. To 'sign' a message, one simply writes down today's random
number afterwards. (You said handwriting is secure, so you don't worry about
someone tampering with the message and making an authentic number testify to a
faked message.)
What RNG? Dunno. Blum Blum Shub [http://en.wikipedia.org/wiki/Blum_Blum_Shub]
has a hilarious name, but the multiplying is a bit painful. Depending on how
much accuracy you want, you could make up your own simple recurrence (imagine a
list of 5 integers, which shift each day, and the first is defined by the sum of
the last two modulo 5). But it turns out geeks have already discussed PRNGs you
can do with mental arithmetic:
* http://ask.metafilter.com/191135/Help-me-get-random-numbers-by-mental-arithmetic
* http://blog.yunwilliamyu.net/2011/08/14/mindhack-mental-math-pseudo-random-number-generators/
* http://stackoverflow.com/questions/3919597/is-there-a-pseudo-random-number-generator-simple-enough-to-do-in-your-head
* http://ask.metafilter.com/20334/Random-sequences-in-your-head
From the looks of them, at least one suggestion should work for you.
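A minimal sketch of the "list of 5 integers" recurrence described above, with a made-up seed; both cohabitors memorise the seed, and each day's first entry is the number written next to the notes:

```python
def step(state):
    """One day's update: the new first entry is the sum of the last two
    entries modulo 5, and everything else shifts along by one."""
    return [(state[-1] + state[-2]) % 5] + state[:-1]

state = [3, 1, 4, 1, 2]      # shared seed, memorised by both cohabitors

for day in range(1, 8):
    state = step(state)
    print("day", day, "-> today's number:", state[0])
```

A state this small can be reconstructed by anyone who sees a handful of consecutive numbers, which is presumably why the linked discussions prefer somewhat beefier mental PRNGs; the scheme also assumes both parties keep count of the days the same way.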
0MixedNuts11y
This allows pair members to authenticate themselves to each other, but not third
parties to tell members apart.
0gwern11y
Set up another pair of RNGs; both write down on a piece of paper and show the
paper simultaneously, something like that. With third parties, you lose the
time-delay aspect which makes things hard in the case of temporally separate
pair members trying to authenticate to each other.
0shminux11y
OMG!
Well, first, handwriting is extremely hard to mimic perfectly, but maybe it's
easier if you are using the same hand (and brain). Think of other individual
traits that are harder to observe in your other half. Maybe speech patterns, or
mannerisms, or some other subconscious manifestations. Maybe have a separate
hypnotic induction for each person when they come of age. Judging by your
writings, you don't suffer from a lack of imagination. The goal is to have a
cheap version of the same feature, and "There are likely to be three levels of
security" sounds pretty complicated already.
2Alicorn11y
Oh, come on, it's an obvious consequence of the premise.
Hypnosis has some promise. Speech patterns/mannerisms seem like they'd rely on
the testimony of people who know both of the cohabitors really well and who
probably aren't cops, which has the problem of those people being corruptible in
various ways.
I don't suffer from lack of imagination, but I'm just one person. An entire
civilization which has had this problem for a long time should be able to come
up with a solution that's more robust than what I've been coming up with, so I
solicit help - I'd feel especially silly if there were some trivially
implementable noncomputerized version of RSA that someone could tell me about.
Also, the entire setting does this thing where people share bodies, and there
are multiple cultures in the setting - ideally they'd have different approaches,
so if I can come up with more than one workable idea, so much the better.
2JohnWittle11y
Without introducing more magic and without there being at least some kind of
database, this is an unsolvable problem. I would say use a one-time pad, but the
key would have to be stored in a database.
If the technology of the time is at least that of, say, the 1940's, you could
use quantum key distribution to at least be alerted if the crypto is broken
(more useful than any other solutions), but would still require a database.
0shminux11y
Maybe it would be obvious, were I female.
Good point. RSA in a nutshell is "I'm the only one who knows a certain secret,
and I'm the only one who can unconditionally and repeatedly verify this fact
without divulging the secret itself". Well, this is one half of it, the
authentication part, not the encryption part.
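A minimal sketch of that authentication half with the usual textbook toy numbers (p = 61, q = 53); it needs Python 3.8+ for the modular inverse, and numbers this small offer no real security:

```python
# Toy RSA-style challenge-response authentication.
p, q = 61, 53            # secret primes (textbook toy values)
n = p * q                # 3233, public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent, 2753 (Python 3.8+)

challenge = 1234                          # verifier picks any number < n
response = pow(challenge, d, n)           # only the secret-holder can do this
assert pow(response, e, n) == challenge   # anyone can check with (e, n)
```

The prover never reveals d, and the verifier can repeat this with a fresh challenge every day, which is exactly the "repeatedly verify without divulging the secret" property; the practical obstacle in the setting is doing the exponentiation without computers.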
So you need a way for a person to produce some output from a given input that
is unique both to the person and to the input, but easily verifiable. What
kind of non-technical output is available? Visual? Aural? Motor functions?
For example, maybe the way one's eyes follow a complicated pattern is
unpredictable, yet unique enough and easy to check. Or a rhythm one drums in
response to something. Or the interpretation of the Rorschach test.
By the way, if you find something that works in real life, you will be famous
and set for life, as this is an open problem with multiple applications.
0Alicorn11y
These people are humans, although there is much more potential for magical
alteration of the base plan than real humans have. They have human capacities to
memorize and transmit information.
0Kaj_Sotala11y
I'm reminded of this
[http://lesswrong.com/lw/dp5/link_using_procedural_memory_to_thwart_rubberhose/].
Although the technique in the article was taught using a computer game, one
could plausibly develop an analog equivalent. Give someone a musical instrument
and teach them to play specific sequences in response to the sequences somebody
else plays, or something.
But the teaching would be really time-consuming, and of course you'd have to
make sure that the right person was in charge of the body while they were being
taught.
0Alicorn11y
If it's something you can teach children, then wealthy societies (which can
afford to wait longer before having people move into each other's bodies) can be
sure to teach only the correct people, but indeed time consumption remains an
issue.
0Tripitaka11y
Well, there is visual cryptography in various forms, and if one databank is not
secure enough, make it two or three - parole officer + National Databank or
something; that's called secret-sharing cryptography. It is possible to combine
both, and even have them at a simple enough level to not require PCs. Of course,
for visual cryptography you need a fast way to recreate the visual secrets -
computing and graphing polynomials for thirty minutes every twelve-ish hours is
a serious waste of time...
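A minimal sketch of the two-of-three secret-sharing idea (say parole officer, national databank, and a court each hold one share); the prime, the secret, and the share assignments are all made up, and it needs Python 3.8+ for the modular inverses:

```python
import random

P = 7919                 # a small prime field; real use would pick a huge one
secret = 4242            # the protected value, e.g. a release code

# Split: choose a random line f(x) = secret + a*x; any two points recover f(0).
a = random.randrange(1, P)
shares = {1: (secret + a * 1) % P,   # parole officer
          2: (secret + a * 2) % P,   # national databank
          3: (secret + a * 3) % P}   # court

def recover(x1, y1, x2, y2):
    """Lagrange interpolation at x = 0 from any two shares (mod P)."""
    t1 = y1 * x2 % P * pow((x2 - x1) % P, -1, P) % P
    t2 = y2 * x1 % P * pow((x1 - x2) % P, -1, P) % P
    return (t1 + t2) % P

assert recover(1, shares[1], 3, shares[3]) == secret   # any pair works
```

One share on its own reveals essentially nothing about the secret, which is the point of splitting it across institutions; the "thirty minutes of polynomial work" in the comment is roughly what evaluating and interpolating these by hand would cost.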
0[anonymous]11y
Does the protocol need to be robust against cohabitors in league with each
other? That is, is "permanently private" built in, or could someone share their
key with a cohabitor who agrees to take the fall?
1Alicorn11y
I think under the circumstances they're going to have to consider cohabitors who
aid and abet their cohabitor's crimes to be accessories deserving of the same
punishment (at least insofar as that punishment is restriction of movement) -
otherwise you let the accessory go, they travel to a safe place, and they nap,
boom, criminal is free.
"Our "real will" (in Bosanquet's terms) or "rational will" (in Blanshard's) is simply that which we would want, all things considered, if our reflections upon what we presently desire were pursued to their ideal limit."
This is remarkably similar to the informal descriptions of CEV and moral "renormalization" that exist. Someone should look into the literature on Bosanquet and Blanshard's rational will, and see if there's anything else of use.
Thanks for the reference. It's a shame that the informal description wasn't
attached to a more distinctive label. If it had been, it would be worth adopting
for the sake of conformity.
1Will_Newsome11y
If I had a dollar for every time a philosopher talked informally about something
potentially very cool...
7Richard_Kennaway11y
...then you'd have a dollar for every post in the Sequences.
But this is also a case where we can look to the past and other societies for lessons in terms of how it will impact our society. Though I have never personally lived in this sort of family, except to some extent between the ages of two and four (and so my memories are minimal), I know of the downsides from family lore and gossip. Just watch a Bollywood film as ethnography. From what I can gather, a linear increase in the number of family members within a household does not entail a linear increase in the family drama. On the contrary, there is a very rapid increase, as interpersonal relationships become much more elaborate (this is especially true when you multiply grades of relatedness). A far greater proportion of one's life is taken up by maintenance of household relationships. The American nuclear family is to some extent on the atomized side, but extended families tend toward hyper-sociality.
And I believe that this has consequences. The shift back toward extended families is due to the exigency of post-bubble America. But we may be on the way to a more thoroughgoing shift in the nature of American society, and how we relate to
That's a pretty good example of that, yeah. It's also interesting to note how
values, or at least the potential for them, may be conserved across long-term
shifts: American culture is notably fixated on genealogy compared to societies
where the extended family is a socioeconomic norm; the motivation to have a
wider familial context is there, even in families and individuals who are quite
comfy with the nuclear pattern. I'm not suggesting it's a causal influence that
trumps the economics driving the push for extended families, but I can't help
seeing it as influential. The demographic transition and decline of extended
families in the US wasn't that long ago...
I own a personal server running Debian Squeeze which has a 1Gb/s symmetric connection and 15TB per month bandwidth.
I am offering free shell accounts to lesswrongers, with one contingency:
1) You'll be placed in a usergroup, 'lw', as opposed to the usergroups for the various other communities I belong to. Anything that ends up in /var/log is fair game. I intend to make lots of graphs and post them on all the communities I belong to. There won't be any personally identifying data in anything that ends up public.
Your shell account will start out with a disk quota of 5g, and if you need more you can ask me. I'm totally cool with you seeding your torrents. I do not intend to terminate accounts at any point for inactivity or otherwise; you can reasonably expect to have access for at least a year, probably longer.
Query me on freenode's irc (JohnWittle), or send me an email. johnwittle@gmail.com.
Also, while the results of my analysis are likely to go in Discussion, I was wondering if this offering of free service itself might go in discussion. I asked in IRC and was told that advertisements are seriously frowned upon and that I would lose all my karma.
A month or two ago I made a case on the #lesswrong channel on IRC that a massive online class, or several, created in partnership with an organization like Khan Academy or Udacity, would be a worthy project for CFAR and LW. I specifically mention those two organizations because they are more open to non-academic instructors than, say, Coursera or EdX, and seem more willing to innovate rather than just dump classical university-style lectures online.
The reason I consider it a worthy project is that, besides exposing far more people to the material and ideas we want to spread, it would allow us to make progress on the difficult problems of teaching and testing "rationality" with the magic of Big Data, and even something as basic as A/B testing to help us.
I considered making an article on it but several people advised me that this would prove a distraction for CFAR, more trouble than is worth at this early stage. I have set up a one year reminder to make such a proposal next summer and plan to do some research on the subject in the meanwhile to see if it really is as good an opportunity as I think it... (read more)
It has become increasingly clear over the last year or so that planets can in fact form around highly metal-poor stars. Example planet. This both increases the total number of planets to expect and increases the chance that planets formed around the very oldest stars. (Younger stars have higher metal content.) One argument against Great Filter concerns is that it might be that life could not have arisen much earlier in the universe's history than it did on Earth, because stars much older than our Sun would not have high metal content. This finding seems to seriously undermine that argument.
How much should this do to our estimates for whether to expect heavy Filtration in front of us? My immediate reaction is that it does make future filtration more likely but not by much since even if planets could form, a lack of carbon and other heavier elements would still make formation of life and its evolution into complicated creatures difficult. Is this analysis accurate?
I have a Great Filter related thought which doesn't address your question directly but, hey, it's the Open Thread.
My thesis here is that the presence of abundant fossil energy on earth is the primary thing that has enabled our technological civilization, and abundant fossil energy may be far less common than intelligent life.
On top of all the other qualities of Earth which allowed it to host its profusion of life, I'll point out a few more facts related specifically to fossil energy, which I haven't seen in any discussions of Fermi's Paradox or the Great Filter.
Life on Earth happens to be carbon-based, and carbon-based life, when heated in an anoxic environment, turns into oil, gas and coal.
Earth is roughly 2/3 covered in oceans (this figure has varied over geologic time), a fact with significant consequences to deposition of dead algae, erosion, and sedimentation.
Earth possesses a mass, size, and age such that the temperature a few kilometers below the surface may be hundreds of degrees C, while the surface temperature remains "Goldilocks."
Earth has a conveniently oxidizing atmosphere in which hydrocarbons burn easily, but not so oxidizing that it prevents stable
The oxidizing atmosphere is not due to chance. It was created by early life that exhaled oxygen, and killed off its neighbors that couldn't handle it. Hence, I don't think the goldilocks oxygen levels speak much to great filter questions.
Early in civilization, we used wood and charcoal as energy sources. Blacksmithing and cast iron were originally done with wood charcoal. Cast iron is a very important tool in our history of machine tools and hence the industrial revolution. It's possible that we could have carried on without coal, instead using large-scale forestry management or other biomass as our energy source. In the early 1700s there were already environmental concerns about deforestation. They were more related to continued supply of wood for charcoal and hunting grounds than "ecological" concerns, but there were still laws and regulations enacted to deal with the problem.
How many people do we need to support a high-tech civilization? I suspect fewer than we tried it with. It's quite possible that biofuel sources would have produced a high tech civilization, just slower and with fewer people.
Also, note that biofuels can produce all the lubricants and plastics you ne... (read more)
These are all good points and I don't disagree with you. It probably is worth
pointing out that ever since about 1800 our civilization has had "the pedal to
the metal
[http://www.theoildrum.com/files/world-energy-consumption-by-source.png]" in
terms of accelerating our demand for energy, i.e. an exponential rise in energy
demand, and that demand has been consistently met and often exceeded - this is
why we can afford to fill our personal cars with this precious fuel on a regular
basis.
I think that a sufficiently forward-thinking civilization probably could base
its energy production around biofuels, but a gallon of gasoline-equivalent would
probably cost about a thousand dollars-equivalent. Building a skyscraper would
be a project akin to manned space flight. Manned space flight would be
completely out of the realm of possibility.
3[anonymous]11y
The more important question would be how hard it would be to get nuclear energy.
2faul_sname11y
I find this doubtful, being as ethanol (25 MJ/L) is nowhere near that expensive
to create, and is fairly near the energy density of gasoline (35 MJ/L).
8moridinamael11y
Consider the entire economy, though. Let's not assume that ethanol could ever
replace fossil fuels at the scale needed for explosive technological growth. The
reason pure ethanol is cheap in the modern world is that we have enormous
economies of scale producing the necessary feedstocks, which rely on trucks and
trains and fertilizers; hell, even the energy used to distill the ethanol is
typically from fossil fuel.
It's about supply and demand. If, tomorrow, there were no gasoline anymore, the
price of ethanol would be astronomical.
2vi21maobk9vp11y
Note, though, that we are talking about much smaller population - so you could
spend quite a lot of land per capita on growing both ethanol source and fuel.
Current size of humankind is clearly unsustainable in this mode, of course.
4JoshuaZ11y
With a much smaller population you start losing all sorts of other advantages,
especially economies of scale and comparative advantage.
4evand11y
Careful. Economies of scale for million-part quantities don't show up until
probably the 20th century. Prior to that, the effect of reduced population size
might just be reduced variety. Do you have any idea how many manufacturers of
engine lathes there were at the end of the 19th century, for instance? (Hint:
more than a couple.)
8NancyLebovitz11y
Actually, the neighbors that couldn't handle oxygen got forced underground. They
live in the mud under the deep sea, in digestive tracts, etc.
8evand11y
Well, some of their descendants are still alive, yes. But I believe that there
was a lot of dying involved in that process. More than I think is implied by the
phrase "forced underground".
5John_Maxwell11y
Well, one point is that supposedly there were a lot of societal factors that
also had to be in place for the industrial revolution to take place. (Apparently
if you lived anywhere but Britain, if you were doing anything cool, the ruling
monarch would come along and just take it.) So it's not necessarily just tech.
Another point is that Earth appears to have periodic ice ages, and many/most
human civilizations seem to collapse after a while. So sustaining progress over
long periods is nontrivial.
5[anonymous]11y
Environmental ones too. Britain had to be so short of wood and charcoal to burn
that using coal in home stoves, even with its nasty byproducts, was preferable
to most people going without any source of burnable fuel. The widespread
proliferation of coal that followed to meet the demand meant there was plenty of
it about to turn to other purposes.
8[anonymous]11y
Frankly, I'm wondering if the whole idea of exponential growth is just short
cultural time horizons applied to the implications of fossil fuels for energy
production, which touched off the Industrial Revolution. The Hubbert Peak holds,
although coming out the other side of it resembles a gradual stepping-downward
with its own local spikes and valleys (much as there are spikes and valleys in
growth and use now, despite a steady upward trend). Fossil fuels still supply
over three quarters of the world's energy demand; there hasn't been a nuclear
renaissance so far and as much as someone always wants to boost pebble bed,
travelling-wave or thorium reactors, innovation and growth for nuclear both seem
quite limited on the balance. That might not seem like a big deal now (surely it
could happen, right?) but what if that situation does not change appreciably,
and world civilization starts transiting down the other side of the curve,
taking a few centuries to do it? What if we never do figure out FAI, or MNT, or
fusion, or whatnot? What if that's because the noise of society, geopolitics and
history-in-general just don't allow for them to come to pass?
What if the answer to Fermi's paradox is simply "You'd have to mistake the
infrastructural equivalent of a blood sugar rush for an inexorable trend in
technological development to even wonder why nobody's zipping around in
relativistic spacecraft or building Dyson spheres?" What if the problem is just
short time horizons and poor understanding of context?
2[anonymous]10y
This. I've been searching for a way to articulate this idea for quite some time,
and this is the best way I've seen it stated.
The last few centuries are potentially extremely atypical in human history. We
have three generations of economists raised to think exponential growth is
normal, and a series of technological advances that almost all require highly
concentrated energy in ways that are seldom appreciated. When you think about it,
too, it would appear that something like an oil well is the most concentrated
source of easily captured energy in the solar system - where else do you get
such a huge amount of highly reduced matter next to such highly oxidized gas?
With the interface between them requiring something as simple as a drill and
furnace? Per unit of infrastructure and effort that is an incredible resource
that I honestly doubt you can really improve upon. I have long suspected that
reversion towards (though perhaps not all the way to) the mean is far more
likely in our future.
1[anonymous]10y
Thank you. It's still a bit indistinct to me as yet -- I haven't seen many other
people talking about it in these terms, except Karl Schroeder (who explores it a
bit in his science fiction writing), but I knew something seemed a little funny
when the Rare Earth Hypothesis and its pop-sci cousins started growing in
popularity among the transhumanist set. It seems like an awful lot of the
background ideas about the Fermi Paradox and its implications for anthropics in
the core cluster that LW shares go back to an intellectual movement that came to
prominence at a time before we'd discovered more than a tiny handful of
exoplanets. Now we know there's at least one Earth-sized world around Alpha
bloody Centauri and even Tau Ceti of all stars is being proposed as rich in
worlds; at this rate I personally expect to learn about the probable existence
of another biosphere around a star within 100 ly, within my natural lifetime
(though, for the reasons expressed in my comment, I'm doubtful we'd be able to
reliably notice another civilization unless they signalled semi-deliberately or
we got staggeringly lucky and they have a recognizably-similar fossil fuel
"spike" within a similar window, meaning we can catch the light of cities on the
night side assuming Sufficiently Powerful Telescopes).
nod I suspect the future probably looks rather weird to LWian eyes, in this
regard -- neither a reversion to the 10th or 17th century for the rest of human
existence, nor much like the most common conceptions of it here (namely:
UFAI-driven apocalypse vs FAI-driven technorapture). It's hard to tease out the
threads that seem most relevant to my budding picture of things, but they look
something like: increasing efficiency where it's possible, a gradual net
reduction in the stuff economists have been watching grow for the last few
generations, some decidedly weirdtopian adaptations in lifestyle that I can only
guess at... we've learned so much about automation, efficiency, logistics and
sof
1[anonymous]10y
I think the jury is still out on this... on the one hand we are finding huge
numbers of planets, and it is likely that our sampling biases are what push us
towards finding all these big "super-earths" close to their parent stars (I take
issue with that terminology; calling something of ~5 Earth masses
'potentially habitable' or even 'terrestrial' is problematic because we have no
experience with planets of that size range in our system and you can't
confidently state that most things with that mass would actually necessarily
have a surface resembling a rock-to-liquid/gas transition). On the other hand we
are finding so many systems that look nothing like ours with compact orbits and
arrangements that probably could not have formed that way and thus went through
a period of destructive chaos, suggesting that the stability of our system could
be an anomaly. I'm waiting on the full several years of Kepler data that should
actually be able to detect earth-radius planets at a full AU or so from a star,
until then there seem to be too many variables.
I mostly agree. I actually find the 'great silence' not particularly puzzling -
the only things we have reliably excluded are things like star system scale
engineering, and massive radio beacons that either put out large percentages of
a planet's solar input out in the form of omnidirectional radio or ping millions
of nearby stars with directional beams on a regular basis. When you consider the
vast space of options where such grand things don't happen, for reasons other
than annihilation, you get a different picture. We couldn't detect our own
omnidirectional radiation more than a fraction of a light-year away, and new
technologies are actually decreasing it of late. And how many directional
messages have we sent out explicitly aimed at other star systems? A dozen? And
they would need directional antennas to be picked up. What are the odds that two
points in space that don't know of each other's existence would fi
7JoshuaZ11y
The limiting oxygen concentration
[http://en.wikipedia.org/wiki/Limiting_oxygen_concentration] for most woods is
between 14% and 18%. Earth's oxygen concentration is a little over 20%, so it
does look close. But this is slightly misleading: all that oxygen showed up
because carbon-based life was releasing it from water and carbon dioxide in
photosynthesis. Oxygen-using life only showed up after there were dangerously
high levels of oxygen. And if the oxygen levels get very high then the
photosynthesizers will start to get poisoned and the percentage will go down. So
it isn't really likely to have an atmosphere with so much oxygen that it is a
problem for carbon life.
But yes, certainly an equilibrium with less oxygen is plausible in which case
fire would be close to impossible even if the percentage dropped by only a small
amount.
5evand11y
I think it's pretty clear that for broad definitions of life, you need carbon or
something heavier. It's possible you could substitute boron, but I don't think
you can get boron by any process that won't produce carbon as well. You almost
certainly need both reducing and oxidizing agents, which means oxygen and
hydrogen as the lightest options. There have been proposals of exotic life
chemistries, but all the serious ones I've seen substitute heavier atoms like
silicon.
The more interesting question is whether you can build more complex life without
all the trace elements used on earth. For example, there are plenty of bacteria
and fungi that have much lower dependence on heavier metals than multicellular
life does, and some simpler multicellular organisms need less than humans do. My
unfounded hunch is that you need something that can play the role of phosphorus
as an energy carrier, and that it would be hard to find that in just CHON
structures. On the other hand, it's possible that even a really poor substitute
would offer enough for life to arise, even if it was inefficient, slow, and
fragile compared to life on earth: there would be no stronger threat from other
life using phosphorus.
The next question is whether metal-poor planets can produce a technological
civilization. How important is metalworking in our history? Can you substitute
something else for it? Can you get a spacefaring or radio-capable civilization
without metals for magnets, wires, and electronics? There are alternatives like
organic conductors and semiconductors, but are those accessible without the
intervening metals stage? Just how metal-poor are these planets, anyway? Would
it be like iron, copper, aluminum, and tin being only as available as, say,
nickel is on Earth? Or is silver or gold a more appropriate comparison? Or even
rarer than that? Or are they present, but not concentrated into usable deposits?
I feel like I don't know enough about the detailed makeup of these planets to
gi
4FiftyTwo11y
Another great filter related question I posted a while ago but didn't get much
response to: [http://lesswrong.com/lw/dms/open_thread_july_1631_2012/74m0]
Could the great filter just be a case of anthropic bias?
* Assume any interplanetary species will colonise everything within reasonable
distance in a time-scale significantly shorter than it takes a new
intelligent species to emerge.
* If a species had colonised our planet their presence would have prevented our
evolution as an intelligent species.
* Therefore we shouldn't expect to see any evidence of other species.
So the universe could be teeming with intelligent life, and there's no good
reason there can't be any near us, but if there were any near us, we would not have existed.
Hence we don't see any.
0JoshuaZ11y
This is an interesting idea but I think it doesn't work. Say for example that
another species starts 200 million light years away and is spreading a
colonization wave at 0.5c, which is a pretty extreme value. Then one should have at
least 400 million years to notice that. And it is going to be pretty hard to do
a fast colonization wave without some astronomically detectable signs. Reducing
the colonization speed makes it less likely to be detected but increases the
time span.
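(For concreteness, a back-of-the-envelope sketch of that arithmetic in Python, using just the illustrative figures from this comment:)

    # Illustrative only: how long a 0.5c colonization wave takes to cross 200 million light years.
    distance_ly = 200e6       # distance to the hypothetical civilization, in light years
    wave_speed_c = 0.5        # colonization wave speed, as a fraction of lightspeed
    crossing_time_years = distance_ly / wave_speed_c
    print(f"{crossing_time_years:,.0f} years")  # 400,000,000 years, the window in question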
0TheOtherDave11y
It seems no less plausible that what spreads outward at a sizable fraction of
lightspeed is a wave of "terraforming" agents, altering all planets in the
neighborhood into more suitable colony planets. Meanwhile colonization spreads
at a rate roughly bounded by the ratio of reproduction rate to death rate, which
might well be significantly slower than that.
That scenario would be enough to ensure that if an intelligent species evolves,
it is necessarily far from any spreading interstellar empire (since otherwise
the terraforming agents would have destroyed it), without having to posit such a
fast colonization wave.
That said, though, why assume that a colonization wave is astronomically
detectable? Being detectable at this range with our instruments is surely an
indication of wasting rather an enormous amount of energy that could instead be
put to use by a sufficiently advanced technology, no?
0JoshuaZ11y
Waste heat is one thing there's not much one can do about. Even a Dyson sphere
will have it. In the case of Dyson spheres there have been active attempts to
find them. See here
[http://blogs.discovermagazine.com/cosmicvariance/2008/12/02/no-dyson-spheres-found-yet/]
although some other work suggests that Dyson spheres are just not that likely
[http://www.lpi.usra.edu/meetings/abscicon2010/pdf/5469.pdf](pdf). Most
large-scale engineering projects will leave a recognizable signature. In this
example, systematic searches have only been done out to a few hundred light years,
but stellar engineering, in its more blunt forms, is noticeable even at an
intergalactic level.
Moreover, many ship designs lead to detectable results. For example, large
fusion torch drives have a known sort of signature that we've looked for and
haven't found.
3Thomas11y
Several times more planets could increase the probability of a distant
civilization several times, at the most. That is not a lot, if the initial
probability is already tiny.
A rocky planet with no metals has a much weaker magnetic field. A civilization
without iron and other metals is harder to build as well. Without heavy
radioactive isotopes, volcanism and tectonics are also different or nonexistent.
There may be some other factors, and not all of them count against aliens.
2John_Maxwell11y
Are there any metals necessary for life?
6JoshuaZ11y
Astronomers use metal to mean elements other than hydrogen and helium
[http://en.wikipedia.org/wiki/Metallicity]. Metals in the chemist's sense of the
word aren't in general necessary. A lot of life is pure CHONPS. However, most
complex life involves some amount of metals in the chemical sense (most animals
require both iron and selenium, for example). And planets which are of low
metallicity in the astronomical sense will necessarily be of extremely low
metal content in the chemical sense, since the actual metals other than lithium
and beryllium require extensive synthesis chains before one gets to them.
2John_Maxwell11y
Thanks for the clarification!
-1wedrifid11y
Wow, Astronomers are lazy. It's not hard to make up new terms for things when
the existing ones clearly don't fit. Heck, if making up a word was too difficult
they could have used an arbitrary acronym.
5[anonymous]11y
Well, when most of what they have to work with is hydrogen, a whiff of helium,
and a tiny smattering of literally everything else ever, it's kinda hard to
blame 'em. ;p
0billswift11y
Not really. If you look at a periodic table, the vast majority actually are
metals.
0wedrifid11y
The vast majority are metals, and saying they all are is wrong (except in as
much as authority within the clique is able to redefine such things). It's also
distasteful and lazy to formalise the misuse. I'd be embarrassed if I were an
astronomer.
0bogdanb11y
Well, Wiktionary claims “metal” used to mean “to mine” a few thousand years ago,
so I can’t blame them that much. The astronomers at least didn’t mess up the
pronunciation again :-)
2Douglas_Knight11y
Yes, a planet around an old star should raise the odds of old, hence metal-poor,
planets, but by how much? Old stars have plenty of time to do other things to
acquire planets, such as stealing them or creating them while passing through
metal-rich nebulas. Can we directly measure the composition of this planet?
0JoshuaZ11y
In the particular case I linked to, there are two planets around the same star.
It is extremely unlikely to pick up multiple planets from floating rogues. As to
metal-rich nebulas, my understanding is that they aren't that dense, so they
wouldn't do much. And if that had occurred, we'd likely see the star having
higher metal content as well. In this case the star's iron content is
slightly under a tenth that of the Sun, and many other metals have
more extreme ratios.
2bogdanb11y
Do you have any support for that statement? (I’m not arguing, just curious how
one goes about estimating the frequency of planetary capture given what I
thought to be very little data.)
1JoshuaZ11y
The probability of planetary capture is low. For multiple capture events, the probabilities are
independent (this isn't quite true, there are some complicating factors but this
is very close to true), so the probability of capturing two is roughly the
square of capturing a single one, which is estimated as around 3-6% under
generous conditions (rogue planet numbers at least equal to the number of stars)
[http://www.cfa.harvard.edu/news/2012/pr201212.html]. So no more than around 1
in every 288 planets should have a double capture event, and the likely number
is much lower than that. With around 700 known planetary systems, the chance
that a given one is in this category is low, but that number isn't as important
since what needs to be asked is whether it is more likely that they've formed
around the star or that multiple captures occurred. Note also if one assumes 3%
rather than 6% then one gets around 1 in every 1100 planets which means we
shouldn't have even seen any examples. And if one thinks that the ratio of rogue
planets to stars is lower than 1:1, all of this drops quite quickly.
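(A minimal sketch of that calculation in Python, assuming the capture events are independent; with the 3% and 6% endpoints it gives roughly 1 in 1,100 and 1 in 280, in line with the figures above:)

    # Illustrative only: chance of two independent planetary-capture events.
    for p_single in (0.03, 0.06):
        p_double = p_single ** 2                      # independence: multiply the probabilities
        print(p_single, f"roughly 1 in {1 / p_double:,.0f}")
    # 0.03 -> roughly 1 in 1,111; 0.06 -> roughly 1 in 278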
A related issue is that if the first captured planet is in a somewhat stable
elliptical orbit, the introduction of another body of somewhat similar size into
orbit can destabilize the first, flinging it out of the system, so once the
second capture event occurs there's a chance one will lose the first planet.
I don't have a reference off-hand, but this is pretty standard logic, to the
point where multiple capture events are rarely considered as an explanation for
strange systems; concluding that our models are wrong is generally preferred.
ETA: Some of this logic is severely off. I just remembered that more recent
estimates drastically increase the number of rogue planets floating around. See
e.g. here
[http://www.universetoday.com/93749/nomad-planets-could-outnumber-stars-100000-to-1/]
so the 6% may in fact be an underestimate, in which case multiple-capture
planets become a much more plausible explanation.
0bogdanb11y
Hi Joshua, thanks for answering. Quick follow-up question: how come only “rogue”
planets are mentioned in these arguments? (Well, it makes sense for studies
about rogue planets, but it seems to happen even in discussions explicitly about
captured planets, your comment being an example.) Can’t planets be “exchanged
directly” between closely-passing stars? (I mean, without the exchanged planet
spending a long time unbound to a solar system, in a sort of larger-scale
analogue of close binaries exchanging envelope matter.)
I imagine close encounters are rare in general, but given the large number of
binary and multiple-star systems that we seem to see everywhere, and my
(admittedly vague) recollections of some rather tight clusters of stars with
complicated/chaotic dynamics, it seems like it should be feasible (even common)
for stars to exchange planets early (while they’re still part of a young cluster
or complex multi-star system, and they interact closely) and then separate
taking with them “stolen” planets (my understanding was that a significant
fraction of stars in young clusters acquire high velocities and are “evaporated”
away from their birth cluster, especially in “tight” clusters). Are the
interaction time-frames incompatible with that kind of scenario or something?
1JoshuaZ11y
Yes, they can happen. But my understanding is that exchange isn't a likely
result of system interaction; losing a planet entirely (that is, a planet
getting ejected from the system) is a much more likely outcome. But this is
pushing the limits of my knowledge in this area.
-2shminux11y
I find it hard to even take the idea of the Great Filter seriously, given that
we don't have a good definition of what life, let alone intelligent life, is.
Generalizing from one example is not very productive.
8JoshuaZ11y
One doesn't need much in the way of definitions here to see the problem. The
essential problem is that we don't see anything out there that shows any sign
of intelligence. No major stellar engineering, etc. The fact that intelligent
life and life may have fuzzy borders doesn't enter into that argument much.
If you want to be really careful, you can talk about a version of the Filter
that applies to life similar to our own. For our purposes, that's about as
worrisome.
2shminux11y
You can, but it's pointless, unless you think that "life similar to our own" has
a significant chance of arising independently of ours. The argument "we are
here, so it stands to reason that someone like us would evolve elsewhere" is the
generalization from one example (and also a failure of imagination) that I am so
dubious about. I see no reason to believe that even given a more or less exact
replica of the Solar system (or a billion of such replicas scattered around the
Galaxy or the Universe) there will arise even a single instance of what we would
recognize as intelligence. This may change if we ever find some
non-Earth-originated lifeform. (Go, Curiosity!) Until then, the notion of the
Great Filter is just some idle chat.
2siodine11y
As you should be [http://arxiv.org/abs/1107.3835].
0JoshuaZ11y
So in a nutshell in the framework of standard discussions about Fermi issues and
the Filter one would just say that one heavy filtration step is that intelligent
life as we know it seems unlikely to arise.
Essentially, Eliezer gets negative karma for some of his comments (-13, -4, -12, -7) explaining why he thinks the new karma-rule changes are a good thing. For comparison, even the obvious trolls usually don't get -13 comment karma.
What exactly is the problem? I don't think that for a regular commenter, having to pay 5 karma points for replying to a negatively voted comment is such a problem. Because you will do it only once in a while, right? Most of your comments will still be reactions to articles or to non-negatively voted comments, right? So what exactly is this problem, and why this overreaction? Certainly, there are situations where replying to a negatively voted comment is the right thing to do. But are they the exception, or the rule? Because the new algorithm does not prevent you from doing this; it only provides a trivial disincentive to do so.
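(For reference, a minimal sketch of the disincentive as described above; this is purely illustrative, not the actual LW code, and the exact threshold is an assumption:)

    # Illustrative only: replying to a comment voted below zero costs the replier 5 karma.
    REPLY_PENALTY = 5

    def reply_cost(parent_karma):
        return REPLY_PENALTY if parent_karma < 0 else 0

    print(reply_cost(-13))  # 5
    print(reply_cost(2))    # 0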
What is happening here?
A few months ago LW needed an article to defend that some people here really have read the Sequences, and that recommending Sequences to someone is not an offense. What? How can this happen on a website which originally more or less was the Sequences? That seemed absurd to me, ... (read more)
I suggest everyone think for a moment about the fact that Eliezer somehow created this site, wrote a lot of content people consider useful, and made some decisions about the voting system, which together resulted in a website we like. So perhaps this is some Bayesian evidence that he knows what he is doing.
There's also plenty of Bayesian evidence he's not that great at moderation. SL4 was enough of an eventual failure to prompt the creation of OB; OB prompted the creation of LW; he failed to predict that opening up posting would lead to floods of posts like it did for LW; he signally failed to understand that his reaction to Roko's basilisk was pretty much the worst possible reaction he could engage in, such that even now it's still coming up in print publications about LWers; and this recent karma stuff isn't looking much better.
I am reminded strongly of Jimbo Wales. He too helped create a successful community but seemed to do so accidentally as he later supported initiatives that directly undermined what made that community function.
Seems to me there are two important factors to distinguish:
* how good is Eliezer at "herding cats
[http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/]" (as opposed to
someone else herding cats)
* how difficult is herding cats (as opposed to herding other species)
To me it seems that the problem is the inherent difficulty of herding cats; and
Eliezer is the most successful example I have ever seen. I have seen initially
good web communities ruined after a year or two... and then I read an article
[http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/] describing how
exactly that happened. From the outside view, LW seems to have survived for a
surprisingly long time as a decent website.
The problem with Roko seems to me a bit similar to what is happening now -- some
people intentionally do things that annoy other people; the moderator tries to
suppress that behavior; contrarians enjoy fighting him by making it more visible
and rationalize their behavior as defending the freedom of speech or whatever.
The Roko situation was much more insane; at least one person threatened to
increase existential risk if Eliezer did not stop moderating the discussion.
Today the most crazy reaction I found was upvoting an obvious troll so that
others can comment on their nonsensical sequence of words without karma costs
[http://lesswrong.com/lw/ece/rationality_quotes_september_2012/7bdh]! Yay,
that's exactly the behavior you would expect to find in a super-rational
community, right? Unfortunately, it is exactly the kind of behavior you will
find when you make a website for wannabe smart people.
Wikipedia is different: it is neither a blog nor a discussion forum. And it
exists at the cost of hundreds of people who have no life, so they can spend a lot
of time in endless edit wars. This is yet another danger for LW. Not only can new
users overrule the old users, but old users who have no life can also
overrule old users whose instrumental goals are outside of LW. Users who
wan
8Rhwawn11y
I don't think any of that addresses the main point: what has Eliezer done that
is evidence of good moderating skills? Who has Eliezer banned or not banned?
etc.
The question isn't: "can Eliezer spend years cranking out high quality content
on the excellent Reddit codebase with a small pre-existing community and see it
grow?" It is: "can Eliezer effectively moderate this growing community?" And I
gave several examples of how he had not done so effectively before LW, and has
not done so effectively since LW.
(And I think you badly underestimate the similarities of Wikipedia during its
good phase and LW. Both tackle tough problems and aspire to accumulate high
quality content, with very nerdish users, and hence, solve or fail at very
similar problems.)
8wedrifid11y
This just isn't remotely accurate as a representation of history.
The remainder of the parent comment seems to present similarly false (or
hyperbolically misrepresented) premises and reason from them to dubious
conclusions.
Eliezer is not so vulnerable that he needs to be supported by bullshit.
My thoughts on the recent excitement about "trolls", and moderation, and the new karma penalty for engaging with significantly downvoted comments:
First, the words troll and trolling are being used very indiscriminately, to refer to a wide variety of behaviors and intentions. If LW really needed to have a long-term discussion about how to deal with the "troll problem", it would be advisable to develop a much more precise vocabulary, and also a more objective, verifiable assessment of how much "trolling" and "troll-feeding" was happening, e.g. a list of examples.
Just spotted this thread. The Sequences were indeed the direct inspiration for
the format of the linked series of posts I run. Though mine are on a pretty
broad range of topics -- most recently contrasting Sondheim's Company with
Passion and using both to talk about what the ends of marriage are.
Recently we also had a few articles about how to make LW more popular: how to attract more readers and participants. Well, if that happens, we will need stricter moderation than we have now; otherwise we will drown in the noise. For instance, within this week we have a full screen of "Discussion" articles, some of them containing 86, 103, or 191 comments. How many of those comments contain really useful information? What is your estimate of how much of that information you will remember after one week? Do you think that visiting LW once a week is enough to deal with that amount of information? Or do you just ignore most of it? How big a part of a week can you spend online reading LW while still pretending you are being rational instead of procrastinating?
Upvoted for this. I can't believe how many people don't get it.
He got my downvotes for making terrible arguments defending a change that won't
do what it's supposed to do, while also doing other shitty things. He was also
an overconfident dick about the whole situation. The problem isn't the rule,
it's the wrong beliefs about how the forums work and how they might be fixed.
4Sly11y
That thread is Bayesian evidence against the new poorly thought out rule. The
objections that have been raised to it have not even come close to being met.
The fact that your own post is a hair's breadth away from inflicting negative
karma on me should be enough to give you pause.
The reaction to the new rule should not be surprising. If it was surprising,
then you should update your model.
2[anonymous]11y
Good point about the silliness of people downvoting Eliezer to show their
disagreement.
Using the phrase 'trivial disincentive' looks like a deliberate reference to
this article [http://lesswrong.com/lw/f1/beware_trivial_inconveniences/] which
would be an unconvincing way to argue that the change won't cause any problems.
And in general, I don't think that the change will have really serious
side-effects but I'm in favor of changing complex systems in as small increments
as possible. The only sensible, currently relevant reason for implementing the
new feature (flooding of the recent comments sidebar) that was given can be
solved much less invasively by not having comments from crappy threads show up
in the recent comments sidebar. For additional soft paternalist goodness, you
could also have replies to comments made in such threads not appear in users'
inboxes.
Being able to keep up with all the conversation going on LessWrong seems
incompatible with the goal of expanding the community. Reading comments and
participating in conversation is a leisure activity. If I were very concerned
with being "rational" about my LessWrong usage patterns I would stop reading
them at all and stick to just articles (possibly only main section articles if I
were really concerned).
A few years ago, I learned that multivitamins are ineffective, according to research. At that point, I had heard of the benefits of many of the individual vitamins; each was praised the way one would praise anything good enough to take by itself, so I was thinking that multivitamins should be something ultra-effective that only irrational people wouldn't take. When I learned they were ineffective, I hypothesized that vitamins in pill form simply don't get processed well.
Recently, I was reading a few articles about Vitamin D - I thought I should definitely have it, because the sources were rather scientific and were praising it a lot. I got it in the form of softgels, because gwern suggested it. When they arrived, I saw it's very similar to pills, so I thought it might be ineffective and decided to take another look at Wikipedia/Multivitamins. Then I got very confused.
Apparently, the multivitamins DO get processed! And yes, they ARE found to have no significant effect (even in double-blind placebo trials). But at the same time, we have pages saying that 50-60% of people are deficient in Vitamin D and that it seriously reduces the risk of cancer, among other things (including heart disease). Can anyone explain what's going on?
I don't really follow. A multivitamin != vitamin D, so it's no surprise that
they might do different things. If a multivitamin had no vitamin D in it, or if
it had vitamin D in different doses, or if it had substances which interacted
with vitamin D (such as calcium), or if it had substances which had negative
effects which outweigh the positive (such as vitamin A?), we could well expect
differing results.
In this case, all of those are true to varying extents. Some multivitamins I've
had contained no vitamin D. The last multivitamin I was taking both contains
vitamins used in the negative trials and also some calcium; the listed vitamin D
dosage was ~400IU, while I take >10x as much now (5000IU).
Is that unsatisfactory?
3Blackened11y
That would only make sense if vitamin D is the only one with any really
significant effects, or if the others that do have effects are included in
dosages that are too small (this doesn't seem improbable at all).
I remember seeing studies which doubt that vitamin C helps with healing from the
common cold. It would be no wonder if most of the others are as insignificant.
Also, just checked some pills of vitamins (for hair, skin and nails) I bought
1-2 years ago. It says "take 3 times a day" and it has 100 IU of vitamin D. It's
also apparently 50% of RDA - most other vitamins/minerals in it are up to
200-250%, and my vitamin D pills are 1250% RDA. Mystery solved, I guess.
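(Taking the figures above at face value, the gap is large; a quick comparison, illustrative only:)

    # Illustrative only: daily vitamin D from that multivitamin vs. the standalone dose mentioned above.
    multivitamin_daily_iu = 3 * 100     # "take 3 times a day", 100 IU per dose
    standalone_daily_iu = 5000          # gwern's reported standalone dose
    print(standalone_daily_iu / multivitamin_daily_iu)  # ~16.7x more vitamin D per day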
1Epiphany11y
Supplements often have quality issues. You'd be surprised what they get away
with. Sometimes the coating doesn't digest, so the nutrients aren't absorbed.
Sometimes they use the wrong form of the substance because it is cheaper.
Sometimes they're even contaminated with lead. I only buy vitamins that have
been tested by an independent lab. So far, the best brands I've found were
Solgar and Jarrow.
1dbaupp11y
(Links are created by writing [ text ] then ( url ), you seem to have used
parentheses for both.)
There was much skepticism about my lottery story in the last open thread. Readers should be aware, I sent photographic proof to Mitch Porter by e-mail.
As promised, I made substantial donations to the following two causes:
I see 'M J GEDDES' listed. Well done!
Out of curiosity, how much did you donate? (If it was >$500, I forgive you all
the crap on OB and SL4; actions are more important than words.)
4[anonymous]11y
Well, I've opted to focus the bulk of my philanthropy on the 'Methuselah
Foundation'. I joined the 300 and I've now pledged $US 25 000. My statement is
here:
http://www.mfoundation.org/?pn=donors [http://www.mfoundation.org/?pn=donors]
Powerful new forces are in play as the board game for Singularity takes a
dramatic turn!
4gwern11y
Received '85.00'?
0katydee11y
The '300' pledge is for $25000 over a span of 10 years.
0gwern11y
That's still $2500 for the first installment, not $85.
0katydee11y
They break it down further until it's like $3 per day, so I don't know what
their installment plan is.
0Blackened11y
Well, so it was a good decision to play lottery after all!
(I'm joking)
But anyway, congratulations on the success and thanks for the contributions! I
personally am going to donate huge amounts of money to similar causes if I get
rich. It seems to be the most rational way (according to my goals) to spend
it.
"Before Time Cube, Otis E. Ray advocated the sport of marbles. He authored a book titled Mr. Marbles – Marbles for Everyone,and got the city council of St. Petersburg, Florida to proclaim a "Marbles Week" in the 1970s. In 1987, this became a controversial attempt to establish a million dollar marble tournament inside a huge round structure and establish a philosophical "Order of the Sphere."
By rejecting many small spheres in favor of one large cube, Gene Ray has dedicated his life to demonstrating that reversed stupidity is not intelligence.
It seems Yvain has accepted the challenge and made a steel man
[http://squid314.livejournal.com/327646.html] attempt.
1Multiheaded11y
In an awesome way, too, and exactly how I'd do it if I could write better and
had more patience. Also, looks like Yvain is turning into the second Moldbuggian
Progressive after yours truly!
2AlexMennen11y
Consider the great circle passing through your current location and the Earth's
poles, together with the great circle perpendicular to it at the poles. These
form 4 lines of longitude, each one of which is experiencing a different day
simultaneously (for instance, when it is midnight on your line of longitude, it
is 6am, noon, and 6pm on the others). Of course, you might wonder why I would
single out these 4 lines of longitude instead of just the one at your current
location, giving the traditional 1 day per 24 hours, or all of them, giving
infinity days per 24 hours. Of course, it would be ridiculous to say that there
is a different day going on in one location and another some infinitesimal
distance away from it, so the latter is a non-starter. And the standard 1-day
answer ignores the fact that different longitudes do not experience the same
day. Counting 4 days occurring at the same time makes sense because then the
days are separated by 90-degree rotations of the Earth, and correspond to
quadrants of a circle. 90 degrees is the most fundamental angle in geometry, and
should be considered the primary unit of rotation, as explained in this video
[http://www.youtube.com/watch?v=1qpVdwizdvI] (relevant discussion starting at
4:28).
2Viliam_Bur11y
There is more than one time zone. When you search for information about time, the
Bible is an unreliable source. Also, teachers should not use the Bible in classrooms.
1drethelin11y
Is time cube pro gay and transgender rights? 4 Orientations (Man who likes
ladies, man who likes men, Lady who likes ladies, Lady who likes men) and 4
semi-concrete genders (Cismen, Cisladies, FTM and MTF).
2Multiheaded11y
Politically, it's a sort of reactionary multiculturalism; all four "sides"
should be kept separate and distinct in all aspects, racial segregation, etc.
0FiftyTwo11y
It's actually a stealth argument in favour of increased mental health provision
that has fallen prey to Poe's law.
Precision First by L. Kimberly Epting on Inside Higher Ed was an interesting read for me.
Indeed, many of my students have revealed this to me when complaining about points not earned on test questions; they have told me, in no uncertain terms, that they have learned to look at the topic of an essay question and then “just write pretty much everything [they] know about that topic.” This seems reasonable if the test prompt is “tell me everything you know about X,” but I can tell you the exact number of times I have written such an item: zero. Truthfully, I recognize I had a similar history, at least until advanced courses in college -- filling up the space on the page with at least related information generally produced favorable consequences.
Students also often ask if items on my tests are “trick questions.” My standard answer is that I never intend items to be “trick questions”; however, they are intended to be specific, precise questions. It occurs to me this might be an important revelation from them: focusing on specificity in reading and answering a short-answer/essay item is so unfamiliar to them, they find it suspect when required to do so. And they are genuinely confused
Stanislas Dehaene's and Laurent Cohen's (2007) Cultural Recycling of Cortical Maps has an interesting argument about how the ability to read might have developed by taking over visual circuits specialized for biologically more relevant tasks, and how this may constrain different writing systems:
According to the neuronal recycling hypothesis, cortical
biases constrain visual word recognition to a specific anatomical
site, but they may even have exerted a powerful
constraint, during the evolution of writing systems, on the
very form that these systems take, thus reducing the span
of cross-cultural variations. Consistent with this view,
Changizi and collaborators have recently demonstrated
two remarkable cross-cultural universals in the visual
properties of writing systems (Changizi and Shimojo,
2005; Changizi et al., 2006). First, in all alphabets, letters
are consistently composed of an average of about three
strokes per character (Changizi and Shimojo, 2005). This
number may be tentatively related to the visual system’s
hierarchical organization, where increases in the complexity
of the neurons’ preferred features are accompanied by
a 2- to 3-fold increase in receptive field size.
I've found that the practice of providing open drafts of possible future articles in the open threads and relevant comment sections has proven quite useful and been well received in the past. I've decided to make and maintain a list of them. If anyone else has made similar posts, please share them with me, and I'll add them to the list.
I've decided I should educate myself about LW-specific decision theories. I've downloaded Eliezer's paper on timeless decision theory and I'm reading through it. I'm wondering if there are similar consolidated presentations of updateless and ambient decision theory. Has anyone attempted to write these theories up for academic publication? Or is the best place to learn about them still the blog posts linked on the wiki?
I'm currently researching TDT, UDT, and ADT. So far as I am aware, there have
been no comprehensive presentations of UDT and ADT. Eliezer's paper itself is a
step in the right direction, but is unfinished and has some major flaws.
SI has contracted the philosopher Rachael Briggs to write a paper on TDT for a
peer-reviewed, academic journal. Last time I spoke to Luke about it, he said
that the pre-print will be done sometime this winter. I don't know whether the
pre-print will available to the general public, or just to internal researchers.
Edit: According to Nisan
[http://lesswrong.com/lw/ecf/open_thread_september_115_2012/7dk3], the
information in the second paragraph is out-of-date.
4Nisan11y
Rachael Briggs is no longer working on that project. It's been taken over by SI
Research Fellow Alex Altair.
2[anonymous]11y
Ah, I was unaware. Thank you for the update. Was there any explanation given as
to why she is no longer working on the project? Do we have a revised timeline
for the paper's completion?
0Nisan11y
I think Briggs wanted to stop; I don't know why. And I don't know when the
project will be completed.
2[anonymous]11y
Edit: According to Nisan
[http://lesswrong.com/lw/ecf/open_thread_september_115_2012/7dk3], the
information in the second paragraph is out-of-date.
0pragmatist11y
Is there a place on this website (or elsewhere) where the major flaws in
Eliezer's paper are pointed out and discussed?
2[anonymous]11y
There likely is, but I don't have any links off-hand. IMO, the major flaw is
that some passages are dense and unclear. It's difficult to understand some
explanations and examples as a result. Don't be discouraged if you have to
re-re-re-read a part of a paper in order to decipher the meaning. I certainly
had to.
Beyond that, people disagree about TDT itself and have tried to make revisions,
as well as revisions to those revisions. (Hence UDT and ADT.) Those flaws are
discussed in the blog posts on decision theory, as well as in comment sections.
Even still, that information is dispersed and unorganized. So far as I can tell,
most of it just exists within the minds of individuals and hasn't been formally
written up yet.
I'm pretty sure greater gender equality in a society translates into women who are less likely to say they look for status in mates. To a certain extent it seems plausible that it influences behaviour, but I'm very sceptical of the implied argument that "high status in men" ceases to be a key sexy trait if you just have the right culture.
The participants were asked in their native language whether certain criteria (such as ‘financial prospect’ and ‘being a good cook’) were important in choosing a mate.
Did they put "is well liked by other women" or "someone who my friends consider cool" on that list?
Sexuality is a strange thing. If you consciously think something is sexy, it
then becomes sexy for you. At least that's how it works for me, I'm generalizing
from one example here.
2[anonymous]11y
In our society the consensus seems to be it doesn't quite work like that, at
least when it comes to things like say homosexuality.
2Blackened11y
I didn't say that sexuality is entirely shaped by this, only that it's
influenced. Say, when I read that hourglass-shaped female bodies are supposed to
be attractive, I started noticing that I think I'm attracted to that, although
one can argue that I used to be before I read it, so I only started noticing
that. However, it worked for me for other things, many of which are not liked by
many people.
3NancyLebovitz11y
I don't know, but that last would just reflect the consensus, no matter what it
was.
It might be worthwhile to ask men from the various countries what women seemed
to be looking for.
2[anonymous]11y
I'm not sure this would produce good results. That we have the phrase "he got
lucky" indicates men may be clueless about what women want. A better result
would be gained by data mining online behaviour in response to flirting on, say,
Facebook.
Computational sociology [http://en.wikipedia.org/wiki/Computational_sociology]
ftw.
1NancyLebovitz11y
"Might be useful" is a weak claim. I was thinking that if men say "women want
men with money" in the gender disparity countries and they say "women want
good-looking men" in the gender equal countries, it would be confirmatory
evidence. Likewise, it might be of interest if men of different ages in the same
country have different views of what women want.
There are certainly plenty of men who are convinced they know what women want on
the average, if not in particular cases. I wonder how much they're subject to
availability bias.
People may be amused by this Bitcoin extortion attempt; needless to say, I declined. (This comment represents part of my public commitment to not pay.)
So at the beginning of this story there was no AI, there was only nondestructive
upload technology, and the researcher sneakily uploaded the 'testee' at the
beginning of the test.
Ten minute video about human evolution and digestion which argues plausibly that we're very well-evolved to eat starch-- specifically tubers and seeds, though we also have remarkable flexibility in what we eat.
I thought coyotes have at least as wide a range of foods as we do, though.
Yet another online university, this one launched on Marginal Revolution. 2012 has been a remarkable ride for online education and in many respects marks the start of a test of which theory of what formal education is actually for is correct. Will software and the internet disrupt education like they did the record business?
Amusing commentary by gwern:
Hm, economists not outsourcing to any of the specialists in this very active growing marketplace, and doing an online education webservice in-house? The irony! It burns!
Is there anything solid known about eye position (front vs. side of skull) and other aspects of an organism's life? It seems to me that front of the skull correlates with being a hunter, but (as is usual with biology) there may well be exceptions.
Probably worth noting that fish, even predatory ones, don't necessarily have
binocular vision, and vice versa for herbivores. Sperm whales are the largest
living predators and lack it; fruit bats, who don't hunt, do have it.
There ARE incentives to develop it, or retain it, based on those lifestyle
differences, but it makes for a somewhat fuzzy heuristic.
The other thing is this is pretty much restricted to fish and their mutant
descendants, the tetrapods. Get outside the chordates and you find different
solutions to these problems. Arthropods have several distinct kinds of eye
architecture and sometimes their strategies generalize well: house flies (which
are prey and scavengers) and dragonflies (which hunt) both have
similarly-structured eyes; if anything, I think the dragonfly has wider coverage.
Spiders often rely on widely-placed eyes of differing strengths and ranges;
mantis shrimp only have the two eyes, on stalks, and are renowned predators.
So it might look like a generalizable rule because it applies to so many of the
most obvious, easy-to-examine large animals you can find, but remember they're
our close anatomical cousins, and they're solving the problem with very similar
design constraints.
(Also, primates -- many primates who spend a lot of time in trees, but don't
hunt, have binocular vision. In their case it's there because of its benefits
for rangefinding and spatial awareness in an arboreal environment.)
1dbaupp11y
I have read stuff that posited that hunters have front eyes (I think the reason
given was for more accurate depth perception), and that prey-animals have eyes
towards the side of their head to give a wider field of vision.
I'll see if I can refind any of that stuff.
--------------------------------------------------------------------------------
I didn't find exactly what I was thinking of (I think it was probably a book),
but a section of the Binocular vision wikipedia article
[https://en.wikipedia.org/wiki/Binocular_vision#Field_of_view_and_eye_movements]
has some information (uncited, unfortunately). Specifically:
0NancyLebovitz11y
I was wondering whether the rules might be different for sea creatures because
of hydrodynamics. Practically all fish have their eyes on the sides of their
heads. It's possible that understanding hammerhead sharks
[http://en.wikipedia.org/wiki/Hammerhead_shark] and flounders
[http://en.wikipedia.org/wiki/Flounder] would be too hard.
Puffer fish [http://en.wikipedia.org/wiki/Tetraodontidae] are fish which have
eyes at or near the front of their heads, but they aren't built for chasing
things down. I just found out that you can get a puffer fish to chase a laser
[http://www.youtube.com/watch?v=2i6LhaKTvOs]. I don't know what that proves.
Maybe they chase relatively small slow prey.
2[anonymous]11y
Puffers are sometimes pelagic (ocean-going) for parts of their life cycle, but
typically they hang out in reefs, brackish areas, or other near-shore zones and
hunt smallish prey, "sprinting" it down and delivering a quick snap, or just
teasing it out from hiding places among coral or plants. They use the same
"sprint" to evade attack.
Puffers also have the ability to swivel their eyes independently, like a
chameleon.
0bogdanb11y
I think that the rules are different for sea creatures simply because accurate
sight is usually less useful as a position sense in water. In most places you can’t
see far away no matter how good your eyes are, so just noticing shadows is
mostly enough. Sound (including vibrations and currents) tends to be more useful
there, hence echolocation and the lateral line, as is smell (see sharks).
Basically, you can’t hunt much with sight, but it’s still useful to avoid being
hunted.
There are some exceptions, like octopi (big eyes) and some fish with curiously
complex sight (poly-chromatic, polarization-sensitive eyes) I don’t have a very
good explanation for. But I’d guess they’re a bit like bats among land animals:
some accident of evolution probably threw them on a tangent and they found a
“local maximum” of fitness.
I've just started playing with Foldit, a game that lets science harness your brain for protein folding problems. It has already been used to decode an HIV protein and find a better enzyme for catalyzing industrial processes. Currently, work is underway to design treatments for sepsis.
The perception that women are scarce leads men to become impulsive, save less, and increase borrowing, according to new research from the University of Minnesota's Carlson School of Management.
Research on this in the context of online forums such as ours might be very interesting.
A related blog entry by Peter Frost titled Our brideprice culture deals with the societal implications of gender imbalance. It begins with hig... (read more)
As a female, I wonder what it means that I don't react to behaviors like
competing for status, class signaling and spending beyond ones means by being
attracted - instead, I have the same feeling I get when people are being
immature and stupid. Lol. I have thought about this a lot. I am just not
attracted to the ordinary symbols of male power - though I seem to have a few
triggers. Height doesn't matter, muscles don't do a thing and money has no
effect. The demonstrations of power I do enjoy are when they're able to hold up
their end of a debate with me (I keep wishing for someone to win against me), or
when they're doing something really, really intellectually difficult. Those
things, I do respond to. Fluff? No.
I have to wonder if other women who are as intellectual as I am are the same.
As [...] I wonder what it means that I don't [...].
Generally, when someone says that the majority of A do X, but you are A and don't do X, here are some possible explanations:
the statistics are simply wrong;
the statistics are correct about the majority, but you as an individual are an exception, and possibly so are some of your friends (this similarity could have contributed to you being friends);
the statistics are correct about the majority, but within it a minority is an exception, and you belong to this minority, and possibly so do some of your friends;
you are wrong: you are actually doing X, but you rationalize it as something else.
Also from the outside, if someone else is saying this, don't forget:
publication bias -- people who don't fit the statistics are more likely to write about it than those who fit are to write "me too" (in communities that value independence).
Specifically for this topic, think also about the difference between maximizers and satisficers. If you read that "females value X", you may automatically translate it as "females are X-maximizers", and then observe that you are not. But even then you could still ... (read more)
What's the difference between the second and the third bullet?
0Epiphany11y
Thanks for seeing that there are multiple options for interpretation. I hate it
when people interpret my behavior into a false dichotomy of options, which
happens to me frequently, so I am finding this refreshing.
I have a functionality threshold, but I see that as different from a class
threshold. For instance, I had a boyfriend that had recently graduated from
school. He was unemployed at that point, of course. It took him a very long time
to get a job due to the recession. That didn't deter me from liking him. Why
not? I had no reason to think he was dysfunctional, I figured he would get a job
eventually.
On the other hand, if I meet someone who reeks of alcohol and obviously hasn't
showered in a week, I'm going to be assuming they're dysfunctional - that even
if their situation could be temporary, they're probably exacerbating it.
That's not about class. That's about wanting only functional, healthy
relationships in my life. It's not a healthy relationship if you have to pay for
a person's food and shelter because they're not able to get those things for
themselves.
If I meet someone who seems functional (has showered, does not reek of alcohol,
etc.) and they strike up intelligent conversation (funny is nice but intelligent
conversation is more my thing) but happen to be homeless, I will judge them
based on how functional they are. I would not invest much until they get back on
their feet, because I know better than to think that seeming functional and
actually being functional are the same thing, but I wouldn't refuse to talk to
them if they seemed interesting and functional.
Why invest in a guy who just graduated but not the homeless guy? Well let's ask
this: what did the recent graduate do wrong? Nothing. Nothing is out of the
ordinary if a recent grad is looking for work. That's normal. That's not a red
flag. The homeless person, though, may have done something to cause their
situation. That is an abnormal situation, a red flag. I won't be sure they are
I don't react to behaviors like competing for status, class signaling and spending beyond one's means
The demonstrations of power I do enjoy are when they're able to hold up their end of a debate with me (I keep wishing for someone to win against me), or when they're doing something really, really intellectually difficult. Those things, I do respond to.
That is class signalling (of a particular class) and winning debates is competing for status.
Fluff? No.
You have your own sexual preferences and the traits that you are not attracted to appear less intrinsically worthy. Another woman may say she isn't attracted to "Fluff" like intellectual displays and rhetorical flair and instead is only attracted to the 'things that really matter' like social alliances, security and physical health.
I have to wonder if other women like me are the same.
Lol thank you Wedrifid, that was refreshing, and you were pretty good.
I disagree with you, but you're welcome to continue the disagreement with me. (:
Just because other people use those as signals that a person is in a particular
place in a hierarchy does not mean that:
A. I believe in social hierarchies or that social hierarchies even exist. (I see
them as an illusion).
B. The specific reason I am attracted to these qualities is due to an attraction
to people in a certain position in the social hierarchy.
The reasons I want someone who is able to defeat me in a debate are:
1. It gets extremely tedious to disagree with people who can't. I end up
teaching them things endlessly in order to get us to a point of agreement,
while learning too little.
2. I might get careless if nobody knocks me down for a long time. It's not good
for me.
3. It is rather uncomfortable and awkward in a relationship or even a
friendship if one person is always right and the other always loses debates.
That feels wrong.
"Fluff, no." vs "You have your own preferences and other people see your
preference as fluff."
If I said I had a million dollars, but really, I was a million dollars in debt,
would that be an empty claim? Yes. If a person is spending beyond their means in
order to signal that they have money, they're being dishonest. So that's fluff.
If social hierarchies don't actually exist, and a person signals that they're in
one, is that real, or is it a fantasy? if they don't exist, it's fluff.
"This seems tautologically likely."
Okay, this was an embarrassing failure to use clear wording on my part. Although
you're not actually disagreeing with me, you got me good, lol.
That was fun. Feel free to disagree with me from now on.
4beoShaffer11y
Can you clarify what you mean by this?
These are decent reasons to intentionally seek out someone who can out-debate
you; however, as far as actual attraction goes, they make just as much sense, if
not more, as post-hoc rationalizations as they do as real reasons. As Yvain has
explained [http://lesswrong.com/lw/6p6/the_limits_of_introspection/], all
introspection of the type you are engaging in is prone to this error mode, and
while your reasons 1 & 3 aren't completely inconsistent with our knowledge of
human attraction, they don't fit as well as the hypothesis that you are
attracted to behaviors that signal high IQ and/or status while side-stepping
your issues with the most common ways of displaying those traits (this is
largely based on what I've been told in various psychology classes; I don't have
the original studies that my professors based their conclusions on at hand).
-edit if anyone knows how to make blockquote play nice with the original
formatting let me know, I think this works for now.
3Epiphany11y
On introspection biases: For minor things, I wouldn't be surprised if I make
errors in judging why I do them, because it can take a bit of rigor to do this
well. But if something is important, I can use meta-cognition and ask myself a
series of questions (carefully worded - this is a skill I have practiced),
seeing how I feel after each, to determine why I am doing something. I carefully
word them to prevent myself from taking them as suggestions. Instead, I make
sure I interpret them as yes or no questions. For instance: "Does class make me
feel attracted?" instead of "Should I feel attracted to class?" - it's an
important distinction to make, especially for certain topics like fears. "Am I
afraid of spiders because I assume they're poisonous?" will get a totally
different reaction (assuming I am not afraid of them) than "Would I be afraid of
spiders if I thought they were all poisonous?"
It takes a little concentration to get it right during introspection.
So we'll start with class for example. I ask myself "Do I find class
attractive?" and I can ask myself things like "Imagine a guy with lots of money
asks me out. How do I feel?" and "Imagine a guy who has things in common with me
asks me out, how do I feel?" If you ask enough questions for compare and
contrast, you can get pretty good answers this way.
To make sure I'm not just having random reactions based on how I want to feel, I
come up with real examples from my recent past. In the last year or so, I have
been asked out by or dated a lot of different people with varying amounts of
income. There were a lot of guys who are making 6 figures - this is because I
tend to attract well paid IT guys. I liked some of them but didn't like all of
them. Some of the guys making 6 figures didn't attract me whatsoever. So income
doesn't make me like a guy all by itself.
I can ask "Does having a high income make me like them more?"
The two top attractions of all time, for me, were to an underpaid writer and a
college student.
9[anonymous]11y
Believing oneself to be an exceptional case was a common failure mode among the
subjects of studies summarized in Yvain's article
[http://lesswrong.com/lw/6p6/the_limits_of_introspection/]. When confronted with
the experimental results showing how their behavior was influenced in ways
unknown to them, they would either deny it outright or admit that it is a very
interesting phenomenon that surely affected other people but they happened to be
the lone exception to the rule.
That doesn't really preclude your introspective skills (I actually believe such
skills can be developed to an extent) but it should make you suspicious.
0Epiphany11y
Have you done any reading on cognitive restructuring
[http://en.wikipedia.org/wiki/Cognitive_restructuring] (psychotherapy)? It's
interesting that people on this forum believe this is impossible when a method
exists as a type of psychotherapy. Have you guys refuted cognitive restructuring
or are you just unaware of it?
0[anonymous]11y
I'm aware of cognitive restructuring. Note that I haven't said that
introspection is completely useless or even that the specific type of
introspection you describe is totally impossible, just that you are very
confident about it and there's a common pattern of extreme overconfidence.
2beoShaffer11y
This type of hypothetical questioning is notoriously unreliable; people often
come up with answers that don't reflect their actual reactions. If you read
closely, Yvain's article already gives several examples. It's also one of the
methodologies that my psychology teachers highlighted as sounding good, but
being largely unreliable.
This is better, but between the general unreliability
[http://psychology.wikia.com/wiki/List_of_memory_biases] of memory
[http://psychology.wikia.com/wiki/False_Memory] and the number of other factors
that would need to be controlled for, it's still not that great. Particularly
since you do feel attracted to men who are more dominant as debaters.
3Epiphany11y
It occurs to me that since this debate is about me and my subjective
experiences, there's really no way for either of us to win. Even if we got a
whole bunch of people with different incomes and did an experiment on me to see
which ones I was more attracted to, the result of the experiment would be
subjective and there would be no way for anyone to know I wasn't pretending.
I still think that there are ways to know what's going on inside you with
relatively good certainty. Part of the reason I believe this is because I'm able
to change myself, meaning that I am able to decide to feel a different way and
accomplish that. I don't mean to say I can decide to experience pleasure instead
of pain if I bang my toe, but that I am able to dig around in the belief system
behind my feelings, figure out what ideas are in there, improve the ideas, and
translate that change over to the emotional part of me so that I react to the
new ideas emotionally. If I was wrong about my motivations, this would not work,
so the fact that I can do this supports the idea that I'm able to figure out
what I'm thinking with a pretty high degree of accuracy. I would like to write
an article about how I do this at some point because it's been a really useful
skill for me, and I want to share. But right now I've got a lot on my plate. I
think it's best for us to discontinue this debate about whether or not my
subjective experiences match my perceptions or your expectations, and if you
want to tear apart my writings on how I change myself later, you can.
Your links are bookmarked, so if your purpose was to make sure I was aware of
them, I've got them. Thanks.
0Epiphany11y
Thanks for those links by the way, they are interesting.
-6Epiphany11y
-6Epiphany11y
0J_Taylor11y
Could you elaborate? Do you see all social constructs as being illusory?
0Epiphany11y
Sure, I clarified that here
[http://lesswrong.com/lw/ecf/open_thread_september_115_2012/7bku]
8Vladimir_Nesov11y
It's an inflationary
[http://lesswrong.com/lw/coo/avoid_inflationary_use_of_terms/] use of
"illusory". "Social constructs" describe certain regularities in the real world,
maybe not very useful regularities often presented in a confusing manner, but
something real nonetheless. "Illusory" usually refers to a falsity, so its use
in this case doesn't seem appropriate. Furthermore, being a bad fit, this word
shouldn't be used in explaining/clarifying your actual point, otherwise you risk
its connotations leaking in where they don't follow from your argument.
The demonstrations of power I do enjoy are when they're able to hold up their end of a debate with me (I keep wishing for someone to win against me)
How do you define winning? From my observation of your comments here, you refuse to concede even when your arguments no longer make sense. Maybe they just get tired and pretend to yield, or look for a girl with less ego.
Being wrong and not making sense to somebody aren't the same thing. If you want
to really nail somebody in a debate, you generally have to corner them really well
by highlighting a flaw in a key point or points that destroy the supports for
their belief. If you see the way that Wedrifid undermines my points, those are
some examples of the types of attacks that might corner me into a defeat.
You're right to be concerned that my ego might be too big - I am concerned that
I may become careless, and think that I'm going to win and then fail because I
was overconfident. So far, I haven't had a big problem with that, but if this
goes on long enough, I could start doing that.
Which is why I keep asking for it. I've added a request for honest critiques
into a few of my discussions now, hoping that people will eventually feel
comfortable with debating with me, if they're not now.
As for specifically why somebody might not make sense and yet not be wrong...
well that could range anywhere from a common misunderstanding, to being bad at
explaining your ideas (I admit that when trying to explain a new idea I am
frequently misunderstood - there's a pattern to my problem which is really
difficult to explain and even more difficult to compensate for, so I'm not going
to get into that here). It is also possible that the audience was not ready for
the message, didn't know a concept that was required to understand it or
something, didn't get enough sleep; really, there are so many reasons why stuff
can fail to make sense yet not be wrong.
And then there's the problem of getting the person to realize they've lost. Not
all failures to realize you've lost are due to ego. We all want to protect
ourselves against bad ideas, and nobody knows where the next bad idea is coming
from. You often have to go over a lot of pieces of information with them until
they get it, and sometimes it's hard to get at their true rejection
[http://lesswrong.com/lw/wj/is_that_your_true_rejection/]. Sometimes yo
This approach to debating strikes me as exemplifying everything bad that I learned in high school policy debate. Specifically, it seems to me like debate distilled down to a status competition, with arguments as soldiers and the goal being for your side to win. For status competitions, signaling of intellectual ability, and demonstrating your blue or green allegiance, this works well. What it does not sound like, to me, is someone who is seeking the truth for herself. If you engaged in a debate with someone of lesser rhetorical skill, but who was also correct on an issue where you were incorrect (perhaps not even the main subject of the debate, but a small portion), would you notice? Would you give their argument proper attention, attempt to fix your opponent's arguments, and learn from the result? Or would you simply be happy that you had out-debated them, supported all your soldiers, killed the enemy soldiers, and "won" the debate? Beware the prodigy of refutation.
Adversarial debates are not without their usefulness
[http://wiki.lesswrong.com/wiki/Adversarial_process], such as in legal and
political processes. It's true that they are generally suboptimal as far as
deliberative truth-seeking goes, but sometimes we really do care about refuting
incorrect positions and arguments ("killing soldiers") as clearly as possible.
0Epiphany11y
I agree. I think it's really important to be able to support a point when you
really do have one. That some people were able to win debates - which takes a
lot of skill - was required for humanity to progress. How else would we have
left behind our superstitions? The problem isn't trying to win the opponent over
to the truth, the problem is trying to win the opponent over for other reasons.
If a person was very good at debate, how would you make the distinction?
Especially if everyone else is trying to win for the sake of ego? It's not easy
to tell the difference between a person who wins because they have more of the
truth or are clever in the way they defend it, versus a person who wins
because they're more tenacious than their competitor.
A person who does have the most complete understanding of the truth can be
attacked to the point of tedium with logical fallacies until they get bored and
wander away. A group of people who are all debating for the sake of ego will not
only be likely to insist that the debaters who are best at defending truth are
wrong, but they will project their own motives onto that person and insist that
they, too, are debating for the sake of ego. Add to that the fact that nobody
believes something that they think is wrong, which leads to everybody thinking
that they're right, and it can get to be a pretty big mess.
This gets very confusing.
-6Epiphany11y
9[anonymous]11y
It means that the narratives surrounding pop-distillations of evolutionary
psychological accounts of human sexuality shouldn't be given too much weight
when evaluating actual human beings, mostly.
Same here; I tend to find it actively repellent.
-2Epiphany11y
Hahahaha! Love it.
(: This makes me want to take a survey.
1atorm11y
Your description of being attracted to intellect in men gave me the urge to find
a way to debate you. Since this would probably count as competing for status, do
you think you would find it attractive in person (assuming I actually could keep
up with you)?
EDIT: I'm in a relationship and not seeking another: I'm just curious about your
response to men trying to attract you with intellectual signalling.
Further, the TI description does not need to invoke arbitrary collapse triggers such as consciousness, etc., because it is the absorber rather than the observer which precipitates the collapse of the SV, and this can occur atemporally and nonlocally across any sort of interval between elements of the measuring apparatus.
What is it about "absorbers" (which seems very much like a magical category, morally equivalent to "observers") which make them non-magical and therefore different f... (read more)
I personally love nothing more than a Great Loyalty Oath Crusade.
--------------------------------------------------------------------------------
Linked from Richard Carrier is this
[http://www.michaelnugent.com/2012/07/26/why-atheist-and-skeptic-groups-should-be-inclusive-caring-and-supportive/]
piece:
I really hate it when someone tells me not to do something in a way that really
makes me want to do it. I mean, I never thought of literally telling someone to
self-abuse themselves anally before reading this post.
2drethelin11y
These comments seem terrible
1[anonymous]11y
They are usually better. I'm not sure why he isn't wielding the moderator rod as
harshly as usual, perhaps he is afraid of coming off as partisan?
6razib11y
the explanation is banal. 10 hour days at my "day job" + i sleep 6 hours + and
have a daughter. not much on the margin. i devote way more time to moderation of
comments than a typical blogger as it is, so it shows when i cut back.
i don't see what that has to do with anything. LW people say stupid things all
the time.
addendum: i don't have much experience on this forum, but i am friends with
people associated with the berkeley/bay area LW group. as i said, LW people say
stupid things all the time. but, LW people tend to not take it personally when
you explain that they're being ignorant outside of domain, which is great. so my
last comment wasn't really meant as negatively as it might have seemed. but the
back & forth that i have/had with the LW set does not translate well onto my
blog, where there is usually a domain-knowledge asymmetry (i'm pretty good at
guessing the identity of commenters who know more than me, and usually excuse
those from aggressive moderation, because i wouldn't know what to moderate).
8[anonymous]11y
There is a reason I usually state "the comments are well worth reading" when
linking to your blog posts here. You are clearly doing something right, while
there are of course false positives people can point to
[http://lesswrong.com/lw/ecf/open_thread_september_115_2012/7cvp], the losses
from those are far outweighed by the gains.
LW if anything is remarkably bad at this kind of gardening
[http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/]. We don't downvote
well-meaning but clueless commenters enough, and when we do, one merely has to
complain about being downvoted to inch back into positive karma.
2wedrifid11y
I agree, yet for some reason suspect that your ideal would see an entirely
different subset of comments downvoted to oblivion and suspect I would just
leave if you had your way (and that you would do likewise if I had my way). From
what I have seen I'd also leave in an instant if Razib had that kind of power.
This is the advantage of having the moderation influence distributed (among
multiple moderators or in this case just voting) rather than in the hands of one
individual. Neither one of us has enough power to change the forum such that it
is intolerable to the other. The failure mode only comes when the collective
judgement is abysmal, and even then it is less catastrophic than one ego holding
sway.
2[anonymous]11y
Really? Honestly I think I would find a forum moderated by you well worth
visiting and depending on how much time you put into it, might be much better
than LW.
I think we probably agree on 90% of posts that should be down voted but aren't.
1wedrifid11y
Almost certainly and likewise probably more agreement than between randomly
selected individuals. The problem comes if any part of that 10% happens to
include things that I am strongly averse to but which you consider ok and use. I
wouldn't expect you to hang around if I started banning your comments---I
certainly wouldn't take that kind of treatment from anyone (unless I was getting
paid well).
3[anonymous]11y
I never understood people who get all offended and scream censorship if one or
two of their posts get moderated while the vast majority are let through. If
however you'd feel that a quarter or a third of my comments were objectionable,
I wouldn't bother commenting any more, though I might keep reading.
3wedrifid11y
I wouldn't accept too many more than, say, two or three a year that I
reflectively endorsed even after judgement. But I wouldn't call it censorship.
It's some guy with power exercising it with either (subjectively) poor judgement
or personal opposition to me. It's not something I prefer to accept but I'm not
going to abuse the word 'censorship'.
2razib11y
i wasn't expecting much from that thread. i was more curious about the rationale
of the atheism+ proponents. i got confirmation of what i feared....
0wedrifid11y
That isn't surprising. The reasoning:
... struck me as odd. Before seeing your contradiction and then confirming your
judgement for myself I had been substituting "and yet despite that" for "thus".
Fallout from Razib banning or driving away quality commenters has reached even
here. At a first approximation I expect such moderation to support the ego of
the moderator and drive away any intellectual rivals, not to guarantee that it is
worth reading.
It takes more than 'vigor' to make moderation beneficial.
6[anonymous]11y
Don't multiply your anecdotes, since your source is just gwern
[http://lesswrong.com/lw/dfk/link_why_the_kids_dont_know_no_algebra/6yps]
getting banned for a while.
It is easy to speak like this since it appeals to the anti-authoritarian impulse
of the average LW reader but I invite you to inspect the uninformed drivel one
can read in the comment sections on some other quality blogs dealing with
similar topics.
I would argue based on comparison to other such blogs that the occasional
mistakes are worth it
[http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/] to
maintain a good signal to noise ratio. I am not alone
[http://lesswrong.com/lw/dfk/link_why_the_kids_dont_know_no_algebra/6yw5] in
this assessment.
1wedrifid11y
Excuse me? No it isn't. You are mind reading, and incorrectly. (Discussions at
that time brought attention to others who didn't wish to bother with Razib.)
No. I don't want to compare to a known inferior solution and the endorsement
being evaluated was that they were worth reading, not that elsewhere on the web
is worse. There is a reason I don't tend to hang out in the comments sections of
personal blogs. They aren't an environment that provides incentives for valuable
comment contributions, and neither lax moderation nor vigorous moderation in
defense of self-interest produces particularly impressive outcomes. Actual
'moderate' and vaguely objective moderation is rare. Lesswrong's karma system is
far superior and produces barely tolerable comment threads most of the time.
0[anonymous]11y
The karma system in itself is not what made this site interesting, not by a long
stretch. While some very bad comments did make it through that now don't,
Overcoming Bias before the karma system had interesting discussions as well.
The karma system is a key feature of what made LW what it is, but it isn't
exceptional in this. Just as vital were the features of its demographics, the
topics we chose [http://lesswrong.com/lw/cdn/petition_off_topic_area/6l3u], the
norms and culture that developed. If any of those wash out, LessWrong becomes
nothing but a smaller, suckier reddit.
0wedrifid11y
That would indeed be a strange position for someone to take.
-2[anonymous]11y
I didn't mean to state that was what you were saying, but I was questioning why
you seem so sure moderation is an inferior solution based on conversations on
LessWrong sucking less. I pointed out that seems rather weak evidence since OB
didn't suck much more.
2razib11y
if LW gave me dictatorial powers i would have nuked this sub-thread a long time
ago, and saved a lot of people productive time they could have devoted to more
edifying intellectual pursuits.
also, as a moderate diss, i don't delve deep into LW comments much anymore. but
some of these remind me now of usenet in the 1990s. what i appreciate about the
'rationality' community in berkeley is that these are people who are interested
in being smart, not seeming smart.
1sam034511y
I follow your comments, because you usually have something interesting to say -
and usually something that gets a little close to the borders of what is
permissible on less wrong.
Now, sorry to say, your recent comments have become boring. Has Less Wrong
become even more repressive, or did you just run out of things to say?
5[anonymous]11y
You are right on my recent comments being somewhat boring. In the past I've been
told by people that they tend to read my posts because they are usually
high-quality corrections or fun gadflyish needling.
Maybe my comments are more boring because there are fewer things wrong in
interesting ways? Not that I would imply there are fewer things wrong in
general, unfortunately. I mostly agree with all the recent criticisms I've made, but some of it
was pretty dull to write, I guess that shows. There are some signs that the
political discourse is on a lower level than it was. I unfortunately often end
up talking about politics, as I saw politically motivated stupidity on some
topics. The other explanation is that I've been using the site to procrastinate
more and thus didn't bother to abstain from marginal comments. There is however
no excuse for spending way too much time on useless crappy meta debates as I did
about a week or two ago.
When I think of what posts of value I've made in the past 30 days, in which
I'm apparently among the top contributors, all I can think of that is of real
value are the link posts. Which aren't bad, as I think LessWrong as a
community does not update when exposed to good ideas and material from the
outside. That this is the only kind of recent post I see value in shows I
haven't taken the time or had the inspiration for new original ideas or
synthesis.
Perhaps I need to study more new material, perhaps I need to do more thinking,
perhaps I need a break. On the other hand I do think LW didn't really learn what
I hoped it would from my old comments, so maybe this is more a problem of me
sounding like a broken record because I have to keep repeating the same points,
since this bores me I do it more poorly than before. So perhaps I need a new
venue.
I've been meaning to take another month's leave from the site starting some time
this September, to improve the quality of my writing. I guess this is as good as
any day to start.
7Wei_Dai11y
My suggestion to you is the same as the one I gave to wedrifid: write more posts
relative to comments. Comments are for asking/answering questions, or fixing
mistakes in other people's posts, or debates. If you think you have something to
teach LW, please do it via posts, where you can organize your thoughts, put in
the effort necessary to bridge the inferential gaps, and get the attention you
deserve.
(If the suggestion doesn't make sense to you, I'd be interested to know why. As
I said, I made the same suggestion to wedrifid before, but he didn't respond to
agree or disagree, nor did he subsequently write more posts, which leaves me
wondering why some LWers choose to write so few posts relative to comments.)
0[anonymous]11y
It does make sense to me. I seem to have massive will failure when it comes to
writing actual articles. I've tried to fix this by writing more public
[http://lesswrong.com/lw/ecf/open_thread_september_115_2012/7bl3] and private
drafts, but these generally come out disappointing in my eyes. Also writing
comments requires little motivation while writing articles feels like work.
While the strategy of comment>draft>article has done some good, it isn't good
enough at all. No more commenting until I write up an article that is worth
submitting; if I don't, well, too bad.
I think this should be "I'll pick the opposite of what you'll predict me to
pick", otherwise Omega will Loeb you...
0shminux11y
How? I thought that asking Omega to model himself would throw him inside the
model, which is bad enough. Yours is just blatantly incompatible with Omega
being able to predict your choice, so he probably would not offer you to play.
(Also, unfortunately neither is permitted as a reply, since Omega's
prediction is withheld from the player, according to the setup.)
An interesting analogy. If we were to apply it to uploads, one wonders whether
the Googlers are more or less productive once inside the Google bubble...
We are not the first to have meta discussions. Where are the best ideas on technical and social means to foster productive and reduce unproductive discussion? Are there bloggers that focus on getting the best out of "the bottom half of the Internet"?
1) I'm moving to Vienna on the 25th. If there are any lesswrongers there I'd be most happy to meet them.
2) Moving strikes me as a great opportunity to develop positive, life-enhancing habits. If anyone has any literature or tips on this I'd greatly appreciate it.
Sorry for missing the stupid questions thread, but since the sequences didn't have something direct about WBE, I thought Open thread might be a better place to ask this question.
I want to know how the fidelity of Whole Brain Emulation is expected to be empirically tested, other than by replication of taught behaviour.
After uploading a rat, would someone look at the emulation of its lifetime and say, "I really knew this rat. This is that rat alone and no one else"?
Would only trained behaviour replication be the empirical standard? What would that ... (read more)
You didn't miss the stupid questions thread, you can still post there. It
doesn't really matter how old a thread is.
2NancyLebovitz11y
People with pet rats notice personality differences.
7wedrifid11y
Rats do have personality differences and I would expect people to 'notice'
differences in personality even if they didn't exist.
0Rhwawn11y
Rats even seem to have IQ of sorts
[https://en.wikipedia.org/wiki/Rat_IQ#Intelligence]. Truly, our fuzzy little
friends are often underestimated.
0blogospheroid11y
Thanks for all the replies. Sorry for the delay in response.
Does this mean that in terms of empirically evaluating brain emulations, we will
have to "walk blind" on the path of emulating higher and higher organisms until
we reach a level of complexity, like rats where we can truly state that a
personality is being emulated here and not just a generic instance of an animal?
0Rhwawn11y
Probably. I've seen proposals for testing uploads (or cryonics) by learning
simple reactions or patterns, but while this is good for testing that the brain
is working at all, it's still a very long way from testing preservation of
personal identity.
-2billswift11y
The world (including brains) is strictly deterministic. The only sources of our
mental contents are our genetics and what we are "taught" by our environments
(and the interactions between them). The only significant difference between rat
and human brains for the purpose of uploading should be the greater capacity and
more complex interactions supported by human brains.
Was reading up on the Flynn effect, and saw the claim it's too fast to reflect evolution. Is that really true? Yes, it's too fast, given the pressures, for what Darwin called natural selection, given the lack of anything coming along and dramatically killing off the less intelligent before they can reproduce. But that's not the only force of evolution; there's also sexual selection.
If it's become easier in the last 150 years for women to have surviving children by high-desirability mates, then we should, in fact, see a proportionate increase in the high... (read more)
Although a negative relationship between fertility and education has been described consistently in most countries of the world, less is known about the relationship between intelligence and reproductive outcomes. Also the paths through which intelligence influences reproductive outcomes are uncertain. The present study uses the NLSY79 to analyze the relationship of intelligence measured in 1980 with the number of children reported in 2004, when the respondents were between 39 and 47 years old. Intelligence is negatively related to the number of children, with partial correlations (age controlled) of −.156, −.069, −.235 and −.028 for White females, White males, Black females and Black males, respectively. This effect is related mainly to the g-factor. It is mediated in part by education and income, and to a lesser extent by the more “liberal” gender attitudes of more intelligent people. In the absence of migration and with constant environment, genetic selection would reduce the average IQ of the US population by about .8 points per generation.
I'd assign a low probability to this hypothesis. Most of the Flynn effect seems to occur on the lower end of the IQ spectrum moving upwards. Source. This is highly consistent with education, nutrition and diseases hypotheses, but it is difficult to see how to reconcile this with a sexual selection hypothesis.
Also, I'm not sure that your hypothesis fits with expected forms of infidelity. One commonly expected pattern of infidelity is mating with strong males while trying to get a resource-rich male to think the children are his. If such infidelity is a common pattern, then one shouldn't expect much selection pressure for intelligence; if anything, the opposite.
The fraction of the population which engages in infidelity even in urban environments is not that high. Infidelity rates in both genders are around 5-15%, but only about 3% of offspring have parentage that reflects infidelity (source), so the selection impact can't be that large.
It reconciles quite well, actually.
The greater the genetically-determined status differential between a woman's
husband and a potential lover, the more differential advantage to the
woman's offspring in replacing the husband's genes with those of a
higher-quality male. So the lower the status of the husband, the greater the
incentive to replace his genes with another's.
Assuming for a moment IQ is 100% heritable and IQ is linear in advantage, the
woman with an IQ of 85 and a husband of IQ 85 will see her kids have an IQ of 85
if she's faithful, and 115 with a lover of 145, for a net advantage to her kids
of +30 IQ if she strays. If a woman and her husband are IQ 100, the same lover
will raise the IQ +22.5; her kids get less advantage than Mrs. 85. In the case
of Mr & Mrs. IQ 115, the advantage is only +15. For Mr & Mrs. IQ 130, the
advantage to cheating is only +7.5. For Mr. & Mrs. IQ 145, cheating with a lover
of IQ 145 doesn't benefit her kids at all, while for Mr & Mrs. IQ 160, she wants
to avoid having kids by a lover of IQ 145.
So, it is precisely the women on the low end that have the greater incentive to
cheat "up", which we would expect would result in more cheating, and thus the
low end where IQs would increase the most.
Also, the lower status the woman's husband, the easier it is to find a willing
lover of higher status, and thus the greater the opportunity to replace the
husband's genes with another's. Mrs. 85 can find a lover with IQ 100 more easily
than Mrs. 100 can find a lover with IQ 115, even though the both have the same
incentive to find a lover of +15 IQ points. Mrs. 115 has even more difficulty
finding a lover of IQ 130, and so on.
So, it is precisely the women on the low end that have the greater opportunities
to cheat "up", which we would expect would result in more cheating, and thus the
low end where IQs would increase the most.
Assuming monogamous and assortative marriage, there's a serious limit to how
high resource/high status
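A minimal sketch of the toy arithmetic in the comment above, under its own simplifications (IQ treated as 100% heritable, a child's IQ taken as the midparent average); the function names and the printed table are illustrative only:

```python
def child_iq(mother_iq, father_iq):
    """Midparent IQ under the toy 100%-heritability assumption."""
    return (mother_iq + father_iq) / 2.0

def gain_from_straying(mother_iq, husband_iq, lover_iq):
    """IQ advantage to the child if the lover's genes replace the husband's."""
    return child_iq(mother_iq, lover_iq) - child_iq(mother_iq, husband_iq)

# Reproduces the figures in the comment: +30, +22.5, +15, +7.5, 0, -7.5.
for couple_iq in (85, 100, 115, 130, 145, 160):
    print(couple_iq, gain_from_straying(couple_iq, couple_iq, lover_iq=145))
```

The gain reduces to half the IQ gap between lover and husband, which is why the incentive shrinks as the couple's IQ rises.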
3Metus11y
A test would be to look at whether there is a correlation between cheating and IQ,
and whether this correlation is influenced by sex. Asymmetrical incidence
of STDs with respect to the sexes could also be an indicator.
9shminux11y
How would you test this model?
9Epiphany11y
There are so many other factors, you're probably getting mostly noise there. For
instance: I read somewhere that depending on whether babies drink breast milk or
formula, they may lose 10 points (to formula) - the reason stated was lack of
omega 3. What about lead paint chips? We have banned lead, that should increase
IQ - after an initial decrease when lead paint began to be used. (There'd be a
similar increase / decrease cycle with the invention of formula.) The point of
these two is that as we learn more, we may be preventing a lot of things that
previously caused children brain damage. And then there are other health factors
which we've improved. I read that in the Great Depression, 10% of the population
starved to death. Starvation, for those who survive it, can cause brain damage.
Were there other starvations before this, that had stopped happening? When did
helmets become popular for people riding bicycles and skateboards and such?
There are just too many factors.
Heh, and I read somewhere that here in America, the Flynn effect has stopped.
O.O
-1see11y
1) Sure. I'm not claiming the Flynn effect is genetic; I'm disputing the common
claim that it can't be genetic.
2) Whether the Flynn effect has stopped or not is an area of ongoing dispute;
some studies suggest it merely paused for a while. And if it has ended . . .
that might merely mark that America's reached the new equilibrium point under
urban infidelity conditions.
0Epiphany11y
"I'm disputing the common claim that it can't be genetic."
Oh, sorry.
I have found out the hard way, myself, that it's really best to start with a
single sentence that makes one's point clear in the very beginning. Maybe that
would help your commenters respond appropriately.
8Douglas_Knight11y
When people say that it's "too fast," they are making a quantitative claim. The
Flynn effect is a standard deviation per generation. Under your scenario of no
selection on women, this would require that the bottom half of the bell curve to
have no biological children. 50% cuckoldry, perfectly correlated with IQ? Even
men who think they've been cuckolded don't have that high a rate.
You're asking a question about language use here, yes?
Depends on the context.
If 90% of U.S. voters voted for a particular policy proposal I would comfortably
describe that as a "vast majority", but if only 90% of sulfur atoms remained in
an unstoppered container of sulfur at STP I would describe that as a
"startlingly small percentage".
On a minute's thought, I'd say 2 standard deviations above mean portion-size for
the context under discussion.
So, for instance, you may recall a little flurry of debate a while back over the Republican rhetorical trope of characterizing Social Security as a Ponzi scheme, and the ensuing boomlet of essays and blog posts vehemently insisting that obviously the program is or is not an instance of one. A more
It is possible there simply isn't any such experimental material. If I had to bet on it I would say it is more likely there is some than not, though I would also bet that some things we wish were done haven't been so far. In the past I've wondered if we can in the future expect CFAR or LessWrong to do experimental work to test many of the hypotheses based on insight or long fragile chains of reasoning we've come up with. I don't think I've seen anyone talk about considering this.
While mentioning, say, CFAR doing this, the mind jumps to them doing expensiv... (read more)
[This comment is no longer endorsed by its author]Reply
I just talked to someone and she praised her doctor: she complained of chest (armpit) pain, and the doctor, untraditionally, treated her with acupuncture on the spot. I asked her and she said the pain was going on for a few weeks (and was quite intense), and it disappeared on the next day. Some bias IS expected of her (more so than from the average person).
Maybe it's just random chance plus unconscious exaggeration, but I doubt it could have been so strong. After I started writing this, I looked up on W... (read more)
Do you need a different explanation? The super-surprising effectiveness of
placebo feels a bit offensive to us truth-seekers; but the universe and our
brain-architecture isn't required to play fair with us, alas. On certain
occasions, the deluded and deceived may have an advantage.
? Am a bit confused because when I read the Wikipedia article, it says that
accupuncture (both "real" and "sham") was seen to be effective in combatting
pain. So where did you read that it was ineffective?
0Blackened11y
Oh damn I missed that. I got too distracted by the Effectiveness research
section. So there you go, I found a reasonable explanation, although I was more
looking forward to some sort of fundamental bias that affects everyone, which I
must have somehow missed. Would have been a good explanation to some things.
Still, I'm waiting for someone to appear with a very good hypothesis of the
cancer case. I'm not saying there has to necessarily be one, but there might be.
Placebo was in fact a very good hypothesis, but I'm not sure if you can cure
cancer with placebo ("Yes, you can" would close the case).
Edit: I looked it up, apparently placebo doesn't affect cancer. Surprising.
Does profit-maximizing software eat the world and go Darwinian?
I don't think that is a good description of what happened.
Konkvistador But that is a rather huge topic... it seems to me
Konkvistador that the arbitrary thing they optimize for may turn out to be something that makes them eat up a lot of reality
Konkvistador also the humans present a sort of starting anchor, what do humans want? They want information processing, they want energy, they want food, they want metal, finished products
Konkvistador What do companies try... (read more)
[This comment is no longer endorsed by its author]Reply
Could one train an animal* to operate a Turing machine via reinforcement mechanisms? Would there be any use for such a thing? (Other than being able to say you have an organic computer...).
*Obviously you can train humans, and I guess likely great apes as well. But what would be the lower bound on intelligence? A rat? An insect?
What exactly do you mean by "operate a Turing machine"?
If you have a simple enough machine and a translation of the symbols on the tape
into stimuli for the animal, it seems easy (in principle) to use classical and
operant conditioning to get a rat to push the appropriate buttons to change the
machine's state.
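To make the "simple enough machine" concrete, here is a minimal sketch assuming a two-state machine; the table, state names, and move encoding are hypothetical illustrations, not anything from the comments above. Each table row is exactly the kind of (stimulus, response) association operant conditioning could in principle install, with the tape symbol as the stimulus and the write/move as the trained button presses.

```python
# Two-state busy-beaver-style machine: (state, symbol) -> (write, move, next state).
TABLE = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(tape=None, steps=50):
    tape = dict(tape or {})
    head, state = 0, "A"
    for _ in range(steps):
        if state == "HALT":
            break
        symbol = tape.get(head, 0)              # the "stimulus" shown to the animal
        write, move, state = TABLE[(state, symbol)]
        tape[head] = write                      # trained response: write a symbol
        head += move                            # trained response: move the head
    return tape

print(sorted(run().items()))  # this machine halts leaving four 1s on the tape
```

The point of the sketch is that the whole "program" is four associations; the hard part is the discrimination training, not the amount of behaviour to be learned.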
This isn't from The Onion-- " 'real' or from The Onion" is macro uncertainty-- it seems that, by being clever, it's possible to do somewhat better measurement of subatomic particles than was expected. Does the article look sound? If so, what are some implications?
The title of that article is extremely misleading. The uncertainty principle, as
understood in contemporary physics, is a consequence of the (extremely
well-confirmed) laws of quantum mechanics. Momentum-space wavefunctions in
quantum mechanics are Fourier transforms
[http://en.wikipedia.org/wiki/Fourier_transform] of position-space
wavefunctions. As a consequence, the more you concentrate a wavefunction in
position space, the more it spreads out in momentum space, and vice versa. More
generally, there will be an "uncertainty principle" associated with any two
non-commuting observables (two operators A and B are non-commuting if AB - BA is
not 0). Any experiment challenging this version of the uncertainty principle
would be contradicting the basic math of quantum mechanics, and the correct
response would be to defy the data
[http://lesswrong.com/lw/ig/i_defy_the_data/].
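For reference, the "uncertainty principle associated with any two non-commuting observables" described above is the Robertson relation; in standard notation, with the position-momentum case as the familiar special instance:

```latex
\sigma_A \,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [\hat{A},\hat{B}] \rangle\bigr|,
\qquad\text{and since } [\hat{x},\hat{p}] = i\hbar:\qquad
\sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2}.
```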
But this experiment does not challenge the uncertainty principle, it challenges
Heisenberg's original interpretation of the uncertainty principle. Rather than
seeing the principle as a simple consequence of the mathematical relationship
between position and momentum, Heisenberg concocted a physical explanation for
the principle that appealed to classical intuitions. According to his
interpretation, the uncertainty principle is a consequence of the fact that any
attempt to measure the position of a particle (by say, bouncing photons off it)
disturbs the particle, which leads to a change in its momentum. The correct
mathematical explanation of the uncertainty principle, given above, does not
make any reference to measurement or disturbance, you'll notice.
Anyway, this experiment only challenges Heisenberg's version of the uncertainty
principle, not the actual uncertainty principle. Far from contradicting the math
of quantum mechanics, the falsity of Heisenberg's interpretation is actually
predicted by that math, as shown by Ozawa
[http://pra.aps.org/abstract/PRA/v67/i4/e042105]. The abstract of the p
0NancyLebovitz11y
Thanks.
I'm wondering (assuming that the work pans out) whether there would be
technological implications even though the foundations of physics aren't shaken
at all.
I believe the correct term is "straw individual"; the post by Yvain is well worth reading.
I'm thinking about a fantasy setting that I expect to set stories in in the future, and I have a cryptography problem.
Specifically, there are no computers in this setting (ruling out things like supercomplicated RSA). And all the adults share bodies (generally, one body has two people in it). One's asleep (insensate, not forming memories about what's going on, and not in any sort of control over the body) and one's awake (in control, forming memories, experiencing what's going on) at any given time. There is not necessarily any visible sign when one party falls asleep and the other wakes, although there are fakeable correlates (basically, acting like you just appeared wherever you are). It does not follow a rigid schedule, although there is an approximate maximum period of time someone can stay awake for, and there are (also fakeable) symptoms of tiredness. Persons who share bodies still have distinct legal and social existences, so if one commits a crime, the other is entitled to walk free while awake as long as they come back before sleeping - but how do they prove it?
There are likely to be three levels of security, with one being "asking", the second being a sort ... (read more)
All personalities are given a pair of esoteric stimuli. Through reinforcement/punishment, one personality is conditioned to have a positive physiological reaction to Stimulus A and a negative physiological reaction stimulus B. The other personality is given the converse.
The stimuli are all drawn from a common pool of images like "bear", "hat" or "bicycle", so one half of a stimulus pair may be "a bear in a hat on a bicycle". There's a canonical set of stimuli, like a huge deck of cards, with all possible combinations, all of which are numbered. The numbers for my stimulus pair are tattooed on my body in some obscure location, like the sole of my foot.
If I need to prove my identity, I show my tattoo to the authority figure. It will read something like "1184/0346". They pick out either image 1184 (bear in a hat on a bicycle) or image 0346 (a sword in a hill being struck by lightning), and show it to me. My immediate response will be either arousal or disgust, and they will know which personality I am.
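A toy sketch of the challenge-response protocol proposed above; the registry, reaction labels, and function names are hypothetical illustrations, and in the setting itself this would of course be carried out with cards and a tattooed index rather than a computer:

```python
import random

# Canonical deck: stimulus number -> description of the image.
DECK = {1184: "a bear in a hat on a bicycle",
        346:  "a sword on a hill being struck by lightning"}

# Conditioning registry: tattoo -> which personality was trained to react
# positively (arousal) to which stimulus number.
REGISTRY = {"1184/0346": {1184: "personality_one", 346: "personality_two"}}

def verify(tattoo, observed_reactions):
    """Show one stimulus from the tattooed pair and infer who is awake.

    `observed_reactions` maps each stimulus number to "arousal" or "disgust",
    as judged by the examining authority.
    """
    positive_for = REGISTRY[tattoo]
    shown = random.choice(list(positive_for))        # authority picks one image
    if observed_reactions[shown] == "arousal":
        return positive_for[shown]                   # the conditioned-positive one
    return next(p for s, p in positive_for.items() if s != shown)

# Example: the examiner records arousal to 1184 and disgust to 0346.
print(verify("1184/0346", {1184: "arousal", 346: "disgust"}))
```

Either stimulus from the pair suffices, which is what makes the scheme hard to fake without knowing which personality was conditioned to which image.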
I just ran across this in Wikipedia:
"Our "real will" (in Bosanquet's terms) or "rational will" (in Blanshard's) is simply that which we would want, all things considered, if our reflections upon what we presently desire were pursued to their ideal limit."
This is remarkably similar to the informal descriptions of CEV and moral "renormalization" that exist. Someone should look into the literature on Bosanquet and Blanshard's rational will, and see if there's anything else of use.
The waning of the nuclear family by Razib Khan
... (read more)
I own a personal server running Debian Squeeze which has a 1Gb/s symmetric connection and 15TB per month bandwidth.
I am offering free shell accounts to lesswrongers, with one contingency:
1) You'll be placed in a usergroup, 'lw', as opposed to the various other usergroups for the other communities I belong to. Anything that ends up in /var/log is fair game. I intend to make lots of graphs and post them on all the communities I belong to. There won't be any personally identifying data in anything that ends up being made public.
Your shell account will start out with a disk quota of 5g, and if you need more you can ask me. I'm totally cool with you seeding your torrents. I do not intend to terminate accounts at any point for inactivity or otherwise; you can reasonably expect to have access for at least a year, probably longer.
Query me on freenode's irc (JohnWittle), or send me an email. johnwittle@gmail.com.
Also, while the results of my analysis are likely to go in Discussion, I was wondering if this offering of free service itself might go in discussion. I asked in IRC and was told that advertisements are seriously frowned upon and that I would lose all my karma.
Related to: List of public drafts on LessWrong
An online course in rationality?
A month or two ago I made a case on the #lesswrong channel on IRC that a massive online class, or several, created in partnership with an organization like Khan Academy or Udacity, would be a worthy project for CFAR and LW. I specifically mention those two organizations because they are more open to non-academic instructors than, say, Coursera or EdX, and seem more willing to innovate rather than just dump classical university-style lectures online.
The reason I consider it a worthy project is that, besides exposing far more people to the material and ideas we want to spread, it would allow us to make progress on the difficult problems of teaching and testing "rationality", with the magic of Big Data and even something as basic as A/B testing to help us.
I considered making an article on it but several people advised me that this would prove a distraction for CFAR, more trouble than is worth at this early stage. I have set up a one year reminder to make such a proposal next summer and plan to do some research on the subject in the meanwhile to see if it really is as good an opportunity as I think it... (read more)
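A minimal sketch of the kind of A/B test mentioned above, assuming a hypothetical setup of two lesson variants and a binary outcome (say, whether a student passes a transfer test); the counts are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided p-value for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # twice the normal tail
    return z, p_value

# Variant A: 180 of 400 students pass; Variant B: 220 of 400 pass.
z, p = two_proportion_z(180, 400, 220, 400)
print(round(z, 2), round(p, 4))  # roughly z = -2.83, p = 0.0047
```

With thousands of students per variant, even small differences in how a rationality technique is taught would become detectable, which is the main draw of the platform route.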
Obama has been reading Kahneman's Thinking, Fast and Slow.
It has become increasingly clear over the last year or so that planets can in fact form around highly metal poor stars. Example planet. This both increases the total number of planets to expect and increase the chance that planets formed around the very oldest stars. (Younger stars have higher metal content). One argument against Great Filter concerns is that it might be that life cannot arise much younger than it did on Earth because stars much older than our sun would not have high metal content. This seems to seriously undermine this argument.
How much should this do to our estimates for whether to expect heavy Filtration in front of us? My immediate reaction is that it does make future filtration more likely but not by much since even if planets could form, a lack of carbon and other heavier elements would still make formation of life and its evolution into complicated creatures difficult. Is this analysis accurate?
I have a Great Filter related thought which doesn't address your question directly but, hey, it's the Open Thread.
My thesis here is that the presence of abundant fossil energy on earth is the primary thing that has enabled our technological civilization, and abundant fossil energy may be far less common than intelligent life.
On top of all the other qualities of Earth which allowed it to host its profusion of life, I'll point out a few more facts related specifically to fossil energy, which I haven't seen in any discussions of Fermi's Paradox or the Great Filter.
Life on Earth happens to be carbon-based, and carbon-based life, when heated in an anoxic environment, turns into oil, gas and coal.
Earth is roughly 2/3 covered in oceans (this figure has varied over geologic time), a fact with significant consequences to deposition of dead algae, erosion, and sedimentation.
Earth possesses a mass, size, and age such that the temperature a few kilometers below the surface may be hundreds of degrees C, while the surface temperature remains "Goldilocks."
Earth has a conveniently oxidizing atmosphere in which hydrocarbons burn easily, but not so oxidizing that it prevents stable
The oxidizing atmosphere is not due to chance. It was created by early life that exhaled oxygen, and killed off its neighbors that couldn't handle it. Hence, I don't think the goldilocks oxygen levels speak much to great filter questions.
Early in civilization, we used wood and charcoal as energy sources. Blacksmithing and cast iron were originally done with wood charcoal. Cast iron is a very important tool in our history of machine tools and hence the industrial revolution. It's possible that we could have carried on without coal, instead using large-scale forestry management or other biomass as our energy source. In the early 1700s there were already environmental concerns about deforestation. They were more related to continued supply of wood for charcoal and hunting grounds than "ecological" concerns, but there were still laws and regulations enacted to deal with the problem.
How many people do we need to support a high-tech civilization? I suspect fewer than the number we actually did it with. It's quite possible that biofuel sources would have produced a high-tech civilization, just slower and with fewer people.
Also, note that biofuels can produce all the lubricants and plastics you ne... (read more)
This discussion thread is insane.
Essentially, Eliezer gets negative karma for some of his comments (-13, -4, -12, -7) explaining why he thinks the new changes to the karma rules are a good thing. For comparison, even the obvious trolls usually don't get -13 comment karma.
What exactly is the problem? I don't think that for a regular commenter, having to pay 5 karma points for replying to a negatively voted comment is such a problem. Because you will do it only once in a while, right? Most of your comments will still be reactions to articles or to non-negatively voted comments, right? So what exactly is this problem, and why this overreaction? Certainly, there are situations where replying to a negatively voted comment is the right thing to do. But are they the exception, or the rule? Because the new algorithm does not prevent you from doing this; it only provides a trivial disincentive to do so.
What is happening here?
A few months ago LW needed an article to defend that some people here really have read the Sequences, and that recommending Sequences to someone is not an offense. What? How can this happen on a website which originally more or less was the Sequences? That seemed absurd to me, ... (read more)
There's also plenty of Bayesian evidence he's not that great at moderation. SL4 was enough of an eventual failure to prompt the creation of OB; OB prompted the creation of LW; he failed to predict that opening up posting would lead to floods of posts like it did for LW; he signally failed to understand that his reaction to Roko's basilisk was pretty much the worst possible reaction he could engage in, such that even now it's still coming up in print publications about LWers; and this recent karma stuff isn't looking much better.
I am reminded strongly of Jimbo Wales. He too helped create a successful community but seemed to do so accidentally as he later supported initiatives that directly undermined what made that community function.
My thoughts on the recent excitement about "trolls", and moderation, and the new karma penalty for engaging with significantly downvoted comments:
First, the words troll and trolling are being used very indiscriminately to refer to a wide variety of behaviors and intentions. If LW really needed to have a long-term discussion about how to deal with the "troll problem", it would be advisable to develop a much more precise vocabulary, and also a more objective, verifiable assessment of how much "trolling" and "troll-feeding" was happening, e.g. a list of examples.
However, it seems that people are already moving on. For future reference, here are all the articles in Discussion which arose directly from the appearance of the new penalty and the ensuing debate: "Karma for last 30 days?", "Dealing with trolling", "Dealing with meta-disussion", "Karma vote checklist?", "Preventing endless September", "Protection against cultural collapse", and hopefully that's the end of it.
So it seems we won't need some specialized troll-ologists to work out all the issues. Rather than a "war on tr... (read more)
Upvoted for this. I can't believe how many people don't get it.
I am very confused right now.
A few years ago, I learned that multivitamins are ineffective, according to research. At that point, I had heard of the benefits of many of them; they were individually praised the way one would praise anything good enough to take by itself, so I was thinking that multivitamins should be something ultra-effective that only irrational people wouldn't take. When I learned they were ineffective, I hypothesized that vitamins in pills simply don't get processed well.
Recently, I was reading a few articles about Vitamin D - I thought I should definitely have it, because the sources were rather scientific and were praising it a lot. I got it in the form of softgels, because gwern suggested it. When they arrived, I saw it's very similar to pills, so I thought it might be ineffective and decided to take another look at Wikipedia/Multivitamins. Then I got very confused.
Apparently, the multivitamins DO get processed! And yes, they ARE found to have no significant effect (even in double-blind placebo trials). But at the same time, we have pages saying that 50-60% of people are deficient in Vitamin D and that it seriously reduces the risk of cancer, among other things (including heart disease). Can anyone explain what's going on?
There was much skepticism about my lottery story in the last open thread. Readers should be aware, I sent photographic proof to Mitch Porter by e-mail.
As promised, I made substantial donations to the following two causes:
Brain Preservation Fund
Kim Suozzi Fund
Please confirm my name on the list of donors: Brain Preservation General Fund
I'm shortly going to be flying out to the EU to work on life extension causes, see my blog for information: 27 European Union nations in 27 weeks
Challenge: Steel man Time Cube.
I read the following by Kate Evens on Twitter:
And I became curious. What could LW come up with?
According to Wikipedia:
By rejecting many small spheres in favor of one large cube, Gene Ray has dedicated his life to demonstrating that reversed stupidity is not intelligence.
Precision First by L. Kimberly Epting on Inside Higher Ed was an interesting read for me.
... (read more)
Stanislas Dehaene's and Laurent Cohen's (2007) Cultural Recycling of Cortical Maps has an interesting argument about how the ability to read might have developed by taking over visual circuits specialized for biologically more relevant tasks, and how this may constrain different writing systems:
... (read more)
List of public drafts on LessWrong
I've found the practice of providing open drafts of possible future articles in the open threads and relevant comment sections has proven quite useful and well received in the past. I've decided to now make and maintain a list of them. If anyone else has made similar posts, please share them with me, and I'll add them to the list.
Konkvistador
Related to: Old material
I've decided I should educate myself about LW-specific decision theories. I've downloaded Eliezer's paper on timeless decision theory and I'm reading through it. I'm wondering if there are similar consolidated presentations of updateless and ambient decision theory. Has anyone attempted to write these theories up for academic publication? Or is the best place to learn about them still the blog posts linked on the wiki?
Greater gender equality means that women are less apt to look for status in mates. Hey, it's just one study, but when does that stop anybody else?
I'm pretty sure greater gender equality in a society translates into women who are less likely to say they look for status in mates. To a certain extent it seems plausible that it influences behaviour, I'm very sceptical of the implied argument that "high status in men" ceases to be a key sexy trait if you just have the right culture though.
Did they put "is well liked by other women" or "someone who my friends consider cool" on that list?
People may be amused by this Bitcoin extortion attempt; needless to say, I declined. (This comment represents part of my public commitment to not pay.)
School isn't about learning, SMBC edition.
Short story about the Turing Test, entertaining read.
Consider two versions of that story, with one having the line "At that point, finally, he let me out of the tank." appended.
Ten minute video about human evolution and digestion which argues plausibly that we're very well-evolved to eat starch-- specifically tubers and seeds, though we also have remarkable flexibility in what we eat.
I thought coyotes have at least as wide a range of foods as we do, though.
transhumanist cartoon
Marginal Revolution University
Yet another Online University this one launched on Marginal Revolution. 2012 has been a remarkable ride for Online Education and in many respects is a start of a test to see which theory of what formal education is actually for is correct. Will software and the internet disrupt education like it did the record business?
Amusing commentary by gwern:
Is there anything solid known about eye position (front vs. side of skull) and other aspects of an organism's life? It seems to me that front of the skull correlates with being a hunter, but (as is usual with biology) there may well be exceptions.
For example, lemurs aren't especially hunters, but they have eyes in front.
I was thinking that cats are both hunters and prey, and they have eyes in front.
Also, what about the evolution of eye position? How much of a lag is there if living conditions change?
I've just started playing with Foldit, a game that lets science harness your brain for protein folding problems. It has already been used to decode an HIV protein and find a better enzyme for catalyzing industrial processes. Currently, work is under way to design treatments for Sepsis.
A 3 minute talk on the Financial Consequences of Too Many Men. It seems the perceived sex ratio strongly influences male behaviours.
Research on this in the context of online forums such as ours might be very interesting.
A related blog entry by Peter Frost titled Our brideprice culture deals with societal implications of gender imbalance. It begins with hig... (read more)
Generally, when someone says that majority of A do X, but you are A and don't do X, here are some possible explanations:
Also from the outside, if someone else is saying this, don't forget:
Specifically for this topic, think also about the difference between maximizers and satisficers. If you read that "females value X", you may automatically translate it as "females are X-maximizers", and then observe that you are not. But even then you could still ... (read more)
That is class signalling (of a particular class) and winning debates is competing for status.
You have your own sexual preferences and the traits that you are not attracted to appear less intrinsically worthy. Another woman may say she isn't attracted to "Fluff" like intellectual displays and rhetorical flair and instead is only attracted to the 'things that really matter' like social alliances, security and physical health.
This seems tautologically likely.
How do you define winning? From my observation of your comments here, you refuse to concede even when your arguments no longer make sense. Maybe they just get tired and pretend to yield, or look for a girl with less ego.
This approach to debating strikes me as exemplifying everything bad that I learned in high school policy debate. Specifically, it seems to me like debate distilled down to a status competition, with arguments as soldiers and the goal being for your side to win. For status competitions, signaling of intellectual ability, and demonstrating your blue or green allegiance, this works well. What it does not sound like, to me, is someone who is seeking the truth for herself. If you engaged in a debate with someone of lesser rhetorical skill, but who was also correct on an issue where you were incorrect (perhaps not even the main subject of the debate, but a small portion), would you notice? Would you give their argument proper attention, attempt to fix your opponent's arguments, and learn from the result? Or would you simply be happy that you had out-debated them, supported all your soldiers, killed the enemy soldiers, and "won" the debate? Beware the prodigy of refutation.
In the Transactional Interpretation, Cramer claims:
What is it about "absorbers" (which seems very much like a magical category, morally equivalent to "observers") which make them non-magical and therefore different f... (read more)
An interesting blog post by Razib Khan on "Atheism+".
Nuke'm solution to the Newcomb problem: tell Omega that you pick what he'd have picked for himself, were he in your situation. That'll Godel him.
(semi-OT but strikes me of interest) "You know the science-fiction concept of having your brain uploaded to a computer and then you live in a simulation of the real world? Going to work for Google is a bit like this." Openness in the wider culture outside open source.
We are not the first to have meta discussions. Where are the best ideas on technical and social means to foster productive and reduce unproductive discussion? Are there bloggers that focus on getting the best out of "the bottom half of the Internet"?
Maker's Schedule and Manager's Schedule by Paul Graham
Anybody know what happened to user RSS feeds? It used to be you could get them with "lesswrong.com/user/username.rss", but that now says no such page.
2 separate related comments:
1) I'm moving to Vienna on the 25th. If there are any lesswrongers there I'd be most happy to meet them.
2) Moving strikes me as a great opportunity to develop positive, life-enhancing habits. If anyone has any literature or tips on this I'd greatly appreciate it.
Game AI vs. Traditional (academic) AI
Peter Watts considers the wisdom of The Conspiracy.
A short draft for an article where I criticize Yvain's Worst Argument in the World
Sorry for missing the stupid questions thread, but since the sequences didn't have anything directly about WBE, I thought the open thread might be a better place to ask this question.
I want to know how the fidelity of Whole Brain Emulation is expected to be empirically tested, other than by replication of taught behaviour?
After uploading a rat, would someone look at the emulation of its lifetime and say, "I really knew this rat. This is that rat alone and no one else"?
Would only trained behaviour replication be the empirical standard? What would that ... (read more)
Was reading up on the Flynn effect, and saw the claim that it's too fast to reflect evolution. Is that really true? Yes, it's too fast for what Darwin called natural selection, given that nothing has come along and dramatically killed off the less intelligent before they can reproduce. But that's not the only force of evolution; there's also sexual selection.
If it's become easier in the last 150 years for women to have surviving children by high-desirability mates, then we should, in fact, see a proportionate increase in the high... (read more)
But there's also an opposing evolutionary pressure: educated women have fewer children.
The Reproduction of Intelligence attempts to quantify this effect:
I'd assign a low probability to this hypothesis. Most of the Flynn effect seems to come from the lower end of the IQ distribution moving upwards. Source. This is highly consistent with the education, nutrition and disease hypotheses, but it is difficult to see how to reconcile it with a sexual selection hypothesis.
Also, I'm not sure that your hypothesis fits with expected forms of infidelity. One commonly expected pattern would be infidelity with strong males while trying to get resource-rich males to think the children are theirs. If such infidelity is a common pattern, then one shouldn't expect much selection pressure for intelligence; if anything, the opposite.
The fraction of the population which engages in infidelity, even in urban environments, is not that high. Infidelity rates in both genders are around 5-15%, but only about 3% of offspring have parentage that reflects infidelity. Source. So the selection impact can't be that large.
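To put a rough number on that intuition, a back-of-envelope sketch (only the 3% figure comes from the comment above; the heritability value is a commonly cited ballpark and the assumed paternal IQ advantage is invented, so treat the output as an order-of-magnitude check, not an estimate):

```python
# Back-of-envelope: how much could 3% extra-pair offspring shift mean IQ per
# generation? Only the 3% figure is from the comment; the rest are assumptions.

heritability = 0.5        # commonly cited ballpark for heritability of IQ
affected_fraction = 0.03  # fraction of offspring from extra-pair paternity (comment above)
sire_advantage = 10.0     # ASSUMED IQ-point advantage of the extra-pair fathers

# Rough approximation: an affected child inherits half its genes from the
# higher-IQ father, and only the heritable part of his advantage transmits;
# the population mean then shifts by that amount times the affected fraction.
shift_per_generation = affected_fraction * heritability * sire_advantage / 2
print(shift_per_generation)  # ~0.075 IQ points per generation under these assumptions
```

Even with a fairly generous assumed advantage this comes out well under a tenth of an IQ point per generation, against a Flynn effect of roughly three points per decade, which is the sense in which the selection impact can't be large.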
One thing worth noting though is that one of the pieces of evidence for disease mattering is that there's a correlation between high parasite load and lower average IQ, but your hypothesis would also cause one to expect such a correlat... (read more)
Poll: What is the smallest portion that can be considered a "vast majority" of the whole? What about a "vast, vast majority"?
Two great posts from Julian Sanchez: Intellectual Strategies: Precisification and Elimination, and its follow-up On Partly Verbal Disputes. Related to our conception of "Rationalist Taboo", and to Yvain's Worst Argument in the World post.
Sample quote:
... (read more)
Trying to measure the shadow economy
People thinking hard about measuring something they have no exact way of checking.
It is possible there simply isn't any such experimental material. If I had to bet on it I would say it is more likely there is some than not, though I would also bet that some things we wish had been done haven't been so far. In the past I've wondered whether we can expect CFAR or LessWrong to eventually do experimental work to test many of the hypotheses, based on insight or long fragile chains of reasoning, that we've come up with. I don't think I've seen anyone talk about considering this.
At the mention of, say, CFAR doing this, the mind jumps to them doing expensiv... (read more)
Any thoughts, information, or research about selective effects of arranged marriages?
Users love simple and familiar designs – Why websites need to make a great first impression
I need help in explaining this case to myself.
I just talked to someone and she praised her doctor, because she complained of chest (armpit) pain, and the doctor, untraditionally, cured her with acupuncture on the spot. I asked her and she said the pain had been going on for a few weeks (and was quite intense), and it disappeared the next day. Some bias IS expected of her (more so than from the average person).
Maybe it's just random chance plus unconscious exaggeration, but I doubt it could have been so strong. After I started writing this, I looked up on W... (read more)
Does profit-maximizing software eat the world and go Darwinian?
I don't think that is a good description of what happened.
Konkvistador: But that is a rather huge topic... it seems to me
Konkvistador: that the arbitrary thing they optimize for may turn out to be something that makes them eat up a lot of reality
Konkvistador: also the humans present a sort of starting anchor, what do humans want? They want information processing, they want energy, they want food, they want metal, finished products
Konkvistador: What do companies try... (read more)
Could one train an animal* to operate a Turing machine via reinforcement mechanisms? Would there be any use for such a thing? (Other than being able to say you have an organic computer...).
*Obviously you can train humans, and I guess likely great apes as well. But what would be the lower bound on intelligence? A rat? An insect?
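To make concrete what "operating" a Turing machine would actually demand of the animal, a minimal sketch (the two-state bit-flipping machine below is made up for the example): at every step the operator only has to look up the pair (current state, symbol under the head) in a small table, then write, move one square, and switch state, so in principle it reduces to a fixed set of stimulus-response rules.

```python
# Minimal Turing machine simulator. The transition table is a made-up example:
# a two-state machine that flips bits and halts at the end of the input.
from collections import defaultdict

def run(transitions, tape, state="start", steps=1000):
    tape = defaultdict(lambda: "_", enumerate(tape))  # blank symbol is "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape[head]
        # The whole "operation" is this one lookup plus three actions:
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flipper, "0110"))  # -> "1001_"
```

Whether a rat could be conditioned to execute even a three-entry table like this reliably over many steps seems to be the real question.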
Physicists cast doubt on renowned uncertainty principle.
This isn't from The Onion -- "'real' or from The Onion" is macro uncertainty -- it seems that, by being clever, it's possible to measure subatomic particles somewhat better than was expected. Does the article look sound? If so, what are some implications?