I heard an interview on NPR with a surgeon who asked other surgeons to use checklists in their operating rooms. Most didn't want to. He convinced some to try them out anyway.
(If you're like me, at this point you need time to get over your shock that surgeons don't use checklists. I mean, it's not like they're doing something serious, like flying a plane or extracting a protein, right?)
After trying them out, 80% said they would like to continue to use checklists. 20% said they still didn't want to use checklists.
So he asked them: if they had surgery, would they want their surgeon to use a checklist? 94% said they would.
Recent observations on the art of writing fiction:
My main characters in failed/incomplete/unsatisfactory stories are surprisingly reactive, that is, driven by events around them rather than by their own impulses. I think this may be related to the fundamental attribution error: we see ourselves as reacting naturally to the environment, but others as driven by innate impulses. Unfortunately this doesn't work for storytelling at all! It means my viewpoint character ends up as a ping-pong ball in a world of strong, driven other characters. (If you don't see this error in my published fiction, it's because I don't publish unsuccessful stories.)
Closely related to the above is another recent observation: My main character has to be sympathetic, in the sense of having motivations that I can respect enough to write them properly. Even if they're mistaken, I have to be able to respect the reasons for their mistakes. Otherwise my viewpoint automatically shifts to the characters around them, and once again the non-protagonist ends up stronger than the protagonist.
Just as it's necessary to learn to make things worse for your characters, rather than following the natural impulse to
5CronoDAS11yThat's not uncommon. Villains act, heroes react.
[http://tvtropes.org/pmwiki/pmwiki.php/Main/VillainsActHeroesReact]
It's already called The Law of Bruce
[http://tvtropes.org/pmwiki/pmwiki.php/Main/TheLawOfBruce], but it's stated a
little differently.
5wedrifid11yI noticed where I was while on the first page this time. Begone with you!
"Former Christian Apologizes For Being Such A Huge Shit Head All Those Years" sounds like an Onion article, but it isn't. What's impressive is not only the fact that she wrote up this apology publicly, but that she seems to have done it within a few weeks of becoming an atheist after a lifetime of Christianity, and in front of an audience that has since sent her so much hate mail she's stopped reading anything in her inbox that's not clearly marked as being on another topic.
5Eliezer Yudkowsky11yThis woman is a model unto the entire human species.
1Unknowns11yIt isn't that impressive to me. As far as I can see, what it shows is that she
has been torturing herself for a long time, probably many years, over her issues
with Christianity. She's just expressing her anger with the suffering it caused
her.
1RobinZ11yThank you for posting that. It's an inspiration.
0Paul Crowley11yI wish it were possible to mail her and tell her she doesn't have to apologise!
Inspired by reading this blog for quite some time, I started reading E.T. Jaynes' Probability Theory. I've read most of the book by now, and I have incredibly mixed feelings about it.
On one hand, the development of probability calculus starting from the needs of plausible inference seems very appealing as far as the needs of statistics, applied science and inferential reasoning in general are concerned. The Bayesian viewpoint of (applied) probability is developed with such elegance and clarity that alternative interpretations can hardly be considered appealing next to it.
On the other hand, the book is very painful reading for the pure mathematician. The repeated pontification about how wrong mathematicians are for desiring rigor and generality is strange, distracting and useless. What could possibly be wrong about the desire to make the steps and assumptions of deductive reasoning as clear and explicit as possible? Contrary to what Jaynes says or at least very strongly implies (in Appendix B and elsewhere), clarity and explicitness of mathematical arguments are not opposites or mutually contradictory; in my experience, they are complementary.
Even worse, Jaynes makes several strong ...
3komponisto11yAmen. Amen-issimo.
The solution, of course, is for the Bayesian view to become widespread enough
that it doesn't end up identified particularly with Jaynes. The parts of Jaynes
that are correct -- the important parts -- should be said by many other people
in many other places, so that Jaynes can eventually be regarded as a brilliant
eccentric who just by historical accident happened to be among the first to say
these things.
There's no reason that David Hilbert shouldn't have been a Bayesian. None.
After pondering the adefinitemaybe case for a bit, I can't shake the feeling that we really screwed this one up in a systematic way, that Less Wrong's structure might be turning potential contributors off (or turning them into trolls). I have a few ideas for fixes, and I'll post them as replies to this comment.
Essentially, what it looks like to me is that adefmay checked out a few recent articles, was intrigued, and posted something they thought clever and provocative (as well as true). Now, there were two problems with adefmay's comment: first, they had an idea of the meaning of "evidence" that rules out almost everything short of a mathematical proof, and secondly, the comment looked like something that a troll could have written in bad faith.
But what happened next is crucial, it seems to me. A bunch of us downvoted the comment or (including me) wrote replies that look pretty dismissive and brusque. Thus adefmay immediately felt attacked from all sides, with nobody forming a substantive and calm reply (at best, we sent links to pages whose relevance was clear to us but not to adefmay). Is it any wonder that they weren't willing to reconsider their definition of evi...
7orthonormal11yPartial Fix #2:
I can't help but think that some people might have hesitated to downvote
adefmay's first comment, or might have replied at greater length with a more
positive tone, had it been obvious that this was in fact adefmay's first post.
(I did realize this, but replied in a comically insulting fashion anyhow
[http://lesswrong.com/lw/1la/new_years_predictions_thread/1e0i]. Mea culpa.)
It might be helpful if there were some visible sign that, for instance, this was
among the 20 first comments from an account.
5Jack11yWhen it became clear that adefmay couldn't roll with the punches, there were
quite a few sensitive comments with good advice and explanations for why he/she
had been sent links. His/her response to those was basically to get rude,
indignant and come up with as many counter-arguments as possible while not once
trying to understand someone else's position or consider the possibility he/she
was mistaken about something.
I don't know if adefmay was intentionally trolling but he/she was certainly
deficient in rationalist virtue.
That said, I think we need to handle newcomers better anyway and an FAQ section
is really important. I'd help with it.
5orthonormal11yIt seems plausible that things could have turned out much differently, but that
the initial response did irreparable damage to the conversation. Perhaps putting
adefmay on the defensive so soon made it implicitly about status and not losing
face. Or perhaps the exchange fell into a pattern where acting the troll started
to feel too good
[http://scienceblogs.com/gnxp/2006/07/stupid_feels_might_good_am_i_i.php].
Overall, I didn't find adefmay's tone and obstinacy at the start to be worse
than some comments (elsewhere) by people who I consider valuable members of Less
Wrong.
4Eliezer Yudkowsky11yI'd have to say that the trollness seems obvious as all hell to me. Also,
consider the prior probabilities.
1orthonormal11yI may be giving adefmay the benefit of the doubt due to an overactive
conscience; I go back and forth on this particular case. Still, it seems to me
that being new here can involve a lot of early perceived hostility (people
who've joined the community more recently, feel free to support or correct this
claim), that we may well be losing LW contributors for this reason, and that
some relatively easy fixes might do a lot of good.
1Nick_Tarleton11yMe too. Obvious from his second comment
[http://lesswrong.com/lw/1la/new_years_predictions_thread/1e04] on, even. (Or,
if not a troll, not going to become a valued contributor without some growing
up.)
2orthonormal11yPartial Fix #1:
We put together a special forum (subset of threads and posts) for a number of
old argument topics, and make sure that it is readily accessible from the main
page, or especially salient for new people. We have a norm there to (as much as
possible) write out our points from scratch instead of using shorthand and links
as we do in discussions between LW veterans.
Benefits:
* It's much less of a status threat to be told that one's comment belongs in
another thread than to have it dismissed as happened to adefmay.
* Most of the trouble seems to happen when new people jump into a current
thread and derail a conversation between LW veterans, who react brusquely as
above. Separating the newest/most advanced conversations from the old
objections should make everyone happier.
* I find that the people who have been on LW for a few months have just the
right kind of zeal for these newfound ideas
[http://lesswrong.com/lw/1jf/manwithahammer_syndrome/] that makes them eager
and able to defend them against the newest people, who find them absurd. I
think this would be a good thing for both groups of people, and I expect it
to happen naturally should such a place be created.
So if we made some collection of "FAQ threads" and made a big, obvious, enticing
link to them on either the front page or the account creation page (that is, we
give them a list of counterintuitive things we believe or interesting questions
we've tackled, in the hopes they head there first), we might avoid more of these
unfortunate calamities in the future.
I'm not sure there needs to be more than one FAQ thread. But let's start by generating a list of frequently asked questions and coming up with answers with consensus support.
Why is almost everyone here an atheist?
What are the "points" on each comment?
Aren't knowledge and truth subjective or undefinable?
Can you ever really prove anything?
What's all this talk about probabilities and what is a Bayesian?
Why do you all agree on so much? Am I joining a cult?
What are the moderation rules? What kind of comments will result in downvotes and what kind of comments could result in a ban?
Who are you people? (Demographics, and a statement to the effect that demographics don't matter here.)
3orthonormal11yMore FAQ topics:
* Why the MWI?
* Why do you all think cryonics will probably work?
* Why a computational theory of mind?
* What about free will and consciousness?
* What do you mean by "morality", anyway?
* Wait a sec. Torture over dust specks?!?
Basically, I think we need to do more for newcomers than just tell them to read
a sequence; I mean, I think each of us had to actually argue out points we
thought were obvious before we moved forward on these issues. Having a
continuous open thread on such topics (including, of course, links to the
relevant posts or Wiki entry) would be much better, IMO.
A monthly "Old Topics" thread, or a collection of them on various topics, would
be great, although there ought to be a really obvious link directing people to
it.
2Jack11yWhile I'm not saying there shouldn't be a place to discuss those topics, I think
the first thing a newcomer sees should focus on epistemology, rationality and
community norms of rationality.
1) This is still presumably what this site is about.
2) Once you get the right attitude and the right approach, the other subjects
don't require patient explanation. A place to discuss those things is fine, but
if the issue comes up elsewhere and a veteran does respond brusquely to a
newcomer, they can probably deal with it if they have internalized Less Wrong
norms, traditional rationality and some of the Bayesian type stuff we do here.
3) There seems to be near universal agreement on the rationality stuff but I'm
not sure that is the case with the other issues. I know I agree with the typical
LW position on the first four of your questions, but I disagree on the last two.
I suspect most people here don't think cryonics will probably work (just that it
working is likely enough to justify the cost). There are probably some
determinists mixed in with a lot of compatibilists and there are definitely
dissenters on theory of mind stuff (I'm thinking of Michael Porter who
otherwise appears to be a totally reasonable less wrong member). Check the
survey results [http://lesswrong.com/lw/fk/survey_results/#more] for more
evidence of dissent. That there is still disagreement on these issues is
reason to keep discussing them. But I don't know if we should present the
majority views on all these issues as resolved to new users.
But I might just be privileging my own minority views. If the community wants
these included I won't object.
2orthonormal11yGood points, but I still think that these questions belong in some kind of "Old
Topics" thread, because there's already been a lot said about them, and because
most new people will want to argue them anyway. Even if they're not considered
to be settled or to be conditions that define LW, I'd prefer if there's a place
for new people to start discussing them other than 2-year-old threads or
tangential references in new posts.
In a fairly recent little-noticed comment, I let slip that I differ from many folks here in what some may regard as an important way: I was not raised on science fiction.
I'll be more specific here: I think I've seen one of the Star Wars films (the one about the kid who apparently grows up to become the villain in the other films). I have enough cursory familiarity with the Star Trek franchise to be able to use phrases like "Spock bias" and make the occasional reference to the Starship Enterprise (except I later found out that the reference in that post was wrong, since the Enterprise is actually supposed to travel faster than light -- oops), but little more. I recall having enjoyed the "Tripod" series, and maybe one or two other, similar books, when they were read aloud to me in elementary school. And of course I like Yudkowsky's parables, including "Three Worlds Collide", as much as the next LW reader.
But that's about the extent of my personal acquaintance with the genre.
Now, people keep telling me that I should read more science fiction; in fact, they're often quite surprised that I haven't. So maybe, while we're doing these...
9Vladimir_Nesov11yGreg Egan: Permutation City, Diaspora, Incandescence.
Vernor Vinge: True Names, Rainbows End.
Charlie Stross: Accelerando.
Scott Bakker: Prince of Nothing series.
3jscn11yVoted up mainly for the Greg Egan recommendations.
7Kevin11yI am a big fan of Isaac Asimov. Start with his best short story, which I submit
as the best sci-fi short story of all time.
http://www.multivax.com/last_question.html
7Bindbreaker11yI prefer this one [http://www.roma1.infn.it/~anzel/answer.html], and yes, it
really is that short.
6Paul Crowley11yMy first recommendation here is always Iain M Banks, Player of Games.
6Alicorn11yIf you'd like some TV recommendations as well, here are some things that you can
find on Hulu:
Firefly [http://www.hulu.com/firefly]. It's not all available at the same time,
but they rotate the episodes once a week; in a while you'll be able to start at
the beginning. If you haven't already seen the movie, put it off until you've
watched the whole series.
Babylon 5 [http://www.hulu.com/babylon-5]. First two seasons are all there. It
takes a few episodes to hit its stride.
If you're willing to search a little farther afield, Farscape is good, and of
the Star Treks, DS9 is my favorite (many people prefer TNG, though, and this
seems for some reason to be correlated with gender).
2ShardPhoenix11yMaybe that's because DS9 is about a bunch of people living in a big house, while
TNG is about a bunch of people sailing around in a big boat ;). I prefer DS9
myself though and I'm a guy.
1randallsquared11yWith respect to B5, I'd say "a few episodes" is the entire first season and a
quarter of the second. I don't regret having spent the time to watch that, but
I'm not sure I would have bothered had I not had friends raving about it,
knowing in advance what I know now. :)
5NancyLebovitz11yVinge's Marooned in Realtime, A Fire Upon the Deep. The former introduced the
idea of the Singularity, the latter gets a lot of fun playing near the edge of
it.
Olaf Stapledon: Last and First Men, Star Maker.
Poul Anderson: Brain Wave. What happens if there's a drastic, sudden
intelligence increase?
After you've read some science fiction, if you let us know what you've liked, I
bet you'll get some more fine-tuned recommendations.
4Wei_Dai11yI second A Fire Upon the Deep (and anything by Vinge, but A Fire Upon the Deep
is my favorite). BTW, it contains what is in retrospect a clear reference to the
FAI problem. See
http://books.google.com/books?id=UGAKB3r0sZQC&lpg=PA400&ots=VBrKocfTHM&dq=%22fast%20burn%20transcendence%22&pg=PA400
If anyone read it for the first time recently, I'm curious what you think of the
Usenet [http://en.wikipedia.org/wiki/Usenet] references. Those were my favorite
parts of the book when I first read it.
1zero_call11yI thought the Usenet references were really cool and really clever, both from a
reader's standpoint, and also from an author's standpoint. For example, it
doesn't take a lot of digression to explain it or anything since most readers
are already familiar with similar stuff (e.g., Usenet). It also just seems
really plausible as a form of universe-scale "telegram" communication, so I
think it works great for the story. Implausibility just ruins science fiction
for me; it destroys that crucial suspension of disbelief.
4NancyLebovitz11yIt depends on what you're looking for. Books you might enjoy? If so, we need to
know more about your tastes. Books we've liked? Books which have influenced us?
An overview of the field?
In any case, some I've liked-- Heinlein's Rocket Ship Galileo which is quite a
nice intro to rationality and also has Nazis in abandoned alien tunnels on the
Moon, and Egan's Diaspora which is an impressive depiction of people living as
computer programs.
Oh, and Vinge's A Fire Upon the Deep which is an effort to sneak up on writing
about the Singularity (Vinge invented the idea of the Singularity), and
Kirstein's The Steerswoman (first of a series), which has the idea of a guild of
people whose job it is to answer questions-- and if you don't answer one of
their questions, you don't get to ask them anything ever again.
3Dreaded_Anomaly10yI second the recommendations of 1984 and Player of Games (the whole Culture
series is good, but that one especially held my interest).
Recommendations I didn't see when skimming the thread:
* The Hitchhiker's Guide to the Galaxy series by Douglas Adams: A truly
enjoyable classic sci-fi series, spanning the length of the galaxy and the
course of human history.
* Timescape by Gregory Benford: Very realistic and well-written story about
sending information back in time. The author is an astrophysicist, and knows
his stuff.
* The Andromeda Strain, Sphere, Timeline, Prey, and Next by Michael Crichton:
These are his best sci-fi works, aimed at realism and dealing with the
consequences of new technology or discovery.
* Replay by Ken Grimwood: A man is given the chance to relive his life. A
stirring tale with several twists.
* The Commonwealth Saga and The Void Trilogy by Peter F. Hamilton: Superb space
opera, in which humanity has colonized the stars via traversable wormholes,
and gained immortality via rejuvenation technology. The trilogy takes place a
thousand years after the saga, but with several of the same characters.
* The Talents series and the Tower and Hive series by Anne McCaffrey: These
novels deal with the emergence and organization of humans with "psychic"
abilities (telekinesis, telepathy, teleportation, and so forth). The first
series takes place roughly in the present day, the second far in the future
on multiple planets.
* Priscilla Hutchins series and Alex Benedict series by Jack McDevitt: Two
series, unrelated, both examining how humans might explore the galaxy and
what they might find (many relics of ancient civilizations, and a few alien
races still living). The former takes place in the relatively near future,
while the latter takes place millennia in the future.
* Hyperion Cantos by Dan Simmons: An epic space opera dealing heavily with
singularity-related concepts such as AI and hum
3JoshuaZ11yI wouldn't recommend Scalzi. Much of Scalzi is military scifi with little
realism and isn't a great introduction to scifi. I'd recommend Charlie Stross.
"The Atrocity Archives", "Singularity Sky" and "Halting State" are all
excellent. The third is very weird in that it is written in the second person,
but is lots of fun. Other good authors to start with are Pournelle and Niven
(Ringworld, The Mote in God's Eye, and King David's Spaceship are all
excellent).
4Risto_Saarelma11yAm I somehow unusual for being seriously weirded out by the cultural undertones
in Scalzi's Old Man's War books? I keep seeing people in generally enlightened
forums gushing over his stuff, but the book read pretty nastily to me with its
mix of a very juvenile approach to science, psychology and pretty much everything
it took on, and its glorification of genocidal war without alternatives. It
brought up too many associations with telling kids who don't know better about
the utter necessity of genocidal war in simple and exciting terms in real-world
history, and seemed too little aware of this itself to be enjoyable.
Maybe it's a Heinlein thing. Heinlein is pretty obscure here in Europe, but
seems to be woven into the nostalgia trigger gene in the American SF fan DNA,
and I guess Scalzi was going for something of a Heinlein pastiche.
4NancyLebovitz11yIt's nice to know that I'm not the only person who hated Old Man's War, though
our reasons might be different.
It's been a while since I've read it, but I think the character who came out in
favor of an infrastructure attack (was that the genocidal war?) turned out to be
wrong.
What I didn't like about the book was largely that it was science fiction lite--
the world building was weak and vague, and the viewpoint character was way too
trusting. I've been told that more is explained in later books, but I had no
desire to read them.
There's a profoundly anti-imperialist/anti-colonialist theme in Heinlein, but
most Heinlein fans don't seem to pick up on it.
5Risto_Saarelma11yThe most glaring SF-lite problem for me was that in both Old Man's War and The
Ghost Brigades, the protagonist was basically written as a generic
twenty-something Competent Man character, despite both books deliberately
setting the protagonist up as very unusual compared to the archetype character.
In Old Man's War, the protagonist is a 70-year-old retiree in a retooled body,
and in The Ghost Brigades something else entirely. Both of these instantly point
to what I thought would have been the most interesting thing about the book, how
does someone who's coming from a very different place psychologically approach
stuff that's normally tackled by people in their twenties. And then pretty much
nothing at all is done with this angle. Weird.
2Risto_Saarelma11yCome to think of it, I had a similar problem with James P. Hogan's Voyage from
Yesteryear, which was about a colony world of in vitro grown humans raised by
semi-intelligent robots without adult parents. I thought this would lead to some
seriously weird and interesting social psychology with the colonists, when all
sorts of difficult to codify cultural layers are lost in favor of subhuman
machines as parental authorities and things to aspire to.
Turned out it was just a setup to lecture how anarchism with shooting people you
don't like would lead to the perfect society if it weren't for those meddling
history-perpetuating traditionalists, with the colonists of course being
exemplars of psychological normalcy and wholesomeness as well as required by the
lesson, and then I stopped reading the book.
1NancyLebovitz11yThere was so much, so very much sf-lite about that book. Real military life is
full of detail and jargon. OMW had something like two or three kinds of weapons.
There was the big sex scene near the beginning of the book, and then the
characters pretty much forgot about sex.
It was intentionally written to be an intro to sf for people who don't usually
read the stuff. Fortunately, even though the book was quite popular, that
approach to writing science fiction hasn't caught on.
0RobinZ11yNor I - I've read Agent to the Stars [http://www.scalzi.com/agent/], which was
just as bad, so I have no expectation of improvement.
0JoshuaZ11yThis isn't a Scalzi problem so much as a general problem with the military end
of SF. See for example, Starship Troopers and Ender's Game. Ender's Game makes
it more complicated, but there's still some definite sympathy with genocide
(speciescide?).
2NancyLebovitz11yI wonder how important what the characters say is compared to what they do-- and
the importance may be in what the readers remember.
Card has an actual genocide.
In ST, Heinlein speaks in favor of crude "roll over the other guys so that your
genes can survive" expansionism, but he portrays a society where racial/ethnic
background doesn't matter for humans, and an ongoing war which won't necessarily
end with the Bugs or the humans being wiped out.
3jscn11y* Solaris by Stanislaw Lem is probably one of my all time favourites.
* Anathem by Neal Stephenson is very good.
3Jack11yLeGuin- The Dispossessed
William Gibson- Neuromancer
George Orwell- 1984
Walter Miller - A Canticle for Leibowitz
Philip K. Dick- The Man in the High Castle
That actually might be my top five books of all time.
3Jawaka11yI am a huge fan of Philip K. Dick. I don't usually read much fiction or even
science fiction, but PKD has always fascinated me. Stanislaw Lem is also great.
3RichardKennaway11yBearing in mind that you're asking this on LessWrong, these come to mind:
Greg Egan. Everything he's written, but start with his short story collections,
"Axiomatic" and "Luminous". Uploading, strong materialism, quantum mechanics,
immortality through technology, and the implications of these for the concept of
personal identity. Some of his short stories are online
[http://gregegan.customer.netspace.net.au/].
Charles Stross. Most of his writing is set in a near-future, near-Singularity
world.
On related themes are "The Metamorphosis of Prime Intellect"
[http://en.wikipedia.org/wiki/The_Metamorphosis_of_Prime_Intellect], and John C.
Wright's Golden Age [http://en.wikipedia.org/wiki/The_Golden_Age_(novel_series)]
trilogy.
There are many more SF novels I think everyone should read, but that would be
digressing into my personal tastes.
Some people here have recommended R. Scott Bakker's trilogy that begins with
"The Darkness That Comes Before", as presenting a picture of a superhuman
rationalist, although having ploughed through the first book I'm not all that
moved to follow up with the rest. I found the world-building rather derivative,
and the rationalist doesn't play an active role. Can anyone sell me on reading
volume 2?
2Zack_M_Davis11yStrongly seconding Egan. I'd start with "Singleton
[http://www.gregegan.net/MISC/SINGLETON/Singleton.html]" and "Oracle
[http://gregegan.customer.netspace.net.au/MISC/ORACLE/Oracle.html]."
Also of note, Ted Chiang.
0gwern9yI couldn't unless 'pretty good fantasy version of the Crusades' sounds like your
cup of tea.
2daos11yMany good recommendations so far, but unbelievably nobody has yet mentioned
Iain M. Banks' series of 'Culture' novels, based on a humanoid society (the
'Culture') run by incredibly powerful AIs known as 'Minds'.
Highly engaging books which deal with much of what a possible highly
technologically advanced post-singularity society might be like in terms of
morality, politics, philosophy etc. They are far-fetched and a lot of fun.
Here's the list to date:
* Consider Phlebas (1987)
* The Player of Games (1988)
* Use of Weapons (1990)
* Excession (1996)
* Inversions (1998)
* Look to Windward (2000)
* Matter (2008)
They are not consecutive, so reading order isn't that important, though it is
nice to follow their evolution from the perspective of the writing.
2AdeleneDawner11yI don't know whether to be surprised that no one has recommended the Ender's
Game series or not. They're not terribly realistic in the tech (especially
toward the end of the series), and don't address the idea of a technological
singularity, but they're a good read anyway.
Oh - I'm not sure if this is what you were thinking of by sci-fi or not, and it
gets a bit new-agey, but Spider Robinson's "Telempath" is a personal favorite.
It's set in a near-future (at the time of writing) earth after a virus was
released that magnified everyone's sense of smell to the point where cities, and
most modern methods of producing things, became intolerable. (Does anyone else
have post-apocalyptic themed favorites? I have a fondness for the genre, sci-fi
or not.)
4Cyan11yI had a high opinion of Ender's Game once (less so for its sequels). Then I read
this [http://plover.net/~bonds/ender.html].
2Blueberry11yA poorly thought out, insult-filled rant comparing scenes in Ender's Game to
"cumshots" changed your view of a classic, award-winning science fiction novel?
Please reconsider.
5Cyan11yIf you strip out the invective and the appeal to emotion embodied in the
metaphorical comparison to porn, there yet remains valid criticism of the
structure and implied moral standards of the book.
2xamdam11yI did not believe this was possible, but this analysis has turned EG into ashes
retroactively. Still, it gets lots of kids into scifi, so there is some value.
A really great kids' scifi book is "Have Spacesuit, Will Travel" by Heinlein.
5NancyLebovitz11yI've heard that effect called "the suck fairy". The suck fairy sneaks into your
life and replaces books you used to love with vaguely similar books that suck.
2xamdam11yGreat name, but unfortunately it's the same book; the analysis made it
incompatible with self-respect.
2NancyLebovitz11yThe suck fairy always brings something that looks exactly like the same book,
but somehow....
I'm not sure if I'll ever be able to enjoy Macroscope again. Anthony was really
interesting about an information gift economy, but I suspect that "vaguely
creepy about women" is going to turn into something much worse.
2sketerpot11yRobert Heinlein wrote some really good stuff (before becoming increasingly
erratic in his later years). Very entertaining and fun. Here are some that I
would recommend for starting out with:
Tunnel in the Sky. The opposite of Lord of the Flies. Some people are stuck on a
wild planet by accident, and instead of having civilization collapse, they start
out disorganized and form a civilization because it's a good idea. After reading
this, I no longer have any patience for people who claim that our natural state
is barbarism.
Citizen of the Galaxy. I can't really summarize this one, but it's got some good
characters in it.
Between Planets. Our protagonist finds himself in the middle of a revolution all
of a sudden. This was written before we knew that Venus was not habitable.
I was raised on this stuff. Also, I'd like to recommend Startide Rising, by
David Brin, and its sequel The Uplift War. They're technically part of a
trilogy, but reading the first book (Sundiver) is completely unnecessary. It's
not really light reading, but it's entertaining and interesting.
2NancyLebovitz11yNote about Tunnel in the Sky-- they didn't just form a society (not a
civilization) because they thought it was a good idea-- they'd had training in
how to build social structures.
2[anonymous]11yLord of Light by Roger Zelazny.
Snow Crash by Neal Stephenson.
7Technologos11yI strongly second Snow Crash. I enjoyed it thoroughly.
2whpearson11yI'd say identify what sort of future scenarios you want to explore and ask us to
identify exemplars. Or is the goal just to get a common vocabulary to discuss
things?
Reading sci-fi, while potentially valuable, should be done with a purpose in mind.
Unless you need another potential source of procrastination.
5komponisto11yGoodness gracious. No, just looking for more procrastination/pure fun. I've
gotten along fine without it thus far, after all.
(Of course, if someone actually thinks I really do need to read sci-fi for some
"serious" reason, that would be interesting to know.)
1Technologos11yWhile I don't think you need to read it, per se, I have found sci fi to be of
remarkable use in preparing me for exactly the kind of mind-changing upon which
Less Wrong thrives. The Asimov short stories cited above are good examples.
I also continue to cite Asimov's Foundation trilogy (there are more after the
trilogy, but he openly said that he wrote the later books purely because his
publisher requested them) as the most influential fiction works in pushing me
into my current career.
1Sniffnoy11ySince no one's mentioned it yet, Rendezvous with Rama. You really don't want to
touch the sequels, though.
0Jonathan_Graehl11yAgreed on both points.
1Kevin11yOh, definitely 1984 if you've never read it. Scary how much predictive power
it's had.
1brian_jaress11yThis might not be the best place to ask because so many people here prefer
science fiction to regular fiction. I've noticed that people who prefer science
fiction have a very different idea of what makes good science fiction than
people who have no preference or who prefer regular fiction.
Most of what I see in the other comments is on the "prefers science fiction"
side, except for things by LeGuin and maybe Dune.
Of course, you might turn out to prefer science fiction and just not have
realized it. Then all would be well.
1zero_call11yIt's actually very important to ask people for book recommendations, and
especially for sci-fi, since it seems like a large majority of the work out
there is, well, garbage. Not to be too harsh, as, IMO, the same thing could be
said for a lot of artistic genres (anime, modern action film, etc.).
For sci-fi, there is some really top-notch work out there. But be warned that,
in general, the rest of a series isn't as good as the first book. Some
classics, all favorites of mine, are:
* Dune (Frank Herbert)
* Starship Troopers (Robert Heinlein)
* Ringworld (first book) (Larry Niven)
* Neuromancer (William Gibson) (Warning: last half of the book becomes s.l.o.w.
though)
* A Fire Upon the Deep (Vernor Vinge)
1Blueberry11yI haven't seen much of the Star Wars or Star Trek stuff either, and don't really
consider them science fiction as much as space action movies. That's not really
what we're talking about.
I would strongly advise you to start with short stories, specifically Isaac
Asimov, Robert Heinlein, Arthur C. Clarke, Robert Sheckley, and Philip K. Dick.
All those authors are considered giants in the field and have anthologies of
collected short stories. Science fiction short stories tend to be easier to read
because you don't get bogged down in detail, and you can get right to the point
of exploring the interesting and speculative worlds.
1Jack11yFilms:
Blade Runner
Gattaca
2001: A Space Odyssey
1Furcas11yIsaac Asimov's Foundation series:
* Foundation
* Foundation and Empire
* Second Foundation
* Foundation's Edge
* Foundation and Earth
There are prequels too, but I don't like 'em.
1Cyan11yI recommend anything by Charles Stross, Lois McMaster Bujold's Vorkosigan Saga
[http://en.wikipedia.org/wiki/Vorkosigan_Saga] (link gives titles and
chronology), and anything by Ursula LeGuin, but especially City of Illusions and
The Left Hand of Darkness.
0MartinB11yJust reading that, I'm curious what you did end up reading and what you think
about it.
My recent ones were Heinlein's Citizen of the Galaxy and The Star Beast.
If you mean, "what would we pay to save your life", you could probably take up a respectable collection if you credibly identified a threat to your health that could be fixed with a medium-sized amount of money.
If you mean, "will we bribe you to hang out with us"... uh... no.
In one of the dorkier moments of my existence, I've written a poem about the Great Filter. I originally intended to write music for this, but I've gone a few months now without inspiration, so I think I'll just post the poem to stand by itself and for y'all to rip apart.
The dire floor of Earth afore
saw once a fortuitous spark.
Life's swift flame sundry creature leased
and then one age a freakish beast
awakened from the dark.
Boundless skies beheld his eyes
and strident through the void he cried;
set his devices into space;
scryed for signs of a yonder race;
but desolate hush replied.
Stars surround and worlds abound,
the spheres too numerous to name.
Yet still no creature yet attains
to seize this lot, so each remains
raw hell or barren plain.
What daunting pale do most 'fore fail?
Be the test later or done?
Those dooms forgone our lives attest
themselves impel from first inquest:
cogito ergo sum.
Man does boast a charmèd post,
to wield the blade of reason pure.
But if this prov'ence be not rare,
then augurs fate our morrow bare,
our fleeting days obscure.
But might we nigh such odds defy,
and see before us cosmos bend?
Toward the heavens thy mind set,
and waver not: this proof, till 'yet,
did ne'er with man contend!
Suggested tweaks are welcome. Things that I'm currently unhappy with are that "fortuitous" scans awkwardly, and the skies/eyes rhyme feels clichéd.
5pjeby11yIt reminds me of something that happened in college, where a poem of mine was
being put in some sort of collection; there was a typo in it, and I mentioned a
correction to the professor. He nodded wisely, and said, "yes, that would keep
it to iambic pentameter [http://en.wikipedia.org/wiki/Iambic_pentameter]."
And I said, "iambic who what now?"... or words to that effect.
And then I discovered the wonderful world of meter. ;-)
Your poem is trying to be in iambic tetrameter
[http://en.wikipedia.org/wiki/Iambic_tetrameter] (four iambs - "dit dah" stress
patterns), but it's missing the boat in a lot of places. Iambic tetrameter also
doesn't lend itself to sounding serious; you can write something serious in it,
sure, but it'll always have kind of a childish singsong-y sort of feel, so you
have to know how to counter it.
Before I grokked this meter stuff, I just randomly tried to make things sound
right, which is what your poem appears to be doing. If you actually know what
meter you're trying for, it's a LOT easier to find the right words, because they
will be words that naturally hit the beat. Ideally, you should be able to read
your poem in a complete monotone and STILL hear the rhythmic beating of the
dit's and dah's... you could probably write a morse code message if you wanted
to. ;-)
Anyway, you will probably find it a lot easier to fix the problems with the
poem's rhythm if you know what rhythm you are trying to create. Enjoy!
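pjeby's monotone test can literally be mechanized: map each word to its stress pattern and compare the concatenation against the target meter. A toy sketch (the mini stress dictionary and the example line are my own inventions; real scansion needs a full pronunciation dictionary and a lot of human judgment):

```python
# Toy scansion checker: 0 = unstressed ("dit"), 1 = stressed ("dah").
# The stress dictionary below is a made-up illustration, not a real lexicon.
STRESS = {
    "the": "0", "a": "0", "has": "0",
    "cat": "1", "sat": "1", "mat": "1",
    "upon": "01",  # "uh-PON"
}

IAMBIC_TETRAMETER = "01010101"  # four iambic ("dit dah") feet

def scan(line):
    """Concatenate per-word stress patterns into one string of 0s and 1s."""
    return "".join(STRESS[word] for word in line.lower().split())

line = "the cat has sat upon the mat"
print(scan(line), scan(line) == IAMBIC_TETRAMETER)
```

Read in a monotone, the 0/1 string is exactly the morse-code-like beat pjeby describes; a line that breaks the meter shows up as a mismatched string.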
3Eliezer Yudkowsky11yFor those who still read books, I recommend "The Poem's Heartbeat".
1dfranke11yYes, I'm well aware of what iambic tetrameter is and that the poem generally
conforms to it :-). The intended meter isn't quite that simple though. The final
verse of each stanza is only three feet, and the first foot of the third verse
of each stanza is a spondee. Verses are headless where necessary.
There's also an inverted foot in "Be the test later or done?", but I'm leaving
that in even though I could easily substitute "ahead" for "later". Despite
breaking the meter, it sounds better as-is.
I recently revisited my old (private) high school, which had finished building a new >$15 million building for its football team (and misc. student activities & classes).
I suddenly remembered that when I was much younger, the lust of universities and schools in general for new buildings had always puzzled me: I knew perfectly well that I learned more or less the same whether the classroom was shiny new or grizzled gray and that this was true of just about every subject-matter*, and even then it was obvious that buildings must cost a lot to build and... (read more)
I tried creating a separate login on my computer with no distractions, and tried to get my work done there. This reduced my productivity because it increased the cost of switching back from procrastinating to working. I would have thought that recovering in large bites and working in large bites would have been more efficient, but apparently no, it's not.
I'm currently testing the hypothesis that reading fiction (possibly reading anything?) comes out of my energy-to-work-on-the-book budget.
Next up to try: Pick up a CPAP machine off Craigslist.
6wedrifid11yA technical problem that is easily solvable. My approach has been to use VMWare.
All the productive tools are installed on the base OS. Procrastination tools are
installed on a virtual machine. Starting the procrastination box takes about 20
seconds (and more importantly a significant active decision) but closing it to
revert to 'productive mode' takes no time at all.
4jimrandomh11yI've noticed the same problem in separating work from procrastination
environments. But it might work if it was asymmetric - say, there's a single
fast hotkey to go from procrastination mode to work mode, but you have to type a
password to go in the other direction. (Or better yet, a 5 second delay timer
that you can cancel.)
3kpreid11yI had the same problem when I was using just virtual screens with a key to
switch, not even separate accounts. It was a significant decrease in
productivity before I realized the problem. I think it's not just the effort to
switch; it's also that the work doesn't stay visible so that you think about it.
Suppose we want to program an AI to represent the interests of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individuals in the group, but that runs into the interpersonal comparison of utility problem. (Was there ever a post about this? Does Eliezer have a preferred approach?)
Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program it with the utility function of that individual. Then set a time in the future when one of those A... (read more)
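A minimal sketch of the scheme as stated (the two-agent outcome space and utility numbers are invented for illustration): each delegate evaluates outcomes only on its own principal's scale, and the status quo is an equal-probability lottery over each delegate's favorite world, so no utility is ever compared across agents:

```python
import random

# Toy outcome space: how a fixed budget of 10 is split between goods X and Y.
outcomes = [(x, 10 - x) for x in range(11)]

# Each delegate scores outcomes on its own principal's scale; the scales are
# deliberately mismatched (0..10 vs 0..1000) to show that nothing below ever
# compares utilities across agents.
utilities = [
    lambda o: o[0],        # delegate 0's principal wants good X
    lambda o: 100 * o[1],  # delegate 1's principal wants good Y
]

def favorite(u):
    """The outcome this delegate would impose if it alone were released."""
    return max(outcomes, key=u)

# Disagreement point: an equal-probability lottery over each delegate's
# favorite world, evaluated on each delegate's own scale.
disagreement = [
    sum(u(favorite(v)) for v in utilities) / len(utilities)
    for u in utilities
]

# "Release one delegate at random" realizes exactly that lottery.
released = random.choice(utilities)
print(favorite(released))
```

Any negotiated agreement only has to beat the disagreement lottery on each agent's own scale, which is why the mismatched scales never need to be reconciled.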
4Alicorn11yUnless you can directly extract a sincere and accurate utility function from the
participants' brains, this is vulnerable to exaggeration in the AI programming.
Say my optimal amount of X is 6. I could program my AI to want 12 of X, but be
willing to back off to 6 in exchange for concessions regarding Y from other AIs
that don't want much X.
2wedrifid11yThis does not seem to be the case when the AIs are unable to read each other's
minds. Your AI can be expected to lie to others with more tactical effectiveness
than you can lie indirectly via deceiving it. Even in that case it would be
better to let the AI rewrite itself for you.
On a similar note, being able to directly extract a sincere and accurate utility
function from the participants' brains leaves the system vulnerable to
exploitations. Individuals are able to rewrite their own preferences
strategically in much the same way that an AI can. Future-me may not be happy
but present-me got what he wants and I don't (necessarily) have to care about
future me.
0Wei_Dai11yI had also mentioned this in an earlier comment on another thread. It turns out
that this is a standard concern in bargaining theory. See section 11.2 of this
review paper [http://rcer.econ.rochester.edu/RCERPAPERS/rcer_554.pdf].
So, yeah, it's a problem, but it has to be solved anyway in order for AIs to
negotiate with each other.
0timtyler10yDo you think the more powerful group members are going to agree to that?!? They
worked hard for their power and status - and are hardly likely to agree to their
assets being ripped away from them in this way. Surely they will ridicule your
scheme, and fight against it being implemented.
5Wei_Dai10yThe main idea I wanted to introduce in that comment was the idea of using
(supervised) bargaining to aggregate individual preferences. Bargaining power
(or more generally, weighing of individual preferences) is a mostly orthogonal
issue. If equal bargaining power turns out to be impractical and/or immoral
[http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/], then some
other distribution of bargaining power can be used.
0Roko11yWhy not use virtual agents, which are given only a safe interface to negotiate
with each other over, and no physical powers, and are monitored by a meta-AI
that prevents them from trying to game the system, fool each other, etc. This
would avoid having wars between superintelligences in the real physical
universe.
0Wei_Dai11yI think that's what I implied: there is a supervisor process that governs the
negotiation process and eventually picks a random AI to be released into the
real world.
0Roko11yok, just checking you weren't advocating a free-for-all.
0Vladimir_Nesov11yWhat exactly is "equal bargaining power" is vague. If you "instantiate" multiple
AIs, their "bargaining power" may well depend on their "positions" relative to
each other, the particular values in each of them, etc.
Why this requirement? A cooperation of AIs might as well be one AI. Cooperation
between AIs is just a special case of operation of each AI in the environment,
and where you draw the boundary between AI and environment is largely arbitrary.
1Wei_Dai11yThe idea is that the status quo (i.e., the outcome if the AIs fail to cooperate)
is N possible worlds of equal probability, each shaped according to the values
of one individual/AI. The AIs would negotiate from this starting point and
improve upon it. If all the AIs cooperate (which I presume would be the case),
then which AI gets randomly selected to take over the world won't make any
difference.
In this case the AIs start from an equal position, but you're right that their
values might also figure into bargaining power. I think this is related to a
point Eliezer made in the comment I linked to: a delegate may "threaten to adopt
an extremely negative policy in order to gain negotiating leverage over other
delegates." So if your values make you vulnerable to this kind of threat, then
you might have less bargaining power than others. Is this what you had in mind?
1Vladimir_Nesov11yLetting a bunch of AIs with given values resolve their disagreement is not the
best way to merge values, just like letting humanity go on as it is is not
the best way to preserve human values. As extraction of preference shouldn't
depend on the actual "power" or even stability of the given system, merging of
preference could also possibly be done directly and more fairly when specific
implementations and their "bargaining power" are abstracted away. Such
implementation-independent composition/interaction of preference may turn out to
be a central idea for the structure of preference.
1andreas11yThere seems to be a bootstrapping problem: In order to figure out what the
precise statement is that human preference makes, we need to know how to combine
preferences from different systems; in order to know how preferences should
combine, we need to know what human preference says about this.
1Vladimir_Nesov11yIf we already have a given preference, it will only retell itself as an answer
to the query "What preference should result [from combining A and B]?", so
that's not how the game is played. "What's a fair way of combining A and B?" may
be more like it, but of questionable relevance. For now, I'm focusing on getting
a better idea of what kind of mathematical structure preference should be,
rather than on how to point to the particular object representing the given
imperfect agent.
0Wei_Dai11yWhat is/are your approach(es) for attacking this problem, if you don't mind
sharing?
In my UDT1 post I suggested that the mathematical structure of preference could
be an ordering on all possible (vectors of) execution histories of all possible
computations. This seems general enough to represent any conceivable kind of
preference (except preferences about uncomputable universes), but also appears
rather useless for answering the question of how preferences should be merged.
0Vladimir_Nesov11ySince I don't have self-contained results, I can't describe what I'm searching
for concisely, and the working hypotheses and hunches are too messy to summarize
in a blog comment. I'll give some of the motivations I found towards the end of
the current blog sequence, and possibly will elaborate in the next one if the
ideas sufficiently mature.
Yes, this is not very helpful. Consider the question: what is the difference
between (1) preference, (2) strategy that the agent will follow, and the (3)
whole of agent's algorithm? Histories of the universe could play a role in
semantics of (1), but they are problematic in principle, because we don't know,
nor will ever know with certainty, the true laws of the universe. And what we
really want is to get to (3), not (1), but with good understanding of (1) so
that we know (3) to be based on our (1).
0Wei_Dai11yThanks. I look forward to that.
I don't understand what you mean here, and I think maybe you misunderstood
something I said earlier. Here's what I wrote in the UDT1 post
[http://lesswrong.com/lw/15m/towards_a_new_decision_theory/]:
(Note that of course this utility function has to be represented in a
compressed/connotational form, otherwise it would be infinite in size.) If we
consider the multiverse to be the execution of all possible programs, there is
no uncertainty about the laws of the multiverse. There is uncertainty about
"which universes, i.e., programs, we're in", but that's a problem we already
have a handle on, I think.
So, I don't know what you're referring to by "true laws of the universe", and I
can't find an interpretation of it where your quoted statement makes sense to
me.
0Vladimir_Nesov11yI don't believe that directly posing this "hypothesis" is a meaningful way to
go, although computational paradigm can find its way into description of the
environment for the AI that in its initial implementation works from within a
digital computer.
0andreas11yHere is a revised way of asking the question I had in mind: If our preferences
determine which extraction method is the correct one (the one that results in
our actual preferences), and if we cannot know or use our preferences with
precision until they are extracted, then how can we find the correct extraction
method?
Asking it this way, I'm no longer sure it is a real problem. I can imagine that
knowing what kind of object preference is would clarify what properties a
correct extraction method needs to have.
0Vladimir_Nesov11yGoing meta and using the (potentially) available data, such as humans in the
form of uploads, is a step made in an attempt to minimize the amount of data (given
explicitly by the programmers) to the process that reconstructs human
preference. Sure, it's a bet (there are no universal preference-extraction
methods that interpret every agent in a way it'd prefer to do itself, so we have
to make a good enough guess), but there seems to be no other way to have a
chance at preserving current preference. Also, there may turn out to be a good
means of verification that the solution given by a particular
preference-extraction procedure is the right one.
1pdf23ds11ySo you know how to divide the pie
[http://lesswrong.com/lw/ru/the_bedrock_of_fairness/]? There is no interpersonal
"best way" to resolve directly conflicting values. (This is further than Eliezer
went.) Sure, "divide equally" makes a big dent in the problem, but I find it
much more likely any given AI will be a Zaire than a Yancy. As a simple case,
say AI1 values X at 1, and AI2 values Y at 1, and X+Y must, empirically, equal
1. I mean, there are plenty of cases where there's more overlap and orthogonal
values, but this kind of conflict is unavoidable between any reasonably complex
utility functions.
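pdf23ds's X+Y=1 case is the textbook setting for the Nash bargaining solution, which sidesteps interpersonal comparison by maximizing the product of each agent's gain over the disagreement point. A toy sketch (the grid search and the zero disagreement point are my assumptions, not anything from the thread):

```python
# Feasible set: X + Y = 1; AI1 values X at 1 per unit, AI2 values Y likewise.
# Disagreement point assumed to be (0, 0): no deal, nobody gets anything.
d1, d2 = 0.0, 0.0

candidates = [i / 1000 for i in range(1001)]  # possible values of X

def nash_product(x):
    u1, u2 = x, 1 - x             # each agent's utility for this split
    return (u1 - d1) * (u2 - d2)  # product of gains over the disagreement point

best_x = max(candidates, key=nash_product)
print(best_x)  # the symmetric split, 0.5
```

The argmax of the product is unchanged if either agent's utility is rescaled by a positive constant, which is why this kind of solution never needs the two utility functions on a common scale.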
1Vladimir_Nesov11yI'm not suggesting an "interpersonal" way (as in, by a philosopher of perfect
emptiness). The possibilities open for the search of "off-line" resolution of
conflict (with abstract transformation of preference) are wider than those for
the "on-line" method (with AIs fighting/arguing it over) and so the "best"
option, for any given criterion of "best", is going to be better in "off-line"
case.
0Wei_Dai11y[Edited] I agree that it is probably not the best way. Still, the idea of
merging values by letting a bunch of AIs with given values resolve their
disagreement seems better than previous proposed solutions, and perhaps gives a
clue to what the real solution looks like.
BTW, I have a possible solution to the AI-extortion problem mentioned by
Eliezer. We can set a lower bound for each delegate's utility function at the
status quo outcome (N possible worlds with equal probability, each shaped
according to one individual's utility function). Then any threats to cause an
"extremely negative" outcome will be ineffective since the "extremely negative"
outcome will have utility equal to the status quo outcome.
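The lower-bound fix can be written as a one-line transform on each delegate's utility function (the numbers here are invented for illustration): clamp utilities from below at the status-quo value, so a threatened "extremely negative" outcome is worth no less to its target than simply walking away:

```python
def threat_proofed(u, status_quo_value):
    """Return a utility function bounded below by the status quo.

    Any outcome worse than the status quo is valued exactly at the
    status quo, so threats to bring such an outcome about carry no
    negotiating leverage.
    """
    return lambda outcome: max(u(outcome), status_quo_value)

# Illustrative delegate whose raw utility is just the outcome's number.
raw = lambda outcome: outcome
safe = threat_proofed(raw, status_quo_value=3.0)

print(safe(-100))  # a threatened disaster: worth only the status quo, 3.0
print(safe(7))     # a genuine improvement still counts in full: 7
```

Since every delegate already gets at least the status-quo lottery by construction, the clamp removes exactly the region of utility space that extortion threats would exploit, without changing any delegate's ranking of outcomes it actually prefers.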
I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is because it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.
2MichaelGR11yThe movie is also a good example of existential risk
[http://www.nickbostrom.com/existential/risks.html] in fiction (in this case, a
genetically engineered biological agent).
I've been a longtime lurker, and tried to write up a post a while ago, only to see that I didn't have enough karma. I figure this is the post for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I'd gladly turn it into a top-level post here, if that's in order.
7Morendil11yWhat you seem to be saying, that I agree with, is that it's irritating as well
as irrelevant when people try to pull authority on you, using "age" or "quantity
of experience" as a proxy for authority. Yes, argument does screen off
authority. But that's no reason to knock "life experience".
If opinions are not based on "personal experience", what can they possibly be
based on? Reading a book is a personal experience. Arguing an issue with someone
(and changing your mind) is a personal experience. Learning anything is a
personal experience, which (unless you're too good at compartmentalizing) colors
your other beliefs.
Perhaps the issue is with your thinking that "demolishing someone's argument" is
a worthwhile instrumental goal in pursuit of truth. A more fruitful goal is to
repair your interlocutor's argument, to acknowledge how their personal
experience has led them to having whatever beliefs they have, and expose
symmetrically what elements in your own experience lead you to different views.
Anecdotes are evidence, even though they can be weak evidence. They can be
strong evidence too. For instance, having read this comment
[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1rwg] after I read the
commenter's original report of his experience as an isolated individual, I'd be
more inclined to lend credence to the "stealth blimp" theory. I would have
dismissed that theory on the basis of reading the Wikipedia page alone or
hearing the anecdote alone, but I have a low prior probability for someone on
LessWrong arranging to seem as if he looked up news reports after first making
an honest disclosure to other people interested in truth-seeking.
It seems inconsistent on your part to start off with a rant about "anecdotes",
and then make a strong, absolute claim based solely on "the Sokal affair" -
which at the scale of scientific institutions is anecdotal.
I think you're trying to make two distinct points and getting them mixed up, and
as a result not getti
2Seth_Goldin11yHi Morendil,
Thanks for the comment. The particular version you are commenting on was an
earlier, worse version than what I posted and then pulled this morning. The
version I posted this morning was much better than this. I actually changed the
claim about the Sokal affair completely.
Due to what I fear was an information cascade of negative karma, I pulled the
post so that I might make revisions.
The criticism concerning both this earlier version and the newer one from this
morning still holds though. I too realized after the immediate negative feedback
that I actually was combining, poorly, two different points and losing both of
them in the process. I think I need to revise this into two different posts, or
cut out the point about academia entirely. I will concede that anecdotes are
evidence as well in the future version.
Unfortunately I was at exactly 50 karma, and now I'm back down to 20, so it will
be a while before I can try again. I'll be working on it.
2Seth_Goldin11yHere's the latest version, what I will attempt to post on the top level when I
again have enough karma.
--------------------------------------------------------------------------------
"Life Experience" as a Conversation-Halter
Sometimes in an argument, an older opponent might claim that perhaps as I grow
older, my opinions will change, or that I'll come around on the topic. Implicit
in this claim is the assumption that age or quantity of experience is a proxy
for legitimate authority. In and of itself, such "life experience" is necessary
for an informed rational worldview, but it is not sufficient.
The claim that more "life experience" will completely reverse an opinion
indicates that the person making the claim believes that opinion is based
primarily on an accumulation of anecdotes, perhaps derived from extensive
availability bias [http://en.wikipedia.org/wiki/Availability_heuristic]. It
actually is a pretty decent assumption that other people aren't Bayesian,
because for the most part, they aren't. Much research confirms this, including
that of Haidt, Kahneman, and Tversky.
When an opponent appeals to more "life experience," it's a last resort, and it's
a conversation halter [http://lesswrong.com/lw/1p2/conversation_halters/]. This
tactic is used when an opponent is cornered. The claim is nearly an outright
acknowledgment of a move to exit the realm of rational debate. Why stick to
rational discourse when you can shift to trading anecdotes? It levels the
playing field, because anecdotes, while Bayesian evidence
[http://lesswrong.com/lw/in/scientific_evidence_legal_evidence_rational/], are
easily abused, especially for complex moral, social, and political claims. As
rhetoric, this is frustratingly effective, but it's logically rude
[http://lesswrong.com/lw/1p1/logical_rudeness/].
Although it might be rude and rhetorically weak, it would be authoritatively
appropriate for a Bayesian to be condescending to a non-Bayesian in an argument.
Conversely, it can be downright m
0Seth_Goldin11ySorry; I didn't realize that I can still post. I went ahead and posted it.
2SilasBarta11yI agree with your point and your recommendation. Life experiences can provide
evidence, and they can also be an excuse to avoid providing arguments. You need
to distinguish which one it is when someone brings it up. Usually, if it is
valid evidence, the other person should be able to articulate which insight a
life experience would provide to you, if you were to have it, even if they can't
pass the experience directly to your mind.
I remember arguing with a family member about a matter of policy (for obvious
reasons I won't say what), and when she couldn't seem to defend her position,
she said, "Well, when you have kids, you'll see my side." Yet, from context, it
seems she could have, more helpfully, said, "Well, when you have kids, you'll be
much more risk-averse, and therefore see why I prefer to keep the system as is"
and then we could have gone on to reasons about why one or the other system is
risky.
In another case (this time an email exchange on the issue of pricing carbon
emissions), someone said I would "get" his point if I would just read the famous
Coase paper on externalities. While I hadn't read it, I was familiar with the
arguments in it, and ~99% sure my position accounted for its points, so I kept
pressing him to tell me which insight I didn't fully appreciate. Thankfully,
such probing led him to erroneously state what he thought was my opinion, and
when I mentioned this, he decided it wouldn't change my opinion.
3thomblake11yIt illustrated nothing of the sort. The Sokal affair illustrated that a
non-peer-reviewed, non-science journal will publish bad science writing that was
believed to be submitted in good faith.
Social Text was not peer-reviewed because they were hoping to... do...
something. What Sokal did was similar to stealing everything from a 'good faith'
vegetable stand and then criticizing its owner for not having enough security.
6Seth_Goldin11yNoted. In another draft I'll change this to make the point about how easy it is for
high-status academics to deal in gibberish. Maybe they didn't have so much
status external to their group of peers, but within it, did they?
What the Social Text Affair Does and Does Not Prove
http://www.physics.nyu.edu/faculty/sokal/noretta.html
[http://www.physics.nyu.edu/faculty/sokal/noretta.html]
"From the mere fact of publication of my parody I think that not much can be
deduced. It doesn't prove that the whole field of cultural studies, or cultural
studies of science -- much less sociology of science -- is nonsense. Nor does it
prove that the intellectual standards in these fields are generally lax. (This
might be the case, but it would have to be established on other grounds.) It
proves only that the editors of one rather marginal journal were derelict in
their intellectual duty, by publishing an article on quantum physics that they
admit they could not understand, without bothering to get an opinion from anyone
knowledgeable in quantum physics, solely because it came from "a conveniently
credentialed ally" (as Social Text co-editor Bruce Robbins later candidly
admitted[12]), flattered the editors' ideological preconceptions, and attacked
their "enemies".[13]"
1thomblake11yI'd forgotten that Sokal himself admitted that much about it - thanks for the
cite.
2Vladimir_Nesov11yIt's unclear what you mean by both "Bayesian" and by "authority" in this
sentence. If a person is "Bayesian", does it give "authority" for condescension?
There clearly is some truth to the claim that being around longer sometimes
allows one to arrive at more accurate beliefs, including a more accurate
intuitive assessment of the situation, if you are not down a crazy road in the
particular domain. It's not very strong evidence, and it can't defeat many
forms of more direct evidence pointing in the contrary direction, but sometimes
it's an OK heuristic, especially if you are not aware of other evidence ("ask the elder").
This article about gendered language showed up on one of my feeds a few days ago. Given how often discussions of nongendered pronouns happen here, I figure it's worth sharing.
7Daniel_Burfoot11yNice, I liked the part about Tuyuca:
It would be fun to try to build a "rational" dialect of English that requires
people to follow rules of logical inference and reasoning.
Suppose you could find out the exact outcome (up to the point of reading the alternate history equivalent of Wikipedia, history books etc.) of changing the outcome of a single historical event. What would that event be?
Note that major developments like "the Roman empire would never have fallen" or "the Chinese wouldn't have turned inwards" involve multiple events, not just one.
So many. I can't limit it to one, but my top four would be "What if Mohammed had never been born?", "What if Julian the Apostate had succeeded in stamping out Christianity?", "What if Thera had never blown and the Minoans had survived?", and "What if Alexander the Great had lived to a ripe old age?"
The civilizations of the Near East were fascinating, and although the early Islamic Empire was interesting in its own right, it did a lot to homogenize some really cool places. It also dealt a fatal wound to Byzantium. If Mohammed had never existed, I would look forward to reading about the Zoroastrian Persians, the Byzantines, and the Romanized Syrians and Egyptians surviving much longer than they did.
The Minoans were the most advanced civilization of their time, and had plumbing, three story buildings, urban planning and possibly even primitive optics in 2000 BC (I wrote a bit about them here). Although they've no doubt been romanticized, in the romanticized version at least they had a pretty equitable society, gave women high status, and revered art and nature. Then they were all destroyed by a giant volcano. I remember reading one hist... (read more)
Given that Alexander was one of the most successful conquerors in all of history, he almost certainly benefited from being extremely lucky. If he had lived longer, therefore, he would have probably experienced much regression to the mean with respect to his military success.
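The regression-to-the-mean argument above can be made concrete with a toy simulation (illustrative only: the model "observed success = skill + luck" and every name and parameter below are my own assumptions, not claims from the thread). Pick the single most successful individual out of many and re-run their campaign with fresh luck; the second result almost always falls short of the first.

```python
# Toy model of regression to the mean: success = skill + luck.
# All parameters are illustrative assumptions, not from the thread.
import random

def regression_demo(n=1000, trials=300, seed=0):
    """Among `n` would-be conquerors, pick the single most successful
    (skill + luck) and count how often a second, independent campaign
    (same skill, fresh luck) falls short of the first result."""
    rng = random.Random(seed)
    drops = 0
    for _ in range(trials):
        skill = [rng.gauss(0, 1) for _ in range(n)]
        first = [s + rng.gauss(0, 1) for s in skill]
        best = max(range(n), key=first.__getitem__)   # the "Alexander"
        second = skill[best] + rng.gauss(0, 1)        # fresh luck next time
        if second < first[best]:
            drops += 1
    return drops / trials

print(regression_demo())  # fraction of trials where the champion regressed
```

Because the top performer was selected partly for an extreme luck draw, their skill alone rarely reproduces the original result; the returned fraction sits near 1.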
5wedrifid11yOf course, once you are already the most successful conqueror alive you tend to
need less luck. You can get by on the basic competence that comes from
experience and the resources you now have at your disposal. (So long as you
don't, for example, try to take Russia. Although even then Alexander's style
would probably have worked better than Napoleon's.)
1DanArmak11yAs did the Christian culture before them. And the original Roman Empire before
that. And Alexander's Hellenistic culture spread by the fragments of his
mini-empire. And the Persian empires that came and went in the region...
2anonym11yAlong the same idea, but much more likely to yield radical differences to the
future of human society, I'd like to know what would have happened if some
ancient bottleneck epidemic had not happened or had happened differently (killed
more or fewer people, or just different individuals). Much or all of the human
gene pool after that altered event would be different.
2DanArmak11yI'd like to see a world in which all ancestor-types of humans through to the
last common ancestor with chimps still lived in many places.
8Kaj_Sotala11yI'd be curious to know what would have happened if Christopher Columbus's fleet
had been lost at sea during his first voyage across the Atlantic. Most scholars
were already highly skeptical of his plans, as they were based on a
miscalculation, and him not returning would have further discouraged any
explorers from setting off in that direction. How much longer would it have
taken before Europeans found out about the Americas, and how would history have
developed in the meanwhile?
1Jack11yHave you read Orson Scott Card's "Pastwatch: The Redemption of Christopher
Columbus"? It suggests an answer to this question.
1CronoDAS11yNot a very realistic one, though.
6Alicorn11yI would like to know what would have happened if, sometime during the Dark Ages
let's say, benevolent and extremely advanced aliens had landed with the
intention to fix everything. I would diligently copy and disseminate the entire
Wikipedia-equivalent for the generously-divulged scientific and sociological
knowledge therein, plus cultural notes on the aliens such that I could write a
really keenly plausible sci-fi series.
4Gavin11yA sci-fi series based on real extra-terrestrials would quite possibly be so
alien to us that no one would want to read it.
6billswift11yNot just science fiction and aliens either. Nearly all popular and successful
fiction is based around what are effectively modern characters in whatever
setting. I remember a paper I read back around the mid-eighties pointing out
that Louis L'Amour's characters were basically just modern Americans with the
appropriate historical technology and locations.
3Alicorn11yI might have to mess with them a bit to get an audience, yes.
3Zack_M_Davis11yOf course you can't fully describe the scenario, or you would already have your
answer, but even so, this question seems tantalizingly underspecified. Fix
everything, by what standard? Human goals aren't going to sync up exactly with
alien goals (or why even call them aliens?), so what form does the aliens'
benevolence take? Do they try to help the humans in the way that humans would
want to be helped, insofar as that problem has a unique answer? Do they give
humanity half the stars, just to be nice? Insofar as there isn't a unique answer
to how-humans-would-want-to-be-helped, how can the aliens avoid engaging in what
amounts to cultural imperialism---unilaterally choosing what human civilization
develops into? So what kind of imperialism do they choose?
How advanced are these aliens? Maybe I'm working off horribly flawed
assumptions, but in truth it seems kind of odd for them to have interstellar
travel without superintelligence and uploading. (You say you want to write
keenly plausible science fiction, so you are going to have to do this kind of
analysis.) The alien civilization has to be rich and advanced enough to send out
a benevolent rescue ship, and yet not develop superintelligence and send out a
colonization wave at near-c to eat the stars and prevent astronomical waste
[http://www.nickbostrom.com/astronomical/waste.html]. Maybe the rescue ship
itself was sent out at near-c and the colonization wave won't catch up for a few
decades or centuries? Maybe the rescue ship was sent out, and then the home
civilization collapsed or died out
[http://www.nickbostrom.com/existential/risks.html]?---and the rescue ship can't
return or rebuild on its own (not enough fuel or something), so they need some
of the Sol system's resources?
Or maybe there's something about the aliens' culture and psychology such that
they are capable of developing interstellar travel but not capable of developing
superintelligence? I don't think it should be too surprising if the aliens
should
5Alicorn11yWhy not, as long as I'm making things up?
Because they are from another planet.
I do not know enough science to address the rest of your complaints.
6orthonormal11yOK, I sense cross-purposes here. You're asking "what would be the most
interesting and intelligible form of positive alien contact (in human terms)",
and Zack is asking "what would be the most probable form of positive alien
contact"?
(By "positive alien contact", I mean contact with aliens who have some goal that
causes them to care about human values and preferences (think of the
Superhappies [http://lesswrong.com/lw/y4/three_worlds_collide_08/]), as opposed
to a Paperclipper [http://wiki.lesswrong.com/wiki/Paperclip_maximizer] that only
cares about us as potential resources for or obstacles to making paperclips.)
Keep in mind that what we think of as good sci-fi is generally an example of
positing human problems (or allegories for them) in inventive settings, not of
describing what might most likely happen in such a setting...
4Zack_M_Davis11yI'm worried that some of my concepts here are a little bit shaky and
confused in a way that I can't articulate, but my provisional answer is: because their
planet would have to be virtually a duplicate of Earth to get that kind of
match. Suppose that my deepest heart's desire, my lifework, is for me to write a
grand romance novel about an actuary who lives in New York and her unusually
tall boyfriend. That's a necessary condition for my ideal universe: it has to
contain me writing this beautiful, beautiful novel.
It doesn't seem all that implausible that powerful aliens would have a goal of
"be nice to all sentient creatures," in which case they might very well help me
with my goal in innumerable ways, perhaps by giving me a better word processor,
or providing life extension so I can grow up to have a broader experience base
with which to write. But I wouldn't say that this is the same thing as the alien
sharing my goals, because if humans had never evolved, it almost certainly
wouldn't have even occurred to the alien to create, from scratch, a human being
who writes a grand romance novel about an actuary who lives in New York and her
unusually tall boyfriend. A plausible alien is simply not going to spontaneously
invent those concepts and put special value on them. Even if they have rough
analogues to "courtship story" or even "person who is rewarded for doing economic
risk-management calculations", I guarantee you they're not going to invent New
York.
Even if the alien and I end up cooperating in real life, when I picture my ideal
universe, and when they picture their ideal universe, they're going to be
different visions. The closest thing I can think of would be for the aliens to
have evolved a sort of domain-general niceness, and to have a top-level goal for
the universe to be filled with all sorts of diverse life with their own
analogues of pleasure or goal-achievement or whatever, which me and my
beautiful, beautiful novel would qualify as a special case of. Act
4Alicorn11yDomain-general niceness works. It's possible to be nice to and helpful to lots
of different kinds of people with lots of different kinds of goals. Think
Superhappies except with respect for autonomy.
5RolfAndreassen11yI would try to study the effects of individual humans, Great-Man vs Historical
Inevitability style, by knocking out statesmen of a particular period. Hitler is
a cliche, whom I'd nonetheless start with; but I'd follow up by seeing what
happens if you kill Chamberlain, Churchill, Roosevelt, Stalin... and work my way
down to the likes of Turing and Doenitz. Do you still get France overrun in six
weeks? A resurgent German nationalism? A defiant to-the-last-ditch mood in
Britain? And so on.
Then I'd start on similar questions for the unification of Germany: Bismarck,
Kaiser Wilhelm, Franz Josef, Marx, Napoleon III, and so forth. Then perhaps the
Great War or the Cold War, or perhaps I'd be bored with recent history and go
for something medieval instead - Harald wins at Stamford Bridge, perhaps. Or to
maintain the remove-one-person style of the experiment, there's the three
claimants to the British throne, one could kill Edward the Confessor earlier, the
Pope has a hand in it, there's the various dukes and other feudal lords in
England... lots of fun to be had with this scenario!
1DanArmak11yDon't limit yourself to just killing people. It's not a good way to learn how
history works, just like studying biology by looking at organisms with defective
genes doesn't tell us everything we'd like to know about cell biology.
5dfranke11yI'd like to know what would have happened if movable type had been invented in
the 3rd century AD.
3Nick_Novitski11yFor starters, the Council of Nicaea would flounder helplessly as every
sect with access to a printing press flooded the market with their particular
version of Christianity.
4PeterS11yI've been curious to know what the "U.S." would be like today if the American
Revolution had failed.
Also, though it's a bit cliche to respond to this question with something like
"Hitler is never born", it is interesting to think about just what is necessary
to propel a nation into war / dictatorship / evil like that (e.g. just when can
you kill / eliminate a single man and succeed in preventing it?) That's
something I'm fairly curious about (and the scope of my curiosity isn't
necessarily confined to Hitler - could be Bush II, Lincoln, Mao, an Islamic imam
whose name I've forgotten, etc.).
2DanielLC11ySomething like Canada I guess.
While we're at it, what if the Continental Congress failed at replacing the
Articles of Confederation?
1i7711yCode Geass :)
3anonym11yI'd like to know what would have happened if the Library of Alexandria hadn't
been destroyed. If even the works of Archimedes alone -- including the key
insight underlying Integral Calculus -- had survived longer and been more widely
disseminated, what difference would that have made to the future progress of
mathematics and technology?
2MichaelGR11yI wonder if much in 20th century history would have been different if the USSR
had been first to land someone on the Moon.
At the time, both sides played it like it was something very important, if only for
psychological reasons. But did that symbolic victory really mean that much? Did
it actually alter the course of history much?
1blogospheroid11yChina not imposing the Hai Jin [http://en.wikipedia.org/wiki/Hai_jin] edict.
Greater Chinese exploration would have meant an extremely different and
interesting history.
3DanArmak11yMay you live in interesting times!
1[anonymous]11yA recent Facebook status of mine: Too bad Benjamin Franklin wasn't alive in
1835; he could have invented the Internet. The relay had been invented around
then; that's theoretically all that's needed for computation and error
correction, though it would go very slowly.
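The claim that relays suffice for computation can be sketched in code (a hypothetical illustration, not anything from the thread: `relay_nand` and the gate compositions are my own names). A relay is just a controlled switch; two normally-closed contacts in series give NAND, and NAND is functionally complete, so any Boolean circuit follows.

```python
# Sketch: a relay modeled as a controlled switch, showing relay logic
# suffices for general computation. Names are illustrative assumptions.

def relay_nand(a: bool, b: bool) -> bool:
    """Two normally-closed relay contacts in series: the output line
    drops only when both coils are energized -- i.e. NAND."""
    return not (a and b)

# NAND is functionally complete, so any Boolean function can be built:
def xor(a: bool, b: bool) -> bool:
    t = relay_nand(a, b)
    return relay_nand(relay_nand(a, t), relay_nand(b, t))

def half_adder(a: bool, b: bool):
    """Returns (sum, carry); chain these to add arbitrary numbers."""
    return xor(a, b), not relay_nand(a, b)
```

Chaining half adders gives arithmetic, and arithmetic plus memory (a relay latch) gives computation, slowly, as the comment says.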
3JohannesDahlstrom11yWell, Charles Babbage [http://en.wikipedia.org/wiki/Analytical_engine] was alive
back then...
3[anonymous]11yHuh. Then, uh... too bad Charles Babbage wasn't Benjamin Franklin?
Oh, and to post another "what would you find interesting" query, since I found the replies to the last one to be interesting. What kind of crazy social experiment would you be curious to see the results of? Can be as questionable or unethical as you like; Omega promises you ve'll run the simulation with the MAKE-EVERYONE-ZOMBIES flag set.
Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like.
Try to create a society of unconscious people with bicameral minds, as described in Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind", using actors taking on the appropriate roles. (Jaynes's theory, which influenced Daniel Dennett, was that consciousness is a recent cultural innovation.)
Try to create a society where people grow up seeing sexual activity as casual, ordinary, and expected as shaking hands or saying hello, and see whether sexual taboos develop, and study how sexual relationships form.
Raise a bunch of kids speaking artificial languages, designed to be unlike any human language, and study how they learn and modify the language they're taught. Or give them a language without certain concepts (relatives, ethics, the self) and see how the language influences the way they think and act.
Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like
They'd probably be like the average less wrong commenter/singularitarian/transhumanist, so really no need to run this one.
2MatthewB11yI've noticed that some of the Pacific Island countries don't have much in the
way of sexual taboos, and they tend to teach their kids things like:
* Don't stick your thingy in there without proper lube
or
* If you are going to do that, clean up afterward.
Japan is also a country that has few sexual taboos (when compared to western
Christian society). They still have their taboos and strangeness surrounding
sex, but it is not something that is considered sinful or dirty.
I am really interested in that last suggestion, and it sounds like one of the
areas I want to explore when I get to grad school (and beyond). At Eliezer's
talk at the first Singularity Summit (and other talks I have heard him give) he
speaks of a possible mind space. I would like to explore that mind space further
outside of the human mind.
As John McCarthy proposed in one of his books, it might be the case that even a
thermostat is a type of mind. I have been exploring how current computers are
a type of evolving mind with people as the genetic agents: we take things in
computers that work for us, and combine those with other things, to get an
evolutionary development of an intelligent agent.
I know that it is nothing special, and others have gone down that path as well,
but I'd like to look into how we can create these types of minds biologically.
Is it possible to create an alien mind in a human brain? Your 4th suggestion
seems to explore this space. I like that (I should upvote it as a result).
1NancyLebovitz11yPoint 1: I'm not sure what you mean by physical needs. If human babies aren't
cuddled, they die. Humans are the only known species to do this.
A General Theory of Love
[http://www.amazon.com/General-Theory-Love-Thomas-Lewis/dp/0375709223] describes
the connection between the limbic system and love-- I thought it was a good
book, but to judge by the Amazon reviews, it's more personally important to a
lot of intellectual readers than I would have expected.
1Blueberry11yI've heard that called "failure to thrive" before. Yes, we'd need some kind of
machine to provide whatever tactile stimulation was required. Given the way many
primates groom each other and touch each other for social bonding, I'd be
surprised if it were just humans who needed touch.
1NancyLebovitz11yA lot of animals need touch to grow up well. Only humans need touch to survive.
A General Theory of Love describes experiments with baby rodents to determine
which physical systems are affected by which aspects of contact with the
mother-- touch is crucial for one system, smell for another.
3MBlume11yI'd like to put about 50 anosognosiacs and one healthy person in a room on some
pretext, and see how long it takes the healthy person to notice everyone else is
delusional, and whether ve then starts to wonder if ve is delusional too.
3Kaj_Sotala11yI'd be really curious to see what happened in a society where your social gender
was determined by something else than your biological sex. Birth order, for
instance. Odd male and even female, so that every family's first child is
considered a boy and their second a girl. Or vice versa. No matter what the
biology. (Presumably, there'd need to be some certain sign of the gender to tell
the two apart, like all social females wearing a dress and no social males doing
so.)
1MatthewB11yI'd like to know how many people would eat human meat if it were not so taboo (no
nervous system, so as to avoid nasty prion diseases). Ever since I
accidentally had a bite of finger when I was about 19, I've wondered what a
real bite of a person would taste like (prepared properly... maybe a
ginger/garlic sauce???).
Also, building on Kaj Sotala's proposal, what about sexual assignment by job or
profession (instead of biological sex). So, all Doctors or Health Care workers
would be female, all Soldiers would be male, all ditch diggers would be male,
yet all bakers would be female. All Mailmen would be male, yet all waiters would
be female.
Then, one could have multiple sex-assignments if one worked more than one job.
How about a neuter sex and a dual sex in there as well (so the neuter sex would
have no sex, and the hermaphrodite would be... well, both...)?
2orthonormal11yAfter your prior revelations
[http://lesswrong.com/lw/1lb/are_wireheads_happy/1e77] and this, I'm waiting for
the third shoe to drop.
3MatthewB11yThen shoes could be dropping for quite a while...
Edit: I better stop biographing for a while. I've led a life that has been
colorful to say the least (I wish that it had been more profitable - it was at
one point... But, well, you have a link to what happened to the money)
1RichardKennaway11yIsn't that circular? Not eating human meat is the taboo.
2MatthewB11yA better way to have said that would be:
In other words: If there were no taboo against eating human meat, how many
people would eat it?
From what I remember of the bite of finger, it had a white meat taste. Sort of
like pork-turkey... I guess kinda like a hot dog (only it had no salt on/in it
beyond the sweat that was on the hand).
I do think that human meat would stack up against Pork and Turkey as a delicious
meat. Maybe if we ate condemned criminals. They would spend their time in prison
before their execution fattening up. (OK, I realize that I am getting really
out-there morbid now).
Cannibalism is a subject that fascinates me though. I have often wondered about
fantastic settings in which the only thing that existed to eat was other people.
Say, a planet in which there existed no other life forms at all. No plants,
microbes, animals, etc. The Planet would have water, or maybe springs that had a
liquid that contained nutrients that weren't in human meat... And, it would have
people. So, the people would be the only things to eat, and the only things out
of which tools could be made.
I do actually have a series of stories based upon this premise written. It was
an interesting thought experiment to think about the types of cultures that
could arise to deal with such a dilemma. And, if the inhabitants didn't know
that any other life existed (and had some cultural memory of the expression "You
are what you eat"), then they might consider it a horrid idea to eat anything but
people (should they eventually discover that other people from other planets eat
dumb animals and plants that cannot even think).
If "You are what you eat," then eating a stupid immobile plant or a flatulent
stupid bovine would seem like the ultimate in self-condemnation.
2Nick_Tarleton11yLarry Niven, "Bordered in Black". Sort of.
3MatthewB11yIsn't that the short story where the first two superluminal astronauts
arrive at a planet that contains a giant ocean and just one island, surrounded
by a dark black line?
The dark black area turns out to be algae and people's remains, and a crowd of
people wander the island's coast eating either the algae or each other.
I don't see a very large similarity (but then I am looking at it from much more
information about the place than you), as those people had no real developed
culture or solitary food source. I was surprised to read it when I did, because
it did come close to my idea (I first thought of this idea in 2nd grade when we
had a nutritional lecture: "You Are What You Eat"). I spent three weeks
wondering when the cafeteria was going to start serving people. I figured "I am
a person. If I am what I eat, then I must eat people to continue being one." The
teacher had to call my parents when I asked her directly about when we would
start eating people or, if "This was only something grown-ups did." My mother
did her normal "How could you do this to me?!", and my father did the "Look what
you've done to your mother!"
The Culture that I envisioned was large and highly populous, and the whole point
of life was to eventually be able to give your meat to your family (although,
many children are eaten if they don't live up to standards). They build cities
out of mud and bone, and use glass for some tools (created by burning bone and
intestinal gases created in special people who are nothing but huge guts. These
people also produce other chemicals in different metabolic processes, but the
point is that a whole class of person exists that is nothing but a chemical
factory. These people usually have most of their cortex removed as well, so they
are basically vegetables. They use the neocortex of these people as artificial
memory devices).
There are other groups on this imaginary world as well, who are much less
"Civilized" and predatory. They all live under
2Multiheaded9yWow. Did elements of this appear in your mind during one or several bad trips?
I recommend making a longer list of recent comments available, the way Making Light does.
If you've been working with dual n-back, what have you gotten out of it? Which version are you using?
Would an equivalent to a .newsrc be possible? I would really like to be able to tell the site that I've read all the comments in a thread at a given moment, so that when I come back, I'll default to only seeing more recent comments.
2RichardKennaway11yYears ago I was involved with both Loglan [http://www.loglan.org/] (the
original) and Lojban [http://www.lojban.org/] (the spin-off, started by a Loglan
enthusiast who thought the original creator was being too possessive of Loglan).
For me it was simply an entertaining hobby, along with other conlangs such as
Láadan and Klingon. But in the history of artificial languages, it is important
as the first to be based on the standard universal language of mathematics,
first-order predicate calculus.
If quantum immortality is correct, and assuming life extension technologies and uploading are delayed for a long time, wouldn't each of us, in our main worldline, become more and more decrepit and injured as time goes on, until living would be terribly and constantly painful, with no hope of escape?
4Alicorn11yWe frequently become unconscious (sleep) in our threads of experience. There is
no obvious reason we couldn't fall comatose after becoming sufficiently
battered.
3SoullessAutomaton11yI present for your consideration a delightful quote, courtesy of a discussion on
another site [http://news.ycombinator.com/item?id=928054]:
I think the moral of the story is: stay healthy and able-bodied as much as
possible. If, at some point, you should find yourself surviving far beyond what
would be reasonably expected, it might be wise to attempt some strategic quantum
suicide reality editing while you still have the capacity to do so...
3Roko11yHow could it be "correct" or "incorrect"? QI doesn't make a falsifiable factual
claim, as far as I know...
5orthonormal11yA superhuman intelligence that understood the nature of human consciousness and
subjective experience would presumably know whether QI was correct, incorrect,
or somehow a wrong question. Consciousness and experience all happen within
physics, they just currently confuse the hell out of us.
2Roko11yI think it is becoming clear that it is a wrong question.
see Max Tegmark on MWI
[http://arxiv.org/PS_cache/quant-ph/pdf/9709/9709032v1.pdf]
1orthonormal11yNeat paper!
2Eliezer Yudkowsky11y"The author recommends that anyone reading this story sign up with Alcor or the
Cryonics Institute to have their brain preserved after death for later revival
under controlled conditions."
(From a little story
[http://www.fanfiction.net/s/5389450/1/The_Finale_of_the_Ultimate_Meta_Mega_Crossover]
which assumes QTI.)
1rwallace11yEven supposing this unpleasant scenario is true, it is not hopeless. There are
things we can do to improve matters. The timescale to develop life extension and
uploading is not a prior constant; we can work to speed it up, and we should be
doing this anyway. And we can sign up for cryonics to obtain a better
alternative worldline.
I spent December 23rd, 24th and 25th in the hospital. My uncle died of brain cancer (Glioblastoma multiforme). He was an atheist, so he knew that this was final, but he wasn't signed up for cryonics.
We learned about the tumor 2 months ago, and it all happened so fast... and it's so final.
This is a reminder to those of you who are thinking about signing up for cryonics; don't wait until it's too late.
Because trivial inconveniences can be a strong deterrent, maybe someone should make a top-level post on the practicalities of cryonics; an idiot's guide to immortality.
9Alicorn11yI want to sign up. I don't want to sign up alone. I can't convince any of my
family to sign up with me. Help.
9Eliezer Yudkowsky11yMost battles like this end in losses; I haven't been able to convince any of my
parents or grandparents to sign up. You are not alone, but in all probability,
the ones who stand with you won't include your biological family... that's all I
can say.
5Technologos11yNow that would be a great extension of the LW community--a specific forum for
people who want to make rationalist life decisions like that, to develop a more
personal interaction and decrease subjective social costs.
5aausch11yIt could be a more general advice-giving forum. Come and describe your problem,
and we'll present solutions.
That might also be a useful way to track the performance of rationalist methods
in the real world.
1Technologos11yI like it. Sure would beat the hell out of a lot of the advice I've heard, and
if nothing else it would be good training in changing our minds and in
aggregating evidence appropriately.
4Dagon11yCan I help by pointing out flaws in your implied argument ("I believe cryonics
is worthwhile, but without my family, I'd rather die, and they don't want to")?
Do you intend to kill yourself when some or all of your current family dies? If
living beyond them is positive value, then cryonics seems a good bet even if no
current family member has signed up.
Also, your argument that they should sign up gets a LOT stronger with
your family if you're actually signed up and can help with the paperwork,
insurance, and other practical barriers. In fact, some of your family might be
willing to sign up if you set everything up for them, including paying, and they
just have to sign.
In fact, cryonics as gift seems like a win all around. It's a wonderful signal:
I love you so much I'll spend on your immortality. It gets more people signed
up. It sidesteps most of the rationalization for non-action (it's too much
paperwork, I don't know enough about what group to sign up, etc.).
7Alicorn11yNo. I do expect to create a new family of my own between now and then, though.
It is the prospect of spending any substantial amount of time with no beloved
company that I dread, and I can easily imagine being so lonely that I'd want to
kill myself. (Viva la extroversion.) I would consider signing up with a
fiancé(e) or spouse to be an adequate substitute (or even signing up one or more
of my offspring) but currently have no such person(s).
Actually, shortly after posting the grandparent, I decided that limiting myself
to family members was dumb and asked a couple of friends about it. My best
friend has to talk to her fiancé first and doesn't know when she'll get around
to that, but was generally receptive. Another friend seems very on-board with
the idea. I might consider buying my sister a plan if I can get her to explain
why she doesn't like the idea (it might come down to finances; she's being weird
and mumbly about it), although I'm not sure what the legal issues surrounding
her minority are.
Edit: Got a slightly more coherent response from my sister when I asked her if
she'd cooperate with a cryonics plan if I bought her one. Freezing her when she
dies "sounds really, really stupid", and she's not interested in talking about
her "imminent death" and asks me to "please stop pestering her about it". I
linked her to this
[http://lesswrong.com/lw/2d/talking_snakes_a_cautionary_tale/], and think that's
probably all I can safely do for a while. =/
3AndrewWilcox11yYou have best friends now; how did you meet them? In the worst-case scenario
where people you currently know don't make it, do you doubt that you'll be able
to quickly make new friends?
Suppose that there are hundreds of people who would want to be your best friend,
and that you would genuinely be good friends with. Your problem is that you
don't know who they are, or how to find them. Not to be too much of a technology
optimist :-), but imagine if the super-Facebook-search engine of the future
would be able to accurately put you in touch with those hundreds.
3Peter_de_Blanc11yEven if none of your relatives sign up for cryonics, I would expect some of them
to still be alive when you are revived.
5Vladimir_Nesov11ySince there is already only a slim chance of actually getting to the revival
part (even though high payoff keeps the project interesting, like with
insurance), after mixing in the requirement of reaching the necessary tech in
(say) 70 years for someone alive today to still be around, and also managing to
die before that, not a lot is left, so I wouldn't call it something to be
"expected". "Conditional on you getting revived, there is a good chance some of
your non-frozen relatives are still alive" is more like it (and maybe that's
what you meant).
2Alicorn11yDo you mean that a relative I have now, or one who will be born later, will
probably be around at that time? Because the former would require that I die
soon (while my relatives don't) or that there's an awfully rapid turnaround
between my being frozen and my being defrosted.
4JamesAndrix11yWell the whole point of signing up now is that you might die soon.
So sign up now. If you get to be old And still have no young family And the
singularity doesn't seem close, then cancel.
3scotherns11yDo it anyway. Lead by example. Over time, you might find they become more used
to the idea, particularly if they have someone who can help them with the
paperwork and organisational side of things. If you can help them financially,
so much the better.
If you are successfully revived, you will have plenty of time to make new
friends, and start a new family. I don't mean to sound callous, but it's not
unheard of for people to lose their families and eventually recover. I'm doing
everything I can to persuade my family to sign up, but it's up to them to make
the final decision.
I'd give my life to save my family, but I wouldn't kill myself if I found myself
alone.
1Alicorn11yI'd be more convinced of my ability to lead by example if I'd ever convinced
anyone to become a vegetarian.
3AngryParsley11yIt's much easier to overcome your own aversion to signing up alone than to
convince your family to sign up with you. Even assuming you can convince them
that living longer is a good thing, there are a ton of prerequisites needed
before one can accurately evaluate the viability of cryonics.
2rwallace11yI think it's great that you've taken the first steps, and would encourage you to
go ahead and sign up.
In my experience, arguing with people who've decided they definitely don't want
to do something, especially if their reasons are irrational, is never
productive. As Eliezer says, it may simply be that those who stand with you will
be your friends and the family you create, not the family you came from. But I
would guess the best chance of your sister signing up would be obtained by you
going ahead right now, but not pushing the matter, so that in a few years the
fact of your being signed up will have become more of an established state of
affairs.
It's a sobering demonstration of just how much the human mind relies on social
proof for anything that can't be settled by immediate personal experience.
(Conjecture: any intelligence must at least initially work this way; a universe
in which it were not necessary, would be too simple to evolve intelligence in
the first place. But I digress.)
Is there anything that can be done to bend social instinct more in the right
direction here? For example, I know there have been face-to-face gatherings for
those who live within reach of them; would it help if several people at such a
gathering showed up wearing 'I'm signed up for cryonics' badges?
1byrnema11yWhat do you perceive as the main barrier to their signing up?
5Alicorn11yMy dad was the only one with any non-mumbling answer to the suggestion. I told
him I wanted him to live forever and he told me I was selfish. He said some
things about overpopulation and global warming and universalizability and no
proven results from the procedure.
3Roko11yWell, if it is any consolation, I have had zero success and a bunch of ridicule
from all friends and family I mentioned the idea to.
I've had the "selfish, overpopulation and global warming" objection from my
mother, and I then reminded her that (a) she had a fair amount of personal
wealth and wasn't remotely interested in spending any of it on third world
charities, charities that try to reduce population, or efficient ways to combat
global warming and (b) she wasn't in favor of killing people to reduce
population. Of course, this had no effect.
2DanArmak11yDo you think it's worthwhile to argue with him rationally on the details, or
that if you make him understand his reasons aren't valid he'll just mumble "no"
like the rest of your family?
2Alicorn11yArguing with my dad is profoundly unpleasant, and he is extremely stubborn. I
may send him links to websites, especially if I need his cooperation to involve
my sister because she's 16, but I don't anticipate a good result from continuing
to engage him directly (at least if I'm the one doing it: our relationship
history is such that the odds of me convincing him of anything he's presently
strongly against approach nil, and prolonged attempts to do so end in tears.)
Alexandre Borovik summarizes the Bayesian error in the null hypothesis rejection method, citing the classic J. Cohen (1994), 'The Earth Is Round (p < .05)', American Psychologist 49(12):997-1003.
If a person is an American, then he is probably not a member
of Congress. (TRUE, RIGHT?) This person is a member of Congress. Therefore, he is probably not an American.
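The base-rate arithmetic behind this fallacy is easy to check with rough numbers (the population figures below are my own approximations, not from Cohen's paper):

```python
# Rough, assumed figures: about 300 million Americans and 535 members
# of Congress, every one of whom is an American.
americans = 300_000_000
congress = 535

# "If a person is an American, he is probably not in Congress" -- true:
p_not_congress_given_american = 1 - congress / americans  # ~0.999998

# But the inverted conditional is flatly false: every member of
# Congress IS an American.
p_not_american_given_congress = 0.0

print(p_not_congress_given_american > 0.999998)  # True
```

The premise holds with overwhelming probability, yet the "therefore" is certainly wrong; the syllogism silently swaps P(B|A) for P(A|B), which is exactly the error Cohen attributes to null hypothesis rejection.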
What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.
Nick Bostrom, however, once asked whether it would make sense to build an Oracle AI, one that only answered questions, and ask it our questions about Friendly AI.
Has Bostrom made this proposal in anything published? I can't seem to find it on nickbostrom.com.
Different responses to challenges, seen through the lens of video games. Although I expect the same can be said for character-driven stories (rather than, say, concept-driven ones).
It turns out there are two different ways people respond to challenges. Some people see them as opportunities to perform - to demonstrate their talent or intellect. Others see them as opportunities to master - to improve their skill or knowledge.
Say you take a person with a performance orientation ("Paul") and a person with a mastery orientation ("Matt"). Give them
This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)
4Eliezer Yudkowsky11yThat is pretty ridiculous - enough to make me want to check the original study
for effect size and statistical significance. Writing newspaper articles on
research without giving the original paper title ought to be outlawed.
2AllanCrossman11y"Small Sounds, Big Deals: Phonetic Symbolism Effects in Pricing", DOI:
10.1086/651241
http://www.journals.uchicago.edu/doi/pdf/10.1086/651241
[http://www.journals.uchicago.edu/doi/pdf/10.1086/651241]
Whether you'll be able to access it I know not.
2timtyler11ySame researchers, somewhat similar effect:
"Distortion of Price Discount Perceptions: The Right Digit Effect"
* http://www.journals.uchicago.edu/doi/abs/10.1086/518526
[http://www.journals.uchicago.edu/doi/abs/10.1086/518526]
What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?
3Paul Crowley11y(a). Lots of us scan the "Recent Comments" page, so if a discussion starts up
there plenty of people will get on board.
1orthonormal11yI think each has their advantages. If you post a comment on the open thread,
it's more likely to be read and discussed now; if you post one on the original
thread, it's more likely to be read by people investigating that particular
issue some time from now.
Dawkins: We could devise a little experiment where we take your forecasts and then give some of them straight, give some of them randomized, sometimes give Virgo the Pisces forecast et cetera. And then ask people how accurate they were.
Astrologer: Yes, that would be a perverse thing to do, wouldn't it.
Dawkins: It would be - yes, but I mean wouldn't that be a good test?
Astrologer: A test of what?
Dawkins: Well, how accurate you are.
Astrologer: I think that your intention there is mischief, and I'd think what you'd then get back is mischief.
Dawkins: Well my intention would not be mischief, my intention would be experimental test. A scientific test. But even if it was mischief, how could that possibly influence it?
Astrologer: (Pause.) I think it does influence it. I think whenever you do things with astrology, intentions are strong.
Dawkins: I'd have thought you'd be eager.
Astrologer: (Laughs.)
Dawkins: The fact that you're not makes me think you don't really in your heart of hearts believe it. I don't think you really are prepared to put your reputation on the line.
Astrologer: I just don't believe in the experiment, Richard, it's that simple.
Dawkins: Well you're in a kind of no-lose situation then, aren't you.
5PhilGoetz11yDawkins: "Well... you're sort of in a no-lose situation, then."
Astrologer: "I certainly hope so."
3Cyan11yA fine example of:
3AngryParsley11yThat video has been taken down, but you can skip to around 5 minutes into this
video [http://video.google.com/videoplay?docid=-7218293233140975017] to watch
the astrology bit.
0blashimov8yThe linked video is set to private? I can't view it. Not a big deal, the
transcript is almost as good.
He believes that they are sinning. Mormons have a really complicated dolled-up afterlife, so if he's sticking to doctrine, he probably doesn't actually expect gays as a group to all go to Hell.
Edit: He wrote a gay guy in the Memory of Earth series too (the plot of which, I later found, is a blatant ripoff of the Book of Mormon). Like the gay guy in Songbird, this one ends up with a woman, although less tragically.
3Jack11yI have to say, it is an interesting coincidence that he has written two gay
characters who end up with women. Especially since he is absolutely terrible at
writing (heterosexual) sex scenes/sexuality. I mean, really, I've never read a
professional writer who was worse at this.
3SilasBarta11yIs there any significance to how OSC avoids using the standard terms for gay,
but instead uses a made-up in-world term for it that you have to infer means
"gay". (At least in the Memory of Earth series; I haven't read the other.)
2bogus11ywtf? that's the kwyjiboest thing I've ever seen. omg lol
2Alicorn11yI don't think it's a coincidence at all. The way I understand it is that under
Mormon doctrine, the act, not the temptation towards the act, is what's a sin:
so a gay character who marries a woman and (regardless of whether he actually
has sex with her or not) refrains from extramarital sexual activity is just fine
and dandy. The Songbird character didn't get married; the Memory of Earth one
did. But the former, while not "demonized", was presented as a fairly weak
person; the latter was supposed to be a generally decent guy.
2AdeleneDawner11yI went back and checked my source (wikipedia); you're right, I'd mis-remembered.
One definition of a prime, of course, is "a number whose only factors are itself and 1, except for 1 itself". Another, however, is "a number with exactly two factors", which is probably simpler than "a number whose only factors are itself and 1". And if 1 were prime, it would be a highly exceptional one, in that there would be many places where one would have to say "all prime numbers except 1".
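The "exactly two factors" definition translates directly into code; a minimal sketch (function names are my own):

```python
def num_factors(n):
    """Count the positive divisors of n by trial division."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_prime(n):
    # "A number with exactly two factors": this wording excludes 1
    # automatically, since 1 has only the single divisor 1.
    return num_factors(n) == 2

print([n for n in range(1, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```

No "except for 1" clause is needed anywhere, which is the sense in which this definition is the simpler one.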
e^x is often said to be the only function that is its own derivative, as if the zero function somehow didn't count.
Within the context of the article, the bigger form of the argument can be phrased as such:
DirectX is not cross-platform
OpenGL is cross-platform
Blizzard is successful
Blizzard releases cross-platform software
It is more successful to release cross-platform software
It is more successful to use OpenGL than DirectX
This is bad and wrong. As a snap judgement, it is likely that releasing cross-platform software is a more successful thing to do, but using that snap judgement to build bigger arguments is dangerous.
If the tiebreak strategy is "agree with the previous person's guess", then you reach that point immediately. The first person's draw determines everyone's guess: If the second person's draw is the same as the first, then of course they agree, and if not then they're at a 50/50 posterior and thus also agree.
If the tiebreak strategy is "write down your own draw (i.e. maximize the information given to subsequent players)", then information can be collected only so long as the number of each color drawn remains tied or +/-1. As soon as one ... (read more)
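The first policy's immediate cascade is easy to see in simulation. A toy sketch (my own illustration, assuming the usual setup where each private draw matches the true urn's majority color with probability 2/3):

```python
import random

def run_game(n_players, p_match=2/3, seed=0):
    """Each player privately draws a ball matching the true urn's majority
    color with probability p_match, then guesses publicly in turn."""
    rng = random.Random(seed)
    draws = ['A' if rng.random() < p_match else 'B' for _ in range(n_players)]
    guesses = [draws[0]]  # the first player has only her own draw
    for draw in draws[1:]:
        prev = guesses[-1]
        # If own draw conflicts with the public guess, the posterior is
        # 50/50, and the tiebreak "agree with the previous guess" wins.
        guesses.append(draw if draw == prev else prev)
    return draws, guesses

draws, guesses = run_game(20)
print(all(g == draws[0] for g in guesses))  # True: everyone echoes draw #1
```

Under this tiebreak, no public guess after the first carries any new information, which is exactly the degenerate cascade described above.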
1Morendil11yNice - I hadn't gotten so far as analyzing the other tiebreak policy.
"Prior information" in this kind of problem includes a bunch of rather unlikely
assumptions, such as that every player is maximally rational and that the rules
of the game reward picking the true choice of urn.
Unfortunately there is no reason to prefer one tiebreak policy over the other.
Does it make the problem more determinate if we assume the game scores per
Bayesian Truth Serum, that is, you get more points for a contrarian choice that
happens to be right ?
1pengvado11ySince the total evidence you can get from examining all previous guesses
(assuming conventional strategy and rewards as before) gives you only a 4/5
accuracy, and you can get 2/3 by ignoring all previous guesses and looking only
at your own draw: Yes, rewarding correct contrarians at least 20% more than
correct majoritarians would provide enough incentive to break the information
cascade. Only until you've accumulated enough extra information to make the
majoritarian answer confident enough to overcome the difference between rewards,
of course, but it would still equilibrate at a higher accuracy.
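The 20% figure is just the break-even ratio between the two accuracies cited in the comment (the arithmetic below only restates those numbers):

```python
p_majoritarian = 4 / 5  # accuracy available from all previous guesses
p_own_draw = 2 / 3      # accuracy from your private draw alone

# A contrarian bonus multiplier b breaks the cascade once
# p_own_draw * b >= p_majoritarian, i.e. b >= (4/5) / (2/3).
break_even_bonus = p_majoritarian / p_own_draw
print(round(break_even_bonus, 9))  # 1.2
```

Any reward premium above this ratio makes writing down one's own draw at least as attractive as joining the majority, restoring the flow of information.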
I take "life experience" to mean a haphazard collection of anecdotes.
I don't think that's something that most people who think "life experience" is valuable would agree to.
Claims from haphazardly collected anecdotes do not constitute legitimate evidence, though I concede those claims do often have positive correlations with true facts.
It might be profitable for you to revise your criteria for what constitutes legitimate evidence. Throwing away information that has a positive correlation with the thing you're wondering about seems a bit hasty.
A few years back I did an ethics course at university. It very quickly made me realise that both I and most of the rest of the class based our belief in the existence of objective ethics simply on a sense that ethics must exist. When I began to question this idea my teacher asked me what I expected an objective form of ethics to look like. When I said I didn't know, she asked if I would agree that a system of ethics would be objective if it could be universally calculated by any unbiased, perfectly logical being. This seemed fair enough but the problem... (read more)
7Vladimir_Nesov11y1) Why would a "perfectly logical being" compute (do) X and not Y? Do all
"perfectly logical beings" do the same thing? (Dan's comment
[http://lesswrong.com/lw/1lf/open_thread_january_2010/1ej0]: a system that
computes your answer determines that answer, given a question. If you presuppose
an unique answer, you need to sufficiently restrict the question (and the
system). A universal computer will execute any program (question) to produce its
output (answer).) All "beings" won't do exactly the same thing, answer any
question in exactly the same way. See also: No Universally Compelling Arguments
[http://lesswrong.com/lw/rn/no_universally_compelling_arguments/].
2) Why would you be interested in what the "perfectly logical being" does? No
matter what argument you are given, it is you that decides whether to accept it.
See also: Where Recursive Justification Hits Bottom
[http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/],
Paperclip maximizer [http://wiki.lesswrong.com/wiki/Paperclip_maximizer], and
more generally Metaethics sequence
[http://wiki.lesswrong.com/wiki/Metaethics_sequence].
2.5) What humans want (and you in particular
[http://lesswrong.com/lw/rl/the_psychological_unity_of_humankind/]), is a very
detailed notion [http://wiki.lesswrong.com/wiki/Complexity_of_value], one that
won't automatically appear from a question that doesn't already include all that
detail. And every bit of that detail is incredibly important to get right
[http://lesswrong.com/lw/y3/value_is_fragile/], even though its form isn't fixed
[http://lesswrong.com/lw/v4/which_parts_are_me/] in human image.
4Jack11yI don't know what you mean by objective ethics. I believe there are ethical
facts but they're a lot more like facts about the rules of baseball than facts
about the laws of physics.
4DanArmak11y"Calculated" based on what? What is the question that this would be the answer
to?
Also, how can you define "bias" here?
As you can guess from my questions, I don't even see what an objective system of
ethics could possibly mean :-)
3MatthewB11yThis seems to be my biggest problem as well. I have been trying to find
definitions of an objective system of ethics, yet all of the definitions seem so
dogmatic and contrived. Not to mention varying from time to time depending upon
the domain of the ethics (whether they apply to Christians, Muslims, Buddhists,
etc.)
2[anonymous]11yI know all those replies weren't posted just to aid me but thanks for posting
them nevertheless. Obviously I at least need to put more thought into what
ethics is and hence what my question means. Maybe the question will disappear
following that but, if not, at least I'll be on more solid ground to try to
respond to it.
1MatthewB11yI am having a discussion on a forum where a theist keeps stating that there must
be objective truth, that there must be objective morality, and that there is
objective knowledge that cannot be discovered by Science (I tried to point out
that if it were Objective, then any system should be capable of producing that
knowledge or truth).
I had completely forgotten to ask him if this objective truth/knowledge/morality
could be discovered if we took a group of people, raised in complete isolation,
and then gave them the tools to explore their world. If such things were truly
objective, then it would be trivial for these people to arrive at the discovery
of these objective facts.
I shall have to remember this. And while such objective knowledge/ethics may
indeed exist, why is it that our ethical systems across the globe seem to have
a few things in common, but disagree on a great many more?
1PhilGoetz11yYou can't ask whether there are more things in common than not in common, unless
you can enumerate the things to be considered. If everyone agrees on something,
perhaps it doesn't get categorized under ethics anymore. Or perhaps it just
doesn't seem salient when you take your informal mental census of ethical
principles.
Excellent response to the theist.
1MatthewB11yDoh!
Yes, of course... Slip of the brain's transmission there.
As for the response to the theist, I wish that I had used that specific
response. I cannot recall now what I did use to counter his claims.
As I mentioned, his claim was that there is knowledge that is not available to
the scientific method, yet can be observed in other ways.
I pointed out that there were no other ways of observing things than empirical
methods, and that if some method by which knowledge just enters our brain should
be discovered (Revelation), and its reliability were determined, then this would
just be another form of observation (Proprioception) and the whole process would
then just be another tool of science.
He just couldn't seem to get around the fact that as soon as he makes an
empirical claim that it falls within the realm of scientific discovery.
He was also misusing Gödel's incompleteness theorem (some true statements in a
formal system cannot be proved within that formal system).
At which point, he began to conflate science as some sort of religion and god
that was being worshiped, and from which everything was meaningless and thus
there were no ethics, so he could just go kill and rape whoever he pleased.
It frightens me that there are such people in the world.
P(A)*P(B|A) = P(B)*P(A|B). Therefore, P(A|B) = P(A)*P(B|A) / P(B). Therefore, woe is you should you assign a probability of 0 to B, only for B to actually happen later on; P(A|B) would include a division by 0.
Once upon a time, there was a Bayesian named Rho. Rho had such good eyesight that she could see the exact location of a single point. Disaster struck, however, when Rho accidentally threw a dart, its shaft so thin that its intersection with a perfect dartboard would be a single point, at a perfect dartboard. You see, when you randomly select a point f... (read more)
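The division-by-zero point above can be made concrete; a minimal sketch with made-up probabilities:

```python
def posterior(p_a, p_b_given_a, p_b):
    # Bayes' rule: P(A|B) = P(A) * P(B|A) / P(B)
    return p_a * p_b_given_a / p_b

# Illustrative numbers only:
print(round(posterior(0.5, 0.8, 0.65), 3))  # 0.615

# Assign P(B) = 0, then try to condition on B actually happening:
try:
    posterior(0.5, 0.8, 0.0)
except ZeroDivisionError:
    print("woe: cannot update on an event assigned probability 0")
```

A probability-0 assignment leaves you with no lawful way to update when the "impossible" event occurs, which is the usual argument against ever assigning exactly 0.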
My life plus the life of a random stranger, for example. If I was doomed to die in a certain fashion but had the chance to save another life (even in a way nobody would ever know about), well, that's a no-brainer for me.
EDIT: Ah, now I see the context. How about the following hypothetical:
I am on a spaceship returning to Earth when all my shipmates die. I realize that I am a carrier for a horrific disease; I will never get sick from it, but I can transmit it to others, of whom 99% will die. Let's furthermore imagine that the people on the ground don't ... (read more)
3[anonymous]11yBut non-total mass extinction events are awesome! The overpopulation immediately
vanishes! Uh, hang on a moment, let me rethink something.
2MatthewB11yThis has pretty much been what has prevented three suicide bombers from
succeeding. In the first case (the flight that crashed in Pennsylvania on 9/11),
all aboard lost their lives to prevent a much more horrible catastrophe. The
Shoe Bomber and the Underwear Bomber were both stopped (successfully) by the
passengers on the aircraft without any loss of life, yet they all knew this was possible.
I value my life, as I am certain every one of them did, yet they valued the
lives of others as well, and in a situation, where to not act was to have both
certain death of oneself coupled with the certain deaths of many others, almost
anyone would choose to act rather than do nothing, as I would.
In fact, I believe that this is our strongest defense against Terrorism in the
USA. If the suicide bombers who try to attack us discover that we are willing to
die to prevent them from dying in their attempt... It will take a lot of the
impetus out of them (after all, most are doing this to martyr themselves, and
failure is a horrible thing to them).
I think that there are also other things that I might value more than my life.
For instance I might value not creating another life more than my own life
depending upon the circumstances. But, pretty much all of those things involve
the sacrifice of my life for something that is greater than myself. If one
thinks that they are the greatest thing on earth... well, that is going to be a
lonely existence.
5RichardKennaway11yI work out and eat healthily to make right now better.
Of course, I hope that the body will last longer as well, but I wouldn't
undertake a regimen that guaranteed I'd see at least 120, at the cost of never
having the energy to get much done with the time. Not least because I'd take
such a cost as casting doubt on the promise.
3Jawaka11yI stopped smoking after I learned about the Singularity and Aubrey de Grey. I
don't have any really good data on what healthy food is, but I think I am doing
alright. I have also signed up at a gym recently. However, I don't think I can
sign up for cryonics in Germany.
1Morendil11yYou can sign up from anywhere, in principle (CI and Alcor list a number of
non-US members). The major issue is that it will obviously cost more to
transport you to suspension facilities in the US, while avoiding damage to your
brain cells in transit.
One disturbing thing about cryonics is that it forces you to allocate
probabilities to a wide range of end-of-life scenarios. Am I more likely to die
hit by a truck (in which case I wouldn't make much of my chances for successful
suspension and revival), or a fatal disease diagnosed early enough, yet not
overly aggressive, such that I can relocate to Michigan or Arizona for my final
weeks? And who knows how many other likely scenarios there are.
3DanArmak11yI'd guess that getting your local hospitals and government to allow your body to
be treated correctly would be the biggest non-financial problem.
I live in Israel, and even if I had unlimited money and could sign up, I'm not
at all sure I could solve this problem except by leaving the country.
1AngryParsley11yI'm signed up for cryonics and I exercise regularly. I usually run 3-4 miles a
day and do some random stretching, push-ups, and sit-ups. I slack if I'm on
vacation or if the weather is bad. I never eat properly. Some days I forget most
meals. Other days I'll have bacon and ice cream.
1scotherns11yI work out regularly, eat healthy, and I am signed up for Cryonics. One data
point for you :-)
You value things other than your own life, hence your life isn't priceless to you either (there are hypothetical situations where you would exchange your life for a significant improvement in the other things you value), though its value will of course be different for you and for other people, perhaps by a couple of orders of magnitude.
A little knowledge can be a dangerous thing. At least Eliezer has previously often recommended Judgment Under Uncertainty as something people should read. Now, I'll admit I haven't read it myself, but I'm wondering if that might be bad advice, as the book's rather dated. I seem to frequently come across articles that cite JUU but either suggest alternative interpretations or debunk its results entirely.
Just today, I was trying to find recent articles about scope insensitivity that I could cite. But on a quick search I primarily ran across articles point... (read more)
Two studies explored the role of implicit theories of intelligence in adolescents'
mathematics achievement. In Study 1 with 373 7th graders, the belief that intelligence is malleable (incremental theory) predicted an upward trajectory in grades over the two years of junior high school, while a belief that intelligence is fixed (entity theory) predicted a flat trajectory. A mediational model including learning goals, positive beliefs about
4Nick_Tarleton11yNo, it doesn't. What about weight?
4whpearson11yFair point. Would you agree with, "People on lesswrong commonly talk as if
intelligence is a thing we can put a number to (without temporal qualification),
which implies a fixed trait."?
We often say our weight is currently X or Y. But people rarely say their IQ is
currently Z, at least in my experience.
4Zack_M_Davis11yIf it works, it can't be a lie. In any case, surely a sophisticated
understanding does not say that intelligence is malleable or not-malleable.
Rather, we say it's malleable to this-and-such an extent in such-and-these
aspects by these-and-such methods.
2Kaj_Sotala11y"Intelligence is malleable" can be a lie and still work. Kids who believe their
general intelligence to be malleable might end up exercising domain-specific
skills and a general perseverance so that they don't get too easily discouraged.
That leaves their general intelligence unchanged, but nonetheless improves
school performance.
A suggestion for the site (or perhaps the Wiki): It would be useful to have a central registry for bets placed by the posters. The purpose is threefold:
1. Aid the memory of posters, who might accumulate quite a few bets as time passes.
2. Form a record of who has won and lost bets, helping us calibrate our confidences.
3. Formalise the practice of saying "I'll take a bet on that", prodding us to take care when posting predictions with probabilities attached. The intention here is to overcome akrasia in the form of throwing out a number merely to signal our rationality; numbers are important and should be well considered when we use them at all.
2Eliezer Yudkowsky11yhttp://predictionbook.com/ [http://predictionbook.com/] - doesn't include a
registry for monetary bets, but it'd start narrowing things down.
2Vladimir_Nesov11yGo on and create the page on the wiki if you want.
1RolfAndreassen11yOk, I have done so: http://wiki.lesswrong.com/wiki/Bets_registry
[http://wiki.lesswrong.com/wiki/Bets_registry] .
In the meantime, one comment on that other interesting reading at Less Wrong. It has been fun sifting through various posts on a variety of subjects. Every time I leave I have the urge to give them the Vulcan hand signal and say "Live Long and Prosper". LOL.
I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...
Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.
Michael Vassar suggests The Great Gatsby (I think, it was kind of written confusingly parallel with the names of authors but I don't think there was ever an author Ga... (read more)
How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?
I was five years old and walking down the hall outside of my kindergarten classroom when I suddenly realized that I had control over what was happening inside of my mind's eye. This manifested itself in me summoning an image in my head of Gene Wilder as Willy Wonka.
Is it proper to consider that the moment when I became self aware? Does anyone have a similar anecdote?
(This is inspired by Shannon's mention of her child exploring her sense of s... (read more)
3AdeleneDawner11yI don't have any memory of a similar revelation, but one of my earliest memories
is of asking my mother if there was a way to 'spell letters' - I understood that
words could be broken down into parts and wanted to know if that was true of
letters, too, and if so where the process ended - which implies that I was
already doing a significant amount of abstract reasoning. I was three at the
time.
0MrHen11yStrange, I have no such memory. The closest thing I can think of is my big
Crisis of Faith when I was 17. I realized I had much more power over myself than
I had previously thought. It scared me a lot, actually.
I have read pretty much everything more than once. It is pretty difficult to turn reading into action though. Which is why I feel like there is something I am missing. Yep.
One technique I use to internalize certain beliefs is to determine their implied actions, then take those actions while noting that they're the sort of actions I'd take if I "truly" believed. Over time the belief becomes internal and not something I have to recompute every time a related decision comes up. I don't know precisely why this works but my theory is that it has to do with what I perceive my identity to be. Often this process exposes other actions I take which are not in line with the belief. I've used this for things like "animal suffering is actually bad", "FAI is actually important", and "I actually need to practice to write good UIs".
1CassandraR11yThis is similar to my experience. Perhaps a better way to express my problem is
this. What are the some safe and effective way to construct and dismantle
identity? And what sorts of identity are most able to incorporate new
information and process them into rational beliefs? One strategy I have used in
the past is to simply not claim ownership of any belief so that I might release
it more easily but in this I run into a lack of motivation when I try to act on
these beliefs. On the other hand if I define my identity based on a set of
beliefs then any threat to them is extremely painful.
That was my original question, how can I build an identity or cognitive
foundation that motivates me but is not painfully threatened by counter
evidence?
3orthonormal11yThe litany of Tarski [http://wiki.lesswrong.com/wiki/Litany_of_Tarski] and the
litany of Gendlin [http://wiki.lesswrong.com/wiki/Litany_of_Gendlin] exemplify a
pretty good attitude to cultivate. (Check out the posts linked in the Litany of
Gendlin wiki article; they're quite relevant too. After that, the sequence on
How to Actually Change Your Mind
[http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind] contains still
more helpful analysis and advice.)
This can be one of the toughest hurdles for aspiring rationalists. I want to
emphasize that it's OK and normal to have trouble with this, that you don't have
to get everything right on the first try (and to watch out if you think you do),
and that eventually the world will start making sense again and you'll see it
was well worth the struggle.
I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)
I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.
2Wei_Dai11yDoes anyone understand the last two paragraphs of the comment that I'm
responding to? I'm having trouble figuring out whether Warrigal has a real
insight that I'm failing to grasp, or if he is just confused.
When I'm actively following the site (visiting 3+ times a day), I primarily follow the new comments page. I only read top posts when I see that there's an interesting discussion going on about one of them, or if the post's title seems particularly interesting. (I do wind up reading a large portion of the top posts sooner or later, though.)
I have the 'recent posts' RSS feed in my reader for when I'm not actively following the site, but I only click through if something seems very interesting.
Well, the future will certainly be full of mostly strangers. If you can't convince any of your current friends/family to sign up, you might be better off making friends with those who have already signed up. There are bound to be some you would get along with (I've read OOTS since it started :-) )
If I ever have any success in convincing anyone else to sign up for cryonics, I'll let you know how I did it (in the unlikely event that this will help!).
A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:
Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.
2MrHen11yIt really does surprise me how often people do things like this.
This is a quote from someone being interviewed about bad but common passwords
[http://www.nytimes.com/2010/01/21/technology/21password.html?em]. Would this be
labeled a semantic stopsign [http://lesswrong.com/lw/it/semantic_stopsigns/], or
a fake explanation [http://lesswrong.com/lw/ip/fake_explanations/], or ...?
2RobinZ11yFake explanation - he noticed a pattern and picked something which can cause
that kind of pattern, without checking if it would cause that pattern.
Once upon a time I was pretty good at math, but either I just stopped liking it or the series of dismal school teachers I had turned me off of it. I ended up taking the social studies/humanities route and somewhat regretting it. I've studied some foundations-of-mathematics material, symbolic logic, and really basic set theory, and I usually find that I can learn pretty rapidly if I have a good explanation in front of me. What is the best way to teach myself math? I stopped with statistics (high school, Advanced Placement) and never got to calculus. I don't expect to become a math whiz or anything; I'd just like to understand the science I read better. Anyone have good advice?
4nhamann11yI'm currently trying to teach myself mathematics from the ground up, so I'm in a
similar situation as you. The biggest issue, as I see it, is attempting to
forget everything I already "know" about math. Math curriculum at both the
public high school and the state university I attended was generally bad; the
focus was more on memorizing formulas and methods of solving prototypical
problems than on honing one's deductive reasoning skills, which if I'm not
mistaken is the core of math as a field of inquiry.
So obviously textbooks are good place to start, but which ones don't suck? Well,
I can't help you there, as I'm trying to figure this out myself, but I use a
combination of recommendations from this page
[http://www.ocf.berkeley.edu/~abhishek/chicmath.htm] and looking at ratings on
Amazon.
Here are the books I am currently reading, have read portions of, or have on my
immediate to-read list, but take this with a huge grain of salt as I'm not a
mathematician, only an aspiring student:
* How to Prove It: A Structured Approach by Vellemen - Elementary proof
strategies, is a good reference if you find yourself routinely unable to
follow proofs
* How to Solve It by Polya - Haven't read it yet but it's supposedly quite
good.
* Mathematics and Plausible Reasoning, Vol. I & II by Polya - Ditto.
* Topics in Algebra by Herstein - I'm not very far into this, but it's fairly
cogent so far
* Linear Algebra Done Right by Axler - Intuitive, determinant-free approach to
linear algebra
* Linear Algebra by Shilov - Rigorous, determinant-based approach to linear
algebra. Virtually the opposite of Axler's book, so I figure between these
two books I'll have a fairly good understanding once I finish.
* Calculus by Spivak - Widely lauded. I'm only 6 chapters in, but I immensely
enjoy this book so far. I took three semesters of calculus in college, but I
didn't intuitively understand the definition of a li
2Paul Crowley11yI've learned an awful lot of maths from Wikipedia.
When people here say they are signed up for cryonics, do they systematically mean "signed up with the people who contract to freeze you and signed up with an instrument for funding suspension, such as life insurance" ?
I have contacted Rudi Hoffmann to find out just what getting "signed up" would entail. So far I'm without a reply, and I'm wondering when and how to make a second attempt, or whether I should contact CI or Alcor directly and try to arrange things on my own.
Not being a US resident makes things much more complicated (I live in France). Are there other non-US folks here who are "signed up" in any sense of the term ?
6Zack_M_Davis11yNot really; they're not decision theory stories. The Three Laws are adversarial
[http://intelligence.org/upload/CFAI//adversarial.html] injunctions that hide
huge amounts of complexity [http://lesswrong.com/lw/tj/dreams_of_friendliness/]
under short English words like harm. It wouldn't actually work. It didn't even
work [http://en.wikipedia.org/wiki/%E2%80%94That_Thou_art_Mindful_of_Him] in the
story.
8Jack11yThe whole point of the stories is that it doesn't work in the end, it is a case
study in how not to do it. How it can go wrong. Obviously he didn't solve the
problem. The first digital computer had just been constructed, what would you
expect?
1Kevin11yAlong those lines, I'd recommend the Metamorphosis of Prime Intellect. It's a
short-novel length expression of an AI that gains control of all matter and
energy in the universe while being constrained by Asimov's Three Laws.
It's available for free online (though still under copyright).
http://www.kuro5hin.org/prime-intellect/
Feature request, feel free to ignore if it is a big deal or requested before.
When messaging people back and forth, it would be nifty to be able to see the thread. I see glimpses of this feature, but it doesn't seem fully implemented.
3Jack11yI suggested something along these lines on the feature request thread. I'd like
to be able to find old message exchanges. Finding messages I sent is easy, but
received messages are in the same place as comment replies and aren't
searchable.
Does undetectable equal nonexistent? Examples: There are alternate universes, but there's no way we can interact with them. There are aliens outside our light cones. Past events evidence of which has been erased.
First: I'm having a very bad brain week; my attempts to form proper-sounding sentences have generally been failing, muddling the communicative content, or both. I want to catch this open thread, though, with this question, so I'll be posting in what is to me an easier way of stringing words together. Please don't take it as anything but that; I'm not trying to be difficult or to display any particular 'tone of voice'. (Do feel free to ask about this; I don't mind talking about it. It's not entirely unusual for me, and is one of the reasons that I'm fairly ... (read more)
4Blueberry11yFor me personally, I would prefer transcripts and written summaries of any audio
or video content. I find it very difficult to listen to and learn from
audio when sitting at a computer, and having text or a transcript to read from
instead helps a lot. It allows me to read at my own pace and go back and forth
when I need to.
I'd also like any audio and video content to be easily and separately
downloadable, so I could listen to it at my own convenience. And I'd want any
slides or demonstrations to be easily printable, so I could see it on paper and
write notes on it. (As you can probably tell, I'm more of a verbal and visual
learner.)
By the way, your comment seemed totally normal to me, and I didn't notice any
unusual tone, but I'm curious what you were referring to.
2Alicorn11ySeconded the need for transcriptions. This is also a matter of disability
access, which is frequently neglected in website design - better to have it
there from the beginning than wait for someone to sue.
1byrnema11yGrades 4-8 is an interesting category, and I wouldn't know to what extent a
successful model for online learning has already been implemented for this age
group.
For a somewhat younger age group, I would suggest starfall.com
[http://www.starfall.com/] as an online learning site that seems to have a
number of very effective elements. One element that I found remarkable is that
frequently after a "learning lesson", the lesson solicits feedback. (For
example, see the end of this lesson
[http://www.starfall.com/n/holiday/gingerbread/load.htm?f&n=main]). The feedback
is extremely easy to provide -- for example, the child just picks a happy face
or an unhappy face indicating whether they enjoyed the lesson. (For older kids,
it might instead be a choice between a puzzled expression and an "I understand!"
expression.)
In any case, I think the value of building in feedback and learning assessment
mechanisms would be an important thing to consider in the planning stages.
Sorry, really late reply. Was just looking over this thread and happened to see this.
Card's writing that involves sexual attraction just comes off as asexual. I never got the sense that the characters were actually sexually attracted to each other; affectionate maybe, but not aroused. It's like the way sexuality looks on TV, not the way people actually experience it. I recall reading Card himself saying that he didn't think he was very good at writing about sex or sexual attraction, in an interview or something. It might have been in the Folk of the Fringe book somewhere, but I can't find it in my library.
0RolfAndreassen11yOk, I guess I agree with that. He either cannot or will not write such that you
feel the emotions associated with sexual attraction; it is an area where he
tells rather than showing. Perhaps this is a deliberate choice based in his
Mormon religion; he's also rather down on porn. Either way, though, it seems to
me that his stories rarely suffer from this. To take an example, 'Empire' is way
worse than the Ender sequels, but it's not because of the sex; indeed it has
effectively zero sex in it, even of the kind you describe. Rather it suffers
from being nearly-explicit propaganda.
I apologize for not replying and providing the citations needed. I've had unforeseen difficulties in finding the time, and now I'm going abroad for a week with no net access. When I come back I hope to make time to participate in LW regularly again and will also reply here.
0RobinZ11yThat is really, really cool. Not particularly rationality-related (except as
regards the display format), but really cool.
0Kevin11yYeah, it's basically just pretty pictures. However, they're pretty pictures that
are probably an interesting knowledge gap for many here.
Perhaps what is rationality related is why these orbitals are never taught to
students. I suppose because so few atoms are actually configured in higher
orbitals, but students of all ages should find the pictures themselves
interesting and understandable.
In high school chemistry, our book went up to d orbitals, and actually said
something about how the f orbitals are not shown because they are impossible or
very difficult to describe, which is blatantly untrue. I found some pictures of
the f orbitals on the internet and showed my teacher (who was one of my best
high school teachers) and he was really interested and showed all of his classes
those pictures.
It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually commu... (read more)
1Zack_M_Davis11yNo. (Exploratory commentary seemed appropriate for Open Thread.)
1Zack_M_Davis11yThis analysis is all very well and good taken on its own terms, but it
conceals---very cleverly conceals, I do compliment you, for surely, surely you
had seen it yourself, or some part of you had---it conceals assumptions that do
not apply to our own realm. Essences, discreteness, digitality---these are all
artifacts born of optimizers; they play no part in the ontology of our
continuous, reductionist world. There is no pure agonium, no thing-that-hurts
without having any semblance of a reason for being hurt---such an entity would
require a very masterful designer indeed, if it could even exist at all. In
reality, there is no threshold. We face cries that fractionally have referents.
And the quantitative extent to which these cries don't have enough structure for
us to extrapolate a volition is exactly again the quantitative extent to which
any stray stream of memes has license to reshape the entity, pushing it towards
the strong attractor. You present us with this bugaboo of entities that we
cannot help because they don't even have well-defined problems, but entities
without problems don't have rights, either. So what's your problem? You just
spray the entity with appropriate literature until it is a creature. Sculpt the
thing like clay. That is: you help it by destroying it.
4Vladimir_Nesov11y(I read CFAI once 1.5 years ago, and didn't reread it since obtaining the
current outlook on the problem, so some mistakes may be present.)
"Challenges of Friendly AI" and "Beyond anthropomorphism" seem to be still
relevant, but were mostly made obsolete by some of the posts on Overcoming Bias.
"An Introduction to Goal Systems" is hand-made expected utility maximisation,
"Design of Friendship systems" is mostly premature nontechnical speculation that
doesn't seem to carry over to how this thing could be actually constructed (but
at the time could be seen as intermediate step towards a more rigorous design).
"Policy implications" is mostly wrong.
Something has been bothering me ever since I began to try to implement many of the lessons in rationality here. I feel like there needs to be an emotional reinforcement structure or a cognitive foundation that is both pliable and supportive of truth seeking before I can even get into the why, how and what of rationality. My successes in this area have been only partial but it seems like the better well structured the cognitive foundation is the easier it is to adopt, discard and manipulate new ideas.
I understand that is likely a fairly meta topic and woul... (read more)
1Alicorn11yI think your phrasing of your question is confusing. Are you asking for help
putting yourself into a mindset conducive to learning and developing rationality
skills?
And if you cannot act such that 0 rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is (arguably, of course) in both cases positive.
I got an email back from her. Tl;dr version: Nope, that's definitely not how she was thinking about it. (Perhaps noteworthy: She rarely communicates via email, so she's out of her element here. It is possible to evoke saner discussion from her in realtime.)
As far as the comment from the blogger on that website, it sounds to me that they have a very bland argument. First, most women who are against abortion have had abortions and know the harm caused to the child, but also the harm that happens to them. Second, there are plenty of pro-life women who hav
1Cyan11yTry this
[http://scholar.google.ca/scholar?sourceid=navclient&rlz=1T4GGLL_en&q=MDL%20and%20MML%3A%20Similarities%20and%20Differences&um=1&ie=UTF-8&sa=N&hl=en&tab=ws]
.
Let's say someone gravely declares, of some moral dilemma [...] that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck. Fine, let's suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?
Lately I've actually been thinking that maybe we should split up morality into two concepts, and deal with them separately: one referring to moral sentiments, and another referring to what we actually do. It... (read more)
1PhilGoetz11yIsn't that what people have always done? Maybe not explicitly. To explicitly
make the split you're speaking of would just help people to deny reality, and do
what they need to do, albeit in highly suboptimal and destructive ways, while
still holding on to incoherent moral codes that continue to harm them in other
ways.
But it beats letting ourselves be wiped out. I worry about the fact that Western
civilization is saying that an increasing number of rights must not be violated
under any circumstances, at a time when we are facing an increasing number of
existential risks. There are some things that we don't let ourselves see,
because seeing them would mean acknowledging that somebody's rights will have to
be violated.
For instance, plenty of people simultaneously believe that Israel must stay
where it is, and that Israel must not commit genocide. Reality might accommodate
them (eg., if we discover an alternative energy source that impoverishes the
other middle eastern states). But I think it's more likely that it won't.
1Cyan11yCox's theorem [http://en.wikipedia.org/wiki/Cox's_theorem] doesn't deal with
utility, only plausibility. The utility stuff comes from looking at preference
relations -- some big names there are von Neumann, Morgenstern
[http://en.wikipedia.org/wiki/Expected_utility_hypothesis] and L.J. Savage
[http://www-history.mcs.st-andrews.ac.uk/Biographies/Savage.html].
1Eliezer Yudkowsky11yAlso keyword, "Dutch book".
1pdf23ds11yI don't think that's quite the same usage of "moral luck". According to the
technical term, it's when you, for example, judge someone who was driving drunk
and hit a person more harshly than someone who was driving drunk and didn't hit
anyone, all else being equal. In other words, things entirely outside of your
control that make the same action more or less blameworthy. Another example,
from the link:
Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?
Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit ... (read more)
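The mutual-information ingredient of that measure is easy to sketch. The following is only a toy illustration of mutual information between two components of a "circuit", not Tononi's actual integration measure Φ; the function and variable names are mine:

```python
import math
from collections import Counter
from itertools import product

def mutual_information(joint_counts):
    """Mutual information I(X;Y) in bits, given a dict {(x, y): count}."""
    total = sum(joint_counts.values())
    pxy = {k: v / total for k, v in joint_counts.items()}
    # Marginal distributions of each component.
    px, py = Counter(), Counter()
    for (x, y), p in pxy.items():
        px[x] += p
        py[y] += p
    # I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

# Toy "circuit": component Y always copies component X, so observing
# one component tells you everything about the other.
copy_circuit = {(0, 0): 50, (1, 1): 50}
print(mutual_information(copy_circuit))   # → 1.0

# Independent components: observing X tells you nothing about Y.
independent = {(x, y): 25 for x, y in product([0, 1], repeat=2)}
print(mutual_information(independent))    # → 0.0
```

Tononi's Φ goes further: roughly, it asks how much information the whole system carries beyond the best partition of it into independent parts, which is why a well-specified circuit is needed before the number means anything.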
1byrnema11yIt's a very interesting question. I think it's pretty straight-forward that
'ourselves' is a composite of 'awarenesses' with non-overlapping mutual
awareness.
Some data with respect to inebriation:
* drunk people would pass a Turing test, but the next morning when events are
recalled, it feels like someone else's experiences. But then when drunk again,
the experiences again feel immediate.
* when I lived in France, most of my socialization time was spent inebriated.
For years thereafter, whenever I was intoxicated, I felt like it was more
natural to speak in French than English. Even now, my French vocabulary is
accessible after a glass of wine.
1PhilGoetz11yThat is interesting, but not what I was trying to ask. I was trying to ask if
there could be separate, smaller, less-complex, non-human consciousnesses inside
every human. It seems plausible (not probable, plausible) that there are, and
that we currently have no way of detecting whether that is the case.
If you're suggesting that all science fiction is implausible though, then that's not true. There's a difference between coming up with random, futuristic ideas, and coming up with random, futuristic ideas that have justification for working.
[Parent at -2.] Is the advice to not waste time and effort on stuff you don't need really that bad? (Hypothetical, under the assumption that you really don't need it; if you do need it occasionally, in the majority of cases it'll be enough to relearn directly on demand, rather than maintaining it for perfection's sake.)
3Tyrrell_McAllister11yYou wrote
But suppose that you use a skill only occasionally. Then you still need the
skill. But to retain a skill, you might need to use it frequently. Therefore,
you might need to inflate artificially how often you use it, so that you retain
it. That is how it can be that you use a skill and yet still need a "problem of
the day".
1Sniffnoy11yAgreed. Best is if you can learn something well enough that even if you don't
remember it, you can rederive it; but usually good enough is learning something
well enough that you can do it if you've got a textbook to remind you.
1Paul Crowley11yIn this instance, if I needed an answer to this question I'd use Maxima
[http://maxima.sourceforge.net/].
We're using different definitions of validity. Yours is "[a] syllogism... is valid [if] it reaches a conclusion that is, on average, correct." Mine is this one.
ETA: Thank you for taking the time to explain your position thoroughly; I've upvoted the parent. I'm unconvinced by your maximum entropy argument because, at the level of lack of information you're talking about, H and D could be in continuous spaces, and in such spaces, maximum entropy only works relative to some non-informative reference measure, which has to be derived from arguments other than maximum entropy.
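The continuous-space point can be made concrete (my notation, not the original commenter's): on a continuous space, "maximum entropy" only makes sense as maximizing entropy relative to a reference measure $m$,

$$ H_m[p] = -\int p(x)\,\log\frac{p(x)}{m(x)}\,dx, $$

and the maximizer depends on $m$. A density that is uniform in one parametrization is non-uniform after a change of coordinates, so the choice of $m$ has to be justified by arguments (such as invariance) external to maximum entropy itself.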
If the Simple Math of Everything were a real text book, I'd read that. But I've gathered calculus is the right place to start. Probability theory would be next, I guess.
Users' karma is only displayed on their user page (and the top contributors list). The number in the header of an article or comment is the score for that post only. Does this help?
Email sent. I'll quote the relevant bit here, in case it turns out to affect her reply. (I did link to the conversation but I'm not sure she'll follow the link.)
I am writing for a more interesting reason than just to keep in touch, though. One of the places I've been spending my time at online is a rationalist forum, and a few of the members were discussing abortion law. One of them suggested that the main reason that women who believe abortion is wrong support anti-abortion laws is that having such a law in place would reduce the temptation they'd feel
They did not focus overly much on the technologies, which were mostly post-Singularity
Was this ever said or shown in an episode? It seems like a cop-out to just assume magical technology is post-Singularity without it being in the backstory.
the strict adherence to a dualist New-Age philosophy of consciousness really kept me away from the show.
Wasn't there a consciousness swapping episode of Farscape? Also, what about the Data is basically a person trope in TNG? I agree that Star Trek technology doesn't make a lot of sense, though.
Given your high standards, shouldn't the fact that Cylons were never much more intelligent than humans bother you?
So these two positions differ ethically in that the poor support one but not the other?
Yes, and the reason this is relevant is that the positions are about things to be done to the poor.
You said:
Some people support unrestrained capitalism because they think it provides the most economic growth which is better for the poor.
There is a factual disagreement about how to best help the poor. The poor themselves generally support one of the two options: social support. They may, factually, be wrong. There is then a further decision: do we help them in ... (read more)
2Jack11yAre we breaking some rule if this discussion gets a little political?
OK. But they're also about things to be done to the rich.
This is a such a dismal way of looking at the issue from my perspective. Once
you decide that the policy should just be whatever some group wants it to be you
throw any chance for deliberation or real debate out the window. I realize such
things are rare in the present American political landscape but turning interest
group politics into an ethical principle is too much for me.
I read this as "this is a trade-off between helping them financially, and
patronizing them" :-).
If most women opposed abortion rights (as they do in many Catholic countries)
you would be fine prohibiting it? Even for the dissenting minority? Saying
people should be able to have abortions, even if it is bad for them makes sense
to me. Saying some arbitrarily defined group should be able to define abortion
policy, regardless of whether it is bad for them, does not.
Also, almost all policies involve coercing someone for the benefit of
someone else. How do you decide which group gets to decide policy?
Maybe, though I don't know if we have the evidence to determine that. But
they're signaling because they want people to think they are ethical. There
being some kind of universal human ethics and most people being secretly
unethical is a totally coherent description of the world.
What I meant was that the fact that you think something ethically controversial
is as bad as something ethically uncontroversial doesn't tell us anything. Also,
I know I used it as an example first but the abortion debate likely involves
factual disagreements for many people (if not you).
But ethics are product of biological and cultural evolution! Empathy was
probably an evolutionary accident (our instincts for caring for offspring got
hijacked). If there is a universal moral grammar I don't know the evolutionary
reason, but surely there is one. The cultural aspects likely helped groups
4Alicorn11yJust as an aside, lots of women
[http://www.snipeme.com/guestrants.php?rant=moral_abortion] go ahead and get
abortions even if they assent to statements to the effect that it shouldn't be
allowed. Which preference are you more inclined to respect?
2Nick_Tarleton11yYou're ignoring the tradeoff between helping the current poor and future poor.
The current poor would naturally favor the former, but I don't think that's an
argument for it over the latter.
1Alicorn11yClass is fairly heritable. To the extent to which we think people ought to make
decisions for their descendants, it may make sense to let current poor make
decisions that affect the future poor.
Hmm, what about an outside view? That is, thinking about what it would be like for someone else. I'm a little too sleepy now to recall the exact reference, but there was something said here about how people make better estimates, e.g., of how long a project will take if they think about how long similar projects have taken rather than how long they think this project will take. And, because you know about the present, let's make our thought experiment happen in the present.
So, what if a woman was frozen a hundred years ago, and woke up today? Would she be ab... (read more)
3Alicorn11yI imagine such a woman would be viewed as a worthwhile curiosity, but probably
not a good prospective friend, by history geeks and journalists. I think she
would find her sensibilities and skills poorly suited to letting her move
comfortably about in mainstream society, which would inhibit her ability to pick
up friends in other contexts. If there were other defrostees, she might connect
with them in some sort of support group setting (now I'm imagining an episode of
Futurama the title of which eludes me), which might provide the basis for
collaboration and maybe, eventually, friendship, but it seems to me that that
would take a while to develop if in fact it worked.
3kpreid11y(Meta) I wish byrnema had not deleted their comment which was in this position.
2[anonymous]11yI would expect that it would be very natural to treat defrostees like foreign
exchange students or refugees. They would be taken care of by a plain old
mothering type like me, who is empathetic and understands what it's like to
wake up in a foreign place. I would show this 18th century woman places that she
would relate to (the grocery store, the library, window shopping downtown) and
introduce her to people, a little bit at a time. It would be a good 6-9 months
before she felt quite acclimated, but by then she'd be finding a set of friends
and her own interests. When she felt overwhelmed, I would tell her to take a
bath and spend an evening reading a book.
I've stayed in foster homes in several countries for a variety of reasons, and
this is quite usual.
1AndrewWilcox11yHmm, I wonder if you could leave instructions, kind of like a living will except
in reverse, so to speak... e.g., "only unfreeze me if you know I'll be able to
make good friends and will be happy". Perhaps with a bit more detail explaining
what "good friends" and "being happy" means to you :-)
If I were in charge of defrosting people, I'd certainly respect their wishes to
the best of my ability.
And, if your life does turn out to be miserable, you can, um, always commit
suicide then... you don't have to commit passive suicide now just in case... :-)
But it certainly is a huge leap in the dark, isn't it? With most decisions, we
have some idea of the possible outcomes and a sense of likelihoods...
I think plenty of ethical differences remain even if we eliminate all possible factual disagreements.
As regards religion, (many) religious people claim that they obey god's commands because they are (ethically) good and right in themselves, not just because they come from god. It's hard to dismiss religion entirely when discussing the ethics adopted by actual people - otherwise there's not much data left.
But here's another example: some people advocate the ethics of minimal government and completely unrestrained capitalism. I, on the other hand, believe in state ... (read more)
1Jack11ySome people support unrestrained capitalism because they think it provides the
most economic growth which is better for the poor. This is obviously a factual
disagreement. Of course there are those who think wealth redistribution violates
their rights, but it seems plausible that at least many of them would change
their mind if they knew what the country would look like without redistribution
or if they had different beliefs about the poor (perhaps many people with this
view think the poor are lazy or intentionally avoid work to get welfare).
Slavery (at least the brutal kind) is almost always accompanied by a myth about
slaves being innately inferior and needing the guidance of their masters.
Now I think there probably are some actual ethical differences between human
cultures, I just don't want to exaggerate those differences-- especially since they
already get most of our attention. All the vast similarities kind of get ignored
because conflicts are interesting and newsworthy. We have debates about abortion
not about eating babies. But I think most possible human behavior falls into the
obvious, baby-eating category, and the area of disagreement is relatively small.
Moreover there is considerable evidence for innate moral intuitions. Empathy is
an innate process in humans with normal development. Also see John Mikhail
[http://www.law.georgetown.edu/faculty/mikhail/] on universal moral grammar. I
think there is something we can call "human ethics" but that there is enough
cultural variability within it to allow us to also pick out local ethical
(sub)systems.
Er forget this. When we say "human ethics is universal" we need to finish the
sentence with "among... x". Looking up thread I see that the context for this
discussion finishes that sentence with "among conscious beings" or something to
that effect. I find that exceedingly unlikely. That said, I'm not at all
bothered by Clippy the way I would be bothered by the Babyeaters (and not just
because eating babies is immoral).
So do you believe that ethics are just an invented rule system that could have a different form and still be as ethical?
What do you mean "as ethical"? By what meta-ethical rule?
If your reply is, "by the objective meta-ethics which I postulate that all sentient beings can derive" - if everyone can derive it equally, doesn't that imply everyone ought to be equally ethical? If you admit that some person or society is unethical (as you asked of Jack), does that mean they somehow failed to derive the meta-ethics? That the ethics they adopted is internally inconsistent somehow?
So do you believe that ethics are just an invented rule system that could have a different form and still be as ethical?
Invented isn't the right word, though that is partly my fault since baseball isn't an ideal metaphor. Natural language is a better one. Parts of ethics are culturally inherited (presumably at some time in the past they were invented) other parts are innate. The word ethics has a type-token ambiguity. It can refer to our ethical system (call it 'ethics prime') or it can refer to the type of thing that ethics prime is (an ethics). There ... (read more)
1PhilGoetz11yThis is a good way of putting it!
In fact, it just convinced me that there is an objective ethics! Sort of. Asking
whether there is an objective meta-ethics is a lot like asking, "Is there such a
thing as language?" Language is a concept that can be usefully applied to
interactions between organisms of a particular level of intelligence given
particular environmental conditions. So is ethics. Is it universal? What the
hell does that mean?
But when people say there is no objective ethics, that isn't what they mean.
They aren't denying that ethics makes sense as a concept. They're claiming the
right to set their own arbitrary goals and values.
It's hard for me to imagine why someone who was convinced that there were no
objective ethics would waste time on this, unless they were a Continental
philosopher. Claiming there is no objective ethics sounds to me more like the
actions of someone who believes in objective ethics, and has come to their own
values that are unique enough that they must reject existing values.
If you want to read a full-length Asimov book, my personal recommendation is The End of Eternity. It has a rather unique take on time travel and functions well as a standalone book. It has just been reprinted after being out of print for too long.
Foundation is his best-known novel, and it is also very much worth reading.
I can't find someone violating the copyright online with a quick Google, but Asimov's short story "The Last Answer" is also a good one with a different take on religion than "The Last Question".
Today at work, for the first time, LessWrong.com got classified as "Restricted:Illegal Drugs" under eSafe. I don't know what set that off. It means I can't see it from work (at least, not the current one).
How do we fix it, so I don't have to start sending off resumes?
4byrnema11yI went to the eSafe site and while looking up what the "illegal drugs"
classification meant, submitted a request for them to change their status for
LessWrong.com. A pop-up window told me they'd look into it.
You can check (and then apply to modify) the status of LessWrong here
[http://www.aladdin.com/support/checking-classification.aspx].
2MatthewB11yThat may have been my fault. I mentioned that I used to have drug problems and
mentioned specific drugs in one thread, so that may have set off the filters. I
apologize if this is the case. The discussion about this went on for a day or
two (involving maybe six comments).
I do hope that is not the problem, but I will avoid such topics in the future to
avoid any such issues.
1byrnema11yI doubt it, all of the words you used (name brands of prescription drugs) were
used elsewhere, often occurring in clusters just as in your thread.
By the way, do you have any idea why you don't have an overview page?
I was linking not just to the paper, but to a summary of the paper, and included that example out of that summary - a summary of a summary. Others have already summarized what you got wrong in your reply. You can see that the paper has about 1300 citations, which speaks to its importance.
I am going to be hosting a Less Wrong meeting at East Tennessee State University in the near future, likely within the next two weeks. I thought I would post here first to see if anyone at all is interested and if so when a good time for such a meeting might be. The meeting will be highly informal and the purpose is just to gauge how many people might be in the local area.
Please review a draft of a Less Wrong post that I'm working on: Complexity of Value != Complexity of Outcome, and let me know if there's anything I should fix or improve before posting it here. (You can save more substantive arguments/disagreements until I post it. Unless of course you think it completely destroys my argument so that I shouldn't even bother. :)
I've definitely learned a lot of math from Wikipedia. I don't generally do the proofs myself, so I don't really have any of the elusive "mathematical maturity", but I definitely have learned a lot of abstract algebra, category theory and mathematical logic just by reading the definitions of various things on Wikipedia and trying to understand them.
On the other hand, I am pretty motivated to learn these things because I actively enjoy them. Other branches of math, I am much less interested in and so I don't learn that much. But it is possible!
0Kevin11yI think "Rapture of the Geeks" is a meme that could catch on with the general
public, but this community seems to have reluctance to engage in
self-promotional activities. Is Eliezer actively avoiding publicity?
So I am back in college and I am trying to use my time to my best advantage. Mainly using college as an easy way to get money to fund room and board while I work on my own education. I am doing this because I was told here, among other places, that there are many important problems that need to be solved, and I wanted to develop skills to help solve them because I have been strongly convinced that it is moral to do so. However beyond this I am completely unsure of what to do. So I have the furious need for action but seem to have no purpose guiding that actio... (read more)
2wedrifid11ySocialise a lot. Learn the skills of social influence and the dynamics of power
at both the academic and practical levels.
AnnaSalamon made this and other suggestions when Calling for SIAI fellows
[http://lesswrong.com/lw/1hn/call_for_new_siai_visiting_fellows_on_a_rolling/].
I imagine that the skills useful for SIAI wannabes could have significant
overlap with those needed for whatever project you choose to focus on. Specific
technical skills may vary somewhat.
Schooling isn't about education. This article is pretty mind-boggling: apparently, it's been the norm until now in Germany that school ends at lunchtime and the children then go home. Considering how strong the German economy has traditionally been, this raises serious questions about the degree to which elementary school really is about teaching kids things (as opposed to just being a place to drop off the kids while the parents work).
Oh, and the country is now making the shift towards school in the afternoon as well, driven by - you guessed it - a need for women to spend more time actually working.
2mattnewport11yAssuming you were using your own computer at home and not a public Wi-Fi hotspot
or public computer then it could be that you use the same ISP and you were
assigned an IP address previously used by another user. Given the relatively low
number of users on lesswrong though this seems like a somewhat unlikely
coincidence.
1MrHen11yHmm... I was at a coffee shop the other day. I don't see how anyone else there
(or anyone else in the entire city I live in) would have ever heard of
LessWrong. The block appears to have been created today, however, which makes
even less sense.
1Vladimir_Nesov11yI'll be more careful with "Ban this IP" option in the future, which I used to
uncheck during the spam siege a few months back, but didn't in this case.
Apparently the IP is only blocked for a day or so. I've removed it from the
block list, please check if it works and write back if it doesn't.
0MrHen11yIt works again.
Honestly, I have no problem not editing the wiki for a few days if it helps
block spammers. It's not like I am adding anything critical. I was just
confused.
2Vladimir_Nesov11yIt'd only be necessary to block spammers by IP if they actually relapse (and
after a captcha mod was installed, spammers are not a problem), but the fact
that you share IP with a spammer suggests that you should check your computer's
security.
0MrHen11yWell, in the last week I've probably had at least three IP addresses assigned to
my computer while editing the wiki. It is hard to know where to begin. I think
someone I know has a good program to detect outgoing traffic... that may work.
1Nick_Tarleton11y"Bella" was blocked
[http://wiki.lesswrong.com/mediawiki/index.php?title=Special:Log&type=block&page=User:Bella]
for adding spam links. Could your computer be a zombie
[http://en.wikipedia.org/wiki/Zombie_computer]?
Strange fact about my brain, for anyone interested in this kind of thing:
Even though my recent top-level post has (currently) been voted up to 19, earning me 190 karma points, I feel like I've lost status as a result of writing it.
5Paul Crowley11yI quite like swearing, but I don't think it primes people to think and respond
rationally in general, and is usually best avoided. Like wedrifid, I'm inclined
to argue for an exception for "bullshit", which is a term of art
[http://en.wikipedia.org/wiki/On_Bullshit].
2RobinZ11yI don't know of an official policy, but swearing can be distracting. Avoid?
3wedrifid11yI advocate the use of the term Bullshit. Both because it is a good description of a
significant form of bias and because the profanity is entirely appropriate. I
really, really don't like seeing the truth distorted like that.
More generally I don't particularly object to swearing but as RobinZ notes it
can be distracting. I don't usually find much use for it.
2Christian_Szegedy11yI'd propose to use the word "bulshytt" instead. ;)
The emotional framework of which you speak doesn't seem to resemble anything I can introspectively access in my head, but maybe I can offer advice anyway. Some emotional motivations that are conducive to rationality are curiosity, and the powerful need to accomplish some goal that might depend on you acting rationally.
1RobinZ11yInteresting heuristic - I would be curious to find if anyone else has followed
something similar to good effect, but it sounds conceptually reasonable.
What's the right prior for evaluating an H1N1 conspiracy theory?
I have a friend, educated in biology and business, very rational compared to the average person, who believes that H1N1 was a pharmaceutical company conspiracy. They knew they could make a lot of money by making a less-deadly flu that would extend the flu season to be year round. Because it is very possible for them to engineer such a virus and the corporate leaders are corrupt sociopaths, he thinks it is 80% probable that it was a conspiracy. Again, he thinks that because it was possible for ... (read more)
1Paul Crowley11yAny such conspiracy would have to be known by quite a few people and so would
stand an excellent chance of having the whistle blown on it. Every case I can
think of where large Western companies have been caught doing anything like that
outrageously evil, they have started with a legitimate profit-making plan, and
then done the outrageous evil to hide some problem with it.
We should pay people to adopt and advocate independent views, to their own detriment.
I guess we already do something like that, namely award people with status for being inventors or early adopters of ideas (think Darwin and Huxley) that eventually turn out to be accepted by the majority.
It does occur to me that I wasn't objecting to the hypothetical existence of said function, only that rights aren't especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.
I read new posts as soon as I see them. I look at the comments through the recent comments bar, but that requires having the LW tab open more or less constantly. I also reread posts to get any comments I miss and to get a better sense of how the discussions are proceeding.
As a 19-year-old student living in Hungary, cryonics is way back on my list of life-extension-related things to do. Nevertheless I think cryonics is a great option and I'll sign up as soon as I figure out how I could do it in my country (Russia being the closest place with cryo service) and have the money for it.
As a side note, I think cryonics has the best payoffs when you've got some potentially lethal, relatively slowly advancing disease like cancer or ALS, and have the option of moving very close to a cryonics facility.
Humbug. What you are actually saying is that wanting to know can be a terminal value, so why won't you just say that?
And of course, I know that, but there is just too much stuff out there to learn, so it's a necessity that the things you do choose to learn are in some sense better than the rest (otherwise you lose something), more beautiful or more useful. Just saying that one would learn X because "learning in general" is fun isn't enough.
Because you're apparently giving the same status ("SilasBarta::validity") to Bayesian inferences that I'm giving to the disputed syllogism S1.
For me, the necessity of using Bayesian inference follows from Cox's Theorem, an argument which invokes no meta-probability distribution. Even if Bayesian inference turns out to have SilasBarta::validity, I would not justify it on those grounds.
What about the Bayes Theorem itself, which does exactly that (specify a probability distribution on variables not attached to any particular instance)?
I poked around a bit on the site and I think the vast majority of ways I could spend equivalent downtime would be worth more than the pennies they offer there. Even the overhead of signing up for the tasks is too costly a barrier for such tiny payouts, and that's if you avoid the ones that require you to pass qualification tests. Plus, the number of offers asking people to re-write content in their own words just screams plagiarism.
But you're permitting yourself the same thing! Whenever you apply Bayes' Theorem...
Checks for a syllogism's Cyan::validity do not apply Bayes' Theorem per se. ...
Argh. I wasn't saying that you were using the Bayes Theorem in your claimed definition of Cyan::validity. I was saying that when you are deriving probabilities through Bayesian inference, you are implicitly applying a standard of validity for probabilistic syllogisms -- a standard that matches mine, and yields the conclusion I claimed about the syllogism in question.
1Cyan11yI do not agree that that is what I'm doing. I don't know why my willingness to
use Bayes' Theorem commits me to SilasBarta::validity.
I think I understand what you meant now. I deny that I am permitting myself the
same thing as you. I try to make my problems well-structured enough that I have
grounds for using a given probability distribution. I remain unconvinced that
probabilistic syllogisms not attached to any particular instance have enough
structure to justify a probability distribution for their elements -- too much
is left unspecified. Jaynes makes a related point on page 10 of "The Well-Posed
Problem [http://bayes.wustl.edu/etj/articles/well.pdf]" at the start of section
8.
Because the only argument you've given for it is a maxent one, and it's not
sufficient to the task, as I explain further below.
This is not correct. The problem is that Shannon's definition is not invariant
to a change of variable. Suppose I have a square whose area is between 1 cm^2
and 4 cm^2. The Shannon-maxent distribution for the square's area is uniform
between 1 cm^2 and 4 cm^2. But such a square has sides whose lengths are between
1 cm and 2 cm. For the "side length" variable, the Shannon-maxent distribution
is uniform between 1 cm and 2 cm. Of course, the two Shannon-maxent
distributions are mutually inconsistent. This problem doesn't arise when using
the Jaynes definition.
In your problem, suppose that, for whatever reason, I prefer the floodle scale
to the probability scale, where floodle = prob + sin(2*pi*prob)/(2.1*pi). Why do
I not get to apply a Shannon-maxent derivation on the floodle scale?
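The square example above can be checked numerically. This is an illustrative sketch only (the use of Monte Carlo sampling and the specific sample size are my assumptions, not anything from the thread): put a uniform distribution on the area in [1, 4] cm^2, transform to side lengths, and compare against a uniform distribution placed directly on the side length in [1, 2] cm. The two disagree.

```python
import random

random.seed(0)
N = 200_000

# Shannon-maxent (uniform) prior on the AREA of the square, in [1, 4] cm^2.
areas = [random.uniform(1.0, 4.0) for _ in range(N)]
# The side lengths implied by that prior, via side = sqrt(area).
sides_from_area = [a ** 0.5 for a in areas]

# Shannon-maxent (uniform) prior placed directly on the SIDE length, in [1, 2] cm.
sides_direct = [random.uniform(1.0, 2.0) for _ in range(N)]

# The two priors describe the same square but disagree about the side length:
mean_from_area = sum(sides_from_area) / N  # analytically 14/9, about 1.556
mean_direct = sum(sides_direct) / N        # analytically 3/2 = 1.5
```

The mismatch (about 1.556 vs. 1.5) is exactly the non-invariance under change of variable that the comment describes: maximizing Shannon entropy on the "area" scale and on the "side length" scale yields mutually inconsistent distributions.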
I was recently asked to produce the indefinite integral of ln x, and completely failed to do so. I had forgotten how to do integration by parts in the 6 months since I had done serious calculus. Is there anyone who knows of a calculus problem of the day or some such that I might use to retain my skills?
I don't vote for blind memorization either. However, I think that if one cannot reconstruct a proof then it is not understood either. Trying to reconstruct thought processes by heart will highlight the parts with incomplete understanding.
Of course in order to fully understand things one should look at additional consequences, solve problems, look at analogues, understand motivation etc. Still, the reconstruction of proofs is a very good starting point, IMO.
Well, if everyone else they've revived so far has ended up a miserable outcast in an alien society, or some other consistent outcome, they might be able to take a guess at it.
So it seems like your definition of validity in probabilistic syllogisms matches mine.
I only call syllogisms about probabilities valid if they follow from Bayes' Theorem. You permit yourself a meta-probability distribution over the probabilities and call a syllogism valid if it is Cyan::valid on average w.r.t. to your meta-distribution. I'm not saying that SilasBarta::valid isn't a possibly interesting thing to think about, but it doesn't seem to match Cyan::valid to me.
A continuous space, yes, but on a finite interval. That lets you define the max-e
If someone's argument, and therefore position, is irrational, how can we trust them to give honest and accurate criticism of other arguments?
At which point you are completely forsaking your original argument (rightly or wrongly, which is a separate concern), which is the idea of my critical comment above. It's unclear what you are arguing about, if your conclusion is equivalent to a much simpler premise that you have to assume independently of the argument. This sounds like rationalization (again, no matter whether the conclusion-advice-heuristic is ... (read more)
However, if there is only one true set of baseball rules that all people must abide by to be playing baseball, then that would be objective.
If that's the distinction, then whether there is objective ethics or not is just a matter of semantics; not anything of philosophical or practical interest.
Ethical problem. It occurred to me that there's an easy, obvious way to make money by playing slot machines: Buy stock in a casino and wait for the dividends. Now, is this ethically ok? On the one hand, you're exploiting a weakness in other people's brains. On the other hand, your capital seems unlikely, at the existing margins, to create many more gamblers, and you might argue that you are more ethical than the average investor in casinos.
It's a theoretical issue for me, since my investment money is in an index fund, which I suppose means I own some tiny share in casinos anyway and might as well roll with it. But I'd be interested in people's thoughts anyway.
3Blueberry11yInvesting in a company is different than playing slot machines. Casinos are
entertainment providers: they put on shows, sell food and drink, and provide
gaming. They have numerous expenses as well. Investing in a casino is not
guaranteed to make money in the same way the house is in roulette, for instance.
Casinos do go bankrupt and their stock prices do go down.
In addition, when you buy a share of stock on the open market, you buy it from
another investor, not the company, so you're not providing any new capital to
the company.
I don't believe there is anything ethically wrong with either gambling or
funding casinos. If people want to gamble, that's their choice.
1Wei_Dai11yI'm curious what made you think about this problem. I'm sure you're aware of the
efficient market hypothesis... do you have some private information that
suggests casino stocks are undervalued?
By coincidence I was in Las Vegas a couple of weeks ago and did some research
before I left for the trip. It turns out that many casinos (both physical and
online) offer gambles with positive expected value for the player, as a way to
attract customers (most of whom are too irrational to take proper advantage of
the offers, I suppose). There are entire books and websites devoted to this. See
http://en.wikipedia.org/wiki/Comps_%28casino%29
[http://en.wikipedia.org/wiki/Comps_%28casino%29] and
http://www.casinobonuswhores.com/ [http://www.casinobonuswhores.com/]
This is a very interesting line of argument. How much of it do you think is due to this:
Many of these women live in cultures and social circles/families where publicly supporting abortion rights is very damaging socially. Even in relaxed conditions, disagreeing with one's family and friends on an issue that evokes such strong emotions is hard. The women tend to conform unless they have a strong personal opinion to the contrary, and once they conform on the signalling level, they may eventually come to believe themselves that they are against abortion.
5Cyan11yNot really. You keep demonstrating my point as if it supports your argument, so
I know we've got a major communication problem.
And that's what I'm attacking. We are using the same definition of "valid",
right? An argument is valid if and only if the conclusion follows from the
premises. You're missing the "only if" part.
Yes, even for probabilistic claims. See Jaynes's policeman's syllogism in
Chapter 1 of PT:LOS for an example of a valid probabilistic argument. You can
make a bunch of similarly formed probabilistic syllogisms and check them against
Bayes' Theorem to see if they're valid. The syllogism you're attempting to
defend is
P(D|H) has a low value.
D is true.
Therefore, P(H|D) has a low value.
But this doesn't follow from Bayes' Theorem at all, and the Congress example is
an explicit counterexample.
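The Congress counterexample can be made concrete with a back-of-the-envelope Bayes' Theorem calculation. The population figures below are rough illustrative assumptions: take H = "this person is an American" and D = "this person is a member of the US Congress".

```python
# Rough illustrative figures (assumptions, not from the thread):
# ~300 million Americans, ~7 billion people on Earth, 535 members of
# Congress, and essentially every member of Congress is American.
p_H = 300e6 / 7e9          # P(H): a random person is American
p_D_given_H = 535 / 300e6  # P(D|H): tiny -- very few Americans are in Congress
p_D = 535 / 7e9            # P(D): a random person is in Congress

# Bayes' Theorem: P(H|D) = P(D|H) * P(H) / P(D)
p_H_given_D = p_D_given_H * p_H / p_D

# P(D|H) is minuscule, yet P(H|D) comes out as (essentially) 1: observing D
# makes H virtually certain. So "P(D|H) is low, D is true, therefore P(H|D)
# is low" does not follow from Bayes' Theorem.
```

The point is that the syllogism ignores the ratio P(H)/P(D); here that ratio is enormous, which flips the conclusion.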
Once you know the specific H and E involved, you have to use that knowledge;
whatever probability distribution you want to postulate over p(H)/p(E) is
irrelevant. But even ignoring this, the idea is going to need more development
before you put in into a post: Jaynes's argument in the Bertrand problem
postulates specific invariances and you've failed to do likewise; and as he
discusses, the fact that his invariances are mutually compatible and specify a
single distribution instead of a family of distributions is a happy circumstance
that may or may not hold in other problems. The same sort of thing happens in
maxent derivations (in continuous spaces, anyway): the constraints under which
entropy is being maximized may be overspecified (mutually inconsistent) or
underspecified (not sufficient to generate a normalizable distribution).
I wouldn't quite say that I take the revealed preference more seriously than the overt one. I'm prepared to accept that there may be people who genuinely believe that abortion is morally wrong and also genuinely believe that other people (and possibly they themselves) will succumb to temptation and have an abortion if it is legal and available even if they believe it is wrong. We generally accept the reality of akrasia here, this seems like a very similar phenomenon: the belief that people can't be trusted to do what is morally right when faced with an unw... (read more)
1Paul Crowley11yNote the image in the banner of "Overcoming Bias"...
I would be against an opt-in anti-abortion law, since unlike with akrasia I see
no reason to prefer the earlier preference over the later one in this instance.
I suppose you could argue that it's a logical and inescapable consequence of MWI
It is. If you believe MWI, you believe that Schrodinger's cat will experience survival every time, even if you repeated the experiment 100 times, but that you will observe the cat dead if you repeat the experiment enough times.
There is no falsifiable fact above and beyond MWI as far as I can see, apart from the general air of confusion about subjective experience, which hasn't coalesced into anything definite enough to be falsified.
Take ethics to be a system that sets out rules that determine whether we are good or bad people and whether our actions are good or bad. They differ from other ascriptions of good (say, at baseball) and bad (say, me playing baseball) in that there is an imperative to be good in this sense whereas it is acceptable to be bad at baseball (I hope).
I suspect that won't answer your question so instead I'll ask another. Do you believe that this inability to define means there is no real concept that underpins the folk conception of ethics or does it just mean we are unable to define it well enough to discuss it?
For that purpose I think killing people would work admirably well.
If you're ok with a yes or no answer, then it's enough. If you also want to know how individuals may be important to events, killing may not be enough, I think.
My understanding is that such a story relies on trying to define the area of a point when only areas of regions are well-defined; the probability of the point case is just the limit of the probability of the region case, in which case there is technically no zero probability involved.
"Imagine the human race gets wiped out. But you want to transmit the so far acquired knowledge to succeeding intelligent races (or aliens). How do you do?"
I got this question while reading a dystopia of a world after nuclear war.
1[anonymous]11yTransmitting it to aliens ain't happening; we'd get them from radio to the
present day, a couple hundred years' worth of technology, which is relatively
little, and that's only if we manage to aim it right.
So, we want to communicate to future sapient species on Earth. I say take many,
many plates of uranium glass and carve into it all of our most fundamental
non-obvious knowledge: stuff like the periodic table, how to make electricity,
how to make a microchip, some microchip designs, some software. And, of course,
the scientific method, rationality, the non-exception convention (0 is a number,
a square is a rectangle, the empty product is 1, . . .), and the function
application motif (the way we construct mathematical expressions and
natural-language phrases). Maybe tell them about Friendly AI, too.
Well, I can't really object to the extremes theory. You aren't a Third-Worlder or a highly driven Indian or Chinese or pre-20th century American child who wouldn't be bothered by such conditions, after all.
But most school building is not about avoiding such extremes. I can cite exactly one example in my educational career where a building had a massive overhaul due to genuine need (a fire in the gym burned the roof badly); all the other expansions and new buildings.... not so much.
It's very difficult to keep focused when the classroom is 30 degrees Celsius.
I just noticed that I showed up around the same time as the Guardian mention as well... However, I have been lurking (without registering) for two years now. I met Eliezer Yudkowsky at the first Singularity Summit, and became aware of OB as a result, and then became aware of this blog shortly after he split from OB.
However, I would like to say that a newcomers section in a FAQ or Wiki would have been most welcome.
I do have a little bit of a clue what I am doing here as well, as I have spent a lot of time on forums such as Richard Dawkins' and Sam Harris' an... (read more)
So make the classes bigger, perhaps. In a Hansonian vein:
"But while state legislatures for decades have passed laws — and provided millions of dollars — to cap the size of classes, some academic researchers and education leaders say that small reductions in the number of students in a room often have little effect on their performance." ... Dan Goldhaber, an education professor at the University of Washington, said the obsession with class size stemmed from a desire for “something that people can grasp easily — you walk into a class and you see e
3quanticle11yWell, classrooms are of limited size. I know that the classrooms at my old high
school were only designed for thirty kids each. Now they hold nearly forty each.
There is a significant cost from having correspondingly less space per person.
The corresponding reductions in mobility and classroom flexibility have an
impact on learning.
This is especially pronounced in science labs. Having even one more person per
lab station can have a surprisingly detrimental impact on learning. If there are
two or three people at a lab station, then pretty much everyone is forced to
participate (and learn) in order to finish the lesson. However, if there are
four or more kids at a lab station, then you can have a person slacking off, not
doing much and the others can cover for the slacker. The slacker doesn't learn
anything, and the other students are resentful because three are doing the work
of four.
It wasn't clear to me how that misses the point of the paper, and in acknowledgment of that possibility I added the caveat at the end. Hardly "obnoxious".
Nevertheless, your original comment would be a lot more helpful if you actually summarized the point of the paper well enough that I could tell that my comment is irrelevant.
Could you edit your original post to do so? (Please don't tell me it's impossible. If you do, I'll have to read the paper myself, post a summary, save everyone a lot of time, and prove you wrong.)
3Cyan11yThe point of the paper is that the reasoning behind the p-value approach to null
hypothesis rejection ignores a critical factor, to wit, the ratio of the prior
probability of the hypothesis to that of the data. Your s/member of
Congress/Russian example shows that sometimes that factor is close enough to
unity that it can be ignored, but that's not the fallacy. The fallacy is
failing to account for it at all.
1SilasBarta11yOn second thought, my original reasoning was correct, and I should have spelled
it out. I'll do so here.
It's true that the ratio influences the result, but just the same, you can use
your probability distribution of what predicates will appear in the "member of
Congress" slot, over all possible propositions. It's hard to derive, but you can
come up with a number.
See, for example, Bertrand's paradox, the question of how probable it is that a
randomly chosen chord of a circle is longer than the side of an inscribed
equilateral triangle. Some say the answer depends on how you
randomly choose the chord. But as E. T. Jaynes argued
[http://en.wikipedia.org/wiki/Bertrand%27s_paradox_(probability)#Unique_solution_using_the_.22maximum_ignorance.22_principle], the problem is
well-posed as is. You just strip away any false assumptions you have of how the
chord is chosen, and use the max-entropy probability distribution subject to
whatever constraints are left.
Likewise, you can assume you're being given a random syllogism of this form,
weighted over the probabilities of X and Y appearing in those slots:
If a person is an X, then he is probably not a Y.
This person is a Y.
Therefore, he is probably not an X.
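The member-of-Congress instance of this syllogism shows how the prior ratio can swamp the "probably not" step. A quick numeric sketch, with made-up figures (the exact numbers are assumptions; only their ratio matters):

```python
# X = "is an American", Y = "is a member of Congress" (illustrative figures)
p_x = 0.04                  # assumed P(X): share of people who are American
p_y_given_x = 535 / 300e6   # P(Y|X) is tiny, so "if X, probably not Y" holds
p_y_given_not_x = 0.0       # but only Americans serve in Congress

# Total probability of Y, then Bayes' theorem for P(X|Y)
p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)
p_x_given_y = p_y_given_x * p_x / p_y
print(p_x_given_y)  # 1.0 -- observing Y makes X certain, despite tiny P(Y|X)
```

The naive p-value-style reading ("Y would be improbable if X, so reject X") gets the sign of the inference exactly backwards here, because P(Y among not-X) is even smaller, namely zero.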
1MatthewB11yWooo... Hooo... I just talked to a friend in Texas, too, who gave me info on an
Alcor plan (he runs Alcor Meetups in Austin TX), and it seems that they have
plans that one can buy as well (on installments).
I need to get this set up as soon as I can. I would rather not worry about being
hit by a truck and not being prepared.
The Na'vi didn't defect. The Na'vi refused to play. The human faction wouldn't accept any outcome that didn't end with them getting the unobtainium, and the Na'vi not playing was such an outcome, so the humans forced a game and, when the Na'vi still weren't cooperative, defected big-time. Since the game was spread out in time, this permitted retaliatory defection - which isn't part of the original non-iterated PD, nor is refusing to play.
I recently had to have some minor surgery. However, there's a body of thought that says it's safe to wait and watch for symptoms, and only have surgery later. There's a peer reviewed (I assume) paper supporting this position.
Upon reading this paper I found what looked like a statistical error. Looking at outcomes between two groups, they report p = 0.52, but doing the sums myself I got p = 0.053. For this reason, I went and had the surgery.
Since I'm just a novice at statistics, I was wondering if I had in fact got it right - it's disturbing to think that a... (read more)
6Vladimir_Nesov11yIs there any reason not to post the link immediately? You are creating an
additional barrier (pretty steep one) that lessens your chances of getting any
cooperation.
4AllanCrossman11yWell, I was only going to post all the minutiae if there was any interest...
http://jama.ama-assn.org/cgi/reprint/295/3/285.pdf
The two groups are as follows:
Assigned to "Watchful Waiting":
* 336 patients
* 17 had problems after 2 years
Assigned to surgery:
* 317 patients
* 7 had problems after 2 years
Some patients crossed between the two groups, but this does not matter, as they
were testing the effects of the initial assignment.
They report p = 0.52, but they also give a 95% confidence interval for the
difference in risk, which just barely contains zero; which is a dead giveaway
that p should be around 0.05, right? Anyway, doing a chi-squared test on the
above numbers, I got p = 0.053.
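That chi-squared computation can be reproduced with a short stdlib-only sketch (Pearson statistic without Yates' continuity correction; the 1-df p-value uses the identity P(chi2 > x) = erfc(sqrt(x/2))):

```python
import math

# 2x2 table from the numbers above (problems / no problems after 2 years)
a, b = 17, 336 - 17   # assigned to watchful waiting
c, d = 7, 317 - 7     # assigned to surgery

n = a + b + c + d
# Pearson chi-squared statistic for a 2x2 table, no continuity correction
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
# Survival function of the chi-squared distribution with 1 degree of freedom
p = math.erfc(math.sqrt(chi2 / 2))
print(round(p, 3))  # 0.053 -- matching the comment, not the paper's 0.52
```

Applying a continuity correction would push p somewhat higher, but nowhere near 0.52.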
The relevant bit is at the top of page 289 (page 6 of the PDF). Also relevant
are the Results section of the abstract, and Figures 1 and 2. Essentially the
entire problem is this statement:
1Unnamed11yYou are correct, and the pdf that you linked contains a correction on its last
page:
It does not say anything about whether this affects their conclusions.
It is most certainly not an academic look at the concept, but that doesn't mean he didn't play a role in bringing the concept to the public eye. It doesn't have to be a scientific paper to have a real influence on the idea.
Does anybody have any updates as to the claims made against Alcor, i.e. the Tuna Can incident? I've tried a bunch of searches, but haven't been able to find anything conclusive as to the veracity of the claims.
0RobinZ11yNo. The Turing test is an intuition pump
[http://en.wikipedia.org/wiki/Intuition_pump], not a person-predicate.
0Kevin11yFirst, is there an agreed-upon definition for person? We need to define that and
make sure we agree before we should go much further, but I'll give it a try
anyways.
Not all Turing tests are intuition pumps. There should be other Turing tests to
recognize a greater degree of personhood. Perhaps if the investigator can
trigger an existential crisis in the chatbot? Or if the chatbot can be judged to
be more self-aware than an average 18 year old?
What if the chatbot gets 1000 karma on Less Wrong?
How would you Turing test an oracle chatbot?
http://lesswrong.com/lw/1lf/open_thread_january_2010/1i6u
It seems like this idea has probably been discussed before and that there is
something I am missing, please link me if possible.
http://yudkowsky.net/other/fiction/npc
is all that comes to mind.
1RobinZ11yI think I'm confused: what I assumed you meant was a chatbot in the sense of
ELIZA (a program which uses canned replies chosen and modified as per a cursory
scan of the input text). Such a program is by definition not a person, and
success in Turing tests does not grant it personhood.
As for my second sentence: Turing's imitation game was proposed as a way to get
past the common intuition that only a human being could be a person by
countering it with the intuition that someone you can talk to, you can hold an
ordinary conversation with, is a person. It's an archetypal intuition pump, a
very sensible and well-reasoned intuition pump, a perfectly valid intuition pump
- but not a rigorous mathematical test. ELIZA, which is barely clever, has
passed the Turing test several times. We know that ELIZA is no person.
0Kevin11ySorry, by chatbot I meant an intelligent AI programmed only to do chat. An AI
trapped in the proverbial box.
I agree that a rigorous mathematical definition of personhood is important, but
I doubt that I will be able to make a meaningful contribution in that area
anytime in the next few years. For now, I think we should be able to think of
some philosophical or empirical test of chatbot personhood.
I still feel confused about this and I think that's because we still don't have
a good definition of what a person actually is; but we shouldn't need a rigorous
mathematical test in order to gain a better understanding of what
defines a person.
0RobinZ11yThe Turing test isn't a horrible test of personhood, from that attitude, but
without better understanding of 'personhood' I don't think it's appropriate to
spend time trying to come up with a better one.
0mattnewport11yMost ISPs recycle IP addresses between subscribers periodically. So someone
using the same ISP as you could have ended up with the same IP address.
0Vladimir_Nesov11yBut how many users do you expect sit on the same IP? And thus, what is the prior
probability that basically the only spammer in weeks (there was only one
other) would happen to have the same IP as one of the few dozen (or fewer)
users active enough to notice a day's IP block? This explanation sounds like a
rationalization of a hypothesis privileged because of availability.
0mattnewport11yI didn't know the background spamming rate but it does seem a little unlikely
doesn't it? A chance reuse of the same IP address does seem improbable but a
better explanation doesn't spring to mind at the moment.
0Vladimir_Nesov11yNot a reason to privilege a known-false hypothesis. It's how a lot of
superstition actually survives: "But do you have a better explanation? No?".
0MrHen11yAh, okay. I completely misinterpreted your previous comment.
I heard an interview on NPR with a surgeon who asked other surgeons to use checklists in their operating rooms. Most didn't want to. He convinced some to try them out anyway.
(If you're like me, at this point you need time to get over your shock that surgeons don't use checklists. I mean, it's not like they're doing something serious, like flying a plane or extracting a protein, right?)
After trying them out, 80% said they would like to continue to use checklists. 20% said they still didn't want to use checklists.
So he asked them: if they had surgery, would they want their surgeon to use a checklist? 94% said they would want their surgeon to use a checklist.
The Guardian published a piece citing Less Wrong:
The number's up by Oliver Burkeman
Recent observations on the art of writing fiction:
My main characters in failed/incomplete/unsatisfactory stories are surprisingly reactive, that is, driven by events around them rather than by their own impulses. I think this may be related to the fundamental attribution error: we see ourselves as reacting naturally to the environment, but others as driven by innate impulses. Unfortunately this doesn't work for storytelling at all! It means my viewpoint character ends up as a ping-pong ball in a world of strong, driven other characters. (If you don't see this error in my published fiction, it's because I don't publish unsuccessful stories.)
Closely related to the above is another recent observation: My main character has to be sympathetic, in the sense of having motivations that I can respect enough to write them properly. Even if they're mistaken, I have to be able to respect the reasons for their mistakes. Otherwise my viewpoint automatically shifts to the characters around them, and once again the non-protagonist ends up stronger than the protagonist.
Just as it's necessary to learn to make things worse for your characters, rather than following the natural impulse to
"Former Christian Apologizes For Being Such A Huge Shit Head All Those Years" sounds like an Onion article, but it isn't. What's impressive is not only the fact that she wrote up this apology publicly, but that she seems to have done it within a few weeks of becoming an atheist after a lifetime of Christianity, and in front of an audience that has since sent her so much hate mail she's stopped reading anything in her inbox that's not clearly marked as being on another topic.
So, one result of this experiment would be/is a significantly below average ability to distinguish humor from serious debate...
Or significantly below average ability to signal whether something is humorous or serious. ;)
Inspired by reading this blog for quite some time, I started reading E.T. Jaynes' Probability Theory. I've read most of the book by now, and I have incredibly mixed feelings about it.
On one hand, the development of probability calculus starting from the needs of plausible inference seems very appealing as far as the needs of statistics, applied science and inferential reasoning in general are concerned. The Bayesian viewpoint of (applied) probability is developed with such elegance and clarity that alternative interpretations can hardly be considered appealing next to it.
On the other hand, the book is very painful reading for the pure mathematician. The repeated pontification about how wrong mathematicians are for desiring rigor and generality is strange, distracting and useless. What could possibly be wrong about the desire to make the steps and assumptions of deductive reasoning as clear and explicit as possible? Contrary to what Jaynes says or at least very strongly implies (in Appendix B and elsewhere), clarity and explicitness of mathematical arguments are not opposites or mutually contradictory; in my experience, they are complementary.
Even worse, Jaynes makes several strong ... (read more)
After pondering the adefinitemaybe case for a bit, I can't shake the feeling that we really screwed this one up in a systematic way, that Less Wrong's structure might be turning potential contributors off (or turning them into trolls). I have a few ideas for fixes, and I'll post them as replies to this comment.
Essentially, what it looks like to me is that adefmay checked out a few recent articles, was intrigued, and posted something they thought clever and provocative (as well as true). Now, there were two problems with adefmay's comment: first, they had an idea of the meaning of "evidence" that rules out almost everything short of a mathematical proof, and secondly, the comment looked like something that a troll could have written in bad faith.
But what happened next is crucial, it seems to me. A bunch of us downvoted the comment or (including me) wrote replies that look pretty dismissive and brusque. Thus adefmay immediately felt attacked from all sides, with nobody forming a substantive and calm reply (at best, we sent links to pages whose relevance was clear to us but not to adefmay). Is it any wonder that they weren't willing to reconsider their definition of evi... (read more)
I'm not sure there needs to be more than one FAQ thread. But let's start by generating a list of frequently asked questions, coming up with answers with consensus support.
What else? Anyone have drafts of answers?
Okay, so....a confession.
In a fairly recent little-noticed comment, I let slip that I differ from many folks here in what some may regard as an important way: I was not raised on science fiction.
I'll be more specific here: I think I've seen one of the Star Wars films (the one about the kid who apparently grows up to become the villain in the other films). I have enough cursory familiarity with the Star Trek franchise to be able to use phrases like "Spock bias" and make the occasional reference to the Starship Enterprise (except I later found out that the reference in that post was wrong, since the Enterprise is actually supposed to travel faster than light -- oops), but little more. I recall having enjoyed the "Tripod" series, and maybe one or two other, similar books, when they were read aloud to me in elementary school. And of course I like Yudkowsky's parables, including "Three Worlds Collide", as much as the next LW reader.
But that's about the extent of my personal acquaintance with the genre.
Now, people keep telling me that I should read more science fiction; in fact, they're often quite surprised that I haven't. So maybe, while we're doing these... (read more)
If you mean, "what would we pay to save your life", you could probably take up a respectable collection if you credibly identified a threat to your health that could be fixed with a medium-sized amount of money.
If you mean, "will we bribe you to hang out with us"... uh... no.
In one of the dorkier moments of my existence, I've written a poem about the Great Filter. I originally intended to write music for this, but I've gone a few months now without inspiration, so I think I'll just post the poem to stand by itself and for y'all to rip apart.
Suggested tweaks are welcome. Things that I'm currently unhappy with are that "fortuitous" scans awkwardly, and the skies/eyes rhyme feels clichéd.
Your dying and your leaving LW are two different things, whether or not we are in a position to tell the difference.
I recently revisited my old (private) high school, which had finished building a new >$15 million building for its football team (and misc. student activities & classes).
I suddenly remembered that when I was much younger, the lust of universities and schools in general for new buildings had always puzzled me: I knew perfectly well that I learned more or less the same whether the classroom was shiny new or grizzled gray and that this was true of just about every subject-matter*, and even then it was obvious that buildings must cost a lot to build and... (read more)
Akrasia FYI:
I tried creating a separate login on my computer with no distractions, and tried to get my work done there. This reduced my productivity because it increased the cost of switching back from procrastinating to working. I would have thought that recovering in large bites and working in large bites would have been more efficient, but apparently no, it's not.
I'm currently testing the hypothesis that reading fiction (possibly reading anything?) comes out of my energy-to-work-on-the-book budget.
Next up to try: Pick up a CPAP machine off Craigslist.
Suppose we want to program an AI to represent the interest of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individual in the group, but that runs into the interpersonal comparison of utility problem. (Was there ever a post about this? Does Eliezer have a preferred approach?)
Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program it with the utility function of that individual. Then set a time in the future when one of those A... (read more)
I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is because it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.
Perh... (read more)
Hello all,
I've been a longtime lurker, and tried to write up a post a while ago, only to see that I didn't have enough karma. I figure this is is the post for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I'd gladly turn it into a top-level post here, if that's in order.
Life Experience Should Not Modify Your Opinion http://paltrypress.blogspot.com/2009/11/life-experience-should-not-modify-your.html
When I'm debating some controversial topic wi... (read more)
This article about gendered language showed up on one of my feeds a few days ago. Given how often discussions of nongendered pronouns happen here, I figure it's worth sharing.
By you making them distinguishable, in the way that she suggested.
Suppose you could find out the exact outcome (up to the point of reading the alternate history equivalent of Wikipedia, history books etc.) of changing the outcome of a single historical event. What would that event be?
Note that major developments like "the Roman empire would never have fallen" or "the Chinese wouldn't have turned inwards" involve multiple events, not just one.
So many. I can't limit it to one, but my top four would be "What if Mohammed had never been born?", "What if Julian the Apostate had succeeded in stamping out Christianity?", "What if Thera had never blown and the Minoans had survived?" and "What if Alexander the Great had lived to a ripe old age?"
The civilizations of the Near East were fascinating, and although the early Islamic Empire was interesting in its own right it did a lot to homogenize some really cool places. It also dealt a fatal wound to Byzantium. If Mohammed had never existed, I would look forward to reading about the Zoroastrian Persians, the Byzantines, and the Romanized Syrians and Egyptians surviving much longer than they did.
The Minoans were the most advanced civilization of their time, and had plumbing, three story buildings, urban planning and possibly even primitive optics in 2000 BC (I wrote a bit about them here). Although they've no doubt been romanticized, in the romanticized version at least they had a pretty equitable society, gave women high status, and revered art and nature. Then they were all destroyed by a giant volcano. I remember reading one hist... (read more)
Given that Alexander was one of the most successful conquerors in all of history, he almost certainly benefited from being extremely lucky. If he had lived longer, therefore, he would have probably experienced much regression to the mean with respect to his military success.
I'd really, really like to see what the world would be like today if a single butterfly's wings had flapped slightly faster back in 5000 B.C.
Prisoner's Dilemma on Amazon Mechanical Turk: http://blog.doloreslabs.com/2010/01/altruism-on-amazon-mechanical-turk/
Oh, and to post another "what would you find interesting" query, since I found the replies to the last one to be interesting. What kind of crazy social experiment would you be curious to see the results of? Can be as questionable or unethical as you like; Omega promises you ve'll run the simulation with the MAKE-EVERYONE-ZOMBIES flag set.
There are several that I've wondered about:
Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like.
Try to create a society of unconscious people with bicameral minds, as described in Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind", using actors taking on the appropriate roles. (Jaynes's theory, which influenced Daniel Dennett, was that consciousness is a recent cultural innovation.)
Try to create a society where people grow up seeing sexual activity as casual, ordinary, and expected as shaking hands or saying hello, and see whether sexual taboos develop, and study how sexual relationships form.
Raise a bunch of kids speaking artificial languages, designed to be unlike any human language, and study how they learn and modify the language they're taught. Or give them a language without certain concepts (relatives, ethics, the self) and see how the language influences they way they think and act.
They'd probably be like the average less wrong commenter/singularitarian/transhumanist, so really no need to run this one.
Has anyone here tried Lojban? Has it been useful?
I recommend making a longer list of recent comments available, the way Making Light does.
If you've been working with dual n-back, what have you gotten out of it? Which version are you using?
Would an equivalent to a .newsrc be possible? I would really like to be able to tell the site that I've read all the comments in a thread at a given moment, so that when I come back, I'll default to only seeing more recent comments.
If quantum immortality is correct, and assuming life extension technologies and uploading are delayed for a long time, wouldn't each of us, in our main worldline, become more and more decrepit and injured as time goes on, until living would be terribly and constantly painful, with no hope of escape?
I spent December 23rd, 24th and 25th in the hospital. My uncle died of brain cancer (Glioblastoma multiforme). He was an atheist, so he knew that this was final, but he wasn't signed up for cryonics.
We learned about the tumor 2 months ago, and it all happened so fast.. and it's so final.
This is a reminder to those of you who are thinking about signing up for cryonics; don't wait until it's too late.
Because trivial inconveniences can be a strong deterrent, maybe someone should make a top-level post on the practicalities of cryonics; an idiot's guide to immortality.
Alexandre Borovik summarizes the Bayesian error in null hypothesis rejection method, citing the classical
J. Cohen (1994). "The Earth Is Round (p < .05)". American Psychologist 49(12): 997-1003.
The fallacy of null hypothesis rejection
What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.
From The Rhythm of Disagreement:
Has Bostrom made this proposal in anything published? I can't seem to find it on nickbostrom.com.
Different responses to challenges seen through the lens of video games. Although I expect the same can be said for character-driven stories (rather than, say, concept-driven ones).
... (read more)
This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)
What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?
Richard Dawkins talking to an astrologer. Best part at 10m28s.
Transcript:
--
Dawkins: We could devise a little experiment where we take your forecasts and then give some of them straight, give some of them randomized, sometimes give Virgo the Pisces forecast et cetera. And then ask people how accurate they were.
Astrologer: Yes, that would be a perverse thing to do, wouldn't it.
Dawkins: It would be - yes, but I mean wouldn't that be a good test?
Astrologer: A test of what?
Dawkins: Well, how accurate you are.
Astrologer: I think that your intention there is mischief, and I'd think what you'd then get back is mischief.
Dawkins: Well my intention would not be mischief, my intention would be experimental test. A scientific test. But even if it was mischief, how could that possibly influence it?
Astrologer: (Pause.) I think it does influence it. I think whenever you do things with astrology, intentions are strong.
Dawkins: I'd have thought you'd be eager.
Astrologer: (Laughs.)
Dawkins: The fact that you're not makes me think you don't really in your heart of hearts believe it. I don't think you really are prepared to put your reputation on the line.
Astrologer: I just don't believe in the experiment, Richard, it's that simple.
Dawkins: Well you're in a kind of no-lose situation then, aren't you.
Astrologer: I hope so.
--
He believes that they are sinning. Mormons have a really complicated dolled-up afterlife, so if he's sticking to doctrine, he probably doesn't actually expect gays as a group to all go to Hell.
Edit: He did a gay guy in the Memory of Earth series too (the plot of which, I later found, is a blatant ripoff of the Book of Mormon). Like the gay guy in Songbird, this one ends up with a woman, although less tragically.
One definition of a prime, of course, is "a number whose only factors are itself and 1, except for 1 itself". Another, however, is "a number with exactly two factors", which is probably simpler than "a number whose only factors are itself and 1". And if 1 were prime, it would be a highly exceptional one, in that there would be many places where one would have to say "all prime numbers except 1".
... (read more)
Within the context of the article, the bigger form of the argument can be phrased as such:
This is bad and wrong. As a snap judgement, it is likely that releasing cross-platform software is a more successful thing to do, but using that snap judgement to build bigger arguments is dangerous.
This is an example of an appeal from author... (read more)
If the tiebreak strategy is "agree with the previous person's guess", then you reach that point immediately. The first person's draw determines everyone's guess: If the second person's draw is the same as the first, then of course they agree, and if not then they're at a 50/50 posterior and thus also agree.
If the tiebreak strategy is "write down your own draw (i.e. maximize the information given to subsequent players)", then information can be collected only so long as the number of each color drawn remains tied or +/-1. As soon as one ... (read more)
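The first tiebreak claim ("agree with the previous person's guess") can be checked with a quick simulation. This is a sketch under the stated assumptions: each player guesses the majority color among their own private draw plus all previous public guesses, copying the previous guess on ties:

```python
import random

def run_round(draws):
    """Each player sees their own draw plus all previous public guesses and
    guesses the majority color; on a tie they agree with the previous guess."""
    guesses = []
    for draw in draws:
        signals = guesses + [draw]
        diff = signals.count("R") - signals.count("B")
        if diff > 0:
            guesses.append("R")
        elif diff < 0:
            guesses.append("B")
        else:  # a tie is only possible once there is a previous guess to copy
            guesses.append(guesses[-1])
    return guesses

random.seed(0)
for _ in range(100):
    draws = [random.choice("RB") for _ in range(10)]
    guesses = run_round(draws)
    # The cascade is immediate: the first draw fixes every subsequent guess.
    assert all(g == draws[0] for g in guesses)
```

Player 2 either agrees with their draw (matching player 1) or hits the 50/50 tie and copies anyway, so by induction every guess equals the first player's draw, exactly as the comment argues.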
Why is the news media comfortable with lying about science?
http://arstechnica.com/science/news/2010/01/why-is-the-news-media-comfortable-with-lying-about-science.ars
James Hughes - with a (IMO) near-incoherent Yudkowsky critique:
http://ieet.org/index.php/IEET/more/hughes20100108/
I don't think that's something that most people who think "life experience" is valuable would agree to.
It might be profitable for you to revise your criteria for what constitutes legitimate evidence. Throwing away information that has a positive correlation with the thing you're wondering about seems a bit hasty.
A few years back I did an ethics course at university. It very quickly made me realise that both I and most of the rest of the class based our belief in the existence of objective ethics simply on a sense that ethics must exist. When I began to question this idea my teacher asked me what I expected an objective form of ethics to look like. When I said I didn't know, she asked if I would agree that a system of ethics would be objective if it could be universally calculated by any unbiased, perfectly logical being. This seemed fair enough but the problem... (read more)
Here's a silly comic about rationality.
I rather wish it was called "Irrationally Undervalues Rapid Decisions Man". Or do I?
P(A)*P(B|A) = P(B)*P(A|B). Therefore, P(A|B) = P(A)*P(B|A) / P(B). Therefore, woe is you should you assign a probability of 0 to B, only for B to actually happen later on; P(A|B) would include a division by 0.
Once upon a time, there was a Bayesian named Rho. Rho had such good eyesight that she could see the exact location of a single point. Disaster struck, however, when Rho accidentally threw a dart, its shaft so thin that its intersection with a perfect dartboard would be a single point, at a perfect dartboard. You see, when you randomly select a point f... (read more)
My life plus the life of a random stranger, for example. If I was doomed to die in a certain fashion but had the chance to save another life (even in a way nobody would ever know about), well, that's a no-brainer for me.
EDIT: Ah, now I see the context. How about the following hypothetical:
I am on a spaceship returning to Earth when all my shipmates die. I realize that I am a carrier for a horrific disease; I will never get sick from it, but I can transmit it to others, of whom 99% will die. Let's furthermore imagine that the people on the ground don't ... (read more)
There is this thread. But it needs to be linked to from some kind of faq page because right now it is too hidden from new users to be helpful.
I am curious as to how many LWers attempt to work out and eat healthy to lengthen life span. Especially among those who have signed up for cryogenics.
You value things other than your own life, hence your life isn't priceless to you as well (there are hypothetical situations where you exchange your life for a significant improvement in the other things you value), though its value will of course be different for you and for other people, perhaps with the difference of couple orders of magnitude.
A little knowledge can be a dangerous thing. At least Eliezer has previously often recommended Judgment Under Uncertainty as something people should read. Now, I'll admit I haven't read it myself, but I'm wondering if that might be bad advice, as the book's rather dated. I seem to frequently come across articles that cite JUU, but either suggest alternative interpretations or debunk its results entirely.
Just today, I was trying to find recent articles about scope insensitivity that I could cite. But on a quick search I primarily ran across articles point... (read more)
I found this article interesting, along with the paper it discusses on children's conception of intelligence.
The abstract to the article
... (read more)
A suggestion for the site (or perhaps the Wiki): It would be useful to have a central registry for bets placed by the posters. The purpose is threefold:
For the "How LW is Perceived" file:
Here is an excerpt from a comments section elsewhere in the blogosphere:
I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...
I am currently writing a sequence of blog posts on Friendly AI. I would appreciate your comments on present and future entries.
Inspired by this comment by Michael Vassar:
http://lesswrong.com/lw/1lw/fictional_evidence_vs_fictional_insight/1hls?context=1#comments
Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.
Michael Vassar suggests The Great Gatsby (I think, it was kind of written confusingly parallel with the names of authors but I don't think there was ever an author Ga... (read more)
How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?
I was five years old and walking down the hall outside of my kindergarten classroom and I suddenly realized that I had control over what was happening inside of my mind's eye. This manifested itself by me summoning an image in my head of Gene Wilder as Willy Wonka.
Is it proper to consider that the moment when I became self aware? Does anyone have a similar anecdote?
(This is inspired by Shannon's mention of her child exploring her sense of s... (read more)
I have read pretty much everything more than once. It is pretty difficult to turn reading into action though. Which is why I feel like there is something I am missing. Yep.
One technique I use to internalize certain beliefs is to determine their implied actions, then take those actions while noting that they're the sort of actions I'd take if I "truly" believed. Over time the belief becomes internal and not something I have to recompute every time a related decision comes up. I don't know precisely why this works but my theory is that it has to do with what I perceive my identity to be. Often this process exposes other actions I take which are not in line with the belief. I've used this for things like "animal suffering is actually bad", "FAI is actually important", and "I actually need to practice to write good UIs".
I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)
I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.
It's often s... (read more)
Paul Graham -- How to Disagree
http://www.paulgraham.com/disagree.html
When I'm actively following the site (visiting 3+ times a day), I primarily follow the new comments page. I only read top posts when I see that there's an interesting discussion going on about one of them, or if the post's title seems particularly interesting. (I do wind up reading a large portion of the top posts sooner or later, though.)
I have the 'recent posts' RSS feed in my reader for when I'm not actively following the site, but I only click through if something seems very interesting.
Well, the future will certainly be full of mostly strangers. If you can't convince any of your current friends/family to sign up, you might be better off making friends with those who have already signed up. There are bound to be some you would get along with (I've read OOTS since it started :-) )
If I ever have any success in convincing anyone else to sign up for cryonics, I'll let you know how I did it (in the unlikely event that this will help!).
I use RSS for top level posts, and have an easily accessible bookmark to the comments page which I check more frequently than I should.
A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:
Oops.
Anecdotes are rational evidence, but not scientific evidence.
Once upon a time I was pretty good at math but either I just stopped liking it or the series of dismal school teachers I had turned me off of it. I ended up taking the social studies/humanities route and somewhat regretting it. I've studied some foundations of mathematics stuff, symbolic logic and really basic set theory and usually find that I can learn pretty rapidly if I have a good explanation in front of me. What is the best way to teach myself math? I stopped with statistics (high school, advanced placement) and never got to calculus. I don't expect to become a math whiz or anything, I'd just like to understand the science I read better. Anyone have good advice?
When people here say they are signed up for cryonics, do they systematically mean "signed up with the people who contract to freeze you and signed up with an instrument for funding suspension, such as life insurance" ?
I have contacted Rudi Hoffmann to find out just what getting "signed up" would entail. So far I'm without a reply, and I'm wondering when and how to make a second attempt, or whether I should contact CI or Alcor directly and try to arrange things on my own.
Not being a US resident makes things much more complicated (I live in France). Are there other non-US folks here who are "signed up" in any sense of the term ?
Oh! More Asimov, "I, Robot". Here the guy was talking about Friendly AI in 1942.
Feature request, feel free to ignore if it is a big deal or requested before.
When messaging people back and forth it would be nifty to be able to see the thread. I see glimpses of this feature but it doesn't seem fully implemented.
I follow Nick Bostrom on anthropic reasoning as well as existential risk, so I expect to see you there.
An interesting application of near/far:
http://scienceblogs.com/notrocketscience/2010/01/becoming_better_mind-readers_-_to_work_out_how_other_people.php
Does undetectable equal nonexistent? Examples: There are alternate universes, but there's no way we can interact with them. There are aliens outside our light cones. Past events evidence of which has been erased.
First: I'm having a very bad brain week; my attempts to form proper-sounding sentences have generally been failing, muddling the communicative content, or both. I want to catch this open thread, though, with this question, so I'll be posting in what is to me an easier way of stringing words together. Please don't take it as anything but that; I'm not trying to be difficult or to display any particular 'tone of voice'. (Do feel free to ask about this; I don't mind talking about it. It's not entirely unusual for me, and is one of the reasons that I'm fairly ... (read more)
Except wireheads.
Sorry, really late reply. Was just looking over this thread and happened to see this.
Card's writing that involves sexual attraction just comes off as asexual. I never got the sense that the characters were actually sexually attracted to each other; affectionate maybe, but not aroused. It's like the way sexuality looks on tv, not the way people actually experience it. I recall reading Card himself saying that he didn't think he was very good at writing about sex or sexual attraction in an interview or something. It might have been in The Folk of the Fringe book somewhere but I can't find it in my library.
I apologize for not replying and providing the citations needed. I've had unforeseen difficulties in finding the time, and now I'm going abroad for a week with no net access. When I come back I hope to make time to participate in LW regularly again and will also reply here.
Why was this comment downvoted to -4? Seems to me it's a legitimate question, from a fairly new poster.
It appears as long as I stoop to the correct level of self-deprecation I get enough karma to allow me to keep bashing myself over the head.
Ask Peter Norvig anything: http://www.reddit.com/r/programming/comments/auvxf/ask_peter_norvig_anything/
Grand Orbital Tables: http://www.orbitals.com/orb/orbtable.htm
In high school and intro chemistry in college, I was taught up to the d and then f orbitals, but they keep going and going from there.
It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually commu... (read more)
Ray Kurzweil Responds to the Issue of Accuracy of His Predictions
http://nextbigfuture.com/2010/01/ray-kurzweil-responds-to-issue-of.html
How much of Eliezer's 2001 FAI document is still advocated? eg. Wisdom tournaments and bugs in the code.
Something has been bothering me ever since I began to try to implement many of the lessons in rationality here. I feel like there needs to be an emotional reinforcement structure or a cognitive foundation that is both pliable and supportive of truth seeking before I can even get into the why, how and what of rationality. My successes in this area have been only partial but it seems like the better well structured the cognitive foundation is the easier it is to adopt, discard and manipulate new ideas.
I understand that is likely a fairly meta topic and woul... (read more)
I've just reached karma level 1337. Please downvote me so I can experience it again!
"Top Contributors" is now sorted correctly. (Kudos to Wesley Moore at Tricycle.)
And if you cannot act such that 0 rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is (arguably, of course) in both cases positive.
I got an email back from her. Tl;dr version: Nope, that's definitely not how she was thinking about it. (Perhaps noteworthy: She rarely communicates via email, so she's out of her element here. It is possible to evoke saner discussion from her in realtime.)
... (read more)
Possibly dumb question but... can anyone here explain to me the difference between Minimum Message Length and Minimum Description Length?
I've looked at the wikipedia pages for both, and I'm still not getting it.
Thanks.
Yudkowsky briefly addressed moral luck:
Lately I've actually been thinking that maybe we should split up morality into two concepts, and deal with them separately: one referring to moral sentiments, and another referring to what we actually do. It... (read more)
Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?
Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit ... (read more)
If you're suggesting that all science fiction is implausible though, then that's not true. There's a difference between coming up with random, futuristic ideas, and coming up with random, futuristic ideas that have justification for working.
[Parent at -2.] Is the advice to not waste time and effort on stuff you don't need really that bad? (Hypothetical, under the assumption that you really don't need it; if you do need it occasionally, in the majority of cases it'll be enough to relearn directly on demand, rather than supporting for perfection's sake.)
We're using different definitions of validity. Yours is "[a] syllogism... is valid [if] it reaches a conclusion that is, on average, correct." Mine is this one.
ETA: Thank you for taking the time to explain your position thoroughly; I've upvoted the parent. I'm unconvinced by your maximum entropy argument because, at the level of lack of information you're talking about, H and D could be in continuous spaces, and in such spaces, maximum entropy only works relative to some pure non-informative measure, which has to be derived from arguments other than maximum entropy.
If the Simple Math of Everything were a real text book, I'd read that. But I've gathered calculus is the right place to start. Probability theory would be next, I guess.
Users' karma is only displayed on their user page (and the top contributors list). The number in the header of an article or comment is the score for that post only. Does this help?
ke^x is its own derivative for any k, including 0. It's a lot more convenient for 1 not to be prime. But 0! = 1, for example.
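The parent's calculus claim is easy to spot-check numerically; a minimal sketch using a central finite difference (the helper name `deriv` and the tolerance are my own choices for illustration):

```python
import math

# Check: d/dx (k * e^x) = k * e^x for several k, including k = 0
# (the zero function is trivially its own derivative).
def deriv(f, x, h=1e-6):
    # Central finite difference, accurate to O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

for k in (0.0, 1.0, -3.0):
    f = lambda x, k=k: k * math.exp(x)
    assert abs(deriv(f, 0.7) - f(0.7)) < 1e-4
print("ok")
```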
Email sent. I'll quote the relevant bit here, in case it turns out to affect her reply. (I did link to the conversation but I'm not sure she'll follow the link.)
... (read more)
Was this ever said or shown in an episode? It seems like a cop out to just assume magical technology is post-Singularity without it being in the back story.
Wasn't there a consciousness swapping episode of Farscape? Also, what about the Data is basically a person trope in TNG? I agree that Star Trek technology doesn't make a lot of sense, though.
Given your high standards, shouldn't the fact that the Cylons were never much more intelligent than humans bother you?
Yes, and the reason this is relevant is because the positions are about things to be done to the poor.
You said:
There is a factual disagreement about how to best help the poor. The poor themselves generally support one of the two options: social support. They may, factually, be wrong. There is then a further decision: do we help them in ... (read more)
Well, at least you'll have the Less Wrong reunion.
Hmm, what about an outside view? That is, thinking about what it would be like for someone else. I'm a little too sleepy now to recall the exact reference, but there was something said here about how people make better estimates e.g. about how long a project will take if they think about how long similar projects have taken, rather than how long they think this project will take. And, because you know about the present, let's make our thought experiment happen in the present.
So, what if a woman was frozen a hundred years ago, and woke up today? Would she be ab... (read more)
I think plenty of ethical differences remain even if we eliminate all possiblee factual disagreements.
As regards religion, (many) religious people claim that they obey god's commands because they are (ethically) good and right in themselves, not just because they come from god. It's hard to dismiss religion entirely when discussing the ethics adopted by actual people - there's not much data left.
But here's another example: some people advocate the ethics of minimal government and completely unrestrained capitalism. I, on the other hand, believe in state ... (read more)
What do you mean "as ethical"? By what meta-ethical rule?
If your reply is, "by the objective meta-ethics which I postulate that all sentient beings can derive" - if everyone can derive it equally, doesn't that imply everyone ought to be equally ethical? If you admit some person or some society is un-ethical (as you asked of Jack), does that mean they somehow failed to derive the meta-ethics? That the ethics they adopted is internally inconsistent somehow?
Invented isn't the right word, though that is partly my fault since baseball isn't an ideal metaphor. Natural language is a better one. Parts of ethics are culturally inherited (presumably at some time in the past they were invented) other parts are innate. The word ethics has a type-token ambiguity. It can refer to our ethical system (call it 'ethics prime') or it can refer to the type of thing that ethics prime is (an ethics). There ... (read more)
If you want to read a full length Asimov book, my personal recommendation is the End of Eternity. It has a rather unique take on time travel and functions well as a stand alone book. It has just been reprinted after being out of print for too long.
Foundation is his most well known novel and it also very much worth reading.
I can't find someone violating the copyright online with a quick Google, but Asimov's short story "The Last Answer" is also a good one with a different take on religion than "The Last Question".
Today at work, for the first time, LessWrong.com got classified as "Restricted:Illegal Drugs" under eSafe. I don't know what set that off. It means I can't see it from work (at least, not the current one).
How do we fix it, so I don't have to start sending off resumes?
I was linking not just to the paper, but to a summary of the paper, and included that example out of that summary, a summary-of-summary. Others have already summarized what you got wrong in your reply. You can see that the paper has about 1300 citations, which should count for its importance.
And for one short moment, in the wee morning hours, MrHen takes up the whole damn Recent Comments section.
I assume dropping two walls of text and a handful of other lengthy comments isn't against protocol. Apologies if I annoy anyone.
I am going to be hosting a Less Wrong meeting at East Tennessee State University in the near future, likely within the next two weeks. I thought I would post here first to see if anyone at all is interested and if so when a good time for such a meeting might be. The meeting will be highly informal and the purpose is just to gauge how many people might be in the local area.
Please review a draft of a Less Wrong post that I'm working on: Complexity of Value != Complexity of Outcome, and let me know if there's anything I should fix or improve before posting it here. (You can save more substantive arguments/disagreements until I post it. Unless of course you think it completely destroys my argument so that I shouldn't even bother. :)
I've definitely learned a lot of math from Wikipedia. I don't generally do the proofs myself, so I don't really have any of the elusive "mathematical maturity", but I definitely have learned a lot of abstract algebra, category theory and mathematical logic just by reading the definitions of various things on Wikipedia and trying to understand them.
On the other hand, I am pretty motivated to learn these things because I actively enjoy them. Other branches of math, I am much less interested in and so I don't learn that much. But it is possible!
Garry Kasparov: The Chess Master and the Computer
http://www.nybooks.com/articles/23592
Today's Questionable Content has a brief Singularity shoutout (in its typical smart-but-silly style).
I recently found an article that may be of interest to Less Wrong readers:
Blame It on the Brain
The latest neuroscience research suggests spreading resolutions out over time is the best approach
The article also mentions a study in which overloading the prefrontal cortex with other tasks reduces people's willpower.
(should I repost this link to next month's open thread? not many people are likely to see it here)
Inorganic dust with lifelike qualities: http://www.sciencedaily.com/releases/2007/08/070814150630.htm
So I am back in college and I am trying to use my time to my best advantage. Mainly using college as an easy way to get money to fund room and board while I work on my own education. I am doing this because I was told here among other places that there are many important problems that need to be solved, and I wanted to develop skills to help solve them because I have been strongly convinced that it is moral to do so. However beyond this I am completely unsure of what to do. So I have the furious need for action but seem to have no purpose guiding that actio... (read more)
Schooling isn't about education. This article is pretty mind-boggling: apparently, it's been the norm until now in Germany that school ends at lunchtime and the children then go home. Considering how strong the German economy has traditionally been, this raises serious questions about the degree to which elementary school really is about teaching kids things (as opposed to just being a place to drop off the kids while the parents work).
Oh, and the country is now making the shift towards school in the afternoon as well, driven by - you guessed it - a need for women to spend more time actually working.
For some reason, my IP was banned on the LessWrong Wiki. Apparently this is the reason:
Any idea how this happens and how I can prevent from happening again?
Strange fact about my brain, for anyone interested in this kind of thing:
Even though my recent top-level post has (currently) been voted up to 19, earning me 190 karma points, I feel like I've lost status as a result of writing it.
This doesn't make much sense, though it might not be a bad thing.
What are/ought to be the standards here for use of profanity?
The emotional framework of which you speak doesn't seem to resemble anything I can introspectively access in my head, but maybe I can offer advice anyway. Some emotional motivations that are conducive to rationality are curiosity, and the powerful need to accomplish some goal that might depend on you acting rationally.
It's one of my favourites, and I also think it's a good one to start with. But so is The State of the Art. My favourite by him is Feersum Endjinn.
I mentioned something along these lines before.
Paul Bucheit -- Evaluating risk and opportunity (as a human)
http://paulbuchheit.blogspot.com/2009/09/evaluating-risk-and-opportunity-as.html
What's the right prior for evaluating an H1N1 conspiracy theory?
I have a friend, educated in biology and business, very rational compared to the average person, who believes that H1N1 was a pharmaceutical company conspiracy. They knew they could make a lot of money by making a less-deadly flu that would extend the flu season to be year round. Because it is very possible for them to engineer such a virus and the corporate leaders are corrupt sociopaths, he thinks it is 80% probable that it was a conspiracy. Again, he thinks that because it was possible for ... (read more)
I guess we already do something like that, namely award people with status for being inventors or early adopters of ideas (think Darwin and Huxley) that eventually turn out to be accepted by the majority.
http://www.vimeo.com/7397629
Can someone point me towards the calculations people have been doing about the expected gain from donating to the SIAI, in lives per dollar?
Edit: Never mind. I failed to find the video previously, but formulating a good question made me think of a good search term.
Yes, that would be an information cascade.
The Edge Annual Question 2010: How is the internet changing the way you think?
http://www.edge.org/q2010/q10_print.html#responses
It does occur to me that I wasn't objecting to the hypothetical existence of said function, only that rights aren't especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.
I read new posts as soon as I see them. I look at the comments through the recent comments bar, but that requires having the LW tab open more or less constantly. I also reread posts to get any comments I miss and to get a better sense of how the discussions are proceeding.
As a 19-year-old student living in Hungary, I have cryonics way back on my list of life-extension-related things to do. Nevertheless I think cryonics is a great option and I'll sign up as soon as I figure out how I could do it in my country (Russia being the closest place with a cryo service) and have the money for it.
As a side note, I think cryonics has the best payoffs when you've got some potentially lethal relatively slowly advancing disease like cancer or ALS, and have the option of moving very closely to a cryonics facility.
Humbug. What you are actually saying is that wanting to know can be a terminal value, so why won't you just say that?
And of course, I know that, but there is just too much stuff out there to learn, so it's a necessity that the things you do choose to learn are in some sense better than the rest (otherwise you lose something), more beautiful or more useful. Just saying that one would learn X because "learning in general" is fun isn't enough.
For me, the necessity of using Bayesian inference follows from Cox's Theorem, an argument which invokes no meta-probability distribution. Even if Bayesian inference turns out to have SilasBarta::validity, I would not justify it on those grounds.
I woul... (read more)
I poked around a bit on the site and I think the vast majority of ways I could spend equivalent downtime would be worth more than the pennies they offer there. Even the overhead of signing up for the tasks is too costly a barrier for such tiny payouts, and that's if you avoid the ones that require you to pass qualification tests. Plus, the number of offers asking people to re-write content in their own words just screams plagiarism.
A. I expect to need to use it in the fall when I go to college. B. I want to know how to do calculus.
Argh. I wasn't saying that you were using the Bayes Theorem in your claimed definition of Cyan::validity. I was saying that when you are deriving probabilities through Bayesian inference, you are implicitly applying a standard of validity for probabilistic syllogisms -- a standard that matches mine, and yields the conclusion I claimed about the syllogism in question.
... (read more)
I was recently asked to produce the indefinite integral of ln x, and completely failed to do so. I had forgotten how to do integration by parts in the 6 months since I had done serious calculus. Is there anyone who knows of a calculus problem of the day or some such that I might use to retain my skills?
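For anyone who wants the worked answer: one round of integration by parts (choosing $u = \ln x$, $dv = dx$, so $du = dx/x$, $v = x$) recovers it:

```latex
\int \ln x \, dx
  = x \ln x - \int x \cdot \frac{1}{x} \, dx
  = x \ln x - x + C .
```

Differentiating $x \ln x - x$ gives $\ln x + 1 - 1 = \ln x$, confirming the result.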
I don't vote for blind memorization either. However, I think that if one cannot reconstruct a proof, then it is not understood either. Trying to reconstruct thought processes from memory will highlight the parts with incomplete understanding.
Of course in order to fully understand things one should look at additional consequences, solve problems, look at analogues, understand motivation etc. Still, the reconstruction of proofs is a very good starting point, IMO.
Well, if everyone else they've revived so far has ended up a miserable outcast in an alien society, or some other consistent outcome, they might be able to take a guess at it.
I only call syllogisms about probabilities valid if they follow from Bayes' Theorem. You permit yourself a meta-probability distribution over the probabilities and call a syllogism valid if it is Cyan::valid on average w.r.t. to your meta-distribution. I'm not saying that SilasBarta::valid isn't a possibly interesting thing to think about, but it doesn't seem to match Cyan::valid to me.
... (read more)
At which point you are completely forsaking your original argument (rightly or wrongly, which is a separate concern), which is the idea of my critical comment above. It's unclear what you are arguing about, if your conclusion is equivalent to a much simpler premise that you have to assume independently of the argument. This sounds like rationalization (again, no matter whether the conclusion-advice-heuristic is ... (read more)
If that's the distinction, then whether there is objective ethics or not is just a matter of semantics; not anything of philosophical or practical interest.
Ethical problem. It occurred to me that there's an easy, obvious way to make money by playing slot machines: Buy stock in a casino and wait for the dividends. Now, is this ethically ok? On the one hand, you're exploiting a weakness in other people's brains. On the other hand, your capital seems unlikely, at the existing margins, to create many more gamblers, and you might argue that you are more ethical than the average investor in casinos.
It's a theoretical issue for me, since my investment money is in an index fund, which I suppose means I own some tiny share in casinos anyway and might as well roll with it. But I'd be interested in people's thoughts anyway.
This is a very interesting line of argument. How much of it do you think is due to this:
Many of these women live in cultures and social circles/families where publicly supporting abortion rights is very damaging socially. Even in relaxed conditions, disagreeing with one's family and friends on an issue that evokes such strong emotions is hard. The women tend to conform unless they have a strong personal opinion to the contrary, and once they conform on the signalling level, they may eventually come to believe themselves that they are against abortion.
If th... (read more)
Not for probabilistic claims.
No. The reasoning can be valid even though, given additional information, the conclusion would be changed.
Example:
Bob is accused of murder.
Then, Bob's fingerprints a... (read more)
I wouldn't quite say that I take the revealed preference more seriously than the overt one. I'm prepared to accept that there may be people who genuinely believe that abortion is morally wrong and also genuinely believe that other people (and possibly they themselves) will succumb to temptation and have an abortion if it is legal and available even if they believe it is wrong. We generally accept the reality of akrasia here, this seems like a very similar phenomenon: the belief that people can't be trusted to do what is morally right when faced with an unw... (read more)
It is. If you believe MWI, you believe that Schrodinger's cat will experience survival every time, even if you repeated the experiment 100 times, but that you will observe the cat dead if you repeat the experiment enough times.
There is no falsifiable fact above and beyond MWI as far as I can see, apart from the general air of confusion about subjective experience, which hasn't coalesced into anything definite enough to be falsified.
Take ethics to be a system that sets out rules that determine whether we are good or bad people and whether our actions are good or bad. They differ from other ascriptions of good (say, at baseball) and bad (say, me playing baseball) in that there is an imperative to be good in this sense whereas it is acceptable to be bad at baseball (I hope).
I suspect that won't answer your question so instead I'll ask another. Do you believe that this inability to define means there is no real concept that underpins the folk conception of ethics or does it just mean we are unable to define it well enough to discuss it?
If you're ok with a yes or no answer, then it's enough. If you also want to know how individuals may be important to events, killing may not be enough, I think.
My understanding is that such a story relies on trying to define the area of a point when only areas of regions are well-defined; the probability of the point case is just the limit of the probability of the region case, in which case there is technically no zero probability involved.
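The limit picture can be made concrete: for a continuous distribution, the probability of a small interval around a point shrinks to zero along with the interval, so the "zero probability of a point" is just that limit. A minimal sketch for a Uniform(0, 1) variable (the choice of distribution is purely illustrative):

```python
def prob_interval(lo, hi):
    # P(lo <= X <= hi) for X ~ Uniform(0, 1): just the overlap length
    lo, hi = max(lo, 0.0), min(hi, 1.0)
    return max(hi - lo, 0.0)

x = 0.5
for eps in (0.1, 0.01, 0.001):
    # the interval's probability shrinks linearly with eps, toward 0
    print(prob_interval(x - eps, x + eps))
```

The point {x} never gets an area of its own; its "probability" is only ever the limit of these interval probabilities, which is 0.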
"Imagine the human race gets wiped out, but you want to transmit the knowledge acquired so far to succeeding intelligent races (or aliens). How would you do it?"
I got this question while reading a dystopian novel about a world after nuclear war.
Well, I can't really object to the extremes theory. You aren't a Third-Worlder or a highly driven Indian or Chinese or pre-20th century American child who wouldn't be bothered by such conditions, after all.
But most school building is not about avoiding such extremes. I can cite exactly one example in my educational career where a building had a massive overhaul due to genuine need (a fire in the gym burned the roof badly); all the other expansions and new buildings.... not so much.
... (read more)
I just noticed that I showed up around the same time as the Guardian mention as well. However, I have been lurking (without registering) for two years now. I met Eliezer Yudkowsky at the first Singularity Summit and became aware of OB as a result, and then became aware of this blog shortly after he split from OB.
However, I would like to say that a newcomers section in a FAQ or Wiki would have been most welcome.
I do have a little bit of a clue what I am doing here as well, as I have spent a lot of time on forums such as Richard Dawkins' and Sam Harris' an... (read more)
So make the classes bigger, perhaps. In a Hansonian vein:
... (read more)
It wasn't clear to me how that misses the point of the paper, and in acknowledgment of that possibility I added the caveat at the end. Hardly "obnoxious".
Nevertheless, your original comment would be a lot more helpful if you actually summarized the point of the paper well enough that I could tell that my comment is irrelevant.
Could you edit your original post to do so? (Please don't tell me it's impossible. If you do, I'll have to read the paper myself, post a summary, save everyone a lot of time, and prove you wrong.)
You can pay for cryonics with life insurance.
But the question is whether it's safe to advise people to wait, knowing that they can have surgery later if needed.
Anyway my main question was whether I'd done the stats right.
The Na'vi didn't defect. The Na'vi refused to play. The human faction wouldn't accept any outcome that didn't end with them getting the unobtainium, and the Na'vi not playing was such an outcome, so the humans forced a game and, when the Na'vi still weren't cooperative, defected big-time. Since the game was spread out in time, this permitted retaliatory defection - which isn't part of the original non-iterated PD, nor is refusing to play.
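The distinction matters because retaliatory defection only becomes a coherent strategy once the game is spread out in time, i.e. iterated. A minimal sketch (using the conventional payoff values T=5, R=3, P=1, S=0, which are an assumption here, not anything from the comment):

```python
# Payoffs for (player_a_move, player_b_move): 'C' = cooperate, 'D' = defect
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(rounds, strat_a, strat_b):
    """Run an iterated PD; each strategy sees (own_history, other_history)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(own, other):
    # cooperate first, then retaliate by copying the opponent's last move
    return 'C' if not other else other[-1]

def always_defect(own, other):
    return 'D'
```

In the one-shot PD neither retaliation nor refusal to play is representable; here, `tit_for_tat` can punish `always_defect` from round two onward, which is the structure the comment is pointing at.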
I recently had to have some minor surgery. However, there's a body of thought that says it's safe to wait and watch for symptoms, and only have surgery later. There's a peer reviewed (I assume) paper supporting this position.
Upon reading this paper I found what looked like a statistical error. Looking at outcomes between two groups, they report p = 0.52, but doing the sums myself I got p = 0.053. For this reason, I went and had the surgery.
Since I'm just a novice at statistics, I was wondering if I had in fact got it right - it's disturbing to think that a... (read more)
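Re-checking a reported p-value for a two-group outcome comparison is a reasonable sanity check. The paper's actual counts aren't reproduced in the comment, so the 2x2 table below is purely illustrative; this is a minimal two-sided Fisher exact test using only the standard library:

```python
from math import comb

def hypergeom_pmf(a, row1, row2, col1):
    # probability of seeing 'a' in the top-left cell with all margins fixed
    return comb(row1, a) * comb(row2, col1 - a) / comb(row1 + row2, col1)

def fisher_exact_two_sided(table):
    # two-sided p: sum probabilities of all tables no more likely than the observed one
    (a, b), (c, d) = table
    row1, row2, col1 = a + b, c + d, a + c
    p_obs = hypergeom_pmf(a, row1, row2, col1)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (hypergeom_pmf(x, row1, row2, col1)
                           for x in range(lo, hi + 1))
               if p <= p_obs + 1e-12)

# Illustrative counts only, not the paper's data:
print(fisher_exact_two_sided([[3, 1], [1, 3]]))  # ≈ 0.486
```

If the paper instead used a chi-square or t-test, the check would differ, but the principle is the same: with the raw counts in hand, the reported p-value can be recomputed directly.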
Surely this isn't changing a single historical event but the laws governing our universe.
It is most certainly not an academic look at the concept, but that doesn't mean he didn't play a role in bringing the concept to the public eye. It doesn't have to be a scientific paper to have a real influence on the idea.
Laser fusion test results raise energy hopes: http://news.bbc.co.uk/2/hi/science/nature/8485669.stm
I'll track down the paper from Science on request.
Does anybody have any updates as to the claims made against Alcor, i.e. the Tuna Can incident? I've tried a bunch of searches, but haven't been able to find anything conclusive as to the veracity of the claims.
Does a Turing chatbot deserve recognition as a person?
(Turing chatbot = bot that can pass the Turing test... 50% of the time? 95% of the time? 99% of the time?)
http://en.wikipedia.org/wiki/Chantek
Oh, but surely there has got to be some sort of simple cure for that sickness where you should be sleeping but you just stay up wanting to scream.
My ISP? Or my IP address? I assume the latter.