I heard an interview on NPR with a surgeon who asked other surgeons to use checklists in their operating rooms. Most didn't want to. He convinced some to try them out anyway.
(If you're like me, at this point you need time to get over your shock that surgeons don't use checklists. I mean, it's not like they're doing something serious, like flying a plane or extracting a protein, right?)
After trying them out, 80% said they would like to continue to use checklists. 20% said they still didn't want to use checklists.
So he asked them: if they had surgery, would they want their surgeon to use a checklist? 94% said they would.
After reading the original link, I have this to add: the interesting part is
that the surgeon who ran the study didn't himself expect the checklist to make
any difference, and resisted its use. But after starting to use it himself, he
noticed a massive improvement in his results.
Recent observations on the art of writing fiction:
My main characters in failed/incomplete/unsatisfactory stories are surprisingly reactive, that is, driven by events around them rather than by their own impulses. I think this may be related to the fundamental attribution error: we see ourselves as reacting naturally to the environment, but others as driven by innate impulses. Unfortunately this doesn't work for storytelling at all! It means my viewpoint character ends up as a ping-pong ball in a world of strong, driven other characters. (If you don't see this error in my published fiction, it's because I don't publish unsuccessful stories.)
Closely related to the above is another recent observation: My main character has to be sympathetic, in the sense of having motivations that I can respect enough to write them properly. Even if they're mistaken, I have to be able to respect the reasons for their mistakes. Otherwise my viewpoint automatically shifts to the characters around them, and once again the non-protagonist ends up stronger than the protagonist.
Just as it's necessary to learn to make things worse for your characters, rather than following the natural impulse to
That's not uncommon. Villains act, heroes react.
[http://tvtropes.org/pmwiki/pmwiki.php/Main/VillainsActHeroesReact]
It's already called The Law of Bruce
[http://tvtropes.org/pmwiki/pmwiki.php/Main/TheLawOfBruce], but it's stated a
little differently.
5wedrifid12y
I noticed where I was while on the first page this time. Begone with you!
0Technologos12y
I interpreted Eliezer as saying that that was a cause of the stories' failure or
unsatisfactory nature, attributing this to our desire to feel like decisions
come from within even when driven by external forces.
"Former Christian Apologizes For Being Such A Huge Shit Head All Those Years" sounds like an Onion article, but it isn't. What's impressive is not only the fact that she wrote up this apology publicly, but that she seems to have done it within a few weeks of becoming an atheist after a lifetime of Christianity, and in front of an audience that has since sent her so much hate mail she's stopped reading anything in her inbox that's not clearly marked as being on another topic.
This woman is a model unto the entire human species.
1Unknowns12y
It isn't that impressive to me. As far as I can see, what it shows is that she
has been torturing herself for a long time, probably many years, over her issues
with Christianity. She's just expressing her anger with the suffering it caused
her.
1RobinZ12y
Thank you for posting that. It's an inspiration.
0Paul Crowley12y
I wish it were possible to mail her and tell her she doesn't have to apologise!
Inspired by reading this blog for quite some time, I started reading E.T. Jaynes' Probability Theory. I've read most of the book by now, and I have incredibly mixed feelings about it.
On one hand, the development of probability calculus starting from the needs of plausible inference seems very appealing as far as the needs of statistics, applied science and inferential reasoning in general are concerned. The Bayesian viewpoint of (applied) probability is developed with such elegance and clarity that alternative interpretations can hardly be considered appealing next to it.
On the other hand, the book is very painful reading for the pure mathematician. The repeated pontification about how wrong mathematicians are for desiring rigor and generality is strange, distracting and useless. What could possibly be wrong about the desire to make the steps and assumptions of deductive reasoning as clear and explicit as possible? Contrary to what Jaynes says or at least very strongly implies (in Appendix B and elsewhere), clarity and explicitness of mathematical arguments are not opposites or mutually contradictory; in my experience, they are complementary.
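(For concreteness, the development I'm praising is the one Jaynes builds on Cox's theorem: the ordinary rules of probability are derived from qualitative desiderata for plausible reasoning rather than posited as axioms about frequencies. Writing the end results from memory, not in the book's exact notation:

$$p(AB \mid C) = p(A \mid BC)\,p(B \mid C) \quad \text{(product rule)}$$
$$p(A \mid C) + p(\bar{A} \mid C) = 1 \quad \text{(sum rule)}$$
$$p(A \mid BC) = \frac{p(A \mid C)\,p(B \mid AC)}{p(B \mid C)} \quad \text{(Bayes' theorem)}$$

Any calculus of plausibility satisfying the desiderata must reduce to these rules.)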
Even worse, Jaynes makes several strong ... (read more)
Amen. Amen-issimo.
The solution, of course, is for the Bayesian view to become widespread enough
that it doesn't end up identified particularly with Jaynes. The parts of Jaynes
that are correct -- the important parts -- should be said by many other people
in many other places, so that Jaynes can eventually be regarded as a brilliant
eccentric who just by historical accident happened to be among the first to say
these things.
There's no reason that David Hilbert shouldn't have been a Bayesian. None.
After pondering the adefinitemaybe case for a bit, I can't shake the feeling that we really screwed this one up in a systematic way, that Less Wrong's structure might be turning potential contributors off (or turning them into trolls). I have a few ideas for fixes, and I'll post them as replies to this comment.
Essentially, what it looks like to me is that adefmay checked out a few recent articles, was intrigued, and posted something they thought clever and provocative (as well as true). Now, there were two problems with adefmay's comment: first, they had an idea of the meaning of "evidence" that rules out almost everything short of a mathematical proof; and second, the comment looked like something that a troll could have written in bad faith.
But what happened next is crucial, it seems to me. A bunch of us downvoted the comment or (including me) wrote replies that look pretty dismissive and brusque. Thus adefmay immediately felt attacked from all sides, with nobody forming a substantive and calm reply (at best, we sent links to pages whose relevance was clear to us but not to adefmay). Is it any wonder that they weren't willing to reconsider their definition of evi... (read more)
Partial Fix #2:
I can't help but think that some people might have hesitated to downvote
adefmay's first comment, or might have replied at greater length with a more
positive tone, had it been obvious that this was in fact adefmay's first post.
(I did realize this, but replied in a comically insulting fashion anyhow
[http://lesswrong.com/lw/1la/new_years_predictions_thread/1e0i]. Mea culpa.)
It might be helpful if there were some visible sign that, for instance, this was
among the first 20 comments from an account.
5Jack12y
When it became clear that adefmay couldn't roll with the punches, there were
quite a few sensitive comments with good advice and explanations for why he/she
had been sent links. His/her response to those was basically to get rude,
indignant and come up with as many counter-arguments as possible while not once
trying to understand someone else's position or consider the possibility he/she
was mistaken about something.
I don't know if adefmay was intentionally trolling but he/she was certainly
deficient in rationalist virtue.
That said, I think we need to handle newcomers better anyway and an FAQ section
is really important. I'd help with it.
5orthonormal12y
It seems plausible that things could have turned out much differently, but that
the initial response did irreparable damage to the conversation. Perhaps putting
adefmay on the defensive so soon made it implicitly about status and not losing
face. Or perhaps the exchange fell into a pattern where acting the troll started
to feel too good
[http://scienceblogs.com/gnxp/2006/07/stupid_feels_might_good_am_i_i.php].
Overall, I didn't find adefmay's tone and obstinacy at the start to be worse
than some comments (elsewhere) by people who I consider valuable members of Less
Wrong.
0RichardKennaway12y
There have been several newcomers in the last few days -- maybe the mention in
the Guardian drew them here.
Besides telling them what we're all about, a standing invitation for newcomers
to introduce themselves might be useful, but there isn't a place for them to do
so. How about another standard monthly thread?
We don't have personal profile pages here, do we?
4Jack12y
There is this thread [http://lesswrong.com/lw/b9/welcome_to_less_wrong/]. But it
needs to be linked to from some kind of FAQ page because right now it is too
hidden from new users to be helpful.
1MatthewB12y
I just noticed that I showed up around the same time as the Guardian Mention as
well... However, I have been lurking (without registering) for two years now. I
met Eliezer Yudkowsky at the First Singularity Summit, and became aware of OB as
a result, and then became aware of this blog shortly after he split from OB.
However, I would like to say that a newcomers section in a FAQ or Wiki would
have been most welcome.
I do have a little bit of a clue what I am doing here as well, as I have spent a
lot of time on forums such as Richard Dawkins' and Sam Harris' and decided that
I wanted to find some people who were a) more into AI and rational reasoning and
b) closer to home.
I would second the suggestion for an introductory thread, and some better
guidelines for posting (what is likely to get downvoted, what is likely to get
upvoted... although, from my vote count, I seem to have some clue of what works
and what doesn't. Still, I could use a few more definitive guidelines than just
not making stupid posts - or trollish posts).
4Eliezer Yudkowsky12y
I'd have to say that the trollness seems obvious as all hell to me. Also,
consider the prior probabilities.
1orthonormal12y
I may be giving adefmay the benefit of the doubt due to an overactive
conscience; I go back and forth on this particular case. Still, it seems to me
that being new here can involve a lot of early perceived hostility (people
who've joined the community more recently, feel free to support or correct this
claim), that we may well be losing LW contributors for this reason, and that
some relatively easy fixes might do a lot of good.
1Nick_Tarleton12y
Me too. Obvious from his second comment
[http://lesswrong.com/lw/1la/new_years_predictions_thread/1e04] on, even. (Or,
if not a troll, not going to become a valued contributor without some growing
up.)
0MatthewB12y
Seeing as I missed that whole thing, and I am interested in how to best define
evidence (I need such a definition for other forums, probably more than I would
need it here)... Could someone post those same links about the definition (or, I
see the word "Meaning" used... Why is that???) of Evidence?
Never mind... It's in the Wiki...
0Nick_Tarleton12y
from the wiki [http://wiki.lesswrong.com/wiki/Evidence]
2orthonormal12y
Partial Fix #1:
We put together a special forum (subset of threads and posts) for a number of
old argument topics, and make sure that it is readily accessible from the main
page, or especially salient for new people. We have a norm there to (as much as
possible) write out our points from scratch instead of using shorthand and links
as we do in discussions between LW veterans.
Benefits:
* It's much less of a status threat to be told that one's comment belongs in
another thread than to have it dismissed as happened to adefmay.
* Most of the trouble seems to happen when new people jump into a current
thread and derail a conversation between LW veterans, who react brusquely as
above. Separating the newest/most advanced conversations from the old
objections should make everyone happier.
* I find that the people who have been on LW for a few months have just the
right kind of zeal for these newfound ideas
[http://lesswrong.com/lw/1jf/manwithahammer_syndrome/] that makes them eager
and able to defend them against the newest people, who find them absurd. I
think this would be a good thing for both groups of people, and I expect it
to happen naturally should such a place be created.
So if we made some collection of "FAQ threads" and made a big, obvious, enticing
link to them on either the front page or the account creation page (that is, we
give them a list of counterintuitive things we believe or interesting questions
we've tackled, in the hopes they head there first), we might avoid more of these
unfortunate calamities in the future.
I'm not sure there needs to be more than one FAQ thread. But let's start by generating a list of frequently asked questions and coming up with answers that have consensus support.
* Why is almost everyone here an atheist?
* What are the "points" on each comment?
* Aren't knowledge and truth subjective or undefinable?
* Can you ever really prove anything?
* What's all this talk about probabilities, and what is a Bayesian?
* Why do you all agree on so much? Am I joining a cult?
* What are the moderation rules? What kind of comments will result in downvotes, and what kind of comments could result in a ban?
* Who are you people? (Demographics, and a statement to the effect that demographics don't matter here.)
More FAQ topics:
* Why the MWI?
* Why do you all think cryonics will probably work?
* Why a computational theory of mind?
* What about free will and consciousness?
* What do you mean by "morality", anyway?
* Wait a sec. Torture over dust specks?!?
Basically, I think we need to do more for newcomers than just tell them to read
a sequence; I mean, I think each of us had to actually argue out points we
thought were obvious before we moved forward on these issues. Having a
continuous open thread on such topics (including, of course, links to the
relevant posts or Wiki entry) would be much better, IMO.
A monthly "Old Topics" thread, or a collection of them on various topics, would
be great, although there ought to be a really obvious link directing people to
it.
2Jack12y
While I'm not saying there shouldn't be a place to discuss those topics, I think
the first thing a newcomer sees should focus on epistemology, rationality and
community norms of rationality.
1) This is still presumably what this site is about.
2) Once you get the right attitude and the right approach, the other subjects
don't require patient explanation. A place to discuss those things is fine, but
if the issue comes up elsewhere and a veteran does respond brusquely to a
newcomer, they can probably deal with it if they have internalized Less Wrong
norms, traditional rationality and some of the Bayesian-type stuff we do here.
3) There seems to be near-universal agreement on the rationality stuff, but I'm
not sure that is the case with the other issues. I know I agree with the typical
LW position on the first four of your questions, but I disagree on the last two.
I suspect most people here don't think cryonics will probably work (just that it
working is likely enough to justify the cost). There are probably some
determinists mixed in with a lot of compatibilists, and there are definitely
dissenters on theory-of-mind stuff (I'm thinking of Michael Porter, who
otherwise appears to be a totally reasonable Less Wrong member). Check the
survey results [http://lesswrong.com/lw/fk/survey_results/#more] for more
evidence of dissent. That there is still disagreement on these issues is reason
to keep discussing them. But I don't know if we should present the majority
views on all these issues as resolved to new users.
But I might just be privileging my own minority views. If the community wants
these included I won't object.
2orthonormal12y
Good points, but I still think that these questions belong in some kind of "Old
Topics" thread, because there's already been a lot said about them, and because
most new people will want to argue them anyway. Even if they're not considered
to be settled or to be conditions that define LW, I'd prefer if there's a place
for new people to start discussing them other than 2-year-old threads or
tangential references in new posts.
In a fairly recent little-noticed comment, I let slip that I differ from many folks here in what some may regard as an important way: I was not raised on science fiction.
I'll be more specific here: I think I've seen one of the Star Wars films (the one about the kid who apparently grows up to become the villain in the other films). I have enough cursory familiarity with the Star Trek franchise to be able to use phrases like "Spock bias" and make the occasional reference to the Starship Enterprise (except I later found out that the reference in that post was wrong, since the Enterprise is actually supposed to travel faster than light -- oops), but little more. I recall having enjoyed the "Tripod" series, and maybe one or two other, similar books, when they were read aloud to me in elementary school. And of course I like Yudkowsky's parables, including "Three Worlds Collide", as much as the next LW reader.
But that's about the extent of my personal acquaintance with the genre.
Now, people keep telling me that I should read more science fiction; in fact, they're often quite surprised that I haven't. So maybe, while we're doing these... (read more)
Greg Egan: Permutation City, Diaspora, Incandescence.
Vernor Vinge: True Names, Rainbows End.
Charlie Stross: Accelerando.
Scott Bakker: Prince of Nothing series.
3jscn12y
Voted up mainly for the Greg Egan recommendations.
0djcb12y
I read Vinge's Rainbows End, and I found the futurism interesting (it seems
Google is starting to work
[http://scitedaily.com/googles-book-scanning-technology-revealed/] on the book
scanning stuff), but I couldn't really get into the story.
(edit: fixed typo, thanks)
0RobinZ12y
Rainbows End, but I agree.
7Kevin12y
I am a big fan of Isaac Asimov. Start with his best short story, which I submit
as the best sci-fi short story of all time.
http://www.multivax.com/last_question.html
7Bindbreaker12y
I prefer this one [http://www.roma1.infn.it/~anzel/answer.html], and yes, it
really is that short.
0Kevin12y
Thanks. Brown wrote that in 1954, two years before Asimov wrote The Last
Question. Do you think Asimov read Brown's story?
0Technologos12y
Asimov thought it was his best story, too (or at least his favorite). Can't say
I disagree.
0komponisto12y
Ah yes, CronoDAS recommended that, too.
[http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/1cv0] (Sorry, I
should have acknowledged!)
3Jack12y
Oh! More Asimov, "I, Robot". Here the guy was talking about Friendly AI in 1942.
6Zack_M_Davis12y
Not really; they're not decision theory stories. The Three Laws are adversarial
[http://intelligence.org/upload/CFAI//adversarial.html] injunctions that hide
huge amounts of complexity [http://lesswrong.com/lw/tj/dreams_of_friendliness/]
under short English words like harm. It wouldn't actually work. It didn't even
work [http://en.wikipedia.org/wiki/%E2%80%94That_Thou_art_Mindful_of_Him] in the
story.
8Jack12y
The whole point of the stories is that it doesn't work in the end; it is a case
study in how not to do it, how it can go wrong. Obviously he didn't solve the
problem. The first digital computer had just been constructed; what would you
expect?
0Vladimir_Nesov12y
The FAI problem has nothing to do with digital computers. It's a math problem.
You'd only need digital computers after you've solved the problem, to implement
the solution.
0Zack_M_Davis12y
Not that they weren't good stories, and not that I expect fiction authors to do
their own basic research, but I wouldn't say they're about the Friendly AI
problem.
0JDM9y
It is most certainly not an academic look at the concept, but that doesn't mean
he didn't play a role in bringing the concept to the public eye. It doesn't have
to be a scientific paper to have a real influence on the idea.
1Kevin12y
Along those lines, I'd recommend The Metamorphosis of Prime Intellect. It's a
short novel about an AI that gains control of all matter and energy in the
universe while being constrained by Asimov's Three Laws.
It's available free online (though still under copyright):
http://www.kuro5hin.org/prime-intellect/
2Kevin12y
If you want to read a full-length Asimov book, my personal recommendation is The
End of Eternity. It has an unusual take on time travel and functions well as a
stand-alone book. It has just been reprinted after being out of print for too
long.
Foundation is his most well-known novel, and it is also very much worth reading.
I can't find someone violating the copyright online with a quick Google, but
Asimov's short story "The Last Answer" is also a good one with a different take
on religion than "The Last Question".
6Paul Crowley12y
My first recommendation here is always Iain M Banks, Player of Games.
0MichaelGR12y
Personally, I'd recommend starting with Consider Phlebas, then Use of Weapons,
then Player of Games.
0AllanCrossman12y
Why that Culture novel, precisely? I don't recall it as one of the better ones.
Admittedly, I'm unusual in that my favourite Culture story is The State of the
Art. General Pinochet Chili Con Carne! Richard Nixon Burgers! What's not to
like?
1Paul Crowley12y
It's one of my favourites, and I also think it's a good one to start with. But
so is The State of the Art. My favourite by him is Feersum Endjinn.
6Alicorn12y
If you'd like some TV recommendations as well, here are some things that you can
find on Hulu:
Firefly [http://www.hulu.com/firefly]. It's not all available at the same time,
but they rotate the episodes once a week; in a while you'll be able to start at
the beginning. If you haven't already seen the movie, put it off until you've
watched the whole series.
Babylon 5 [http://www.hulu.com/babylon-5]. First two seasons are all there. It
takes a few episodes to hit its stride.
If you're willing to search a little farther afield, Farscape is good, and of
the Star Treks, DS9 is my favorite (many people prefer TNG, though, and this
seems for some reason to be correlated with gender).
2ShardPhoenix12y
Maybe that's because DS9 is about a bunch of people living in a big house, while
TNG is about a bunch of people sailing around in a big boat ;). I prefer DS9
myself though and I'm a guy.
1randallsquared12y
With respect to B5, I'd say "a few episodes" is the entire first season and a
quarter of the second. I don't regret having spent the time to watch that, but
I'm not sure I would have bothered had I not had friends raving about it,
knowing in advance what I know now. :)
0[anonymous]12y
Does Jericho count as sci-fi? Either way, I highly recommend it.
Who will be the first person to recommend Lexx? :-)
0MrHen12y
You can probably find someone who has the Firefly discs, too.
-1MatthewB12y
I was not at all impressed with Firefly. Its idioms for the more primitive
elements were too primitive (dresses from the 1800s???). Its premise was
awesome, but due to the mainstream audience, the writers were very constrained.
Had it been done as an anime, I imagine it would have looked far more like
Trigun.
Now, Farscape. This was a re-telling of the Buck-Rogers story, and it was done
Freaking Well! They did not focus overly much on the technologies, which were
mostly post-Singularity (as were many of the alien species), but due to the
collapse of the civilization that supported that portion of the Galaxy, the
Peacekeepers had become a force for malevolence and dystopic vision rather than
the force for good which they began as.
I have never been able to enjoy Star Trek in any of its incarnations past TOS.
The lack of obvious applications of much of the technology, and the strict
adherence to a dualist New-Age philosophy of consciousness, really kept me away
from the show. They occasionally had some excellent episodes, but overall I
found that their lack of general AI, given the supposed power of many of their
computers, and their lack of nanotech-based technologies (given the absolute
necessity of nanotech for some technologies) was just appalling. The medical
technologies were also rather wonky. If they can regrow bone, and they can
regrow nerves, and they can regrow skin, and they can regrow muscles... why
can't they regrow entire limbs?
Also, the silly rationale behind why there were not more technologies like
Geordi's eyes really made no sense.
IMO, the absolute Best Sci-Fi TV series in recent years has been the new BSG,
and the upcoming Caprica, which will tackle a civilization as it approaches its
own singularity and then fails to make it through the event horizon. Not due to
having created unfriendly AI, but by having their AI corrupted by a psychopathic
religious girl who manages to inhabit that AI. It should be excellent.
2Jack12y
Was this ever said or shown in an episode? It seems like a cop-out to just
assume magical technology is post-Singularity without it being in the back
story.
Wasn't there a consciousness-swapping episode of Farscape? Also, what about the
"Data is basically a person" trope in TNG? I agree that Star Trek technology
doesn't make a lot of sense, though.
Given your high standards, shouldn't the fact that the Cylons were never much
more intelligent than humans bother you?
-1MatthewB12y
A society, or group of societies, needn't have the concept of the singularity
in order for one to have occurred. Farscape had some very obvious technologies
(mostly medical) which were very highly advanced nanotech, and there were
elements of AI. Most of the theme of the show, though, was that they were living
in a fallen society, which had once passed through a Singularity (at least parts
of the interstellar civilization) yet had fallen back below it, with these
magical items being carefully guarded and with little understanding of how they
worked. That did bother me a little, but since the story was driven by the plot
and some of the characters, and they rarely sank into techno-babble, it was
easier to overlook.
There was a consciousness-swapping episode of Farscape. It was not one of my
favorite episodes of the show. As for Star Trek and Data... that was something
that I hated. If they had the type of imaging technology that they claimed to
have in their medical and scanning technologies, creating more Datas should have
been the easiest thing in the world. And Data should have known that there was
no more to him than the patterns in his "Positronic Matrix", and that if he was
taken apart, all that would be necessary was a back-up of this matrix... Of
course, just by fiat they claimed that this was impossible.
And... as to BSG... It did bother me that the Cylons (the human ones) were never
much more intelligent than humans, until it was explained why (the episode where
Cavil has his screaming fit at Ellen Tigh: "I am just a machine... I want to see
gamma-rays, smell x-rays, hear radio waves, touch the solar wind and taste dark
matter. Yet, you gave me this arthritic old body and these failing eyes to look
at the wonders of the universe").
It was explained that Cavil, in his jealousy of the Final Five, who had arrived
from the original Earth in the final months of the Cylon War with the colonies,
had managed to trap the 5 (long after t
5NancyLebovitz12y
Vinge's Marooned in Realtime and A Fire Upon the Deep. The former introduced
the idea of the Singularity; the latter gets a lot of fun playing near the edge
of it.
Olaf Stapledon: Last and First Men, Star Maker.
Poul Anderson: Brain Wave. What happens if there's a drastic, sudden
intelligence increase?
After you've read some science fiction, if you let us know what you've liked, I
bet you'll get some more fine-tuned recommendations.
4Wei_Dai12y
I second A Fire Upon the Deep (and anything by Vinge, but A Fire Upon the Deep
is my favorite). BTW, it contains what is in retrospect a clear reference to the
FAI problem. See
http://books.google.com/books?id=UGAKB3r0sZQC&lpg=PA400&ots=VBrKocfTHM&dq=%22fast%20burn%20transcendence%22&pg=PA400
If anyone read it for the first time recently, I'm curious what you think of the
Usenet [http://en.wikipedia.org/wiki/Usenet] references. Those were my favorite
parts of the book when I first read it.
1zero_call12y
I thought the Usenet references were really cool and really clever, both from a
reader's standpoint and from an author's standpoint. For example, it doesn't
take a lot of digression to explain, since most readers are already familiar
with similar stuff (e.g., Usenet). It also just seems really plausible as a form
of universe-scale "telegram" communication, so I think it works great for the
story. Implausibility just ruins science fiction for me; it destroys that
crucial suspension of disbelief.
0ChristianKl12y
If you had tried to explain to people a hundred years ago that we would have
interlinked computers and that a lot of people would use them to view images of
naked women, I think most people would have found that hypothesis very
implausible.
Any accurate description of the world that will exist 100 years in the future is
bound to contain lots of implausible claims.
2zero_call12y
If you're suggesting that all science fiction is implausible though, then that's
not true. There's a difference between coming up with random, futuristic ideas,
and coming up with random, futuristic ideas that have justification for working.
4NancyLebovitz12y
It depends on what you're looking for. Books you might enjoy? If so, we need to
know more about your tastes. Books we've liked? Books which have influenced us?
An overview of the field?
In any case, some I've liked -- Heinlein's Rocket Ship Galileo, which is quite a
nice intro to rationality and also has Nazis in abandoned alien tunnels on the
Moon, and Egan's Diaspora, which is an impressive depiction of people living as
computer programs.
Oh, and Vinge's A Fire Upon the Deep, which is an effort to sneak up on writing
about the Singularity (Vinge invented the idea of the Singularity), and
Kirstein's The Steerswoman (first of a series), which has the idea of a guild of
people whose job it is to answer questions -- and if you don't answer one of
their questions, you don't get to ask them anything ever again.
3Dreaded_Anomaly11y
I second the recommendations of 1984 and Player of Games (the whole Culture
series is good, but that one especially held my interest).
Recommendations I didn't see when skimming the thread:
* The Hitchhiker's Guide to the Galaxy series by Douglas Adams: A truly
enjoyable classic sci-fi series, spanning the length of the galaxy and the
course of human history.
* Timescape by Gregory Benford: Very realistic and well-written story about
sending information back in time. The author is an astrophysicist, and knows
his stuff.
* The Andromeda Strain, Sphere, Timeline, Prey, and Next by Michael Crichton:
These are his best sci-fi works, aimed at realism and dealing with the
consequences of new technology or discovery.
* Replay by Ken Grimwood: A man is given the chance to relive his life. A
stirring tale with several twists.
* The Commonwealth Saga and The Void Trilogy by Peter F. Hamilton: Superb space
opera, in which humanity has colonized the stars via traversable wormholes,
and gained immortality via rejuvenation technology. The trilogy takes place a
thousand years after the saga, but with several of the same characters.
* The Talents series and the Tower and Hive series by Anne McCaffrey: These
novels deal with the emergence and organization of humans with "psychic"
abilities (telekinesis, telepathy, teleportation, and so forth). The first
series takes place roughly in the present day, the second far in the future
on multiple planets.
* Priscilla Hutchins series and Alex Benedict series by Jack McDevitt: Two
series, unrelated, both examining how humans might explore the galaxy and
what they might find (many relics of ancient civilizations, and a few alien
races still living). The former takes place in the relatively near future,
while the latter takes place millennia in the future.
* Hyperion Cantos by Dan Simmons: An epic space opera dealing heavily with
singularity-related concepts such as AI and hum
3JoshuaZ12y
I wouldn't recommend Scalzi. Much of Scalzi is military scifi with little
realism and isn't a great introduction to scifi. I'd recommend Charlie Stross:
"The Atrocity Archives", "Singularity Sky" and "Halting State" are all
excellent. The third is very weird in that it is written in the second person,
but is lots of fun. Other good authors to start with are Pournelle and Niven
(Ringworld, The Mote in God's Eye, and King David's Spaceship are all
excellent).
4Risto_Saarelma12y
Am I somehow unusual for being seriously weirded out by the cultural undertones
in Scalzi's Old Man's War books? I keep seeing people in generally enlightened
forums gushing over his stuff, but the book read pretty nastily to me, with its
mix of a very juvenile approach to science, psychology and pretty much
everything else it took on, and its glorification of genocidal war without
alternatives. It brought up too many associations with telling kids who don't
know better about the utter necessity of genocidal war in simple and exciting
terms in real-world history, and seemed too little aware of this itself to be
enjoyable.
Maybe it's a Heinlein thing. Heinlein is pretty obscure here in Europe, but
seems to be woven into the nostalgia trigger gene in the American SF fan DNA,
and I guess Scalzi was going for something of a Heinlein pastiche.
4NancyLebovitz12y
It's nice to know that I'm not the only person who hated Old Man's War, though
our reasons might be different.
It's been a while since I've read it, but I think the character who came out in
favor of an infrastructure attack (was that the genocidal war?) turned out to be
wrong.
What I didn't like about the book was largely that it was science fiction lite--
the world building was weak and vague, and the viewpoint character was way too
trusting. I've been told that more is explained in later books, but I had no
desire to read them.
There's a profoundly anti-imperialist/anti-colonialist theme in Heinlein, but
most Heinlein fans don't seem to pick up on it.
5Risto_Saarelma12y
The most glaring SF-lite problem for me was that in both Old Man's War and The
Ghost Brigades, the protagonist was basically written as a generic
twenty-something Competent Man character, despite both books deliberately
setting the protagonist up as very unusual compared to the archetype. In Old
Man's War, the protagonist is a 70-year-old retiree in a retooled body, and in
The Ghost Brigades something else entirely. Both of these instantly point to
what I thought would have been the most interesting thing about the books: how
does someone who's coming from a very different place psychologically approach
stuff that's normally tackled by people in their twenties? And then pretty much
nothing at all is done with this angle. Weird.
2Risto_Saarelma12y
Come to think of it, I had a similar problem with James P. Hogan's Voyage from
Yesteryear, which was about a colony world of in vitro grown humans raised by
semi-intelligent robots without adult parents. I thought this would lead to some
seriously weird and interesting social psychology with the colonists, when all
sorts of difficult to codify cultural layers are lost in favor of subhuman
machines as parental authorities and things to aspire to.
Turned out it was just a setup for lecturing on how anarchism, plus shooting
people you don't like, would lead to the perfect society if it weren't for those
meddling history-perpetuating traditionalists, with the colonists of course
being exemplars of psychological normalcy and wholesomeness, as required by the
lesson. And then I stopped reading the book.
1NancyLebovitz12y
There was so much, so very much sf-lite about that book. Real military life is
full of detail and jargon. OMW had something like two or three kinds of weapons.
There was the big sex scene near the beginning of the book, and then the
characters pretty much forgot about sex.
It was intentionally written to be an intro to sf for people who don't usually
read the stuff. Fortunately, even though the book was quite popular, that
approach to writing science fiction hasn't caught on.
0RobinZ12y
Nor I - I've read Agent to the Stars [http://www.scalzi.com/agent/], which was
just as bad, so I have no expectation of improvement.
0JoshuaZ12y
This isn't a Scalzi problem so much as a general problem with the military end
of SF. See for example, Starship Troopers and Ender's Game. Ender's Game makes
it more complicated, but there's still some definite sympathy with genocide
(speciescide?).
2NancyLebovitz12y
I wonder how important what the characters say is compared to what they do-- and
the importance may be in what the readers remember.
Card has an actual genocide.
In ST, Heinlein speaks in favor of crude "roll over the other guys so that your
genes can survive" expansionism, but he portrays a society where racial/ethnic
background doesn't matter for humans, and an ongoing war which won't necessarily
end with the Bugs or the humans being wiped out.
3jscn12y
* Solaris by Stanislaw Lem is probably one of my all time favourites.
* Anathem by Neal Stephenson is very good.
0djcb12y
I really like Anathem (I'm about halfway through it); it goes into many of the
themes popular around here (rationalism, MWI), except for the singularity stuff.
3Jack12y
LeGuin- The Dispossessed
William Gibson- Neuromancer
George Orwell- 1984
Walter Miller - A Canticle for Leibowitz
Philip K. Dick- The Man in the High Castle
That actually might be my top five books of all time.
3Jawaka12y
I am a huge fan of Philip K. Dick. I don't usually read much fiction or even
science fiction, but PKD has always fascinated me. Stanislaw Lem is also great.
3RichardKennaway12y
Bearing in mind that you're asking this on LessWrong, these come to mind:
Greg Egan. Everything he's written, but start with his short story collections,
"Axiomatic" and "Luminous". Uploading, strong materialism, quantum mechanics,
immortality through technology, and the implications of these for the concept of
personal identity. Some of his short stories are online
[http://gregegan.customer.netspace.net.au/].
Charles Stross. Most of his writing is set in a near-future, near-Singularity
world.
On related themes are "The Metamorphosis of Prime Intellect"
[http://en.wikipedia.org/wiki/The_Metamorphosis_of_Prime_Intellect], and John C.
Wright's Golden Age [http://en.wikipedia.org/wiki/The_Golden_Age_(novel_series)]
trilogy.
There are many more SF novels I think everyone should read, but that would be
digressing into my personal tastes.
Some people here have recommended R. Scott Bakker's trilogy that begins with
"The Darkness That Comes Before", as presenting a picture of a superhuman
rationalist, although having ploughed through the first book I'm not all that
moved to follow up with the rest. I found the world-building rather derivative,
and the rationalist doesn't play an active role. Can anyone sell me on reading
volume 2?
2Zack_M_Davis12y
Strongly seconding Egan. I'd start with "Singleton
[http://www.gregegan.net/MISC/SINGLETON/Singleton.html]" and "Oracle
[http://gregegan.customer.netspace.net.au/MISC/ORACLE/Oracle.html]."
Also of note, Ted Chiang.
0gwern10y
I couldn't unless 'pretty good fantasy version of the Crusades' sounds like your
cup of tea.
2daos12y
many good recommendations so far, but unbelievably nobody has yet mentioned Iain
M. Banks' series of 'Culture' novels, based on a humanoid society (the
'Culture') run by incredibly powerful AIs known as 'Minds'.
highly engaging books which deal with much of what a possible highly
technologically advanced post-singularity society might be like in terms of
morality, politics, philosophy etc. they are far-fetched and a lot of fun.
here's the list to date:
here's the list to date:
* Consider Phlebas (1987)
* The Player of Games (1988)
* Use of Weapons (1990)
* Excession (1996)
* Inversions (1998)
* Look to Windward (2000)
* Matter (2008)
they are not consecutive, so reading order isn't that important, though it is
nice to follow their evolution from the perspective of the writing.
0Paul Crowley12y
I mentioned "Player of Games" above.
0daos12y
duly noted. i missed it before amongst all the BSG and ST discussions.. good
choice btw - i've always considered it to be one of his best.
2AdeleneDawner12y
I don't know whether to be surprised that no one has recommended the Ender's
Game series or not. They're not terribly realistic in the tech (especially
toward the end of the series), and don't address the idea of a technological
singularity, but they're a good read anyway.
Oh - I'm not sure if this is what you were thinking of by sci-fi or not, and it
gets a bit new-agey, but Spider Robinson's "Telempath" is a personal favorite.
It's set in a near-future (at the time of writing) earth after a virus was
released that magnified everyone's sense of smell to the point where cities, and
most modern methods of producing things, became intolerable. (Does anyone else
have post-apocalyptic themed favorites? I have a fondness for the genre, sci-fi
or not.)
4Cyan12y
I had a high opinion of Ender's Game once (less so for its sequels). Then I read
this [http://plover.net/~bonds/ender.html].
2Blueberry12y
A poorly thought out, insult-filled rant comparing scenes in Ender's Game to
"cumshots" changed your view of a classic, award-winning science fiction novel?
Please reconsider.
5Cyan12y
If you strip out the invective and the appeal to emotion embodied in the
metaphorical comparison to porn, there yet remains valid criticism of the
structure and implied moral standards of the book.
2xamdam12y
I did not believe this was possible, but this analysis has turned EG into ashes
retroactively. Still, it gets lots of kids into scifi, so there is some value.
A really great kids' scifi book is "Have Space Suit, Will Travel" by Heinlein.
5NancyLebovitz12y
I've heard that effect called "the suck fairy". The suck fairy sneaks into your
life and replaces books you used to love with vaguely similar books that suck.
2xamdam12y
Great name, but unfortunately it's the same book; the analysis made it
incompatible with self-respect.
2NancyLebovitz12y
The suck fairy always brings something that looks exactly like the same book,
but somehow....
I'm not sure if I'll ever be able to enjoy Macroscope again. Anthony was really
interesting about an information gift economy, but I suspect that "vaguely
creepy about women" is going to turn into something much worse.
0Jack12y
I recommended "A Canticle for Leibowitz" and "Jericho" earlier. Also, Ender's
Game and Speaker for the Dead would have been the next two books on my list,
though I read them when I was younger and don't know if they would be appealing
to adults. How do people think Card (a devout Mormon) does at writing
atheist/agnostic characters (nearly all the main characters in the series)?
0AdeleneDawner12y
I haven't really thought about his portrayal of atheists, but he did a good
enough job of writing a convincing, non-demonized gay man in Songbird that I was
speechless when I discovered that he firmly believes that such people are going
to hell.
5Alicorn12y
He believes that they are sinning. Mormons have a really complicated dolled-up
afterlife, so if he's sticking to doctrine, he probably doesn't actually expect
gays as a group to all go to Hell.
Edit: He did a gay guy in the Memory of Earth series too (the plot of which, I
later found, is a blatant ripoff of the Book of Mormon). Like the gay guy in
Songbird, this one ends up with a woman, although less tragically.
3Jack12y
I have to say, it is an interesting coincidence that he has written two gay
characters that end up with women. Especially since he is absolutely terrible at
writing (heterosexual) sex scenes/sexuality. I mean, really, I've never read a
professional writer who was worse at this.
3SilasBarta12y
Is there any significance to how OSC avoids using the standard terms for gay,
but instead uses a made-up in-world term for it that you have to infer means
"gay"? (At least in the Memory of Earth series; I haven't read the other.)
2bogus12y
wtf? that's the kwyjiboest thing I've ever seen. omg lol
2Alicorn12y
I don't think it's a coincidence at all. The way I understand it is that under
Mormon doctrine, the act, not the temptation towards the act, is what's a sin:
so a gay character who marries a woman and (regardless of whether he actually
has sex with her or not) refrains from extramarital sexual activity is just fine
and dandy. The Songbird character didn't get married; the Memory of Earth one
did. But the former, while not "demonized", was presented as a fairly weak
person; the latter was supposed to be a generally decent guy.
0RolfAndreassen12y
Where does OSC even attempt to do so? He generally just leaves the actual sex
scenes out of the books, to the best of my recollection. Would that Turtledove
had shown similar restraint.
0Jack12y
It has been a while since I read any Card, but Folk of the Fringe included a
really bizarre story about sex between a young white boy and a middle-aged
Native American. The Ender's Game sequels almost all include ostensibly sexual
relationships, and he tries to describe aspects of that and moments when,
presumably, the characters would be experiencing sexual attraction.
0RolfAndreassen12y
Ok, I was thinking more in terms of straight-out sex scenes, as in Turtledove,
where the tab goes in the slot. I must say I didn't find OSC's writing on sexual
attraction particularly awkward; what about it did you dislike so?
2Jack12y
Sorry, really late reply. Was just looking over this thread and happened to see
this.
Card's writing that involves sexual attraction just comes off as asexual. I
never got the sense that the characters were actually sexually attracted to each
other; affectionate maybe, but not aroused. It's like the way sexuality looks on
TV, not the way people actually experience it. I recall reading Card himself
saying, in an interview or something, that he didn't think he was very good at
writing about sex or sexual attraction. It might have been in the Folk of the
Fringe book somewhere, but I can't find it in my library.
0RolfAndreassen12y
Ok, I guess I agree with that. He either cannot or will not write such that you
feel the emotions associated with sexual attraction; it is an area where he
tells rather than shows. Perhaps this is a deliberate choice rooted in his
Mormon religion; he's also rather down on porn. Either way, though, it seems to
me that his stories rarely suffer from this. To take an example, 'Empire' is way
worse than the Ender sequels, but it's not because of the sex; indeed it has
effectively zero sex in it, even of the kind you describe. Rather it suffers
from being nearly-explicit propaganda.
2AdeleneDawner12y
I went back and checked my source (wikipedia); you're right, I'd mis-remembered.
2sketerpot12y
Robert Heinlein wrote some really good stuff (before becoming increasingly
erratic in his later years). Very entertaining and fun. Here are some that I
would recommend for starting out with:
Tunnel in the Sky. The opposite of Lord of the Flies. Some people are stuck on a
wild planet by accident, and instead of having civilization collapse, they start
out disorganized and form a civilization because it's a good idea. After reading
this, I no longer have any patience for people who claim that our natural state
is barbarism.
Citizen of the Galaxy. I can't really summarize this one, but it's got some good
characters in it.
Between Planets. Our protagonist finds himself in the middle of a revolution all
of a sudden. This was written before we knew that Venus was not habitable.
I was raised on this stuff. Also, I'd like to recommend Startide Rising, by
David Brin, and its sequel The Uplift War. They're technically part of a
trilogy, but reading the first book (Sundiver) is completely unnecessary. It's
not really light reading, but it's entertaining and interesting.
2NancyLebovitz12y
Note about Tunnel in the Sky: they didn't just form a society (not a
civilization) because they thought it was a good idea to do so -- they'd had
training in how to build social structures.
2[anonymous]12y
Lord of Light by Roger Zelazny.
Snow Crash by Neal Stephenson
7Technologos12y
I strongly second Snow Crash. I enjoyed it thoroughly.
2whpearson12y
I'd say identify what sort of future scenarios you want to explore and ask us to
identify exemplars. Or is the goal just to get a common vocabulary to discuss
things?
Reading sci-fi, while potentially valuable, should be done with a purpose in
mind. Unless you need another potential source of procrastination.
5komponisto12y
Goodness gracious. No, just looking for more procrastination/pure fun. I've
gotten along fine without it thus far, after all.
(Of course, if someone actually thinks I really do need to read sci-fi for some
"serious" reason, that would be interesting to know.)
1Technologos12y
While I don't think you need to read it, per se, I have found sci fi to be of
remarkable use in preparing me for exactly the kind of mind-changing upon which
Less Wrong thrives. The Asimov short stories cited above are good examples.
I also continue to cite Asimov's Foundation trilogy (there are more after the
trilogy, but he openly said that he wrote the later books purely because his
publisher requested them) as the most influential fiction works in pushing me
into my current career.
1Sniffnoy12y
Since no one's mentioned it yet, Rendezvous with Rama. You really don't want to
touch the sequels, though.
0Jonathan_Graehl12y
Agreed on both points.
1Kevin12y
Oh, definitely 1984 if you've never read it. Scary how much predictive power
it's had.
1[anonymous]12y
This might not be the best place to ask because so many people here prefer
science fiction to regular fiction. I've noticed that people who prefer science
fiction have a very different idea of what makes good science fiction than
people who have no preference or who prefer regular fiction.
Most of what I see in the other comments is on the "prefers science fiction"
side, except for things by LeGuin and maybe Dune.
Of course, you might turn out to prefer science fiction and just not have
realized it. Then all would be well.
1zero_call12y
It's actually very important to ask people for recommendations for books, and
especially for sci-fi, since it seems like a large majority of the work out
there is, well, garbage. Not to be too harsh; IMO the same thing could be said
for a lot of artistic genres (anime, modern action film, etc., etc.).
For sci-fi, there is some really top-notch work out there. But be warned that in
general the rest of a series isn't as good as the first book. Some classics, all
favorites of mine, are:
* Dune (Frank Herbert)
* Starship Troopers (Robert Heinlein)
* Ringworld (first book) (Larry Niven)
* Neuromancer (William Gibson) (Warning: last half of the book becomes s.l.o.w.
though)
* A Fire Upon the Deep (Vernor Vinge)
1Blueberry12y
I haven't seen much of the Star Wars or Star Trek stuff either, and don't really
consider them science fiction as much as space action movies. That's not really
what we're talking about.
I would strongly advise you to start with short stories, specifically Isaac
Asimov, Robert Heinlein, Arthur C. Clarke, Robert Sheckley, and Philip K. Dick.
All those authors are considered giants in the field and have anthologies of
collected short stories. Science fiction short stories tend to be easier to read
because you don't get bogged down in detail, and you can get right to the point
of exploring the interesting and speculative worlds.
1Jack12y
Films:
Blade Runner
Gattaca
2001: A Space Odyssey
1Furcas12y
Isaac Asimov's Foundation series:
* Foundation
* Foundation and Empire
* Second Foundation
* Foundation's Edge
* Foundation and Earth
There are prequels too, but I don't like 'em.
1Cyan12y
I recommend anything by Charles Stross, Lois McMaster Bujold's Vorkosigan Saga
[http://en.wikipedia.org/wiki/Vorkosigan_Saga] (link gives titles and
chronology), and anything by Ursula LeGuin, but especially City of Illusions and
The Left Hand of Darkness.
0RolfAndreassen12y
Upvoted for the Vorkosigan suggestion; seconded.
0AdeleneDawner12y
As much as I love LeGuin, her work tends to be fairly challenging. It's worth
noting that her novels tend to be much easier to read than her short stories,
unlike most authors.
0Alicorn12y
You find her novels easier? I've loved many LeGuin short stories (most notably
"The Ones Who Walk Away from Omelas", and everything in the Changing Planes
collection), but I can't stand her novels. They lose me ten pages in; I've never
managed to slog more than halfway through a single one.
0AdeleneDawner12y
The novels are definitely still challenging, but until I'd read a few of her
novels and figured out how to think about her writing, I wasn't able to make
sense of most of her short stories (Omelas being one exception to that). I'd get
to the end of the text and go 'wait, was there supposed to be a story in that
set of words?'
0MartinB12y
Just reading that, I am curious what you did end up reading and what you think
about it.
My recent ones were Heinlein's Citizen of the Galaxy and The Star Beast.
0RobinZ12y
I can see you have already been deluged in recommendations, but here are a few
novels I liked, with notes:
Mission of Gravity by Hal Clement. One of the better-written books from one of
my first favorite authors. Hal Clement is, in my opinion, the definitive writer
of hard science fiction, the benchmark to which others should be compared. If
possible, get a copy with the essay "Whirligig World" included (the volume Heavy
Planet
[http://www.amazon.com/Heavy-Planet-Classic-Mesklin-Stories/dp/076530368X], for
example).
Islands in the Net by Bruce Sterling. Something of a science-fiction
bildungsroman, and some of my favorite writing of all time. It's surprisingly
accurate as futurology, although that's not a particularly important feature in
a novel; more to the point, it's got wonderful worldbuilding and
characterization.
A Fire Upon the Deep by Vernor Vinge. Excellent epic science fiction. I don't
believe it is a classic in the way some others may have suggested, but I do
believe it's a good read.
A Woman of the Iron People by Eleanor Arnason. An excellent entry in the realm
of anthropological science fiction, with beautiful characterization of both the
human anthropologists and the population of aliens. (Worth comparing to Sheri S.
Tepper, Ursula K. LeGuin, and Joan D. Vinge.)
0Morendil12y
You already have more than enough, I'll nevertheless add a few:
Larry Niven's Ringworld
David Brin's Uplift books
John Varley's Titan, Wizard, Demon
In one of the dorkier moments of my existence, I've written a poem about the Great Filter. I originally intended to write music for this, but I've gone a few months now without inspiration, so I think I'll just post the poem to stand by itself and for y'all to rip apart.
The dire floor of Earth afore
saw once a fortuitous spark.
Life's swift flame sundry creature leased
and then one age a freakish beast
awakened from the dark.
Boundless skies beheld his eyes
and strident through the void he cried;
set his devices into space;
scryed for signs of a yonder race;
but desolate hush replied.
Stars surround and worlds abound,
the spheres too numerous to name.
Yet still no creature yet attains
to seize this lot, so each remains
raw hell or barren plain.
What daunting pale do most 'fore fail?
Be the test later or done?
Those dooms forgone our lives attest
themselves impel from first inquest:
cogito ergo sum.
Man does boast a charmèd post,
to wield the blade of reason pure.
But if this prov'ence be not rare,
then augurs fate our morrow bare,
our fleeting days obscure.
But might we nigh such odds defy,
and see before us cosmos bend?
Toward the heavens thy mind set,
and waver not: this proof, till 'yet,
did ne'er with man contend!
Suggested tweaks are welcome. Things that I'm currently unhappy with are that "fortuitous" scans awkwardly, and the skies/eyes rhyme feels clichéd.
It reminds me of something that happened in college, where a poem of mine was
being put in some sort of collection; there was a typo in it, and I mentioned a
correction to the professor. He nodded wisely, and said, "yes, that would keep
it to iambic pentameter [http://en.wikipedia.org/wiki/Iambic_pentameter]."
And I said, "iambic who what now?"... or words to that effect.
And then I discovered the wonderful world of meter. ;-)
Your poem is trying to be in iambic tetrameter
[http://en.wikipedia.org/wiki/Iambic_tetrameter] (four iambs - "dit dah" stress
patterns), but it's missing the boat in a lot of places. Iambic tetrameter also
doesn't lend itself to sounding serious; you can write something serious in it,
sure, but it'll always have kind of a childish singsong-y sort of feel, so you
have to know how to counter it.
Before I grokked this meter stuff, I just randomly tried to make things sound
right, which is what your poem appears to be doing. If you actually know what
meter you're trying for, it's a LOT easier to find the right words, because they
will be words that naturally hit the beat. Ideally, you should be able to read
your poem in a complete monotone and STILL hear the rhythmic beating of the
dit's and dah's... you could probably write a morse code message if you wanted
to. ;-)
Anyway, you will probably find it a lot easier to fix the problems with the
poem's rhythm if you know what rhythm you are trying to create. Enjoy!
3Eliezer Yudkowsky12y
For those who still read books, I recommend "The Poem's Heartbeat".
1dfranke12y
Yes, I'm well aware of what iambic tetrameter is and that the poem generally
conforms to it :-). The intended meter isn't quite that simple though. The final
verse of each stanza is only three feet, and the first foot of the third verse
of each stanza is a spondee. Verses are headless where necessary.
There's also an inverted foot in "Be the test later or done?", but I'm leaving
that in even though I could easily substitute "ahead" for "later". Despite
breaking the meter, it sounds better as-is.
0pjeby12y
Fair enough. I found other aspects of the poem so awkward, though, that I never
actually finished any one full stanza without wincing. The rhythm seemed like
the one thing I could offer a semi-objective opinion on, and I figured that
maybe some of the other things that were bothering me were a result of you
trying to fit a meter without conscious awareness of what meter you were trying
to fit.
0rwallace12y
I think it works very well as is. Upvoted.
Edit: but perhaps 'wondrous' for 'fortuitous'?
I recently revisited my old (private) high school, which had finished building a new >$15 million building for its football team (and misc. student activities & classes).
I suddenly remembered that when I was much younger, the lust of universities and schools in general for new buildings had always puzzled me: I knew perfectly well that I learned more or less the same whether the classroom was shiny new or grizzled gray and that this was true of just about every subject-matter*, and even then it was obvious that buildings must cost a lot to build and...
I don't know about that. I know that there are several buildings at my
university that I hate to have classes in, because they're either too hot, too
cold, or poorly ventilated. Yes, you're correct that in the majority of cases,
the age of the building makes no difference (e.g. no one recognizes the
difference between a two year old building and a twenty year old building), but
in extremis, the age can make a difference (e.g. if the building does not have
proper ventilation or temperature control). It's very difficult to keep focused
when the classroom is 30 degrees Celsius and the lecture is two hours long.
1gwern12y
Well, I can't really object to the extremes theory. You aren't a Third-Worlder
or a highly driven Indian or Chinese or pre-20th century American child who
wouldn't be bothered by such conditions, after all.
But most school building is not about avoiding such extremes. I can cite exactly
one example in my educational career where a building had a massive overhaul due
to genuine need (a fire in the gym burned the roof badly); all the other
expansions and new buildings.... not so much.
This reflects a failure of pedagogy more than the value of architecture - I've
never seen any research saying students can really focus & learn for 2 hours,
and the research I glanced over suggests much shorter lectures than that. (IIRC,
the FAA or USAF found pilot-education lectures should be no longer than 20
minutes and followed immediately by review.)
0[anonymous]12y
My dorm building has the number 2008 carved conspicuously into one of the stones
in its facade. It's pretty easy to tell that it's a two year old building.
0CronoDAS12y
My town has fairly recently (in the past ten years) added several new school
buildings. The old buildings had problems (leaky roofs, no air conditioning,
etc.) and the town's school-age population was growing.
Now, if they would only be willing to expand the library. :(
1gwern12y
So make the classes bigger, perhaps. In a Hansonian vein:
(I don't think I ever met someone who failed to learn something because
somewhere in the school there was a leak. Because of no air conditioning, maybe,
but puddles or leaks?)
3quanticle12y
Well, classrooms are of limited size. I know that the classrooms at my old high
school were only designed for thirty kids each. Now they hold nearly forty each.
There is a significant cost to having less space per person: the resulting
reductions in mobility and classroom flexibility have an impact on learning.
This is especially pronounced in science labs. Having even one more person per
lab station can have a surprisingly detrimental impact on learning. If there are
two or three people at a lab station, then pretty much everyone is forced to
participate (and learn) in order to finish the lesson. However, if there are
four or more kids at a lab station, then you can have a person slacking off, not
doing much and the others can cover for the slacker. The slacker doesn't learn
anything, and the other students are resentful because three are doing the work
of four.
0CronoDAS12y
Leaks damage things. Such as ceilings, for example.
I tried creating a separate login on my computer with no distractions, and tried to get my work done there. This reduced my productivity because it increased the cost of switching back from procrastinating to working. I would have thought that recovering in large bites and working in large bites would have been more efficient, but apparently no, it's not.
I'm currently testing the hypothesis that reading fiction (possibly reading anything?) comes out of my energy-to-work-on-the-book budget.
Next up to try: Pick up a CPAP machine off Craigslist.
A technical problem that is easily solvable. My approach has been to use VMWare.
All the productive tools are installed on the base OS. Procrastination tools are
installed on a virtual machine. Starting the procrastination box takes about 20
seconds (and more importantly a significant active decision) but closing it to
revert to 'productive mode' takes no time at all.
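A minimal sketch of that asymmetry, assuming VMware's vmrun command-line tool is available (the .vmx path is a placeholder, not a real one):

    import subprocess

    PROCRASTINATION_VMX = "/path/to/procrastination.vmx"  # placeholder path

    def start_procrastination_box():
        # Booting the VM takes ~20 seconds and a deliberate, active decision,
        # which is exactly the friction described above.
        subprocess.run(["vmrun", "start", PROCRASTINATION_VMX, "gui"], check=True)

    def back_to_productive_mode():
        # Suspending is near-instant, so reverting to work mode costs nothing.
        subprocess.run(["vmrun", "suspend", PROCRASTINATION_VMX, "soft"], check=True)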
4jimrandomh12y
I've noticed the same problem in separating work from procrastination
environments. But it might work if it was asymmetric - say, there's a single
fast hotkey to go from procrastination mode to work mode, but you have to type a
password to go in the other direction. (Or better yet, a 5 second delay timer
that you can cancel.)
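A minimal sketch of that asymmetric switch, with only the cancelable delay fleshed out (the mode-switching hooks are hypothetical placeholders):

    import time

    def switch_to_procrastination(delay_seconds=5):
        """Entering procrastination mode costs a cancelable countdown."""
        print(f"Procrastination mode in {delay_seconds} seconds; Ctrl-C to cancel.")
        try:
            for remaining in range(delay_seconds, 0, -1):
                print(f"  {remaining}...")
                time.sleep(1)
        except KeyboardInterrupt:
            print("Canceled; staying in work mode.")
            return False
        print("Procrastination mode on.")  # hook: switch account, start VM, etc.
        return True

    def switch_to_work():
        """Returning to work is a single fast action with no delay."""
        print("Work mode on.")  # hook: suspend VM, switch back, etc.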
3kpreid12y
I had the same problem when I was using just virtual screens with a key to
switch, not even separate accounts. It was a significant decrease in
productivity before I realized the problem. I think it's not just the effort to
switch; it's also that the work doesn't stay visible so that you think about it.
0groupuscule12y
This strategy works for me. I made the password to my non-work login something
that would remind me why I set up the system. (I know of people doing similar
things to the phone numbers of people they don't want to call.)
Suppose we want to program an AI to represent the interests of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individuals in the group, but that runs into the interpersonal comparison of utility problem. (Was there ever a post about this? Does Eliezer have a preferred approach?)
Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program each with the utility function of that individual. Then set a time in the future when one of those A...
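Filling in the gist (the supervisor and random selection are spelled out further down the thread), a minimal sketch of the selection mechanism, with the hard supervised-bargaining step left as a stub:

    import random

    def aggregate_by_random_dictator(utility_functions, negotiate):
        """One delegate AI per individual, each given that individual's
        utility function; `negotiate` is a stub for supervised bargaining."""
        agreement = negotiate(utility_functions)  # should Pareto-improve on the status quo
        winner = random.choice(utility_functions)  # the preset random draw
        # If bargaining succeeded, every delegate is bound by the agreement,
        # so which delegate wins the draw no longer matters.
        return agreement if agreement is not None else winner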
Unless you can directly extract a sincere and accurate utility function from the
participants' brains, this is vulnerable to exaggeration in the AI programming.
Say my optimal amount of X is 6. I could program my AI to want 12 of X, but be
willing to back off to 6 in exchange for concessions regarding Y from other AIs
that don't want much X.
2wedrifid12y
This does not seem to be the case when the AIs are unable to read each other's
minds. Your AI can be expected to lie to others with more tactical effectiveness
than you can lie indirectly via deceiving it. Even in that case it would be
better to let the AI rewrite itself for you.
On a similar note, being able to directly extract a sincere and accurate utility
function from the participants' brains leaves the system vulnerable to
exploitation. Individuals are able to rewrite their own preferences
strategically in much the same way that an AI can. Future-me may not be happy
but present-me got what he wants and I don't (necessarily) have to care about
future me.
0Wei_Dai12y
I had also mentioned this in an earlier comment on another thread. It turns out
that this is a standard concern in bargaining theory. See section 11.2 of this
review paper [http://rcer.econ.rochester.edu/RCERPAPERS/rcer_554.pdf].
So, yeah, it's a problem, but it has to be solved anyway in order for AIs to
negotiate with each other.
0timtyler11y
Do you think the more powerful group members are going to agree to that?!? They
worked hard for their power and status - and are hardly likely to agree to their
assets being ripped away from them in this way. Surely they will ridicule your
scheme, and fight against it being implemented.
5Wei_Dai11y
The main idea I wanted to introduce in that comment was the idea of using
(supervised) bargaining to aggregate individual preferences. Bargaining power
(or more generally, weighing of individual preferences) is a mostly orthogonal
issue. If equal bargaining power turns out to be impractical and/or immoral
[http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/], then some
other distribution of bargaining power can be used.
0Roko12y
Why not use virtual agents, which are given only a safe interface to negotiate
with each other over, and no physical powers, and are monitored by a meta-AI
that prevents them from trying to game the system, fool each other, etc. This
would avoid having wars between superintelligences in the real physical
universe.
0Wei_Dai12y
I think that's what I implied: there is a supervisor process that governs the
negotiation process and eventually picks a random AI to be released into the
real world.
0Roko12y
ok, just checking you weren't advocating a free-for-all.
0Vladimir_Nesov12y
What exactly is "equal bargaining power" is vague. If you "instantiate" multiple
AIs, their "bargaining power" may well depend on their "positions" relative to
each other, the particular values in each of them, etc.
Why this requirement? A cooperation of AIs might as well be one AI. Cooperation
between AIs is just a special case of operation of each AI in the environment,
and where you draw the boundary between AI and environment is largely arbitrary.
1Wei_Dai12y
The idea is that the status quo (i.e., the outcome if the AIs fail to cooperate)
is N possible worlds of equal probability, each shaped according to the values
of one individual/AI. The AIs would negotiate from this starting point and
improve upon it. If all the AIs cooperate (which I presume would be the case),
then which AI gets randomly selected to take over the world won't make any
difference.
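Spelled out, individual j's utility for that status quo is just an expected value over the N equally likely worlds (a formalization of the paragraph above, with w_i denoting the world shaped by individual i's values):

    U_j(\text{status quo}) \;=\; \frac{1}{N} \sum_{i=1}^{N} u_j(w_i)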
In this case the AIs start from an equal position, but you're right that their
values might also figure into bargaining power. I think this is related to a
point Eliezer made in the comment I linked to: a delegate may "threaten to adopt
an extremely negative policy in order to gain negotiating leverage over other
delegates." So if your values make you vulnerable to this kind of threat, then
you might have less bargaining power than others. Is this what you had in mind?
1Vladimir_Nesov12y
Letting a bunch of AIs with given values resolve their disagreement is not the
best way to merge values, just like letting humanity go on as it is is not
the best way to preserve human values. As extraction of preference shouldn't
depend on the actual "power" or even stability of the given system, merging of
preference could also possibly be done directly and more fairly when specific
implementations and their "bargaining power" are abstracted away. Such
implementation-independent composition/interaction of preference may turn out to
be a central idea for the structure of preference.
1andreas12y
There seems to be a bootstrapping problem: In order to figure out what the
precise statement is that human preference makes, we need to know how to combine
preferences from different systems; in order to know how preferences should
combine, we need to know what human preference says about this.
1Vladimir_Nesov12y
If we already have a given preference, it will only retell itself as an answer
to the query "What preference should result [from combining A and B]?", so
that's not how the game is played. "What's a fair way of combining A and B?" may
be more like it, but of questionable relevance. For now, I'm focusing on getting
a better idea of what kind of mathematical structure preference should be,
rather than on how to point to the particular object representing the given
imperfect agent.
0Wei_Dai12y
What is/are your approach(es) for attacking this problem, if you don't mind
sharing?
In my UDT1 post I suggested that the mathematical structure of preference could
be an ordering on all possible (vectors of) execution histories of all possible
computations. This seems general enough to represent any conceivable kind of
preference (except preferences about uncomputable universes), but also appears
rather useless for answering the question of how preferences should be merged.
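Spelled out in (my, not the post's) notation, the structure suggested there would be something like: let P_1, P_2, ... enumerate all possible programs and H_i the set of possible execution histories of P_i; a preference is then a total preorder on vectors of histories,

    \preceq \;\subseteq\; \Big(\prod_i H_i\Big) \times \Big(\prod_i H_i\Big)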
0Vladimir_Nesov12y
Since I don't have self-contained results, I can't describe what I'm searching
for concisely, and the working hypotheses and hunches are too messy to summarize
in a blog comment. I'll give some of the motivations I found towards the end of
the current blog sequence, and possibly will elaborate in the next one if the
ideas sufficiently mature.
Yes, this is not very helpful. Consider the question: what is the difference
between (1) preference, (2) strategy that the agent will follow, and the (3)
whole of agent's algorithm? Histories of the universe could play a role in
semantics of (1), but they are problematic in principle, because we don't know,
nor will ever know with certainty, the true laws of the universe. And what we
really want is to get to (3), not (1), but with good understanding of (1) so
that we know (3) to be based on our (1).
0Wei_Dai12y
Thanks. I look forward to that.
I don't understand what you mean here, and I think maybe you misunderstood
something I said earlier. Here's what I wrote in the UDT1 post
[http://lesswrong.com/lw/15m/towards_a_new_decision_theory/]:
(Note that of course this utility function has to be represented in a
compressed/connotational form, otherwise it would be infinite in size.) If we
consider the multiverse to be the execution of all possible programs, there is
no uncertainty about the laws of the multiverse. There is uncertainty about
"which universes, i.e., programs, we're in", but that's a problem we already
have a handle on, I think.
So, I don't know what you're referring to by "true laws of the universe", and I
can't find an interpretation of it where your quoted statement makes sense to
me.
0Vladimir_Nesov12y
I don't believe that directly posing this "hypothesis" is a meaningful way to
go, although the computational paradigm can find its way into the description of
the environment for an AI that in its initial implementation works from within a
digital computer.
0andreas12y
Here is a revised way of asking the question I had in mind: If our preferences
determine which extraction method is the correct one (the one that results in
our actual preferences), and if we cannot know or use our preferences with
precision until they are extracted, then how can we find the correct extraction
method?
Asking it this way, I'm no longer sure it is a real problem. I can imagine that
knowing what kind of object preference is would clarify what properties a
correct extraction method needs to have.
0Vladimir_Nesov12y
Going meta and using the (potentially) available data, such as humans in the form
of uploads, is a step made in an attempt to minimize the amount of data (given
explicitly by the programmers) to the process that reconstructs human
preference. Sure, it's a bet (there are no universal preference-extraction
methods that interpret every agent in a way it'd prefer to do itself, so we have
to make a good enough guess), but there seems to be no other way to have a
chance at preserving current preference. Also, there may turn out to be a good
means of verification that the solution given by a particular
preference-extraction procedure is the right one.
1pdf23ds12y
So you know how to divide the pie
[http://lesswrong.com/lw/ru/the_bedrock_of_fairness/]? There is no interpersonal
"best way" to resolve directly conflicting values. (This is further than Eliezer
went.) Sure, "divide equally" makes a big dent in the problem, but I find it
much more likely any given AI will be a Zaire than a Yancy. As a simple case,
say AI1 values X at 1, and AI2 values Y at 1, and X+Y must, empirically, equal
1. I mean, there are plenty of cases where there's more overlap and orthogonal
values, but this kind of conflict is unavoidable between any reasonably complex
utility functions.
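For the simple case given, one standard candidate "best way" (an assumption on my part; the comment doesn't name it) is the Nash bargaining solution. Taking the disagreement point to be (0, 0), it is a one-line computation:

    % maximize the Nash product over the feasible pie X + Y = 1
    \max_{x + y = 1,\; x, y \ge 0} \; x \cdot y
    \quad\Longrightarrow\quad x^\ast = y^\ast = \tfrac{1}{2}

Substituting y = 1 - x gives f(x) = x(1 - x), whose maximum is at x = 1/2, so the pie is split evenly -- which is exactly the "divide equally" dent mentioned above, and which a strategically exaggerating Zaire can still try to game.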
1Vladimir_Nesov12y
I'm not suggesting an "interpersonal" way (as in, by a philosopher of perfect
emptiness). The possibilities open for the search of "off-line" resolution of
conflict (with abstract transformation of preference) are wider than those for
the "on-line" method (with AIs fighting/arguing it over) and so the "best"
option, for any given criterion of "best", is going to be better in "off-line"
case.
0Wei_Dai12y
[Edited] I agree that it is probably not the best way. Still, the idea of
merging values by letting a bunch of AIs with given values resolve their
disagreement seems better than previous proposed solutions, and perhaps gives a
clue to what the real solution looks like.
BTW, I have a possible solution to the AI-extortion problem mentioned by
Eliezer. We can set a lower bound for each delegate's utility function at the
status quo outcome (N possible worlds with equal probability, each shaped
according to one individual's utility function). Then any threats to cause an
"extremely negative" outcome will be ineffective since the "extremely negative"
outcome will have utility equal to the status quo outcome.
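A minimal sketch of that fix (all names illustrative; `utility` maps an outcome to a real number):

    def clamp_at_status_quo(utility, status_quo_utility):
        """Floor a delegate's utility at the status quo outcome's utility, so a
        threatened "extremely negative" world is valued no lower than the status
        quo and the threat yields no negotiating leverage."""
        def clamped(outcome):
            return max(utility(outcome), status_quo_utility)
        return clamped

So if a rival delegate threatens an outcome w with utility(w) far below the status quo, clamped(w) still equals status_quo_utility, and the threat is toothless.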
I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is because it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.
The movie is also a good example of existential risk
[http://www.nickbostrom.com/existential/risks.html] in fiction (in this case, a
genetically engineered biological agent).
0HalFinney12y
I agree about the majoritarianism problem. We should pay people to adopt and
advocate independent views, to their own detriment. Less ethically we could
encourage people to think for themselves, so we can free-ride on the costs they
experience.
1Wei_Dai12y
I guess we already do something like that, namely award people with status for
being inventors or early adopters of ideas (think Darwin and Huxley) that
eventually turn out to be accepted by the majority.
I've been a longtime lurker, and tried to write up a post a while ago, only to see that I didn't have enough karma. I figure this is the post for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I'd gladly turn it into a top-level post here, if that's in order.
What you seem to be saying, which I agree with, is that it's irritating as well
as irrelevant when people try to pull authority on you, using "age" or "quantity
of experience" as a proxy for authority. Yes, argument does screen off
authority. But that's no reason to knock "life experience".
If opinions are not based on "personal experience", what can they possibly be
based on? Reading a book is a personal experience. Arguing an issue with someone
(and changing your mind) is a personal experience. Learning anything is a
personal experience, which (unless you're too good at compartmentalizing) colors
your other beliefs.
Perhaps the issue is with your thinking that "demolishing someone's argument" is
a worthwhile instrumental goal in pursuit of truth. A more fruitful goal is to
repair your interlocutor's argument, to acknowledge how their personal
experience has led them to having whatever beliefs they have, and expose
symmetrically what elements in your own experience lead you to different views.
Anecdotes are evidence, even though they can be weak evidence. They can be
strong evidence too. For instance, having read this comment
[http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1rwg] after I read the
commenter's original report of his experience as an isolated individual, I'd be
more inclined to lend credence to the "stealth blimp" theory. I would have
dismissed that theory on the basis of reading the Wikipedia page alone or
hearing the anecdote alone, but I have a low prior probability for someone on
LessWrong arranging to seem as if he looked up news reports after first making an
honest disclosure to other people interested in truth-seeking.
It seems inconsistent on your part to start off with a rant about "anecdotes",
and then make a strong, absolute claim based solely on "the Sokal affair" -
which at the scale of scientific institutions is anecdotal.
I think you're trying to make two distinct points and getting them mixed up, and
as a result not getti
2Seth_Goldin12y
Hi Morendil,
Thanks for the comment. The particular version you are commenting on was an
earlier, worse version than what I posted and then pulled this morning. The
version I posted this morning was much better than this. I actually changed the
claim about the Sokal affair completely.
Due to what I fear was an information cascade of negative karma, I pulled the
post so that I might make revisions.
The criticism concerning both this earlier version and the newer one from this
morning still holds though. I too realized after the immediate negative feedback
that I actually was combining, poorly, two different points and losing both of
them in the process. I think I need to revise this into two different posts, or
cut out the point about academia entirely. I will concede that anecdotes are
evidence as well in the future version.
Unfortunately I was at exactly 50 karma, and now I'm back down to 20, so it will
be a while before I can try again. I'll be working on it.
2Seth_Goldin12y
Here's the latest version, what I will attempt to post on the top level when I
again have enough karma.
--------------------------------------------------------------------------------
"Life Experience" as a Conversation-Halter
Sometimes in an argument, an older opponent might claim that perhaps as I grow
older, my opinions will change, or that I'll come around on the topic. Implicit
in this claim is the assumption that age or quantity of experience is a proxy
for legitimate authority. In and of itself, such "life experience" is necessary
for an informed rational worldview, but it is not sufficient.
The claim that more "life experience" will completely reverse an opinion
indicates that the person making such a claim believes that opinion is based
primarily on an accumulation of anecdotes, perhaps derived from extensive
availability bias [http://en.wikipedia.org/wiki/Availability_heuristic]. It
actually is a pretty decent assumption that other people aren't Bayesian,
because for the most part, they aren't. Many can confirm this, including Haidt,
Kahneman, and Tversky.
When an opponent appeals to more "life experience," it's a last resort, and it's
a conversation halter [http://lesswrong.com/lw/1p2/conversation_halters/]. This
tactic is used when an opponent is cornered. The claim is nearly an outright
acknowledgment of a move to exit the realm of rational debate. Why stick to
rational discourse when you can shift to trading anecdotes? It levels the
playing field, because anecdotes, while Bayesian evidence
[http://lesswrong.com/lw/in/scientific_evidence_legal_evidence_rational/], are
easily abused, especially for complex moral, social, and political claims. As
rhetoric, this is frustratingly effective, but it's logically rude
[http://lesswrong.com/lw/1p1/logical_rudeness/].
Although it might be rude and rhetorically weak, it would be authoritatively
appropriate for a Bayesian to be condescending to a non-Bayesian in an argument.
Conversely, it can be downright m
0Seth_Goldin12y
Sorry; I didn't realize that I can still post. I went ahead and posted it.
2SilasBarta12y
I agree with your point and your recommendation. Life experiences can provide
evidence, and they can also be an excuse to avoid providing arguments. You need
to distinguish which one it is when someone brings it up. Usually, if it is
valid evidence, the other person should be able to articulate which insight a
life experience would provide to you, if you were to have it, even if they can't
pass the experience directly to your mind.
I remember arguing with a family member about a matter of policy (for obvious
reasons I won't say what), and when she couldn't seem to defend her position,
she said, "Well, when you have kids, you'll see my side." Yet, from context, it
seems she could have, more helpfully, said, "Well, when you have kids, you'll be
much more risk-averse, and therefore see why I prefer to keep the system as is"
and then we could have gone on to reasons about why one or the other system is
risky.
In another case (this time an email exchange on the issue of pricing carbon
emissions), someone said I would "get" his point if I would just read the famous
Coase paper on externalities. While I hadn't read it, I was familiar with the
arguments in it, and ~99% sure my position accounted for its points, so I kept
pressing him to tell me which insight I didn't fully appreciate. Thankfully,
such probing led him to erroneously state what he thought was my opinion, and
when I mentioned this, he decided it wouldn't change my opinion.
3thomblake12y
It illustrated nothing of the sort. The Sokal affair illustrated that a
non-peer-reviewed, non-science journal will publish bad science writing that was
believed to be submitted in good faith.
Social Text was not peer-reviewed because they were hoping to... do...
something. What Sokal did was similar to stealing everything from a 'good faith'
vegetable stand and then criticizing its owner for not having enough security.
6Seth_Goldin12y
Noted. In another draft I'll change this to make the point how easy it is for
high-status academics to deal in gibberish. Maybe they didn't have so much
status external to their group of peers, but within it, did they?
What the Social Text Affair Does and Does Not Prove
http://www.physics.nyu.edu/faculty/sokal/noretta.html
[http://www.physics.nyu.edu/faculty/sokal/noretta.html]
"From the mere fact of publication of my parody I think that not much can be
deduced. It doesn't prove that the whole field of cultural studies, or cultural
studies of science -- much less sociology of science -- is nonsense. Nor does it
prove that the intellectual standards in these fields are generally lax. (This
might be the case, but it would have to be established on other grounds.) It
proves only that the editors of one rather marginal journal were derelict in
their intellectual duty, by publishing an article on quantum physics that they
admit they could not understand, without bothering to get an opinion from anyone
knowledgeable in quantum physics, solely because it came from "a conveniently
credentialed ally" (as Social Text co-editor Bruce Robbins later candidly
admitted[12]), flattered the editors' ideological preconceptions, and attacked
their "enemies".[13]"
1thomblake12y
I'd forgotten that Sokal himself admitted that much about it - thanks for the
cite.
2Vladimir_Nesov12y
It's unclear what you mean by both "Bayesian" and by "authority" in this
sentence. If a person is "Bayesian", does it give "authority" for condescension?
There clearly is some truth to the claim that being around longer sometimes
allows one to arrive at more accurate beliefs, including more accurate intuitive
assessment of the situation, if you are not down a crazy road in the particular
domain. It's not very strong evidence, and it can't defeat many forms of more
direct evidence pointing in the contrary direction, but sometimes it's an OK
heuristic, especially if you are not aware of other evidence ("ask the elder").
0Seth_Goldin12y
Maybe "authority" is the wrong word. What I mean is that the opponent making
this claim is dismissing my stance as wrong because of my supposedly lesser
experience. It means that they believe that truth follows from collecting
anecdotes. They assert that because they have more anecdotes, they are
correct, and I am incorrect. Since they are not being rational, we can't trust
their standard of truth to dismiss my position as wrong; their whole methodology
is hopelessly flawed.
0Vladimir_Nesov12y
Your core claim seems to be that you should dismiss statements (as opposed to
arguments) by "irrational" people. This is a more general idea, basically
unrelated to amount of their personal experience or other features of typical
conversations which you discuss in your comment.
0Seth_Goldin12y
If someone's argument, and therefore position, is irrational, how can we trust
them to give honest and accurate criticism of other arguments?
1Vladimir_Nesov12y
At which point you are completely forsaking your original argument
[http://lesswrong.com/lw/1lf/open_thread_january_2010/1esp] (rightfully or
wrongly, which is a separate concern), which is the idea of my critical comment
[http://lesswrong.com/lw/1lf/open_thread_january_2010/1ewl] above. It's unclear
what you are arguing about, if your conclusion is equivalent to a much simpler
premise that you have to assume independently of the argument. This sounds like
rationalization [http://wiki.lesswrong.com/wiki/Rationalization] (again, no
matter whether the conclusion-advice-heuristic is correct or not).
0Seth_Goldin12y
OK, let me break it down.
I take "life experience" to mean a haphazard collection of anecdotes.
Claims from haphazardly collected anecdotes do not constitute legitimate
evidence, though I concede those claims do often have positive correlations with
true facts.
As such, relying on "life experience" is not rational. The point about
condescension is tangential. The whole rhetorical technique is frustrating,
because there is no way to move on from it. If "life experience" were legitimate
evidence for the claim, the argument would not be able to continue until I have
gained more "life experience," and who decides how much would be sufficient?
Would it be until I come around? Once we throw the standard of evidence out,
we're outside the bounds of rational discourse.
4thomblake12y
I don't think that's something that most people who think "life experience" is
valuable would agree to.
It might be profitable for you to revise your criteria for what constitutes
legitimate evidence. Throwing away information that has a positive correlation
with the thing you're wondering about seems a bit hasty.
0Seth_Goldin12y
I am calling attention to reverting to "life experience" as recourse in an
argument. If someone strays to that, it's clear that we're no longer considering
evidence for whatever the argument is about. Referring back to "life experience"
is far too nebulous to take as evidence of anything.
As for what constitutes legitimate evidence, even if anecdotes can correlate,
anecdotes are not evidence!
http://www.scientificamerican.com/article.cfm?id=how-anecdotal-evidence-can-undermine-scientific-results
[http://www.scientificamerican.com/article.cfm?id=how-anecdotal-evidence-can-undermine-scientific-results]
3Nick_Tarleton12y
Anecdotes are rational evidence, but not scientific evidence
[http://lesswrong.com/lw/in/scientific_evidence_legal_evidence_rational/].
0Seth_Goldin12y
For a debate involving complex religious, scientific, or political arguments,
this won't suffice.
0[anonymous]12y
Let's say I'm debating someone on whether or not poltergeists exist.
0Seth_Goldin12y
All,
Thanks for the votes. So, I'm not exactly sure how the karma system works. On
the main page I see articles from people with less than 50 points, and I see
prominent users that have nonsensically low counts. Do I still need 50 points to
post a main article?
2kpreid12y
Users' karma is only displayed on their user page (and the top contributors
list). The number in the header of an article or comment is the score for that
post only. Does this help?
This article about gendered language showed up on one of my feeds a few days ago. Given how often discussions of nongendered pronouns happen here, I figure it's worth sharing.
Nice, I liked the part about Tuyuca:
It would be fun to try to build a "rational" dialect of English that requires
people to follow rules of logical inference and reasoning.
Suppose you could find out the exact outcome (up to the point of reading the alternate history equivalent of Wikipedia, history books etc.) of changing the outcome of a single historical event. What would that event be?
Note that major developments like "the Roman empire would never have fallen" or "the Chinese wouldn't have turned inwards" involve multiple events, not just one.
So many. I can't limit it to one, but my top four would be "What if Mohammed had never been born?", "What if Julian the Apostate had succeeded in stamping out Christianity?", "What if Thera had never blown and the Minoans had survived?", and "What if Alexander the Great had lived to a ripe old age?"
The civilizations of the Near East were fascinating, and although the early Islamic Empire was interesting in its own right, it did a lot to homogenize some really cool places. It also dealt a fatal wound to Byzantium. If Mohammed had never existed, I would look forward to reading about the Zoroastrian Persians, the Byzantines, and the Romanized Syrians and Egyptians surviving much longer than they did.
The Minoans were the most advanced civilization of their time, and had plumbing, three story buildings, urban planning and possibly even primitive optics in 2000 BC (I wrote a bit about them here). Although they've no doubt been romanticized, in the romanticized version at least they had a pretty equitable society, gave women high status, and revered art and nature. Then they were all destroyed by a giant volcano. I remember reading one hist...
Given that Alexander was one of the most successful conquerors in all of history, he almost certainly benefited from being extremely lucky. If he had lived longer, therefore, he would have probably experienced much regression to the mean with respect to his military success.
Of course, once you are already the most successful conqueror alive you tend to
need less luck. You can get by on the basic competence that comes from
experience and the resources you now have at your disposal. (So long as you
don't, for example, try to take Russia. Although even then Alexander's style
would probably have worked better than Napoleon's.)
1DanArmak12y
As did the Christian culture before them. And the original Roman Empire before
that. And Alexander's Hellenistic culture spread by the fragments of his
mini-empire. And the Persian empires that came and went in the region...
Along the same idea, but much more likely to yield radical differences to the
future of human society, I'd like to know what would have happened if some
ancient bottleneck epidemic had not happened or had happened differently (killed
more or fewer people, or just different individuals). Much or all of the human
gene pool after that altered event would be different.
2DanArmak12y
I'd like to see a world in which all ancestor-types of humans through to the
last common ancestor with chimps still lived in many places.
0Zack_M_Davis12y
Book recommendation [http://en.wikipedia.org/wiki/A_Different_Flesh]
0loqi12y
I'd be pretty interested in seeing the results of this set of Malaria-resistance
mutations [http://en.wikipedia.org/wiki/Malaria#Resistance_in_South_Asia] having
been more widespread.
-1[anonymous]12y
Probably not badly enough to pony up for the computational power necessary to
find the answer though, right?
ETA: Nevermind, didn't see the parent prompt. Still an important consideration
though, so I'm leaving it in...
8Kaj_Sotala12y
I'd be curious to know what would have happened if Christopher Columbus's fleet
had been lost at sea during his first voyage across the Atlantic. Most scholars
were already highly skeptical of his plans, as they were based on a
miscalculation, and him not returning would have further discouraged any
explorers from setting off in that direction. How much longer would it have
taken before Europeans found out about the Americas, and how would history have
developed in the meanwhile?
1Jack12y
Have you read Orson Scott Card's "Pastwatch: The Redemption of Christopher
Columbus"? It suggest an answer to this question.
1CronoDAS12y
Not a very realistic one, though.
6Alicorn12y
I would like to know what would have happened if, sometime during the Dark Ages
let's say, benevolent and extremely advanced aliens had landed with the
intention to fix everything. I would diligently copy and disseminate the entire
Wikipedia-equivalent for the generously-divulged scientific and sociological
knowledge therein, plus cultural notes on the aliens such that I could write a
really keenly plausible sci-fi series.
4Gavin12y
A sci-fi series based on real extra-terrestrials would quite possibly be so
alien to us that no one would want to read it.
6billswift12y
Not just science fiction and aliens either. Nearly all popular and successful
fiction is based around what are effectively modern characters in whatever
setting. I remember a paper I read back around the mid-eighties pointing out
that Louis L'Amour's characters were basically just modern Americans with the
appropriate historical technology and locations.
0dclayh12y
I've found that Umberto Eco's novels do the best job I've seen at avoiding this.
0pdf23ds12y
I'd love to see an essay-length expansion on this theme.
0billswift12y
As I wrote, I read it in something in the 1980s. Probably, but I'm not sure, in
Olander and Greenberg's "Robert A Heinlein" or in Franklin's "Robert A Heinlein:
America as Science Fiction".
3Alicorn12y
I might have to mess with them a bit to get an audience, yes.
3Zack_M_Davis12y
Of course you can't fully describe the scenario, or you would already have your
answer, but even so, this question seems tantalizingly underspecified. Fix
everything, by what standard? Human goals aren't going to sync up exactly with
alien goals (or why even call them aliens?), so what form does the aliens'
benevolence take? Do they try to help the humans in the way that humans would
want to be helped, insofar as that problem has a unique answer? Do they give
humanity half the stars, just to be nice? Insofar as there isn't a unique answer
to how-humans-would-want-to-be-helped, how can the aliens avoid engaging in what
amounts to cultural imperialism---unilaterally choosing what human civilization
develops into? So what kind of imperialism do they choose?
How advanced are these aliens? Maybe I'm working off horribly flawed
assumptions, but in truth it seems kind of odd for them to have interstellar
travel without superintelligence and uploading. (You say you want to write
keenly plausible science fiction, so you are going have to do this kind of
analysis.) The alien civilization has to be rich and advanced enough to send out
a benevolent rescue ship, and yet not develop superintelligence and send out a
colonization wave at near-c to eat the stars and prevent astronomical waste
[http://www.nickbostrom.com/astronomical/waste.html]. Maybe the rescue ship
itself was sent out at near-c and the colonization wave won't catch up for a few
decades or centuries? Maybe the rescue ship was sent out, and then the home
civilization collapsed or died out
[http://www.nickbostrom.com/existential/risks.html]?---and the rescue ship can't
return or rebuild on its own (not enough fuel or something), so they need some
of the Sol system's resources?
Or maybe there's something about the aliens' culture and psychology such that
they are capable of developing interstellar travel but not capable of developing
superintelligence? I don't think it should be too surprising if the aliens
should
5Alicorn12y
Why not, as long as I'm making things up?
Because they are from another planet.
I do not know enough science to address the rest of your complaints.
6orthonormal12y
OK, I sense cross-purposes here. You're asking "what would be the most
interesting and intelligible form of positive alien contact (in human terms)",
and Zack is asking "what would be the most probable form of positive alien
contact"?
(By "positive alien contact", I mean contact with aliens who have some goal that
causes them to care about human values and preferences (think of the
Superhappies [http://lesswrong.com/lw/y4/three_worlds_collide_08/]), as opposed
to a Paperclipper [http://wiki.lesswrong.com/wiki/Paperclip_maximizer] that only
cares about us as potential resources for or obstacles to making paperclips.)
Keep in mind that what we think of as good sci-fi is generally an example of
positing human problems (or allegories for them) in inventive settings, not of
describing what might most likely happen in such a setting...
4Zack_M_Davis12y
I'm worried that some of my concepts here are a little bit shaky and confused in
a way that I can't articulate, but my provisional answer is: because their
planet would have to be virtually a duplicate of Earth to get that kind of
match. Suppose that my deepest heart's desire, my lifework, is for me to write a
grand romance novel about an actuary who lives in New York and her unusually
tall boyfriend. That's a necessary condition for my ideal universe: it has to
contain me writing this beautiful, beautiful novel.
It doesn't seem all that implausible that powerful aliens would have a goal of
"be nice to all sentient creatures," in which case they might very well help me
with my goal in innumerable ways, perhaps by giving me a better word processor,
or providing life extension so I can grow up to have a broader experience base
with which to write. But I wouldn't say that this is the same thing as the alien
sharing my goals, because if humans had never evolved, it almost certainly
wouldn't have even occurred to the alien to create, from scratch, a human being
who writes a grand romance novel about an actuary who lives in New York and her
unusually tall boyfriend. A plausible alien is simply not going to spontaneously
invent those concepts and put special value on them. Even if they have rough
analogues to "courtship story" or even "person who is rewarded for doing economic
risk-management calculations", I guarantee you they're not going to invent New
York.
Even if the alien and I end up cooperating in real life, when I picture my ideal
universe, and when they picture their ideal universe, they're going to be
different visions. The closest thing I can think of would be for the aliens to
have evolved a sort of domain-general niceness, and to have a top-level goal for
the universe to be filled with all sorts of diverse life with their own
analogues of pleasure or goal-achievement or whatever, which me and my
beautiful, beautiful novel would qualify as a special case of. Act
4Alicorn12y
Domain-general niceness works. It's possible to be nice to and helpful to lots
of different kinds of people with lots of different kinds of goals. Think
Superhappies except with respect for autonomy.
5RolfAndreassen12y
I would try to study the effects of individual humans, Great-Man vs Historical
Inevitability style, by knocking out statesmen of a particular period. Hitler is
a cliche, whom I'd nonetheless start with; but I'd follow up by seeing what
happens if you kill Chamberlain, Churchill, Roosevelt, Stalin... and work my way
down to the likes of Turing and Doenitz. Do you still get France overrun in six
weeks? A resurgent German nationalism? A defiant to-the-last-ditch mood in
Britain? And so on.
Then I'd start on similar questions for the unification of Germany: Bismarck,
Kaiser Wilhelm, Franz Josef, Marx, Napoleon III, and so forth. Then perhaps the
Great War or the Cold War, or perhaps I'd be bored with recent history and go
for something medieval instead - Harald wins at Stamford Bridge, perhaps. Or to
maintain the remove-one-person style of the experiment, there's the three
claimants to the British throne, one could kill Edward the Confessor earlier, the
Pope has a hand in it, there's the various dukes and other feudal lords in
England... lots of fun to be had with this scenario!
1DanArmak12y
Don't limit yourself to just killing people. It's not a good way to learn how
history works, just like studying biology by looking at organisms with defective
genes doesn't tell us everything we'd like to know about cell biology.
0RolfAndreassen12y
Nu, but I specified the particular part of "how history works" that I want to
study, namely, are individuals important to large-scale events? For that purpose
I think killing people would work admirably well. For other studies, certainly,
I would use a different technique.
1DanArmak12y
If you're ok with a yes or no answer, then it's enough. If you also want to know
how individuals may be important to events, killing may not be enough, I think.
5dfranke12y
I'd like to know what would have happened if movable type had been invented in
the 3rd century AD.
3Nick_Novitski12y
For starters, the Council of Nicea would flounder helplessly as every sect with
access to a printing press floods the market with their particular version of
christianity.
4PeterS12y
I've been curious to know what the "U.S." would be like today if the American
Revolution had failed.
Also, though it's a bit cliche to respond to this question with something like
"Hitler is never born", it is interesting to think about just what is necessary
to propel a nation into war / dictatorship / evil like that (e.g. just when can
you kill / eliminate a single man and succeed in preventing it?) That's
something I'm fairly curious about (and the scope of my curiosity isn't
necessarily confined to Hitler - could be Bush II, Lincoln, Mao, an Islamic imam
whose name I've forgotten, etc.).
2DanielLC12y
Something like Canada I guess.
While we're at it, what if the Continental Congress failed at replacing the
Articles of Confederation?
1i7712y
Code Geass :)
0LucasSloan12y
Sadly, that is more like the result if the ARW fails and the laws of physics
were weirdly different.
3anonym12y
I'd like to know what would have happened if the Library of Alexandria hadn't
been destroyed. If even the works of Archimedes alone -- including the key
insight underlying Integral Calculus -- had survived longer and been more widely
disseminated, what difference would that have made to the future progress of
mathematics and technology?
2MichaelGR12y
I wonder if much in 20th century history would have been different if the USSR
had been first to land someone on the Moon.
At the time, both sides played it like it was something very important, if only for
psychological reasons. But did that symbolic victory really mean that much? Did
it actually alter the course of history much?
1blogospheroid12y
China not imposing the Hai Jin [http://en.wikipedia.org/wiki/Hai_jin] edict.
Greater Chinese exploration would have meant an extremely different and
interesting history.
3DanArmak12y
May you live in interesting times!
1[anonymous]12y
A recent Facebook status of mine: Too bad Benjamin Franklin wasn't alive in
1835; he could have invented the Internet. The relay had been invented around
then; that's theoretically all that's needed for computation and error
correction, though it would go very slowly.
3JohannesDahlstrom12y
Well, Charles Babbage [http://en.wikipedia.org/wiki/Analytical_engine] was alive
back then...
3[anonymous]12y
Huh. Then, uh... too bad Charles Babbage wasn't Benjamin Franklin?
0SilasBarta12y
And if not then, by the time they had extensive telegraph or telephone networks,
basic computation, and typewriters, about 1890 (sic). Why didn't it happen?
Numerous barriers, and their overcoming since then counts as political and
scientific advances.
0MatthewB12y
This one is hard.
Take Cannae, for example. Can you really measure this as one outcome? It could
be broken down into all kinds of things:
* Varro's (or Paullus' if you happen to believe that Varro was indeed
scapegoated for the disaster and that Paullus was really in command that day)
decision to mass the legions in a phalanx, instead of their usual wide
maniples.
* Whether the Celts, Celtiberians and Iberians would have been able to hold the
center against the legionaries without breaking.
* Whether Hasdrubal would have managed to stop the Roman and Italian Cavalry
* I cannot recall who was in command of the Numidians on the other flank, but if
they had not been turned around when they went to pursue The Italian Allied
Cavalry on their flank, that side of the battlefield would not have been
enveloped by the Punic/African Heavy Infantry that Hannibal had held back
And, one could continue, probably down to the level of "Did legionary Plebius
manage to hurl his pilum soon enough to impale the charging Celtiberian Villoni in
time to keep the aforementioned Celtiberian from eventually killing Plebius'
centurion, who would have then gone on to kill Hannibal, just before he had time
to issue the final order of the day?"
So, how are you defining "A single event" here?
0Kaj_Sotala12y
Loosely. Any of the ones you listed would be fine for me.
0NancyLebovitz12y
I don't think you get a single outcome even from the best specified event--
you'd get a big sheaf of outcomes.
If you could see all the multiple futures branching off from the present and
have some way of sorting through them, you could presumably make better choices
than you do now, but it would still be very hard to optimize much of anything.
0Kaj_Sotala12y
Okay - "suppose you could find out the single most probable outcome..."
0orthonormal12y
Since we're talking about a continuous probability measure, I'm not sure if
that's the right way to think about it. Perhaps it's best to think of a randomly
chosen point from the probability measure that evolves from a concentrated mass
around a particular starting configuration— that is, a typical history given a
particular branching point.
0Kaj_Sotala12y
One could always argue that since there is only a finite (even if unimaginably
huge) amount of possible branching points, we're actually talking about a
discrete probability distribution.
Your approach works, too.
0orthonormal12y
How do you mean?
I'm talking about the fundamental physics of the universe. From a mathematical
perspective, it's far more elegant (ergo, more likely) to deal with a partial
differential equation defined on a continuous configuration space
[http://lesswrong.com/lw/pw/decoherence_is_pointless/]. Attempts to discretize
the space in the name of infinite-set atheism seem ad-hoc to me.
0Kaj_Sotala12y
Oh, right - I was under the impression that MWI would have involved discrete
transitions at some point (I haven't had the energy to read all of the MWI
sequence). If that's incorrect, then ignore my previous comment.
0DanArmak12y
The easy and trite answer is: the event of EY discovering a correct FAI theory,
which is so simple that it's fully described in the Wikipedia article.
0Zack_M_Davis12y
Related: what if I. J. Good had taken himself seriously and started a
Singularity effort rather than just writing that one article
[http://www.acceleratingfuture.com/ultraintelligentmachine.html]?
0Larks12y
If Einstein had been wrong, and Newton right. More specifically, if the
experiments held at the time had revealed that the speed of light was relative
and that the Earth moved through the ether.
1Jack12y
Surely this isn't changing a single historical event but the laws governing our
universe.
Oh, and to post another "what would you find interesting" query, since I found the replies to the last one to be interesting. What kind of crazy social experiment would you be curious to see the results of? Can be as questionable or unethical as you like; Omega promises you ve'll run the simulation with the MAKE-EVERYONE-ZOMBIES flag set.
Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like.
Try to create a society of unconscious people with bicameral minds, as described in Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind", using actors taking on the appropriate roles. (Jaynes's theory, which influenced Daniel Dennett, was that consciousness is a recent cultural innovation.)
Try to create a society where people grow up seeing sexual activity as casual, ordinary, and expected as shaking hands or saying hello, and see whether sexual taboos develop, and study how sexual relationships form.
Raise a bunch of kids speaking artificial languages, designed to be unlike any human language, and study how they learn and modify the language they're taught. Or give them a language without certain concepts (relatives, ethics, the self) and see how the language influences they way they think and act.
Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like
They'd probably be like the average less wrong commenter/singularitarian/transhumanist, so really no need to run this one.
What Adelene said. I'm afraid it isn't very funny. :-)
2MatthewB12y
I've noticed that some of the Pacific Island countries don't have much in the
way of sexual taboos, and they tend to teach their kids things like:
* Don't stick your thingy in there without proper lube
or
* If you are going to do that, clean up afterward.
Japan is also a country that has few sexual taboos (when compared to western
Christian society). They still have their taboos and strangeness surrounding
sex, but it is not something that is considered sinful or dirty.
I am really interested in that last suggestion, and it sounds like one of the
areas I want to explore when I get to grad school (and beyond). At Eliezer's
talk at the first Singularity Summit (and other talks I have heard him give) he
speaks of a possible mind space. I would like to explore that mind space further
outside of the human mind.
As John McCarthy proposed in one of his books, it might be the case that even a
thermostat is a type of mind. I have been exploring how current computers are
a type of evolving mind with people as the genetic agents: we take things in
computers that work for us, and combine those with other things, to get an
evolutionary development of an intelligent agent.
I know that it is nothing special, and others have gone down that path as well,
but I'd like to look into how we can create these types of minds biologically.
Is it possible to create an alien mind in a human brain? Your 4th suggestion
seems to explore this space. I like that (I should upvote it as a result).
1NancyLebovitz12y
Point 1: I'm not sure what you mean by physical needs. If human babies aren't
cuddled, they die. Humans are the only known species to do this.
A General Theory of Love
[http://www.amazon.com/General-Theory-Love-Thomas-Lewis/dp/0375709223] describes
the connection between the limbic system and love-- I thought it was a good
book, but to judge by the Amazon reviews, it's more personally important to a
lot of intellectual readers than I would have expected.
1Blueberry12y
I've heard that called "failure to thrive" before. Yes, we'd need some kind of
machine to provide whatever tactile stimulation was required. Given the way many
primates groom each other and touch each other for social bonding, I'd be
surprised if it were just humans who needed touch.
1NancyLebovitz12y
A lot of animals need touch to grow up well. Only humans need touch to survive.
A General Theory of Love describes experiments with baby rodents to determine
which physical systems are affected by which aspects of contact with the
mother-- touch is crucial for one system, smell for another.
0Peter_de_Blanc12y
I just read about #2 on wikipedia. Wow. Science is so much weirder than science
fiction.
0Blueberry12y
I should warn you that Julian Jaynes's theory may be more like science fiction
than science. It's interesting speculation but it's still a very controversial
theory (which is why I'd love to test it). Daniel Dennett has written a couple
articles talking about how he's adapted parts of Jaynes's theory into his
theories of consciousness, and his books discuss some of the experimental
evidence which sheds some light on similar theories about consciousness.
3MBlume12y
I'd like to put about 50 anosognosiacs and one healthy person in a room on some
pretext, and see how long it takes the healthy person to notice everyone else is
delusional, and whether ve then starts to wonder if ve is delusional too.
3Kaj_Sotala12y
I'd be really curious to see what happened in a society where your social gender
was determined by something else than your biological sex. Birth order, for
instance. Odd male and even female, so that every family's first child is
considered a boy and their second a girl. Or vice versa. No matter what the
biology. (Presumably, there'd need to be some certain sign of the gender to tell
the two apart, like all social females wearing a dress and no social males doing
so.)
0RichardKennaway12y
The concept of the berdache [http://www.google.com/search?q=berdache] might be
relevant. The link is just to a Google search on the word, as the politics
surrounding it leave me uncertain what to believe about the subject.
0AdeleneDawner12y
Ursula LeGuin has written a short story
[http://www.ursulakleguin.com/Birthday_Excerpts.html#Mountain] with a premise
that's not quite the same, but still interesting. (The introduction is the
useful part, there - the story excerpt cuts off before getting anywhere terribly
interesting.)
0Kaj_Sotala12y
That is indeed an interesting variation of the premise. (It does feel a bit
contrived, but then again, so does my original.)
1MatthewB12y
I'd like to know how many people would eat human meat if it was not so taboo (No
nervous system so as to avoid nasty prion diseases). I know that since I
accidentally had a bite of finger when I was about 19 that I've wondered what a
real bite of a person would taste like (prepared properly... Maybe a
ginger/garlic sauce???).
Also, building on Kaj Sotala's proposal, what about sexual assignment by job or
profession (instead of biological sex). So, all Doctors or Health Care workers
would be female, all Soldiers would be male, all ditch diggers would be male,
yet all bakers would be female. All Mailmen would be male, yet all waiters would
be female.
Then, one could have multiple sex-assignments if one worked more than one job.
How about a neuter sex and a dual sex in there as well (so the neuter sex would
have no sex, and the hermaphrodite would be... well, both...)?
2orthonormal12y
After your prior revelations
[http://lesswrong.com/lw/1lb/are_wireheads_happy/1e77] and this, I'm waiting for
the third shoe to drop.
3MatthewB12y
Then shoes could be dropping for quite a while...
Edit: I better stop biographing for a while. I've led a life that has been
colorful to say the least (I wish that it had been more profitable - it was at
one point... But, well, you have a link to what happened to the money)
-4[anonymous]12y
Hey, no linking to people's revelations without their permission.
1RichardKennaway12y
Isn't that circular? Not eating human meat is the taboo.
2MatthewB12y
A better way to have said that would be
In other words: If there were no taboo against eating human meat, how many
people would eat it?
From what I remember of the bite of finger, it had a white meat taste. Sort of
like pork-turkey... I guess kinda like a hot dog (only it had no salt on/in it
beyond the sweat that was on the hand).
I do think that human meat would stack up against Pork and Turkey as a delicious
meat. Maybe if we ate condemned criminals. They would spend their time in prison
before their execution fattening up. (OK, I realize that I am getting really
out-there morbid now).
Cannibalism is a subject that fascinates me though. I have often wondered about
fantastic settings in which the only thing that existed to eat was other people.
Say, a planet in which there existed no other life forms at all. No plants,
microbes, animals, etc. The Planet would have water, or maybe springs that had a
liquid that contained nutrients that weren't in human meat... And, it would have
people. So, the people would be the only things to eat, and the only things out
of which tools could be made.
I do actually have a series of stories based upon this premise written. It was
an interesting thought experiment to think about the types of cultures that
could arise to deal with such a dilemma. And, if the inhabitants didn't know
that any other life existed (and had some cultural memory of the expression You
are what you eat), then they might consider it a horrid idea to eat anything but
people (should they eventually discover that other people from other planets eat
dumb animals and plants that cannot even think).
If You are what you eat, then eating a stupid immobile plant or a flatulent
stupid bovine would seem like the ultimate in self-condemnation.
2Nick_Tarleton12y
Larry Niven, "Bordered in Black". Sort of.
3MatthewB12y
Isn't that the Short Story where the first two Superluminal astronauts arrive at
a planet that contains a giant ocean and just one island, that is surrounded by
a dark black line.
The dark black area turns out to be algae and people's remains, and a crowd of
people wander the island's coast eating either the algae or each other.
I don't see a very large similarity (but then I am looking at it from much more
information about the place than you), as those people had no real developed
culture or solitary food source. I was surprised to read it when I did, because
it did come close to my idea (I first thought of this idea in 2nd grade when we
had a nutritional lecture: "You Are What You Eat"). I spent three weeks
wondering when the cafeteria was going to start serving people. I figured "I am
a person. If I am what I eat, then I must eat people to continue being one." The
teacher had to call my parents when I asked her directly about when we would
start eating people or, if "This was only something grown-ups did." My mother
did her normal "How could you do this to me?!", and my father did the "Look what
you've done to your mother!"
The Culture that I envisioned was large and highly populous, and the whole point
of life was to eventually be able to give your meat to your family (although
many children are eaten if they don't live up to standards). They build cities
out of mud and bone, and use glass for some tools (created by burning bone and
intestinal gases created in special people who are nothing but huge guts. These
people also produce other chemicals in different metabolic processes, but the
point is that a whole class of person exists that is nothing but a chemical
factory. These people usually have most of their cortex removed as well, so they
are basically vegetables. They use the neocortex of these people as artificial
memory devices).
There are other groups on this imaginary world as well, who are much less
"Civilized" and predatory. They all live under
2Multiheaded10y
Wow. Did elements of this appear in your mind during one or several bad trips?
0[anonymous]12y
It's not circular. One might pose the question of how many people in cultures
where eating pork is taboo would eat it if it weren't taboo. Conversely, there's
no taboo against eating smoked salmon that I know of, but I can't stand the
stuff.
0dclayh12y
Perhaps he means how it would stack up in deliciousness against beef, chicken,
fish, etc.
0RichardKennaway12y
In this [http://lesswrong.com/lw/q9/the_failures_of_eld_science/] sort of
environment, Jeffreyssai poses the question: "Find what is valuable in
religion."
Among the false starts that he will instantly slap down are responses that say
what is non-valuable or pernicious ("This was not the question. Do not waste our
time rehearsing irrelevancies known to us all"), evolutionary explanations of
religion ("What is valuable, not what merely happened"), religiously motivated
good works ("we do superior works without"), and any concept of useful lies
[http://lesswrong.com/lw/uz/protected_from_myself/].
I recommend making a longer list of recent comments available, the way Making Light does.
If you've been working with dual n-back, what have you gotten out of it? Which version are you using?
Would an equivalent to a .newsrc be possible? I would really like to be able to tell the site that I've read all the comments in a thread at a given moment, so that when I come back, I'll default to only seeing more recent comments.
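A minimal sketch of how such a .newsrc-style feature might work, purely as illustration (all names hypothetical; this is not LW's actual code): store a per-user, per-thread "mark all read" timestamp and filter comments against it.

    from datetime import datetime, timezone

    # Hypothetical .newsrc-style store: (user_id, thread_id) -> time of "mark all read".
    last_read = {}

    def mark_thread_read(user_id, thread_id):
        last_read[(user_id, thread_id)] = datetime.now(timezone.utc)

    def unread_comments(user_id, thread_id, comments):
        """comments: iterable of (posted_at: datetime, text: str) pairs."""
        cutoff = last_read.get((user_id, thread_id))
        if cutoff is None:
            return list(comments)  # never marked read: everything is new
        return [c for c in comments if c[0] > cutoff]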
Years ago I was involved with both Loglan [http://www.loglan.org/] (the
original) and Lojban [http://www.lojban.org/] (the spin-off, started by a Loglan
enthusiast who thought the original creator was being too possessive of Loglan).
For me it was simply an entertaining hobby, along with other conlangs such as
Láadan and Klingon. But in the history of artificial languages, it is important
as the first to be based on the standard universal language of mathematics,
first-order predicate calculus.
0Lightwave12y
+1
It looks like this [http://nielsenhayden.com/makinglight/newbackthreads.html]. I
would even add a sorting functionality for the list of the last X comments by
topic.
0MatthewB12y
There is some guy on the forums of Ray Kurzweil's website who regularly goes off
on these huge tangents about Lojban and Pot and how AIs will all be the
multi-agent Lojban speaking, pot smoking embodiments of... something...
Thus, where-ever/whenever I see the word lojban, I tend to have a negative
reaction. I did manage to have a sane conversation with Steve Omohundro about
Lojban when he spoke at my school last year, so my reaction has tempered
somewhat. RichardKennaway seems to say more about it (usefully) than I have
said.
If quantum immortality is correct, and assuming life extension technologies and uploading are delayed for a long time, wouldn't each of us, in our main worldline, become more and more decrepit and injured as time goes on, until living would be terribly and constantly painful, with no hope of escape?
We frequently become unconscious (sleep) in our threads of experience. There is
no obvious reason we couldn't fall comatose after becoming sufficiently
battered.
3SoullessAutomaton12y
I present for your consideration a delightful quote, courtesy of a discussion on
another site [http://news.ycombinator.com/item?id=928054]:
I think the moral of the story is: stay healthy and able-bodied as much as
possible. If, at some point, you should find yourself surviving far beyond what
would be reasonably expected, it might be wise to attempt some strategic quantum
suicide reality editing while you still have the capacity to do so...
3Roko12y
How could it be "correct" or "incorrect"? QI doesn't make a falsifiable factual
claim, as far as I know...
5orthonormal12y
A superhuman intelligence that understood the nature of human consciousness and
subjective experience would presumably know whether QI was correct, incorrect,
or somehow a wrong question. Consciousness and experience all happen within
physics, they just currently confuse the hell out of us.
2Roko12y
I think it is becoming clear that it is a wrong question.
see Max Tegmark on MWI
[http://arxiv.org/PS_cache/quant-ph/pdf/9709/9709032v1.pdf]
1orthonormal12y
Neat paper!
-2pdf23ds12y
As I understand it, it makes a prediction about your future experience (and the
MWI measure of that experience)--not dying. Is that not falsifiable? I suppose
you could argue that it's a logical and inescapable consequence of MWI, and not
in itself falsifiable, but that doesn't seem like an important distinction.
I don't see how Tegmark's paper is relevant to this question.
1Roko12y
It is. If you believe MWI, you believe that Schrodinger's cat will experience
survival every time, even if you repeated the experiment 100 times, but that you
will observe the cat dead if you repeat the experiment enough times.
There is no falsifiable fact above and beyond MWI as far as I can see, apart
from the general air of confusion about subjective experience, which hasn't
coalesced into anything sufficiently definite to be falsified.
2Eliezer Yudkowsky12y
"The author recommends that anyone reading this story sign up with Alcor or the
Cryonics Institute to have their brain preserved after death for later revival
under controlled conditions."
(From a little story
[http://www.fanfiction.net/s/5389450/1/The_Finale_of_the_Ultimate_Meta_Mega_Crossover]
which assumes QTI.)
1rwallace12y
Even supposing this unpleasant scenario is true, it is not hopeless. There are
things we can do to improve matters. The timescale to develop life extension and
uploading is not an a priori constant; we can work to speed it up, and we should be
doing this anyway. And we can sign up for cryonics to obtain a better
alternative worldline.
0Nick_Tarleton12y
Not if, as is at least conceivable*, enough Friendly superintelligences model
the past and reconstruct people from it that eventually most of your measure
comes from them. (Or other, mostly less pleasant but seemingly much less likely
possibilities.)
* It actually seems a lot more than "at least conceivable" to me, but I trust
this seeming very little, since the idea is so comforting.
0Eliezer Yudkowsky12y
That requires a double assumption about not just quantum immortality, but about
"subjective measure / what happens next" continuing into all copies of a
computation, rather than just the local causal future of a computation.
0Nick_Tarleton12y
Right, MWI has a different causal structure than other multiverses and quantum
immortality is a distinct case of, call it 'modal-realist immortality'. I do
tend to forget that.
I spent December 23rd, 24th and 25th in the hospital. My uncle died of brain cancer (Glioblastoma multiforme). He was an atheist, so he knew that this was final, but he wasn't signed up for cryonics.
We learned about the tumor 2 months ago, and it all happened so fast... and it's so final.
This is a reminder to those of you who are thinking about signing up for cryonics; don't wait until it's too late.
Because trivial inconveniences can be a strong deterrent, maybe someone should make a top-level post on the practicalities of cryonics; an idiot's guide to immortality.
I want to sign up. I don't want to sign up alone. I can't convince any of my
family to sign up with me. Help.
9Eliezer Yudkowsky12y
Most battles like this end in losses; I haven't been able to convince any of my
parents or grandparents to sign up. You are not alone, but in all probability,
the ones who stand with you won't include your biological family... that's all I
can say.
0MatthewB12y
I have found that to be very true.
I think that I would not wish to have most of my family around if their lives
were interrupted for 20, 50, or 100 years. Most of them have a hard enough time
with living in a world that is moving at the pace of our current world, much
less the drastic change that they would experience if they were to suddenly wake
to a world to which they had no frame of reference.
I would not wish to be lonely in such a world, but, I already have friends with
Alcor plans.
5Technologos12y
Now that would be a great extension of the LW community--a specific forum for
people who want to make rationalist life decisions like that, to develop a more
personal interaction and decrease subjective social costs.
5aausch12y
It could be a more general advice-giving forum. Come and describe your problem,
and we'll present solutions.
That might also be a useful way to track the performance of rationalist methods
in the real world.
1Technologos12y
I like it. Sure would beat the hell out of a lot of the advice I've heard, and
if nothing else it would be good training in changing our minds and in
aggregating evidence appropriately.
4Dagon12y
Can I help by pointing out flaws in your implied argument ("I believe cryonics
is worthwhile, but without my family, I'd rather die, and they don't want to")?
Do you intend to kill yourself when some or all of your current family dies? If
living beyond them is positive value, then cryonics seems a good bet even if no
current family member has signed up.
Also, your arguments to them that they should sign up gets a LOT stronger with
your family if you're actually signed up and can help with the paperwork,
insurance, and other practical barriers. In fact, some of your family might be
willing to sign up if you set everything up for them, including paying, and they
just have to sign.
In fact, cryonics as gift seems like a win all around. It's a wonderful signal:
I love you so much I'll spend on your immortality. It gets more people signed
up. It sidesteps most of the rationalization for non-action (it's too much
paperwork, I don't know enough about what group to sign up, etc.).
7Alicorn12y
No. I do expect to create a new family of my own between now and then, though.
It is the prospect of spending any substantial amount of time with no beloved
company that I dread, and I can easily imagine being so lonely that I'd want to
kill myself. (Viva la extroversion.) I would consider signing up with a
fiancé(e) or spouse to be an adequate substitute (or even signing up one or more
of my offspring) but currently have no such person(s).
Actually, shortly after posting the grandparent, I decided that limiting myself
to family members was dumb and asked a couple of friends about it. My best
friend has to talk to her fiancé first and doesn't know when she'll get around
to that, but was generally receptive. Another friend seems very on-board with
the idea. I might consider buying my sister a plan if I can get her to explain
why she doesn't like the idea (it might come down to finances; she's being weird
and mumbly about it), although I'm not sure what the legal issues surrounding
her minority are.
Edit: Got a slightly more coherent response from my sister when I asked her if
she'd cooperate with a cryonics plan if I bought her one. Freezing her when she
dies "sounds really, really stupid", and she's not interested in talking about
her "imminent death" and asks me to "please stop pestering her about it". I
linked her to this
[http://lesswrong.com/lw/2d/talking_snakes_a_cautionary_tale/], and think that's
probably all I can safely do for a while. =/
3AndrewWilcox12y
You have best friends now, how did you meet them? In the worst case scenario
where people you currently know don't make it, do you doubt that you'll be able
to quickly make new friends?
Suppose that there are hundreds of people who would want to be your best friend,
and that you would genuinely be good friends with. Your problem is that you
don't know who they are, or how to find them. Not to be too much of a technology
optimist :-), but imagine if the super-Facebook-search engine of the future
would be able to accurately put you in touch with those hundreds.
0Alicorn12y
I met a significant percentage of my friends on a message board associated with
a webcomic called The Order of the Stick. Others I met in school. One I met when
she sent me a fan e-mail regarding my first webcomic. A majority of my friends,
I met through people I already knew through one method or another.
When I pop out into the bright and glorious future, might they have a
super-Facebook that would ferry me the cream of the friendship crop and have me
re-ensconced in a comfy social net in a week tops? Maybe. But that's adding one
more if to the long string of ifs that cryonics already is, and that's the if I
can't get over. What I do know is that my standard methods of making friends
can't be relied upon to work. I do not expect to wake up to fans of my webcomic
eagerly awaiting my defrosting. I do not expect to wake up to find the Order of
the Stick forum bustling with activity. I don't expect to wake up to find myself
enrolled in school. I certainly don't expect that, if nobody I'm friends with
gets frozen, I'll be introduced to any of their friends.
2orthonormal12y
Well, at least you'll have the Less Wrong reunion.
0Zack_M_Davis12y
In the vanishingly small fraction of worlds where the Earth is not destroyed
[http://www.nickbostrom.com/existential/risks.html].
3orthonormal12y
I follow Nick Bostrom on anthropic reasoning
[http://lesswrong.com/lw/19d/the_anthropic_trilemma/] as well as existential
risk, so I expect to see you there.
0Alicorn12y
In certain moods, that might be enough to push me to sign up, but the moods
rarely last long enough that I could rely on the impetus from one to get through
all the necessary paperwork.
2AndrewWilcox12y
Hmm, what about an outside view? That is, thinking about what it would be like
for someone else. I'm a little too sleepy now to recall the exact reference, but
there was something said here about how people make better estimates, e.g. about
how long a project will take, if they think about how long similar projects have
taken rather than about how long they think this project will take. And, because you know
about the present, let's make our thought experiment happen in the present.
So, what if a woman was frozen a hundred years ago, and woke up today? Would she
be able to make any friends? Would anyone care about anything she cared about?
Would anyone be interested in her?
Another thought that occurs to me is that making friends is a skill that can be
learned like any other skill. Perhaps you haven't needed to be very skilled at
making friends because you've grown up in this environment where friends have
come to you fairly easily. So if you practice and become really good at that
skill and have demonstrated to yourself that you can make friends easily in any
situation, then you'd alleviate the worry that is causing you to feel conflicted
about cryonics?
3Alicorn12y
I imagine such a woman would be viewed as a worthwhile curiosity, but probably
not a good prospective friend, by history geeks and journalists. I think she
would find her sensibilities and skills poorly suited to letting her move
comfortably about in mainstream society, which would inhibit her ability to pick
up friends in other contexts. If there were other defrostees, she might connect
with them in some sort of support group setting (now I'm imagining an episode of
Futurama the title of which eludes me), which might provide the basis for
collaboration and maybe, eventually, friendship, but it seems to me that that
would take a while to develop if in fact it worked.
3kpreid12y
(Meta) I wish byrnema had not deleted their comment which was in this position.
2[anonymous]12y
I would expect that it would be very natural to treat defrostees like foreign
exchange students or refugees. They would be taken care of by a plain old
mothering type like me, who is empathetic and understands what it's like to
wake up in a foreign place. I would show this 18th century woman places that she
would relate to (the grocery store, the library, window shopping downtown) and
introduce her to people, a little bit at a time. It would be a good 6-9 months
before she felt quite acclimated, but by then she'd be finding a set of friends
and her own interests. When she felt overwhelmed, I would tell her to take a
bath and spend an evening reading a book.
I've stayed in foster homes in several countries for a variety of reasons, and
this is quite usual.
1AndrewWilcox12y
Hmm, I wonder if you could leave instructions, kind of like a living will except
in reverse, so to speak... e.g., "only unfreeze me if you know I'll be able to
make good friends and will be happy". Perhaps with a bit more detail explaining
what "good friends" and "being happy" means to you :-)
If I were in charge of defrosting people, I'd certainly respect their wishes to
the best of my ability.
And, if your life does turn out to be miserable, you can, um, always commit
suicide then... you don't have to commit passive suicide now just in case... :-)
But it certainly is a huge leap in the dark, isn't it? With most decisions, we
have some idea of the possible outcomes and a sense of likelihoods...
0Alicorn12y
Why would they be in a position to know that I'd be able to make good friends
and be happy?
1SoullessAutomaton12y
Well, if everyone else they've revived so far has ended up a miserable outcast
in an alien society, or some other consistent outcome, they might be able to
take a guess at it.
0Alicorn12y
Bit of a gap between "not a miserable outcast in an alien society" and "has good
close friends".
0AndrewWilcox12y
I can think of three possibilities...
If I'm in charge of unfreezing people, and I'm intelligent enough, it becomes a
simple statistical analysis. I look at the totality of historical information
available about the past life of frozen people: forum posts, blog postings,
emails, youtube videos... and find out what correlates with the happiness or
unhappiness of people who have been unfrozen. Then the decision depends what
confidence level you're looking for: do you want to be unfrozen if there's a 80%
chance that you'll be happy? 90%? 95%? 99%? 99.9%?
Two, I might not be intelligent enough, or there might not be enough data
available, or we might not be finding useful statistical correlates. Then if
your instructions are to not unfreeze you if we don't know, we don't unfreeze
you.
Three, I might be incompetent or mistaken so that I unfreeze you even if there
isn't any good evidence that you're going to be happy with your new situation.
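A toy sketch of the decision rule described in the first possibility above (purely illustrative; every name here is hypothetical): estimate P(happy) from comparable already-revived people, and unfreeze only if it clears the person's stated confidence level.

    def estimate_p_happy(revived_records, is_comparable):
        """revived_records: list of (features, was_happy) pairs for people
        already revived; is_comparable: predicate selecting records similar
        to the frozen person."""
        outcomes = [happy for feats, happy in revived_records if is_comparable(feats)]
        if not outcomes:
            return None  # possibility two: not enough data to judge
        return sum(outcomes) / len(outcomes)

    def decide_unfreeze(p_happy, required_confidence):
        """Per the person's instructions: if we can't estimate, don't unfreeze."""
        return p_happy is not None and p_happy >= required_confidence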
3Peter_de_Blanc12y
Even if none of your relatives sign up for cryonics, I would expect some of them
to still be alive when you are revived.
5Vladimir_Nesov12y
Since there is already only a slim chance of actually getting to the revival
part (even though high payoff keeps the project interesting, like with
insurance), after mixing in the requirement of reaching the necessary tech in
(say) 70 years for someone alive today to still be around, and also managing to
die before that, not a lot is left, so I wouldn't call it something to be
"expected". "Conditional on you getting revived, there is a good chance some of
your non-frozen relatives are still alive" is more like it (and maybe that's
what you meant).
2Alicorn12y
Do you mean that a relative I have now, or one who will be born later, will
probably be around at that time? Because the former would require that I die
soon (while my relatives don't) or that there's an awfully rapid turnaround
between my being frozen and my being defrosted.
4JamesAndrix12y
Well the whole point of signing up now is that you might die soon.
So sign up now. If you get to be old And still have no young family And the
singularity doesn't seem close, then cancel.
3scotherns12y
Do it anyway. Lead by example. Over time, you might find they become more used
to the idea, particularly if they have someone who can help them with the
paperwork and organisational side of things. If you can help them financially,
so much the better.
If you are successfully revived, you will have plenty of time to make new
friends, and start a new family. I'm not meaning to sound callous, but it's not
unheard of for people to lose their families and eventually recover. I'm doing
everything I can to persuade my family to sign up, but it's up to them to make
the final decision.
I'd give my life to save my family, but I wouldn't kill myself if I found myself
alone.
1Alicorn12y
I'd be more convinced of my ability to lead by example if I'd ever convinced
anyone to become a vegetarian.
0scotherns12y
Did you become vegetarian, despite the fact that you couldn't persuade anyone
else? Did your decision at least make some people consider the option
seriously?
0Alicorn12y
Yes, because unlike with being alive, being a vegetarian is something I don't
need company to do happily. I probably wouldn't have become a vegetarian if it
involved being shipped to the Isle of the Vegetarians, population: a lot of
strangers, unless I could convince people to join me. I don't think my
vegetarianism has made anyone give really serious thought to the diet; the
person who has reacted with the most thoughtfulness upon my disclosure has a
vegan mother and I'm inclined to credit her for all his respect for not eating
animals.
3scotherns12y
Well, the future will certainly be full of mostly strangers. If you can't
convince any of your current friends/family to sign up, you might be better off
making friends with those that have already signed up. There are bound to be
some you would get along with (I've read OOTS since it started :-) )
If I ever have any success in convincing anyone else to sign up for cryonics,
I'll let you know how I did it (in the unlikely event that this will help!).
3AngryParsley12y
It's much easier to overcome your own aversion to signing up alone than to
convince your family to sign up with you. Even assuming you can convince them
that living longer is a good thing, there are a ton of prerequisites needed
before one can accurately evaluate the viability of cryonics.
2rwallace12y
I think it's great that you've taken the first steps, and would encourage you to
go ahead and sign up.
In my experience, arguing with people who've decided they definitely don't want
to do something, especially if their reasons are irrational, is never
productive. As Eliezer says, it may simply be that those who stand with you will
be your friends and the family you create, not the family you came from. But I
would guess the best chance of your sister signing up would be obtained by you
going ahead right now, but not pushing the matter, so that in a few years the
fact of your being signed up will have become more of an established state of
affairs.
It's a sobering demonstration of just how much the human mind relies on social
proof for anything that can't be settled by immediate personal experience.
(Conjecture: any intelligence must at least initially work this way; a universe
in which it were not necessary would be too simple to evolve intelligence in
the first place. But I digress.)
Is there anything that can be done to bend social instinct more in the right
direction here? For example, I know there have been face-to-face gatherings for
those who live within reach of them; would it help if several people at such a
gathering showed up wearing 'I'm signed up for cryonics' badges?
1byrnema12y
What do you perceive as the main barrier to their signing up?
5Alicorn12y
My dad was the only one with any non-mumbling answer to the suggestion. I told
him I wanted him to live forever and he told me I was selfish. He said some
things about overpopulation and global warming and universalizability and no
proven results from the procedure.
3Roko12y
Well, if it is any consolation, I have had zero success and a bunch of ridicule
from all friends and family I mentioned the idea to.
I've had the "selfish, overpopulation and global warming" objection from my
mother, and I then reminded her that (a) she had a fair amount of personal
wealth and wasn't remotely interested in spending any of it on third world
charities, charities who try to reduce population or efficient ways to combat
global warming and (b) she wasn't in favor of killing people to reduce
population. Of course, this had no effect.
2DanArmak12y
Do you think it's worthwhile to argue with him rationally on the details, or
that if you make him understand his reasons aren't valid he'll just mumble "no"
like the rest of your family?
2Alicorn12y
Arguing with my dad is profoundly unpleasant, and he is extremely stubborn. I
may send him links to websites, especially if I need his cooperation to involve
my sister because she's 16, but I don't anticipate a good result from continuing
to engage him directly (at least if I'm the one doing it: our relationship
history is such that the odds of me convincing him of anything he's presently
strongly against approach nil, and prolonged attempts to do so end in tears.)
0MatthewB12y
I wonder if any insurance companies have policies that cover cryonics? I have
emailed a friend who is pretty tied in with the Alcor people in Austin Texas (as
well as other cryonics companies, and in other locales) whom I asked for some
info about what to do about paying for the service.
It seems that some form of indentured servitude should be available if they
really have a belief that reanimation of some sort is possible.
1CronoDAS12y
You can pay for cryonics with life insurance.
1MatthewB12y
Wooo... Hooo... I just talked to a friend in Texas, too, who gave me info on an
Alcor plan (he runs Alcor Meetups in Austin TX), and it seems that they have
plans that one can buy as well (on installments).
I need to get this set up as soon as I can. I would rather not worry about being
hit by a truck and not being prepared.
Alexandre Borovik summarizes the Bayesian error in the null-hypothesis-rejection method, citing the classic J. Cohen (1994), "The Earth Is Round (p < .05)", American Psychologist 49(12):997-1003.
If a person is an American, then he is probably not a member of Congress. (TRUE, RIGHT?)
This person is a member of Congress.
Therefore, he is probably not an American.
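To make the base-rate failure concrete, a toy calculation (figures are rough assumptions: 7 billion people, 300 million Americans, 535 members of Congress, all of them American):

    # Toy world with assumed round numbers.
    world, americans, congress = 7_000_000_000, 300_000_000, 535

    p_american = americans / world
    p_congress = congress / world
    p_congress_given_american = congress / americans

    # Premise 1 holds: an American is very probably not a member of Congress.
    print(1 - p_congress_given_american)   # ~0.9999982

    # But the inverted "conclusion" is wildly false. By Bayes' theorem:
    p_american_given_congress = p_congress_given_american * p_american / p_congress
    print(p_american_given_congress)       # 1.0 -- every member of Congress is American

The missing ingredient is exactly the prior ratio of the two propositions that the paper points to.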
I need to read those links... I'll probably have to edit this as soon as I do...
Obviously, I did need to edit it. This is just a strange form of Modus Tollens
except with a probabilistic thingy thrown in (pardon the technical term).
Obviously, I need to go back and re-read the article again, because I am not
seeing what they were talking about.
-1SilasBarta12y
Valid reasoning. The problem lies in the failure to include all relevant
knowledge (A member of Congress is very likely an American), not in the form of
reasoning. The reason it looks so wrong is that we automatically add the extra
premise on seeing discussion of a "member of Congress". Look at how the
reasoning works in a context where there isn't such a premise:
Somehow I get the feeling that the point of your comment just whooshed over my
head...
ETA: Okay, it's not valid reasoning. My point about the assumed premise of the
reader remains though.
ETA: Yes it is valid reasoning. See my reply to Cyan
[http://lesswrong.com/lw/1lf/open_thread_january_2010/1em5].
0Peter_de_Blanc12y
It's not valid Bayesian reasoning, because we haven't said anything about
P(member of congress | not american).
-1AdeleneDawner12y
Both of these have false statements in the third position. The problematic word
is 'therefore'. Most Russians aren't Americans, but that's not because most
Americans aren't Russian; it's because most people don't have dual citizenship
(among other possible facts that you could infer that from).
-1Vladimir_Nesov12y
You are being obnoxious. Why would you argue with a short example intended to
illustrate the topic discussed in the linked paper at length?
1SilasBarta12y
It wasn't clear to me how that misses the point of the paper, and in
acknowledgment of that possibility I added the caveat at the end. Hardly
"obnoxious".
Nevertheless, your original comment would be a lot more helpful if you actually
summarized the point of the paper well enough that I could tell that my comment
is irrelevant.
Could you edit your original post to do so? (Please don't tell me it's
impossible. If you do, I'll have to read the paper myself, post a summary, save
everyone a lot of time, and prove you wrong.)
3Cyan12y
The point of the paper is that the reasoning behind the p-value approach to null
hypothesis rejection ignores a critical factor, to wit, the ratio of the prior
probability of the hypothesis to that of the data. Your s/member of
Congress/Russian example shows that sometimes that factor is close enough to unity
that it can be ignored, but that's not the fallacy. The fallacy is failing to
account for it at all.
1SilasBarta12y
On second thought, my original reasoning was correct, and I should have spelled
it out. I'll do so here.
It's true that the ratio influences the result, but just the same, you can use
your probability distribution of what predicates will appear in the "member of
Congress" slot, over all possible propositions. It's hard to derive, but you can
come up with a number.
See, for example, Bertrand's paradox, the question of how probable a
randomly-chosen chord of a circle is of being greater than the length of side of
an inscribed equilateral triangle. Some say the answer depends on how you
randomly choose the chord. But as E. T. Jaynes argued
[http://en.wikipedia.org/wiki/Bertrand%27s_paradox_(probability)#Unique_solution_using_the_.22maximum_ignorance.22_principle], the problem is
well-posed as is. You just strip away any false assumptions you have of how the
chord is chosen, and use the max-entropy probability distribution subject to
whatever constraints are left.
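For concreteness, a quick Monte Carlo sketch (my illustration, not from Jaynes's paper) of the three classical chord-sampling conventions; they give roughly 1/3, 1/2, and 1/4, and Jaynes's invariance argument singles out the random-radius convention's answer of 1/2:

    import math
    import random

    def chord_longer_than_side(method, trials=100_000):
        """Estimate P(random chord of a unit circle is longer than the side
        of the inscribed equilateral triangle, i.e. sqrt(3))."""
        side = math.sqrt(3)
        count = 0
        for _ in range(trials):
            if method == "endpoints":   # two uniform points on the circumference
                delta = abs(random.uniform(0, 2 * math.pi) - random.uniform(0, 2 * math.pi))
                length = 2 * math.sin(delta / 2)
            elif method == "radius":    # uniform distance from center along a radius
                d = random.uniform(0, 1)
                length = 2 * math.sqrt(1 - d * d)
            else:                       # chord midpoint uniform in the disk
                d = math.sqrt(random.uniform(0, 1))  # radial density 2r
                length = 2 * math.sqrt(1 - d * d)
            count += length > side
        return count / trials

    for m in ("endpoints", "radius", "midpoint"):
        print(m, chord_longer_than_side(m))   # ~1/3, ~1/2, ~1/4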
Likewise, you can assume you're being given a random syllogism of this form,
weighted over the probabilities of X and Y appearing in those slots
If a person is an X, then he is probably not a Y.
This person is a Y.
Therefore, he is probably not an X.
0Cyan12y
It wasn't: when a certain form of argument is asserted to be valid, it suffices
to demonstrate a single counterexample to falsify the assertion. It's kind of
funny -- you wrote
But the failure to include all relevant knowledge is exactly why the
reasoning isn't valid.
1SilasBarta12y
Not for probabilistic claims.
No. The reasoning can be valid even though, given additional information, the
conclusion would be changed.
Example:
Bob is accused of murder.
Then, Bob's fingerprints are the only ones found on the murder weapon.
Bob has an ironclad alibi: 30 witnesses and video footage of where he was.
O(guilty|accused of murder) = 1:3
P(prints on weapon|guilty) / P(prints on weapon|~guilty) = 1000
O(guilty|accused of murder, prints on weapon) = 1000*(1:3) = 1000:3
P(guilty| ....) > 99%.
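A minimal numeric check of the odds arithmetic above (the numbers are the example's stipulations, not real statistics):

    prior_odds = 1 / 3         # O(guilty | accused) = 1:3
    likelihood_ratio = 1000    # P(prints | guilty) / P(prints | ~guilty)

    posterior_odds = likelihood_ratio * prior_odds           # 1000:3
    posterior_prob = posterior_odds / (1 + posterior_odds)   # odds -> probability
    print(posterior_prob)      # ~0.997, i.e. > 99%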
If Bob is accused of murder, he has a moderate chance of being guilty.
Bob's prints are much more likely to later be the only ones found on the murder
weapon if he were guilty than if he were not.
Bob's prints are the only ones on the murder weapon.
Therefore, there is a very high probability Bob is guilty.
Bob probably isn't guilty.
Therefore the Bayes Theorem is invalid reasoning. (???)
See the problem? The form of the reasoning presented originally is valid. That
is what I was defending. But obviously, you can show the conclusion is invalid
if you include additional information. In the general case, the reasoning is
valid if that is all you know. But you can only invert the conclusion by
assuming a higher level of knowledge than what is presented (in the quoted model
above) -- specifically, that you have an additional low-entropy point in your
probability distribution for "Y implies high probability of X". But again, this
assumes a probability distribution of lower entropy (higher informativeness)
than you can justifiably claim to have.
So you can actually form a valid probabilistic inference without looking up the
specific p(H)/p(E) ratio applying to this specific situation -- just use your
max entropy distribution for those values, which favors the reasoning I was
defending.
I'm actually writing up an article for LW about the "Fallacy Fallacy" that
touches on these issues -- I think it would be worthwhile to finish it and post
it. (So no, I'm not just
5Cyan12y
Not really. You keep demonstrating my point as if it supports your argument, so
I know we've got a major communication problem.
And that's what I'm attacking. We are using the same definition of "valid",
right? An argument is valid if and only if the conclusion follows from the
premises. You're missing the "only if" part.
Yes, even for probabilistic claims. See Jaynes's policeman's syllogism in
Chapter 1 of PT:LOS for an example of a valid probabilistic argument. You can
make a bunch of similarly formed probabilistic syllogisms and check them against
Bayes' Theorem to see if they're valid. The syllogism you're attempting to
defend is
P(D|H) has a low value.
D is true.
Therefore, P(H|D) has a low value.
But this doesn't follow from Bayes' Theorem at all, and the Congress example is
an explicit counterexample.
Once you know the specific H and E involved, you have to use that knowledge;
whatever probability distribution you want to postulate over p(H)/p(E) is
irrelevant. But even ignoring this, the idea is going to need more development
before you put it into a post: Jaynes's argument in the Bertrand problem
postulates specific invariances and you've failed to do likewise; and as he
discusses, the fact that his invariances are mutually compatible and specify a
single distribution instead of a family of distributions is a happy circumstance
that may or may not hold in other problems. The same sort of thing happens in
maxent derivations (in continuous spaces, anyway): the constraints under which
entropy is being maximized may be overspecified (mutually inconsistent) or
underspecified (not sufficient to generate a normalizable distribution).
0SilasBarta12y
Okay, let me first try to clarify where I believe the disagreement is. If you
choose to respond, please let me know which claims of mine you disagree with,
and where I mischaracterize your claims.
I claim that the following syllogism S1 is valid in that it reaches a conclusion
that is, on average, correct.
So, I claim, if you know nothing about what H and D are, except that the first
two lines hold, your best bet (expected circumstance over all possibilities) is
that the third line holds as well. You claim that the syllogism is invalid
because this syllogism, S2, is invalid:
I claim your argument is mistaken, because the invalidity of S2 does not imply
the invalidity of S1; it's using different premises.
(You further claim that the existence of a case where P(H|D) has a high value
despite lines 1 and 2 of S1 holding, is proof that S1 is invalid. I claim that
its probabilistic nature means that it doesn't have to get the right answer
(that further knowledge reveals) every time, giving a long example about
murder.)
I claim that the article cited by Vladimir was claiming that S1 is an invalid
syllogism. I claim that it is in error to do so, and that it was actually
showing the errors that result from failing to incorporate all knowledge. So, it
is not the use of the template S1 that is the problem, but failing to recognize
that your template is actually S2, since your knowledge about members of
congress adds the line 3 in S2.
I further claim that S1 is justified by maximum entropy inference, and that the
parallels to Bertrand's paradox were clear. I take back the latter part, and
will now attempt to show why similar reasoning and invariances apply here.
Given line 1, you know that, whatever the probability distribution of D, it
intersects with, at least, a small fraction of H. So draw the Venn/Euler
diagram: the D circle (well, a general bounded curve, but we'll call it a
circle) could be encompassing only that small portion of H (in the member of
Congress case)
2Cyan12y
We're using different definitions of validity. Yours is "[a] syllogism... is
valid [if] it reaches a conclusion that is, on average, correct." Mine is this
one [http://en.wikipedia.org/wiki/Validity#Validity_of_arguments].
ETA: Thank you for taking the time to explain your position thoroughly; I've upvoted
the parent. I'm unconvinced by your maximum entropy argument because, at the
level of lack of information you're talking about, H and D could be in
continuous spaces, and in such spaces, maximum entropy only works relative to
some pure non-informative measure, which has to be derived from arguments other
than maximum entropy.
0SilasBarta12y
Okay, then how do you reply to my point about Bayesian reasoning in general? All
Bayesian inference does is tell you what probability distribution you are
justified in having, given your current level of knowledge.
With additional knowledge, that probability distribution changes. That doesn't
make your original probability assignments wrong. It doesn't invalidate the
probabilistic syllogisms you made using Bayes's Theorem. So it seems like your
definition of validity in probabilistic syllogisms matches mine.
Again, refer back to the murder example. The fact that the alibi reverses the
probability of guilt resulting from the fingerprint evidence, does not mean it
was invalid to assign a high probability of guilt when you only had the
fingerprint evidence.
"But the alibi is additional evidence!" Yes, but so is knowledge of what H and D
stand for.
A continuous space, yes, but on a finite interval. That lets you define the
max-entropy (meta)probability distribution. If q equals P(D|H) (which is low),
then your (meta)distribution on P(D) is a flat line over the interval [q,1].
Most of that distribution is such that P(H|D) is also low.
--------------------------------------------------------------------------------
I appreciate the civility with which you've approached this disagreement.
1Cyan12y
I only call syllogisms about probabilities valid if they follow from Bayes'
Theorem. You permit yourself a meta-probability distribution over the
probabilities and call a syllogism valid if it is Cyan::valid on average w.r.t.
your meta-distribution. I'm not saying that SilasBarta::valid isn't a
possibly interesting thing to think about, but it doesn't seem to match
Cyan::valid to me.
No, a finite interval is not sufficient. You really need to specify the
invariant measure
[http://en.wikipedia.org/wiki/Limiting_density_of_discrete_points] to use maxent
in the continuous case
[http://en.wikipedia.org/wiki/Principle_of_maximum_entropy#Continuous_case]. For
instance, suppose we had a straw-throwing machine, a spinner-controlling
machine, and a dart-throwing machine, each to be used to draw a chord on a
circle (extending the physical experiments described here
[http://en.wikipedia.org/wiki/Bertrand's_paradox_(probability)#Physical_experiments]). We have testable information
[http://en.wikipedia.org/wiki/Principle_of_maximum_entropy#Testable_information]
about each of their accuracies and precisions. According to my understanding of
Jaynes, when maximizing entropy we need to use different invariant measures for
the three different machines, even though the (finite) outcome space is the same
in all cases.
-1SilasBarta12y
But you're permitting yourself the same thing! Whenever you apply the Bayes
Theorem, you're asserting a probability distribution to hold, even though that
might not be the true generating distribution of the phenomenon. You would
reject the construction of such a scenario (where your inference is way off) as
a "counterexample" or somehow showing the invalidity of updates performed under
the Bayes theorem. And why? Because that distribution is the best probability
estimate, on average, for scenarios in which you occupy that epistemic state.
All I'm saying is that the same situation holds with respect to undefined
tokens. Given that you don't know what D and H are, and given the two premises,
your best estimate of P(H|D) is low. Can you find cases where it isn't low?
Sure, but not on average. Can you find cases where it necessarily isn't low?
Sure, but they involve moving to a different epistemic state.
Wrong
[http://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution#Uniform_and_piecewise_uniform_distributions]
:
0Cyan12y
Checks for a syllogism's Cyan::validity do not apply Bayes' Theorem per se. No
prior and likelihood need be specified, and no posterior is calculated. The
question is "can we start with Bayes' Theorem as an equation, take whatever the
premises assert about the variables in that equation (inequalities or whatever),
and derive the conclusion?" Checks for SilasBarta::validity also don't apply
Bayes' Theorem as far as I can tell -- they just involve an extra element (a
probability distribution for the variables of the Bayes' Theorem equation) and
an extra operation (expectation w.r.t. the previously mentioned
distribution).
This is definitely a point of miscommunication, because I certainly never
intended to impeach Bayes' Theorem.
Maybe. I've still yet to be convinced that it's possible to derive a
meta-probability distribution for the unconditional probabilities.
The text you link uses Shannon's definition of the entropy of a continuous
distribution, not Jaynes's.
1SilasBarta12y
Argh. I wasn't saying that you were using the Bayes Theorem in your claimed
definition of Cyan::validity. I was saying that when you are deriving
probabilities through Bayesian inference, you are implicitly applying a standard
of validity for probabilistic syllogisms -- a standard that matches mine, and
yields the conclusion I claimed about the syllogism in question.
Yes, definitely a miscommunication: my point there was that the existence of
cases where Bayesian inference gives you a probability differing from the true
distribution are not evidence for the Bayes Theorem being invalid. I don't know
how you read it before, but that was the point, and I hope it makes more sense
now.
Why? Because you don't see how defining the variables is a kind of information
you're not allowed to have here? Because you think you can update (have a
non-unity P(D)/P(H) ratio) in the absence of any information about P(D) and
P(H)? Because you don't see how the "member of Congress" case is an example of a
low entropy, concentrated-probability-mass case? Because you reject
meta-probabilities to begin with (in which case it's not clear what makes
probabilities found through Bayesian inference more "right" or "preferable" to
other probabilities, even as they can be wrong)?
So? The difference only matters if you want to know the absolute (i.e.
scale-invariant) magnitude of the entropy. If you're only concerned about which
distribution has the maximum entropy, you don't need to pick an invariant
measure (at least not for a case as simple as this one), and Shannon and Jaynes
give the same result.
1Cyan12y
I do not agree that that is what I'm doing. I don't know why my willingness to
use Bayes' Theorem commits me to SilasBarta::validity.
I think I understand what you meant now. I deny that I am permitting myself the
same thing as you. I try to make my problems well-structured enough that I have
grounds for using a given probability distribution. I remain unconvinced that
probabilistic syllogisms not attached to any particular instance have enough
structure to justify a probability distribution for their elements -- too much
is left unspecified. Jaynes makes a related point on page 10 of "The Well-Posed
Problem [http://bayes.wustl.edu/etj/articles/well.pdf]" at the start of section
8.
Because the only argument you've given for it is a maxent one, and it's not
sufficient to the task, as I explain further below.
This is not correct. The problem is that Shannon's definition is not invariant
to a change of variable. Suppose I have a square whose area is between 1 cm^2
and 4 cm^2. The Shannon-maxent distribution for the square's area is uniform
between 1 cm^2 and 4 cm^s. But such a square has sides whose lengths are between
1 cm and 2 cm. For the "side length" variable, the Shannon-maxent distribution
is uniform between 1 cm and 2 cm. Of course, the two Shannon-maxent
distributions are mutually inconsistent. This problem doesn't arise when using
the Jaynes definition.
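A small simulation of the inconsistency described above (a sketch; the exact values are easy to check analytically): "uniform in area" and "uniform in side length" are genuinely different distributions over the same square.

    import random

    N = 100_000
    # Shannon-maxent on the "area" scale: area uniform in [1, 4] cm^2.
    sides_from_area = [random.uniform(1, 4) ** 0.5 for _ in range(N)]
    # Shannon-maxent on the "side" scale: side uniform in [1, 2] cm.
    sides_direct = [random.uniform(1, 2) for _ in range(N)]

    # The two distributions disagree, e.g. on P(side < 1.5):
    p1 = sum(s < 1.5 for s in sides_from_area) / N   # P(area < 2.25) = 1.25/3 ~ 0.417
    p2 = sum(s < 1.5 for s in sides_direct) / N      # exactly 0.5
    print(p1, p2)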
In your problem, suppose that, for whatever reason, I prefer the floodle scale
to the probability scale, where floodle = prob + sin(2*pi*prob)/(2.1*pi). Why do
I not get to apply a Shannon-maxent derivation on the floodle scale?
-1SilasBarta12y
Because you're apparently giving the same status ("SilasBarta::validity") to
Bayesian inferences that I'm giving to the disputed syllogism S1. In what sense
is it true that Bob is "probably" the murderer, given that you only know he's
been accused, and that his prints were then found on the murder weapon? Okay: in
that sense I say that the conclusion of S1 is valid.
Where do you think I'm saying something different?
What about the Bayes Theorem itself, which does exactly that (specify a
probability distribution on variables not attached to any particular instance)?
Because a) your information was given with the probability metric, not the
floodle metric, and b) a change in variable can never be informative, while this
one allows you to give yourself arbitrary information that you can't have, by
concentrating your probability on an arbitrary hypothesis.
The link I gave specified that the uniform distribution maximizes entropy even
for the Jaynes definition.
1Cyan12y
For me, the necessity of using Bayesian inference follows from Cox's Theorem, an
argument which invokes no meta-probability distribution. Even if Bayesian
inference turns out to have SilasBarta::validity, I would not justify it on
those grounds.
I wouldn't say that Bayes' Theorem specifies a probability distribution on
variables not attached to any particular instance; rather it uses consistency
with classical logic to eliminate a degree of freedom in how other methods can
specify otherwise arbitrary probability distributions. That is, once I've
somehow picked a prior and a likelihood, Bayes' Theorem shows how consistency
with logic forces my posterior distribution to be proportional to the product of
those two factors.
I'm going to leave this by because it is predicated on what I believe to be a
confusion about the significance of using Shannon entropy instead of Jaynes's
version.
We're at the "is not! / is too!" stage in our dialogue, so absent something
novel to the conversation, this will be my final reply on this point.
The link does not so specify: this old revision
[http://en.wikipedia.org/w/index.php?title=Maximum_entropy_probability_distribution&oldid=17457674]
shows that the example refers specifically to the Shannon definition. I believe
the more general Jaynes definition was added later in the usual Wikipedia
mishmash fashion, without regard to the examples listed in the article.
In any event, at this point I can only direct you to the literature I regard as
definitive: section 12.3 of PT:LOS
[http://books.google.com/books?id=tTN4HuUNXjgC&lpg=PP1&dq=probability%20theory%20the%20logic%20of%20science&pg=PA375#v=onepage&q=&f=false]
(pp 374-8) (ETA: Added link -- Google Books is my friend
[http://lesswrong.com/lw/1lr/rationality_quotes_january_2010/1fbs?context=3#comments]
). (The math in the Wikipedia article Principle of maximum entropy
[http://en.wikipedia.org/wiki/Principle_of_maximum_entropy] follows Jaynes's
material closely. I ought to know: I
0SilasBarta12y
Y... you mean you were citing as evidence a Wikipedia article you had heavily
edited? Bad Cyan! ;-)
Okay, I agree we're at a standstill. I look forward to comments you may have
after I finish the article I mentioned. FWIW, the article isn't about this
specific point I've been defending, but rather, about the Bayesian
interpretation of standard fallacy lists
[http://www.google.com/#hl=en&source=hp&q=list+of+fallacies&aq=0&aqi=g6g-m4&oq=list+of+fallacie&fp=e8d6ef47431c6a4a]
, where my position here falls out as a (debatable) implication.
0Cyan12y
Requesting explanation for the downvote of the parent.
0Tyrrell_McAllister12y
One obstacle to understanding in this conversation seems to be that it involves
the notion of "second-order probability". That is, a probability is given to the
proposition that some other proposition has a certain probability (or a
probability within certain bounds).
As far as I know, this doesn't make sense when only one epistemic agent is
involved. An ideal Bayesian wouldn't compute probabilities of the form p(x1 < p(
A) < x2) for any proposition A.
Of course, if two agents are involved, then one can speak of "second-order
probabilities". One agent can assign a certain probability that the other agent
assigns some probability. That is, if I use probability-function p, and you use
probability function p*, then I might very well want to compute p(x1 < p*(A) <
x2).
And the "two agents" here might be oneself at two different times, or one's
conscious self and one's unconscious intuitive probability-assigning cognitive
machinery.
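(A sketch of how such a two-agent second-order probability could be computed in practice, assuming, purely for illustration, that my uncertainty about your probability p*(A) is summarized by a Beta(2, 5) distribution:)

    import random

    # My belief about the *other* agent's probability p*(A), modeled here
    # (an illustrative assumption, nothing canonical) as a Beta(2, 5).
    def sample_p_star():
        return random.betavariate(2, 5)

    # Monte Carlo estimate of p(0.2 < p*(A) < 0.4).
    n = 100000
    hits = sum(1 for _ in range(n) if 0.2 < sample_p_star() < 0.4)
    print(hits / n)  # about 0.42 for Beta(2, 5)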
From where I'm sitting, it looks like SilasBarta just needs to be clear that
he's using the coherent notion of "second-order probability". Then the
disagreement dissolves.
0Cyan12y
Naw, that part's cool. (I already had the idea of a meta-probability
[http://lesswrong.com/lw/1i8/intuitive_supergoal_uncertainty/1b0d] in my
armamentarium.) The major obstacle to understanding was that we meant different
things by the word "valid".
0thomblake12y
If you think there's a fact of the matter about what p(A) is (or should be) then
it makes sense. You can reason as follows: "There are some situations where I
should assign an 80% probability to a. What is the probability that A is such an
a?"
Unless you think "What probability should I assign to A" is entirely a different
sort of question than simply "What is p(A)".
0Tyrrell_McAllister12y
I have plenty to learn about Bayesian agents, so I may be wrong. But I think
that this would be a mixing of the object-language and the meta-language.
I'm supposing that a Bayesian agent evaluates probabilities p(A) where A is a
sentence in a first-order logic L. So how would the agent evaluate the
probability that it itself assigns a certain probability to some sentence?
We can certainly suppose that the agent's domain of discourse D includes the
numbers in the interval (0, 1) and the functions mapping sentences in L to the
interval (0, 1). For each such function f let 'f' be a function-symbol for which
f is the interpretation [http://en.wikipedia.org/wiki/Valuation_%28logic%29]
assigned by the agent. Similarly, for each number x in (0, 1), let 'x' be a
constant-symbol for which x is the interpretation.
Now, how do we get the agent to evaluate the probability that p(A) = x? The
natural thing to try might be to have the agent evaluate p('p'(A) = 'x'). But
the problem is that 'p'(A) = 'x' is not a well-formed formula in L. Writing a
sentence as the argument following a function symbol is not one of the valid
ways to construct well-formed formulas.
0Vladimir_Nesov12y
Wouldn't I say that's for the best, given that I started the thread by
linking to the paper?
-1SilasBarta12y
That's no excuse for not providing a meaningful summary so that others can
gauge whether it's worth their time. You need to give more than "Vladimir says
so" as a reason for judging the paper worthwhile.
You ... do ... understand the paper well enough to provide such a summary ...
RIGHT?
2Vladimir_Nesov12y
I was linking not just to the paper, but to a summary of the paper, and included
that example out of that summary, a summary-of-summary. Others have already
summarized what you got wrong in your reply. You can see that the paper has
about 1300 citations
[http://scholar.google.com/scholar?cluster=11019192941980045755], which should
attest to its importance.
What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.
Nick Bostrom, however, once asked whether it would make sense to build an Oracle AI, one that only answered questions, and ask it our questions about Friendly AI.
Has Bostrom made this proposal in anything published? I can't seem to find it on nickbostrom.com.
Different responses to challenges seen through the lens of video games. Although I expect the same can be said for character driven stories (rather than say concept driven).
It turns out there are two different ways people respond to challenges. Some people see them as opportunities to perform - to demonstrate their talent or intellect. Others see them as opportunities to master - to improve their skill or knowledge.
Say you take a person with a performance orientation ("Paul") and a person with a mastery orientation ("Matt"). Give them
This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)
That is pretty ridiculous - enough to make me want to check the original study
for effect size and statistical significance. Writing newspaper articles on
research without giving the original paper title ought to be outlawed.
2AllanCrossman12y
"Small Sounds, Big Deals: Phonetic Symbolism Effects in Pricing", DOI:
10.1086/651241
http://www.journals.uchicago.edu/doi/pdf/10.1086/651241
[http://www.journals.uchicago.edu/doi/pdf/10.1086/651241]
Whether you'll be able to access it I know not.
2timtyler12y
Same researchers, somewhat similar effect:
"Distortion of Price Discount Perceptions: The Right Digit Effect"
* http://www.journals.uchicago.edu/doi/abs/10.1086/518526
[http://www.journals.uchicago.edu/doi/abs/10.1086/518526]
0timtyler12y
Pretty amazing material! A demonstration "in the wild" would be more convincing
to marketers, though.
What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?
(a). Lots of us scan the "Recent Comments" page, so if a discussion starts up
there plenty of people will get on board.
1orthonormal12y
I think each has their advantages. If you post a comment on the open thread,
it's more likely to be read and discussed now; if you post one on the original
thread, it's more likely to be read by people investigating that particular
issue some time from now.
1timtyler12y
There, I figure (a).
0CarlShulman12y
People can read them from the sequences page
[http://wiki.lesswrong.com/wiki/Sequences] and Google searches, so I'd suggest
a). A follow-up post linking to the old article is also a possibility!
Dawkins: We could devise a little experiment where we take your forecasts and then give some of them straight, give some of them randomized, sometimes give Virgo the Pisces forecast et cetera. And then ask people how accurate they were.
Astrologer: Yes, that would be a perverse thing to do, wouldn't it.
Dawkins: It would be - yes, but I mean wouldn't that be a good test?
Astrologer: A test of what?
Dawkins: Well, how accurate you are.
Astrologer: I think that your intention there is mischief, and I'd think what you'd then get back is mischief.
Dawkins: Well my intention would not be mischief, my intention would be experimental test. A scientific test. But even if it was mischief, how could that possibly influence it?
Astrologer: (Pause.) I think it does influence it. I think whenever you do things with astrology, intentions are strong.
Dawkins: I'd have thought you'd be eager.
Astrologer: (Laughs.)
Dawkins: The fact that you're not makes me think you don't really in your heart of hearts believe it. I don't think you really are prepared to put your reputation on the line.
Astrologer: I just don't believe in the experiment, Richard, it's that simple.
Dawkins: Well you're in a kind of no-lose situation then, aren't you.
Dawkins: "Well... you're sort of in a no-lose situation, then."
Astrologer: "I certainly hope so."
3Cyan12y
A fine example of:
3AngryParsley12y
That video has been taken down, but you can skip to around 5 minutes into this
video [http://video.google.com/videoplay?docid=-7218293233140975017] to watch
the astrology bit.
0blashimov10y
The linked video is set to private? I can't view it. Not a big deal, the
transcript is almost as good.
A few years back I did an ethics course at university. It very quickly made me realise that both I and most of the rest of the class based our belief in the existence of objective ethics simply on a sense that ethics must exist. When I began to question this idea my teacher asked me what I expected an objective form of ethics to look like. When I said I didn't know she asked if I would agree that a system of ethics would be objective if it could be universally calculated by any non-biased, perfectly logical being. This seemed fair enough but the problem... (read more)
1) Why would a "perfectly logical being" compute (do) X and not Y? Do all
"perfectly logical beings" do the same thing? (Dan's comment
[http://lesswrong.com/lw/1lf/open_thread_january_2010/1ej0]: a system that
computes your answer determines that answer, given a question. If you presuppose
a unique answer, you need to sufficiently restrict the question (and the
system). A universal computer will execute any program (question) to produce its
output (answer).) All "beings" won't do exactly the same thing, answer any
question in exactly the same way. See also: No Universally Compelling Arguments
[http://lesswrong.com/lw/rn/no_universally_compelling_arguments/].
2) Why would you be interested in what the "perfectly logical being" does? No
matter what argument you are given, it is you that decides whether to accept it.
See also: Where Recursive Justification Hits Bottom
[http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/],
Paperclip maximizer [http://wiki.lesswrong.com/wiki/Paperclip_maximizer], and
more generally Metaethics sequence
[http://wiki.lesswrong.com/wiki/Metaethics_sequence].
2.5) What humans want (and you in particular
[http://lesswrong.com/lw/rl/the_psychological_unity_of_humankind/]), is a very
detailed notion [http://wiki.lesswrong.com/wiki/Complexity_of_value], one that
won't automatically appear from a question that doesn't already include all that
detail. And every bit of that detail is incredibly important to get right
[http://lesswrong.com/lw/y3/value_is_fragile/], even though its form isn't fixed
[http://lesswrong.com/lw/v4/which_parts_are_me/] in human image.
4Jack12y
I don't know what you mean by objective ethics. I believe there are ethical
facts but they're a lot more like facts about the rules of baseball than facts
about the laws of physics.
0[anonymous]12y
Let's say by objective ethics we mean a set of rules which there is an
imperative to obey which are the same for all beings. So if by the rules of
baseball you are talking about a game which could have different rules for a
different league then that would not be objective in the same sense. However, if
there is only one true set of baseball rules that all people must abide by to be
playing baseball than that would be objective.
So do you believe that ethics are just an invented rule system that could have a
different form and still be as ethical? If so, are you saying you follow ethical
relativism or some form of subjective ethical doctrine?
2DanArmak12y
What do you mean "as ethical"? By what meta-ethical rule?
If your reply is, "by the objective meta-ethics which I postulate that all
sentient beings can derive" - if everyone can derive it equally, doesn't that
imply everyone ought to be equally ethical? If you admit someone or some society
are un-ethical (as you asked of Jack), does that mean they somehow failed to
derive the meta-ethics? That the ethics they adopted is internally inconsistent
somehow?
2Jack12y
Invented isn't the right word, though that is partly my fault since baseball
isn't an ideal metaphor. Natural language is a better one. Parts of ethics are
culturally inherited (presumably at some time in the past they were invented)
other parts are innate. The word ethics has a type-token ambiguity. It can refer
to our ethical system (call it 'ethics prime') or it can refer to the type of
thing that ethics prime is (an ethics). There can be societies without ethics
prime, these societies are not ethical in the token sense but may be in the type
sense (if they have a different ethical system). Imagine if the word for English
and the word for language were the same word 'language'. Do the French speak
language?
My own ethical system demands that I try to enforce it on a large class of
beings similar to myself so I am not a relativist in that I think other people
should do many of the things my ethics require me to do. This seems to me to
have little to do with what those other people believe is ethical.
1PhilGoetz12y
This is a good way of putting it!
In fact, it just convinced me that there is an objective ethics! Sort of. Asking
whether there is an objective meta-ethics is a lot like asking, "Is there such a
thing as language?" Language is a concept that can be usefully applied to
interactions between organisms of a particular level of intelligence given
particular environmental conditions. So is ethics. Is it universal? What the
hell does that mean?
But when people say there is no objective ethics, that isn't what they mean.
They aren't denying that ethics makes sense as a concept. They're claiming the
right to set their own arbitrary goals and values.
It's hard for me to imagine why someone who was convinced that there were no
objective ethics would waste time on this, unless they were a Continental
philosopher. Claiming there is no objective ethics sounds to me more like the
actions of someone who believes in objective ethics, and has come to their own
values that are unique enough that they must reject existing values.
1PhilGoetz12y
If that's the distinction, then whether there is objective ethics or not is just
a matter of semantics; not anything of philosophical or practical interest.
4DanArmak12y
"Calculated" based on what? What is the question that this would be the answer
to?
Also, how can you define "bias" here?
As you can guess from my questions, I don't even see what an objective system of
ethics could possibly mean :-)
3MatthewB12y
This seems to be my biggest problem as well. I have been trying to find
definitions of an objective system of ethics, yet all of the definitions seem so
dogmatic and contrived. Not to mention varying from time to time depending upon
the domain of the ethics (whether they apply to Christians, Muslims, Buddhists,
etc.)
0[anonymous]12y
Though I suppose a similar idea about objective ethics could be expressed with
the claim that one form of ethics, and only one form of ethics, can be derived
from purely logical principles which would cut out the question of bias.
-2DanArmak12y
That does not answer my question: what is the purely objective, unbiased
definition of "ethics"? We can't discuss objective systems of ethics without an
objective definition of the question that ethics is supposed to answer.
P.S. my previous comment was malformatted, so you may have missed this part of
it; I've fixed it now.
1[anonymous]12y
Take ethics to be a system that sets out rules that determine whether we are
good or bad people and whether our actions are good or bad. They differ from
other ascriptions of good (say, at baseball) and bad (say, me playing baseball)
in that there is an imperative to be good in this sense whereas it is acceptable
to be bad at baseball (I hope).
I suspect that won't answer your question so instead I'll ask another. Do you
believe that this inability to define means there is no real concept that
underpins the folk conception of ethics or does it just mean we are unable to
define it well enough to discuss it?
-2DanArmak12y
What do you mean by imperative? Humans have certain imperatives, whether evolved
or "purely" cultural, but they are all human-specific: other creatures and minds
will potentially have different ones. They can't be called "objective and
unbiased between all rational thinking minds".
How can we talk about something if we can't define or at least describe it, or
point to examples of it existing? Inability to define by definition means
there's no concept. A concept isn't right or wrong, it just is, and it's
equivalent to a definition that lets us know what we're talking about.
As for "folk concepts" of ethics, no offence intended, but aren't they roughly
in the same category as religion and "sexual morals"?
0byrnema12y
Aren't you just asserting with this statement, without argument, that there is
no objective ethics? Isn't it the question at hand whether or not human
imperatives are specific or universal?
(Though I wouldn't exclude the higher order possibility that there could be an
objective ethical system defined around imperatives in general; for any
arbitrary imperatives that a subsystem defines for itself, there is an objective
imperative to have them satisfied.)
0DanArmak12y
Well, it's not clear to me that that's what AaronBensen meant by "objective
ethics". But I do believe that human ethics are not universal, because:
1. Human ethics aren't even universal among humans. Plenty of humans live and
have lived who would think I should rightly be killed - for not obeying some
religious prescription, for instance. On the other hand some humans believe
no-one should be killed and no-one has the right to kill anyone else, ever.
Many more opinions exist.
2. I know of no reason why an AI couldn't be built with different ethics from
ours, or with no ethics at all. A paperclipper AI could be very intelligent,
conscious (whatever that means), but still - unethical by our lights. If
anyone believes that such unethical minds literally cannot exist, the burden
of proof is on them.
0Jack12y
Careful. We need to distinguish between ethical beliefs and 'factual' beliefs.
Someone might have an ethics that says: If there is a God, do what he says.
Else, do not murder. This person might want to kill Dan because he believes God
wants heathens to die. Others might have the same ethical system but not
believe in God and therefore default to not murdering anyone. I'm not saying
there aren't ethical disagreements, but eliminating differences in factual
knowledge might eliminate many apparent ethical differences.
Also, I'm not sure your second point matters. You can probably program anything.
If all evolved, intelligent and social beings had very similar ethics I would
consider that good enough to claim universality.
2DanArmak12y
I think plenty of ethical differences remain even if we eliminate all possible
factual disagreements.
As regards religion, (many) religious people claim that they obey god's commands
because they are (ethically) good and right in themselves, and not just because
they come from god. It's hard to dismiss religion entirely when discussing the ethics
adopted by actual people - there's not much data left.
But here's another example: some people advocate the ethics of minimal
government and completely unrestrained capitalism. I, on the other hand, believe
in state social welfare and support taxing to fund it. Others regard these taxes
as robbery. And another: many people in slave-owning countries have thought it
ethical to own slaves; I think it is not, and would free slaves by force if I
had the opportunity.
I think enough examples can be found to let my point stand. There is little, if
any, universal human ethics.
That is underspecified. Evolved how? If I set up evolution in a simulation, or
competition with selection between outright AIs, does that count? Can I choose
the seeds or do I have to start from primordial soup?
1Jack12y
Some people support unrestrained capitalism because they think it provides the
most economic growth which is better for the poor. This is obviously a factual
disagreement. Of course there are those who think wealth redistribution violates
their rights, but it seems plausible that at least many of them would change
their mind if they knew what the country would look like without redistribution
or if they had different beliefs about the poor (perhaps many people with this
view think the poor are lazy or intentionally avoid work to get welfare).
Slavery (at least the brutal kind) is almost always accompanied by a myth about
slaves being innately inferior and needing the guidance of their masters.
Now I think there probably are some actual ethical differences between human
cultures I just don't want exaggerate those differences-- especially since they
already get most of our attention. All the vast similarities kind of get ignored
because conflicts are interesting and newsworthy. We have debates about abortion
not about eating babies. But I think most possible human behavior falls in the
obvious, baby-eating category, and the area of disagreement is relatively small.
Moreover there is considerable evidence for innate moral intuitions. Empathy is
an innate process in humans with normal development. Also see John Mikhail
[http://www.law.georgetown.edu/faculty/mikhail/] on universal moral grammar. I
think there is something we can call "human ethics" but that there is enough
cultural variability within it to allow us to also pick out local ethical
(sub)systems.
Er forget this. When we say "human ethics is universal" we need to finish the
sentence with "among... x". Looking up thread I see that the context for this
discussion finishes that sentence with "among conscious beings" or something to
that effect. I find that exceedingly unlikely. That said, I'm not at all
bothered by Clippy the way I would be bothered by the Babyeaters (and not just
because eating babies is immoral).
0DanArmak12y
Not so obvious to me. The real disagreement isn't over "what generates the most
economic growth" but over "what is best for the poor" (even if we ignore the
people who simply don't want to help the poor, and they do exist). After all,
the poor want social support now, not a better economy in a hundred years' time.
Deciding that you know what's best for them better than they do is an ethical
matter.
Some slave systems were as you describe (U.S. enslavement of blacks, general
European colonial policies, arguably Nazi occupation forced labor). But in many
others, anyone at all could be sold or born into slavery, and slaves could be
freed and become citizens, thus there was no room for looking down on slaves in
general (well, not any more than on poor but free people). Examples include most
if not all ancient cultures - the Greek, Roman, Jewish, Middle and Near Eastern,
and Egyptian cultures, and the original Germanic societies at least.
That's true.
A lot of people are advocating a position that women are not allowed to abort,
ever. Or perhaps only to save their own lives. To me that's no better than
advocating the free eating of unwanted newborn babies.
I think for almost all possible human behavior that is long-term beneficial to
the humans engaging in it, there is or was a society in recorded history where
it was normative. Do you have counterexamples?
0Jack12y
So these two positions differ ethically in that the poor support one but not the
other? I guess espousing bizarre ethical views is one way to make your point
:-). Perhaps you can explain this better. I take it this doesn't apply to social
policy, like abortion and gay marriage?
Thus the "brutal" qualifier in the original comment. The practice of slavery in
general might be an ethical difference between cultures, I'll grant. Though it
is worth noting that such societies considered compassion toward slaves to be
virtuous and cruelty a vice.
This looks like information relevant to the question of universal human ethics
but it isn't.
Not fair. Any particular ethical system only comes about when it dictates or
allows behavior that is long-term beneficial to those who engage in it. That's
how cultural and biological evolution work. The thing is, the same kinds of
behavior were long-term beneficial for every human culture.
2DanArmak12y
Yes, and the reason this is relevant is because the positions are about things
to be done to the poor.
You said:
There is a factual disagreement about how to best help the poor. The poor
themselves generally support one of the two options: social support. They may,
factually, be wrong. There is then a further decision: do we help them in the
way we think best, or do we help them in the way they think best? This is a
tradeoff between helping them financially, and making them feel good in various
ways (by listening to them and doing as they ask). This tradeoff requires an
ethical decision.
It does apply, and in much the same way (insofar as these issues are similar to
wealth redistribution policy).
For instance, there are two possible reasons to support giving women abortion
rights. One is to make their lives better in various ways - place them in
greater control of their lives, let them choose non-child-rearing lives
reliably, let them plan ahead, let them solve medical issues with pregnancy.
This relies in part on facts, and disagreements about it are partly factual
disagreements: what will make women happiest, what will place them in control of
their lives, etc.
The other possible reason is simply: the women want abortion rights, so they
should have them - even if having these rights is bad for them by some measure.
They should have the freedom and the responsibility. (Personally, I espouse this
reasoning and I also don't think it's bad for them somehow). This is ethical
reasoning, and disagreements about it are ethical, not factual.
I think this compassion on the part of society-at-large tends to be more a
matter of signalling than of practice.
Er, why not? It's an example of an ethical disagreement among different people.
It's true that every behaviour which occurs, is evolutionarily beneficial. But
I'm suggesting that the opposite is also true: every behaviour that is possible
(doesn't require a brilliant insight to invent), and that is evolutionarily
2Jack12y
Are we breaking some rule if this discussion gets a little political?
OK. But they're also about things to be done to the rich.
This is such a dismal way of looking at the issue from my perspective. Once
you decide that the policy should just be whatever some group wants it to be you
throw any chance for deliberation or real debate out the window. I realize such
things are rare in the present American political landscape but turning interest
group politics into an ethical principle is too much for me.
I read this as "this is a trade-off between helping them financially, and
patronizing them" :-).
If most women opposed abortion rights (as they do in many Catholic countries)
you would be fine prohibiting it? Even for the dissenting minority? Saying
people should be able to have abortions, even if it is bad for them makes sense
to me. Saying some arbitrarily defined group should be able to define abortion
policy, (regardless) if it is bad for them, does not.
Also, almost all policies involve coercing someone for the benefit of
someone else. How do you decide which group gets to decide policy?
Maybe, though I don't know if we have the evidence to determine that. But
they're signaling because they want people to think they are ethical. There
being some kind of universal human ethics and most people being secretly
unethical is a totally coherent description of the world.
What I meant was that the fact that you think something ethically controversial
is as bad as something ethically uncontroversial doesn't tell us anything. Also,
I know I used it as an example first but the abortion debate likely involves
factual disagreements for many people (if not you).
But ethics are product of biological and cultural evolution! Empathy was
probably an evolutionary accident (our instincts for caring for offspring got
hijacked). If there is a universal moral grammar I don't know the evolutionary
reason, but surely there is one. The cultural aspects likely helped groups
4Alicorn12y
Just as an aside, lots of women
[http://www.snipeme.com/guestrants.php?rant=moral_abortion] go ahead and get
abortions even if they assent to statements to the effect that it shouldn't be
allowed. Which preference are you more inclined to respect?
0mattnewport12y
I don't think that's necessarily hypocrisy. A reformed drug addict may say that
he believes drugs should be illegal and then later relapse. That doesn't
necessarily mean his revealed preference for taking drugs overrides his stated
opinion that they should be illegal. He may support prohibition because he
doesn't trust his own ability to resist a short term temptation that he believes
is not in his own long term best interests. Similarly it would not be
inconsistent for a woman to believe that abortions should be illegal because
they are bad (by some criteria) but too tempting for women who find themselves
with an unwanted pregnancy. Believing that they themselves will not be able to
resist that temptation if they become pregnant is if anything an argument in
favor of making abortion illegal.
For the record I don't believe abortion or drugs should be illegal but I don't
think it is necessarily inconsistent for a woman to believe abortion should be
illegal and still get one.
0Paul Crowley12y
It's not necessarily hypocrisy, but it leaves us with two sets of preferences
for a single population, and a judgement call on which is the right one to
follow. The argument you're making is sound on its face, but as far as abortion
goes neither of us buy it - we take the revealed preference more seriously than
the overt one, and the fact that this is even sometimes the right call makes the
plan to give groups what they say they want rather than what we think will
maximise utility quite a lot less appealing.
1DanArmak12y
This is a very interesting line of argument. How much of it do you think is due
to this:
Many of these women live in cultures and social circles/families where publicly
supporting abortion rights is very damaging socially. Even in relaxed
conditions, disagreeing with one's family and friends on an issue that evokes
such strong emotions is hard. The women tend to conform unless they have a
strong personal opinion to the contrary, and once they conform on the signalling
level, they may eventually come to believe themselves that they are against
abortion.
If their social environment changes, or they move into a new one, they may
change or "reveal" their new pro-abortion-rights opinion very quickly &
dramatically.
And however frequent cases like this may be, we also tend to over-estimate their
incidence, because we believe ourselves that abortion rights are really good for
women, and that the women are the "good" underdogs in this story.
1mattnewport12y
I wouldn't quite say that I take the revealed preference more seriously than the
overt one. I'm prepared to accept that there may be people who genuinely believe
that abortion is morally wrong and also genuinely believe that other people (and
possibly they themselves) will succumb to temptation and have an abortion if it
is legal and available even if they believe it is wrong. We generally accept the
reality of akrasia here, this seems like a very similar phenomenon: the belief
that people can't be trusted to do what is morally right when faced with an
unwanted pregnancy and so need to make a Ulysses pact
[http://en.wikipedia.org/wiki/Ulysses_pact] in advance to bind themselves
against temptation.
The reason this argument doesn't hold water for me is because I don't think it
is right that people who believe abortion is morally wrong should be able to
prevent others who don't share that belief from having abortions. If an 'opt-in'
anti-abortion law was proposed where you voluntarily committed to being jailed
for having an abortion in advance of needing one I wouldn't have a problem with
it.
In reality I don't know what percentage of women with anti-abortion beliefs use
this kind of reasoning. I have heard it explicitly from people who have taken
drugs in the past and still support prohibition however.
1Paul Crowley12y
Note the image in the banner of "Overcoming Bias"...
I would be against an opt-in anti-abortion law, since unlike with akrasia I see
no reason to prefer the earlier preference over the later one in this instance.
0AdeleneDawner12y
I used to be friends with someone who was an anti-abortion activist, and who
likes thinking about the logic behind such decisions. To the best of my
knowledge, she'd never thought about it from that angle. I think I still have a
good email address for her, if you'd like me to ask her what she thinks of the
idea.
0mattnewport12y
I'd be curious to know if anti-abortion activists think about it in those terms.
2AdeleneDawner12y
I got an email back from her. Tl;dr version: Nope, that's definitely not how she
was thinking about it. (Perhaps noteworthy: She rarely communicates via email,
so she's out of her element here. It is possible to evoke saner discussion from
her in realtime.)
2AdeleneDawner12y
Email sent. I'll quote the relevant bit here, in case it turns out to affect her
reply. (I did link to the conversation but I'm not sure she'll follow the link.)
0DanArmak12y
Only if it gets political in the sense of "politics, the mind-killer" :-)
Certainly, and the rich's opinion and interests should be consulted as well. I
wasn't talking about what the best policy is, anyway; I was just analyzing the
position of those rich (or rather non-poor) who you said want to help the poor
by improving the economy.
If your goal is ultimately to please that group, then why not? This isn't a
debate about working together with another group to achieve a common goal or to
compromise on something. This is a debate on how best to help another group.
"Making them happy" and "doing whatever they want" (to the extent of the
resources we agree to commit) is a valid answer, even if many people won't
agree.
The fact that you don't agree is what I was pointing out - that legitimate
ethical disputes exist. I don't even really want to argue for this particular
policy - I haven't thought it through very deeply; it was just an example of a
disagreement. But I do believe it's reasonable enough to at least be considered.
No I would not be fine with that. I'm not fine with any individual prohibiting
abortion for another individual. Any women who are against abortions are free
not to have abortions themselves, and everyone else should be free to have
abortions if they wish. Note that my argument didn't rely on majority opinion or
on using the class of "all women". The freedom to have abortions is a personal
freedom, not a group freedom.
Many policies involve no coercion. Or at least some of the policy options
involve no coercion.
For instance, allowing abortions to everyone involves no coercion. Unless you
consider "knowing other people get abortions and not being able to stop them" a
coerced state.
I never said that personal freedom and responsibility can solve all ethical
issues. Sometimes all policy options are tradeoffs in coercion, and there isn't
always a "right" option. That only reinforces my point that many ethical disputes
exist and there is no universal ethics.
0[anonymous]12y
If they are solely the product of evolution, then there can't be a universal
human ethics among different cultures. Did I misunderstand something about your
argument?
-2Jack12y
I have no idea why this would be true. Convergent evolution.
[http://en.wikipedia.org/wiki/Convergent_evolution]. Also, there can be cultural
evolution in the absence of more than one culture. Some ethical principle might
have evolved when humanity was all one culture (if there ever was such a point,
I guess I find that unlikely).
Let's back up. Human ethics basically consists of five values
[http://faculty.virginia.edu/haidtlab/mft/index.php]. Different cultures at
different times emphasize some values more than others. Genuine ethical
disagreements tend to be about which of these values should take precedence in a
given situation. As a human I don't think there is a "true answer" in these
debates. Some of these questions might have truth values for American liberals
(and I can answer for those), but they don't for all of humanity.
Now
That ethics is basically the purity value being (in my mind) way
overemphasized. Now in modern, Western societies large segments hardly care about
purity at all. I'm one of those people and I suspect a lot of people here are.
But this is a very new development and it is very likely that we still have some
remnants of the purity value left (think about our 'epistemic hygiene'
rhetoric!). But yes, compared to most of human history modern liberals are
quite revolutionary. It is possible that not all of those values are universal
among evolved, intelligent, social beings (though it seems to me they might be).
The other things:
I meant the first two. Also, facts about personhood, when life begins, the
existence of souls etc. There may also be a value disagreement.
Of course that is a coerced state. :-) Not being able to do something under
threat of state action is textbook coercion. This is why libertarians who think
they can justify their position just by appealing to a single principle of
non-coercion are kidding themselves. They obviously need something else to tell
them which kinds of coercion are justified.
So there isn't so
-1DanArmak12y
By convergent evolution, some cultures can evolve the same ethics. Even many
cultures. But a universal ethics implies that all cultures, no matter how
diverse in every other way, and including cultures which might have existed but
didn't, would evolve the same ethics (or rather, would preserve the same ethics
without evolving it further). This is extremely unlikely, and would require a
much stronger explanation than the general idea of convergent evolution.
Anyway, my position is that different cultures in fact have different ethics
with little in common between the extremes, so no explanation is needed.
This is an interesting model. I don't remember encountering it before.
I believe you agree with me here, but just to make sure I read your words
correctly: the commonality of these five values (if true) does not in itself
imply a commonality of ethics. There is no ethics until all the decisions about
tradeoffs and priorities between the values are made.
In many non-Christian traditions, sex is pure and sacred. People may need to
purify themselves for or before sex, and the act of sex itself can serve
religious purposes (think "temple whores", for instance). This is pretty much
the opposite of Christian tradition.
The value of purity, and the feelings it inspires, may well be universal among
humans. But the decision to what it applies - what is considered pure and what
is filthy - is almost arbitrary. I suspect the same is true for most or all of
the other five values - although there may be some constants - which only
reinforces my conviction that there is no universal ethics.
It scarcely seems possible to me that any of these values are universal. A few
quick thought-experiments, designed purely to demonstrate the feasibility of
lacking these values in a sentient species:
Harm/care: some human sub-cultures have little enough of this value (e.g.,
groups of young males running free with no higher authority). Plus, a lot of our
nurturant behaviour stems from ra
0Jack12y
Once you have a task that needs to be accomplished there are often only so many
ways of accomplishing it. For example, there are only so many ways to turn sound
into useful data the brain can use. Thus I suspect just about all functioning
ears will have things in common - something that amplifies vibrations and
a medium that can vibrate, etc. That said, I think you're probably right
that given enough cultures and species with divergent enough histories I'd
probably discover some pretty alien moralities. Still, there might not be
many social and intelligent species out there. Given that, it seems plausible
that there is some universal morality in that there are no social and
intelligent exceptions. Universality doesn't mean necessity. (I'm going to let
your points about different evolutionary histories leading to different values
go unresponded to. They're good points though and I think the probability of
really inhuman moralities existing is higher than I thought before).
No no. Sorry if this wasn't clear. Like I said, I don't think humans agree on
prioritizing these values. People in the United States don't even agree on
prioritizing these values to some extent. The commonality of these five values
is a commonality of ethics-- it doesn't imply identical, complete ethical codes
for everyone but I don't think we all have identical codes, just enough in
common that it makes sense to speak of a human morality.
Can you do a better job specifying what kinds of sub-cultures you mean?
Yeah, there are places that value authority a lot more than fairness. Is there
no conception of fairness for those of equal status? If outsiders came and
oppressed them would they not experience that as injustice? This is difficult to
discuss without having more data.
Cite?
There might be some variation in the way some of the values are implemented but
I hardly think what is considered filthy is arbitrary. There are widely
divergent cultures which consider the same things pure a
2DanArmak12y
I apologize for not replying and providing the citations needed. I've had
unforeseen difficulties in finding the time, and now I'm going abroad for a week
with no net access. When I come back I hope to make time to participate in LW
regularly again and will also reply here.
2Nick_Tarleton12y
You're ignoring the tradeoff between helping the current poor and future poor.
The current poor would naturally favor the former, but I don't think that's an
argument for it over the latter.
1Alicorn12y
Class is fairly heritable. To the extent to which we think people ought to make
decisions for their descendants, it may make sense to let current poor make
decisions that affect the future poor.
0bogus12y
If that's the only issue, we could choose whatever policy helps the most and
then compensate current folks by borrowing. Economic growth will be lower and
future folks will be poorer, but the policy will be efficient.
As an aside, we don't really know how wealthy future folks will be. If a
Singularity is imminent, it's probably efficient to liquidate a lot of capital
and help current folks more.
0byrnema12y
I concur with Jack that most ethical disputes are about facts, and if not then
about relative weights for values. Freedom versus existence, etc.
What I would call a real difference in ethics would be the introduction of a
completely novel terminal value (which I can hardly imagine) or differences in
abstract positions such as whether it is OK to locally compromise on ethics if
it results in more global good (i.e., if the ends justify the means), etc.
-2byrnema12y
There is a confusion that results when you consider either system (objective or
subjective ethics) from the viewpoint of the other.
(The objective ethical system viewpoint of human ethics.) Suppose that there is
an objective ethical system defining a set of imperatives. Also, separately, we
have subjectively determined human ethics. The subjective human ethics
overlapping with the objective imperatives are actual imperatives; the rest are
just preferences. It is possible that the objective imperatives are not known to
us, in which case, we may or may not be satisfying them and we are not aware of
our objective value (good or bad).
(The subjective ethical system viewpoint of human ethics.) In the case of no
objective ethical system, imperatives are subjectively collectively determined.
We are bad or good -- to whatever extent it is possible to be 'bad' or 'good' --
if we think we are bad or good. This is self-validation.
Now, to address your objections:
Right, human ethics do seem very inconsistent. To me, this is a challenge only
to the existence of subjective ethics. In the case of objective ethics, there is
no contradiction if humans disagree about what is ethical; humans do not define
what is objectively ethical. In the case of a subjective ethical system,
inconsistencies in human ethics are evidence that there is no well-defined notion
"human ethics", only individual ethics.
Nevertheless, in defense of 'human ethics' for either system, perhaps it is the
case that human ethics are actually consistent, in a way that matters, but the
terminal values are of such high order that we don't easily find them. All the different
moral behaviors we see are different manifestations of common values.
Of course, minds could evolve or be constructed with different subjective
ethical systems. Again, they may or may not be objectively ethical.
0DanArmak12y
This redefinition of the word "imperative" goes counter to the existing meaning
of the word (which would include all 'preferences'), so it's confusing. I
suggest you come up with a new term or word-combination.
You defined objective ethics as something every rational thinking being could
derive. Shouldn't it also have some meaning? Some reason why they would in fact
be interested in deriving it?
If this objective ethics can be derived by everyone, but happens to run counter
to almost everyone's subjective ethics, why is it even interesting? Why would we
even be talking about it unless we either expected to encounter aliens with
subjective ethics similar to it; or we were considering adopting it as our own
subjective ethics?
That definitely requires proof. Have you got even a reason for speculating about
it, any evidence for it?
0byrnema12y
Actually, I didn't. I would be interested in AaronBenson's answers to the
questions that follow.
Here, I was just suggesting a solution. I don't have much interest in the
concept of 'human' ethics. (Like Jack
[http://lesswrong.com/lw/1lf/open_thread_january_2010/1ek8?context=1#1ek8], I
would be very interested in what ethics are universal to all evolved,
intelligent, social minds.)
... Yet I didn't suggest it randomly. My evidence for it is that whenever
someone seems to have a different ethical system from my own, I can usually
eventually relate to it by finding a common value.
0DanArmak12y
Right, sorry, that was AaronBensen's definition.
0byrnema12y
I was using the meaning of imperative as something you 'ought' to do, as in
moral imperative. This does not include preferences unless you feel like you
have a moral obligation to do what you prefer to do.
2[anonymous]12y
I know all those replies weren't posted just to aid me but thanks for posting
them nevertheless. Obviously I at least need to put more thought into what
ethics is and hence what my question means. Maybe the question will disappear
following that but, if not, at least I'll be on more solid ground to try to
respond to it.
1MatthewB12y
I am having a discussion on a forum where a theist keeps stating that there must
be objective truth, that there must be objective morality, and that there is
objective knowledge that cannot be discovered by Science (I tried to point out
that if it were Objective, then any system should be capable of producing that
knowledge or truth).
I had completely forgotten to ask him if this objective truth/knowledge/morality
could be discovered if we took a group of people, raised in complete isolation,
and then gave them the tools to explore their world. If such things were truly
objective, then it would be trivial for these people to arrive at the discovery
of these objective facts.
I shall have to remember this, as well as the fact that such objective
knowledge/ethics may indeed exist, yet, why is it that our ethical systems
across the globe seem to have a few things in common, but disagree on a great
many more?
1PhilGoetz12y
You can't ask whether there are more things in common than not in common, unless
you can enumerate the things to be considered. If everyone agrees on something,
perhaps it doesn't get categorized under ethics anymore. Or perhaps it just
doesn't seem salient when you take your informal mental census of ethical
principles.
Excellent response to the theist.
1MatthewB12y
Doh!
Yes, of course... Slip of the brain's transmission there.
As for the response to the theist, I wish that I had used that specific
response. I cannot recall now what I did use to counter his claims.
As I mentioned, his claim was that there is knowledge that is not available to
the scientific method, yet can be observed in other ways.
I pointed out that there were no other ways of observing things than empirical
methods, and that if some method of knowledge that just entered our brain should
be discovered (Revelation), and its reliability were determined, then this would
just be another form of observation (Proprioception) and the whole process would
then just be another tool of science.
He just couldn't seem to get around the fact that as soon as he makes an
empirical claim that it falls within the realm of scientific discovery.
He was also misusing Gödel's incompleteness theorem (some true statements in a
formal system cannot be proved within that formal system).
At which point, he began to conflate science with some sort of religion and god
that was being worshiped, and from which everything was meaningless and thus
there were no ethics, so he could just go kill and rape whoever he pleased.
It frightens me that there are such people in the world.
P(A)*P(B|A) = P(B)*P(A|B). Therefore, P(A|B) = P(A)*P(B|A) / P(B). Therefore, woe is you should you assign a probability of 0 to B, only for B to actually happen later on; P(A|B) would include a division by 0.
Once upon a time, there was a Bayesian named Rho. Rho had such good eyesight that she could see the exact location of a single point. Disaster struck, however, when Rho accidentally threw a dart, its shaft so thin that its intersection with a perfect dartboard would be a single point, at a perfect dartboard. You see, when you randomly select a point f... (read more)
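(A minimal sketch of the woe above, in Python with invented numbers: the update is routine while p(B) > 0, and blows up exactly when an "impossible" B is observed.)

    def posterior(p_a, p_b_given_a, p_b):
        # Bayes' theorem: p(A|B) = p(A) * p(B|A) / p(B).
        return p_a * p_b_given_a / p_b

    print(posterior(0.5, 0.2, 0.4))  # 0.25 -- a routine update
    print(posterior(0.5, 0.2, 0.0))  # ZeroDivisionError: B got probability 0, yet B happened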
Don't worry, the mathematicians have already covered this
[http://en.wikipedia.org/wiki/Conditional_expectation].
0RichardKennaway12y
There are mathematicians who have rejected the idea of the real number line
being made of points, perhaps for reasons like this. I don't recall who, but
pointless topology [http://en.wikipedia.org/wiki/Pointless_topology] might be
relevant.
1Technologos12y
My understanding is that such a story relies on trying to define the area of a
point when only areas of regions are well-defined; the probability of the point
case is just the limit of the probability of the region case, in which case
there is technically no zero probability involved.
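(A sketch of that reading, assuming for illustration a dart landing uniformly on a unit-radius board: every region's probability is its share of the area, so the probability of a shrinking disc goes to 0, while conditionals formed as ratios of region probabilities stay well defined all the way down.)

    import math

    # Uniform dart on a unit-radius board: p(region) = area(region) / (pi * 1**2).
    def p_disc(radius):
        return (math.pi * radius ** 2) / math.pi  # = radius**2

    for r in (0.1, 0.01, 0.001):
        print(p_disc(r))  # 0.01, 0.0001, 1e-06 -> vanishes in the limit

    # But p(left half of disc | disc of radius r) is a ratio of two
    # vanishing probabilities and equals 1/2 at every scale.
    for r in (0.1, 0.01, 0.001):
        print((p_disc(r) / 2) / p_disc(r))  # 0.5, 0.5, 0.5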
0Larks12y
Is pointless topology ever relevant?
0Christian_Szegedy12y
Yes, it is relevant to algebraic geometry, which is important for the treatment
of down-to-earth problems in number theory.
0Douglas_Knight12y
I think you're confusing topos theory with pointless topology. The latter is a
fragment of the former and a different fragment is used in algebraic geometry.
As I understand it, the main point of pointless topology is to rephrase
arguments to avoid the use of the axiom of choice (which is needed to choose
points). That is certainly a noble goal and relevant to down-to-earth problems,
but not so many in number theory.
I work out and eat healthily to make right now better.
Of course, I hope that the body will last longer as well, but I wouldn't
undertake a regimen that guaranteed I'd see at least 120, at the cost of never
having the energy to get much done with the time. Not least because I'd take
such a cost as casting doubt on the promise.
3Jawaka12y
I stopped smoking after I learned about the Singularity and Aubrey de Grey. I
don't have any really good data on what healthy food is but I think I am doing
alright. I have also signed up at a gym recently. However, I don't think I can
sign up for cryonics in Germany.
1Morendil12y
You can sign up from anywhere, in principle (CI and Alcor list a number of
non-US members). The major issue is that it will obviously cost more to
transport you to suspension facilities in the US, while avoiding damage to your
brain cells in transit.
One disturbing thing about cryonics is that it forces you to allocate
probabilities to a wide range of end-of-life scenarios. Am I more likely to die
hit by a truck (in which case I wouldn't make much of my chances for successful
suspension and revival), or a fatal disease diagnosed early enough, yet not
overly aggressive, such that I can relocate to Michigan or Arizona for my final
weeks? And who knows how many other likely scenarios.
3DanArmak12y
I'd guess that getting your local hospitals and government to allow your body to
be treated correctly would be the biggest non-financial problem.
I live in Israel, and even if I had unlimited money and could sign up, I'm not
at all sure I could solve this problem except by leaving the country.
1AngryParsley12y
I'm signed up for cryonics and I exercise regularly. I usually run 3-4 miles a
day and do some random stretching, push-ups, and sit-ups. I slack if I'm on
vacation or if the weather is bad. I never eat properly. Some days I forget most
meals. Other days I'll have bacon and ice cream.
1scotherns12y
I work out regularly, eat healthy, and I am signed up for Cryonics. One data
point for you :-)
0Kutta12y
Well, I'm certainly one, having found OB/LW through the Immortality Institute
[http://imminst.org/] forums, where I've been researching health topics
obsessively for several months. My vague personal impression is that life
extension enthusiasts are not especially prevalent here.
0Sly12y
Are either of you two signed up for cryonics?
1Kutta12y
As a 19 year old student living in Hungary cryonics is way back on my list of
life extension related things to do. Nevertheless I think cryonics is a great
option and I'll sign up as soon as I figure out how I could do it in my country
(Russia being the closest place with cryo service) and have the money for it.
As a side note, I think cryonics has the best payoffs when you've got some
potentially lethal relatively slowly advancing disease like cancer or ALS, and
have the option of moving very closely to a cryonics facility.
A little knowledge can be a dangerous thing. At least Eliezer has previously often recommended Judgment Under Uncertainty as something people should read. Now, I'll admit I haven't read it myself, but I'm wondering if that might be bad advice, as the book's rather dated. I seem to frequently come across articles that cite JUU, but either suggest alternative interpretations or debunk its results entirely.
Just today, I was trying to find recent articles about scope insensitivity that I could cite. But on a quick search I primarily ran across articles point... (read more)
Two studies explored the role of implicit theories of intelligence in adolescents'
mathematics achievement. In Study 1 with 373 7th graders, the belief that intelligence is malleable (incremental theory) predicted an upward trajectory in grades over the two years of junior high school, while a belief that intelligence is fixed (entity theory) predicted a flat trajectory. A mediational model including learning goals, positive beliefs about
Fair point. Would you agree with, "People on lesswrong commonly talk as if
intelligence is a thing we can put a number to (without temporal qualification),
which implies a fixed trait."?
We often say our weight is currently X or Y. But people rarely say their IQ is
currently Z, at least in my experience.
0Nick_Tarleton12y
Yes.
4Zack_M_Davis12y
If it works, it can't be a lie. In any case, surely a sophisticated
understanding does not say that intelligence is malleable or not-malleable.
Rather, we say it's malleable to this-and-such an extent in such-and-these
aspects by these-and-such methods.
2Kaj_Sotala12y
"Intelligence is malleable" can be a lie and still work. Kids who believe their
general intelligence to be malleable might end up exercising domain-specific
skills and a general perseverance so that they don't get too easily discouraged.
That leaves their general intelligence unchanged, but nonetheless improves
school performance.
0whpearson12y
I was thinking of the more mathematical definitions of intelligence that just
give a scalar average performance over lots of different worlds. These can
still be consistent, since they track the agent's history, and agents might do
better in worlds where they believe that their intelligence changes, just as
they might do better in worlds where they are given calculators.
If simple things like the ownership of calculators can change your
intelligence, is it right to think of it as something stable to which you can
apply fission-like exponential growth?
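To make the "scalar average over worlds" idea concrete, here is a minimal toy sketch in Python (my own construction, not any particular formalism from the literature; the agent, tasks, and scores are all made up). The point is only that the scalar moves when the agent's circumstances, such as owning a calculator, change:

def toy_agent_score(task, has_calculator):
    # Toy policy: aces arithmetic tasks only when a calculator is available.
    if task == "arithmetic":
        return 1.0 if has_calculator else 0.2
    return 0.5  # mediocre on everything else

def scalar_intelligence(has_calculator, worlds):
    # "Intelligence" as average performance across a fixed set of test worlds.
    return sum(toy_agent_score(w, has_calculator) for w in worlds) / len(worlds)

worlds = ["arithmetic", "navigation", "arithmetic", "conversation"]
print(scalar_intelligence(False, worlds))  # 0.35
print(scalar_intelligence(True, worlds))   # 0.75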
A suggestion for the site (or perhaps the Wiki): It would be useful to have a central registry for bets placed by the posters. The purpose is threefold:
* 1) Aid the memory of posters, who might accumulate quite a few bets as time passes.
* 2) Form a record of who has won and lost bets, helping us calibrate our confidences.
* 3) Formalise the practice of saying "I'll take a bet on that", prodding us to take care when posting predictions with probabilities attached. The intention here is to overcome akrasia in the form of throwing out a number and thus signalling our rationality; numbers are important and should be well considered when we use them at all.
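As a concrete sketch of what a registry entry might store (all field names hypothetical; this is just one way to structure it in Python, not a proposal for the actual site code):

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Bet:
    proposer: str
    acceptor: str
    claim: str
    proposer_probability: float  # the confidence the proposer put on the claim
    stakes: str
    resolve_by: date
    winner: Optional[str] = None  # filled in when the bet resolves

registry: List[Bet] = [
    Bet("alice", "bob", "claim X resolves true by 2012", 0.8, "$20",
        date(2012, 1, 1)),
]

# Purpose 2 above, as a query: how often did ~80%-confidence claims actually win?
near_80 = [b for b in registry
           if abs(b.proposer_probability - 0.8) < 0.05 and b.winner is not None]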
In the meantime, one comment on that other interesting reading at Less Wrong. It has been fun sifting through various posts on a variety of subjects. Every time I leave I have the urge to give them the Vulcan hand signal and say "Live Long and Prosper". LOL.
I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...
Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.
Michael Vassar suggests The Great Gatsby (I think; the suggestion was written confusingly in parallel with the names of authors, but I don't think there was ever an author Ga... (read more)
How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?
I was five years old and walking down the hall outside of my kindergarten classroom and I suddenly realized that I had control over what was happening inside of my mind's eye. This manifested itself by me summoning an image in my head of Gene Wilder as Willy Wonka.
Is it proper to consider that the moment when I became self-aware? Does anyone have a similar anecdote?
(This is inspired by Shannon's mention of her child exploring her sense of s... (read more)
I don't have any memory of a similar revelation, but one of my earliest memories
is of asking my mother if there was a way to 'spell letters' - I understood that
words could be broken down into parts and wanted to know if that was true of
letters, too, and if so where the process ended - which implies that I was
already doing a significant amount of abstract reasoning. I was three at the
time.
0MrHen12y
Strange, I have no such memory. The closest thing I can think of is my big
Crisis of Faith when I was 17. I realized I had much more power over myself than
I had previously thought. It scared me a lot, actually.
I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)
I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.
Does anyone understand the last two paragraphs of the comment that I'm
responding to? I'm having trouble figuring out whether Warrigal has a real
insight that I'm failing to grasp, or if he is just confused.
A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:
Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.
It really does surprise me how often people do things like this.
This is a quote from someone being interviewed about bad but common passwords
[http://www.nytimes.com/2010/01/21/technology/21password.html?em]. Would this be
labeled a semantic stopsign [http://lesswrong.com/lw/it/semantic_stopsigns/], or
a fake explanation [http://lesswrong.com/lw/ip/fake_explanations/], or ...?
2RobinZ12y
Fake explanation - he noticed a pattern and picked something which can cause
that kind of pattern, without checking if it would cause that pattern.
-1thomblake12y
This isn't an example of a logical fallacy; it could be read that way if the
conclusion was "their way must be right" or something like that. As it is, the
heuristic is "X is successful and Y is part of X's business plan, so Y probably
leads to success".
If you think their planning is no better than chance, or that Y usually only
works when combined with other factors, then disagreeing with this heuristic
makes sense. Otherwise, it seems like it should work most of the time.
Affirming the consequent, in general, is a good heuristic.
4MrHen12y
Within the context of the article, the bigger form of the argument can be
phrased as such:
This is bad and wrong. As a snap judgement, it is likely that releasing
cross-platform software is a more successful thing to do, but using that snap
judgement to build bigger arguments is dangerous.
This is an example of an argument from authority
[http://en.wikipedia.org/wiki/Argument_from_authority] and of the fallacy of division
[http://en.wikipedia.org/wiki/Fallacy_of_division].
But Y doesn't lead to success. If I say, "Blizzard is successful and making
video games is part of their business plan, so making video games probably
leads to success," the problem should be obvious. Why would the claim be true
if I use "always releases Mac versions of their games simultaneously" instead
of "makes video games"?
As far as I can tell, the emphasized part is the whole reason you should be
careful. Picking one part out of a business plan is stupid. If you know enough
about the subject material to determine whether that part of the business plan
is applicable to whatever you are doing, fair enough, but this is a judgement
call above and beyond the statements given in this example.
Maybe, but it is still a logical fallacy.
Once upon a time I was pretty good at math, but either I just stopped liking it or the series of dismal school teachers I had turned me off of it. I ended up taking the social studies/humanities route and somewhat regretting it. I've studied some foundations of mathematics stuff, symbolic logic and really basic set theory, and usually find that I can learn pretty rapidly if I have a good explanation in front of me. What is the best way to teach myself math? I stopped with statistics (high school, advanced placement) and never got to calculus. I don't expect to become a math whiz or anything, I'd just like to understand the science I read better. Anyone have good advice?
I'm currently trying to teach myself mathematics from the ground up, so I'm in a
similar situation as you. The biggest issue, as I see it, is attempting to
forget everything I already "know" about math. Math curriculum at both the
public high school and the state university I attended was generally bad; the
focus was more on memorizing formulas and methods of solving prototypical
problems than on honing one's deductive reasoning skills, which if I'm not
mistaken is the core of math as a field of inquiry.
So obviously textbooks are a good place to start, but which ones don't suck? Well,
I can't help you there, as I'm trying to figure this out myself, but I use a
combination of recommendations from this page
[http://www.ocf.berkeley.edu/~abhishek/chicmath.htm] and looking at ratings on
Amazon.
Here are the books I am currently reading, have read portions of, or have on my
immediate to-read list, but take this with a huge grain of salt as I'm not a
mathematician, only an aspiring student:
* How to Prove It: A Structured Approach by Velleman - Elementary proof
strategies; a good reference if you find yourself routinely unable to
follow proofs
* How to Solve It by Polya - Haven't read it yet but it's supposedly quite
good.
* Mathematics and Plausible Reasoning, Vol. I & II by Polya - Ditto.
* Topics in Algebra by Herstein - I'm not very far into this, but it's fairly
cogent so far
* Linear Algebra Done Right by Axler - Intuitive, determinant-free approach to
linear algebra
* Linear Algebra by Shilov - Rigorous, determinant-based approach to linear
algebra. Virtually the opposite of Axler's book, so I figure between these
two books I'll have a fairly good understanding once I finish.
* Calculus by Spivak - Widely lauded. I'm only 6 chapters in, but I immensely
enjoy this book so far. I took three semesters of calculus in college, but I
didn't intuitively understand the definition of a limit
2Paul Crowley12y
I've learned an awful lot of maths from Wikipedia.
0Bo10201012y
I've learned a lot of equations from Wikipedia, but I've not really learned a
lot of real math - that's really come from doing homework problems and thinking
about them later.
1mkehrt12y
I've definitely learned a lot of math from Wikipedia. I don't generally do the
proofs myself, so I don't really have any of the elusive "mathematical
maturity", but I definitely have learned a lot of abstract algebra, category
theory and mathematical logic just by reading the definitions of various things
on Wikipedia and trying to understand them.
On the other hand, I am pretty motivated to learn these things because I
actively enjoy them. Other branches of math, I am much less interested in and so
I don't learn that much. But it is possible!
0Christian_Szegedy12y
I don't understand why/how anyone would learn equations without understanding
them.
I agree that wikipedia is not a good substitute for textbooks in general,
nor does it replace actual practice in problem solving. You can still learn
a lot of math (even complete proofs) from it and get good first impressions of
whole areas. It even contains high-quality introductory material on certain
important topics and facts.
However I completely agree with you: the most important thing in math is to
think about problems. Undergraduate Springer books (yellow series) typically
contain a lot of problems alongside actual text. My method is the following:
* 1) Read one chapter and write up the statement of every theorem.
* 2) Go through all statements and reproduce the proof without rereading the
material
* 3) Iterate 1)-2) if you are stuck with any of the proofs
* 4) Proceed with the problem section and try to solve all problems. Omit
problems only if they are marked as hard and if you are stuck after an hour
of thinking.
The most natural topics to start with are linear algebra and calculus. Working
through the undergraduate material in the above way takes a long time, but you
will build a firm base for further studies.
0Vladimir_Nesov12y
I've always found that memorizing proofs or actually doing the exercises (as
opposed to taking time to understand the structure of the solutions to some of
them, if the main text doesn't already cover the representative propositions)
hits diminishing returns, in most cases anyway, when you are learning for
yourself. The details get forgotten too quickly to justify the effort, the
useful thing is to get good hold of the concepts (which by the way can be
glossed over even with all the proofs and exercises, by relying on brittle
algorithm-like technique instead of deeper intuition).
1Christian_Szegedy12y
I don't vote for blind memorization either. However, I think that if one
cannot reconstruct a proof, then it is not understood either. Trying to
reconstruct the thought process from memory will highlight the parts with
incomplete understanding.
Of course in order to fully understand things one should look at additional
consequences, solve problems, look at analogues, understand motivation etc.
Still, the reconstruction of proofs is a very good starting point, IMO.
0Vladimir_Nesov12y
Sure. I'm pointing to the difference between making sure that you can do proofs
(not necessarily reconstruct the particular ones from the textbook) and
exercises, and actually reconstructing the proofs and doing the exercises.
Getting to the point of reliably managing the former can easily take a tenth
of the time the latter does. You won't be as fast at performing the proofs in
the coming weeks if need be, but once a couple of years pass you'd be equally
bad both ways (but you'd still have the concepts!).
0Bo10201012y
Perhaps I should have said "looked up" instead of "learned." That is, I
understand the Laplace transform, and have done many homework problems that
involved deriving common transform pairs. However, when I need one, I don't try
to re-derive it or rely on memory; I go look it up at Wikipedia and use it.
-3Vladimir_Nesov12y
...reading textbooks?
0Jack12y
I'm looking for specific advice. Do you know of good textbooks?
0Vladimir_Nesov12y
Which textbook is good on a given topic depends on the student's current level,
and more importantly on what exactly you want to learn. "Math"?.. A couple of
random suggestions that appealed to me aesthetically, but YMMV:
* F. W. Lawvere & S. H. Schanuel (1991). Conceptual mathematics: a first
introduction to categories. Buffalo Workshop Press, Buffalo, NY, USA.
* S. Mac Lane & G. Birkhoff (1999). Algebra. American Mathematical Society, 3
edn.
(Both can be found on Kad [http://en.wikipedia.org/wiki/Kad_network].)
0Tyrrell_McAllister12y
What, specifically, do you want to learn?
2Jack12y
If the Simple Math of Everything
[http://lesswrong.com/lw/l7/the_simple_math_of_everything/] were a real
textbook, I'd read that. But I've gathered calculus is the right place to start.
Probability theory would be next, I guess.
Probability theory would be next, I guess.
When people here say they are signed up for cryonics, do they invariably mean "signed up with the people who contract to freeze you, and signed up with an instrument for funding suspension, such as life insurance"?
I have contacted Rudi Hoffman to find out just what getting "signed up" would entail. So far I'm without a reply, and I'm wondering when and how to make a second attempt, or whether I should contact CI or Alcor directly and try to arrange things on my own.
Not being a US resident makes things much more complicated (I live in France). Are there other non-US folks here who are "signed up" in any sense of the term?
Feature request; feel free to ignore if it is a big deal or has been requested before.
When messaging people back and forth, it would be nifty to be able to see the thread. I see glimpses of this feature but it doesn't seem fully implemented.
I suggested something along these lines on the feature request thread. I'd like
to be able to find old message exchanges. Finding messages I sent is easy, but
received messages are in the same place as comment replies and aren't
searchable.
Does undetectable equal nonexistent? Examples: There are alternate universes, but there's no way we can interact with them. There are aliens outside our light cones. There are past events all evidence of which has been erased.
If you mean undetected, then clearly not, since we might yet detect those
things. If you mean necessarily undetectable, I don't see how the question is
answerable, or even has an answer at all, in some sense.
0Nick_Novitski12y
Undetectability is hard (impossible?) to establish outside of thought
experiments. Real examples are limited to undetected and
apparently-unlikely-to-be-detected phenomena.
But if I took your question charitably, I would personally say absolutely yes.
I've always been fond of stealing Maxwell's example: if there were a system of
ropes hanging from a belfry, which was itself impossible to peer inside, but
which produced some measurable relation among the positions of and tensions on
all the ropes, then what can be said to "exist" in that belfry is nothing more
or less than that relationship, in whatever expression you choose (including
mechanically, with imaginary gears or flywheels or fluids or whatever). And if
later we can suddenly open it up and find that there were some components that
had no effect on the bell pull system (for example, a trilobite fossil with a
footprint on it), then I would have no personal issue with saying that those
components did not exist back "when it was impossible to open the belfry."
But I hold this out of convenience, not rigor.
First: I'm having a very bad brain week; my attempts to form proper-sounding sentences have generally been failing, muddling the communicative content, or both. I want to catch this open thread, though, with this question, so I'll be posting in what is to me an easier way of stringing words together. Please don't take it as anything but that; I'm not trying to be difficult or to display any particular 'tone of voice'. (Do feel free to ask about this; I don't mind talking about it. It's not entirely unusual for me, and is one of the reasons that I'm fairly ... (read more)
For me personally, I would prefer transcripts and written summaries of any audio
or video content. I find it very difficult to listen to and learn from hearing
audio when sitting at a computer, and having text or a transcript to read from
instead helps a lot. It allows me to read at my own pace and go back and forth
when I need to.
I'd also like any audio and video content to be easily and separately
downloadable, so I could listen to it at my own convenience. And I'd want any
slides or demonstrations to be easily printable, so I could see it on paper and
write notes on it. (As you can probably tell, I'm more of a verbal and visual
learner.)
By the way, your comment seemed totally normal to me, and I didn't notice any
unusual tone, but I'm curious what you were referring to.
2Alicorn12y
Seconded the need for transcriptions. This is also a matter of disability
access, which is frequently neglected in website design - better to have it
there from the beginning than wait for someone to sue.
0AdeleneDawner12y
We're already keeping disability access in mind. SecondLife and OpenSim are
generally very good with accessibility for everyone but visually impaired folks,
for whom they're unfortunately very hard to make accessible.
0AdeleneDawner12y
Having the disclaimer seems to help me write more coherently, for whatever
reason; compare the above post to this one
[http://lesswrong.com/lw/1kn/two_truths_and_a_lie/1dtf] for an example. There
are still noticeable (to me) differences, though - my vocabulary is odd in a way
that only anger or this kind of problem evokes (more unusual or overly specific
words, fewer generalizations or 'fuzzy' ways of putting things), and I'm having
trouble adding sub-points into the flow (hence the unusual number of
parentheticals) and connecting main points together in the normal way. I know
there's a more correct way of putting that 'grades 4-8' point in there than just
tacking it on at the end.
0byrnema12y
That's interesting. I distinctly remember reading your comment, leaving the
computer, going about my business, and thinking that the idea that a deficiency
could be selected for was an interesting point.
(But yes, while I understood your comment just fine, I do notice some
awkwardness, for example, in the second sentence, easily fixed by just deleting
the phrase "it's acting on".)
0AdeleneDawner12y
I definitely stand by the point; my ability to think logically is only mildly
impaired, if at all. I generally expect myself to be able to communicate such
things in a way that gets a less annoyed response than I did, though, or at
least to be able to predict when I'm going to get such a response.
1byrnema12y
Grades 4-8 is an interesting category, and I wouldn't know to what extent a
successful model for online learning has already been implemented for this age
group.
For a somewhat younger age group, I would suggest starfall.com
[http://www.starfall.com/] as an online learning site that seems to have a
number of very effective elements. One element that I found remarkable is that
frequently after a "learning lesson", the lesson solicits feedback. (For
example, see the end of this lesson
[http://www.starfall.com/n/holiday/gingerbread/load.htm?f&n=main]). The feedback
is extremely easy to provide -- for example, the child just picks a happy face
or an unhappy face indicating whether they enjoyed the lesson. (For older kids,
it might instead be a choice between a puzzled expression and an "I understand!"
expression.)
In any case, I think the value of building in feedback and learning assessment
mechanisms would be an important thing to consider in the planning stages.
0byrnema12y
I find myself in an analogous situation: some guidance is needed in the
development of on-line learning technology (for adults), and the responsibility
to some extent falls on me since I am more 'pro-technology' than my coworkers.
I'll be interested in the results of this thread.
That is really, really cool. Not particularly rationality-related (except as
regards the display format), but really cool.
0Kevin12y
Yeah, it's basically just pretty pictures. However, they're pretty pictures that
probably fill an interesting knowledge gap for many here.
Perhaps what is rationality-related is why these orbitals are never taught to
students. I suppose because so few atoms are actually configured in higher
orbitals, but students of all ages should find the pictures themselves
interesting and understandable.
In high school chemistry, our book went up to d orbitals, and actually said
something about how the f orbitals are not shown because they are impossible or
very difficult to describe, which is blatantly untrue. I found some pictures of
the f orbitals on the internet and showed my teacher (who was one of my best
high school teachers) and he was really interested and showed all of his classes
those pictures.
It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually commu... (read more)
No. (Exploratory commentary seemed appropriate for Open Thread.)
1Zack_M_Davis12y
This analysis is all very well and good taken on its own terms, but it
conceals---very cleverly conceals, I do compliment you, for surely, surely you
had seen it yourself, or some part of you had---it conceals assumptions that do
not apply to our own realm. Essences, discreteness, digitality---these are all
artifacts born of optimizers; they play no part in the ontology of our
continuous, reductionist world. There is no pure agonium, no thing-that-hurts
without having any semblance of a reason for being hurt---such an entity would
require a very masterful designer indeed, if it could even exist at all. In
reality, there is no threshold. We face cries that fractionally have referents.
And the quantitative extent to which these cries don't have enough structure for
us to extrapolate a volition is exactly again the quantitative extent to which
any stray stream of memes has license to reshape the entity, pushing it towards
the strong attractor. You present us with this bugaboo of entities that we
cannot help because they don't even have well-defined problems, but entities
without problems don't have rights, either. So what's your problem? You just
spray the entity with appropriate literature until it is a creature. Sculpt the
thing like clay. That is: you help it by destroying it.
(I read CFAI once 1.5 years ago, and haven't reread it since obtaining the
current outlook on the problem, so some mistakes may be present.)
"Challenges of Friendly AI" and "Beyond anthropomorphism" seem to be still
relevant, but were mostly made obsolete by some of the posts on Overcoming Bias.
"An Introduction to Goal Systems" is hand-made expected utility maximisation,
"Design of Friendship systems" is mostly premature nontechnical speculation that
doesn't seem to carry over to how this thing could be actually constructed (but
at the time could be seen as intermediate step towards a more rigorous design).
"Policy implications" is mostly wrong.
Something has been bothering me ever since I began to try to implement many of the lessons in rationality here. I feel like there needs to be an emotional reinforcement structure or a cognitive foundation that is both pliable and supportive of truth-seeking before I can even get into the why, how, and what of rationality. My successes in this area have been only partial, but it seems like the better structured the cognitive foundation is, the easier it is to adopt, discard, and manipulate new ideas.
I understand that is likely a fairly meta topic and woul... (read more)
I think your phrasing of your question is confusing. Are you asking for help
putting yourself into a mindset conducive to learning and developing rationality
skills?
0CassandraR12y
Let me see if I can be more clear. In my experience I have an emotional
framework from which I hang beliefs. Each belief has specific emotional
reinforcement or structure that allows me to believe it. If I revoke that
reinforcement then very soon after I find that I no longer hold that belief. I
guess the question I should ask first is: is this emotional framework real?
Did I make it up? And if it is real, then how can I use it to my advantage?
How did I build this framework and how do I revoke emotional support? I have
good reason to think that the framework isn't simply natural to me since it has
changed so much over time.
3GuySrinivasan12y
One technique I use to internalize certain beliefs is to determine their implied
actions, then take those actions while noting that they're the sort of actions
I'd take if I "truly" believed. Over time the belief becomes internal and not
something I have to recompute every time a related decision comes up. I don't
know precisely why this works but my theory is that it has to do with what I
perceive my identity to be. Often this process exposes other actions I take
which are not in line with the belief. I've used this for things like "animal
suffering is actually bad", "FAI is actually important", and "I actually need to
practice to write good UIs".
1CassandraR12y
This is similar to my experience. Perhaps a better way to express my problem is
this: What are some safe and effective ways to construct and dismantle an
identity? And what sorts of identity are most able to incorporate new
information and process it into rational beliefs? One strategy I have used in
the past is to simply not claim ownership of any belief so that I might release
it more easily but in this I run into a lack of motivation when I try to act on
these beliefs. On the other hand if I define my identity based on a set of
beliefs then any threat to them is extremely painful.
That was my original question: how can I build an identity or cognitive
foundation that motivates me but is not painfully threatened by
counter-evidence?
3orthonormal12y
The litany of Tarski [http://wiki.lesswrong.com/wiki/Litany_of_Tarski] and the
litany of Gendlin [http://wiki.lesswrong.com/wiki/Litany_of_Gendlin] exemplify a
pretty good attitude to cultivate. (Check out the posts linked in the Litany of
Gendlin wiki article; they're quite relevant too. After that, the sequence on
How to Actually Change Your Mind
[http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind] contains still
more helpful analysis and advice.)
This can be one of the toughest hurdles for aspiring rationalists. I want to
emphasize that it's OK and normal to have trouble with this, that you don't have
to get everything right on the first try (and to watch out if you think you do),
and that eventually the world will start making sense again and you'll see it
was well worth the struggle.
1Alicorn12y
The emotional framework of which you speak doesn't seem to resemble anything I
can introspectively access in my head, but maybe I can offer advice anyway. Some
emotional motivations that are conducive to rationality are curiosity
[http://yudkowsky.net/rational/virtues], and the powerful need to accomplish
some goal [http://lesswrong.com/lw/nb/something_to_protect/] that might depend
on you acting rationally.
0Paul Crowley12y
How much of the Sequences [http://wiki.lesswrong.com/wiki/Sequences] have you
read? A lot of them are about, essentially, how to feel like a rationalist.
3CassandraR12y
I have read pretty much everything more than once. It is pretty difficult to
turn reading into action though. Which is why I feel like there is something I
am missing. Yep.
Try this
[http://scholar.google.ca/scholar?sourceid=navclient&rlz=1T4GGLL_en&q=MDL%20and%20MML%3A%20Similarities%20and%20Differences&um=1&ie=UTF-8&sa=N&hl=en&tab=ws]
.
0Psy-Kosh12y
Reading it now, thanks.
Okay, from the initial description, it looks like MML considers the TOTAL
length, where the message includes both the theory and the additional info
needed to reconstruct the full data, while MDL ignores aspects of the
description of the theory for the purposes of measuring the length.
Did I get that right or am I misunderstanding?
0Cyan12y
I'm a bit confused on that point myself. Before finding that document, my
understanding was that MML averaged over the prior, while MDL avoided having a
prior by using some kind of minimax approach, but the paper I pointed you to
doesn't seem to say anything about that.
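For what it's worth, the two-part "total length" idea is easy to demonstrate with a toy computation (my own sketch in Python; the flat 8-bit cost for stating the parameter is a crude stand-in for the prior-derived cost a real MML scheme would use):

import math

def total_length(data, p, param_bits):
    """Bits to send the model plus bits to send the data given the model."""
    ones = sum(data)
    zeros = len(data) - ones
    data_bits = -(ones * math.log2(p) + zeros * math.log2(1 - p))
    return param_bits + data_bits

data = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]  # 13 ones, 3 zeros

# Baseline "no theory": assume a fair coin, so nothing to say about the model.
print(total_length(data, 0.5, param_bits=0))            # 16.0 bits
# A fitted theory (p = 13/16) must also pay for its own description...
print(total_length(data, 13 / 16, param_bits=8))        # ~19.1 bits: worse here
# ...but with ten times the data, the theory pays for itself.
print(total_length(data * 10, 13 / 16, param_bits=8))   # ~119 bits vs. 160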
Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?
Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit ... (read more)
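Since the measure mentioned above is built on mutual information, here is a minimal sketch of that ingredient in Python (just the textbook I(X;Y) estimated from samples; this is not Tononi's full integration measure):

import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two perfectly correlated binary components share 1 bit...
print(mutual_information([(0, 0), (1, 1)] * 50))                  # 1.0
# ...while independent components share none.
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # 0.0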
It's a very interesting question. I think it's pretty straightforward that
'ourselves' is a composite of 'awarenesses' with non-overlapping mutual
awareness.
Some data with respect to inebriation:
* drunk people would pass a Turing test, but the next morning when events are
recalled, they feel like someone else's experiences. But then when drunk again,
the experiences again feel immediate.
* when I lived in France, most of my socialization time was spent inebriated.
For years thereafter, whenever I was intoxicated, I felt like it was more
natural to speak in French than English. Even now, my French vocabulary is
accessible after a glass of wine.
1PhilGoetz12y
That is interesting, but not what I was trying to ask. I was trying to ask if
there could be separate, smaller, less-complex, non-human consciousnesses inside
every human. It seems plausible (not probable, plausible) that there are, and
that we currently have no way of detecting whether that is the case.
Today at work, for the first time, LessWrong.com got classified as "Restricted:Illegal Drugs" under eSafe. I don't know what set that off. It means I can't see it from work (at least, not the current one).
How do we fix it, so I don't have to start sending off resumes?
I went to the eSafe site and while looking up what the "illegal drugs"
classification meant, submitted a request for them to change their status for
LessWrong.com. A pop-up window told me they'd look into it.
You can check (and then apply to modify) the status of LessWrong here
[http://www.aladdin.com/support/checking-classification.aspx].
0SilasBarta12y
Thanks! I did as you suggested, and it seems to have been removed from that
category, since I can access LW from work now. :-)
Ahem, back to analyzing aircraft...
0byrnema12y
Yes, that is what did it because I had chosen the category, "Blogs / Bulletin
Boards" whether that is the most appropriate category or not. They have a fast
response time!
2MatthewB12y
That may have been my fault. I mentioned that I used to have drug problems and
mentioned specific drugs in one thread, so that may have set off the filters. I
apologize if this is the case. The discussion about this went on for a day or
two (involving maybe six comments).
I do hope that is not the problem, but I will avoid such topics in the future to
avoid any such issues.
1byrnema12y
I doubt it, all of the words you used (name brands of prescription drugs) were
used elsewhere, often occurring in clusters just as in your thread.
By the way, do you have any idea why you don't have an overview page?
0MatthewB12y
No... I would really like to have one, although I currently would not know what
to put on it.
I know that when I click on my name, rather than taking me to a page, like the
ones I see for other members, I see a banner that reads "No such page exists"
0Paul Crowley12y
Wow, that's crazy - have you filed a bug?
0MatthewB12y
Sorry for appearing dense... But, how would I go about filing a bug?
To whom would I file a bug?
0Bo10201012y
Usually that sort of thing filters proxy servers too, but I've found that Google
Web Transcode [http://m.google.com/gwt/n?u=http%3A//lesswrong.com&_gwt_noimg=1]
usually isn't blocked.
Of course it strips stylesheets (and optionally images), but I usually consider
that a feature.
I am going to be hosting a Less Wrong meeting at East Tennessee State University in the near future, likely within the next two weeks. I thought I would post here first to see if anyone at all is interested and if so when a good time for such a meeting might be. The meeting will be highly informal and the purpose is just to gauge how many people might be in the local area.
Please review a draft of a Less Wrong post that I'm working on: Complexity of Value != Complexity of Outcome, and let me know if there's anything I should fix or improve before posting it here. (You can save more substantive arguments/disagreements until I post it. Unless of course you think it completely destroys my argument so that I shouldn't even bother. :)
I think "Rapture of the Geeks" is a meme that could catch on with the general
public, but this community seems to have reluctance to engage in
self-promotional activities. Is Eliezer actively avoiding publicity?
So I am back in college and I am trying to use my time to my best advantage. Mainly I am using college as an easy way to get money to fund room and board while I work on my own education. I am doing this because I was told, here among other places, that there are many important problems that need to be solved, and I wanted to develop skills to help solve them because I have been strongly convinced that it is moral to do so. However, beyond this I am completely unsure of what to do. So I have the furious need for action but seem to have no purpose guiding that actio... (read more)
Socialise a lot. Learn the skills of social influence and the dynamics of power
at both the academic and the practical level.
AnnaSalamon made this and other suggestions when Calling for SIAI fellows
[http://lesswrong.com/lw/1hn/call_for_new_siai_visiting_fellows_on_a_rolling/].
I imagine that the skills useful for SIAI wannabes could have significant
overlap with those needed for whatever project you choose to focus on. Specific
technical skills may vary somewhat.
Schooling isn't about education. This article is pretty mind-boggling: apparently, it's been a norm until now in Germany that school ends at lunchtime and the children then go home. Considering how strong the German economy has traditionally been, this raises serious questions of the degree that elementary school really is about teaching kids things (as opposed to just being a place to drop off the kids while the parents work).
Oh, and the country is now making the shift towards school in the afternoon as well, driven by - you guessed it - a need for women to spend more time actually working.
Assuming you were using your own computer at home and not a public Wi-Fi hotspot
or public computer, it could be that you use the same ISP and were assigned an
IP address previously used by another user. Given the relatively low number of
users on lesswrong, though, this seems like a somewhat unlikely coincidence.
1MrHen12y
Hmm... I was at a coffee shop the other day. I don't see how anyone else there
(or anyone else in the entire city I live in) would have ever heard of
LessWrong. The block appears to have been created today, however, which makes
even less sense.
1Vladimir_Nesov12y
I'll be more careful with "Ban this IP" option in the future, which I used to
uncheck during the spam siege a few months back, but didn't in this case.
Apparently the IP is only blocked for a day or so. I've removed it from the
block list, please check if it works and write back if it doesn't.
0MrHen12y
It works again.
Honestly, I have no problem not editing the wiki for a few days if it helps
block spammers. It's not like I am adding anything critical. I was just
confused.
2Vladimir_Nesov12y
It'd only be necessary to block spammers by IP if they actually relapse (and
after a captcha mod was installed, spammers are not a problem), but the fact
that you share IP with a spammer suggests that you should check your computer's
security.
0MrHen12y
Well, in the last week I've probably had at least three IP address assigned to
my computer while editing the wiki. It is hard to know where to begin. I think
someone I know has a good program to detect outgoing traffic... that may work.
1Nick_Tarleton12y
"Bella" was blocked
[http://wiki.lesswrong.com/mediawiki/index.php?title=Special:Log&type=block&page=User:Bella]
for adding spam links. Could your computer be a zombie
[http://en.wikipedia.org/wiki/Zombie_computer]?
0MrHen12y
Mmm... it's a Mac so I never think about it. I have no idea where I would have
picked it up. Does anyone know a way to check? (On a Mac.)
0mattnewport12y
A spam bot using your ISP is not unlikely; that's probably what's happened.
0MrHen12y
My ISP? Or my IP address? I assume the latter.
0mattnewport12y
Most ISPs recycle IP addresses between subscribers periodically. So someone
using the same ISP as you could have ended up with the same IP address.
0Vladimir_Nesov12y
But how many users do you expect sit on the same IP? And thus, what is the prior
probability that basically the only spammer in weeks (there was only one
other) would happen to have the same IP as one of the few dozen (or fewer)
users active enough to notice a day's IP block? This explanation sounds like a
rationalization of a hypothesis privileged because of availability.
0mattnewport12y
I didn't know the background spamming rate, but it does seem a little unlikely,
doesn't it? A chance reuse of the same IP address does seem improbable, but a
better explanation doesn't spring to mind at the moment.
0Vladimir_Nesov12y
Not a reason to privilege a known-false hypothesis. It's how a lot of
superstition actually survives: "But do you have a better explanation? No?".
0MrHen12y
Ah, okay. I completely misinterpreted your previous comment.
Strange fact about my brain, for anyone interested in this kind of thing:
Even though my recent top-level post has (currently) been voted up to 19, earning me 190 karma points, I feel like I've lost status as a result of writing it.
I quite like swearing, but I don't think it primes people to think and respond
rationally in general, and is usually best avoided. Like wedrifid, I'm inclined
to argue for an exception for "bullshit", which is a term of art
[http://en.wikipedia.org/wiki/On_Bullshit].
2RobinZ12y
I don't know of an official policy, but swearing can be distracting. Avoid?
3wedrifid12y
I advocate the use of the term Bullshit. Both because it is a good description of a
significant form of bias and because the profanity is entirely appropriate. I
really, really don't like seeing the truth distorted like that.
More generally I don't particularly object to swearing but as RobinZ notes it
can be distracting. I don't usually find much use for it.
2Christian_Szegedy12y
I'd propose to use the word "bulshytt" instead. ;)
Interesting heuristic - I would be curious to find if anyone else has followed
something similar to good effect, but it sounds conceptually reasonable.
What's the right prior for evaluating an H1N1 conspiracy theory?
I have a friend, educated in biology and business, very rational compared to the average person, who believes that H1N1 was a pharmaceutical company conspiracy. They knew they could make a lot of money by engineering a less-deadly flu that would extend the flu season to be year-round. Because it is very possible for them to engineer such a virus and the corporate leaders are corrupt sociopaths, he thinks it is 80% probable that it was a conspiracy. Again, he thinks that because it was possible for ... (read more)
Any such conspiracy would have to be known by quite a few people and so would
stand an excellent chance of having the whistle blown on it. Every case I can
think of where large Western companies have been caught doing anything like that
outrageously evil, they have started with a legitimate profit-making plan, and
then done the outrageous evil to hide some problem with it.
0roland12y
Where do those numbers come from? 80%, 10%???
0Kevin12y
They're almost made up, which makes any attempt at Bayesian analysis not all
that meaningful... I'd welcome other tools. He gave me the 80% probability
number so I felt obligated to give my own probability.
Consider the numbers to have very wide bounds, or to be more meaningful
expressed in words -- he thinks there is a conspiracy, I don't think there is a
conspiracy, but neither of us are absolutely confident about it.
0roland12y
Exactly. I think there is no rational basis for answering your question.
Your friend has a distrust of corporate leaders (here I agree with him), and his
theory is probably based on his feeling of disgust for their practices. So his
theory probably has more of an emotional basis than a rational one. That doesn't
mean it is wrong, just that there aren't any rational reasons for believing it.
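One way to make the disagreement precise is to put it in odds form. The sketch below (Python, with all numbers purely illustrative rather than estimates of anything) shows the point several commenters are circling: "it was possible for them to do it" is roughly as expected under either hypothesis, so its likelihood ratio is near 1 and the posterior barely moves from the prior:

# Posterior odds = prior odds * likelihood ratio (all numbers illustrative).
prior_conspiracy = 0.01                 # assumed low prior for "engineered flu"
prior_odds = prior_conspiracy / (1 - prior_conspiracy)

# "Engineering such a virus is possible" is expected under both hypotheses,
# so it carries almost no evidence either way.
likelihood_ratio = 1.1

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))         # ~0.011: possibility alone moves little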
I was recently asked to produce the indefinite integral of ln x, and completely failed to do so. I had forgotten how to do integration by parts in the 6 months since I had done serious calculus. Is there anyone who knows of a calculus problem of the day or some such that I might use to retain my skills?
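For reference, the requested integral takes a single integration by parts. In LaTeX, with $u = \ln x$ and $dv = dx$ (so $du = dx/x$ and $v = x$):

\[
\int \ln x \, dx = x \ln x - \int x \cdot \frac{1}{x} \, dx = x \ln x - x + C.
\]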
Why would you retain a skill you don't use? Conversely, if you use the skill,
you don't need "problem of the day".
2Vladimir_Nesov12y
[Parent at -2.] Is the advice to not waste time and effort on stuff you don't
need really that bad? (Hypothetical, under the assumption that you really don't
need it; if you do need it occasionally, in the majority of cases it'll be
enough to relearn it directly on demand, rather than maintaining it for
perfection's sake.)
3Tyrrell_McAllister12y
You wrote, "Conversely, if you use the skill, you don't need 'problem of the day'."
But suppose that you use a skill only occasionally. Then you still need the
skill. But to retain a skill, you might need to use it frequently. Therefore,
you might need to inflate artificially how often you use it, so that you retain
it. That is how it can be that you use a skill and yet still need a "problem of
the day".
1Sniffnoy12y
Agreed. Best is if you can learn something well enough that even if you don't
remember it, you can rederive it; but usually good enough is learning something
well enough that you can do it if you've got a textbook to remind you.
1Paul Crowley12y
In this instance, if I needed an answer to this question I'd use Maxima
[http://maxima.sourceforge.net/].
0Zack_M_Davis12y
Yes, it's a mind projection fallacy. Reality doesn't need anything from us;
there is no needfulness apart from [http://lesswrong.com/lw/ww/high_challenge/]
what people want to do.
1Vladimir_Nesov12y
Humbug. What you are actually saying is that wanting to know can be a terminal
value, so why won't you just say that?
And of course, I know that, but there is just too much stuff out there to learn,
so it's a necessity that the things you do choose to learn are in some sense
better than the rest (otherwise you lose something), more beautiful or more
useful. Just saying that one would learn X because "learning in general" is fun
isn't enough.
0Tyrrell_McAllister12y
I didn't read Vladimir as supposing that there was any other kind.
-1Zack_M_Davis12y
Yeah, but then why privilege "I need calculus for my job" over "I want to know,
I want to know, though the Earth burns and the stars are torn apart for
computronium, I WILL UNDERSTAND"?
1LucasSloan12y
A. I expect to need to use it in the fall when I go to college. B. I want to
know how to do calculus.
Ethical problem. It occurred to me that there's an easy, obvious way to make money by playing slot machines: Buy stock in a casino and wait for the dividends. Now, is this ethically ok? On the one hand, you're exploiting a weakness in other people's brains. On the other hand, your capital seems unlikely, at the existing margins, to create many more gamblers, and you might argue that you are more ethical than the average investor in casinos.
It's a theoretical issue for me, since my investment money is in an index fund, which I suppose means I own some tiny share in casinos anyway and might as well roll with it. But I'd be interested in people's thoughts anyway.
Investing in a company is different than playing slot machines. Casinos are
entertainment providers: they put on shows, sell food and drink, and provide
gaming. They have numerous expenses as well. Investing in a casino is not
guaranteed to make money in the same way the house is in roulette, for instance.
Casinos do go bankrupt and their stock prices do go down.
In addition, when you buy a share of stock on the open market, you buy it from
another investor, not the company, so you're not providing any new capital to
the company.
I don't believe there is anything ethically wrong with either gambling or
funding casinos. If people want to gamble, that's their choice.
-1RolfAndreassen12y
Nu, nothing's certain, but buying stock does presumably have a positive expected
value.
Touching the capital, you can reframe the question as "Buy casino bonds" or
"invest in a casino IPO". Besides, even when buying stock from an existing
investor, you are sending a signal of the value of that stock - so many mills
higher than what the next guy in line would have paid - and that provides
working capital in the form of the value of the self-owned stocks, against which
the casino can borrow.
1Wei_Dai12y
I'm curious what made you think about this problem. I'm sure you're aware of the
efficient market hypothesis... do you have some private information that
suggests casino stocks are undervalued?
By coincidence I was in Las Vegas a couple of weeks ago and did some research
before I left for the trip. It turns out that many casinos (both physical and
online) offer gambles with positive expected value for the player, as a way to
attract customers (most of whom are too irrational to take proper advantage of
the offers, I suppose). There are entire books and websites devoted to this. See
http://en.wikipedia.org/wiki/Comps_%28casino%29
[http://en.wikipedia.org/wiki/Comps_%28casino%29] and
http://www.casinobonuswhores.com/ [http://www.casinobonuswhores.com/]
0RolfAndreassen12y
It was a random thought. I don't think casino stocks are particularly
undervalued, but that doesn't affect the basic analysis: If you own such stocks,
you're basically making money off slot machines, in the same way that owning
stock in a widget factory means you're making money from the production of
widgets.
"Imagine the human race gets wiped out. But you want to transmit the so far acquired knowledge to succeeding intelligent races (or aliens). How do you do?"
I got this question while reading a dystopia of a world after nuclear war.
Transmitting it to aliens ain't happening; we'd get them, at most, everything
from the invention of radio to the present day, a couple hundred years' worth
of technology, which is relatively little, and that's only if we manage to aim
it right.
So, we want to communicate to future sapient species on Earth. I say take many,
many plates of uranium glass and carve into them all of our most fundamental
non-obvious knowledge: stuff like the periodic table, how to make electricity,
how to make a microchip, some microchip designs, some software. And, of course,
the scientific method, rationality, the non-exception convention (0 is a number,
a square is a rectangle, the empty product is 1, . . .), and the function
application motif (the way we construct mathematical expressions and
natural-language phrases). Maybe tell them about Friendly AI, too.
0Technologos12y
In what language or symbolic system would you do so? The Pioneer plaque
[http://en.wikipedia.org/wiki/Pioneer_plaque] and Voyager records
[http://en.wikipedia.org/wiki/Voyager_Golden_Record] both made an attempt in
that direction, but I'm sure there's a better way.
In one of my classes in college, we were asked to try to decipher the supposedly
universal language of the Pioneer plaque, which should have been relatively easy
insofar as we shared a species (and thus a neural architecture) with the
creators. We got some of it, though not all, which is apparently better than
many of the NASA scientists on the project!
The Guardian published a piece citing Less Wrong:
The number's up by Oliver Burkeman
Inspired by reading this blog for quite some time, I started reading E.T. Jaynes' Probability Theory. I've read most of the book by now, and I have incredibly mixed feelings about it.
On one hand, the development of probability calculus starting from the needs of plausible inference seems very appealing as far as the needs of statistics, applied science and inferential reasoning in general are concerned. The Bayesian viewpoint of (applied) probability is developed with such elegance and clarity that alternative interpretations can hardly be considered appealing next to it.
On the other hand, the book is very painful reading for the pure mathematician. The repeated pontification about how wrong mathematicians are for desiring rigor and generality is strange, distracting and useless. What could possibly be wrong about the desire to make the steps and assumptions of deductive reasoning as clear and explicit as possible? Contrary to what Jaynes says or at least very strongly implies (in Appendix B and elsewhere), clarity and explicitness of mathematical arguments are not opposites or mutually contradictory; in my experience, they are complementary.
Even worse, Jaynes makes several strong ... (read more)
After pondering the adefinitemaybe case for a bit, I can't shake the feeling that we really screwed this one up in a systematic way, that Less Wrong's structure might be turning potential contributors off (or turning them into trolls). I have a few ideas for fixes, and I'll post them as replies to this comment.
Essentially, what it looks like to me is that adefmay checked out a few recent articles, was intrigued, and posted something they thought clever and provocative (as well as true). Now, there were two problems with adefmay's comment: first, they had an idea of the meaning of "evidence" that rules out almost everything short of a mathematical proof, and secondly, the comment looked like something that a troll could have written in bad faith.
But what happened next is crucial, it seems to me. A bunch of us downvoted the comment or (including me) wrote replies that look pretty dismissive and brusque. Thus adefmay immediately felt attacked from all sides, with nobody forming a substantive and calm reply (at best, we sent links to pages whose relevance was clear to us but not to adefmay). Is it any wonder that they weren't willing to reconsider their definition of evi... (read more)
I'm not sure there needs to be more than one FAQ thread. But let's start by generating a list of frequently asked questions and coming up with answers that have consensus support.
What else? Anyone have drafts of answers?
Okay, so....a confession.
In a fairly recent little-noticed comment, I let slip that I differ from many folks here in what some may regard as an important way: I was not raised on science fiction.
I'll be more specific here: I think I've seen one of the Star Wars films (the one about the kid who apparently grows up to become the villain in the other films). I have enough cursory familiarity with the Star Trek franchise to be able to use phrases like "Spock bias" and make the occasional reference to the Starship Enterprise (except I later found out that the reference in that post was wrong, since the Enterprise is actually supposed to travel faster than light -- oops), but little more. I recall having enjoyed the "Tripod" series, and maybe one or two other, similar books, when they were read aloud to me in elementary school. And of course I like Yudkowsky's parables, including "Three Worlds Collide", as much as the next LW reader.
But that's about the extent of my personal acquaintance with the genre.
Now, people keep telling me that I should read more science fiction; in fact, they're often quite surprised that I haven't. So maybe, while we're doing these... (read more)
In one of the dorkier moments of my existence, I've written a poem about the Great Filter. I originally intended to write music for this, but I've gone a few months now without inspiration, so I think I'll just post the poem to stand by itself and for y'all to rip apart.
Suggested tweaks are welcome. Things that I'm currently unhappy with are that "fortuitous" scans awkwardly, and the skies/eyes rhyme feels clichéd.
I recently revisited my old (private) high school, which had finished building a new >$15 million building for its football team (and misc. student activities & classes).
I suddenly remembered that when I was much younger, the lust of universities and schools in general for new buildings had always puzzled me: I knew perfectly well that I learned more or less the same whether the classroom was shiny new or grizzled gray and that this was true of just about every subject-matter*, and even then it was obvious that buildings must cost a lot to build and... (read more)
Akrasia FYI:
I tried creating a separate login on my computer with no distractions, and tried to get my work done there. This reduced my productivity because it increased the cost of switching back from procrastinating to working. I would have thought that recovering in large bites and working in large bites would have been more efficient, but apparently no, it's not.
I'm currently testing the hypothesis that reading fiction (possibly reading anything?) comes out of my energy-to-work-on-the-book budget.
Next up to try: Pick up a CPAP machine off Craigslist.
Suppose we want to program an AI to represent the interests of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individuals in the group, but that runs into the interpersonal comparison of utility problem. (Was there ever a post about this? Does Eliezer have a preferred approach?)
Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program it with the utility function of that individual. Then set a time in the future when one of those A... (read more)
I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is because it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.
Perh... (read more)
Hello all,
I've been a longtime lurker, and tried to write up a post a while ago, only to see that I didn't have enough karma. I figure this is the post for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I'd gladly turn it into a top-level post here, if that's in order.
Life Experience Should Not Modify Your Opinion http://paltrypress.blogspot.com/2009/11/life-experience-should-not-modify-your.html
When I'm debating some controversial topic wi... (read more)
This article about gendered language showed up on one of my feeds a few days ago. Given how often discussions of nongendered pronouns happen here, I figure it's worth sharing.
Suppose you could find out the exact outcome (up to the point of reading the alternate history equivalent of Wikipedia, history books etc.) of changing the outcome of a single historical event. What would that event be?
Note that major developments like "the Roman empire would never have fallen" or "the Chinese wouldn't have turned inwards" involve multiple events, not just one.
So many. I can't limit it to one, but my top four would be: "What if Mohammed had never been born?", "What if Julian the Apostate had succeeded in stamping out Christianity?", "What if Thera had never blown and the Minoans had survived?", and "What if Alexander the Great had lived to a ripe old age?"
The civilizations of the Near East were fascinating, and although the early Islamic Empire was interesting in its own right, it did a lot to homogenize some really cool places. It also dealt a fatal wound to Byzantium. If Mohammed had never existed, I would look forward to reading about the Zoroastrian Persians, the Byzantines, and the Romanized Syrians and Egyptians surviving much longer than they did.
The Minoans were the most advanced civilization of their time, and had plumbing, three-story buildings, urban planning and possibly even primitive optics in 2000 BC (I wrote a bit about them here). Although they've no doubt been romanticized, in the romanticized version at least they had a pretty equitable society, gave women high status, and revered art and nature. Then they were all destroyed by a giant volcano. I remember reading one hist... (read more)
Given that Alexander was one of the most successful conquerors in all of history, he almost certainly benefited from being extremely lucky. If he had lived longer, therefore, he would have probably experienced much regression to the mean with respect to his military success.
I'd really, really like to see what the world would be like today if a single butterfly's wings had flapped slightly faster back in 5000 B.C.
Prisoner's Dilemma on Amazon Mechanical Turk: http://blog.doloreslabs.com/2010/01/altruism-on-amazon-mechanical-turk/
Oh, and to post another "what would you find interesting" query, since I found the replies to the last one to be interesting. What kind of crazy social experiment would you be curious to see the results of? Can be as questionable or unethical as you like; Omega promises you ve'll run the simulation with the MAKE-EVERYONE-ZOMBIES flag set.
There are several that I've wondered about:
Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like.
Try to create a society of unconscious people with bicameral minds, as described in Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind", using actors taking on the appropriate roles. (Jaynes's theory, which influenced Daniel Dennett, was that consciousness is a recent cultural innovation.)
Try to create a society where people grow up seeing sexual activity as being as casual, ordinary, and expected as shaking hands or saying hello, and see whether sexual taboos develop, and study how sexual relationships form.
Raise a bunch of kids speaking artificial languages, designed to be unlike any human language, and study how they learn and modify the language they're taught. Or give them a language without certain concepts (relatives, ethics, the self) and see how the language influences the way they think and act.
They'd probably be like the average less wrong commenter/singularitarian/transhumanist, so really no need to run this one.
So, one result of this experiment would be/is a significantly below average ability to distinguish humor from serious debate...
Or significantly below average ability to signal whether something is humorous or serious. ;)
Has anyone here tried Lojban? Has it been useful?
I recommend making a longer list of recent comments available, the way Making Light does.
If you've been working with dual n-back, what have you gotten out of it? Which version are you using?
Would an equivalent to a .newsrc be possible? I would really like to be able to tell the site that I've read all the comments in a thread at a given moment, so that when I come back, I'll default to only seeing more recent comments.
If quantum immortality is correct, and assuming life extension technologies and uploading are delayed for a long time, wouldn't each of us, in our main worldline, become more and more decrepit and injured as time goes on, until living would be terribly and constantly painful, with no hope of escape?
I spent December 23rd, 24th and 25th in the hospital. My uncle died of brain cancer (Glioblastoma multiforme). He was an atheist, so he knew that this was final, but he wasn't signed up for cryonics.
We learned about the tumor 2 months ago, and it all happened so fast... and it's so final.
This is a reminder to those of you who are thinking about signing up for cryonics; don't wait until it's too late.
Because trivial inconveniences can be a strong deterrent, maybe someone should make a top-level post on the practicalities of cryonics: an idiot's guide to immortality.
Alexandre Borovik summarizes the Bayesian case against the null hypothesis rejection method, citing the classic
J. Cohen (1994). `The Earth Is Round (p < .05)'. American Psychologist 49(12):997-1003.
The fallacy of null hypothesis rejection
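To spell out the fallacy with made-up numbers (mine, not Cohen's): a significance test gives you P(data|H0), but the quantity of interest is P(H0|data), and Bayes' theorem says P(H0|data) = P(data|H0)*P(H0) / P(data). Suppose P(data|H0) = 0.04, so H0 is rejected at p < .05; but suppose the data are also fairly likely under the alternatives, P(data|not-H0) = 0.10, and H0 started out plausible, P(H0) = 0.7. Then P(data) = 0.04*0.7 + 0.10*0.3 = 0.058, and P(H0|data) = 0.028/0.058 ≈ 0.48. Rejecting the null at the 5% level is compatible with the null still being a coin flip away from true.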
What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.
From The Rhythm of Disagreement:
Has Bostrom made this proposal in anything published? I can't seem to find it on nickbostrom.com.
Different responses to challenges, seen through the lens of video games. Although I expect the same can be said for character-driven stories (rather than, say, concept-driven ones). ... (read more)
This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)
What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?
Richard Dawkins talking to an astrologer. Best part at 10m28s.
Transcript:
--
Dawkins: We could devise a little experiment where we take your forecasts and then give some of them straight, give some of them randomized, sometimes give Virgo the Pisces forecast et cetera. And then ask people how accurate they were.
Astrologer: Yes, that would be a perverse thing to do, wouldn't it.
Dawkins: It would be - yes, but I mean wouldn't that be a good test?
Astrologer: A test of what?
Dawkins: Well, how accurate you are.
Astrologer: I think that your intention there is mischief, and I'd think what you'd then get back is mischief.
Dawkins: Well my intention would not be mischief, my intention would be experimental test. A scientific test. But even if it was mischief, how could that possibly influence it?
Astrologer: (Pause.) I think it does influence it. I think whenever you do things with astrology, intentions are strong.
Dawkins: I'd have thought you'd be eager.
Astrologer: (Laughs.)
Dawkins: The fact that you're not makes me think you don't really in your heart of hearts believe it. I don't think you really are prepared to put your reputation on the line.
Astrologer: I just don't believe in the experiment, Richard, it's that simple.
Dawkins: Well you're in a kind of no-lose situation then, aren't you.
Astrologer: I hope so.
--
Why is the news media comfortable with lying about science?
http://arstechnica.com/science/news/2010/01/why-is-the-news-media-comfortable-with-lying-about-science.ars
James Hughes - with an (IMO) near-incoherent Yudkowsky critique:
http://ieet.org/index.php/IEET/more/hughes20100108/
A few years back I did an ethics course at university. It very quickly made me realise that both I and most of the rest of the class based our belief in the existence of objective ethics simply on a sense that ethics must exist. When I began to question this idea, my teacher asked me what I expected an objective form of ethics to look like. When I said I didn't know, she asked if I would agree that a system of ethics would be objective if it could be universally calculated by any unbiased, perfectly logical being. This seemed fair enough, but the problem... (read more)
Here's a silly comic about rationality.
I rather wish it was called "Irrationally Undervalues Rapid Decisions Man". Or do I?
P(A)*P(B|A) = P(B)*P(A|B). Therefore, P(A|B) = P(A)*P(B|A) / P(B). Therefore, woe is you should you assign a probability of 0 to B, only for B to actually happen later on; P(A|B) would include a division by 0.
Once upon a time, there was a Bayesian named Rho. Rho had such good eyesight that she could see the exact location of a single point. Disaster struck, however, when Rho accidentally threw a dart, its shaft so thin that its intersection with a perfect dartboard would be a single point, at a perfect dartboard. You see, when you randomly select a point f... (read more)
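A minimal sketch of the division-by-zero problem in code, with purely illustrative numbers (the function and values are mine, not from any post here):

def posterior(p_a, p_b_given_a, p_b):
    # Bayes' theorem: P(A|B) = P(A) * P(B|A) / P(B)
    return p_a * p_b_given_a / p_b

print(posterior(0.3, 0.5, 0.25))  # 0.6: an ordinary update
print(posterior(0.3, 0.5, 0.0))   # ZeroDivisionError: no way to update on evidence you called impossible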
I am curious as to how many LWers attempt to work out and eat healthily to lengthen their life span. Especially among those who have signed up for cryonics.
A little knowledge can be a dangerous thing. Eliezer, at least, has often recommended Judgment Under Uncertainty as something people should read. Now, I'll admit I haven't read it myself, but I'm wondering if that might be bad advice, as the book's rather dated. I seem to frequently come across articles that cite JUU, but either suggest alternative interpretations or debunk its results entirely.
Just today, I was trying to find recent articles about scope insensitivity that I could cite. But on a quick search I primarily ran across articles point... (read more)
I found this interesting, along with the paper it discusses on children's conception of intelligence.
The abstract to the article ... (read more)
A suggestion for the site (or perhaps the Wiki): It would be useful to have a central registry for bets placed by the posters. The purpose is threefold:
For the "How LW is Perceived" file:
Here is an excerpt from a comments section elsewhere in the blogosphere:
I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...
I am currently writing a sequence of blog posts on Friendly AI. I would appreciate your comments on present and future entries.
Inspired by this comment by Michael Vassar:
http://lesswrong.com/lw/1lw/fictional_evidence_vs_fictional_insight/1hls?context=1#comments
Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.
Michael Vassar suggests The Great Gatsby (I think; it was written kind of confusingly, in parallel with the names of authors, but I don't think there was ever an author Ga... (read more)
How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?
I was five years old and walking down the hall outside of my kindergarten classroom when I suddenly realized that I had control over what was happening inside of my mind's eye. This manifested itself by me summoning an image in my head of Gene Wilder as Willy Wonka.
Is it proper to consider that the moment when I became self aware? Does anyone have a similar anecdote?
(This is inspired by Shannon's mention of her child exploring her sense of s... (read more)
I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)
I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.
It's often s... (read more)
Paul Graham -- How to Disagree
http://www.paulgraham.com/disagree.html
A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:
Oops.
Once upon a time I was pretty good at math, but either I just stopped liking it or the series of dismal school teachers I had turned me off of it. I ended up taking the social studies/humanities route and somewhat regretting it. I've studied some foundations of mathematics stuff, symbolic logic and really basic set theory, and usually find that I can learn pretty rapidly if I have a good explanation in front of me. What is the best way to teach myself math? I stopped with statistics (high school, advanced placement) and never got to calculus. I don't expect to become a math whiz or anything, I'd just like to understand the science I read better. Anyone have good advice?
When people here say they are signed up for cryonics, do they systematically mean "signed up with the people who contract to freeze you and signed up with an instrument for funding suspension, such as life insurance" ?
I have contacted Rudi Hoffmann to find out just what getting "signed up" would entail. So far I'm without a reply, and I'm wondering when and how to make a second attempt, or whether I should contact CI or Alcor directly and try to arrange things on my own.
Not being a US resident makes things much more complicated (I live in France). Are there other non-US folks here who are "signed up" in any sense of the term ?
Feature request, feel free to ignore if it is a big deal or requested before.
When messaging people back and forth, it would be nifty to be able to see the thread. I see glimpses of this feature but it doesn't seem fully implemented.
An interesting application of near/far:
http://scienceblogs.com/notrocketscience/2010/01/becoming_better_mind-readers_-_to_work_out_how_other_people.php
Does undetectable equal nonexistent? Examples: There are alternate universes, but there's no way we can interact with them. There are aliens outside our light cone. There are past events all evidence of which has been erased.
First: I'm having a very bad brain week; my attempts to form proper-sounding sentences have generally been failing, muddling the communicative content, or both. I want to catch this open thread, though, with this question, so I'll be posting in what is to me an easier way of stringing words together. Please don't take it as anything but that; I'm not trying to be difficult or to display any particular 'tone of voice'. (Do feel free to ask about this; I don't mind talking about it. It's not entirely unusual for me, and is one of the reasons that I'm fairly ... (read more)
Except wireheads.
Why was this comment downvoted to -4? Seems to me it's a legitimate question, from a fairly new poster.
Ask Peter Norvig anything: http://www.reddit.com/r/programming/comments/auvxf/ask_peter_norvig_anything/
Grand Orbital Tables: http://www.orbitals.com/orb/orbtable.htm
In high school and intro chemistry in college, I was taught up to the d and then f orbitals, but they keep going and going from there.
It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually commu... (read more)
Ray Kurzweil Responds to the Issue of Accuracy of His Predictions
http://nextbigfuture.com/2010/01/ray-kurzweil-responds-to-issue-of.html
How much of Eliezer's 2001 FAI document is still advocated? E.g. wisdom tournaments and bugs in the code.
Something has been bothering me ever since I began to try to implement many of the lessons in rationality here. I feel like there needs to be an emotional reinforcement structure or a cognitive foundation that is both pliable and supportive of truth seeking before I can even get into the why, how and what of rationality. My successes in this area have been only partial, but it seems like the better structured the cognitive foundation is, the easier it is to adopt, discard and manipulate new ideas.
I understand this is likely a fairly meta topic and woul... (read more)
I've just reached karma level 1337. Please downvote me so I can experience it again!
"Top Contributors" is now sorted correctly. (Kudos to Wesley Moore at Tricycle.)
Possibly dumb question but... can anyone here explain to me the difference between Minimum Message Length and Minimum Description Length?
I've looked at the wikipedia pages for both, and I'm still not getting it.
Thanks.
Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?
Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit ... (read more)
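For anyone curious about the building block: the sketch below (my own toy code, not Tononi's actual phi computation) estimates the mutual information between two discrete components from samples, which is the quantity his measure is built on.

import math
from collections import Counter

def mutual_information(pairs):
    # Mutual information in bits between two discrete variables,
    # estimated from a list of (x, y) samples.
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

print(mutual_information([(0, 0), (1, 1)] * 50))                  # ~1.0 bit: perfectly coupled components
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # 0.0 bits: independent components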
Today at work, for the first time, LessWrong.com got classified as "Restricted:Illegal Drugs" under eSafe. I don't know what set that off. It means I can't see it from work (at least, not the current one).
How do we fix it, so I don't have to start sending off resumes?
And for one short moment, in the wee morning hours, MrHen takes up the whole damn Recent Comments section.
I assume dropping two walls of text and a handful of other lengthy comments isn't against protocol. Apologies if I annoy anyone.
I am going to be hosting a Less Wrong meeting at East Tennessee State University in the near future, likely within the next two weeks. I thought I would post here first to see if anyone at all is interested and if so when a good time for such a meeting might be. The meeting will be highly informal and the purpose is just to gauge how many people might be in the local area.
Please review a draft of a Less Wrong post that I'm working on: Complexity of Value != Complexity of Outcome, and let me know if there's anything I should fix or improve before posting it here. (You can save more substantive arguments/disagreements until I post it. Unless of course you think it completely destroys my argument so that I shouldn't even bother. :)
Garry Kasparov: The Chess Master and the Computer
http://www.nybooks.com/articles/23592
Today's Questionable Content has a brief Singularity shoutout (in its typical smart-but-silly style).
I recently found an article that may be of interest to Less Wrong readers:
Blame It on the Brain
The latest neuroscience research suggests spreading resolutions out over time is the best approach
The article also mentions a study in which overloading the prefrontal cortex with other tasks reduced people's willpower.
(should I repost this link to next month's open thread? not many people are likely to see it here)
Inorganic dust with lifelike qualities: http://www.sciencedaily.com/releases/2007/08/070814150630.htm
So I am back in college and I am trying to use my time to my best advantage. Mainly using college as an easy way to get money to fund room and board while I work on my own education. I am doing this because I was told here, among other places, that there are many important problems that need to be solved, and I wanted to develop skills to help solve them because I have been strongly convinced that it is moral to do so. However, beyond this I am completely unsure of what to do. So I have the furious need for action but seem to have no purpose guiding that actio... (read more)
Schooling isn't about education. This article is pretty mind-boggling: apparently, it's been a norm until now in Germany that school ends at lunchtime and the children then go home. Considering how strong the German economy has traditionally been, this raises serious questions about the degree to which elementary school really is about teaching kids things (as opposed to just being a place to drop off the kids while the parents work).
Oh, and the country is now making the shift towards school in the afternoon as well, driven by - you guessed it - a need for women to spend more time actually working.
For some reason, my IP was banned on the LessWrong Wiki. Apparently this is the reason:
Any idea how this happens and how I can prevent it from happening again?
Strange fact about my brain, for anyone interested in this kind of thing:
Even though my recent top-level post has (currently) been voted up to 19, earning me 190 karma points, I feel like I've lost status as a result of writing it.
This doesn't make much sense, though it might not be a bad thing.
What are/ought to be the standards here for use of profanity?
Paul Buchheit -- Evaluating risk and opportunity (as a human)
http://paulbuchheit.blogspot.com/2009/09/evaluating-risk-and-opportunity-as.html
What's the right prior for evaluating an H1N1 conspiracy theory?
I have a friend, educated in biology and business, very rational compared to the average person, who believes that H1N1 was a pharmaceutical company conspiracy: they knew they could make a lot of money by making a less-deadly flu that would extend the flu season year-round. Because it is very possible for them to engineer such a virus and the corporate leaders are corrupt sociopaths, he thinks it is 80% probable that it was a conspiracy. Again, he thinks that because it was possible for ... (read more)
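For what it's worth, plugging explicitly made-up numbers into Bayes' theorem shows where the 80% goes wrong: P(conspiracy | new flu strain) = P(new flu strain | conspiracy) * P(conspiracy) / P(new flu strain). Even granting P(new flu strain | conspiracy) = 1, if the base rate of pharma conspiracies on this scale is, say, 0.001, and novel flu strains emerge naturally often enough that P(new flu strain) = 0.5, the posterior is 0.001/0.5 = 0.002. "They could have done it" only fixes the likelihood term; it can't by itself overcome a low prior.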
Can someone point me towards the calculations people have been doing about the expected gain from donating to the SIAI, in lives per dollar?
Edit: Never mind. I failed to find the video previously, but formulating a good question made me think of a good search term.
The Edge Annual Question 2010: How is the internet changing the way you think?
http://www.edge.org/q2010/q10_print.html#responses
I was recently asked to produce the indefinite integral of ln x, and completely failed to do so. I had forgotten how to do integration by parts in the 6 months since I had done serious calculus. Is there anyone who knows of a calculus problem of the day or some such that I might use to retain my skills?
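(For the record, the computation that slipped away: integrate by parts with u = ln x and dv = dx, so that du = dx/x and v = x; then ∫ ln x dx = x ln x - ∫ x*(1/x) dx = x ln x - x + C.)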
Ethical problem. It occurred to me that there's an easy, obvious way to make money by playing slot machines: Buy stock in a casino and wait for the dividends. Now, is this ethically ok? On the one hand, you're exploiting a weakness in other people's brains. On the other hand, your capital seems unlikely, at the existing margins, to create many more gamblers, and you might argue that you are more ethical than the average investor in casinos.
It's a theoretical issue for me, since my investment money is in an index fund, which I suppose means I own some tiny share in casinos anyway and might as well roll with it. But I'd be interested in people's thoughts anyway.
"Imagine the human race gets wiped out. But you want to transmit the so far acquired knowledge to succeeding intelligent races (or aliens). How do you do?"
This question occurred to me while reading a dystopia about a world after nuclear war.