Extraordinary claims require extraordinary evidence. But claims of sexual harassment, abuse, assault, and rape are not extraordinary. They are depressingly ordinary. So the level of evidence we should need to believe a claim about sexual harassment, abuse, assault, or rape is substantially lower than the level of evidence we should need to believe a claim about, say, Bigfoot.
This is straight Bayes — since the prior for rape is higher than the prior for Bigfoot, it requires less evidence to raise our credence above 0.5 in any given case of a claimed occurrence. In the comments, one person points out the connection to Bayes, in part remarking:
“Bayesian updating” is a good method for using evidence rationally to change your mind. If someone requires extraordinary evidence to believe a depressingly common event, they are not being rational.
In response, another commenter, apparently triggered by the mention of Bayes, goes on a ... (read more)
Congratulations, you avoided stepping on a landmine!
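In odds form, Bayes' theorem makes the asymmetry explicit: posterior odds = prior odds × likelihood ratio. A toy sketch of the point (all numbers are illustrative, not from the original discussion):

```python
# Toy Bayesian update in odds form: posterior odds = prior odds * likelihood ratio.
# A higher prior means the same evidence pushes the posterior much further.
# All numbers here are illustrative, not from the original discussion.

def posterior(prior, likelihood_ratio):
    """Posterior probability given a prior and a likelihood ratio
    P(evidence | claim true) / P(evidence | claim false)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# The same modest evidence (likelihood ratio 10) applied to two claims:
common = posterior(0.10, 10)   # depressingly ordinary event, prior 10%
rare = posterior(1e-6, 10)     # extraordinary event, prior one in a million

print(common)  # ~0.53: modest evidence is enough to cross 0.5
print(rare)    # ~1e-5: still nowhere near 0.5
```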
Is there a name for the bias "if a person A is commenting on a forum X, then person A is a representative of the forum X"?
Given all the concerns about replication in psychology, it's good to see that at least the most important studies get replicated: [1][2][3][4][5][6][7]. ;)
Before reading these, I recommend making predictions and then seeing how well-calibrated you were. I learned that V arneyl pubxrq ba zl sbbq ynhtuvat jura V "ernq" gurfr.
I've decided to live less on the internet (a.k.a. the world's most popular superstimulus) and more in real life. I pledge to give $75 to MIRI if I make any more posts on this account or on my reddit account before the date of October 13 (two months from now).
On a related note, I was thinking about how to solve the problem of the constant temptation to waste time on the internet. For most superstimuli, the correct action is to cut yourself off completely, but that's not really an option at all here. Even disregarding the fact that it would be devastatingly impractical in today's world, the internet is an instant connection to all the information in the world, making it incredibly useful. Ideally one would use the internet purely instrumentally - you would have an idea of what you want to do, open up the browser, do it, then close the browser.
To that end, I have an idea for a Chrome extension. You would open up the browser, and a pop-up would appear prompting you to type in your reason for using the internet today. Then, your reason would be written in big black letters at the top of the page while you're browsing, and only go away when you close Chrome. This would force you to rem... (read more)
Perhaps a stupid question, or, more accurately, not even a question - but I don't understand this attitude. If you enjoy going on the Internet, why would you want to stop? If you don't enjoy it, why would it tempt you? It reminds me, and I mean no offense by this, of the attitude addicts have towards drugs. But it really stretches plausibility to say that the Internet could be something like a drug.
Perhaps a stupid question, or, more accurately, not even a question - but I don't understand this attitude. If you enjoy going on the Internet, why would you want to stop? If you don't enjoy it, why would it tempt you?
Wanting is mediated by dopamine. Liking is mostly about opioids. The two features are (unfortunately) not always in sync.
It reminds me, and I mean no offence by this, of the attitude addicts have towards drugs. But it really stretches plausibility to say that the Internet could be something like a drug.
It really doesn't stretch plausibility. The key feature here is "has addictive potential". It doesn't matter to the brain whether the reward is endogenous dopamine released in response to a stimulus or something that came in a pill.
This is confusing to me. Intuitively, reward that is not wireheading is a good thing, and the Internet's rewarding-ness is in complex and meaningful information, which is the exact opposite of wireheading. For the same reason, I'm confused about why tasty foods are not seen as a dangerous evil that needs to be escaped.
drethelin:
There are things that can too easily expand to fill all of your time while only being a certain level of better than baseline. If you want to feel even better than just browsing the internet you need to not allow it to fill all your time. I also value doing DIFFERENT things, though not everyone does. It's easier to do different activities (i.e. the threshold cost to starting them, which is usually the biggest emotional price you pay) if you're NOT doing something fairly engrossing already.
If your base state is 0 hedons (neutral) an hour, internet is 5 hedons an hour, and going out dancing is maybe 1 hedon during travel time and 20 while doing it, it's easier to go dancing if you're deliberately cutting off your internet time, because you don't have to spend -4 hedons to get out of the house.

Another concern is when people care about things other than direct hedons. If you have goals other than enjoying your time, then allowing internet to take up all your time sabotages those goals.
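The hedon arithmetic in the comment above can be spelled out (0, 5, 1, and 20 hedons/hour are the commenter's illustrative numbers, not measurements):

```python
# Spelling out the hedon arithmetic from the comment above
# (5, 1, and 20 hedons/hour are the commenter's illustrative numbers).

INTERNET = 5   # hedons/hour browsing
TRAVEL = 1     # hedons/hour getting to the venue
DANCING = 20   # hedons/hour once there

# Three hours out (one travelling, two dancing) vs. three hours online:
night_out = 1 * TRAVEL + 2 * DANCING   # 41 hedons
night_in = 3 * INTERNET                # 15 hedons

# The night out wins overall, but the first hour is a loss of 4 hedons
# relative to browsing -- the "-4 hedons to get out of the house":
switching_cost = INTERNET - TRAVEL     # 4

print(night_out, night_in, switching_cost)
```

The point survives the toy numbers: a locally cheap activity can dominate hour-by-hour choices even when a higher-total alternative is available.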
Manfred:
The brain appears to have separable capabilities for wanting something and enjoying something. There are definitely some things that I feel urges to do but don't particularly enjoy at any point. A common example is lashing out at someone verbally - sometimes, especially on the internet, I have urges to be a jerk, but when I act on those urges it isn't rewarding to me.

Aaanyhow, your sentence is also the worst argument (http://lesswrong.com/lw/e95/the_noncentral_fallacy_the_worst_argument_in_the/) :P
blacktrance:
I guess I can't identify with that feeling. I don't think I've ever felt that way - I've never wanted something that I could have identified as "not rewarding" at the time that I wanted it (regardless of how long I reflected on it). The only times I wanted something but didn't enjoy it was because of lack of information.
Dahlen:
Quick, everyone! If we can do it for less than $75, then let's make LW super extra interesting to gothgirl420666 for the next two months. :D

Joking aside, perhaps an effective strategy for making yourself spend less time online is to reduce your involvement with online communities -- for me at least, flashing inbox icons and commitments made to people on various forums (such as promising you'll upload a certain file) are a big part of what makes me keep coming back to certain places I want to spend less time at. If it weren't for that nagging feeling in the back of my mind, that I'll lose social cred in some place if I don't come back and act on my promises, or vanish for a few months and leave PMs unanswered, I'd be tempted to make a "vow of online silence" too.
Viliam_Bur:
I use AdBlock to block the "new messages" icon on LessWrong at my work.
Viliam_Bur:
I can imagine a site-blocking tool where you could select a browsing "mode". Each mode would block different websites. When you open an unknown website, it would ask you to classify it.

Typical modes are "work" (you block everything not work-related) and "free time" (you may still want to block the largest time sinks), but maybe there could be something like "a break from work" that would allow some fun but keep within some limits, for example only allow programming-related blogs and debates.
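The mode idea above can be sketched in a few lines; the mode names, site lists, and the whitelist/blacklist split are all hypothetical choices, not part of the original suggestion:

```python
# Sketch of a mode-based site blocker: each mode maps to either a
# whitelist or a blacklist of domains. All names here are hypothetical.

MODES = {
    "work": {"allow": {"docs.python.org", "stackoverflow.com"}},
    "break": {"allow": {"docs.python.org", "stackoverflow.com",
                        "programming-blog.example"}},
    "free time": {"block": {"biggest-timesink.example"}},
}

def is_allowed(mode, domain):
    """Return whether a domain may be opened in the given mode."""
    rules = MODES[mode]
    if "allow" in rules:                 # whitelist mode: deny by default
        return domain in rules["allow"]
    return domain not in rules["block"]  # blacklist mode: allow by default

print(is_allowed("work", "stackoverflow.com"))         # True
print(is_allowed("work", "biggest-timesink.example"))  # False
```

A real tool would also need the "classify unknown site" prompt, which here would amount to adding the domain to one of the sets.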
The comments are full of deathism: many people apparently sincerely coming out in favour of not just death (limited lifespan) but aging and deterioration.
Everyone who doesn't feel in their gut that many (most?) normal people truly believe aging and death are good, and will really try to stop you from curing it if they can, should go and read through all the comments there. It's good rationality training if (like me) you haven't ever discussed this in person with your friends (or if they all happened to agree). It's similar to how someone brought up by and among atheists (again, me) may not understand religion emotionally without some interaction with it.
Someone marked the appeal to worse problems article on Wikipedia for prospective deletion, for lack of sourcing - it appears to mostly have been written from the TVTropes page. I've given it its proper name and added "whataboutery" as another name for it - but it needs more, and preferably from a suitably high-quality source.
A fact about industrial organization that recently surprised me:
Antimonopoly rules prevent competitors from coordinating. One exemption in the US is political lobbying: executives can meet at their political action committee. Joint projects in some industries are organized as for-profit companies owned by (nonprofit) political action committees.
My girlfriend taught me how to dive this past weekend. I'm 26. I had fully expected to go my entire life without learning how to dive, I guess because I unconsciously thought it was "too late" to learn, somehow. Now I'm wondering what other skills I never learned at the typical age and could just as easily learn now.
(if you're looking for object-level takeaways, just start out with kneeling dives - they're way easier and far less intimidating - then gradually try standing up more and more)
Two roads diverged in a woods, and I
Stepped on the one less traveled by
Yet stopped, and pulled back with a cry
For all those other passers-by
Who had this road declined to try
Might have a cause I knew not why
What dire truths might that imply?
I feared that road might make me die.
And so with caution to comply
I wrung my hands and paced nearby
My questions finding no reply
Until a traveller passed nigh
With stronger step and focused eye
I bid the untouched road goodbye
And followed fast my new ally.
The difference made I'll never know
'Till down that other path you go.
I am impressed how you managed to do a reasonable variation on that poem using almost solely rhymes on i/y (even if you had to reuse some words like 'by').
MugaSofer:
I found this much more amusing than it should have been.
Did that really change in the last 3 days? If so, impressive turnaround! And surprising that it'd change without any sort of discussion. Now I'm confused. Where was the search box showing up before?
What are the relative merits of using one's real name vs. a pseudonym here?
When I first started reading LessWrong, I was working in an industry obsessed with maintaining very mainstream appearances, so I chose to go with a pseudonym. I have since changed industries and have no intention of going back, so my original reason for using a pseudonym is probably irrelevant now.
I continue running into obstacles (largely-but-not-exclusively of an accessibility nature) when it comes to the major crowdfunding websites. It seems not to be just me; the major platforms (Kickstarter/Indiegogo) could stand to be much more screen reader-friendly, and the need for images (and strong urging to use videos) is an obstacle to any blind person seeking funding who doesn't have easy access to sighted allies/minions.
My present thoughts are that I'd rather outsource setting up crowdfunding campaigns to someone for whom these would not be serious ob... (read more)
Here's an interesting article that argues for using (GPL-protected) open source strategies to develop strong AI, and lays out reasons why AI design and opsec should be pursued more at the modular implementation level (where mistakes can be corrected based on empirical feedback) rather than attempted at the algorithmic level. I would be curious to see MIRI's response.
I searched and it doesn't look like anyone has discussed this criticism of LW yet. It's rather condescending but might still be of interest to some: http://plover.net/~bonds/cultofbayes.html
I don't think "condescending" touches accurately upon what is going on here. This seems to be politics being the mindkiller pretty heavily (ironically one of the things they apparently think is stupid or hypocritical). They've apparently taken some of the, for lack of a better term, "right-wing" posts and used that as a general portrayal of LW. Heck, I'm in many ways in the same political/tribal group as this author and think most of what they said is junk. Examples include:
Members of Lesswrong are adept at rationalising away any threats to their privilege with a few quick "Bayesian Judo" chops. The sufferings caused by today's elites — the billions of people who are forced to endure lives of slavery, misery, poverty, famine, fear, abuse and disease for their benefit — are treated at best as an abstract problem, of slightly lesser importance than nailing down the priors of a Bayesian formula. While the theories of right-wing economists are accepted without argument, the theories of socialists, feminists, anti-racists, environmentalists, conservationists or anyone who might upset the Bayesian worldview are subjected to extended empty "rationalist"...
Someone using 'Political Correctness' as a positive term? (http://m.youtube.com/watch?v=bmsV1TuESrc)

(Warning: Political comedy)
Douglas_Knight:
Perhaps by "which became notorious for its anti-PC stance and its defences of hate speech" he means "notorious for being so anti-PC that it defended hate speech"? I think that's pretty accurate. (Bond's weak tea 2011 link doesn't defend hate speech, but argues that it is often a false label.)
fubarobfusco:
I'd take the author's "anti-PC" to mean something like "seeing 'political correctness' everywhere, and hating it."

For instance, there are folks who respond to requests for civil and respectful behavior on certain subjects — delivered with no force but the force of persuasion — as if those requests were threats of violence, and as if resistance to those requests were the act of a bold fighter for freedom of speech.
Emile:
My English teacher used "Political Correctness" as a positive term, which surprised me too, though I guess in the context of a teacher who's supposed to avoid discussing politics in class it does make sense to use it as an explicit norm.
I searched and it doesn't look like anyone has discussed this criticism of LW yet. It's rather condescending but might still be of interest to some: http://plover.net/~bonds/cultofbayes.html
I'd more go with "incoherent ranting" than "condescending".
Worthless ranting.
His footnote 3 is particularly telling: in other words, this is soup of the soup (http://www.soupsong.com/bmyth.html).

Looking at the other articles on his site, they're all like that. I would say that this is someone who does not know how to learn.
gwern:
I once read a chunk of Bond's site after running into that page; after noting its many flaws (including a number of errors of fact, like claiming Bayes tried to prove God using his theorem when, IIRC, that was Richard Price and he didn't use a version of Bayes' theorem), I was curious what the rest was like.

I have to say, I have never read video game reviews which were quite so... politicized.
Viliam_Bur:
It's written by a mindkilled idiot whose only purpose in life seems to be finding the least charitable interpretation of people he hates, which probably means everyone except his friends, assuming he has any. There are millions of such idiots out there, and the only difference is that this one mentioned LW in one of his articles. We shouldn't feed the trolls just because they decided to pay attention to us.

Starting with the very first paragraph... uhm, strawmanning mixed with plain lies... why exactly should anyone spend their limited time reading this?
It is a proof of Bell's Inequality using counterfactual language. The idea is to explore links between counterfactual causal reasoning and quantum mechanics. Since these are both central topics on Less Wrong, I'm guessing there are people on this website who might be interested.
I don't have any background in Quantum Mechanics, so I cannot evaluate the paper myself, but I know two of the authors and have very high regard for their intelligence.
Does anybody think that there might be another common metaethical theory to go along with deontology, consequentialism, and virtue ethics? I think it's only rarely codified, usually used implicitly or as a folk theory, in which morality consists of bettering one's own faction and defeating opposing factions, and as far as I can see it's most common in radical politics of all stripes. Is this distinguishable from very myopic consequentialism or mere selfishness?
It depends on the reasons why one considers it right to benefit one's own faction and defeat opposing ones, I guess. Or are you proposing that this is just taken as a basic premise of the moral theory? If so, I'm not sure you can justifiably attribute it to many political groups. I doubt a significant number of them want to defeat opposing factions simply because they consider that the right thing to do (irrespective of what those factions believe or do).
Also, deontology, consequentialism and virtue ethics count as object-level ethical theories, I think, not meta-ethical theories. Examples of meta-ethical theories would be intuitionism (we know what is right or wrong through some faculty of moral intuition), naturalism (moral facts reduce to natural facts) and moral skepticism (there are no moral facts).
Okay... wow. I somehow managed to get that wrong for all this time? Oh dear.

This one isn't ever formal and rarely meta-ed about, and it's far from universal in highly combative political groups. But it seems distinct from deontologists who think it right to defeat your enemies, and from consequentialists who think it beneficial to defeat their enemies.
metastable:
Maybe you're talking about moral relativism, which can be a meta-ethical position (http://en.wikipedia.org/wiki/Moral_relativism) (what's right or wrong depends on the context) as well as a normative theory.

Are you thinking of a situation where, for example, the bank robbers think it's okay to pull heists, but they concede that it's okay for the police to try to stop heists? And that they would do the same thing if they were police? Kind of like in Heat? Such a great movie.
ikrase:
Yeah, sort of. That's basically the case for which faction membership is not in question and is not mutable.

The only time I've really heard it formalized is in Plato's Republic, where one of the naive interlocutors suggests that morality consists of "doing good to one's friends and harm to one's enemies".
blacktrance:
I don't think it's often explicitly stated or even identified as a premise - the only case in which I see it stated by people who understand what it means is when restrictionists bring it up in debates about immigration. Its opponents call it tribalism; what its proponents call it differs depending on what the in-group is. I would classify it as a form of moral intuitionism. By the way, there are other ethical theories in addition to the three you mentioned. For example: contractarianism (though perhaps it's a form of consequentialism), contractualism (maybe consequentialist or deontological), and various forms of moral intuitionism.
I often write things out to make them clear in my own mind. This works particularly well for detailed planning. Just as some people "don't know what they think until they hear themselves say it", I don't know what I think until I write it down. (Fast typing is an invaluable skill.)
Sometimes I use the same approach to work out what I think, know or believe about a subject. I write a sort of evolving essay laying out what I think or know.
And now I wonder: how much of that is true for other people? For instance, when Eliezer set out to write the Seq... (read more)
Which part is "that"? The fact that you write things out to make them clearer in your mind, or the fact that writing things out makes them clearer in your mind? I think the latter is true for many people but the former is an uncommon habit. I didn't explicitly pick it up until after attending the January CFAR workshop.
David_Gerard:
I do this by talking to myself. It attracts odd looks from loved ones, but it works for me so I'm going to keep doing it, dammit.
erratio:
It's very much how I operate as well. Talking it out also works, but it needs to be the right kind of person at the right time, whereas writing pretty much always works.
Idle curiosity / possibility of post being deleted:
At one point in LessWrong's past (some time in the last year, I think), I seem to recall replying to a post regarding matters of a basilisk nature. I believe that the post I replied to was along these lines:
Given that the information has been leaked, what is the point of continuing to post discussions of this matter?
I believe my response was long the lines of:
I hate to use silly reflective humor, but given that the information has been leaked, what is the point of censoring discussions of this matter?
My tactic when trying to find this kind of reference is to use a user page search. If you can recall a suitable keyword then you should be able to find the discussion here (http://www.ibiblio.org/weidai/lesswrong_user.php?u=J_Taylor). I couldn't find anything based on 'basilisk' or 'censor', unfortunately.
J_Taylor:
After more work than I would honestly prefer to put into such an effort, I eventually found this post: http://lesswrong.com/lw/goe/open_thread_february_1528_2013/8iuo

As a curiosity, this post cannot be found from my user-page, nor can it be found via Wei Dai's app. Fascinating.
Luke_A_Somers:
What is EY thinking hiding this? Unless... he thinks it's right or might be, but only if we... no, even then, it's best dealt with as quietly as it would be if it were never touched. No one would be thinking about this if it were left open.
Tenoke:
This is the supposed modus operandi of the admins (or maybe only EY) - making such comments hard to find without deleting them. It has been mentioned here and there, and I am fairly sure I experienced a version of this recently when the latest comment in the Open Thread feature on the sidebar stopped showing the latest comment for the duration of this (http://lesswrong.com/lw/i6l/open_thread_july_29august_4_2013/9hs0). (It could've been a coincidence, and it is a decent way to lessen the Streisand effect, so I don't blame EY for it.)
Richard_Kennaway:
It can be found from your user page. Click the Comments tab, go to the bottom and click Next, and (currently) it will be on that page.

As far as I can tell, the Comments tab shows you all of your comments, but the Overview tab omits anything with an ancestor downvoted to -4 or below (and maybe also anything with a banned ancestor).
Douglas_Knight:
Deletion by the admins does not hide comments from either "overview" or "comments," at least not today.

Please don't use the word "ban" to refer to deletion of comments. It very often confuses people and makes them think users are being banned. Admins do it because their UI uses it, but that's a terrible reason.
Attractive commentary is insightful and pithy, but forums do not accumulate pith. Forums bloat with redundant observations, misread remarks, and misunderstanding replies unless the community aggressively culls those comments.
Having your comment dismissed is unwelcoming and hurtful. Even if we know that downvotes shouldn't be hurtful, they are.
Bob writes a comment that doesn't carry its weight. Alice, a LW reader, can choose to up-vote, down-vote, or Dismiss Bob's comment. Dismiss advise... (read more)
Upvoted for a good analysis of the problem, but I think the proposed solution would make the forum worse, not better - it makes the system more complex (more buttons, more states a comment can be in), more prone to abuse (dismissing as censorship), and invites drama and complaints about people abusing the feature even if they are not.
drethelin:
Negative karma that doesn't discourage the poster from making further similar comments is almost pointless.
solipsist:
Comments don't have to be "bad" to be worth hiding -- they can just be "not very good" or "not very good anymore". The fastest way to improve a document is to remove the least good parts, even if those parts aren't "bad". Many comments are necessary at the time, but fluffy afterwards ("By foo do you mean bar?", "No, I meant baz, and have edited my original post to make that clear", "OK, then I withdraw my objection"). If two people independently offer the exact same brilliant insight, we should still hide one of them. There is no shortage of times I'd like to hide a comment without discouraging or punishing the author.
Lumifer:
That, in effect, sets up a parallel karma system. There is the normal karma, visible and both up- and downvotable. And then there is the karma of dismissal, which is unseen and can only go down but never up.

Besides that, the system implies personal "hide-this" flags for all comments. The Dismiss button, then, does two things simultaneously: sets the hide-this flag for the comment and decreases the comment's dismissal karma.
MugaSofer:
That would be the little minimise button in the corner.
Researchers have found that people experiencing Nietzschean angst tend to cling to austere ethical codes, in the hopes of reorienting themselves.
That quote is from this Slate article - the article is mostly about social stigma surrounding mental illness.
The quote is plausible, in an untrustworthy common-sense kind of way. It also seems to align with my internal perspective of my moral life. Does anyone know if it is actually true? What research is out there?
EDIT: In case it isn't clear, I'm asking if anyone knows anything about the (uncited) resea... (read more)
I'm a CFAR alumnus looking to learn how to code for the very first time. When I met Luke Muehlhauser, he said that as far as skills go, coding is very good for learning quickly whether one is good at it or not. He said that Less Wrong has some resources for learning and assessing my own natural talent or skill for coding, and he told me to come here to find it.
So, where or what is this resource which will assess my own coding skills with tight feedback loops? Please and thanks.
Checking for the Programming Gear (http://lesswrong.com/lw/efm/checking_for_the_programming_gear/) contains a discussion of one really strong version of Luke's claim. The comments on I Want to Learn Programming (http://lesswrong.com/lw/4yv/i_want_to_learn_programming/) point to several good ways to start learning and assessing your talent. Also, the programming thread (http://lesswrong.com/r/discussion/lw/fth/programming_thread/) compiles a bunch of programming resources that may be useful.
I've set up a prediction tracking system for personal use. I'm assigning confidence levels to each prediction so I can check for areas of under- or over-confidence.
My question: If I predicted X, and my confidence in X changes, will it distort the assessment of my overall calibration curve if I make a new prediction about X at the new confidence level, keep the old prediction, and score both predictions later? Is that the "right" way to do this?
More generally, if my confidence in X fluctuates over time, does it matter at all what criterion I use ... (read more)
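For what it's worth, the usual way to score such a log afterwards is to bucket predictions by stated confidence and compare each bucket's hit rate with its stated confidence. A sketch (the 0.1-wide buckets are my own choice, not part of the question):

```python
# Bucket scored predictions by stated confidence and compare each
# bucket's observed hit rate with its stated confidence. A prediction
# re-stated at a new confidence simply becomes a second entry, landing
# in its new bucket -- which weights that proposition more heavily.

from collections import defaultdict

def calibration(predictions):
    """predictions: iterable of (stated_confidence, came_true) pairs.
    Returns {confidence bucket: observed hit rate}."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[round(confidence, 1)].append(came_true)
    return {b: sum(outcomes) / len(outcomes)
            for b, outcomes in sorted(buckets.items())}

preds = [(0.9, True), (0.9, True), (0.9, False),   # stated 90%, hit 2/3
         (0.6, True), (0.6, False)]                # stated 60%, hit 1/2
print(calibration(preds))  # 0.9 bucket's hit rate is below 0.9: overconfident
```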
If you ask "Does it matter?" the answer is probably: yes.

How and when you query yourself has effects. The effects are likely to be complicated, and you are unlikely to be fully aware of all of them. In polling, it frequently happens that the way you ask a question affects the answers.
This has probably been mentioned before, but I didn't feel like searching the entire comment archive of Less Wrong to find discussion on it: Can functionality be programmed into the website to sort the comments from posts from Overcoming Bias days by "Best" or at least "Top" ("New" would be nice as well!!)? Those posts are still open for commenting, and sometimes I find comments from years later more insightful. Plus, I'm sick and tired of scrolling through arguments with trolls.
And, given that this probably has been discussed before - why hasn't it been done yet?
Running simulations with sentient beings is generally accepted as bad here at LW; yes or no?
If you assign a high probability to reality being simulated, does it follow that most people with our experiences are simulated sentient beings?

I don't have an opinion yet, but I find the combination of answering yes to both questions extremely unsettling. It's like the whole universe conspires against your values. Surprisingly, each idea encountered by itself doesn't seem too bad. It's when simultaneously being agai... (read more)
What's bad about running simulations with sentient beings? (Nonperson Predicates (http://lesswrong.com/lw/x4/nonperson_predicates/) is about inadvertently running simulations with sentient beings and then killing them because you're done with the simulation.)
ygert:
There's nothing inherently wrong with simulating intelligent beings, so long as you don't make them suffer. If you simulate an intelligent being and give it a life significantly worse than you could, well, that's a bit ethically questionable. If we had the power to simulate someone, and we chose to simulate him in a world much like our own, including all the strife, trouble, and pain of this world, when we could have just as easily simulated him in a strictly better world, then I think it would be reasonable to say that we, the simulators, are morally responsible for all that additional suffering.
RomeoStevens:
Agree, but I'd like to point out that "just as easily" hides some subtlety in this claim.
niceguyanon:
Considering the avoidance of inadvertently running simulations and then killing them because we're done, I suppose you are right that it doesn't necessarily have to be a bad thing. But now how about this question:

* If one believes there is a high probability of living in a simulated reality, must it mean that those running our simulation do not care about Nonperson Predicates, since there is clearly suffering and we are sentient? If so, that is slightly disturbing.
Qiaochu_Yuan:
Why? I don't feel like I have a good grasp of the space of hypotheses about why other people might want to simulate us, and I see no particular reason to promote hypotheses involving those people being negligent rather than otherwise without much more additional information.
niceguyanon:
It seems that our simulators are at the very least indifferent, if not negligent, in terms of our values; there have been 100 billion people that have lived before us, and some have lived truly cruel and tortured lives. If one is concerned about Nonperson Predicates (http://lesswrong.com/lw/x4/nonperson_predicates/), in which an AI models a sentient you trillions of times over just to kill you when it is done, wouldn't you also be concerned about simulations that model universes of sentient people that die and suffer?

I suppose we can't do much about it anyway, but it's still an interesting thought that if one has values that reflect either ygert's comments (http://lesswrong.com/lw/ib0/open_thread_august_1218_2013/9kph) or Nonperson Predicates (http://lesswrong.com/lw/x4/nonperson_predicates/), and they wish to always want to want these values, then the people running our simulation are indifferent to our values.

Interestingly, all this thought has changed my credence ever so slightly towards Nick Bostrom's second of three possibilities regarding the simulation argument.

In this video (http://www.youtube.com/watch?v=nnl6nY8YKHs&t=7m17s) Bostrom states ethical concerns as a possible reason why a human-level civilization would not carry out simulations. These are the same kinds of concerns as those of Nonperson Predicates and ygert's comments.
somervta:
If we are, in fact, running in a simulation, there's little reason to think this is true.
0FeepingCreature10y
I think you need to differentiate between "physical" simulations and "VR"
simulations. In a physical simulation, the only way of arriving at a universe
state is to compute all the states that precede it.
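FeepingCreature's distinction can be made concrete with a toy example (this is purely illustrative — a one-dimensional cellular automaton standing in for "universe physics"): in a physical simulation there is no shortcut to state t, you must compute every state before it.

```python
def step(state):
    """Advance the toy universe one tick (elementary cellular
    automaton rule 110, with fixed zero boundary cells)."""
    n = len(state)
    padded = [0] + state + [0]
    rule = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}
    return [rule[(padded[i-1], padded[i], padded[i+1])] for i in range(1, n + 1)]

def state_at(initial, t):
    """The only way to reach state t: compute all t preceding states."""
    state = initial
    for _ in range(t):
        state = step(state)
    return state

print(state_at([0, 0, 0, 1, 0, 0, 0], 3))
```

A VR simulation, by contrast, could render whatever state it likes on demand, with no such history constraint.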
1Luke_A_Somers10y
1 - Depends what you mean by simulation - maintaining ems who think they're in
meat bodies? That's dishonest at least, but I could see cases for certain
special cases being a net good. Creating a digital pocket universe? That's
inefficient, but that inefficiency could end up being irrelevant. Any way you
come at it, the same usual ethics regarding creating people apply, and those
generally boil down to 'it's a big responsibility' (cf. pregnancy)
2 - I don't, but if you think so, then obviously yes. I mean, unless you think
reality contains even more copies of us than the simulation. That seems a bit of
a stretch.
I've decided to start a blog, and I kind of like the name "Tin Vulcan", but I suspect that would be bad PR. Thoughts? (I don't intend it to be themed, but I would expect most of the posts to be LW-relevant.)
(Name origins: Fbzr pbzovangvba bs "orggre guna n fgenj ihypna" naq gur Gva Jbbqzna.)
At least personally, I don't pay very much attention to the titles of blogs:
what matters is the content of the articles. So as long as your title isn't
something like "Adolf Hitler is my idol", it probably doesn't matter very much.
(But I'm generalizing from my own experience
[http://lesswrong.com/lw/dr/generalizing_from_one_example/], so if someone feels
otherwise, please say so.)
6Risto_Saarelma10y
I assume prominent Star Trek terms used in a nonfiction context will connote bad
superficial pop philosophy and lazy science journalism, so I'd prefer something
different.
2philh10y
Hm. I feel like I'm not particularly worried about those connotations, though
maybe I should be. I'm more worried about connoting "thinks Vulcans have the
right idea" and/or "thinks he is as logical as a Vulcan".
It also occurs to me that having watched essentially no Star Trek, my model of a
straw Vulcan is really more of a straw straw Vulcan, and that seems bad.
Currently leaning towards "picking a different title if I come up with one
soon-ish".
5TimS10y
I would be very hesitant to invoke a fictional philosophical concept I wasn't
familiar with. You are invoking related concepts and ideas, and your
unfamiliarity with the source material could easily cause readers who are
familiar with that material to misread your message.
In short, you are setting yourself up for long inferential distance
[http://lesswrong.com/lw/kg/expecting_short_inferential_distances/], which I
would not recommend.
I've heard the idea of adaptive screen brightness mentioned here a few times. I know of fluxgui on Linux, which does it, and it seems that Windows 7 and 8 come with it built in.
One of my computers runs Windows XP; how do I get it to dim automatically during late hours?
Socks: Traditionally I've worn holeproof explorers. Last time I went shopping for new socks, I wanted to try something new but was overwhelmed by choice and ended up picking some that turned out to be rather bad. My holeproofs and the newer ones are both coming to the end of their lives, and I'll need to replace them all soon. Where should I go to learn about what types of sock would be best?
A quick google for best socks or optimal socks leads me to lots of shops, and pages for sports socks, and pages for sock fashion, but nothing about picking out a comfo... (read more)
That the sweat your skin produces adheres to the fibers of the fabric and is
redistributed throughout the fabric. It's a real effect, but the term is often
used imprecisely.
It doesn't mean that the sweat is all drawn through and out via evaporation
(though it may evaporate) and it doesn't mean that you won't feel moisture on
your skin, though you may feel less than you would otherwise.
4Lumifer10y
You do understand that "optimality" for socks can differ a great deal, right? It
depends on the intended usage (e.g. backpacking socks are rather different from
dress socks), your particular idiosyncrasies (e.g. how strongly do your feet
sweat), personal preferences (e.g. do you care how soft your socks are), etc.
My approach to socks is a highly sophisticated simulated annealing-type
algorithm for efficient search in sock-space:
(1) Pick a pair of socks which looks okay
(2) Wear them for a bit
(3a) If you don't like them, discard and goto (1)
(3b) If you do like them, buy more (or close equivalents) until you're bored
with them, then goto (1)
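For fun, that search procedure can be sketched in Python (everything here is invented for illustration — `rate` and `bored` are stand-ins for wearing the socks and judging them):

```python
import random

def rate(pair):
    """Hypothetical comfort rating after wearing a pair for a bit."""
    return random.random()

def bored():
    """Hypothetical check for boredom with the current pair."""
    return random.random() < 0.3

def sock_search(catalog, tolerance=0.7, max_rounds=10):
    """Lumifer's sock-space search: try a pair; if disliked, discard
    and restart; if liked, keep buying it until bored."""
    wardrobe = []
    for _ in range(max_rounds):
        pair = random.choice(catalog)   # (1) pick a pair that looks okay
        rating = rate(pair)             # (2) wear them for a bit
        if rating < tolerance:
            continue                    # (3a) don't like them: discard, goto (1)
        while rating >= tolerance and not bored():
            wardrobe.append(pair)       # (3b) like them: buy more equivalents...
            rating = rate(pair)         # ...until bored, then goto (1)
    return wardrobe

random.seed(0)
print(sock_search(["wool", "cotton", "bamboo"]))
```

The "annealing" flavor is the restart step: a disliked pair costs one round, while a liked pair is exploited until boredom forces exploration again.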
3NancyLebovitz10y
I'm happy with goldtoe [http://www.goldtoe.com] cotton socks for durability
(easy to measure) and comfort, but I'm not especially picky about socks. What
makes a sock comfortable for you?
I would appreciate some advice. I've been trying to decide what degree to get. I've already taken a bunch of general classes and now I need to decide what to focus on. There are many fields that I think I would enjoy working in, such as biotechnology, neuroscience, computer science, molecular manufacturing, alternative energy, etc. Since I'm not sure what I want to go into, I was thinking of getting a degree with a wide range of applications, such as physics or math. I plan on improving my programming skills in my spare time, which should widen my prospects.
Deciding to stop "punishing" behavior (which usually isn't much fun for either
of you, though the urge to punish is ingrained). It's certainly a useful thing
to be able to do.
1wedrifid10y
What the (emotional) decision to refrain from further vengeance (often) feels
like from the inside.
Sometimes. Certainly not all the time. Tit-for-tat with a small amount of
forgiveness often performs well. Note that tit-for-tat (the part where, after the
other defects and then cooperates, you proceed to cooperate) also sometimes
counts as 'forgiveness' in common usage. As in many cases where game theory
meets the instinctive emotional adaptations meant to handle common games (like
what feels like 'blackmail'), the edges between the concepts are blurry.
1jooyous10y
That's interesting, because I think I usually refrain from vengeance by default,
but I do try to like ... limit further interaction and stuff. Maybe that's
similar.
The way I was thinking about it is that there's an internal feelings component
-- like, do you still feel sad and hurt and angry? Then there's the updating on
evidence component -- are they likely to do that or similar things again? And
then there's also a behavioral piece, where you change something in the way you
act towards/around them (and I'm not sure if vengeance or just running awaaay
both count?) So I wasn't sure which combination of those were part of
"forgiveness" in common usage. It sounds like you're saying internal +
behavioral, right?
0metastable10y
So, I do, and it's informed by religion, but I'll try to phrase it in as
LW-friendly a way as possible: to free somebody else of claims I have against them.
It's not an emotional state I enter or something self-centered (the "I refuse to
ruminate about what you did to me" pop song thing), though sometimes it produces
the same effects. The psychological benefits are secondary, even though they're
very strong for me. I usually feel much more free and much more peaceful when
I've forgiven someone, but forgiveness causes my state of mind, not vice versa.
It's like exercise: you did it and it was good even if you didn't get your
runner's high.
Other useful aspects, from the most blandly general perspective: it's allowed me
to salvage relationships, and it's increased the well-being of people I've
forgiven. I've been the beneficiary of forgiveness from others, and it's
increased my subjective well-being enormously.
From a very specific, personal perspective: every time I experience or give
forgiveness, it reminds me of divine forgiveness, and that reminder makes me
happier.
There was a recent post or comment about making scientific journal articles more interesting by imagining the descriptions (of chemical interactions?) as being GIGANTIC SPECIAL EFFECTS. Anyone remember it well enough to give a link?
here
[http://lesswrong.com/r/discussion/lw/i9p/improving_enjoyment_and_retention_reading/]
0NancyLebovitz10y
Thanks very much. I've posted the link as a comment to Extreme Mnemonics
[http://slatestarcodex.com/2013/08/14/extreme-mnemonics].
0[anonymous]10y
In some fields you don't even need to imagine...
http://www.youtube.com/watch?v=jgJKaP0Sj5U
[http://www.youtube.com/watch?v=jgJKaP0Sj5U]
http://www.youtube.com/watch?v=3IY5ZjcwakE
[http://www.youtube.com/watch?v=3IY5ZjcwakE]
http://www.youtube.com/watch?v=Hm03rCUODqg
[http://www.youtube.com/watch?v=Hm03rCUODqg]
Though imagining can help: https://www.youtube.com/watch?v=u3jQuY0URyg
[https://www.youtube.com/watch?v=u3jQuY0URyg]
Does anyone else have problems with the appearance of LessWrong? My account is somehow displayed at the bottom of the site, and the text of some posts overflows the white background. I noticed the problem about 2 days ago. I didn't change my browser (Safari) or anything else.
Here are 2 screenshots:
Testing with Safari 5.1.9, I find that it behaves nicely for me at all times,
even if I squinch the window down really narrowly. What Safari version are you
using?
Rhodiola is apparently the bomb, but I've read somewhere it suffers from poor quality supplements. Here in CEE in pharmacies the brand name they sell is called Vitango. Any experiences? http://www.vitango-stress.com/
I had an idea for Wei Dai's "What is Probability, Anyway?," but after actually typing it up I became rather unsure that I was actually saying anything new. Is this something that hasn't been brought up before, or did I just write up a "durr"? (If it's not, I'll probably expand it into a full Discussion post later.)
The fundamental idea is, imagining a multiverse of parallel universes, define all identical conscious entities as a single cross-universal entity, and define probability of an observation E as (number of successors to the entity... (read more)
[This comment is no longer endorsed by its author]
(Trigger warnings: mention of rape, harassment, and hostile criticism of Less Wrong.)
A lesson on politics as mindkiller —
There's a thread on Greta Christina's FTB blog about standards of evidence in discussions of rape and harassment. One of her arguments:
This is straight Bayes — since the prior for rape is higher than the prior for Bigfoot, it requires less evidence to raise our credence above 0.5 in any given case of a claimed occurrence. In the comments, one person points out the connection to Bayes, in part remarking:
In response, another commenter, apparently triggered by the mention of Bayes, goes on a ... (read more)
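The "straight Bayes" point can be made concrete with a toy calculation (the numbers here are invented purely for illustration): holding the strength of the evidence fixed, a higher prior yields a far higher posterior.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same piece of evidence -- a claim with likelihood ratio 10 --
# applied to two events with very different base rates:
lr = 10.0
common = posterior(0.10, lr)   # depressingly common event
rare = posterior(1e-6, lr)     # Bigfoot-grade prior

print(f"common event: {common:.3f}")   # crosses 0.5
print(f"rare event:   {rare:.6f}")     # nowhere near 0.5
```

With a 10% prior, one ordinary report is enough to push credence past 0.5; with a one-in-a-million prior, the identical report barely moves it.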
Given all the concerns about replication in psychology, it's good to see that at least the most important studies get replicated: [1] [2] [3] [4] [5] [6] [7]. ;)
I've decided to live less on the internet (a.k.a. the world's most popular superstimulus) and more in real life. I pledge to give $75 to MIRI if I make any more posts on this account or on my reddit account before the date of October 13 (two months from now).
On a related note, I was thinking about how to solve the problem of the constant temptation to waste time on the internet. For most superstimuli, the correct action is to cut yourself off completely, but that's not really an option at all here. Even disregarding the fact that it would be devastatingly impractical in today's world, the internet is an instant connection to all the information in the world, making it incredibly useful. Ideally one would use the internet purely instrumentally - you would have an idea of what you want to do, open up the browser, do it, then close the browser.
To that end, I have an idea for a Chrome extension. You would open up the browser, and a pop-up would appear prompting you to type in your reason for using the internet today. Then, your reason would be written in big black letters at the top of the page while you're browsing, and only go away when you close Chrome. This would force you to rem... (read more)
Wanting is mediated by dopamine. Liking is mostly about opioids. The two features are (unfortunately) not always in sync.
It really doesn't stretch plausibility. The key feature here is "has addictive potential". It doesn't matter to the brain whether the reward is endogenous dopamine released in response to a stimulus or something that came in a pill.
Should we take rhodiola rosea, which "extends the lifespans of fruit flies 24% and delays age-related loss in physical performance"?
There was a post on Slashdot today arguing that "Aging is a disease and we should try to defeat it or at least slow it down".
The comments are full of deathism: many people apparently sincerely coming out in favour of not just death (limited lifespan) but aging and deterioration.
Everyone who doesn't feel in their gut that many (most?) normal people truly believe aging and death are good, and will really try to stop you from curing it if they can, should go and read through all the comments there. It's good rationality training if (like me) you haven't ever discussed this in person with your friends (or if they all happened to agree). It's similar to how someone brought up by and among atheists (again, me) may not understand religion emotionally without some interaction with it.
Someone marked the appeal to worse problems article on Wikipedia for prospective deletion, for lack of sourcing - it appears to have been written mostly from the TVTropes page. I've given it its proper name and added "whataboutery" as another name for it - but it needs more, and preferably from a suitably high-quality source.
A fact about industrial organization that recently surprised me:
Antimonopoly rules prevent competitors from coordinating. One exemption in the US is political lobbying: executives can meet at their political action committee. Joint projects in some industries are organized as for-profit companies owned by (nonprofit) political action committees.
My girlfriend taught me how to dive this past weekend. I'm 26. I had fully expected to go my entire life without learning how to dive, I guess because I unconsciously thought it was "too late" to learn, somehow. Now I'm wondering what other skills I never learned at the typical age and could just as easily learn now.
(if you're looking for object-level takeaways, just start out with kneeling dives - they're way easier and far less intimidating - then gradually try standing up more and more)
I couldn't find a place to mention this sort of thing at the wiki, so I'm mentioning it here.
The search box should be near the top of the page.
It's one of the most valuable things on a lot of websites, especially wikis, and I don't want to have to look for it.
What are the relative merits of using one's real name vs. a pseudonym here?
When I first started reading LessWrong, I was working in an industry obsessed with maintaining very mainstream appearances, so I chose to go with a pseudonym. I have since changed industries and have no intention of going back, so my original reason for using a pseudonym is probably irrelevant now.
I continue running into obstacles (largely-but-not-exclusively of an accessibility nature) when it comes to the major crowdfunding websites. It seems not to be just me; the major platforms (Kickstarter/Indiegogo) could stand to be much more screen reader-friendly, and the need for images (and strong urging to use videos) is an obstacle to any blind person seeking funding who doesn't have easy access to sighted allies/minions.
My present thoughts are that I'd rather outsource setting up crowdfunding campaigns to someone for whom these would not be serious ob... (read more)
Here's an interesting article that argues for using (GPL-protected) open source strategies to develop strong AI, and lays out reasons why AI design and opsec should be pursued more at the modular implementation level (where mistakes can be corrected based on empirical feedback) rather than attempted at the algorithmic level. I would be curious to see MIRI's response.
I searched and it doesn't look like anyone has discussed this criticism of LW yet. It's rather condescending but might still be of interest to some: http://plover.net/~bonds/cultofbayes.html
I don't think "condescending" accurately captures what is going on here. This seems to be politics as the mindkiller, pretty heavily (ironically, one of the things they apparently think is stupid or hypocritical). They've apparently taken some of the, for lack of a better term, "right-wing" posts and used them as a general portrayal of LW. Heck, I'm in many ways in the same political/tribal group as this author and think most of what they said is junk. Examples include:
... (read more)
I'd more go with "incoherent ranting" than "condescending".
Does anyone have any opinions on this paper? [http://arxiv.org/pdf/1207.4913.pdf]
It is a proof of Bell's Inequality using counterfactual language. The idea is to explore links between counterfactual causal reasoning and quantum mechanics. Since these are both central topics on Less Wrong, I'm guessing there are people on this website who might be interested.
I don't have any background in Quantum Mechanics, so I cannot evaluate the paper myself, but I know two of the authors and have very high regard for their intelligence.
Does anybody think that there might be another common metaethical theory to go along with deontology, consequentialism, and virtue ethics? I think it's only rarely codified, usually used implicitly or as a folk theory, in which morality consists of bettering one's own faction and defeating opposing factions, and as far as I can see it's most common in radical politics of all stripes. Is this distinguishable from very myopic consequentialism or mere selfishness?
It depends on the reasons why one considers it right to benefit one's own faction and defeat opposing ones, I guess. Or are you proposing that this is just taken as a basic premise of the moral theory? If so, I'm not sure you can justifiably attribute it to many political groups. I doubt a significant number of them want to defeat opposing factions simply because they consider that the right thing to do (irrespective of what those factions believe or do).
Also, deontology, consequentialism and virtue ethics count as object-level ethical theories, I think, not meta-ethical theories. Examples of meta-ethical theories would be intuitionism (we know what is right or wrong through some faculty of moral intuition), naturalism (moral facts reduce to natural facts) and moral skepticism (there are no moral facts).
I often write things out to make them clear in my own mind. This works particularly well for detailed planning. Just as some people "don't know what they think until they hear themselves say it", I don't know what I think until I write it down. (Fast typing is an invaluable skill.)
Sometimes I use the same approach to work out what I think, know or believe about a subject. I write a sort of evolving essay laying out what I think or know.
And now I wonder: how much of that is true for other people? For instance, when Eliezer set out to write the Seq... (read more)
Idle curiosity / possibility of post being deleted:
At one point in LessWrong's past (some time in the last year, I think), I seem to recall replying to a post regarding matters of a basilisk nature. I believe that the post I replied to was along these lines:
I believe my response was along the lines of:
... (read more)
Problem:
Inspiration:
This thread.
Proposal:
Dismiss comment button
Bob writes a comment that doesn't carry its weight. Alice, a LW reader, can choose to up-vote, down-vote, or Dismiss Bob's comment. Dismiss advise... (read more)
That quote is from this Slate article - the article is mostly about social stigma surrounding mental illness.
The quote is plausible, in an untrustworthy common-sense kind of way. It also seems to align with my internal perspective of my moral life. Does anyone know if it is actually true? What research is out there?
EDIT: In case it isn't clear, I'm asking if anyone knows anything about the (uncited) resea... (read more)
I'm a CFAR alumnus looking to learn how to code for the very first time. When I met Luke Muehlhauser, he said that as far as skills go, coding is very good for learning quickly whether one is good at it or not. He said that Less Wrong has some resources for learning and assessing my own natural talent or skill for coding, and he told me to come here to find it.
So, where or what is this resource which will assess my own coding skills with tight feedback loops? Please and thanks.
I've set up a prediction tracking system for personal use. I'm assigning confidence levels to each prediction so I can check for areas of under- or over-confidence.
My question: If I predicted X, and my confidence in X changes, will it distort the assessment of my overall calibration curve if I make a new prediction about X at the new confidence level, keep the old prediction, and score both predictions later? Is that the "right" way to do this?
More generally, if my confidence in X fluctuates over time, does it matter at all what criterion I use ... (read more)
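Whatever criterion you settle on for re-predictions, the standard way to score the records afterwards is to bucket predictions by stated confidence and compare each bucket's confidence against its observed hit rate. A minimal sketch, with made-up records:

```python
from collections import defaultdict

def calibration_curve(records):
    """records: (stated confidence, outcome) pairs.
    Returns {confidence bucket: (observed frequency, count)},
    bucketing to the nearest 10%."""
    buckets = defaultdict(list)
    for conf, outcome in records:
        buckets[round(conf, 1)].append(outcome)
    return {b: (sum(outs) / len(outs), len(outs))
            for b, outs in sorted(buckets.items())}

# Made-up records: (confidence, did the prediction come true?)
records = [(0.9, True), (0.9, True), (0.9, False), (0.7, True), (0.7, False)]
for bucket, (freq, n) in calibration_curve(records).items():
    print(f"stated ~{bucket:.0%}: observed {freq:.0%} (n={n})")
```

On this scoring, scoring both the old and the new prediction means the same event contributes to two buckets; whether that distorts the curve depends on whether confidence changes correlate with outcomes, which is exactly the question being asked.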
This has probably been mentioned before, but I didn't feel like searching the entire comment archive of Less Wrong to find discussion on it: Can functionality be programmed into the website to sort the comments from posts from Overcoming Bias days by "Best" or at least "Top" ("New" would be nice as well!!)? Those posts are still open for commenting, and sometimes I find comments from years later more insightful. Plus, I'm sick and tired of scrolling through arguments with trolls.
And, given that this probably has been discussed before - why hasn't it been done yet?
Just a few questions for some of you:
Running simulations with sentient beings is generally accepted as bad here at LW; yes or no?
If you assign a high probability of reality being simulated, does it follow that most people with our experiences are simulated sentient beings?
I don't have an opinion yet, but I find the combination of answering yes to both questions extremely unsettling. It's like the whole universe conspires against your values. Surprisingly, each idea encountered by itself doesn't seem too bad. It's when simultaneously being agai... (read more)
I would appreciate some advice. I've been trying to decide what degree to get. I've already taken a bunch of general classes and now I need to decide what to focus on. There are many fields that I think I would enjoy working in, such as biotechnology, neuroscience, computer science, molecular manufacturing, alternative energy, etc. Since I'm not sure what I want to go into, I was thinking of getting a degree with a wide range of applications, such as physics or math. I plan on improving my programming skills in my spare time, which should widen my prospects.
On... (read more)
Do any programmers or web developers have an opinion about getting training on Team Treehouse? Has anyone else done this?
Does anyone have a working definition of "forgiveness"? Given that definition, do you find it to be a useful thing to do?
Does anyone else have problems with the appearance of LessWrong? My account is somehow displayed at the bottom of the site, and the text of some posts overflows the white background. I noticed the problem about 2 days ago. I didn't change my browser (Safari) or anything else. Here are 2 screenshots:
http://i.imgur.com/OO5UHPX.png http://i.imgur.com/0Il8TeJ.png
In programming, you can "call" an argumentless function and get a value. But in mathematics, you can't. WTF?
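One way to dissolve the puzzle: a mathematical "function of zero arguments" is fully determined by its single value, so it collapses into a constant, whereas a programming-language nullary function is a procedure that may consult hidden state or the outside world, so calling it is a meaningful act. A quick illustration (Python, purely for example):

```python
import random

# In mathematics, a zero-argument function is determined by its one
# value -- it just IS a constant:
TAU = 6.283185307179586

# In programming, a nullary function can read the outside world...
def roll():
    return random.randint(1, 6)

# ...or mutate hidden state, so successive calls can disagree:
counter = 0
def next_id():
    global counter
    counter += 1
    return counter

print(next_id(), next_id())  # 1 2 -- same "function", different values
```

Viewed this way, the mathematical analogue of `next_id` isn't a function of zero arguments at all, but a function of the (implicit) state.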