If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Has Eugine's mass-downvoting got more aggressive for everyone lately, or just for me? I am getting hit for 10 points or so per day; not only old comments but (I think) every comment I post, without exception.
[EDITED to add:] Of course by "everyone" I mean all Eugine's targets. Actually I don't know who else he's gunning for at the moment; perhaps it's just me.
Perhaps some of those downvotes are from other people and/or reflect actual deficiencies in what I post. But I bet the great majority are just Eugine being Eugine.
[EDITED to add:] Actually, this is interesting. At least some of my comments that are net-positive have lots of downvotes, in some cases more than seems plausible "organically". E.g., this one appears to be on +7-5; I'm not sure it really deserves +7, but I'm extremely sure it doesn't deserve -5. This one appears to be +6-5; a natural -5 seems more plausible here but still unlikely. This one is on +3-4. Some more, all mysteriously on just enough downvotes to come out negative: +1-2, +3-4, +2-3, +2-3, +2-3, +3-4, +2-3, +4-5, +3-4, +2-3, +3-4, +3-4. That's twelve consecutive comments from my overview page, all of which just happen to be on exactly -... (read more)
Thanks for posting this. I've forwarded it to tech support.
How do you find new accounts?
I haven't myself noticed a lot of new accounts other than ones I've already reported as likely-Eugines, and one other that I'm keeping an eye on -- you might want to ask OrphanWilde, who is the one who reported seeing a lot of new accounts.
When I do notice new accounts it's simply by seeing things in the "Recent Comments" written by users whose names I don't recognize.
(If mods don't have a tool for listing recently created accounts, that should go on whatever monstrous wishlist we have for LW features...)
It is on the wishlist.
To my eye, there are... an unusual number of new accounts jumping immediately into posting, lately. None of them have Eugine's trademark style or focus on his preferred topics, however.
I would be unsurprised to find that some of them are Eugine.
There's an obvious solution, which I propose in a spirit of impartial generosity: Insta-ban any account that downvotes any of my comments :-).
(Horrifically, that might actually be an improvement on the present state of affairs. I hope it's unnecessary to say that it would still be an absolutely terrible idea, but I'll say it anyway just in case.)
I am keeping an eye on the individuals, at any rate. It will be interesting if he's adopting a new tactic of -not- talking about the same tired talking points. It would suggest a level of cleverness he thus far has not demonstrated.
And once we get the tools in place to start tracking downvote patterns, that game will be up, too.
My question is: why the heck are you such a dangerous person to Eugine? What point of view do you hold that Eugine deems so worthy of mass-downvoting? Ironically for him, now I want to know.
At this point I think it's mostly a personal vendetta on his part. But back when he wasn't just downvoting practically everything I ever post, his mass-downvoting was usually triggered by my having the temerity to disagree with him about one of his three hot-button issues: (1) whether black people are stupid, lazy and dangerous, (2) whether women are mentally unsuited for science, engineering, etc., and (3) whether transgender people should be called "trannies", addressed by their "old" pronouns, etc.
(Eugine would not necessarily express his positions in the way I have suggested there. But e.g. when presented with a list of highly successful black people -- after he suggested there are no successful black people for a "black pride" event to celebrate -- he described them as "basically dancing bears". Make of that what you will.)
You forgot something: Eugine holds that anyone who disagrees with these views is insufficiently rational and doesn't belong on Less Wrong.
He decided at one point that there were too many such irrational people, and engaged in a mass-downvote campaign to punish his ideological enemies; he was banned for this, and keeps coming back, like a sad dumb little puppy who can't understand why he gets punished for shitting on the carpet.
I'm curious; did you choose that analogy on purpose?
Anyway: yes, I agree, I think Eugine thinks that lack of enthusiasm for bigotry => denial of biological realities => stupid irrationality => doesn't belong on LW, and that's part of what's going on here. But I am pretty sure that Eugine or anyone else would search in vain for anything I've said on LW that denies biological realities, and that being wrong about one controversial topic doesn't by any means imply stupid irrationality -- and I think it's at least partly the personal-vendetta thing, and partly a severe case of political mindkilling, that stops him noticing those things.
 And not only because searching for anything on LW is a pain in the (ahahahaha) posterior.
I'd actually meant to link to that exact page, but forgot.
Remark: a policy of pushing all someone's comments down to exactly -1 is worse (for LW, whether or not for the victim) than a policy of downvoting them all n times, for specific n, because it erases information. Suppose I post a stupid comment that someone votes down to -1, and an insightful one that gets up to +4; then along comes Eugine, leaves the first alone and votes the other one down to -1. And now they look exactly the same; Eugine has removed not only the evidence of my insight in the second case, but also the evidence of my stupidity in the first.
The information isn't completely gone; from any comment whose net score isn't zero and whose total number of votes isn't too large you can reconstruct the upvote and downvote numbers by looking at the "% positive" figure. But that doesn't distinguish between Eugine-downvotes and other downvotes, and e.g. the parent of this comment which is currently on (+2,56%) could be either +9-7 or +10-8. And, more to the point, "can be roughly determined by doing some calculation" is rather different from "can be seen immediately"; even in so far as the information isn't lost, it's severely obscured.
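The reconstruction described above can be sketched as a small calculation. This is a hypothetical helper, not anything the site provides; it assumes the displayed "% positive" figure is the vote ratio rounded to the nearest whole percent (an assumption about the site's rounding, not documented behaviour):

```python
def candidate_vote_splits(net_score, pct_positive, max_up=30):
    """Find (upvotes, downvotes) pairs consistent with a comment's
    net score and its displayed "% positive" figure.

    Assumes pct_positive is the percentage rounded to the nearest
    integer (an assumption; the site's exact rounding is unknown).
    """
    matches = []
    for up in range(0, max_up + 1):
        down = up - net_score
        if down < 0 or up + down == 0:
            continue
        if round(100 * up / (up + down)) == pct_positive:
            matches.append((up, down))
    return matches

# The (+2, 56%) comment mentioned above is ambiguous, as described:
print(candidate_vote_splits(2, 56))  # → [(9, 7), (10, 8)]
```

As the output shows, a score of +2 at 56% positive is consistent with both +9-7 and +10-8, which is exactly the "severely obscured" information loss the comment complains about.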
The more succ... (read more)
There are a bunch of "comment score below threshold" comments on this thread. Those comments are reasonable polite comments, mostly about the current difficulties with karma abuse here.
I hope to eventually prevent karma abuse, and finding out who's been downvoting discussion of karma abuse should be part of the process.
Most of you are probably annoyed by the sudden focus on Eugine; why is Less Wrong focusing so much on one person? Isn't that just giving him what he wants?
Well, to answer the first question, we're not focusing on Eugine; I'm currently mostly poking him in my off-time using low-effort strategies with particular goals in mind. If I decided to wage war on Eugine, no-holds-barred, I'd start with an upvote brigade; any individual identified as being targeted by Eugine would be targeted far more effectively by my bots, with a 10:1 upvote ratio, and targeted downvotes at his sockpuppets. And I'd work to be sanctioned by the admins, meaning my brigade wouldn't suffer attrition the way his sockpuppet army would.
Even that would be low-effort. It'd take about an hour of coding, and another hour to register all the accounts. (Somewhat longer would be getting administrator approval to break the rules.) If I really wanted to get him, I'd pull down the source code for Less Wrong and create tools to find his bots and disable them. It wouldn't even be difficult.
As for the second question, of whether focusing on him is giving him what he wants: Some of you weren't around for his first downvot... (read more)
For what it's worth, I think tech support cares somewhat, but not enough for a gung ho effort.
I think that's nastier than necessary -- tech support has been giving some help. The problem is that they aren't willing to develop new tools.
If other people make the necessary tools, are they willing to deploy them?
You probably underestimate the number of new users -- the ones who posted their first five or ten comments, received -1 karma on each, and left the website because they felt the community disliked them (while in reality their only "sin" was e.g. mentioning being a woman in one of those comments) -- who in an alternative reality could have produced useful content for the website.
I agree that downvoting crusades and lower quality of content are mostly two separate problems that need to be addressed separately. But on some scale, one bad thing contributes to another.
My impression is that it accelerated the departure of lefty and/or female LWers by more than a hair.
There really isn't that much on LW about this -- if it seems like a lot, I think it's more because there's so little other content on LW.
That was actually done to Eugine at one point. He quickly noticed it, and freaked out.
It's hard to tell; people don't usually bother saying why they're going. But I can offer someone saying they almost left because of a single incident of mass-downvoting. And daenerys (who has since left LW) saying that mass-downvoting is discouraging her from participating much, though at that point she evidently had no plans to leave altogether.
And, over on Slate Star Codex (where there are no links to individual comments; sorry), if you go to this thread and search for "Because I got mod-bombed" you'll find ialdabaoth saying that's why they left LW; if you read other comments near that one you'll find a bunch of other people saying they left and/or are considering leaving because they don't like how it feels to get heavily downvoted. They aren't (I think) talking about Euginification, but if (1) it's common to be pushed away from places like LW because being heavily downvoted is unpleasant, and (2) there is someone around throwing heavy downvoting at people whose politics he doesn't like, there's an obvious conclusion to draw.
I don't know the politics (or, in several cases, the gender) of the people I'm pointing at, so I am not going to claim them as examples of "lefty an... (read more)
I'm not claiming that LW is generally hostile to lefties, nor that there aren't things that happen here that might annoy righties or push them away, nor that overall it's worse for lefties than for righties. Only that one particular thing that happens here makes LW more unpleasant for lefties than it need be and drives some away.
(I would prefer LW to be a place where people with any political proclivities at all can feel welcome, unless those proclivities are severely and overtly anti-rational or so obnoxious as to render them unwelcome pretty much everywhere.)
Though I enthusiastically endorse the concept of rationality, I often find myself coming to conclusions about Big Picture issues that are quite foreign to the standard LW conclusions. For example, I am not signed up for cryonics even though I accept the theoretical arguments in favor of it, and I am not worried about unfriendly AI even though I accept most of EY's arguments.
I think the main reason is that I am 10x more pessimistic about the health of human civilization than most other rationalists. I'm not a cryonicist because I don't think companies like Alcor can survive the long period of stagnation that humanity is headed towards. I don't worry about UFAI because I don't think our civilization has the capability to achieve AI. It's not that I think AI is spectacularly hard, I just don't think we can do Hard Things anymore.
Now, I don't know whether my pessimism is more rational than others' optimism. LessWrong, and rationalists in general, probably have a blind spot relative to questions of civilizational inadequacy because those questions relate to political issues, and we don't talk about politics. Is there a way we can discuss civilizational issues without becoming mind-killed? Or do we simply have to accept that civilizational issues are going to create a large error bar of uncertainty around our predictions?
What skills are overwhelmingly easier to learn in institutionalized context?
(E.g. math wouldn't count, because even if institutions circumvent motivation as an issue, you could theoretically study everything at home. Neither, necessarily, would the handling of some kind of lab equipment, if there was clear documentation available for you and (assuming that you took the effort to remember it) the transfer to practice was straightforward -- so pushing buttons and changing settings would count as straightforward, while the precise motions of carving a specific kind of motif into wood would be less so.)
Probably saying the obvious, but anyway:
What is the advantage of nice communication in a rationalist forum? Isn't the content of the message the only important thing?
Imagine a situation where many people, even highly intelligent, make the same mistake talking about some topic, because... well, I guess I shouldn't have to explain on this website what "cognitive bias" means... everyone here has read the Sequences, right? ;)
But one person happens to be a domain expert in an unusual domain, or happened to talk with a domain expert, or happened to rea... (read more)
I am looking for sources of semi-technical reviews and expository weblog posts to add to my RSS reader; preferably 4—20 screenfuls of text on topics including or related to evolutionary game theory, mathematical modelling in the social sciences, theoretical computer science applied to non-computer things, microeconomics applied to unusual things (e.g. Hanson's Age of Em), psychometrics, the theory of machine learning, and so on. What I do not want: pure mathematics, computer science trivia, coding trivia, machine learning tutorials, etc.
Some examples that ... (read more)
Lesswrong.com and the Facebook group were very quiet this week. (The Slack doubled in volume, to around 18k messages this week.)
Any ideas why?
Possibly just random? There's a feedback effect where if LW is quiet one day, there's less to respond to the next day so it is likely to remain quiet -- so I think smallish random fluctuations can easily produce week-long droughts or gluts.
Mainstream discussion of existential risk is becoming more of a thing. A recent example is this article in The Atlantic. They mention a variety of risks but focus on nuclear war and worst-case global warming.
I've realized that if I Pomodoro most things, instead of just some things, I feel more motivated to go through my to-do list. Sorry if this is already obvious. I tend to do Pomodoros on repetitive, long-term, open-ended tasks like studying, practicing or working.
I'd refrained from doing any Poms on short-term goals that are uncertain in how much time they take -- longer than an hour but less than 8 hours, for example researching health insurance. I feel unmotivated to start such a task because I know it's going to take a long time, but not too long, and I don't know how long... (read more)
BBC News is running a story claiming that the creator of Bitcoin known as Satoshi Nakamoto is an Australian named Craig Wright.
People on Hacker News and reddit.com/r/bitcoin are sceptical.
Meta: I got the date of the last OT wrong, modified it to say 25th-1st, and this thread runs 2nd-8th.
It just got a lot cheaper and easier to do amino acid builds and mods. With a helpful AGI, you could have designer drugs for pennies per design.
[EDITED to add:] Paper: http://science.sciencemag.org/content/early/2016/04/20/science.aaf6123
I apologize in advance for asking an off-topic question, but my Google-fu has failed me.
My girlfriend's niece is a Small Child who likes to turn the volume on her Android tablet all the way up, making it too loud for everyone else. How can we make it so that when she tries to make the tablet louder, nothing happens? (I know how to do this on an iOS device but not an Android one.)
I did an exercise in generating my values.
A value is like a direction - you go north, or south. You may hit goal mountains and hang a right past that tree, but you still want to be going north. Specifically, you may want to lose weight on the way to being healthy, but being healthy is what you value. This was from a 5-10 minute brainstorm, pen+paper session (with a timer) in one of our dojos. I kinda don't want it to be for just my benefit, so I figured I would share it here; they are in no order.
My values rot13:
Haqrefgnaq ubj guvatf jbex
NZ epidemiologist Pearson A.L. appears to have predicted the Trans-Pacific Partnership in 2014: "Although such a case may have no strong grounds in existing New Zealand law, it is possible that New Zealand may in the future sign international trade agreements where such legal action became more plausible." - British Medical Journal
Why do I, as a desperate male -- lonely-and-horny-level desperate -- stave off the attention of females when I'm not the one leading the charge? One of my peak experiences was visiting Torquay on an undergrad uni field trip walking w
The "simulation argument" by Bostrom is flawed. It is wrong. I don't understand why a lot of people seem to believe in it. I might do a write-up of this if anyone agrees with me, but basically, you cannot reason about what is outside our universe from within our universe. It doesn't make sense to do so. The simulation argument is about using observations from within our own reality to describe something outside our reality. For example: simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are s... (read more)
First, Bostrom is very explicit that the conclusion of his argument is not "We are probably living in a simulation". The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won't reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.
Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other places). He essentially flips your disjunctive reasoning around. He argues that we are either in a simulation or we are not. If we are in a simulation, then claim 3 is obviously true, by hypothesis. If we are not in a simulation, then our ordinary empirical evidence is a veridical guide to the universe (our universe, not some other universe). This means the evidence and assumptions used as the basis for the simulation argument are sound in our universe. It follows that, since claim 3 is false by hypothesis, either claim 1 or claim 2 is very likely to be true. It's worth noting that these two are claims about our universe, not about some... (read more)
On Fox News, Trump said that regarding Muslims in the US, he would do "unthinkable" things, "and certain things will be done that we never thought would happen in this country". He also said it's impossible to tell with absolute certainty whether a Syrian was Christian or Muslim, so he'd have to assume they're all Muslims. This suggests that telling US officials that I'm a LW transhumanist might not convince them that I have no connection with ISIS. I'm not from Syria, but I have an Arabic name and my family is Muslim.
I've read Cory Doc... (read more)