Protips:
The timeline continues with legal actions and arguments about what happened, but has no additional allegations.
You forgot me.
August 13th, 2013
Dallas J. Haugh
Dallas posts a suicide note which includes allegations of rape against Shermer. It is taken down by a relative when he is secured and taken to a hospital; after he's released, he reposts it.
Allegedly.
I don't really feel the need to write that when I am aware of it from personal experience.
I actually calibrated my P(God) and P(Supernatural) based on P(Simulation), figuring that cases of (~Simulation & Supernatural) are basically noise, so an exact figure for them wasn't worth chasing.
I forgot what I actually defined "God" as for my probability estimation, as well as the actual estimation.
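Spelled out, the decomposition I had in mind was roughly this (my reconstruction; as I said, the actual numbers are gone):

$$P(\text{Supernatural}) = P(\text{Supernatural} \mid \text{Simulation})\,P(\text{Simulation}) + P(\text{Supernatural} \mid \lnot\text{Simulation})\,P(\lnot\text{Simulation}) \approx P(\text{Supernatural} \mid \text{Simulation})\,P(\text{Simulation})$$

with the second term dropped as noise, and the same shape for P(God).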
Your updates to your blog as of this post seem to replace "Less Wrong", or "MIRI", or "Eliezer Yudkowsky", with the generic term "AI risk advocates".
This just sounds more insidiously disingenuous.
At least now when I cite Eliezer's stuff in my doctoral thesis, people who don't know him - there are a lot of them in philosophy - will not say to me "I've googled him and some crazy quotes eventually came up, so maybe you should avoid mentioning his name altogether". This was a much bigger problem for me than it sounds. I had to do all sorts of workarounds to use Eliezer's ideas as if someone else had said them, because I was advised not to cite him (and the main, often the only, argument was in fact the crazy quotes).
There might be some very...
I've had to deal all weekend with the stress you are contributing to putting on the broader perception of transhumanism, and that is on top of preexisting mental problems. (Whether MIRI/LW is actually representative of this is entirely orthogonal to the point; public perception has shifted, and is still shifting, towards viewing the broader context of futurism as run by neoreactionaries and beige-os with parareligious delusions.)
Of course, that's no reason to stop anything. People are going to be stressed by things independent of their content.
But you are expecting an...
Paperclip maximizer, obviously. Basilisks typically are static entities, and I'm not sure how you would go about making a credible anti-paperclip 'infohazard'.
I completed the survey. (Did not do the digit ratio questions due to lack of available precise tools.)
Can you be slightly more specific on the context? Like, at least the vague fields of study it might apply to? This would allow us to make an informed decision.
"Is a even better joke than the previous joke when preceded by its quotation" is actually much funnier when followed by something completely different.
It seems like both of you just want everyone to use efficient RVs.
Perhaps a travelling Less Wrong fleet?
Okay, this is weird, but the first thing that popped into my head when you mentioned that there were images that used to be from this article was an image of a pony, vaguely Pinkie Pie looking. (being aware of cognition is weird)
I don't even watch My Little Pony or participate in its community. Now I'm starting to wonder if it has evolved into some sort of toxic meme that substitutes itself for generic forms of things.
A community blog with the purpose of refining the practice of rational behavior?
Eliminates human bias, doesn't imply that rationality is an 'art', and proclaims itself teleologically rather than ontologically.
I think I am currently in this state. (The inducing factor was probably going to a science fiction convention; I'm not sure why this is weirdly inspirational.) Does anybody have a roundup of appropriate posts somewhere?
Can you imagine Harry killing Hermione because Voldemort threatened to plague all sentient life with one barely noticed dust speck each day for the rest of time? Can you imagine killing your own best friend/significant other/loved one to stop the powers of the Matrix from hitting 3^^^3 sentient beings with nearly inconsequential dust specks? Of course not. No. Snap decision.
My breaking point would be about 10 septillion people, which is far, far less... no, wait, that's for a single-event dust speck.
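For scale (my own arithmetic, not from the thread): 10 septillion is $10^{25}$, while in Knuth up-arrow notation

$$3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3) = 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987,$$

a power tower of 3s that is 7,625,597,484,987 levels tall. The two thresholds aren't even in the same universe of magnitudes.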
What's your definition of all sentient life? Are we tal...
So to be clear, you are claiming that the destruction of all life on Earth is a better alternative than life continuing with the common current values?
(5) We create an AI which does not correspond to my values.
So part of the whole point of attempts at things like CEV is that they will (ideally) not use any individual's fixed values, but rather will try to use what everyone's values would be if they were smarter and knew more.
...If LW is not trying to eradicate the scourge of transphobia, then clearly SIAI has moved from 1 to 5, and I should be trying t
That is indeed my concern. If CFAR can't avoid a Jerry Sandusky/Joe Paterno type scenario (which I consider it reasonably capable of, given that one of its founders wrote HPMOR), then it is literally a horrendous joke and I should be allocating my contributions somewhere more productive.
This confuses me. First of all, the probability of such a scenario is tiny (how many universities have the exact same complete lack of safeguards and transparency, and how many had an international scandal?). Second, the difference between writing HPMOR and the differen...
So this looks pretty nasty and is frankly disappointing. But he's acknowledged the irrational aspect of it and hasn't brought the statements himself to LW. Moreover, as Gwern correctly notes, IRC is a medium where people often lack any substantial filter. The proper response would be for Gwern to just avoid discussing these issues (which in fact he says he does). In any event, I fail to see how these comments mandate "reparations". If people on IRC want to appropriately rebuke him when he says this sort of knee-jerk stupid shit when it comes up, that makes sense. The connection this has to SI or CFAR is pretty minimal.
I think gwern's expressed attitudes toward transsexuals are both harmful and not rationally defensible; i.e., if he thought about them sensibly with access to good data, he'd want to change them rather than parading them.
However, I don't think LW should ban people on the basis of that sort of attitude. Everyone is an asshole on some topic. (Me, I can be an asshole about open source. Some of my best friends are Windows users, but ....)
Coercing "apology and reparations" is counterproductive because of the example it sets. It would mean that anyone ...
How is gwern still allowed on this site without making a significant apology and reparations?
Are you suggesting banning users from LW if they make any unwelcoming comments anywhere else without apologizing for them? The absence of that policy seems to be the "how," and I think I much prefer not having that policy to having that policy.
It is making me seriously reconsider any funding that I would give to CFAR or SIAI.
Is your true rejection to funding CFAR or SIAI that they don't have a policy in place for the forum affiliated with them? I'm...
Why are you writing that here? Did you mean to reply to some other comment or am I missing something?
Am I the only person who answered "100" on the cryonics question because "revived at some point in the future" was indefinite enough that a Boltzmann brain-like scenario inevitably occurring eventually seemed reasonable?
Also, I did all the extra credit questions. At two in the morning.
I assumed it was supposed to mean "revived in a way that wouldn't have been possible if the patient hadn't been cryopreserved".
I somehow really thought this article was going to be about upscaled Rock 'Em Sock 'Em Robots. I'm not sure if this is better or worse.
In a void where there are just these particular Nazis and Jews, sure, but in most contexts, you'll have a variety of intelligences with varying utility functions, and those with pro-arbitrary-genocide values are dangerous to have around.
Of course, there is the simple alternative of putting the Nazis in an enclosed environment where they believe that Jews don't exist. Hypotheticals have to be really strongly defined in order to avoid lateral thinking solutions.
If the Nazis have some built-in value that determines that they hate something utterly arbitrary, then why don't we exterminate them?
This might actually be true. If you consider the categories of white people who would be most likely to have black people in their social network, what comes up is a list of categories correlated with racism (e.g. poverty, religiosity).
"Don't do this nice project that feels warm and fuzzy to you" is guaranteed to provoke a strongly negative reaction in the vast majority of people hearing it, and there's the obvious double standard (which people won't hesitate to point out and make you look stupid) of objecting to such charitable projects but not objecting to somebody buying a movie ticket, say. Besides, buying fuzzies is perfectly fine.
And that's even without the status considerations that paper-machine points out. People think we look weird already. Attacking a high status individual famous throughout the Internets isn't going to make that better.
I think there is a vague consensus that, all other things equal, eating less will make you lose weight and eating more will make you gain weight? I might have seen someone post a counterexample at least once, but I might simply be misremembering.
Yudkowsky's been downvoted before; the most notable time in recent memory was probably removing the link to the NY Observer article.
Our local surroundings could be made into a dense volume of self-replicating computronium hosting as many bare-minimum sapients as possible, but only a few people here would argue that it's morally imperative to carry that out to full term.
Another difference is that the mature sapient has typically specified, or would specify, that it should be reinstated in advance, and works within the framework of society. If the baby survives any sort of abuse it undergoes until it is sapient, then it might be entitled to some damages, but until then, it lacks self-ownership and is susceptible to destruction by its possessors.
Infants and fetuses are not sapient. Arbitrarily privileging biological life regardless of its mental capability would set a horrible precedent. Note that there isn't that coherent of a line between more intelligent mammals and human babies.
What counts as a "conversion"? I was baptized Catholic, but my family was otherwise extremely lapsed. I don't think I really believed in anything that strongly before briefly dabbling in various esoteric practices. JREF and Gödel, Escher, Bach convinced me otherwise.
It might get a bit suspicious if you only ever ask people about what other people think. You'd have to mix it with more conventional "dummy" questions.
Assuming, of course, that this hypothesis is true. The great thing is that it's easily testable.
An alien civilization within the boundaries of the current observable universe has, or will have within the next 10 billion years, created a work of art which includes something directly analogous to the structure of the "dawn motif" from the beginning of Richard Strauss's Also sprach Zarathustra. (~90%)
I wasn't actually sure what people believed about this, so I was very curious to see how this would be received. So can we say the word "cult" now?
r/atheism is that way. We at Less Wrong hold ourselves to a higher standard where rational discussion is concerned. At a purely selfish level, when posts like the author's are written with minimal civility and responses like this one are made, many bystanders reading such a response will become more sympathetic to the original writer.
Incidentally, you seem to be operating under a bunch of factual misconceptions, or are deliberately ignoring the facts to be insulting rather than helping yourself or the author become less wrong. In the Beit Shemesh case, the ...
Wait, isn't this almost exactly the beginning of Greg Egan's novel Zendegi, set at approximately the same time?
Assuming you survive for more than the next ten years or so, yes.
Also, your wife is Catholic. If you issue an ultimatum to deconvert, we end up with one of the three following scenarios:
All three scenarios weaken overall religious influence and raise the probability that your children will be epistemologically sane. I consider this preferable.
Divide the groups in two based on familial affiliation (they'll expect that).
Ask the following questions:
Bias x by (1) [near aisle should be "tiny"] and y by (2) [back should be "tall"]. Average groups, ignore children.
One example: I have had to deal with people going on and on with "but it's not really you!" arguments about mind uploading on other forums on several separate occasions. Of course it's annoying to press on about it entirely by yourself, so I don't really bother and move on after a post or two. Here, I don't have to repeat myself over and over very often, and the userbase is sympathetic, so keeping systemic obnoxiousness out of the environment is feasible enough that we should crush it with overwhelming force.
If it's a troll, I'd guess either Eliezer being meta or maybe Mitchell Porter trying to make a point, but I've seen people this oblivious before.
If you posted something not obnoxious, I'm inclined to believe the community would, in fact, upvote it.
You are self-identifying as a 9/11 "truther", which is signalling to us that you are a crank with a persecution complex. The fact that you subsequently verified delusions of persecution is just digging yourself into a deeper hole.
In the interests of identity obfuscation, I have rolled a random number between 1 and 100 and have waited for some time afterwards.
On a 1-49: I have taken the survey, and this post was made after a uniformly random period of up to 24 hours.
On a 50-98: I will take the survey after a uniformly random period of up to 72 hours.
On a 99-100: I have not actually taken the survey. Sorry about that, but this really has to be a possible outcome.
Have a 98% chance of an upvote.
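For anyone who wants to reuse the protocol, here's a minimal sketch. The branch cutoffs and delay bounds are the ones stated above; `take_survey` and `post_comment` are hypothetical placeholders, not real functions anywhere.

```python
import random
import time

def take_survey():
    pass  # placeholder: actually fill out the survey

def post_comment():
    pass  # placeholder: post the "I have taken the survey" comment

def obfuscated_report():
    roll = random.randint(1, 100)  # uniform roll from 1 to 100
    if roll <= 49:
        # 49%: take the survey first, then post after up to 24 hours.
        take_survey()
        time.sleep(random.uniform(0, 24 * 3600))
        post_comment()
    elif roll <= 98:
        # 49%: post first, then take the survey within 72 hours.
        post_comment()
        time.sleep(random.uniform(0, 72 * 3600))
        take_survey()
    else:
        # 2%: post without ever taking the survey, so the comment
        # alone can never confirm that a submission exists.
        post_comment()
```

The 2% no-survey branch is what makes the scheme work: since the comment is sometimes posted with no submission at all, its timestamp carries almost no information about which survey response is mine.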