Ruby

Member of the LessWrong 2.0 team. I've been a member of the rationalist/EA communities since 2012. I have particular rationality interests in planning and emotions.

Sequences

LW Team Updates & Announcements
Novum Organum

Comments

The Wiki is Dead, Long Live the Wiki! [help wanted]

Ah, yep, Dark Arts has the flag set for requiring a manual merge. These show up on the tag dashboard and when you go to edit the tag. Someone will get to it! (Of course, feel free to be someone, but no pressure.)

The Wiki is Dead, Long Live the Wiki! [help wanted]

The vision is that tag pages should be wiki pages, no matter the depth. (Long pages get displayed with truncation on load, with the rest behind "Read More", so it's fine.) I think it's actually good to keep the longer discussion on the one page for the topic.

I suspect that most of the "missing content" comes from the fact that we haven't finished "merging" the old wiki pages with existing tags, and therefore the current text is just whatever the new tag already had. (And the revision/history reviewer makes it seem like this is intentional, but it's not.)

Merging = combining the new and old text in whatever way makes the most sense, taking whichever bits are better when they conflict.

The campaign to get through all the manual import processing continues! We just launched the new tagging dashboard today, on which you can filter for pages requiring merging. Currently 75 pages remain to be merged.

The Wiki is Dead, Long Live the Wiki! [help wanted]

It was indeed I who went through most of the old wiki pages and decided what to do with them. There were ~600, so I do expect to have made some mistakes, and I would be very happy to discuss any valuable ones I missed.

Looking at Adversarial process, I don't see why I wouldn't have imported it. And yet I didn't mark it as anything on my spreadsheet, so my bad.

We can import it. Let me know any others you think should be there. 

From the old talk page on the LW1.0 wiki:

Talk:Slate Star Codex

Wikipedia AFD: if someone can get to the task, it would be nice to merge in https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Slate_Star_Codex Deku-shrub (talk) 22:44, 17 June 2017 (AEST)

Text: Talk:Slate_Star_Codex/wikipedia

From the old wiki discussion page:

Talk:Roko's basilisk

Weirdness points

Why bring up weirdness points here, of all places, when Roko's basilisk is known to be an invalid theory? Is this meant to say, "Don't use Roko's basilisk as a conversation starter for AI risks"? The reason for bringing up weirdness points on this page could do with being made a bit more explicit; otherwise I might just remove or move the section on weirdness points.--Greenrd (talk) 08:37, 29 December 2015 (AEDT)

I just wanted to say

That I didn't know about the term "basilisk" with that meaning, and that makes it a basilisk for me. Or a meta-basilisk, I should say. Now I'm finding it hard not to look for examples on the internet.

Eliminate the Time Problem & the Basilisk Seems Unavoidable

Roko's Basilisk is refutable for the same reason that it makes our skin crawl: the time differential, the idea that a future AI would take retribution for actions predating its existence. The refutation is, more or less, why would it bother? Which I suppose makes sense, unless the AI is seeking to establish credibility. Nevertheless, the time dimension isn't critical to the Basilisk concept itself. At whatever point in the future a utilitarian AI (UAI) came into existence, there would no doubt be some who opposed it. If there were enough opponents to present a potential threat to the UAI's existence, the UAI might be forced to defend itself by eliminating that risk, not because the opposition threatens the UAI itself, but because a threat to the UAI is a threat to the utilitarian goal.

Consider self-driving cars with the following assumptions: currently about 1.25 million people are killed and 20-50 million injured each year in traffic accidents (asirt.org); let's say a high-quality self-driving system (SDS) would reduce this by 50%; BUT some of those who die as a result of the SDS would not have died without it. Deploying the SDS universally would seem a utilitarian imperative, as it would save over 600,000 lives per year. Yet some people may oppose doing so because of a bias in favor of human agency, and out of fear that there would be some quantity of SDS-caused deaths that would otherwise not occur.

Why would a UAI not eliminate 100,000 dissenters per year to achieve the utilitarian advantage of a net 500,000 lives saved?

TomR Oct 18 2019

The Fallacy of Information Hazards

The concept that a piece of information, like Roko's Basilisk, should not be disclosed assumes (i) that no one else will think of it and (ii) that a particular outcome, such as the eventual existence of the AI, is a predetermined certainty that can neither be (a) prevented nor (b) mitigated by ensuring that its programming addresses the Basilisk. I am unaware of any basis for either of these propositions.

TomR Oct 18 2019

From the old discussion page on the LW1.0 wiki:

Talk:Rationalist movement

This question has always bothered me, but now after thinking about it a lot I finally have a clear answer: rationalism is the belief that Eliezer Yudkowsky is the rightful caliph.

This is a joke taken out of its context in the article. I think the line should be replaced with (...) if you want to leave the idea of [something was here in the original article].
I'm not motivated enough to fight over it. My arguments are:

  1. it lacks the context of a whole section of the article
  2. it could be taken out of context by those who consider us an outgroup, and
    1. even more so when it's there on the wiki on the site where the movement began tribening
  3. I'd prefer if Scott hadn't formulated it this way because I find the idea of Caliph Eliezer fucking terrifying, and since the movement has that funny habit of engaging with its critics as if they were part of it, a more accurate, albeit less funny, formulation would have been "rationalism is the movement that discusses whether the rightful caliph is Eliezer Yudkowsky."

Lead

I don't think we have a very good lead for this article: what is "a set of modes" and how does it relate to actual communities of people? Alti (talk) 10:15, 24 April 2017 (AEST)

The citations on this page are a mess with all their links. I'm going to leave them for now and mark it as done; they can be fixed up later. Not worth the time it'd take.
