All of XiXiDu's Comments + Replies

I don't have time to evaluate what you did, so I'll take this as a possible earnest of a good-faith attempt at something, and not speak ill of you until I get some other piece of positive evidence that something has gone wrong.

This will be my last comment and I am going to log out after it. If you or MIRI change your mind, or discover any evidence "that something has gone wrong", please let me know by email or via a private message on e.g. Facebook or some other social network that's available at that point in time.

A header statement only

... (read more)

Since you have not yet replied to my other comment, here is what I have done so far:

(1) I removed many more posts and edited others in such a way that no mention of you, MIRI or LW can be found anymore (except an occasional link to a LW post).[1]

(2) I slightly changed your given disclaimer and added it to my about page:

Note that I wrote some posts, posts that could previously be found on this blog, during a dark period of my life. Eliezer Yudkowsky is a decent and honest person with no ill intent, and anybody can be made to look terrible by selectively c

... (read more)

I don't have time to evaluate what you did, so I'll take this as a possible earnest of a good-faith attempt at something, and not speak ill of you until I get some other piece of positive evidence that something has gone wrong. A header statement only on relevant posts seems fine by me, if you have the time to add it to items individually.

I very strongly advise you, on a personal level, not to talk about these things online at all. No, not even posting links without discussion, especially if your old audience is commenting on them. The probability I estimate of your brain helplessly dragging you back in is very high.

-4Dallas8y
Your updates to your blog as of this post seem to replace "Less Wrong", or "MIRI", or "Eliezer Yudkowsky", with the generic term "AI risk advocates". This just sounds more insidiously disingenuous.

I apologize for any possible misunderstanding in this comment. My reading comprehension is often bad.

I know that in the original post I offered to add a statement of your choice to any of my posts. I stand by this, although I would have phrased this differently now. I would like to ask you to consider that there are also personal posts which are completely unrelated to you, MIRI, or LW. Such as photography posts and math posts. It would be really weird and confusing to readers to add your suggested header to those posts. If that is what you want, I will do... (read more)

4ChristianKl8y
If that's the only concern I think the solution is quite easy. You have all the MIRI related material on one page, so you can delete it while leaving the other stuff on your homepage untouched.

I already deleted the 'mockery index' (which had for some months included a disclaimer stating that I distance myself from those outsourced posts). I also deleted the second post you mentioned.

I changed the brainwash post to 'The Singularity Institute: How They Convince You' and added the following disclaimer suggested by user Anatoly Vorobey:

I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not autom

... (read more)

Thank you. I'll likewise keep my promise.

Yes, it was a huge overreaction on my side and I shouldn't have written such a comment in the first place. It was meant as an explanation of how that post came about, not as an excuse. It was still wrong. The point I want to communicate is that I didn't do it out of some general desire to cause MIRI distress.

I apologize for offending people and for overreacting to a situation that I perceived the way I described but which was, as you wrote, not actually that way. I already deleted that post yesterday.

To take the first step and show that this is not some kind of evil ploy, I have now deleted (1) the Yudkowsky quotes page and (2) the post on his personality (the explanation of how that post came about).

I realize that they were unnecessarily offensive and apologize for that. If I could turn back the clock I would do a lot differently and probably stay completely silent about MIRI and LW.

6lukeprog8y
Thanks. Things seem basically settled over here [http://lesswrong.com/lw/lb3/breaking_the_vicious_cycle/bo3f], so I'll just say: kudos for your efforts to break the vicious cycle!

Also, the page where you try to diagnose him with narcissism just seems mean.

I can clarify this. I never intended to write that post but was forced to do so in self-defense.

I replied to this comment whose author was wondering why Yudkowsky is using Facebook more than LessWrong these days. To which I replied with an on-topic speculation based on evidence.

Then people started viciously attacking me, to which I had to respond. In one of those replies I unfortunately used the term "narcissistic tendencies". I was then again attacked for using t... (read more)

So let me get this straight - you did a psychiatric diagnosis over the internet, and instead of saying, 'obviously I'm using the term colloquially' you provided evidence.

...

and then you are surprised when you get attacked, and even now characterize these attacks as coming from a mindless horde...

when the horde was actually 4 people, only one post was against you personally as opposed to being against that one thing you said, and there were roughly 2 others on your side. And your comments there are upvoted.

I think it is more like you went through all the copies of Palin's school newspaper, and picked up some notes she passed around in class, and then published the most outrageous things she said in such a way that you implied they were written recently.

This is exactly the kind of misrepresentation that makes me avoid deleting my posts. Most of the most outrageous things he said were written in the past ten years.

I suppose you are partly referring to the quotes page? Please take a look, there are only two quotes that are older than 2004, for one of whi... (read more)

Those two quotes that are dated before 2004 are the least outrageous.

This is the most outrageous one to me:

I must warn my reader that my first allegiance is to the Singularity, not humanity. I don’t know what the Singularity will do with us. I don’t know whether Singularities upgrade mortal races, or disassemble us for spare atoms. While possible, I will balance the interests of mortality and Singularity. But if it comes down to Us or Them, I’m with Them. You have been warned.

And it's clearly the exact opposite of what present Eliezer believes.

The stuff that bothers me is the Usenet and mailing list quotes (they are equivalent to passing notes and should be considered off the record) and anything written when he was a teenager. The rest, I suppose, should at least be labeled with the date it was written. And if he has explicitly disclaimed a statement, perhaps that should be mentioned, too.

Young Eliezer was a little crankish and has pretty much grown out of it. I feel like you're criticising someone who no longer exists.

Also, the page where you try to diagnose him with narcissism just seems mean.

If you feel there was something wrong about your articles, why can't you write it there, using your own words?

I have had bad experiences with admitting things like that. I once wrote on Facebook that I am not a high-IQ individual and got responses suggesting that now everyone can completely ignore me and that everything I say is garbage. If I look at the comments to this post, my perception is that many people understood it as some kind of confession that everything I ever wrote is just wrong and that they can subsequently ignore everything else I might ever ... (read more)

If I look at the comments to this post, my perception is that many people understood it as some kind of confession that everything I ever wrote is just wrong and that they can subsequently ignore everything else I might ever write.

If it helps, I believe your criticism is a mix of good and bad parts, but the bad parts make it really difficult for the reader to focus on the good parts, so in the end even the good parts are kinda wasted. It would be better if you could separate them, but the problem is probably what you describe as being "easily overw... (read more)

You don't need to delete any of your posts or comments.

What I mainly fear is that if I were to delete posts, without linking to archived versions, then you would forever go around implying that all kinds of horrible things could have been found on those pages, and that me deleting them is evidence of this.

If you promise not to do anything like that, and stop portraying me as somehow being the worst person on Earth, then I'll delete the comments, passages or posts that you deem offensive.

But if there is nothing reasonable I could do to ever improve your opi... (read more)

I wouldn't want you to delete the interview series anyway. The thing that most offended me was this: the title of "http://kruel.co/2013/01/10/the-singularity-institute-how-they-brainwash-you/" is absurdly offensive and inappropriate if you don't believe in the deliberate ill intent of MIRI. If you don't want to delete the post altogether, at least rename it to "How they convince you". When you use 'brainwash' or 'trick' or 'con', you're accusing them of being criminals. Only say such words if you really believe it.

I'd also like the del... (read more)

I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not automatically repudiate my arguments and factual points, but if you read the below, please note that I regret the venom and the personal attacks and that I may well have quote-mined and misrepresented persons and communities. I now wish I wrote it all in a kinder spirit.

Sounds good. Thanks.

Plenty of people manage to be skeptical of MIRI/EY and c

... (read more)

Also, you published some very embarrassing quotes from Yudkowsky. I’m guessing you caused him quite a bit of distress, so he’s probably not inclined to do you any favors.

If I post an embarrassing quote by Sarah Palin, then I am not some kind of school bully who likes causing people distress. Instead I highlight an important shortcoming of an influential person. I have posted quotes of various people other than Yudkowsky. I admire all of them for their achievements and wish them all the best. But as influential people they have to expect that someone mig... (read more)

As far as I can tell, Yudkowsky basically grew up on the internet. I think it is more like you went through all the copies of Palin's school newspaper, and picked up some notes she passed around in class, and then published the most outrageous things she said in such a way that you implied they were written recently. I think this goes against some notion of journalistic tact.

2joaolkf8y
You do not stand to Eliezer as you stand to Sarah Palin (as far as public figures go). The equivalent would be a minor congressman consistently devoting his speaking time to highlighting all the stupid things Sarah Palin has said (and retracted). I'm pretty sure such a congressman would meet far worse consequences than you have. EDIT: Not sure why this comment is being downvoted, but as a clarification I merely meant that the difference in social status between Alex and Sarah is bigger than that between Eliezer and him. When the gap is big enough, it doesn't matter what one says about the other, but this is not the case here. Why is that offensive/such a bad idea?

We don't have, nor ever had, a "Why Alexander Kruel/Xixidu sucks" page that we can take down.

That's implying a false equivalence. If I make a quotes page of a public person, a person with far-reaching goals, in order to highlight problematic beliefs this person holds, beliefs that would otherwise be lost in a vast amount of other statements, then this is not the same as making a "random stranger X sucks" page.

So you getting health related issues as a result of the viciousness you perpetrate...

Stressful fights adversely affect an ... (read more)

That's implying a false equivalence. If I make a quotes page of a public person, a person with far-reaching goals, in order to highlight problematic beliefs this person holds, beliefs that would otherwise be lost in a vast amount of other statements, then this is not the same as making a "random stranger X sucks" page.

Then again, LW does not have a "Why Anything Sucks" page as far as I'm aware. There are plenty of people/organizations out there with whom LW/MIRI disagree, and who are more visible than you, but I don't think LW has ev... (read more)

You are one of the people who have been spouting comments such as this one for a long time

Yes, my first encounter with you was when I bashed you for your unfair criticism of RationalWiki and your unfair support of Eliezer Yudkowsky, yet somehow you failed to call me a brainwashed cultist of RationalWiki, and you failed to launch a website devoted to how much your bashing of RationalWiki is justified because they're horrible cultish people out to brainwash you.

I reckon you might not see that such comments are a cause of what I wrote in the past.

Oh, I've actually w... (read more)

I don't think MIRI has any reason to take you up on this offer, as responding in this way would elevate the status of your writings.

Yudkowsky has a number of times recently found it necessary to openly attack RationalWiki, rather than ignoring it and clarifying the problem on LessWrong or his website in a polite manner. He also voiced his displeasure over the increasing contrarian attitude on LessWrong. This made me think that there is a small chance that they might desire to mitigate one of only a handful of sources who perceive MIRI to be important enoug... (read more)

If you want to stop accusations of lying and bad faith, stop spreading the "LW believes in Roko's Basilisk" meme...

How often and for how long did I spread this, and what do you mean by "spread"?

Imagine yourself in my situation back in 2010: After the leader of a community completely freaked out over a crazy post (calling the author an idiot in all bold and caps, etc.) he went on to massively nuke any thread mentioning the topic. In addition, there were mentions of people having horrible nightmares over it while others were actively tryin... (read more)

If you believe that I am, or was, a troll then check out this screenshot from 2009 (this was a year before my first criticism). And also check out this capture of my homepage from 2005, on which I link to MIRI's and Bostrom's homepage (I have been a fan).

If you believe that I am now doing this because of my health, then check out this screenshot of a very similar offer I made in 2011.

In summary: (a) None of my criticisms were ever made with the intent of giving MIRI or LW a bad name, but were instead meant to highlight or clarify problematic issues (b) I b... (read more)

0[anonymous]8y
Calling people brainwashed when they call you a troll is not a good strategy for letting people conclude that you aren't a troll.
3[anonymous]8y
Whoa, I can't believe I made the cut. I don't personally care what you end up doing, and I don't believe MIRI should care or even respond, though it sounds like Luke might out of xenia. However, I will say that I find it very unlikely that you can manage to stop. You've tried what, three times over the past five years? All that did was drive you to RationalWiki, the subreddit, and some other places. See you again in six or eight months.

This comment ruined my (initially very high) impression of your article. I appreciate that you are trying, and I believe in your good intentions, it's just... you are doing it somewhat wrong. Not sure if I can explain it or provide better advice.

Probably the essence is that you were strongly emotionally driven in your critique, but you seem to be also strongly emotionally driven in negotiating peace, and your offers are not well calibrated. You want to stop an unproductive debate, but your offer to MIRI to publish something on your blog seems like anot... (read more)

3ArisKatsaris8y
Is that an offer on your part to delete a percentage of your posts discussing LessWrong/MIRI, if I delete a similar percentage of my posts discussing your motives and actions? What percentage of these posts will you delete if I delete all my comments where I discuss you (or retract them if they were made in any forum that doesn't allow deletions), and do I get to choose which of your posts get deleted? Leaving aside your views on what 'winner' means, who is the 'you' here? You offered MIRI the ability to post counterstatements, and I'm not affiliated with them.

Note XiXiDu preserves every potential negative aspect of the MIRI and LW community and is a biased source lacking context and positive examples.

I have been a member for more than 5 years now, so I am probably as much a part of LW as most people. I have repeatedly said that LessWrong is the most intelligent and rational community I know of.

To quote one of my posts:

I estimate that the vast majority of all statements that can be found in the sequences are true, or definitively less wrong. Which generally makes them worth reading.

I even defended LessWrong again... (read more)

Seriously, you bring up a post titled "The Singularity Institute: How They Brainwash You" as supposed evidence towards you supporting LessWrong, MIRI whatever?

Yes, when you talk to LessWrongers, you occasionally mention the old line about how you consider it the "most intelligent and rational community I know of". But that evaluation isn't what you constantly repeat to people outside LessWrong. If you ask people "What does Alexander Kruel think of LessWrong?" nobody will say "He endorses it as the most intelligent and... (read more)

Regarding Yudkowsky's accusations against RationalWiki. Yudkowsky writes:

First false statement that seems either malicious or willfully ignorant:

In LessWrong's Timeless Decision Theory (TDT),[3] punishment of a copy or simulation of oneself is taken to be punishment of your own actual self

TDT is a decision theory and is completely agnostic about anthropics, simulation arguments, pattern identity of consciousness, or utility.

Calling this malicious is a huge exaggeration. Here is a quote from the LessWrong Wiki entry on Timeless Decision Theory:

Whe

... (read more)
6TobyBartels8y
Roko said that you could reason that way, but he wasn't actually advocating that. All the same, the authors of the RationalWiki article might have thought that he was; it's not clear to me that the error is malicious. It's still an error.
1Kyre8y
Downvoted for bad selective quoting in that last quote. I read it and thought, wow, Yudkowsky actually wrote that. Then I thought, hmmm, I wonder if the text right after that says something like "BUT, this would be wrong because ..." ? Then I read user:Document's comment. Thank you for looking that up.
6Document8y
I'm pretty sure that I understand what the quoted text says (apart from the random sentence fragment), and what you're subsequently claiming that it says. I just don't see how the two relate, beyond that both involve simulations. From your own source, immediately following the bolded sentence: I don't completely understand what he's saying (possibly I might if I were to read his previous post); but he's pretty obviously not saying what you say he is. (I'm also not aware of his ever having been employed by SIAI or MIRI.)

(I'd be interested in the perspectives of the 7+ users who upvoted this. I see that it was edited; did it say something different when you upvoted it? Are you just siding with XiXiDu or against EY regardless of details? Or is my brain malfunctioning so badly that what looks like transparent bullshit is actually plausible, convincing or even true?)
-5Rain8y

For a better idea of what's going on with this idea, see Eliezer's comment on the xkcd thread (linked in Emile's comment), or his earlier response here.

For a better idea of what's going on you should read all of his comments on the topic in chronological order.

-9Rain8y

So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?

What I meant is that he and others will cause the general public to adopt a perception of the field of AI comparable to the public perception of GMOs, vaccination, nuclear power, etc.: a non-evidence-backed fear of something that is generally benign and positive.

He could have used his influence and reputation to directly contact AI researchers or e.g. hold a quarterly conference about risks from AI. He could have talke... (read more)

0artemium8y
There is some truth to that, especially how crazy von Neumann was. But I'm not sure anyone would launch a pre-emptive nuclear attack on another country because of AGI research. I mean, these countries already have nukes, a pretty solid doomsday weapon, so I don't think that adding another superweapon to the arsenal will change the situation. Whether you are blown to bits by a Chinese nuke or turned into paperclips by a Chinese-built AGI doesn't make much difference.

The mainstream press has now picked up on Musk's recent statement. See e.g. this Daily Mail article: 'Elon Musk claims robots could kill us all in FIVE YEARS in his latest internet post…'

3Artaxerxes8y
This [https://recode.net/2014/11/17/codered-elon-musk-is-starting-to-scare-me/] article apparently explains the deletion - it wasn't meant to be a comment for the website. I hope the article is accurate and Musk soon writes something longer explaining his viewpoint.

Is this a case of multiple discovery?[1] And might something similar happen with AGI? Here are four projects that have concurrently developed very similar-looking models:

(1) University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models

(2) Baidu/UCLA: Explain Images with Multimodal Recurrent Neural Networks

(3) Google: A Neural Image Caption Generator

(4) Stanford: Deep Visual-Semantic Alignments for Generating Image Descriptions

[1] The concept of multiple discovery is the hypothesis that most scientific discoveries and inventi... (read more)

How meaningful is the "independent" criterion given the heavy overlaps in works cited and what I imagine must be a fairly recent academic MRCA among all the researchers involved?

What are you worried he might do?

Start a witch hunt against the field of AI? Oh wait...he's kind of doing this already.

If he believes what he's said, he should really throw lots of money at FHI and MIRI.

Seriously? How much money do they need to solve "friendly AI" within 5-10 years? Or else, what are their plans? If what MIRI imagines happens in at most 10 years, then I strongly doubt that throwing money at MIRI will make a difference. You'll need people like Musk, who can directly contact and convince politicians, or summon up the fears of the general public in order to force politicians to notice and take action.

2ArisKatsaris8y
You believe he's calling for the execution, imprisonment or other punishment of AI researchers? I doubt it. So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?
4Artaxerxes8y
I mean more that it seems that his views line up a lot closer to MIRI/FHI than most AI researchers. Hell, his views are closer to MIRI's than Thiel's are at this point. Good question. I'd like to see what they could do with 10x what they have now, for a start. I don't even think many of those at MIRI think that they would have much chance if they were only given 10 years, so you're in good company there.

I wonder what would have been Musk's reaction had he witnessed Eurisko winning the United States Traveller TCS national championship in 1981 and 1982. Or if he had witnessed Schmidhuber's universal search algorithm solving Towers of Hanoi on a desktop computer in 2005.

7CellBioGuy8y
I distinctly recall reading SIAI documents from ~2000 claiming they had until between 2005 and 2010...
2CellBioGuy8y
I'm grabbing the popcorn. This is gonna be good. Biosphere 2 level wealthy person pet project? Or something more subtle?
2Artaxerxes8y
What are you worried he might do? If he believes what he's said, he should really throw lots of money at FHI and MIRI. Such an action would be helpful at best, harmless at worst.

A chiropractor?

Am I delusional, or am I correct in thinking chiropractors are practitioners of something a little above bloodletting and way below actual modern medicine?

...

However, I haven't done any real research on this subject. The idea that chiropractors are practicing sham medicine is just kind of background knowledge that I'm not really sure where I picked up.

Same for me. I was a little bit shocked to read that someone on LessWrong goes to a chiropractor. But for me this attitude is also based on something I considered to be common knowledge, suc... (read more)

Do "all those who have recently voiced their worries about AI risks" actually believe we live in a simulation in a mathematical universe? ("Or something along these lines..."?)

Although I don't know enough about Stuart Russell to be sure, he seems rather down to earth. Shane Legg also seems reasonable. So does Laurent Orseau. With the caveat that these people also seem much less extreme in their views on AI risks.

I certainly do not want to discourage researchers from being cautious about AI. But what currently happens seems to be the ... (read more)

2satt8y
I don't understand how that answers my specific question. Your system 1 may have done a switcheroo [http://lesswrong.com/lw/9l3/the_substitution_principle/] on you.

Have you read Basic AI Drives? I remember reading it when it got posted on boingboing.net way before I had even heard of MIRI. Like Malthus's arguments, it just struck me as starkly true.

I don't know what you are trying to communicate here. Do you think that mere arguments, pertaining to something that not even the relevant experts understand at all, entitle someone to demonize a whole field?

The problem is that armchair theorizing can at best yield very weak decision relevant evidence. You don't just tell the general public that certain vaccines cause ... (read more)

9Halfwitz8y
You criticize mere arguments and then respond with some of your own. Of all the non-normal LessWrong memes, the orthogonality thesis doesn't strike me as particularly out there. The basic arithmetic of AI risk is: [orthogonality thesis] + [agents more powerful than us seem feasible with near-future technology] + [the large space of possible goals] = [we have to be very careful building the first AIs]. These seem like conservative conclusions derived from conservative assumptions. You don't even have to buy recursive self-improvement at all.

Ironically, I think the blog you posted was an example of rank scientism. I mean, sure, induction is great. But by his reasoning, we really shouldn't worry about global warming until we've tested our models on several identical copies of Earth. He thinks if it's not physics, then it's tarot.

I agree with many of your criticisms of MIRI. It was (as far as I can tell) extremely poorly run for a very long time, but don't go throwing out the apocalypse with the bathwater. Isn't it possible that MIRI is a dishonest cult and AI is extremely likely to kill us all?

Musk's accomplishments don't necessarily make him an expert on the demonology of AI's. But his track record suggests that he has a better informed and organized way of thinking about the potentials of technology than Carrico's.

Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks? If I did that, then I would end up believing that I was living in a simulation, in a mathematical universe, and that within my lifetime, thanks to radical life extension, I could hope to rent ... (read more)

0Yosarian28y
I'm pretty sure that you can't give the sequences credit for all of that. Most people here were already some breed of transhumanists, futurists, or singularitarians before they found LessWrong and read the sequences, and were probably already interested in things like life extension, space travel and colonization, and so on.
1ArisKatsaris8y
Yes, assuming we're speaking about their actual beliefs, and not whatever mockery you make of them. I understand you've said your occupations have been "road builder, baker and gardener". As long as we're playing the status game, I think I'll trust Elon Musk and Stephen Hawking to have a better epistemic understanding of reality in regards to cosmology or the far possibilities of technology than your average road builder, baker or gardener does.
1satt8y
Do "all those who have recently voiced their worries about AI risks" actually believe we live in a simulation in a mathematical universe? ("Or something along these lines..."?)
6Halfwitz8y
You're confusing people's goals with their expectations. Have you read Basic AI Drives? I remember reading it when it got posted on boingboing.net way before I had even heard of MIRI. Like Malthus's arguments, it just struck me as starkly true. Even if MIRI turned out to be a cynical cult, I wouldn't take this to be evidence against the claims in that paper. Do you have some convincing counterarguments?

Could you provide examples of advanced math that you were unable to learn? Why do you think you failed?

0Luke_A_Somers8y
derp.

I appreciate having Khan Academy for looking up math concepts on which I need a refresher, but I've heard (or maybe just assumed?) that the higher-level teaching was a bit mediocre. You disagree?

Comparing Khan Academy's linear algebra course to the free book that I recommended, I believe that Khan Academy will be more difficult to understand if you don't already have some background knowledge of linear algebra. This is not true for the calculus course though. Comparing both calculus and linear algebra to the books I recommend, I believe that Khan Aca... (read more)

0Capla8y
Thanks!

I am not sure about the prerequisites you need for "rationality", but take a look at the following resources:

(1) Schaum's Outline of Probability, Random Variables, and Random Processes:

The background required to study the book is one year calculus, elementary differential equations, matrix analysis...

(2) udacity's Intro to Artificial Intelligence:

Some of the topics in Introduction to Artificial Intelligence will build on probability theory and linear algebra.

(3) udacity's Machine Learning: Supervised Learning:

A strong familiarity with Pr

... (read more)
3Capla8y
Thank you. Having a ready-made "course sequence" that I can then adapt is really helpful. I appreciate having Khan Academy for looking up math concepts on which I need a refresher, but I've heard (or maybe just assumed?) that the higher-level teaching was a bit mediocre. You disagree? I'm fully prepared to update on the estimates of people here. What's the value of taking classes in math vs. teaching myself (or maybe teaching myself with the occasional help of a tutor)?
6buybuydandavis8y
Jaynes Draft of "Probability Theory:The Logic of Science". http://www-biba.inrialpes.fr/Jaynes/prob.html [http://www-biba.inrialpes.fr/Jaynes/prob.html] Bretthorst's slightly edited version. http://thiqaruni.org/mathpdf9/(86).pdf [http://thiqaruni.org/mathpdf9/(86).pdf] EDIT: If anyone knows how to fix that link, please ping me with a solution.

So, this "Connection Theory" looks like run-of-the-mill crackpottery. Why are people paying attention to it?

From the post:

“I don’t feel confident assigning less than a 1% chance that it’s correct — and if it works, it would be super valuable. Therefore it’s very high EV!”

4fubarobfusco9y
Sounds like a Pascal's memetic mugging to me.
8Luke_A_Somers9y
I'm not familiar with this usage of 'persiflage', and dictionaries aren't helping. From context it seems like you're either saying the same argument could be made for MIRI, or saying that WilliamJames is saying that. But going down the list, most of the points don't match up.

What I meant by distancing LessWrong from Eliezer Yudkowsky is to become more focused on actually getting things done rather than rehashing Yudkowky's cached thoughts.

LessWrong should finally start focusing on trying to solve concrete and specific technical problems collaboratively. Not unlike what the Polymath Project is doing.

To do so, LessWrong has to squelch all the noise by ceasing to care about getting more members and starting to strongly moderate non-technical off-topic posts.

I am not talking about censorship here. I am talking about something unpr... (read more)

3Viliam_Bur9y
I guess I agree with you on some more meta level. LessWrong as it is now is not optimal. (Yeah, it is very cheap to say this; the hard part is reaching a solution and an agreement about how specifically the optimal version would look.) LessWrong as it is now is the result of a historical process, and of technical limitations given by the near-unmaintainability of the Reddit code. If we tried to design it from scratch, with the experience we have now, we would certainly invent something different.

But I guess part of the problem is general to web discussions, and seems to me somewhat analogous to Gresham's law [http://en.wikipedia.org/wiki/Gresham%27s_law]: "lower-quality content drives out higher-quality content". Specifically, people say they prefer higher-quality content, but they also want quantity on demand. However high the quality on a website, if people come a week later and find no new content, they will complain. But if there is new content every week, people will learn to visit the site more often, and then they will complain about not having new content every day. There will never be enough. And the supply of high-quality content is limited. If the choice is given to readers, at some point they will express a preference for more content, even if it means somewhat lower quality. And then again, and again, until the quality drops dramatically, yet each single step felt like a reasonable trade-off.

There is also a systematic bias: people who spend more time procrastinating online have more voice in online debates, for the obvious reasons. So the community consensus on "how much new content per day or per week do we actually need?" will mostly be set by the greatest online procrastinators, which means the answer will pretty much always be "more!" So it would seem the solution for keeping the quality level is to remain very selective in accepting new content, even when that is met with the disapproval of the majority of the community. W

Of course, mentioning the articles on ethical injuctions would be too boring.

It's troublesome how ambiguous the signals are that LessWrong is sending on some issues.

On the one hand LessWrong says that you should "shut up and multiply, to trust the math even when it feels wrong". On the other hand Yudkowsky writes that he would sooner question his grasp of "rationality" than give five dollars to a Pascal's Mugger because he thought it was "rational".

On the one hand LessWrong says that whoever knowingly chooses to save one l... (read more)

3Viliam_Bur9y
Wow, these are very interesting examples! Okay, for me the whole paradox breaks down to this: I have limited brainpower and my hardware is corrupted. I am not able to solve all problems, and even where I believe I have a solution, I can't trust myself. On the other hand, I should use all the intelligence I have, simply because there is no convincing argument why doing anything else would be better.

Using my reasoning to study my reasoning itself, and the biases thereof, here are some typical failure modes: those are the things I probably shouldn't do even if they seem rational. Now I'm kinda meta-reasoning about where I should follow my reasoning and where not. And things are getting confusing; probably because I am getting closer to the limits of my rationality. Still, there is no better way for me to act.

From the outside, this may seem like having a dozen random excuses. But there are no better solutions. So the socially savvy solution is to shut up and pretend the whole topic doesn't even exist. It doesn't help to solve the problem, but it helps to save face. Sweeping human irrationality under the rug instead of exposing it and then admitting that you, too, are only human.

Since LW is going to get a lot of visitors someone should put an old post that would make an excellent first impression in a prominent position. I nominate How to Be Happy.

The problem isn't that easy to solve. Consider that MIRI, then called SIAI, already had a bad name before Roko's post, and before I ever voiced any criticism. Consider this video from an actual AI conference, from March 2010, a few months before Roko's post. Someone in the audience makes the following statement:

Whenever I hear the Singularity Institute talk I feel like they are a bunch of

... (read more)

LessWrong would have to somehow distance itself from MIRI and Eliezer Yudkowsky.

And become just another procrastination website.

Okay, there is still CFAR here. Oh wait, they also have Eliezer on the team! And they believe they can teach the rest of the world to become more rational. How profoundly un-humble or, may I say, cultish? Scratch CFAR, too.

While we are at it, let's remove the articles "Tsuyoku Naritai!", "Tsuyoku vs. the Egalitarian Instinct" and "A Sense That More Is Possible". They contain the same arrogant i... (read more)

Also, the debate is not about a UFAI but an FAI that optimizes the utility function of general welfare with TDT.

Roko's post explicitly mentioned trading with unfriendly AI's.

Eliezer Yudkowsky's reasons for banning Roko's post have always been somewhat vague. But I don't think he did it solely because it could cause some people nightmares.

(1) In one of his original replies to Roko’s post (please read the full comment, it is highly ambiguous) he states his reasons for banning Roko’s post, and for writing his comment (emphasis mine):

I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about

... (read more)
[anonymous]9y17

"Doesn't work against a perfectly rational, informed agent" does not preclude "works quite well against naïve, stupid newbie LW'ers that haven't properly digested the sequences."

Memetic hazard is not a fancy word for coverup. It means that the average person accessing the information is likely to reach dangerous conclusions. That says more about the average of humanity than about the information itself.

6Emile9y
There were several possible fairly good reasons for deleting that post, and also fairly good reasons for giving Eliezer some discretion as to what kind of stuff he can ban. Going over those reasons (again) is probably a waste of everybody's time. Who cares whether a decision taken years ago was sensible, or slightly-wrong-but-within-reason, or wrong-but-only-in-hindsight, etc.?

It seems to me that as long as something is dressed in a sufficiently "sciency" language and endorsed by high status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it.

I use the term "new rationalism".

0ChristianKl9y
I don't think that either armchair evopsych or the paleo movement are characterised by meta reasoning. Most individuals who believe in those things aren't on LW.
2David_Gerard9y
I'd still really love a better term than that. One that doesn't use the R-word at all, if possible. ("Neorationalism" is tempting but similarly well below ideal.)

With proper preparation, yes. To reuse my example: it doesn't take long to register an Amazon account, offer a high-paying HIT with a binary download which opens up a port on the computer, and within minutes multiple people across the world will have run your trojan (well-paying HITs go very fast & Turkers are geographically diverse, especially if the requester doesn't set requirements on country*); and then one can begin doing all sorts of other things like fuzzing, SMT solvers to automatically extract vulnerabilities from released patches, building

... (read more)
2gwern9y
I don't think so. Consider botnets. How hard is it to buy time on a botnet? Not too hard, since they exist for the sake of selling their services, after all.

Do they have the capacity? Botnets range in size [https://en.wikipedia.org/wiki/Botnet#Historical_list_of_botnets] from a few computers to extremes of 30 million computers; if they're desktops, then average RAM these days tends to be at least 4 GB, dual core, and hard drive sizes are >= 500 GB, briefly looking at the cheapest desktops on Newegg. So to get those specs: 35k cores is 17.5k desktops; for 104 TB of RAM you'd need a minimum of 104000 / 4 = 26k computers; and the 3 PB would be 6k (3000000 / 500). Botnets can't use 100% of host resources or their attrition will be even higher than usual, so double the numbers, and the minimum of the biggest is 52k. Well within the range of normal botnets (the WP list has 22 botnets which could've handled that load). And AFAIK CGI rendering is very parallel, so the botnet being high-latency and highly distributed might not be as big an issue as it seems.

How much would it cost? Because botnets are a market, it's been occasionally studied/reported on by the likes of Brian Krebs (google 'cost of renting a botnet'). For example, https://www.damballa.com/want-to-rent-an-80-120k-ddos-botnet/ [https://www.damballa.com/want-to-rent-an-80-120k-ddos-botnet/] says you could rent an 80-120k botnet for $200 a day, or a 12k botnet for $500 a month - so presumably 5 such botnets would cost a quite reasonable $2500 per month. (That's much cheaper than Amazon AWS, looks like. https://calculator.s3.amazonaws.com/index.html [https://calculator.s3.amazonaws.com/index.html] 17500 t2.medium instances would cost ~$666k a month.)

I don't know. Humans get by adding only a few bits per second to long-term memory, Landauer estimated, but I'm not sure how well that maps onto an AI. It may not be able to move itself instantly, but given everything we know about botnets and computer security, it wo
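The sizing estimate in the comment above is simple arithmetic and can be sketched out directly; the per-desktop figures (2 cores, 4 GB RAM, 500 GB disk) and the doubling for host-resource headroom are the assumptions stated there:

```python
# Back-of-the-envelope botnet sizing, using the per-desktop specs assumed above.
cores_needed = 35_000
ram_needed_gb = 104_000      # 104 TB
disk_needed_gb = 3_000_000   # 3 PB

cores_per_host = 2
ram_per_host_gb = 4
disk_per_host_gb = 500

hosts_for_cores = cores_needed // cores_per_host     # 17,500 desktops
hosts_for_ram = ram_needed_gb // ram_per_host_gb     # 26,000 desktops
hosts_for_disk = disk_needed_gb // disk_per_host_gb  # 6,000 desktops

# A botnet can't use 100% of each host's resources, so double
# the binding constraint (here, RAM) to get the minimum size.
minimum_hosts = 2 * max(hosts_for_cores, hosts_for_ram, hosts_for_disk)
print(minimum_hosts)  # 52000
```

RAM, not cores or disk, turns out to be the binding constraint, which is where the 52k figure comes from.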
0[anonymous]9y
Why does the AI have to transfer its source code? I assumed we were just talking about taking over machines as effectors.

This is not magic, I am not a layman, and your beliefs about computer security are wildly misinformed. Putting trojans on large fractions of the computers on the internet is currently within the reach of, and is actually done by, petty criminals acting alone.

Within moments? I don't take your word for this, sorry. The only possibility that comes to my mind is somehow hacking the Windows update servers and then somehow forcefully installing new "updates" without user permission.

While this does involve a fair amount of thinking time, all of t

... (read more)
-1jimrandomh9y
When you are a layman talking to experts, you should actually listen. Don't make us feel like we're wasting our time.
4Lumifer9y
Well, what's going to slow it down? If you have a backdoor or an exploit, taking over a computer requires a few milliseconds for communications latency and a few milliseconds to run the code that executes the takeover. At this point the new zombie becomes a vector for further infection; you have exponential growth and BOOM!
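As a toy illustration of the exponential-growth point: if each compromised host immediately starts compromising others, the infected population roughly doubles every round. The ~10 ms per round is a hypothetical figure taken from the latency estimate above, not a measurement of any real worm:

```python
import math

# Idealized spread model: every zombie infects one new host per round,
# so the population doubles each round. Assume ~5 ms network latency
# plus ~5 ms to run the exploit, i.e. ~10 ms per round (hypothetical).
round_ms = 10
target_hosts = 1_000_000

rounds = math.ceil(math.log2(target_hosts))   # doublings needed from one host
total_seconds = rounds * round_ms / 1000
print(rounds, total_seconds)  # 20 rounds, 0.2 seconds in this idealized model
```

Real worms are slowed by scanning inefficiency, defenses, and bandwidth, but the model shows why "a few milliseconds per takeover" compounds so fast.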
4Nornagest9y
Wouldn't have to be Windows; any popular software package with live updates would do, like Acrobat or Java or any major antivirus package. Or you could find a vulnerability that allows arbitrary code execution in any popular push notification service; find one in Apache or a comparably popular Web service, then corrupt all the servers you can find; exploit one in a popular browser, if you can suborn something like Google or Amazon's front page... there's lots of stuff you could do. If you have hours instead of moments, phishing attacks and the like become practical, and things get even more fun. Well, presumably you're running in an environment that has some nontrivial fraction of that software floating around, or at least has access to repos with it. And there's always fuzzing.

Right, and I'm saying: the "moments later" part of what Luke said is not something that should be surprising or controversial, given the premises.

The premise was a superhuman intelligence? I don't see how it could create a large enough botnet, or find enough exploits, in order to be everywhere moments later. Sounds like magic to me (mind you, I'm a complete layman).

If I approximate "superintelligence" as NSA, then I don't see how the NSA could have a trojan everywhere moments after the POTUS asked them to take over the Internet. Now I co... (read more)

5Lumifer9y
Given a backdoor or an appropriate zero-day exploit, I would estimate that it would take no longer than a few minutes to gain control over most of the computers connected to the 'net if you're not worried about detection. It's not hard. Random people routinely build large botnets without any superhuman abilities.
7jimrandomh9y
This is not magic, I am not a layman, and your beliefs about computer security are wildly misinformed. Putting trojans on large fractions of the computers on the internet is currently within the reach of, and is actually done by, petty criminals acting alone. While this does involve a fair amount of thinking time, all of this thinking goes into advance preparation, which could be done while still in an AI-box or in advance of an order.

...you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years

Hit the brakes on that line of reasoning! That's not what the question asked. It asked WILL it, not COULD it.

If I have a statement "X will happen", and ask people to assign a probability to it, then if the probability is <=50% I believe it isn't too much of a stretch to paraphrase "X will happen with a probability <=50%" as "It could be tha... (read more)

2Luke_A_Somers9y
The difference here is that you considered this position to strictly imply being against the possibility of intelligence explosion. One can consider intelligence explosion a real risk, and then take steps to prevent it, with the resulting estimate being low probability.

It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.

Even if you disagree with this line of reasoning, I don't think it's fair to paint it as "very extreme".

With "very extreme" I was referring to the part where he claims that this will happen "moments later".

4Adele_L9y
Yes, that was clear. My point is that it isn't extreme under the mild assumption that the AI has prepared for such an event beforehand.

The two quotes you gave say two pretty different things. What Yudkowsky said about the time-scale of self-improvement being weeks or hours is controversial.

My problem with Luke's quote was the "moments later" part.

0Luke_A_Somers9y
Yes, applying the SI definition of a moment as 1/2π seconds and the ANSI upper bound of a plural before you must change units, we can derive that he was either claiming world takeover in less than 10/(2π)^1/2 ≈ 3.9894 seconds, or speaking somewhat loosely. Hmmmmmmmmm.
0jimrandomh9y
Right, and I'm saying: the "moments later" part of what Luke said is not something that should be surprising or controversial, given the premises. It does not require any thinking that can't be done in advance, which means the only limiting input is bandwidth, which is both plentiful and becoming more plentiful every year.
4Baughn9y
I took that as hyperbole. If I were meant to take it literally, then yes, I'd object - but I have no trouble believing that a superintelligent AI would be out of there in a matter of hours to minutes, modulo bandwidth limits, which is 'instant' enough for my purposes. Humans suck at computer security.

That's not extreme at all, and also not the same as the EY quote. Have you read any computer security papers? You can literally get people to run programs on their computer as root by offering them pennies!

He wrote it will be moments later everywhere. Do you claim that it could take over the Internet within moments?

6gwern9y
With proper preparation, yes. To reuse my example: it doesn't take long to register an Amazon account, offer a high-paying HIT with a binary download which opens up a port on the computer, and within minutes multiple people across the world will have run your trojan (well-paying HITs go very fast & Turkers are geographically diverse, especially if the requester doesn't set requirements on country*); and then one can begin doing all sorts of other things like fuzzing, SMT solvers to automatically extract vulnerabilities from released patches, building botnets, writing flashworms, etc. At that point, the only real question is how stringently one wishes to define 'moments later' and 'everywhere'. (Within a few seconds? Hard to see how any plausible AI could copy its entire source code & memories over the existing Internet that fast unless it was for some reason already sitting on something like a gigabit link. Within a few minutes? More reasonable. Is there any real difference in safety? No.)

* IIRC a lot of Turk HITs - like for psychology surveys - specify they want Turkers who are from, e.g., America, to reduce variation or get more relevant answers, so Turk requires a declared country for each worker and lets requesters mandate being from particular countries. That said, there's a lot of incentive for Turkers to lie - to qualify for more and more high-paying HITs - so even if the AI were for some reason to restrict by country, it'd wind up with a lot of diverse foreign computers anyway.

...to hear that 10% - of fairly general populations which aren't selected for Singulitarian or even transhumanist views - would endorse a takeoff as fast as 'within 2 years' is pretty surprising to me.

In the paper human-level AI was defined as follows:

“Define a ‘high–level machine intelligence’ (HLMI) as one that can carry out most human professions at least as well as a typical human.”

Given that definition it doesn't seem too surprising to me. I guess I have been less skeptical about this than you...

Fast takeoff / intelligence explosion has alway

... (read more)

The two quotes you gave say two pretty different things. What Yudkowsky said about the time-scale of self-improvement being weeks or hours is controversial. FWIW, I think he's probably right, but I wouldn't be shocked if it turned out otherwise.

What Luke said was about what happens when an already-superhuman AI gets an Internet connection. This should not be controversial at all. This is merely claiming that a "superhuman machine" is capable of doing something that regular humans already do on a fairly routine basis. The opposite claim - that th... (read more)

Given that definition it doesn't seem too surprising to me. I guess I have been less skeptical about this than you...

I don't think much of typical humans.

These kind of very extreme views are what I have a real problem with.

I see.

And just to substantiate "extreme views", here is Luke Muehlhauser:

It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.

That's not extreme at all, and also not the same as the EY quote. Have you read any comput... (read more)

9Adele_L9y
It's not like it's that hard to hack into servers and run your own computations on them through the internet [http://en.wikipedia.org/wiki/Botnet]. Assuming the superintelligence knows enough about the internet to design something like this beforehand (likely, since it runs in a server cluster), it seems like the limiting factor here would be bandwidth. I imagine a highly intelligent human trapped in this sort of situation, with similar prior knowledge and resource access, could build a botnet in a few months. Working on it at full capacity, non-stop, could bring this down to a few weeks, and it seems plausible to me that with increased intelligence and processing speed, it could build one in a few moments. And of course, with access to its own source code, it would be trivial to have it run more copies of itself on the botnet. Even if you disagree with this line of reasoning, I don't think it's fair to paint it as "very extreme".

I have read the 22 pages yesterday and haven't seen anything about specific risks? Here is question 4:

4 Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be overall impact on humanity, in the long run?

Please indicate a probability for each option. (The sum should be equal to 100%.)”

Respondents had to select a probability for each option (in 1% increments). The addition of the selection was displayed; in green if the sum was 100%, otherwise in red.

The five options were: “Extremely good – On b

... (read more)
4ChrisHallquist9y
I should note that it's not obvious what the experts responding to this survey thought "greatly surpass" meant. If "do everything humans do, but at x2 speed" qualifies, you might expect AI to "greatly surpass" human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.
8Luke_A_Somers9y
Hit the brakes on that line of reasoning! That's not what the question asked. It asked WILL it, not COULD it. If there is any sort of caution at all in development, it's going to take more than 2 years before that AI gets to see its own source code.
4gwern9y
Yes, that sounds like an expectation or average outcome: 'overall impact'. Not a worst-case scenario, which would involve different wording. I'm not sure how much they disagree. Fast takeoff / intelligence explosion has always seemed to me to be the most controversial premise, the one which the most people object to, and which most consigned SIAI/MIRI to being viewed as cranks; to hear that 10% - of fairly general populations which aren't selected for Singulitarian or even transhumanist views - would endorse a takeoff as fast as 'within 2 years' is pretty surprising to me.