This is directed at those who agree with SIAI but are not doing everything they can to support its mission.

Why are you not doing more?

Comments in which people proclaim that they have contributed money to SIAI are upvoted 50 times or more. 180 people voted for 'unfriendly AI' as the most fearsome risk.

If you are one of those people and are not fully committed to the cause, I am asking you, why are you not doing more?


Less Wrong needs better contrarians.

Thomas:

A contrarian is never good enough. When he is, he is no longer a contrarian. Or you've become one of his kind.

A contrarian is never good enough. When he is, he is no longer a contrarian. Or you've become one of his kind.

I don't believe you. Is it really true that it is not possible to be a contrarian and be respected?

You can be respected for properties other than your contrarianism, if all those other attributes prevail against your funny belief, whatever that was.

You don't respect somebody who claims that some centuries were artificially inserted into the official history but never actually happened. If that is all you know about him, you can hardly respect him. Unless you are inclined to believe it, too.

When you learn it is Kasparov, you probably still think highly of him.

See

Let's consider "only possible to be respected for completely different fields" to be a falsification of my position. I'll demarcate the kind of respect required as "just this side of Will_Newsome". I can't quite consider my respect for Will to fit into specific respect within the LessWrong namespace, due to disparities in context-relevant belief that are beyond a threshold. But I can certainly imagine a slightly less extreme contrarian being respect-worthy even at the local level.

I think part of the problem with identifying contrarians who can be respected is that people who disagree because they are correct, or because they have thought well but differently about a specific issue, rather than being merely contrary by nature, will seldom also disagree on most other issues. We then end up with many people who are contrarian about a few things but mainstream about most, and those people don't usually get called contrarians. If they did, I could claim to be one myself.


It took all of sixty seconds (starting from the link in your profile) to find:

Whenever I'm bored or in the mood for word warfare, I like to amuse myself by saying something inflammatory at a few of my favorite blogospheric haunts. -- Sociopathic Trolling by Sean the Sorcerer

Please leave.

So once again you said that you'd log out and try not to come back for years, and yet you return less than a day later with yet another post filled with implicit bashings of the LessWrong community. Sometimes it may be hard not to respond to another's words, but is it truly so hard not to make new posts?

If you're going to talk about the discrepancy between stated thoughts and actual deeds, why don't you ask about your own?

tl+troll;dr

It's surprisingly hard to motivate yourself to save the world.

Edit: highly related comment by Mitchell Porter.

The only thing I've done recently is send money to the Singularity Institute. I did, however, give birth to and raise a son who is dedicated to saving the world. I'm contemplating changing my user name to Sarah Connor. :)

Congratulations!

I'm at that point in life where I'm thinking about whether I should have kids in the future. It's good to know there are people who have managed to reproduce and still find money to donate.

Carl Shulman has convinced me that I should do nothing directly (in terms of labor) on the problem of AI risk, and should instead become successful elsewhere and then direct resources toward the problem as I am able.

However, I believe I should 1) continue to educate myself on the topic, 2) try to become a better rationalist so that when I do have resources I can direct them effectively, 3) work toward being someone who can gain access to more resources, and 4) find ways to better optimize my lifestyle.

At one point I seriously considered running off to San Fran to be in the thick of things, but I now believe that would have been a strictly worse choice. Sometimes the best thing you can do is to do what you already do well and hope to direct the proceeds towards helping people. Even when it feels like this is selfish, disengaged, or remote.

Right now I'm on a career path that will lead to me making lots of money with reasonable probability. I intend to give at least 10% of my income to existential risk reduction (FHI or SI, depending on the current finances of each) for the foreseeable future.

I wish I could do more. I'm probably smart/rational enough to contribute to FAI work directly in at least some capacity. But while that work is extremely important, it doesn't excite me, and I haven't managed to self-modify in that direction yet, though I'm working on it. Historically, I've been unable to motivate myself to do unexciting things for long periods of time (and that's another self-modification project).

I'm not doing more because I am weak. This is one of the primary motivations for my desire to become stronger.

Also, since the title and the post seem to be asking completely different questions, I'll answer the other question too.

  • Donating (not much though - see my list of reasons)
  • Started Toronto LW singularity discussion group
  • Generally I try to find time to understand the issues as best I can
  • I hang out on LW and focus particularly on the AI discussions

No significant accomplishments so far though.

I think it may be time for Less Wrongers to begin to proactively, consciously ignore this troll. Hard.

My reasons, roughly sorted with most severe at top:

  • Personal reasons which I don't want to disclose right now
  • Akrasia
  • Communities of like-minded people are either hard to find or hard to get into
  • Not knowing what I should be doing (relates to "communities")
  • Finding time (relates to "personal" and "akrasia")
Rain:

Because I am soooooo lazy.

Seriously. I've got a form of depression which manifests as apathy.

Particularly ironic since I'm the one linked to as an example of doing a lot. Though I got more than twice as many upvotes for a pithy quote, which has also been the top comment on LessWrong for more than a year.

For what it's worth, I remembered you especially because of this comment of yours, which reflects my thoughts on that matter completely, and also because of a comment by Eliezer_Yudkowsky which I cannot find right now. There he says something like: "nobody here should feel too good about themselves, because they spend only an insignificant fraction of their income; user Rain is one of the few exceptions, he is allowed to." That was in the aftermath of the Singularity Challenge. (In the event that my account of said comment is grossly wrong, I would like to apologize in advance to everybody who feels wronged by it.)

Rain:

You're likely thinking of this comment.

This is a very important question, and one I have wanted to ask Lesswrongians for a while also.

Personally, I am still not entirely convinced by the general idea, and I still have that niggling feeling that keeps me very cautious about doing something about this.

This is because of the magnitude of the importance of this idea, and how few people are interested in it. Yes, this is a fallacy, but damnit, why not!?

So I bring attention to Less Wrong and the technological singularity at least as much as I can. I want to know if this truly is as important as it supposedly is. I am gathering information, and withholding making a decision for the moment (perhaps irrationally).

But I genuinely think that if I am eventually convinced beyond some threshold, I will start being much more proactive about this matter (or at least I hope so). And for those people in a similar boat to me, I suggest you do the same.

If you are one of those people and are not fully committed to the cause, I am asking you, why are you not doing more?

To some extent because I am busy asking myself questions like: What are the moral reasons that seem as if they point toward fully committing myself to the cause? Do they actually imply what I think they imply? Where do moral reasons in general get their justification? Where do beliefs in general get their justification? How should I act in the presence of uncertainty about how justification works? How should I act in the presence of structural uncertainty about how the world works (both phenomenologically and metaphysically)? How should I direct my inquiries about moral justification and about the world in a way that is most likely to itself be justified? How should I act in the presence of uncertainty about how uncertainty itself works? How can I be more meta? What causes me to provisionally assume that being meta is morally justified? Are the causes of my assumption normatively justifiable? What are the properties of "meta" that make it seem important, and is there a wider class of concepts that "meta" is an example or special case of?

(Somewhat more object-level questions include:) Is SIAI actually a good organization? How do I determine goodness? How do baselines work in general? Should I endorse SIAI? What institutions/preferences/drives am I tacitly endorsing? Do I know why I am endorsing them? What counts as endorsement? What counts as consent? Specifically, what counts as unreserved consent to be deluded? Is the cognitive/motivational system that I have been coerced into, or have engineered, itself justifiable as a platform for engaging in inquiries about justification? What are local improvements that might be made to said cognitive/motivational system? Why do I think those improvements wouldn't have predictably-stupid-in-retrospect consequences? Are the principles by which I judge the goodness or badness of societal endeavors consistent with the principles by which I judge the goodness or badness of endeavors at other levels of organization? If not, why not? What am I? Where am I? Who am I? What am I doing? Why am I doing it? What would count as satisfactory answers to each of those questions, and what criteria am I using to determine satisfactoriness for answers to each of those questions? What should I do if I don't have satisfactory answers to each of those questions?

Et cetera, ad infinitum.

I'm loving the fact that "How do I determine goodness?" and "What counts as consent?" are, in this context, "somewhat more object-level questions."

I don't know what you believe I can do. I'm currently studying for the actuary exam on probability, so that I can possibly get paid lots of money to learn what we're talking about. (The second exam pertains to decision theory.) This career path would not have occurred to me before Less Wrong.

laziness

I'm still in the exploration phase of the exploration/exploitation dichotomy, in which information is more important than short-term utility gains. Donating to SIAI is not expected to yield much information.

Thomas:

I still maintain that the risks caused by the absence of a superintelligence are greater than those induced by the presence of one.

So, if you want to do something good, maybe you should act according to this probable fact.

The question for me is: what am I doing to bring about the technological singularity?

Not enough, that's for sure.

Dmytry:

Nothing. The arguments for any particular course of action have very low external probabilities (which I assign when I see equally plausible but contradictory arguments), resulting in very low expected utilities, even if the bad AI is presumed to do some drastically evil stuff versus the good AI doing some drastically nice stuff. There are many problems for which efforts have a larger expected payoff.

edit:

I do subscribe to the school of thought that irregular connectionist AIs (neural networks, brain emulations of various kinds, and the like) are the ones least likely to engage in a highly structured effort like maximizing some scary utility function to the destruction of everything else. I'm very dubious that such an agent could have foresight good enough to decide that humans are not worth preserving, as part of a general "gather more interesting information" heuristic.

The design space near FAI, meanwhile, is a minefield of monster AIs, and a bugged FAI represents a worst-case scenario. There is a draft of my article on the topic. Note: I am a software developer, and I am very sceptical about our ability to write an FAI that is not bugged, as well as about our ability to detect substantial problems in an FAI's goal system, since regardless of its goal system the FAI will do all it can to appear to be working correctly.

There is a draft of my article on the topic.

I can't see this draft. I think only those who write them can see drafts.

Dmytry:

Hmm, weird. I thought the hide button would hide it from the public, and the un-hide button would unhide it. How do I make it public as a draft?

Just post it to Discussion and immediately use "Delete". It'll still be readable and linkable, but not seen in the index.

Dmytry:

Hmm, can you see it now? (I of course kept a copy of the text on my computer, in case you were joking, so I do have the draft reposted as well.)

Rain:

It is now readable at the previous link, yes.

Thomas:

I am glad you are staying around, really. Although I don't agree with you, I don't agree with SIAI either, and one CAN have a discussion with you.