Hi,
I've read some of "Rationality: From AI to Zombies", and find myself worrying about unfriendly strong AI.
Reddit recently had an AMA with the OpenAI team, where "thegdb" seems to misunderstand the concerns. Another user, "AnvaMiba", provides two links (http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better and http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/) as examples of researchers not worried about unfriendly strong AI.
The arguments presented in the links above are really poor. However, I feel like I am attacking a straw man: quite possibly, www.popsci.com is misrepresenting a more reasonable argument.
Where can I find some precise, well thought out reasons why the risk of human extinction from strong AI is not just small, but for practical purposes equal to 0? I am interested in both arguments from people who believe the risk is zero, and people who do not believe this, but still attempt to "steel man" the argument.
The primary disagreement, in the steel man universe, is over urgency. If one knew that we would make AGI in 2200, then one would be less worried about solving the friendliness problem now. If one knew that we would make AGI in 2020, then one would be very worried about solving the friendliness problem now.
For many people who work on AI, it's hard to believe that it will 'just start working' at that high level of ability soon, given how over-optimistic AI proponents have been over the years and how hard it is to wring even a bit more predictive accuracy out of the algorithms on their own problems.
But if one takes the position not that one is certain it will happen soon, but that one is uncertain when it will happen, then that uncertainty allows for it happening sooner as well as later, and that means we need to do some planning for the sooner case. (That is, uncertainty does not imply it can only happen a long time from now.) This seems to be the most effective way to communicate with people who aren't worried about Strong AI.
There is also the question of what this type of research should actually look like.
I think that's an answer to "why aren't people supporting MIRI's specific research agenda?" but I see SoerenE's question as about "is there a good reason to not be worried about AI danger?"
(In the steelman universe, I think people understand that different research priorities will stem from different intuitions and skills, and think that there's space for everyone to work in the direction that suits them best.)
Hello all,
I joined in late December (so my intro post is there), but I was wondering if anyone has any advice regarding LW-style blogging?
I've been ruminating a lot on the nature of motivation, and I think it would be helpful to my fellow students/friends in real life. A lot of the concepts have their basis in rationalist ideas, so I'm finding myself restating a lot of ideas that are expressed much better here.
So, I suppose my question is, "What is the typical consensus on rationalist blogs?" and "Would anyone be willing to offer their advice on this type of thing/give feedback on what I'm thinking of posting?"
What I write here is just my personal opinion; I am not speaking for LW as a whole. Obviously. For the sake of brevity, please assume that every sentence begins with "I think..." and ends with "...unless you have good reasons to do otherwise, of course." I am saying this to avoid scaring a new member, because sometimes new members report that LW culture feels scary to them. /end-of-disclaimers
If you write your beliefs about the nature of motivation (or anything else), it is better to also explain why you think that, so that we can estimate your level of certainty and possibly find out more about the topic. If what you write is based on research, please provide links so curious people can look at the original papers. But don't worry, quoting research is not necessary for blogging on LW; if what you write is based on your personal observation, that's also okay. If you have some credentials, e.g. you are a motivation coach, feel free to briefly say something about yourself and link to your website. But not having credentials is still okay. If your conclusions are based on some specific examples you have seen, the article will be better if you write not only the conclusions but also an example or two (properly anonymized) that led you to them. This may prevent or reduce misunderstanding.
Please start with the topic you want to write about; avoid long disclaimers and introductions. Don't write an introductory article that merely describes what you are planning to write in the following articles; instead, go ahead and post the first part. Make the first part contain its own conclusion, instead of merely opening the topic and promising to come to the conclusion in one of the future parts. Essentially, write in a way that doesn't leave anything unfinished. If your topic is too long for a single article, choose a subset that can fit in one article, and write that first. Then choose another subset (now you are allowed to refer to the article you already wrote) and write that. Repeat until the topic is exhausted, or until you lose interest in writing more. Why? For the readers, it lets them discuss and vote on your articles immediately, instead of thinking "okay, this seems kinda interesting, but I haven't heard anything specific yet". For the writer, not promising anything means not creating pressure on yourself: there are only the articles you have already published, and the completely unconstrained future. (If you have read the Sequences, which I recommend, especially in the book form, you'll see they were written mostly this way.)
What is the typical consensus on rationalist blogs?
About what -- motivation? I remember reading about "hyperbolic discounting" and similar mathy stuff, but to me personally that always felt wrong... not technically incorrect, but avoiding the core of the matter, which is usually something emotional, not an equation. I would recommend reading PJ Eby, who also sometimes posts on LW (Spock's Dirty Little Secret, Improving The Akrasia Hypothesis).
Would anyone be willing to offer their advice on this type of thing/give feedback on what I'm thinking of posting?
If your post contains interesting information, I am sure people will reply.
Hello Viliam,
Thanks for the thoughts! I can see why having "self-contained" articles is helpful-- and the Sequences definitely are arranged that way.
It had occurred to me to put a few of the essays here, but I'm also considering putting up a personal blog for them-- targeted for my friends at school, who are nonrationalists.
Because of that, I've found myself writing a few basic essays to go into some core rationalist ideas (before talking about motivation), but they pale in comparison to better-explained articles here.
So I feel a little guilty about writing about both motivation and the core ideas, because I don't feel "worthy": I haven't had much experience, mainly just anecdotal stuff and introspection.
Hello everyone!
I am an engineering student from India and found Less Wrong through an article about 'tabooing' your words, which I was brought to by Slate Star Codex's description of Motte and Bailey arguments. I have long thought of my interests in cool logic problems and in the disagreements between social groups as somewhat tenuously related, and I find the content here very appealing and applicable to both areas.
Hi everyone,
I've been lurking for years (originally read Three Worlds Collide, then HPMOR, then started the sequences) and I guess it's about time to be able to interact.
I participate on the LW Slack, Facebook, and local meetups. Looking forward to the new directions LW is heading, and to being a part of LW 2.0.
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Note from Clarity: MBlume and other contributors wrote the original version of this welcome post, and orthonormal edited it a fair bit. If there's anything I should add or update, please send me a private message or make the change yourself in the next thread; I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.