First, let me apologize pre-emptively if I'm retreading old ground; I haven't carefully read this whole discussion, so feel free to tell me to go reread the damned thread if I'm doing so. That said... my understanding of your account of existence is something like the following:

A model is a mental construct used (among other things) to map experiences to anticipated experiences. It may do other things along the way, such as represent propositions as beliefs, but it needn't. Similarly, a model may include various hypothesized entities that represent certain consistent patterns of experience, such as this keyboard I'm typing on, my experiences of which consistently correlate with my experiences of text appearing on my monitor, responses to my text later appearing on my monitor, etc.

On your account, all it means to say "my keyboard exists" is that my experience consistently demonstrates patterns of that sort, and consequently I'm confident of the relevant predictions made by the set of models (M1) that have in the past predicted patterns of that sort, not-so-confident of relevant predictions made by the set of models (M2) that predict contradictory patterns, etc. etc. etc.

We can also say that M1 all share a common property K that allows such predictions. In common language, we are accustomed to referring to K as an "object" which "exists" (specifically, we refer to K as "my keyboard"), which is as good a way of talking as any, though sloppy in the way of all natural language.

We can consequently say that M1 all agree on the existence of K, though of course that may well elide over many important differences in the ways that various models in M1 instantiate K.

We can also say that M1 models are more "accurate" than M2 models with respect to those patterns of experience that led us to talk about K in the first place. That is, M1 models predict relevant experience more reliably/precisely/whatever.

And in this way we can gradually converge on a single model (MR1), which includes various objects, and which is more accurate than all the other models we're aware of. We can call MR1 "the real world," by which we mean the most accurate model.
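The idea that "the real world" just denotes the most predictively accurate model can be sketched in a few lines of code. This is purely a toy illustration of my own (the models, the experience stream, and the accuracy measure are all made up for the example), not anything drawn from the discussion itself:

```python
# Toy sketch: "the real world" as nothing more than the most
# predictively accurate model over an experience stream.
# Each "model" maps one experience to a predicted next experience.

def accuracy(model, history):
    """Fraction of past transitions the model predicted correctly."""
    pairs = list(zip(history, history[1:]))
    hits = sum(1 for prev, nxt in pairs if model(prev) == nxt)
    return hits / len(pairs)

def real_world(models, history):
    """'The real world' is just whichever candidate model is most accurate."""
    return max(models, key=lambda m: accuracy(m, history))

# Hypothetical experience stream: pressing a key is followed by text appearing.
history = ["press", "text", "press", "text", "press", "text"]

m1 = lambda e: "text" if e == "press" else "press"  # a keyboard-containing model
m2 = lambda e: "silence"                            # a rival model with no keyboard

best = real_world([m1, m2], history)
assert best is m1  # M1 is "the real world" -- until a better model comes along
```

Here the "existence of the keyboard" cashes out as nothing more than m1's track record on the experience stream; if a more accurate m3 turned up tomorrow, real_world would simply return it instead, which is exactly the MR1-to-MR2 transition described below.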

Of course, this doesn't preclude uncovering a new model MR2 tomorrow which is even more accurate, at which point we would call MR2 "the real world". And MR2 might represent K in a completely different way, such that the real world would now, while still containing the existence of my keyboard, contain it in a completely different way. For example, MR1 might represent K as a collection of atoms, and MR2 might represent K as a set of parameters in a configuration space, and when I transition from MR1 to MR2 the real world goes from my keyboard being a collection of atoms to my keyboard being a set of parameters in a configuration space.

Similarly, it doesn't preclude our experiences starting to systematically change such that the predictions made by MR1 are no longer reliable, in which case MR1 stops being the most accurate model, and some other model (MR3) is the most accurate model, at which point we would call MR3 "the real world". For example, MR3 might not contain K at all, and I would suddenly "realize" that there never was a keyboard.

All of which is fine, but the difficulty arises when after identifying MR1 as the real world we make the error of reifying MRn, projecting its patterns onto some kind of presumed "reality" R to which we attribute a kind of pseudo-existence independent of all models. Then we misinterpret the accuracy of a model as referring, not to how well it predicts future experience, but to how well it corresponds to R.

Of course, none of this precludes being mistaken about the real world... that is, I might think that MR1 is the real world, when in fact I just haven't fully evaluated the predictive value of the various models I'm aware of, and if I were to perform such an evaluation I'd realize that no, actually, MR4 is the real world. And, knowing this, I might have various degrees of confidence in various models, which I can describe as "possible worlds."

And I might have preferences as to which of those worlds is real. For example, MP1 and MP2 might both be possible worlds, and I am happier in MP1 than MP2, so I prefer MP1 be the real world. Similarly, I might prefer MP1 to MP2 for various other reasons other than happiness.

Which, again, is fine, but again we can make the reification error by assigning to R various attributes which correspond, not only to the real world (that is, the most accurate model), but to the various possible worlds MRx..y. But this isn't a novel error, it's just the extension of the original error of reification of the real world onto possible worlds.

That said, talking about it gets extra-confusing now, because there's now several different mistaken ideas about reality floating around... the original "naive realist" mistake of positing R that corresponds to MR, the "multiverse" mistake of positing R that corresponds to MRx..y, etc. When I say to a naive realist that treating R as something that exists outside of a model is just an error, for example, the naive realist might misunderstand me as trying to say something about the multiverse and the relationships between things that "exist in the world" (outside of a model) and "exist in possible worlds" (outside of a model), which in fact has nothing at all to do with my point, which is that the whole idea of existence outside of a model is confused in the first place.

Have I understood your position?

If I understand both your and shiminux's comments, this might express the same thing in different terms:

  • We have experiences ("inputs").
  • We wish to optimize these inputs according to whatever goal structure.
  • In order to do this, we need to construct models to predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
  • Some of these models are more accurate than others. We might call accurate models "real".
  • However, the term "real" holds no special ontological value, and they might l…
shminux: As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it. The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into the multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies, like "everything imaginable exists". Promoting any M->R, or a certain set {MP}->R, seems forever meaningful once you have fallen for it. The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?

Welcome to Less Wrong! (5th thread, March 2013)

by orthonormal · 5 min read · 1st Apr 2013 · 1761 comments


If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fifth incarnation of the welcome thread; once a post gets over 500 comments, it stops showing them all by default, so we make a new one. Besides, a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

Note from orthonormal: MBlume and other contributors wrote the original version of this welcome post, and I've edited it a fair bit. If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.