Thank you. I was probably wrong.
In most of the examples, there's no common knowledge, and information is only transmitted one way. That doesn't allow for Aumann agreement: one side makes one update, then stops.
If someone tells me their assigned probability for something, that moves my probability very close to theirs, provided I think they've seen nearly strictly better evidence about it than I have. I think this explains most of your examples without referencing Aumann.
I think I don't understand what you mean. What's Aumann agreement? How's it a useful concept?
I thought the surprising thing about Aumann agreement was that ideal agents with shared priors will come to agree even if they can't intentionally exchange information, and can see only the other's assigned probability. [I checked Wikipedia; with common knowledge of each other's probabilistic belief about something, ideal agents with shared priors have the same belief. There's something about dialogues, but Aumann didn't prove that. I was wrong.]
Your post seems mostly about exchange of information. It doesn't matter which order you find your evidence, so i...
Thank you for responding.
It's possible for your team to lose five points, thereby giving the other team five points.
If the other team loses five points, then you gain five points.
Why is it not possible for the other team to lose five points without anything else happening? Where does the asymmetry come from?
It's
-25, -20, -5, 0, 20, 25.
Why isn't it
-25, -20, -5, 0, 5, 20, 25?
- (-25) lose points and other team gains points
- (-20) other team gains points
- (-5) lose points and other team gets nothing
- (0) nobody gets anything
- (20) gain points
- (25) other team loses points and you gain points
Why no (+5): other team loses points and you get nothing?
"X is not a thing that can be other things"
"Y is not actually a thing that another thing can be"
Why the "actually"?
I probably won't go to this.
I probably will go to the October 21st version. Is there some way I should formally communicate that?
Probably there should be a way to be more specific than "MAYBE".
I had to Google "RSVP".
Where should I send these complaints?
I no longer think it makes sense to clam up when you can't figure out how you originally came around to the view you now hold.
Either you can say "I came to this conclusion at some point, and I trust myself", or you should abandon the belief.
You don't need to know how or why your brain happened to contain the belief; you just need to know your own justification for believing it now. If you can't sufficiently justify your belief to yourself (even through things like "My-memory-of-myself-from-a-few-minutes-ago thinks it's likely" or "First-order intuitio...
When I read "Extravert", I felt happy about the uncommon spelling, which I also prefer.
Is this shared reality?
One-boxing only occurs in simulations, while two-boxing occurs both in and out of simulations.
If I one-box in simulations, then Omega puts $0 in the first box, and I can't one-box.
If I two-box in simulations, then Omega puts $100 in the first box, so I may be in a simulation or not.
One-boxing kills me, so I two-box.
Either I've made a mistake, or you have. Where is it?
Paul Graham says Robert Morris is never wrong.
He does this by qualifying statements (e.g., "I think"), not by saying fewer things.
"Your loved one has passed on"
I'm not sure I've ever used a euphemism (I don't know what a euphemism is).
When should I?
The more uncertain your timelines are, the worse an idea it is to overstress. You should take it somewhat easy; it's usually more effective to be capable of moderate contribution over the long term than great contribution over the short term.
I dislike it when fish suffer, because it makes me feel sad and because other people want fish not to suffer for moral reasons.
A line is just a helix that doesn't curve. It works the same for any helix; it would be a great coincidence to get a line.
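To make that concrete (standard parametrization; $r$ and $c$ are my labels, not from the thread): a helix of radius $r$ and pitch parameter $c$ is

$$\gamma(t) = (r\cos t,\ r\sin t,\ ct), \qquad \kappa = \frac{r}{r^2 + c^2},$$

and the curvature $\kappa$ vanishes exactly when $r = 0$, the degenerate case where the "helix" is a straight line.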
So we can't have fewer geniuses. More people means more people above 5 standard deviations (by definition?).
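A quick back-of-the-envelope check (a sketch, assuming the trait is normally distributed; the population figures are arbitrary):

```python
from scipy.stats import norm

# Fraction of a normal distribution lying above 5 standard deviations.
tail = norm.sf(5)  # survival function; about 2.9e-7

# That fraction is fixed by the distribution, so the expected count of
# people above the threshold scales linearly with population size.
for population in (1e8, 1e9, 1e10):
    print(f"{population:.0e}: ~{tail * population:.0f} people above 5 SD")
```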
I tried to expand (n+1)^4 visually. I spent about five minutes and was unable to visualise well enough.
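For reference, the algebra the visualisation is after is just row 1, 4, 6, 4, 1 of Pascal's triangle:

$$(n+1)^4 = n^4 + 4n^3 + 6n^2 + 4n + 1.$$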
You might not survive as yourself, if you could see yourself.
Those who say "That which can be destroyed by the truth should be" may continue to walk the Path from there.
That's wonderful.
You worded this badly, but I agree.
It is possible to read "you robbed a bank" without imagining robbing a bank. It's just very hard, and maybe impossible if you're not prepared.
"disclaimer"
This might be the least disclamatory disclaimer I've ever read.
I'd even call it a claimer.
I think that list would be very helpful for me.
Can you form a representative sample of your "list"? Or send the whole thing, if you have it written down.
If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.
This doesn't seem true; the subjects' answers correlate pretty strongly, so more of them wouldn't provide much additional evidence.
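A minimal sketch of the point, with made-up numbers (assume each genuinely independent subject is right with probability 0.9; in the fully correlated extreme, everyone just echoes the first speaker):

```python
import math

p = 0.9  # assumed accuracy of one independent subject (made-up number)

def evidence_bits(n_independent):
    """Log-odds evidence, in bits, from agreeing subjects,
    counting only the genuinely independent ones."""
    return n_independent * math.log2(p / (1 - p))

# Fully independent: 15 voices carry 5x the evidence of 3.
print(evidence_bits(15), evidence_bits(3))  # ~47.5 vs ~9.5 bits

# Fully correlated (everyone copies the first answer):
# 15 voices, 3 voices, and 1 voice are all worth the same.
print(evidence_bits(1))  # ~3.2 bits in every case
```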
Adding a single dissenter—just one other person who gives the correct answer, or even an incorrect answer that’s different from the group’s incorrect answer—reduces conformity very sharply, down to 5–10% of subjects.
This is irrational, though.
The simulations you made are much more complicated than physics. I think almost any simulation that showed an apple with any reasonable amount of computing power would have to be (if there's room for an "unreasonable" amount, there's probably room for a lot of apples).
Edit: is this how links are supposed to be used?
I think it could deduce it's an image of a sparse 3D space with 3 channels. From there, it could deduce a lot, but maybe not that the channels are activated by certain frequencies.
You might need a very strong superintelligence, or one with a lot of time. But I think the correct hypothesis has extremely high evidence compared to others, and isn't that complicated. If it has enough thought to locate the hypothesis, it has enough to find that it's better than almost any other.
Newtonian Mechanics or something a bit closer would rise very near the top of the list. It's possible even the most likely possibilities wouldn't be given much probability, but it would at least be somewhat modal. [Is there a continuous analogue for the mode? I don't know what softmax is.]
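For reference, the standard definition (not from the original post): with a temperature $T$,

$$\mathrm{softmax}(x)_i = \frac{e^{x_i/T}}{\sum_j e^{x_j/T}},$$

which gives a probability distribution over the options; as $T \to 0$ it concentrates all mass on the argmax, so it can be read as a continuous relaxation of taking the mode.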
Thank you for the question. I understand better, now.
Anthropics seems very important here; most laws of physics probably don't produce people, especially people who make cameras, then AGI, and then give it only a few images which don't look very optimized, or like they're of a much-optimized world.
A limit on speed can be deduced; if intelligence sufficient to make AGI is possible, coordination has probably already taken over the universe and made it to something's liking, unless it's slow for some reason. The AI has probably been designed quite inefficiently, which is not what you'd expect from intelligent design.
...I could see h
Upon rereading to find where I didn't understand, I found I didn't lose much of the text, and all I had previously lost was unimportant.
My happiness is less, but knowing feels better.
Randomly adding / subtracting extra pieces to either rockets or cryptosystems is playing with the worst kind of fire, and will eventually get you hacked or exploded, respectively.
Haha.
In all the examples Kahneman gives, I do not seem to substitute the easier questions he lists (or worse ones), even for a moment.
This is the first time in a while I've felt immune to a cognitive bias.
What am I doing wrong? Does my introspection not go deep enough?
Maybe I really have read enough trick questions on the internet that I (what part of me?) immediately try to tackle the hard problem (at least when reading a Less Wrong article which just told me exactly which mistake I'm expected to make).
I have an impression I've fallen for problems of this type before, when they were given for instrumental rather than epistemic reasons. But I can't remember any examples, and I don't know how frequently it happens.
we all know people who insist that they are ugly and stupid and unlikeable even though they don't seem any worse off than anyone else.
I would ask for a long time.
Reading would probably get boring after a few decades, but I think writing essays and programs and papers and books could last much longer. Meditation could also last long, because I'm bad at it.
<1000 years, though; I'd need to be relatively sure I wouldn't commit suicide or fall down stairs.
Due to a coincidence, I probably can't go to the October meetup. How do I unRSVP on Meetup?
Unrelatedly, I still think I have a good chance of making it next time.