# All of qvalq's Comments + Replies

I probably can't go to the October meetup, due to a conflicting event. How do I unRSVP on Meetup?

Unrelated, I still think I have a good chance of making it next time.

Thank you. I was probably wrong.

In most examples, there's no common knowledge. In most examples, information is only transmitted one way. This does not allow for Aumann agreement. One side makes one update, then stops.
If someone tells me their assigned probability for something, that turns my probability very close to theirs, if I think they've seen nearly strictly better evidence about it than I have. I think this explains most of your examples, without referencing Aumann.

I think I don't understand what you mean. What's Aumann agreement? How's it a useful concept?

2tailcalled2mo
It is true that the original theorem relies on common knowledge. In my original post, I phrased it as "a family of theorems" because one can prove various theorems with different assumptions yet similar outcomes. This is a general feature in math, where one shouldn't get distracted by the boilerplate because the core principle is often more general than the proof. So e.g. the principle you mention, of "If someone tells me their assigned probability for something, that turns my probability very close to theirs, if I think they've seen nearly strictly better evidence about it than I have.", is something I'd suggest is in the same family as Aumann's agreement theorem.

The reason for my post is that a lot of people find Aumann's agreement theorem counterintuitive and feel like its conclusion doesn't apply to typical real-life disagreements, and therefore assume that there must be some hidden condition that makes it inapplicable in reality. What I think I showed is that Aumann's agreement theorem defines "disagreement" extremely broadly, and once you think about it with such a broad conception it does indeed appear to generally apply in real life, even under far weaker conditions than the original proof requires.

I think this is useful partly because it suggests a better frame for reasoning about disagreement. For instance, I provide lots of examples of disagreements that rapidly dissipate, and so if you wish to know why disagreements persist, it can be helpful to think about how persistent disagreements differ from the examples I list (for example, many persistent disagreements are about politics, and for politics there are strong incentives for bias, so maybe some people who make political claims are dishonest, suggesting that conflict theory (the idea that political disagreement is due to differences in interests) is more accurate than mistake theory (the idea that political disagreement is due to making reasoning mistakes, which does not seem to predict that disagreem...

I thought the surprising thing about Aumann agreement was that ideal agents with shared priors will come to agree even if they can't intentionally exchange information, and can see only the other's assigned probability. [I checked Wikipedia; with common knowledge of each other's probabilistic belief about something, ideal agents with shared priors have the same belief. There's something about dialogues, but Aumann didn't prove that. I was wrong.]

Your post seems mostly about exchange of information. It doesn't matter which order you find your evidence, so i...

0tailcalled2mo
Knowing each other's probability for a statement requires exchanging information about which statement the probability is assigned to. In basically all of my examples, this was the information exchanged.

Thank you for responding.

It's possible for your team to lose five points, thereby giving the other team five points.
If the other team loses five points, then you gain five points.
Why is it not possible for the other team to lose five points without anything else happening? Where does the asymmetry come from?

It's
-25 -20 -5 0 20 25.
Why isn't it
-25 -20 -5 0 5 20 25?

1Oliver Sourbut3mo
Ah, I see! You're spot on. Failure to properly proofread - thanks, it's amended now.
• (-25) lose points and other team gains points
• (-20) other team gains points
• (-5) lose points and other team gets nothing
• (0) nobody gets anything
• (20) gain points
• (25) other team loses points and you gain points

Why no (+5)?

2Oliver Sourbut3mo
Right, this is a distillation. You actually get +10 for a correct buzzer answer, and then some bonus questions, which the team can answer. Bonus qs are worth +5 (each), and typically (in my experience) you get about 2 of those. So +20 ish on avg per correct buzz. Combine that with possible loss of points and you get these numbers. Obviously these are points (net change vs other team), not reward or utility! I equivocated those a bit in this discussion too.
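The numbers in the reply above can be checked with a small sketch (the 5-point penalty is inferred from the listed outcomes; the other values are stated in the reply):

```python
# A sketch of the quiz scoring described above:
# +10 for a correct buzz, +5 per bonus, ~2 bonuses on average,
# and an assumed 5-point penalty for an incorrect interruption.
buzz = 10
bonus = 5
avg_bonuses = 2
gain = buzz + avg_bonuses * bonus  # ~= +20 for a correct buzz
penalty = 5

# Net point change vs the other team for each listed outcome
outcomes = [
    -penalty - gain,  # lose points and other team gains points
    -gain,            # other team gains points
    -penalty,         # lose points and other team gets nothing
    0,                # nobody gets anything
    gain,             # gain points
    gain + penalty,   # other team loses points and you gain points
]
print(outcomes)  # [-25, -20, -5, 0, 20, 25]
```

This reproduces the listed values and shows why a bare +5 never appears: the 5-point penalty only ever occurs on your own side of the ledger, while your gains come in ~20-point chunks.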

Maths is incomplete. Inconsistency isn't proven.

Is this wrong?

X is not a thing that can be other things

Y is not actually a thing that another thing can be

Why the "actually"?

4George3d62mo
corrected to aktually

I probably won't go to this.
I probably will go to the October 21st version. Is there some way I should formally communicate that?

Probably there should be a way to be more specific than "MAYBE".
Where should I direct these complaints?

1J03MAN3mo
The October 21st RSVP is on Meetup. This LW post was generated from my Meetups Everywhere submission on the ACX blog. I've never used LW before and don't know how the site works. https://meetu.ps/e/Mqqtm/N1vlZ/i

I no longer think it makes sense to clam up when you can't figure out how you originally came around to the view which you now hold

Either you can say "I came to this conclusion at some point, and I trust myself", or you should abandon the belief.

You don't need to know how or why your brain happened to contain the belief; you just need to know your own justification for believing it now. If you can't sufficiently justify your belief to yourself (even through things like "My-memory-of-myself-from-a-few-minutes-ago thinks it's likely" or "First-order intuitio...

by far the best impact-to-community health ratio ever

What does this mean?

When I read "Extravert", I felt happy about the uncommon spelling, which I also prefer.

Is this shared reality?

One-box only occurs in simulations, while two-box occurs in and out of simulations.

If I one-box in simulations, then Omega puts \$0 in the first box, and I can't one-box.

If I two-box in simulations, then Omega puts \$100 in the first box, so I may be in a simulation or not.

One-boxing kills me, so I two-box.

Either I've made a mistake, or you have. Where is it?

2lsusr3mo
I mixed up the \$100 and \$0 in the original post. This is now fixed.

Thank you for the comparison.

Paul Graham says Robert Morris is never wrong.

He does this by qualifying statements (e.g. "I think"), not by saying fewer things.

"Your loved one has passed on"

I'm not sure I've ever used a euphemism (I don't know what a euphemism is).

When should I?

The more uncertain your timelines are, the worse an idea it is to overstress. You should take it somewhat easy; it's usually more effective to be capable of moderate contribution over the long term than great contribution over the short term.

I dislike when fish suffer because I feel sad, and because other people want fish to not suffer for moral reasons.

A line is just a helix that doesn't curve. It works the same for any helix; it would be a great coincidence to get a line.

So we can't have fewer geniuses. More people means more people above 5 standard deviations (by definition?).
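Under a normal model, the tail fraction above any fixed threshold is constant, so the count above it scales directly with population. A quick check of the 5-SD fraction:

```python
import math

# Fraction of a normal population more than 5 standard deviations above the mean
p = 0.5 * math.erfc(5 / math.sqrt(2))
print(p)               # ~2.87e-07
print(round(p * 8e9))  # roughly how many such people in a population of 8 billion
```

So at a fixed 5-SD cutoff, doubling the population doubles the count; the proportion never changes.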

I tried to solve (n+1)^4 visually. I spent about five minutes, and was unable to visualise well enough.
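For reference, the expansion being visualised has binomial coefficients 1, 4, 6, 4, 1; each coefficient counts the pieces of one type in the 4-D analogue of the square and cube pictures. This can be checked directly:

```python
from math import comb

# Coefficients of (n+1)**4 from the binomial theorem
coeffs = [comb(4, k) for k in range(5)]
print(coeffs)  # [1, 4, 6, 4, 1]

# Sanity check: evaluate the expansion against direct computation at n = 7
n = 7
assert sum(c * n**k for k, c in enumerate(coeffs)) == (n + 1) ** 4
```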

You might not survive as yourself, if you could see yourself.

Those who say "That which can be destroyed by the truth should be" may continue to walk the Path from there.

That's wonderful.

Adult IQ scores do too, I think.

You worded this badly, but I agree.

It is possible to read "you robbed a bank" without imagining robbing a bank. It's just very hard, and maybe impossible if you're not prepared.

No; I agree with you.

3Ilio7mo
Yeah, he said that too. But let's face it, it's 2023 and there's no sign of radiologists coming out from under heavy pressure. Especially in Canada, where the papy boom (the baby-boomer retirement wave) is hitting hard and the new generations value family time more than dying at, or from, work. But yeah, I concede it's not settled yet. Do you want to bet friendly goodies with me?

diary

should be "dairy".

disclaimer

This might be the least disclamatory disclaimer I've ever read.

I'd even call it a claimer.

3Nathan Helm-Burger7mo
hah, yeah. What I'm trying to get at is something like, "My ability to objectively debate this person in public is likely hampered in ways not clearly observable to me by the fact that I am working closely with them on their projects and have a lot of shared private knowledge and economic interests with them. Please keep these limitations in mind while reading my comments."

I think that list would be very helpful for me.

Can you form a representative sample of your "list"? Or send the whole thing, if you have it written down.

3romeostevensit7mo
partially exists here, but very little explanation https://conceptspace.fandom.com/wiki/List_of_Lists_of_Concepts

If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.

This doesn't seem true; the data correlate pretty strongly, so more wouldn't provide much evidence.
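A toy Bayesian sketch of this point (illustrative numbers, not from the experiment): if the other subjects' opinions were conditionally independent given the truth, log-odds evidence would grow linearly with their number; if they all echo one shared source, extra opinions add nothing.

```python
import math

def log_odds(n_opinions, accuracy=0.8, independent=True):
    """Log-odds evidence from n agreeing opinions (toy model, made-up accuracy)."""
    per_opinion = math.log(accuracy / (1 - accuracy))
    if independent:
        return n_opinions * per_opinion  # evidence compounds
    # fully correlated: all opinions merely repeat a single observation
    return per_opinion if n_opinions > 0 else 0.0

print(log_odds(3), log_odds(15))  # 15 independent opinions: 5x the evidence
print(log_odds(3, independent=False),
      log_odds(15, independent=False))  # identical: no extra evidence
```

Real opinions sit between the two extremes, which is why 15 agreeing subjects need not carry five times the evidential weight of 3.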

Adding a single dissenter—just one other person who gives the correct answer, or even an incorrect answer that’s different from the group’s incorrect answer—reduces conformity very sharply, down to 5–10% of subjects.

This is irrational, though.

The simulations you made are much more complicated than physics. I think almost any simulation would have to be, if it showed an apple with any reasonable amount of computing power (if there's room for an "unreasonable" amount, there's probably room for a lot of apples).

Edit: is this how links are supposed to be used?

I think it could deduce it's an image of a sparse 3D space with 3 channels. From there, it could deduce a lot, but maybe not that the channels are activated by certain frequencies.

You might need a very strong superintelligence, or one with a lot of time. But I think the correct hypothesis has extremely high evidence compared to others, and isn't that complicated. If it has enough thought to locate the hypothesis, it has enough to find that it's better than almost any other.

Newtonian Mechanics or something a bit closer would rise very near the top of the list. It's possible even the most likely possibilities wouldn't be given much probability, but it would at least be somewhat modal. [Is there a continuous analogue for the mode? I don't know what softmax is.]
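Since the bracketed question comes up: softmax is one standard continuous analogue of the argmax/mode. A minimal definition, as a sketch:

```python
import math

def softmax(scores, temperature=1.0):
    """Map scores to a probability distribution; low temperature concentrates
    mass on the maximum, approaching a hard argmax (a "soft" mode)."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp((s - m) / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([1.0, 2.0, 3.0]))                   # mass spread across options
print(softmax([1.0, 2.0, 3.0], temperature=0.1))  # nearly all mass on the max
```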

Thank you for the question. I understand better, now.

Anthropics seems very important here; most laws of physics probably don't form people, especially people who make cameras, then AGI, and then give it only a few images which don't look very optimized, or like they're of a much-optimized world.

A limit on speed can be deduced; if enough intelligence to make AGI is possible, coordination has probably already taken over the universe and made it to something's liking, unless it's slow for some reason. The AI has probably been designed quite inefficiently, not what you'd expect from intelligent design.

I could see h

...
6tangerine7mo
You are assuming a superintelligence that knows how to perform all these deductions. Why would this be a valid assumption? You are reasoning from your own point of view, i.e., the point of view of someone who has already seen much, much more of the world than a few frames, and more importantly someone who already knows what the thing is that is supposed to be deduced, which allows you to artificially reduce the hypothesis space. On what basis would this superintelligence be able to do this?

Upon rereading to find where I didn't understand, I found I didn't lose much of the text, and all I had previously lost was unimportant.

My happiness is less, but knowing feels better.

And (10, 3) has top 3 tokens:

I think these are top 10 tokens.

I always feel happy when I read alignment posts I don't understand, for some reason.

1qvalq7mo
Upon rereading to find where I didn't understand, I found I didn't lose much of the text, and all I had previously lost was unimportant.   My happiness is less, but knowing feels better.

I seem very similar to you.

the the

should be

with the

2lc8mo
thanks

Randomly adding / subtracting extra pieces to either rockets or cryptosystems is playing with the worst kind of fire, and will eventually get you hacked or exploded, respectively.

Haha.

Physics also tends toward very uninteresting things. This is for similar reasons, right?

1.123456

Is there any reason for this?

2Andrew Poet8mo
As far as I can tell (pasting 50257^2048 in a calculator) the pattern does not continue beyond what was posted: 1.1234561737320952217205634307..
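Python's arbitrary-precision integers can reproduce the calculator result directly (50257 and 2048 being the numbers from the post under discussion):

```python
# Leading digits of 50257**2048, computed exactly with big integers
digits = str(50257 ** 2048)
print(digits[0] + "." + digits[1:28])  # 1.123456173732095...
print(len(digits))                     # total number of decimal digits
```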

In all the examples Kahneman gives, I do not seem to substitute anything for the questions he lists (or worse ones), even for a moment.

This is the first time in a while I've felt immune to a cognitive bias.

What am I doing wrong? Does my introspection not go deep enough?

Maybe I really have read enough trick questions on the internet that I (what part of me?) immediately try to tackle the hard problem (at least when reading a Less Wrong article which just told me exactly which mistake I'm expected to make).

I have an impression I've fallen for problems of this type before, when they were given for instrumental rather than epistemic reasons. But I can't remember any examples, and I don't know how frequently it happens.

we all know people who insist that they are ugly and stupid and unlikeable even though they don't seem any worse off than anyone else.

I would ask for a long time.

Reading would probably get boring after a few decades, but I think writing essays and programs and papers and books could last much longer. Meditation could also last long, because I'm bad at it.

<1000 years, though; I'd need to be relatively sure I wouldn't commit suicide or fall down stairs.