# Recent Discussion

jacobjacob's Shortform Feed

What it says on the tin.

A few small points:

1. I personally find the word "need" (like in the title) a bit aversive. I think it's generally meant as "the benefit is higher than the opportunity cost", but even that is a difficult statement. The word itself seems to imply necessity, and my guess is that some people would read "need" as implying "highly certain."

2. While I obviously have hesitations about religious groups, they have figured out a bunch of good things. Personally, my gym is a very solo experience; I think that the community in churches and monasteries may make them better

... (Read more)
3G Gordon Worley III4h I agree with the spirit of your point, but I think we would be better served by a category anchored by an example other than a modern gym. To me the problem is that the modern gym is atomized and transactional: going to the gym is generally a solitary activity, even when you take a class or go with friends, because it's about your workout and not collaboratively achieving something. There are notable exceptions, but most of the time when I think of people going to the gym I imagine them working out as individuals for individual purposes.

Rationality training takes more. It requires bumping up against other people to see what happens when you "meet the enemy" of reality, and doing that in a productive way requires a kind of collective safety or trust in your fellow participants to both meet you fairly and to support you even while correcting you. Maybe this was a feature of the classic Greek gymnasium, but I find it lacking from most modern gyms.

We do have another kind of place that does regularly engage in this kind of mutual engagement in practice that is not atomized or transactional, and that's the dojo. The salient example to most people will be the dojo for practicing a martial art, and that's a place where trust and shared purpose exist. Sure, you might spend time on your own learning forms, but once you have mastered the basics you'll be engaged with other students head on in situations where, if one of you doesn't do what you should, one or both of you can get seriously injured. Thus it is with rationality training, although there the injuries are emotional or mental rather than physical.
2Hazard34m Agree with you and the OP, and note that the difference between my mental trope of a gym and a dojo is that I can go to the gym whenever, but a dojo is a place where practices happen at specifically scheduled times. I can see wanting both.
2jacobjacob4hThanks for describing that! Some questions: 1) What are some examples of what "practicing CFAR techniques" looks like? 2) To what extent are dojos expected to do "new things" vs. repeated practice of a particular thing? For example, I'd say there's a difference between a gym and a... marathon? match? I think there's more of the latter in the community at the moment: attempting to solve particular bugs using whatever means are necessary.

Charity investigators could be time-effective by optimizing non-cause-neutral donations.

There are a lot more non-EA donors than EA donors. It may also be the case that EA donation research is somewhat saturated.

Say you think that $1 donated to the best climate change intervention is worth 1/10th as much as $1 donated to the best AI-safety intervention. But you also think that your work could increase the efficiency of $10mil of AI donations by 0.5%, or instead increase the efficiency of $50mil of climate change donations by 10%. Then, for you to maximize e
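Plugging the example's numbers into a quick sketch (the dollar figures and the 1/10th exchange rate are the post's hypothetical numbers, not real cost-effectiveness estimates):

```python
# Worked version of the example above, in "AI-equivalent dollars".
ai_value_per_dollar = 1.0        # value of $1 to the best AI-safety intervention
climate_value_per_dollar = 0.1   # $1 to the best climate intervention is worth 1/10th

# Option A: raise the efficiency of $10mil of AI donations by 0.5%
ai_gain = 10_000_000 * 0.005 * ai_value_per_dollar

# Option B: raise the efficiency of $50mil of climate donations by 10%
climate_gain = 50_000_000 * 0.10 * climate_value_per_dollar

print(ai_gain, climate_gain)  # the climate option yields ~10x more value
```

Under these assumptions, optimizing the non-cause-neutral (climate) donations wins by roughly an order of magnitude, despite the 10x discount on the cause itself.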

... (Read more)
9toonalfrink3h I have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all. It seems quite obvious to me that these people equate their ability to empathize with an animal with the animal's capacity for consciousness, and it seems quite obvious to me that this is a case of the mind projection fallacy. Empathy is just a simulation. You can't actually see another mind. If you're going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find these correlates in animals. You don't look at animal behavior at all. We have absolutely no reason to believe that behavior correlates with consciousness. That's just your empathy getting in the way. The same empathy that attributes feelings to stuffed animals.

A reductio ad absurdum for this is the strong skeptical position: I have no particular reason to believe that anything is conscious. All configurations of quantum space are equally valuable, and any division into "entities" with different amounts of moral weight is ridiculous.

2Lanrian1hThe strong version of this can't be true. You claiming that you're conscious is part of your behaviour. Hopefully, it's approximately true that you would claim that you're conscious iff you believe that you're conscious. If behaviour doesn't at all correlate with consciousness, it follows that your belief in consciousness doesn't at all correlate with you being conscious. Which is a reductio, because the whole point with having beliefs is to correlate them with the truth.
2toonalfrink1h Right, right. So there is a correlation. I'll just say that there is no reason to believe that this correlation is very strong. I once won a Mario Kart tournament without feeling my hands.
Is Rationalist Self-Improvement Real?

Cross-posted from Putanumonit where the images show up way bigger. I don't know how to make them bigger on LW.

Imagine that tomorrow everyone on the planet forgets the concept of training basketball skills.

The next day everyone is as good at basketball as they were the previous day, but this talent is assumed to be fixed. No one expects their performance to change over time. No one teaches basketball, although many people continue to play the game for fun.

Snapshots of a Steph Curry jump shot. Image credit: ESPN.

Geneticists explain that some people are born with better hand-eye... (Read more)

I'm confused about how manioc detox is more useful to the group than the individual - each individual self-interestedly would prefer to detox manioc, since they will die (eventually) if they don't.

Yeah, I was wrong about manioc.

Something about the "science is fragile" argument feels off to me. Perhaps it's that I'm not really thinking about RCTs; I'm looking at Archimedes, Newton, and Feynman, and going "surely there's something small that could have been tweaked about culture beforehand to make some of this low-hanging scientific fruit get grabbed earlier

... (Read more)
1dmolling7hFor the record, I didn't actually downvote you--just wanted to share why I suspect others did. I agree with your full reasoning and didn't mean to imply you thought that was the only significant difference. I mostly agree with what habryka is saying, though.
11strangepoop10h While you're technically correct, I'd say it's still a little unfair (in the sense of connoting "haha you call yourself a rationalist how come you're failing at akrasia"). Two assumptions that can, I think you'll agree, take away from the force of "akrasia is epistemic failure":
* if modeling and solving akrasia is, like diet, a hard problem that even "experts" barely have an edge on, and importantly, things that do work seem to be very individual-specific, making it quite hard to stand on the shoulders of giants
* if a large percentage of people who've found and read through the sequences etc have done so only because they had very important deadlines to procrastinate
...then on average you'd see akrasia over-represented in rationalists. Add to this the fact that akrasia itself makes manually aiming your rationality skills at what you want harder. That can leave it stable even under very persistent efforts.
1Данило Глинський15h I'd say it's very strange how differently people understand the same words. Originally I thought those two activities were in the same category, but now that I've read your explanations, shouldn't I adjust my "categorization" heuristics? Who's wrong here? This issue seems small compared to the original topic, but how can we improve anything if we don't speak the same language and don't know what's right and who's wrong?

I frequently hear the advice that it's better to sleep on your back and worthwhile to learn to do so. Are there any studies that back up that advice? Otherwise, are there other good arguments? Personal experience is also welcome.

Personal experience / opinion: For me sleeping positions are an issue of expanded (back) or contracted (side) body language.

In an expanded state I seem to have a lower threshold for cognitive dissonance. I.e. my mind is less prone to indulging in pleasant-but-at-odds-with-reality thought trains. So I, for mental health reasons, try to fall asleep on my back when I can manage to tolerate the expanded state.

[Epistemic status: Three different data sets pointing to something similar is at least interesting, make your own mind up as to how interesting!]

In Eli Tyre’s analysis of birth order in historical mathematicians, he mentioned analysing other STEM subjects for similar effects. In the comments I kinda–sorta preregistered a study into this. Following his comments I dropped the age requirement I mentioned as it no longer seemed necessar... (Read more)

This is a review of my own post.

The first thing to say is that for the 2018 Review Eli’s mathematicians post should take precedence because it was him who took up the challenge in the first place and inspired my post. I hope to find time to write a review on his post.

If people were interested (and Eli was ok with it) I would be happy to write a short summary of my findings to add as a footnote to Eli’s post if it was chosen for the review.

***

This was my first post on LessWrong and looking back at it I think it still holds up fairly well.

There... (Read more)

Over the past few days I've been reading about reinforcement learning, because I understood how to make a neural network, say, recognise handwritten digits, but I wasn't sure how at all that could be turned into getting a computer to play Atari games. So: what I've learned so far. Spinning Up's Intro to RL probably explains this better.

(Brief summary, explained properly below: The agent is a neural network which runs in an environment and receives a reward. Each parameter in the neural network is increased in proportion to how much it i... (Read more)
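A minimal sketch of the policy-gradient idea in that summary, on a 2-armed bandit rather than an Atari game (my own toy example, not from Spinning Up; the "network" is just two logits, and a running-average baseline is added to stabilize learning):

```python
import math
import random

# Toy REINFORCE: each logit is nudged in proportion to how much it increased
# the log-probability of the sampled action, weighted by the
# (baseline-adjusted) reward.
random.seed(0)
logits = [0.0, 0.0]          # the whole "policy network"
true_rewards = [0.2, 0.8]    # arm 1 is better
baseline, lr = 0.0, 0.1

def softmax(ls):
    zs = [math.exp(x) for x in ls]
    return [z / sum(zs) for z in zs]

for _ in range(5000):
    probs = softmax(logits)
    a = 0 if random.random() < probs[0] else 1
    r = random.gauss(true_rewards[a], 0.1)         # noisy reward signal
    advantage = r - baseline
    baseline += 0.01 * (r - baseline)              # running-average baseline
    for i in range(2):
        grad_log_pi = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += lr * advantage * grad_log_pi  # policy-gradient step

probs = softmax(logits)
print(probs)  # the policy should now strongly favor arm 1
```

The same update rule scales to Atari: replace the two logits with a network mapping pixels to action logits, and the single reward with the return over an episode.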

Operationalizing Newcomb's Problem

The standard formulation of Newcomb's problem has always bothered me, because it seemed like a weird hypothetical designed to make people give the wrong answer. When I first saw it, my immediate response was that I would two-box, because really, I just don't believe in this "perfect predictor" Omega. And while it may be true that Newcomblike problems are the norm, most real situations are not so clear cut. It can be quite hard to demonstrate why causal decision theory is inadequate, let alone build up an intuition about it. In fact, the closest I've seen to a real-worl... (Read more)

1Nebu2hOkay, but then what would you actually do? Would you leave before the 10 minutes is up?
1Nebu2hBecause historically, in this fictional world we're imagining, when psychologists have said that a device's accuracy was X%, it turned out to be within 1% of X%, 99% of the time.
in this fictional world we're imagining, when psychologists have said that a device's accuracy was X%, it turned out to be within 1% of X%, 99% of the time.

99% of the time for me, or for other people? I may not be correct in all cases, but I have evidence that I _am_ an outlier on at least some dimensions of behavior and thought. There are numerous topics where I'll make a different choice than 99% of people.

More importantly, when the fiction diverges by that much from the actual universe, it takes a LOT more work to show that any lessons are valid or useful in the real universe.

Moloch feeds on opportunity

This is very likely my most important idea, and I've been trying to write about it for years, but it's too important to write about it badly, so I haven't ever mustered the courage. I guess I'll just do a quick test round.

It starts with this "hierarchy of needs" model, first popularized by Maslow, that we humans tend to focus on one need at a time.

• If you're hungry, you don't care about aesthetics that much
• If you are competing for status, you can easily be tempted to defect on coordination problems
• etc

I like to model this roughly as an ensemble of subag... (Read more)

Bayesian examination

A few months ago, Olivier Bailleux, a professor of computer science and reader of my book on Bayesianism, sent me an email. He suggested applying some of the ideas of the book to the examination of students. He proposed Bayesian examination.

I believe it to be a brilliant idea, which could have an important impact on how many people think. At least, I think that this is surely worth sharing here.

tl;dr Bayesian examinations seem very important to deploy because they incentivize both probabilistic thinking and intellectual honesty. Yet, as argued by Julia Galef in this talk, incentives seem critical to cha... (Read more)

This is really interesting, thanks, not something I'd thought of.

If the teacher (or whoever set the test) also has a spread of credence over the answers then a Bayesian update would compare the values of P(A), P(B|¬A) and P(C|¬A and ¬B) [1] between the students and teacher. This is my first thought about how I'd create a fair scoring rule for this.

[1] P(D|¬A and ¬B and ¬C) = 1 for all students and teachers so this is screened off by the other answers.

2Bucky2h The score for the 50:50:0:0 student is: 1 − 0.5² − 0.5² − 0² − 0² = 0.5. The score for the 40:20:20:20 student is: 1 − 0.6² − 0.2² − 0.2² − 0.2² = 0.52. I think the way you've done it is Brier's rule, which is (1 − the score from the OP). In Brier's rule the lower value is better.
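For concreteness, the scoring rule Bucky is applying, as a few lines of Python (a sketch of the quadratic rule under discussion, not code from the post):

```python
# Quadratic score: 1 - sum_i (p_i - y_i)^2, with y the one-hot correct answer.
# The Brier score is the sum itself (lower is better), hence the "1 -" flip.
def quadratic_score(probs, correct_index):
    return 1 - sum((p - (1.0 if i == correct_index else 0.0)) ** 2
                   for i, p in enumerate(probs))

print(quadratic_score([0.5, 0.5, 0.0, 0.0], 0))  # 0.5
print(quadratic_score([0.4, 0.2, 0.2, 0.2], 0))  # ~0.52
```

Like all proper scoring rules, this one is maximized in expectation by reporting your true credences, which is what gives the examination its honesty incentive.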
3Vaniver3hIf you have a digital exam, this works fine; if you want students to write things with pencil and paper, then you need to somehow turn the pencil marks into numbers that can be plugged into a simple spreadsheet.
1Pattern5hMultiple approaches might help - if a subject is predominantly taught one way, then people who don't learn well that way seem worse off. Being comfortable with a subject might require being good at it. Being good at it might require practice. Practice might require not fearing failure*. (A sense of play is ideal. That which is self driven can last - it will be practiced and retained, or relearned if not, when it proves to be missing. Absent practice things aren't learned once, but multiple times.) *Ugh fields (aversion to X) are built from smaller ugh fields (aversion to failure), in this model. Given the impact of spaced repetition on learning "information", repetition might have something to do with "aversions" - they are learned.

Applying economic models to physiology seems really obvious. For instance:

• Surely the body uses price signals to match production to consumption of various metabolites. Insulin as a price signal for glucose is one example.
• Presumably such price signals coordinate between spatially-separated organs with specialized roles in various physiological "supply chains". That should lead to general equilibrium models, and questions of convexity and stability.
• Can we back out an implied discount rate for the body's long-term energy stores?

Yet when I run a google search for the obvious phrase ... (Read more)
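One way to see what the bullets are reaching for: even a toy negative-feedback "price" loop coordinates production and consumption around an equilibrium. (This is purely illustrative economics, not a physiological model; all the numbers are made up.)

```python
# Toy price signal: excess demand bids the price up; producers expand when
# price exceeds their (unit) cost; demand falls as price rises.
def simulate(steps=1000, supply=10.0, price=1.0):
    for _ in range(steps):
        demand = 8.0 / price               # consumers buy less at high prices
        price += 0.05 * (demand - supply)  # excess demand raises the price
        supply += 0.05 * (price - 1.0)     # producers expand when price > cost
    return price, supply

price, supply = simulate()
print(price, supply)  # settles near price = 1, supply = demand = 8
```

The analogy in the bullets would cast insulin as `price` and glucose production/uptake as `supply`/`demand`; whether that mapping buys any predictive power is exactly the open question.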

3jmh8h This is a bit tongue in cheek, but "There's a dog in the picture. Once you see the dog, there's still a lot going on in the picture, but the whole thing makes a lot more sense." suggests we should not be taking pictures of dalmatians with high-contrast black and white film ;-)

At the same time I think that does get to the core of the discussion, for me at least. High-contrast images are really good for certain things, not so much in other settings. So while economic concepts may shed some light on yet-unanswered questions -- or perhaps merely suggest questions we have not yet thought about due to framing-type blinders -- I think one needs to tread carefully. I do agree with your, and ChristianKl's, position that some of the underlying economic concepts, theoretical at least, may actually be wheels we can put on another cart and make some progress. But from that perspective it's really just the abstract math model and not really economics. I do think using existing wheels is often a pretty good idea. But I also think periodically reinventing the existing wheels is a pretty good idea too.

So here are more specific questions I have about the general idea: 1) With pricing, I'm not convinced by your answer that we really get anything more, or that the additional properties are really anything more than terminological differences from any other signaling mechanism. Nor am I really seeing what more we're learning or can learn with the change to the economic price model here. What new insights are expected here -- or what can the current model approach not tell us that seems to be rather important?

Some other observations. Insulin as a price is problematic to me on two counts. First, even taking it as such, it seems to get us a partial equilibrium model at best, so it tells us very little about the overall state of the system. Second, it's not clear to me just what type of price it would be. It's not like a dollar price where we see the underlying monetary unit

Those are all reasonable questions to ask and points to raise, and I'm not going to go to bat defending any of the suggestions I made off the top of my head when writing the original question. The point of the original question was to see if anybody out there had publications asking/answering the sort of questions you pose, and it looks like the answer is "no".

For some of these questions, as you argue, it's possible that the lack of literature is because there really isn't anything interesting to be found. But at least some of thes... (Read more)

2ChristianKl11h Human market behavior is also complicated and only partially understood. In particular, behavioral economics finds that humans make all sorts of decisions that violate the rational-actor axiom. At the same time, economic theory can still make useful predictions about the behavior of markets.
1leggi13hI have read the insulin analogy (I read it before first commenting here). I don't know if the insulin analogy is from the book itself or your interpretation. But it is flawed. For multiple reasons. I started to pick it apart line by line but then decided a better course of action would be to try pointing you in the right direction - so that you could do the research about the physiological system, learn about what you are trying to label and then consider whether it is a good path to be following. Does it really make sense to apply economic labels to physiological systems? Expecting? Careful that you are not rationalising your beliefs.

This is a brief review of On the Chatham House Rule by Scott Garrabrant.

I tend to be very open about my thoughts and beliefs. However, I naturally am still discreet about a lot of things - things my friends told me privately, personal things about myself, and so on.

This has never been a big deal, figuring out the norms around secrecy. For most of my life it's seemed pretty straightforward, and I've not had problems with it. We all have friends who tell us things in private, and we're true to our word. We've all discovered a fact about someone that's maybe a bit embarrassing or personal, where

8jbash4hI don't think Apple is a useful model here at all. Well, Apple thinks so anyway. They may or may not be right, and "control of the brand" may or may not be important anyway. But anyway it's true that Apple can keep secrets to some degree. Apple is a unitary organization, though. It has a boundary. It's small enough that you can find the person whose job it is to care about any given issue, and you are unlikely to miss anybody who needs to know. It has well-defined procedures and effective enforcement. Its secrets have a relatively short lifetime of maybe as much as 2 or 3 years. Anybody who is spying on Apple is likely to be either a lot smaller, or heavily constrained in how they can safely use any secret they get. If I'm at Google and I steal something from Apple, I can't publicize it internally, and in fact I run a very large risk of getting fired or turned in to law enforcement if I tell it to the wrong person internally. Apple has no adversary with a disproportionate internal communication advantage, at least not with respect to any secrets that come from Apple. The color of the next iPhone is never going to be as interesting to any adversary as an X-risk-level AI secret. And if, say, MIRI actually has a secret that is X-risk-level, then anybody who steals it, and who's in a position to actually use it, is not likely to feel the least bit constrained by fear of MIRI's retaliation in using it to do whatever X-risky thing they may be doing.
31jbash4h No, I think it's probably very counterproductive, depending on what it really means in practice. I wasn't quite sure what the balance was between "We are going to actively try to keep this secret" and "It's taking too much of our time to write all of this up". On the secrecy side of that, the problem isn't whether or not MIRI's secrecy works (although it probably won't)[1]. The problem is with the cost and impact on their own community from their trying to do it. I'm going to go into that further down this tome. That whole GPT thing was just strange. OpenAI didn't conceal any of the ideas at all. They held back the full version of the actual trained network, but as I recall they published all of the methods they used to create it. Although a big data blob like the network is relatively easy to keep secret, if your goal is to slow down other research, controlling the network isn't going to be effective at all. ... and I don't think that slowing down follow-on research was their goal. If I remember right, they seemed to be worried that people would abuse the actual network they'd trained. That was indeed unrealistic. I've seen the text from the full network, and played with giving it prompts and seeing what comes out. Frankly, the thing is useless for fooling anybody and wouldn't be worth anybody's time. You could do better by driving a manually created grammar with random numbers, and people already do that. Treating it like a Big Deal just made OpenAI look grossly out of touch. I wonder how long it took them to get the cherry-picked examples they published when they made their announcement... So, yes, I thought OpenAI was being unrealistic, although it's not the kind of "romanticization" I had in mind. I just can't figure out what they could have stood to gain by that particular move. All that said, I don't think I object to "more careful release practices", in the sense of giving a little thought to what you hand out.
My objections
2ChristianKl7hThere's also the sort of secrecy you have when you signed an NDA because you consult with a company. I would expect a person like Nick Bostrom to have access to information about what happens inside DeepMind that's protected by promises of secrecy.

I can tell you that if you just want to walk into DeepMind (i.e. past the security gate), you have to sign an NDA.

Give praise

The dominant model about status in LW seems to be one of relative influence. Necessarily, it's zero-sum. So we throw up our hands and accept that half the community is just going to run a deficit.

Here's a different take: status in the sense of worth. Here's a set of things we like, or here's a set of problems for you to solve, and if you do, you will pass the bar and we will grant you personhood and take you seriously and allow you onto the ark when the world comes crumbling. Worth is positive-sum.

I think both models are useful, but only one of these models underlies the em... (Read more)

People generally only discuss 'status' when they're feeling a lack of it

While this has been true for other posts that I wrote about the subject, this post was actually written from a very peaceful, happy, almost sage-like state of mind, so if you read it that way you'll get closer to what I was trying to say :)

3leggi13h As a relative newcomer, I find this critique refreshing. I have observed a fair amount of ego and ego-stroking within a system that allows punishment of dissenters (the core group has a lot of power to chase off things it doesn't like to hear, and is therefore missing out on expanded thinking). Encouragement should not be confused with praise. And correction is not punishment. A good take-away message from the above review.

In 2008, Steve Omohundro's foundational Basic AI Drives made important conjectures about what superintelligent goal-directed AIs might do, including gaining as much power as possible to best achieve their goals. Toy models have been constructed in which Omohundro's conjectures bear out, and the supporting philosophical arguments are intuitive. The conjectures have recently been the center of debate between well-known AI researchers.

Instrumental convergence has been heuristically understood as an anticipated risk, but not as a formal phenomenon with a well-understood cause. The goal of this pos

It seems a common reading of my results is that agents tend to seek out states with higher power. I think this is usually right, but it's false in some cases. Here's an excerpt from the paper:

So, just because a state has more resources, doesn't technically mean the agent will go out of its way to reach it. Here's what the relevant current results say: parts of the future allowing you to reach more terminal states are instrumentally convergent, and the formal POWER contributions of different possibilities are approximately proportionally related to their in

... (Read more)
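A toy illustration of the distinction the excerpt is drawing (a sketch of the informal intuition only, not the paper's formal POWER definition; the transition graph is hypothetical):

```python
# Deterministic toy environment: state -> list of successor states.
# States with no successors are terminal. "Power" is loosely proxied here by
# how many terminal states remain reachable -- which, as the excerpt warns,
# is only approximately related to the formal notion.
graph = {
    "start": ["left", "right"],
    "left":  ["t1"],          # narrow corridor: one reachable terminal
    "right": ["t2", "t3"],    # hub: two reachable terminals
    "t1": [], "t2": [], "t3": [],
}

def reachable_terminals(state):
    seen, stack = set(), [state]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(graph[s])
    return {s for s in seen if not graph[s]}

print(len(reachable_terminals("left")))   # 1
print(len(reachable_terminals("right")))  # 2
```

On this toy picture, instrumental convergence is the claim that for most reward functions the agent prefers "right" over "left": more terminal states stay on the table.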

Sangha: Part 1

In years past, the word “community” conjured for me images of many people talking to each other, as at a party or a bake sale. When I thought of “finding community”, I thought of looking for a group of people that would frequently interact with each other and also me. It didn’t really sound appealing — lots of chaos, too many people talking at once, constant misunderstandings, and so forth. But I knew that I felt worse and worse over time if I never saw other people. So I entered a “community... (Read more)

Of course, I'm not expecting you to support the idea in the answers, but simply mentioning its conclusion :)

1MathieuRoy10hby "that's fine", you mean "I learned helplessness", right? (just checking, because I'm not sure what it means to say that something terrible is fine)
2Dagon3hI think it's actual helplessness, not "learned helplessness". "learned", in this context, usually implies "incorrect". "that's fine" means I believe that a change in my actions will cause more harm than it does good to my life-satisfaction index (or whatever we're calling the artificial construct of "sum of (possibly discounted) future utility stream"). It's perfectly reasonable to say it's "terrible" compared to some non-real ideal, and "fine" compared to actual likely futures. Unless you're just saying "people are hopelessly bad at modeling the world and making decisions", in which case I agree, but the problem is WAY deeper than you imply here.

I did a bad job of saying that I'm trying to highlight the attentional failures involved specifically.

6TurnTrout5hGoing through an intro chem textbook, it immediately strikes me how this should be as appealing and mysterious as the alchemical magic system of Fullmetal Alchemist. "The law of equivalent exchange" ≈ "conservation of energy/elements/mass (the last two holding only for normal chemical reactions)", etc. If only it were natural to take joy in the merely real [https://www.readthesequences.com/Joy-In-The-Merely-Real]...
4Hazard5hHave you been continuing your self-study schemes into realms beyond math stuff? If so I'm interested in both the motivation and how it's going! I remember having little interest in other non-physics science growing up, but that was also before I got good at learning things and my enjoyment was based on how well it was presented.
4TurnTrout4h Yeah, I've read a lot of books since my reviews fell off last year, most of them still math. I wasn't able to type reliably until early this summer, so my reviews kinda got derailed. I've read Visual Group Theory, Understanding Machine Learning, Computational Complexity: A Conceptual Perspective, Introduction to the Theory of Computation, An Illustrated Theory of Numbers, most of Tadellis' Game Theory, parts of several graph theory textbooks, and I'm going through Munkres' Topology right now. I've gotten through the first fifth of the first Feynman lectures, which has given me an unbelievable amount of mileage for generally reasoning about physics. I want to go back to my reviews, but I just have a lot of other stuff going on right now. Also, I run into fewer basic confusions than when I was just starting at math, so I generally have less to talk about. I guess I could instead try and re-present the coolest concepts from the book. My "plan" is to keep learning math until the low graduate level (I still need to at least do complex analysis, topology, field / ring theory, ODEs/PDEs, and something to shore up my atrocious trig skills, and probably more)[1], and then branch off into physics + a "softer" science (anything from microecon to psychology). CS ("done") -> math -> physics -> chem -> bio is the major track for the physical sciences I have in mind, but that might change. I dunno, there's just a lot of stuff I still want to learn. :)

[1] I also still want to learn Bayes nets, category theory, get a much deeper understanding of probability theory, provability logic, and decision theory.

Yay learning all the things! Your reviews are fun, also completely understandable putting energy elsewhere. Your energy for more learning is very useful for periodically bouncing myself into more learning.

Suppose a general-population survey shows that people who exercise less, weigh more. You don't have any known direction of time in the data - you don't know which came first, the increased weight or the diminished exercise. And you didn't randomly assign half the population to exercise less; you just surveyed an existing population.

The statisticians who discovered causality were trying to find a way to distinguish, within survey data, the direction of cause and effect - whether, as common sense would have it, more obese people exercise less because they find physical activity less rewarding; o... (Read more)

There are two issues with it.

You can not figure out how something works by only looking at some aspect. Think of the blind people and elephant story.

But it still has a point: once a system contains a subsystem that makes predictions, understanding the system by pure observation becomes impossible.

Lately I've come to think of human civilization as largely built on the backs of intelligence and virtue signaling. In other words, civilization depends very much on the positive side effects of (not necessarily conscious) intelligence and virtue signaling, as channeled by various institutions. As evolutionary psychologist Geoffrey Miller says, "it’s all signaling all the way down."

A question I'm trying to figure out now is, what determines the relative proportions of intelligence vs virtue signaling? (Miller argued that intelligence signaling can be considered a kind of virtue signaling, but

4interstice15hAbility to cooperate is important, but I think that status-jockeying is a more 'fundamental' advantage because it gives an advantage to individuals, not just groups. Any adaptation that aids groups must first be useful enough to individuals to reach fixation(or near-fixation) in some groups.

I don't think so; jockeying can only get you so far, and even then only in situations where physical reality doesn't matter. If you're in a group of ~50 people, and your rival brings home a rabbit, but you and your friend each bring back half a stag because of your superior coordination capabilities, the guy who brought back the rabbit can say all the clever things he wants, but it's going to be clear to everyone who actually deserves status. The two of you will gain a significant fitness advantage over the rest of the tribe, and so you will outcompete them.

3Vaniver18hThere are primates with proto-language, which I think let them communicate well enough to do these sorts of things. The question then becomes "why go from a four-grunt language to the full variety of human speech?", and it seems like runaway dynamics make more sense here (in a way that rhymes with the Deutsch-style "humans developed causal reasoning as part of figuring out how to do ritual-style mimicry better" arguments).
1FactorialCode5hBandwidth. 4 grunts let you communicate 2 bits of information per grunt; n grunts let you communicate log₂(n) bits per grunt. In addition, without a code or compositional language, that's the most information you can communicate. Even the simple agents in the OpenAI link were developing a binary code to communicate, because 2 bits wasn't enough. In my model, the marginal utility of extra bandwidth and a more expressive code is large and positive when cooperating. This goes on up to the information-processing limits of the brain, at which point further bandwidth is probably less beneficial. I think we don't talk as fast as Marshall Mathers [https://www.youtube.com/watch?v=XbGs_qK2PQA] simply because our brains can't keep up. Evolution is just following the gradient. The main reason I don't think runaway dynamics are a major factor is simply that language is very grounded. Most of our language is dedicated to referencing reality. If language evolved because of a signalling spiral, especially an IQ-signalling spiral, I'd expect language to look like a game, something like verbal chess. Sometimes it does look like that [https://en.wikipedia.org/wiki/Battle_rap], but it's the exception, not the rule. Social signalling seems instead to be mediated through other communication mechanisms, such as body language and tone, or things like vibing [https://www.lesswrong.com/posts/jXHwYYnqynhB3TAsc/what-vibing-feels-like]. In all those cases, the actual content of the language is mostly irrelevant and doesn't need to be the expressive, grounded, compositional mechanism that language is to fulfill its purpose.
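The bandwidth arithmetic in the comment above can be checked directly: an alphabet of n distinct, equally likely signals carries log₂(n) bits per signal, so growing the vocabulary buys bandwidth only logarithmically, while a compositional code buys it linearly in sequence length. A minimal sketch (the alphabet sizes are illustrative):

```python
import math

# Each of n equally likely, distinct signals carries log2(n) bits.
def bits_per_signal(n: int) -> float:
    return math.log2(n)

print(bits_per_signal(4))     # four grunts -> 2.0 bits per grunt
print(bits_per_signal(1024))  # a 1024-word vocabulary -> 10.0 bits per word

# A compositional code over just 2 signals gets the same 10 bits from a
# sequence of length 10: sequence length substitutes for alphabet size.
print(bits_per_signal(2) * 10)  # ten binary grunts -> 10.0 bits
```

This is why the agents in the OpenAI example reached for a binary code rather than more grunt types: composition scales bandwidth far more cheaply than alphabet growth does.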

# What is voting theory?

Voting theory, also called social choice theory, is the study of the design and evaluation of democratic voting methods (that's the activists' word; game theorists call them "voting mechanisms", engineers call them "electoral algorithms", and political scientists say "electoral formulas"). In other words, for a given list of candidates and voters, a voting method specifies a set of valid ways to fill out a ballot, and, given a valid ballot from each voter, produces an outcome.

(An "electoral system" includes a voting method... (Read more)
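The definition above — a rule for what counts as a valid ballot, plus a function from one valid ballot per voter to an outcome — can be sketched in a few lines. Here the method is plurality (the candidate names are made up for illustration, and ties are broken arbitrarily, which a real method would have to specify):

```python
from collections import Counter

# Minimal sketch of a voting method per the definition above. For
# plurality, a valid ballot names exactly one candidate, and the
# outcome is the candidate named most often.
def plurality(candidates, ballots):
    assert all(b in candidates for b in ballots), "invalid ballot"
    tally = Counter(ballots)
    # Outcome: the most-named candidate (ties broken arbitrarily here).
    return tally.most_common(1)[0][0]

winner = plurality(["A", "B", "C"], ["A", "B", "A", "C", "A", "B"])
print(winner)  # A
```

Other methods differ in both halves of the definition: approval voting widens what a valid ballot is (any subset of candidates), while instant-runoff keeps ranked ballots but changes how the outcome is computed.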

As the author, I think this has generally stood the test of time pretty well. There are various changes I'd make if I were doing a rewrite today; but overall, these are minor.

Aside from those generally-minor changes, I think that the key message of this piece remains important to the purpose of Less Wrong. That is to say: making collective decisions, or (equivalently) statements about collective values, is a tough problem; it's important for rationalists; and studying existing theory on this topic is useful.

Here are the specific changes I'd ... (Read more)