# All of SquirrelInHell's Comments + Replies

Then what makes Peterson so special?

This is what the whole discussion is about. You are setting boundaries that are convenient for you, and refuse to think further. But some people in that reference class you are now denigrating as a whole are different from others. Some actually know their stuff and are not charlatans. Throwing a tantrum about it doesn't change it.

I did in fact have something between those two in mind, and was even ready to defend it, but then I basically remembered that LW is status-crazy and gave up on fighting that uphill battle. Kudos to alkjash for the fighting spirit.

3Jacob Falkovich5y
Not to sound glib, but what good is LW status if you don't use it to freely express your opinions and engage in discussion on LW? The same is true of other things: blog/Twitter followers, Facebook likes etc. are important inasmuch as they give me the ability to spread my message to more people. If I never said anything controversial for fear of losing measurable status, I would be foregoing all the benefits of acquiring it in the first place.
6gjm5y
I think you should consider the possibility that the not-very-positive reaction your comments about Peterson here have received may have a cause other than status-fighting. (LW is one of the less status-crazy places I'm familiar with. The complaints about Peterson in this discussion do not look to me as if they are primarily motivated by status concerns. Some of your comments about him seem needlessly status-defensive, though.)
They explicitly said that he's not wrong-on-many-things in the T framework, the same way Eliezer is T.correct.

Frustrating, that's not what I said! Rule 10: be precise in your speech, Rule 10b: be precise in your reading and listening :P My wording was quite purposeful:

I don't think you can safely say Peterson is "technically wrong" about anything

I think Raemon read my comments the way I intended them. I hoped to push on a frame people seem to be (according to my private, unjustified, wanton opinion) obviously too stuck in. See a...

6Jacob Falkovich5y
Your reply below says: What exactly did you think I meant when I said he's "technically wrong about many things" and you told me to be careful? I meant something very close to what your quote says, I don't even know if we're disagreeing about anything. And by the way, there is plenty of room for disagreement. alkjash just wrote what I thought you were going to [https://www.lesserwrong.com/posts/tLYKdGBgRXcrzEatb/the-jordan-peterson-mask#CfMft56HMsYK885vo] , a detailed point-by-point argument for why Peterson isn't, in fact, wrong. There's a big difference between alkjash's "Peterson doesn't say what you think he says" and "Peterson says what you think and he's wrong, but it's not important to the big picture". If Peterson really says "humans can't do math without terminal values" that's a very interesting statement, certainly not one that I can judge as obviously wrong.

Cool examples, thanks! Yeah, these are issues outside of his cognitive expertise and it's quite clear that he's getting them wrong.

Note that I never said that Peterson isn't making mistakes (I'm quite careful with my wording!). I said that his truth-seeking power is in the same weight class, but obviously he has a different kind of power than LW-style. E.g. he's less able to deal with cognitive bias.

But if you are doing "fact-checking" in LW style, you are mostly accusing him of getting things wrong about which he never c...

5mako yass5y
Then what the heck do you mean by "equal in truth-seeking ability"?
8Gaius Leviathan XV5y
Sorry, but I think that is a lame response. It really, really isn't just lack of expertise - it's a matter of Peterson's abandonment of skepticism and scholarly integrity. You don't need to be a historian to tell that the ancient Egyptians didn't know about the structure of DNA. You don't need to be a statistician to know that coincidences don't disprove scientific materialism. Peterson is a PhD who knows from experience the level of due diligence needed to publish in peer-reviewed journals. He knows better but did it anyway. He cares enough to tell his students, explicitly, that he "really does believe" that ancient art depicts DNA - repeatedly! - and put it in public YouTube videos with his real name and face. It's more like if Eliezer used the "ancient aliens built the pyramids" theory as an example in one of the sequences in a way that made it clear that he really does believe aliens built the pyramids. It's stupid to believe it in the first place, and it's stupid to use it as an example.

Then what makes Peterson so special? Why should I pay more attention to him than, say, Deepak Chopra? Or an Islamist cleric? Or a postmodernist gender studies professor who thinks western science is just a tool of patriarchal oppression? Might they also have messages that are "metaphorically true" even though their words are actually bunk? If Peterson gets the benefit of the doubt when he says stupid things, why shouldn't everybody else? If one uses enough mental gymnastics, almost anything can be made to be "metaphorically true".

Peterson's fans are too emotionally invested in him to really consider what he's saying rationally - akin to religious believers. Yes, he gives his audience motivation and meaning - much in the same way religion does for other demographics - but that can be a very powerful emotional blinder. If you really think that something gives your life meaning and motivation, you'll overlook its flaws, even when it means weakening your epistemology.
This story is trash and so am I.
If people don't want to see this on LW I can delete it.

You are showcasing a certain unproductive mental pattern, for which there's a simple cure. Repeat after me:

This is my mud pile

I show it with a smile

And this is my face

It also has its place

For increased effect, repeat 5 times in rap style.

7alkjash5y
To clarify, I was happy about finally accepting "being trash" but ambivalent about whether this trash should be on LW. But I agree with the sentiment.

[Please delete this thread if you think this is getting out of hand. Because it might :)]

I'm not really going to change my mind on the basis of just your own authority backing Peterson's authority.

See right here, you haven't listened. What I'm saying is that there is some fairly objective quality which I called "truth-seeking juice" about people like Peterson, Eliezer and Scott which you can evaluate by yourself. But you have just dug yourself into the same trap a little bit more. From what you write, your heuristics for evalua...

I'm worried we may be falling into an argument about definitions, which seems to happen a lot around JBP. Let me try to sharpen some distinctions.

In your quote, Chapman disagrees with Eliezer about his general approach, or perhaps about what Eliezer finds meaningful, but not about matters of fact. I disagree with JBP about matters of fact.

My best guess at what "truth-seeking juice" means comes in two parts: a desire to find the truth, and a methodology for doing so. All three of Eliezer/Scott/JBP have the first part down, but their methodolo...

[Note: somewhat taking you up on the Crocker's rules]

Peterson's truth-seeking and data-processing juice is in the super-heavy weight class, comparable to Eliezer etc. Please don't make the mistake of lightly saying he's "wrong on many things".

At the level of analysis in your post and the linked Medium article, I don't think you can safely say Peterson is "technically wrong" about anything; it's overwhelmingly more likely you just didn't understand what he means. [it's possible to make more case-specific arguments here but I think the outside view meta-rationality should be enough...]

If you want me to accept JBP as an authority on technical truth (like Eliezer or Scott are), then I would like to actually see some case-specific arguments. Since I found the case-specific arguments to go against Peterson on the issues where I disagree, I'm not really going to change my mind on the basis of just your own authority backing Peterson's authority.

For example: the main proof Peterson cites to show he was right about C-16 being the end of free speech is the Lindsay Shepherd fiasco. Except her case wasn't even in the relevant j...

4) The skill to produce great math and skill to produce great philosophy are secretly the same thing. Many people in either field do not have this skill and are not interested in the other field, but the people who shape the fields do.

FWIW I have reasonably strong but not-easily-transferable evidence for this, based on observation of how people manipulate abstract concepts in various disciplines. Using this lens, math, philosophy, theoretical computer science, theoretical physics, all meta disciplines, epistemic rationality, etc. form a cluster in which math is a central node, and philosophy is unusually close to math even considered in the context of the cluster.

Note that this is (by far) the least incentive-skewing from all (publicly advertised) funding channels that I know of.

Apply especially if all of 1), 2) and 3) hold:

1) you want to solve AI alignment

2) you think your cognition is pwned by Moloch

3) but you wish it wasn't

2whpearson5y
I might take this up at a later date. I want to solve AI alignment, but I don't want to solve it now. I'd prefer it if our society's institutions (both governmental and non-governmental) were a bit more prepared. Differential research that advances safety more than AI capability still advances AI capability.

Maybe it'd be useful to make a list of all the publicly advertised funding channels? Other ones I know of:

• http://existence.org/getting-support/
• https://futureoflife.org/2017/12/20/2018-international-ai-safety-grants-competition/
• https://www.lesserwrong.com/posts/4WbNGQMvuFtY3So7s/announcement-ai-alignment-prize-winners-and-next-round
• https://intelligence.org/mirix/
tl;dr: your brain hallucinates sensory experiences that have no correspondence to reality. Noticing and articulating these “felt senses” gives you access to the deep wisdom of your soul.

I think this snark makes it clear that you lack gears in your model of how focusing works. There are actual muscles in your actual body that get tense as a result of stuff going on with your nervous system, and many people can feel that even if they don't know exactly what they are feeling.

1alkjash5y
That's true; I've had "butterflies" that gave me actual stomachaches and indigestion. "No correspondence to reality" isn't exactly right, I'm not sure how to phrase it. Perhaps "no correspondence to external reality, if you consider normal bodily functions as external reality." But your claim that I lack gears in my focusing model is definitely true.

[Note that I am in no way an expert on strategy, probably not up to date with the discourse, and haven't thought this through. I also don't disagree with your conclusions much.]

[Also note that I have a mild feeling that you engage with a somewhat strawman version of the fast-takeoff line of reasoning, but have trouble articulating why that is the case. I'm not satisfied with what I write below either.]

These possible arguments seem not included in your list. (I don't necessarily think they are good arguments. Just mentioning whatever int...

I think it's perfectly valid to informally say "gears" while meaning both "gears" (how clear a model is on what it predicts) and "meta-gears" (how clear the meta model is on which models it a priori expects to be correct). And the new clarity you bring to this would probably be the right time to re-draw the boundaries around gears-ness, to make it match the structure of reality better. But this is just a suggestion.

2abramdemski5y
Maybe so. I'm also tempted to call meta-gears "policy-level gears" to echo my earlier terminology post [https://www.lesserwrong.com/posts/vKbAWFZRDBhyD6K6A/gears-level-and-policy-level] , but it seems a bit confusing. Definitely would be nice to have better terminology for it all.

[excellent, odds ratio 3:2 for worth checking LW2.0 sometimes and 4:3 for LW2.0 will succeed]

I think "Determinism and Reconstructability" are great concepts but you picked terrible names for them, and I'll probably call them "gears" and "meta-gears" or something short like that.

This article made me realize that my cognition runs on something equivalent to logical inductors, and what I recently wrote on Be Well Tuned about cognitive strategies is a reasonable attempt at explaining how to implement logical inductors in a human brain.

7abramdemski5y
Thank you! I'm glad to contribute to those odds ratios. I neglected to optimize those names, yeah. But "gears" v "meta-gears"? I think the two things together make what people call "gears", so it should be more like "gears inside" v "gears outside" (maybe "object-level gears" v "meta-level gears"), so that you can say both are necessary for good gears-level models. I hadn't seen Be Well Tuned!
Request: Has this idea already been explicitly stated elsewhere? Anything else regular old TAPs are missing?

It's certainly not very new, but nothing wrong with telling people about your TAP modifications. There are many nuances to using TAPs in practice, and ultimately everyone figures out their own style anyway. Whether you have noticed or not, you probably already have this meta-TAP:

"TAPs not working as I imagined -> think how to improve TAPs"

It is, ultimately, the only TAP you need to successfully install to start the process of recursive improvement.

I have the suspicion that everyone is secretly a master at Inner Sim

There's a crucial difference here between:

• good "secretly": I'm so good at it it's my second nature, and there's little reason to bring it up anymore
• bad "secretly": I'm not noticing what I'm doing, so I can't optimize it, and never have

One example is that the top tiers of the community are in fact composed largely of people who directly care about doing good things for the world, and this (surprise!) comes together with being extremely good at telling who's faking it. So in fact you won't be socially respected above a certain level until you optimize hard for altruistic goals.

Another example is that whatever your goals are, in the long run you'll do better if you first become smart, rich, knowledgeable about AI, sign up for cryonics, prevent the world from ending etc.

if people really wanted to optimize for social status in the rationality community there is one easiest canonical way to do this: get good at rationality.

I think this is false: even if your final goal is to optimize for social status in the community, real rationality would still force you to locally give it up because of convergent instrumental goals. There is in fact a significant first order difference.

1alkjash5y
Can you elaborate on this? I have the feeling that I agree now but I'm not certain what I'm agreeing with.
I realized today that UDT doesn't really need the assumption that other players use UDT.

Was there ever such an assumption? I recall a formulation in which the possible "worlds" include everything that feeds into the decision algorithm, and it doesn't matter if there are any games and/or other players inside of those worlds (their treatment is the same, as are corresponding reasons for using UDT).

4cousin_it5y
Yeah, it's a bit subtle and I'm not sure it even makes sense. But the idea goes something like this. Most formulations of UDT are self-referential: "determine the logical consequences of this algorithm behaving so-and-so". That automatically takes into account all other instances of this algorithm that happen to exist in the world, as you describe. But in this post I'm trying to handwave a non-self-referential version: "If you're playing a game where everyone has the same utility function, follow the simplest Nash equilibrium maximizing everyone's expected utility, no matter how your beliefs change during the game". That can be seen as an individually rational decision! The other players don't have to be isomorphic to you, as long as they are rational enough and have no incentive to cheat you. That goes against something I've been telling people for years - that UDT cannot be used in real life, because the self-referential version requires proving detailed theorems about other people's minds. The idea in this post can be used in real life. The fact that it can't handle PD is a nice sanity check, because cooperating in PD requires proving detailed theorems to prevent cheating, while the problems I'm solving have no incentives to cheat in the first place.
You’d reap the benefits of being publicly wrong

By the way - did I mention that inventing the word "hammertime" was epic, and that now you might just as well retire because there's no way to compete against your former glory?

Thanks for that - if I thought like that I'd have retired a long time ago.

Edit: Oh god I'm blind, took another 5 reads to notice. And here I'm supposed to be teaching noticing or something.

I think this comment is 100% right despite being perhaps maybe somewhat way too modest. It's more useful to think of sapience as introducing a delta on behavior, rather than a way to execute desired behavior. The second is a classic Straw Vulcan failure mode.

I wonder if all of the CFAR techniques will have different names after you are done with them :) Looking forward to your second and third iteration.

7alkjash5y
What can I say, I only have a few tricks and one of them is renaming things. :)

All sounds sensible.

Also, reminds me of the 2nd Law of Owen:

In a funny sort of way, though, I guess I really did just end up writing a book for myself.

[Note: I am writing from my personal epistemic point of view from which pretty much all the content of the OP reads as obvious obviousness 101.]

The reason people don't know this is not that it's hard to know. This is some kind of common fallacy: "if I say true things that people apparently don't know, they will be shocked and turn their lives around". But in fact most people around here have more than enough theoretical capacity to figure this out, and much more, without any help. The real bottleneck is human psychology, ...

5Qiaochu_Yuan5y
I'm sympathetic to this. I do think there's something important about making all of this stuff common knowledge in addition to making it psychologically palatable to take seriously.
Note to everyone else: the least you can do is share this post until everyone you know is sick of it.

I would feel averse to this post being shared outside LW circles much, given its claims about AGI in the near future being plausible. I agree with the claim but not really for the reasons provided in the post; I think it's reasonable to put some (say 10-20%) probability on AGI in the next couple of decades due to the possibility of unexpectedly fast progress and the fact that we don't actually know what would be needed for AGI. But that isn'...

5Raemon5y
Generally, yeah. But I know that I got something very valuable out of the conversations in question, which wasn't about social pressure or scrupulosity, but... just actually taking the thing seriously. This depended on my psychological state in the past year, and at least somewhat depended on psychological effects of having a serious conversation about xrisk with a serious xrisk person. My hope is that at least some of the benefits of that could be captured in written form. If that turns out to just not be possible, well, fair. But I think if at least a couple people in the right-life-circumstances gets 25% of the value I got from the original conversation(s) from reading this, it'll have been a good use of time. I also disagree slightly with the "the reason people don't know this isn't that it's hard to know." It's definitely achievable to figure out most of the content here. But there's a large search space of things worth figuring out, and not all of it is obvious.

It is a little bit unfair to say that buying 10 bitcoins was everything you needed to do. I owned 10 bitcoins, and then sold them at a meager price. Nothing changed as a result of me merely understanding that buying bitcoins was a good idea.

What you really needed was to sit down and think up a strict selling schedule, and also commit to following it. E.g. spend $100 on bitcoin now, and later sell exactly 10% of your bitcoins every time that 10% becomes worth at least $10,000 (I didn't run the numbers to check if these exact values make sense, but you g...

1[deleted]5y
I strongly agree. Despite appearances, I wouldn't say someone with 10 bitcoin today has "won" at all. Winning means getting more of what you ultimately care about, like goods and services. You only win if you convert your bitcoin into goods or dollars at the right time. I am reminded of "buy low, sell high": an empty phrase that can sound deceptively like good investment advice.

A good general rule here is to think in terms of what percentage of your portfolio (or net worth) you want in a specific asset class, rather than making buying/selling a binary decision. Then rebalance every 3 months.

For example, you might decide you want 2.5%-5% in crypto. If the price quadrupled, you would sell about 75% of your stake at the end of the quarter. If it halved, you would buy more.

The major benefit is that this moves you from making many small decisions to one big decision, which is usually easier to get right.
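The rebalancing rule above is easy to sketch in code. This is a minimal illustration (not investment advice, and the 2.5% target and 100-unit portfolio are made-up numbers): at each quarterly rebalance, compute the target allocation from the current total and trade the difference.

```python
def rebalance(crypto_value, other_value, target_fraction):
    """Return (target_crypto_value, amount_to_sell).

    A negative amount_to_sell means you should buy more instead.
    """
    total = crypto_value + other_value
    target = total * target_fraction
    return target, crypto_value - target

# Start: crypto is 2.5% of a 100-unit portfolio.
crypto, other = 2.5, 97.5

# Suppose the price quadruples before the quarterly rebalance.
crypto *= 4  # now worth 10 units
target, to_sell = rebalance(crypto, other, 0.025)

print(round(to_sell / crypto, 3))  # → 0.731, i.e. sell ~75% of the stake
```

This reproduces the claim in the comment: after a 4x move, restoring a 2.5% allocation means selling roughly three quarters of the position, without making any fresh "should I sell?" decision at the time.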

At grave peril of strawmanning, a first order-approximation to SquirrelInHell’s meta-process (what I think of as the Self) is the only process in the brain with write access, the power of self-modification. All other brain processes are to treat the brain as a static algorithm and solve the world from there.

Let me clarify: I consider it the meta level when I think something like "what trajectory do I expect to have as a result of my whole brain continuing to function as it already tends to do, assuming I do nothing special with the output of the thoug...

1alkjash5y
Yea, I wanted to write that this is the next step in the pendulum rather than a rebuttal.

Humans are not thermostats, and they can do better than a simple mathematical model. The idea of oscillation with decreasing amplitude you mention is well known from control theory, and it's looking at the phenomenon from a different (and, I dare say, less interesting) perspective.

To put it in another way, there is no additional deep understanding of reality that you could use to tell apart the fourth and the sixth oscillation of a converging mathematical model. If you know the model, you are already there.

1Gordon Seidoh Worley5y
I'm becoming less sure of this, but let me explain. Now I actually tend to use the number 5 and sometimes even 7 rather than 3, because there is stuff going on below the level of adult human that is sometimes worth talking about, but if we take our starting point as adult humans and their naive perspectives then I think 3 is fine. So what I've thought for a long time is that at 3 we get enough to form stable loops that short-circuit the need for more levels. You give one of my favorite examples of this in your piece - thesis, antithesis, synthesis - which forms a loop and doesn't require more oscillations, because you can just make the synthesis the new thesis and repeat to refine without needing to see deeper. Some folks around here know this as the "you don't need more than 3 levels of recursion" heuristic. However, there are times when you may want to manipulate the loop itself while also working on it at the object level. I sometimes find myself wandering in this direction during meditation but find it hard to do, because it's asking me to keep a lot of stuff in working memory at once. It may turn out that I can't actually do it or take advantage of another oscillation/level/etc., but I do find myself wandering that way. This is not to dispute that 3 is enough, only that having more may enable experiences that would otherwise be too complex to have without the aid of memory.
1zulupineapple5y
What do humans do better? In what ways are human oscillations more interesting? What is the deep understanding of reality that lets you tell apart first and third oscillation? Honestly, I have no idea what you're talking about. Maybe "less interesting" hints at mysterious answers [http://lesswrong.com/lw/iu/mysterious_answers_to_mysterious_questions/]?

[Note: I'm not sure if this was your concern - let me know if what I write below seems off the mark.]

The most accurate belief is rarely the best advice to give; there is a reason why these corrections tend to happen in a certain order. People holding the naive view need to hear the first correction, those who overcompensated need to hear the second correction. The technically most accurate view is the one that the fewest people need to hear.

I invoke this pattern to forestall a useless conversation about whose advice is objectively best.

In fact, I thin...

4Qiaochu_Yuan5y
I quite like this.

Here we go: the pattern of this conversation is "first correction, second correction, accurate belief" (see growth triplets).

Naive view: "learn from masters"

The OP is the first correction: "learn from people just above you"

Your comment is the second correction: "there are cases where teacher's advice is better quality"

The accurate belief takes all of this into account: "it's best to learn from multiple people in a way that balances wisdom against accessibility"

3alkjash5y
I worry that some kind of fallacy of grey is going on here which loses despite being technically more accurate.

Yes! Not just improved, but leading by stellar example :)

People have recently discussed short words from various perspectives. While I was initially not super-impressed by this idea, this post made me shift towards "yeah, this is useful if done just right".

Casually reading this post on your blog yesterday was enough for the phrase to automatically latch on to the relevant mental motion (which it turns out I was already using a lot), solidify it, make it quicker and more effective, and make me want to use it more.

It has since then been popping up in my consciousness repeatedly, on at least 5 separate oc...

Your point can partially be translated to "make reasonably close to 1" - this makes the decisions less about what the moderators want, and allows longer chains of passing the "trust buck".

However, to some degree "a clique moved in that wrote posts that the moderators (and the people they like) dislike" is pretty much the definition of a spammer. If you say "are otherwise extremely good", what is the standard by which you wish to judge this?

Yes, and also it's even more general than that - it's sort of how progress works on every scale of everything. See e.g. tribalism/rationality/post-rationality; thesis/antithesis/synthesis; life/performance/improv; biology/computers/neural nets. The OP also hints at this.

This seems to rest on a model of people as shallow, scripted puppets.

"Do you want my advice, or my sympathy?" is really asking: "which word-strings are your password today?" or "which kind of standard social script do you want to play out today?" or "can you help me navigate your NPC conversation tree today?".

Personally, when someone tries to use this approach on me I am inclined to instantly write them off and never look back. I'm not saying everyone is like me but you might want to be wary of what kind of people you are optimizing yourself for.

3gwillen5y
6PeterBorah5y
I read "response" much more broadly than you do. I'd translate the question as something more like, "What sort of currency are you currently lacking?" or "What sort of aid do you currently require?" Imagine someone says, "Work has gotten really overwhelming." There are many things this could mean, and many ways you could potentially help them. Perhaps they suspect they are making a strategic error in their work, and you can help by analyzing strategic options with them. Perhaps they are in danger of a bucket error suggesting that they are a bad person for letting work get overwhelming, in which case you can help by providing evidence that you don't think they're bad. Perhaps they are tired, and you can help by bringing them to a state that's more restful. If you don't know which of these things is more likely, asking is a pretty good shortcut to figuring it out.

I'd add that the desire to hear apologies is itself a disguised status-grabbing move, and it's prudent to stay wary of it.

While I 100% agree with your views here, and this is by far the most sane opinion on akrasia that I've seen in a long time, I'm not convinced that so many people on LW really "get it". Although to be sure, the distribution of behavior that signals this has significantly shifted since the move to LW2.0.

So overall I am very uncertain, but I still find it more plausible that the reason the community as a whole stopped talking about akrasia is more like: people ran out of impressive-seeming or fresh-seeming things to say about it, while the minority that could have contributed actual new insights turned away for better reasons.

3Qiaochu_Yuan5y
Right, that's why I labeled the above "the optimistic story." The pessimistic stories were left as exercises to the reader.

Big props for posting a book review - that's always great and in demand. However, some points on (what I think is) good form while doing these:

• a review on LW is not an advertisement; try to write reviews in a way that is useful to people who decide to not read the book
• I also don't care to see explicit encouragement to read a book - if what you relate about its content is tempting enough, I expect that I will have the idea to go and read it on my own
2ryan_b5y
Update completed. Do you find it improved along those dimensions?
1ryan_b5y
This is very helpful, thank you. Between these points and the first two responses being things I really should have anticipated up front, I will make an update.

[Note: your post is intentionally poetic, so I'll let myself be intentionally poetic while answering this:]

Would you trust someone without a shadow?

The correct answer is, I think, "don't care". On Friday night you dance with a Gervais-sociopath. On Saturday you build a moon rocket together and use it to pick up groceries. Do you "trust" the rocket to be "good"? No, but you don't need to.

3whpearson5y
The trust you have to have is that the person you are building with won't take the partially built rocket and finish it for themselves to go off to a gambling den. That they too actually want to get groceries and aren't just saying that they do to gain your cooperation. You want to avoid cursing their sudden but inevitable betrayal. You do want to get the groceries, right?

Not to put too fine a point on it: through the tone and content of the post, I can still see the old attachments and subconscious messed-up strategies shining through.

I am, of course, not free of blame here because the same could be said about my comment.

However, I reach out over both of these and touch you, Val.

Sure, and that's probably what almost all users do. But the situation is still perverse: the broken incentives of the system are fighting against your private incentive to not waste effort.

This kind of conflict is especially bad if people have different levels of the internal incentive, but also bad even if they don't, because on the margin it pushes everyone to act slightly against their preferences. (I don't think this particular case is really so bad, but the more general phenomenon is, and that's what you get if you design systems with poor incentives.)

Ultimately the primary constraint on almost any feature on LessWrong is UI complexity, and so there is a very strong prior against any specific feature passing the very high bar to make it into the final UI

On the low end, you can fit the idea entirely inside of the existing UI, as a new fancy way of calculating voting weights under the hood (and allowing multiple clicks on the voting buttons).

Then, in a rough order of less to more shocking to users:

• showing the user some indication of how many points their one click is currently worth
• showing how many unused "v
...
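For illustration only, here is a minimal sketch of one way such under-the-hood weighting could work, assuming a quadratic-voting-style point budget where each extra click on the same item costs quadratically more. All names and numbers here are hypothetical assumptions, not a description of any actual LW mechanism:

```python
# Hypothetical sketch: multiple clicks on a vote button, priced against a
# per-user point budget so strong votes are possible but expensive.

class VoteBudget:
    def __init__(self, total_points: int = 100):
        self.remaining = total_points

    def cost(self, clicks: int) -> int:
        # Quadratic-voting-style pricing: n clicks on one item cost n^2 points.
        return clicks * clicks

    def cast(self, clicks: int) -> int:
        """Spend budget for `clicks` clicks on one item.

        Returns the resulting vote weight, or 0 if the budget can't cover it.
        """
        c = self.cost(clicks)
        if c > self.remaining:
            return 0
        self.remaining -= c
        return clicks


budget = VoteBudget(total_points=10)
print(budget.cast(1))   # weight 1, costs 1 point
print(budget.cast(3))   # weight 3, costs 9 points
print(budget.cast(1))   # budget exhausted, vote weight 0
print(budget.remaining)  # 0
```

Under this pricing, a user who wants to strongly support one comment visibly gives up the ability to vote on several others, which is one way to make the "unused vote points" indication above meaningful.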
I'm still not really sure what the root issues you're trying to resolve are. What are examples of cases where you're either worried about the current system specifically failing, or areas where we just don't have anything even trying to handle a particular use-case?

Sure, I can list some examples, but first note that while I agree that examples are useful, focusing on them too much is not a good way in general to think about designing systems.

A good design can preempt issues that you would never have predicted could happen; a bad design...

7Viliam5y
For some reason, I don't do that. The interesting comments I upvote, the few annoying ones I downvote, but half or more comments fall in the "meh" territory where I neither appreciate them, nor mind them, so I simply do not vote on those. Not voting is not exactly halfway between an upvote and a downvote. Upvote costs you a click, downvote costs you a click, but no-vote costs nothing.
9habryka5y
I like this idea. Ultimately the primary constraint on almost any feature on LessWrong is UI complexity, and so there is a very strong prior against any specific feature passing the very high bar to make it into the final UI, but this is a pretty good contender. I am particularly interested in more ideas for communicating this to the user in a way that makes intuitive sense, and that they can understand with existing systems and analogies they are already familiar with.

The other big constraint on UI design is hedonic gradients. While a system can often be economically optimal, because of hyperbolic discounting and naive reinforcement learning you often end up with really bad equilibria if one of the core interactions on your site is not fun to use. This in particular limits the degree to which you can force the user to spend a limited number of resources, since it both strongly increases mental overhead (instead of just asking themselves "yay or nay?" after reading a comment, they now need to reason about their limited budget and compare it to alternative options), and because people hate spending limited resources (which results in me having played through 4 Final Fantasy games and only using two health potions in over 200 hours of gameplay, because I really hate giving up limited resources, and I might really need them in the next fight, even though I never, ever will).

This is very well done :) Thanks for the Terence Tao link - it's amusing that he describes exactly the same meta-level observation which I expressed in this post.

2Gunnar_Zarncke5y
I think this goes beyond math and is really a general pattern about learning by systems 1 and 2 interacting. It's just more clearly visible with math because math is necessarily more precise. I once described it here (before knowing about the system 1 and 2 terminology): http://wiki.c2.com/?FuzzyAndSymbolicLearning
3alkjash5y
Yes, I think it's possible that an entire field like machine learning could still be in Stage 2, the technical results going so much farther, faster, and more sporadically that the systematic intuition-coalescing and metaphor-building has yet to catch up.
Classes of interpersonal problems often translate into classes of intrapersonal problems, and the tools to solve them are broadly similar.

This is true, but it seems you don't have any ideas about why it's true. I offer the following theory: if you are designing brains to deal with social situations, it is very adaptive to design them in a way that internally mirrors some of the structure that arises in social environments. This makes the computations performed by the brain more directly applicable to social life, in several interesting ways (e.g....

2alkjash5y
That's an appealing hypothesis! It does seem like part of the picture, but I would offer the alternative hypothesis that even absent social environments such a system might arise. It's natural to design and compartmentalize subprocesses for specific tasks, and to give them isolated virtual address spaces. Eventually, because each subprocess is engaging with a different region of thingspace it collects different information (e.g. about human nature) and that produces different beliefs and values when it inhabits you, so to speak. I will definitely give this question more thought.
We should expect that anyone should be able to get over 1000 karma if they hang around the site long enough.

I second this worry. Historically, karma on LW has been a very good indicator of hours of life burned on the site, and a somewhat worse indicator of other things.

5Raemon5y
We have an upcoming blogpost that goes into both the problems we're concerned about solving and our thoughts on how to resolve them, so I'll probably hold off till then to dive into this too much. (I'll probably respond once more with quick clarifications if need be, and if there's a lot more to discuss, will do so when I can dedicate a good chunk of time to it.)

But it seems like there are actually 2 (3?) different issues here, and I'm not sure which of them is more significant:

I. Should we become more like a network of personal blogs than a forum? This is certainly a marked change. The main reason we're considering the idea is that many people do seem to prefer discussing things in personal-blog-like spaces - sometimes because they own the space, other times because someone they trust owns the space and they have more of a sense that it's in the control of someone they trust (and people vary in what sort of people they trust and how they want discussions curated).

II. Is karma a reasonable tool to determine trust, for any features that we might want to limit to trusted users?

Both questions are important; I wanted to make sure I didn't respond to one if the crux of the disagreement was more about the other.

Re: the karma question (disclaimer: this is just some high-level examples, not concrete plans). If not karma, how would you determine who gets access to trusted permissions? The two main solutions I can see here are "some kind of systemized approach" and "fiat, careful decisions by the site admins." Both of them seem to have risks and issues. Systems can be gamed. Discretion of admins or existing trusted users can become insular. (The third option of "don't ever create tools that can only be given to 'trusted' people" is impractical since, at the very least, someone needs to deal with spam and trolls, and as the site grows you will need to grant that power to more people to deal with higher volume.)

It's perhaps worth noting that I see karma as "the syst

Excellent content; it would be even better as a shorter post.

As a 5-minute exercise, I'm coming up with some more examples:

• assume that you can make progress on AI alignment
• or at least, assume that there is some way in which you can contribute to saving the world
• run fast enough to win the race, even if it means you won't make it to the finish
• assume you will earn enough money to survive while doing things you care about
• assume the brain of the person who stopped breathing is still alive
• assume your epistemology is good enough to ignor
...

Obvious note: this sequence of posts is by itself a good example of what circumambulation looks like in practice.

Well, if ageing were slowed proportionally, and the world remained roughly unchanged from its present condition, I'd expect large utility gains (in total subjective QoL) from prioritizing longer lives, with diminishing returns setting in only in the late 100s or possibly 1000s. But I think both assumptions are extremely unlikely.

I think at this point it's fair to say that you have started repeating yourself, and your recent posts strongly evoke the "man with a hammer" syndrome. Yes, your basic insight describes a real aspect of some part of reality. It's cool, we got it. But it's not the only aspect, and (I think) also not the most interesting one. After three or four posts on the same topic, it might be worth looking for new material to process, and other insights to find.

-1Bound_up5y
I am indeed repeating myself. New descriptions and examples pointing at the same concept over and over. Is that a problem?