[ Question ]

What posts do you want written?

by Mark Xu · 1 min read · 19th Oct 2020 · 40 comments

47

Quests / Projects Someone Should Do · Site Meta · Community · Frontpage

I have many posts that I want written but do not have time to write and I suspect there are other people that feel similarly. This post on the Solomonoff prior was one example, until I got fed up and just wrote it.

Please write one post idea per answer so they can be voted on separately.


23 Answers

A review of Thinking Fast and Slow that focuses on whether or not various parts of the book replicated.

4 · aa.oswald · 1mo: Honestly, I would like to see this for pretty much any pop-science psychology book that trends in the rationality sphere.

A solid, minimal-assumption description of value handshakes. This SSC post contains the best description of which I'm aware, which I think is slightly sad:

Values handshakes are a proposed form of trade between superintelligences. Suppose that humans make an AI which wants to convert the universe into paperclips. And suppose that aliens in the Andromeda Galaxy make an AI which wants to convert the universe into thumbtacks.

When they meet in the middle, they might be tempted to fight for the fate of the galaxy. But this has many disadvantages. First, there’s the usual risk of losing and being wiped out completely. Second, there’s the usual deadweight loss of war, devoting resources to military buildup instead of paperclip production or whatever. Third, there’s the risk of a Pyrrhic victory that leaves you weakened and easy prey for some third party. Fourth, nobody knows what kind of scorched-earth strategy a losing superintelligence might be able to use to thwart its conqueror, but it could potentially be really bad – eg initiating vacuum collapse and destroying the universe. Also, since both parties would have superintelligent prediction abilities, they might both know who would win the war and how before actually fighting. This would make the fighting redundant and kind of stupid.

Although they would have the usual peace treaty options, like giving half the universe to each of them, superintelligences that trusted each other would have an additional, more attractive option. They could merge into a superintelligence that shared the values of both parent intelligences in proportion to their strength (or chance of military victory, or whatever). So if there’s a 60% chance our AI would win, and a 40% chance their AI would win, and both AIs know and agree on these odds, they might both rewrite their own programming with that of a previously-agreed-upon child superintelligence trying to convert the universe to paperclips and thumbtacks in a 60-40 mix.

This has a lot of advantages over the half-the-universe-each treaty proposal. For one thing, if some resources were better for making paperclips, and others for making thumbtacks, both AIs could use all their resources maximally efficiently without having to trade. And if they were ever threatened by a third party, they would be able to present a completely unified front.
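The arithmetic in the quoted passage can be made concrete. Here is a minimal toy calculation: the 60/40 odds come from the quote, while the war cost is an assumed number chosen only to illustrate the deadweight-loss point.

```python
# Toy expected-value comparison for the value handshake described above.
p_win = 0.6        # our AI's chance of winning an outright war (from the quote)
war_cost = 0.2     # assumed fraction of the universe destroyed as deadweight loss
universe = 1.0     # total resources at stake

# Fighting: each side's expected share of what survives the war
ev_war_us = p_win * (universe - war_cost)
ev_war_them = (1 - p_win) * (universe - war_cost)

# Merging: the child AI pursues both goals in proportion to the win odds,
# and nothing is destroyed by conflict
ev_merge_us = p_win * universe
ev_merge_them = (1 - p_win) * universe

# Both sides expect strictly more from the merge than from fighting
assert ev_merge_us > ev_war_us and ev_merge_them > ev_war_them
```

As long as war destroys any resources at all, both sides prefer the merge in expectation, which is the core of the argument.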

A review of The Design of Everyday Things, ideally with some discussion of how the ideas there intersect with rationality-adjacent topics.

A minimal-assumption description of Updateless Decision Theory. This wiki page describes the basic concept, but doesn't include motivation, examples or intuition.

A thorough description of how to do pair debugging, a CFAR exercise partially described here.

As a response to this request, I wrote something here.

9 · Neel Nanda · 1mo: I've written up my thoughts on doing (informal) pair debugging from the debugger perspective here [https://www.lesswrong.com/posts/a2X6z2eiwxviKEiKP/helping-people-to-solve-their-problems]

Against GDP as a metric for timelines and takeoff speeds: I think that world GDP growth increasing significantly from its current rate is something which could happen years before, OR YEARS AFTER, transformative AI. Or anything in between. I think it is a poor proxy for what we care about, and that people currently go astray in several ways when they rely on it too heavily. This goes for timelines, but also for takeoff speeds: whether GDP doubles within one year before it has doubled within four years is a bad proxy for fast vs. slow takeoff.

A response and critique of Ajeya Cotra's awesome timelines report.

Metformin as a rationalist win. For several years I have been taking 2 grams of Metformin a day for anti-aging reasons. There is a vast literature on Metformin, and as a mere economist I'm unqualified to summarize it. But my (skin-in-the-game) guess is that all adults over 40 (and perhaps simply all adults) should be taking Metformin, and I would love it if someone with a bio background wrote up a Metformin literature review understandable to those of us who understand statistics but not much about medicine. The reason why Metformin might be universally beneficial and yet not generally taken is that no one holds a patent on Metformin (it's cheap), in the US you need a prescription to get it, and the medical system doesn't consider aging to be a disease.

5 · Piotr Orszulak · 1mo: Hello James. I have not heard about anti-aging effects, but apart from the standard indications, I know it helps with losing weight and to an extent prevents obesity. In an oblique manner it may also be a way to de-age yourself, but... how do you know about the anti-aging effect, and what does it mean, really? It doesn't reverse time, obviously. I am sorry to doubt; it just seems to be an extraordinary claim. Best regards, Piotr, anaesthetist-intensivist.
4 · James_Miller · 1mo: Much of the harm of aging is the increased likelihood of getting many diseases, such as cancer, heart disease, Alzheimer's, and strokes, as you age. From my limited understanding, Metformin reduces the age-adjusted chance of getting many of these diseases, and thus it's reasonable, I believe, to say that Metformin has anti-aging effects.
4 · Piotr Orszulak · 1mo: Oh, OK, I get it: it slows down ageing. I hoped that you might know of some evidence that it reverses degeneration. In retrospect, I can see that you wrote anti- and not de-ageing, so the misunderstanding is entirely my fault. Thanks for your clarification 😊
4 · romeostevensit · 1mo: Berberine supposedly has many of the same effects, potentially fewer side effects, and is OTC.
3 · niplav · 1mo: Have you by any chance seen this [https://www.gwern.net/Longevity#metformin]? (It's not published yet, but I read it a year ago and thought it was quite good, as far as I can judge such things.)
3 · James_Miller · 1mo: Thanks!

An intuitive explanation of the Kelly criterion, with a bunch of worked examples. Zvi's post is good but lacks worked examples and justification for heuristics. Jacobian advises us to Kelly bet on everything, but I don't understand what a "Kelly bet" is in all but the simplest financial scenarios.
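For the simplest financial scenario, a binary bet, the Kelly fraction does have a closed form; a minimal sketch (the function name and the example numbers are mine, for illustration):

```python
# Kelly criterion for a simple binary bet:
# stake fraction f* = p - (1 - p) / b, where p is the win probability
# and b is the net odds (profit per unit staked on a win).
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake; 0 when the bet has no positive edge."""
    f = p - (1 - p) / b
    return max(f, 0.0)

# Worked example: a 60%-heads coin paid at even odds (b = 1)
stake = kelly_fraction(0.6, 1.0)  # ≈ 0.2, i.e. bet 20% of the bankroll
```

The hard part the answer asks for is exactly what this sketch omits: how to map messy real-world situations (career moves, continuous payoffs, correlated bets) onto a p and a b.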

Persuasion tools: What they are, how they might get really good prior to TAI, how that might change the world in important ways (e.g. it's an x-risk factor and possibly a point of no return) and what we can do about it now.

A review of the history of translations of Aesop's and other similar fables with the emphasis on what was added, subtracted or equivocated by the translators. Such as, did the original Fox tell himself the grapes were sour, or did he announce it to the world at large?

Ships as precedent for AI: Lots of the arguments against fast takeoff, against AGI, against discontinuous takeoff, against local takeoff and decisive strategic advantage, are somewhat mirrored by arguments that could have been made in the middle ages about ships. I think that history turned out to mostly support the fast/AGI/discontinuous/local/DSA side of those arguments.

I've read that Less Wrong attracts people with mental health concerns, so articles about applying mental-health-related information may be useful.

I want more people to write down their models for various things. For example, a model I have of the economy is that it's a bunch of boxes with inputs and outputs that form a sparse directed graph. The length of the shortest cycle controls things like economic growth and AI takeoff speeds.
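The graph model can be made concrete; a minimal sketch with an assumed toy economy (all node and edge names are invented for illustration):

```python
from collections import deque

# Assumed toy economy: each node is a production process, each edge an
# input -> output flow. The names here are made up for illustration.
edges = {
    "mining": ["steel"],
    "steel": ["machines"],
    "machines": ["mining", "chips"],
    "chips": ["machines"],
}

def shortest_cycle_length(graph):
    """BFS from each node; the shortest cycle is the cheapest path back to the start."""
    best = None
    for start in graph:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nxt in graph.get(node, []):
                if nxt == start:  # found an edge closing a cycle through start
                    cycle = dist[node] + 1
                    best = cycle if best is None else min(best, cycle)
                elif nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
    return best

shortest_cycle_length(edges)  # 2: machines -> chips -> machines
```

On this model, a shorter shortest cycle means a tighter feedback loop between outputs and the inputs that produce them, which is the proposed lever on growth rates.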

Another example is that people have working memory in both their brains and their bodies. When their brain-working-memory is full, information gets stored in their bodies. Techniques like Focusing are often useful to extract information stored in body-working-memory.

6 · Mary Chernyshenko · 1mo: (Body memory is great. When I worked in a shop and could not find an item by the end of the day, because my eyes refused to scan the whole depth of the shelves, I was told to close my eyes and "just take the thing". The arm remembers.)
3 · Piotr Orszulak · 1mo: I find your post confusing. Do you believe in body-mind dualism, or was it just a manner of speaking? Maybe you mean that "body memory" is an intuitive subconscious process in the brain?
3 · Mark Xu · 1mo: Yes, but I like thinking of it as "body memory" because it is easier to conceptualize.
1 · Piotr Orszulak · 1mo: OK, thanks for the clarification.
1 · FCCC · 1mo: Here's a model I made recently about when a goal is "good" [https://www.lesswrong.com/posts/xcFn7GGrypEFuDjmd/].

I promised a followup to my Soft Takeoff can Still Lead to DSA post. Well, maybe it's about time I delivered...

Hello. I would like to read about the fine line between the sunk cost fallacy and remuneration delay in a long-term investment, whether in a relationship or when changing workplaces, and ways to discern the difference. Thank you.

3 · Pattern · 1mo: What's remuneration delay?
3 · Piotr Orszulak · 1mo: Sorry, English is not my first language. What I mean by remuneration delay is the waiting period between, e.g., sowing and harvesting the crops. So in my original question I imply that I have difficulty discerning whether the crops will show up at all.

Explanation of how what we really care about when forecasting timelines is not the point when the last human is killed, nor the point where AGI is created, but the point where it's too late for us to prevent the future from going wrong. And, importantly, this point could come before AGI, or even before TAI. It certainly can come well before the world economy is growing at 10%+ per year. (I give some examples of how this might happen)

4 · Daniel Kokotajlo · 1mo: Here it is [https://www.lesswrong.com/posts/JPan54R525D68NoEt/the-date-of-ai-takeover-is-not-the-day-the-ai-takes-over]

Argument that AIs are reasonably likely to be irrational, tribal, polarized, etc. as much or more than humans are. More broadly an investigation of the reasons for and against that claim.

I learned Belief Reporting from Leverage Research at a two-hour workshop someone gave at the European Community Weekend. I think it would be great if someone would write a post on the technique.

I would like to see lukeprog's happiness sequences updated for 2020.

An overview of past attempts at wireheading humans/animals, what the effects were & how we could do better.

A formal statement of the problem of Pascal's mugging (or a discussion of several ways to formally state it), and a summary/review of different people's approaches to solving/dissolving it.

An overview of the common disagreements between landscape designers, interior designers etc. and their clients. (A friend of mine had to explain that she liked her windows shaded by the tree, it made the house cooler during summer.) As in, what people tend to miscommunicate, overrule, not order, repair etc.