# Shortform Content [Beta]

Write your thoughts here! What have you been thinking about?
Exploratory, draft-stage, rough, and rambly thoughts are all welcome on Shortform.
AlexMennen's Shortform

Theorem: Fuzzy beliefs (as in https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem#X6fFvAHkxCPmQYB6v) form a continuous DCPO. (At least I'm pretty sure this is true; I've only given proof sketches so far.)

The relevant definitions:

A fuzzy belief over a set is a concave function such that (where is the space of probability distributions on ). Fuzzy beliefs are partially ordered by ...

Vanessa Kosoy's Shortform

Game theory is widely considered the correct description of rational behavior in multi-agent scenarios. However, real-world agents have to learn, whereas game theory assumes perfect knowledge, which can only be achieved in the limit, at best. Bridging this gap requires using multi-agent learning theory to justify game theory, a problem that is mostly open (but some results exist). In particular, we would like to prove that learning agents converge to game-theoretic solutions such as Nash equilibria (putting superrationality aside: I think that superrational
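As a toy illustration of the gap between learning and equilibrium (a sketch of classic fictitious play, assuming numpy; this is not the post's setting), two players in matching pennies who each best-respond to the other's empirical frequencies drift toward the mixed Nash equilibrium (1/2, 1/2):

```python
import numpy as np

# Matching pennies: the row player wants to match, the column player to mismatch.
row_payoff = np.array([[1.0, -1.0],
                       [-1.0, 1.0]])

counts_row = np.ones(2)  # column's counts of row's past actions (1,1 = smoothing)
counts_col = np.ones(2)  # row's counts of column's past actions

for _ in range(50_000):
    # Each player best-responds to the opponent's empirical frequency so far.
    row_action = int(np.argmax(row_payoff @ (counts_col / counts_col.sum())))
    col_action = int(np.argmax(-row_payoff.T @ (counts_row / counts_row.sum())))
    counts_row[row_action] += 1
    counts_col[col_action] += 1

freq_row = counts_row / counts_row.sum()  # drifts toward (0.5, 0.5)
freq_col = counts_col / counts_col.sum()
```

Proving when and how fast such dynamics converge, for genuine learning agents rather than best-response oracles, is exactly the mostly-open territory the post describes.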

Vanessa Kosoy (7d): We can modify the population game setting to study superrationality. In order to do this, we can allow the agents to see a fixed-size finite portion of their opponents' histories. This should lead to superrationality for the same reasons I discussed [https://www.alignmentforum.org/posts/S3W4Xrmp6AL7nxRHd/formalising-decision-theory-is-hard#3yw2udyFfvnRC8Btr] before [https://agentfoundations.org/item?id=507]. More generally, we can probably allow each agent to submit a finite state automaton of limited size, s.t. the opponent history is processed by the automaton and the result becomes known to the agent. What is unclear about this is how to define an analogous setting based on source code introspection. While arguably seeing the entire history is equivalent to seeing the entire source code, seeing part of the history, or processing the history through a finite state automaton, might be equivalent to some limited access to source code, but I don't know how to define this limitation.
Gurkenglas (16h): What do you mean by equivalent? The entire history doesn't say what the opponent will do later or would do against other agents, and the source code may not allow you to prove what the agent does if it involves statements that are true but not provable.

For a fixed policy, the history is the only thing you need to know in order to simulate the agent on a given round. In this sense, seeing the history is equivalent to seeing the source code.

The claim is: In settings where the agent has unlimited memory and sees the entire history or source code, you can't get good guarantees (as in the folk theorem for repeated games). On the other hand, in settings where the agent sees part of the history, or is constrained to have finite memory (possibly of size ?), you can (maybe?) prove convergence to Pareto

BrienneYudkowsky's Shortform

Thread on The Abolition of Man by C. S. Lewis

BrienneYudkowsky (2d): Notes on Part One: Men Without Chests:
* What is the relationship between believing that some things merit liking while others merit hatred, and the power to act?
* Is there a way to preserve the benefits of a map/territory distinction mentality while gaining the benefits of map/territory conflation when it comes to taste/value/quality?
* What exactly *are* the benefits of map/territory conflation?
* Are terrible contortions necessary to believe in objective value wholeheartedly?
* What are we protecting when we dismiss objective value? What does it seem to threaten?
* "It is the doctrine of objective value, the belief that certain attitudes are really true, and others really false, to the kind of thing the universe is and the kind of things we are." What exactly is the word "to" doing in that sentence?
* Everybody knows that value is objective, and also that it isn't. What are we confused about, and why?
* What role does religion play in a community's relationship to value?
* If everyone who ever lived thought a certain combination of musical notes was ugly, but in fact everyone were wrong, how could you know?
* The Lesswrong comment guidelines say, "Aim to explain, not persuade." Is this a method by which we cut out our own chests?

The Lesswrong comment guidelines say, "Aim to explain, not persuade." Is this a method by which we cut out our own chests?

I'm curious how this question parses for Vaniver.

After this week's stereotypically sad experience with the DMV....

(spent 3 hours waiting in lines, filling out forms, finding out I didn't bring the right documentation, going to get the right documentation, taking a test, finding out somewhere earlier in the process a computer glitched and I needed to go back and start over, waiting more, finally getting to the end only to learn I was also missing another piece of identification which rendered the whole process moot)

...and having just looked over a lot of 2018 posts investigating coordination failure...

Raemon (3d): I recall them being terrible in NY, although it's been awhile. I was also in a uniquely horrible situation because I moved from NY, lost my driver's license, couldn't easily get a new one from NY (cuz I don't live there anymore), and couldn't easily get one from CA because I couldn't prove I had one to transfer. (The result is that I think I need to take the driving test again, but it'll get scheduled out another couple months from now, or something.) Which, I dunno, I'd be surprised if any bureaucracy handled that particularly well, honestly.

Fwiw, my experiences with DMVs in DC, Maryland, Virginia, New York, and Minnesota have all been about as terrible as my experiences in California.

Pattern (2d): Unless there was a bureaucracy that used witnesses.

So apparently Ötzi the Iceman still has a significant amount of brain tissue. Conceivably some memories are preserved?

In response to lifelonglearner's comment I did some experimenting with making the page a bit bolder. Curious what people think of this screenshot where "unread" posts are bold and "read" posts are "regular" (as opposed to the current world, where "unread" posts are "regular" and "read" posts are light gray).


Fwiw, for reasons I can't explain I vastly prefer just the title bolded to the entire line bolded, and significantly prefer the status quo to title bolded.

Evan Rysdam (1d): I think I prefer the status quo design, but not very strongly. Between the two designs pictured here, I at first preferred the one where the authors weren't bolded, but now I think I prefer the one where the whole line is bolded, since "[insert author whose posts I enjoy] has posted something" is as newsworthy as "there's a post called [title I find enticing]". Something I've noticed about myself is that I tend to underestimate how much I can get used to things, so I might end up just as happy with whichever design is chosen.
Raemon (1d): I initially wanted "bold everywhere" because it helped my brain reliably parse things as "this is a bold line" instead of "this is a line with some bold parts but you have to hunt for them". But after experimenting a bit, I started to feel that having bold elements semi-randomly distributed across the lines made it a lot busier.

Great things about Greaterwrong:

[On LW] if a comment is automatically minimized and buried in a long thread, then even with a link to it, it's hard to find the comment - at best the black line on the side briefly indicates which one it is. This doesn't seem to be a problem in greaterwrong.

Example: Buried comment, not buried.

Pattern (5d):
Norms: https://www.lesswrong.com/posts/rob7tX4bmrLM93G3C/lw-authors-how-many-clusters-of-norms-do-you-personally-want#ppwA8EzkCmhWvs2LK
Style: Clarity: https://www.lesswrong.com/posts/3pwikSmxeieybyJSi/hazard-s-shortform-feed#hRdsM7keFuWN8nqXC
The problem: https://www.lesswrong.com/posts/i2XikYzeL39HoSSTr/matt-goldenberg-s-short-form-feed#Quazimcq7rzdgco7K

Over in this thread, Said asked the reasonable question "who exactly is the target audience with this Best of 2018 book?"

By compiling the list, we are saying: “here is the best work done on Less Wrong in [time period]”. But to whom are we saying this? To ourselves, so to speak? Is this for internal consumption—as a guideline for future work, collectively decided on, and meant to be considered as a standard or bar to meet, by us, and anyone who joins us in the future?

Or, is this meant for external consumption—a way of saying to others, “see what we ha

toonalfrink (1d): I'm looking forward to a bookshelf with LW review books in my living room. If nothing else, at the very least this will give us legitimacy, and legitimacy can lead to many good things.

+1 excitement about bookshelves :)

Said Achmiz (2d): Thank you, this is a useful answer.
BrienneYudkowsky's Shortform

Some advice to my past self about autism:

Learn about what life is like for people with a level 2 or 3 autism diagnosis. Use that reference class to predict the nature of your problems and the strategies that are likely to help. Only after making those predictions, adjust for your own capabilities and circumstances. Try this regardless of how you feel about calling yourself autistic or seeking a diagnosis. Just see what happens.

Many stereotypically autistic behaviors are less like symptoms of an illness, and more like excellent strategies for getting shit d...

AABoyles's Shortform

Attention Conservation Warning: I envision a model which would demonstrate something obvious, and decide the world probably wouldn't benefit from its existence.

The standard publication bias is that we must be 95% certain a described phenomenon exists before a result is publishable (at which time it becomes sufficiently "confirmed" to treat the phenomenon as a factual claim). But the statistical confidence of a phenomenon conveys interesting and useful information regardless of what that confidence is.

Consider the space of all possible relationships: most o

Vanessa Kosoy's Shortform

I recently realized that the formalism of incomplete models provides a rather natural solution to all decision theory problems involving "Omega" (something that predicts the agent's decisions). An incomplete hypothesis may be thought of as a zero-sum game between the agent and an imaginary opponent (we will call the opponent "Murphy" as in Murphy's law). If we assume that the agent cannot randomize against Omega, we need to use the deterministic version of the formalism. That is, an agent that learns an incomplete hypothesis converges to the corresponding max
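The zero-sum reading can be made concrete with a toy maximin computation (my own illustrative numbers, assuming numpy; not the actual formalism): the agent picks the action whose worst case over Murphy's choices is best.

```python
import numpy as np

# Rows: the agent's actions. Columns: Murphy's choices (Murphy minimizes).
payoffs = np.array([[3.0, 0.0],   # action A: great unless Murphy interferes
                    [2.0, 1.0]])  # action B: decent either way

worst_case = payoffs.min(axis=1)             # Murphy's best response per row
maximin_action = int(np.argmax(worst_case))  # agent's maximin choice: action B
maximin_value = float(worst_case.max())      # guaranteed payoff: 1.0
```

Here the maximin agent forgoes action A's upside because Murphy controls the part of the hypothesis it cannot pin down.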

Chris_Leong (3d): "The key point is, 'applying the counterfactual belief that the predictor is always right' is not really well-defined" - What do you mean here? I'm curious whether you're referring to the same as, or similar to, the issue I was referencing in Counterfactuals for Perfect Predictors [https://www.lesswrong.com/posts/AKkFh3zKGzcYBiPo7/counterfactuals-for-perfect-predictors]. The TLDR is that I was worried that it would be inconsistent for an agent that never pays in Parfit's Hitchhiker to end up in town if the predictor is perfect, so that it wouldn't actually be well-defined what the predictor was predicting. And the way I ended up resolving this was by imagining it as an agent that takes input and asking what it would output if given that inconsistent input. But not sure if you were referencing this kind of concern or something else.
Vanessa Kosoy (3d): It is not a mere "concern", it's the crux of the problem really. What people in the AI alignment community have been trying to do is start with some factual and "objective" description of the universe (such as a program or a mathematical formula) and derive counterfactuals. The way it's supposed to work is: the agent needs to locate all copies of itself, or things "logically correlated" with itself (whatever that means), in the program, and imagine it is controlling this part. But a rigorous definition of this that solves all standard decision-theoretic scenarios was never found. Instead of doing that, I suggest a solution of a different nature. In quasi-Bayesian RL, the agent never arrives at a factual and objective description of the universe. Instead, it arrives at a subjective description which already includes counterfactuals. I then proceed to show that, in Newcomb-like scenarios, such agents receive optimal expected utility (i.e., the same expected utility promised by UDT).

Yeah, I agree that the objective descriptions can leave out vital information, such as how the information you know was acquired, which seems important for determining the counterfactuals.

Chris_Leong's Shortform

EDT agents handle Newcomb's problem as follows: they observe that agents who encounter the problem and one-box do better on average than those who encounter the problem and two-box, so they one-box.

That's the high-level description, but let's break it down further. Unlike CDT, EDT doesn't worry about the fact that there may be a correlation between your decision and hidden state. It assumes that if the visible state before you made your decision is the same, then the counterfactuals generated by considering your possible decisions are c...
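The "one-boxers do better on average" observation can be written down as a toy payoff function (a hypothetical sketch using the standard $1M/$1k illustrative amounts, with the predictor assumed perfect):

```python
def newcomb_payoff(one_box: bool, predictor_right: bool = True) -> int:
    """Toy Newcomb payoff: the opaque box holds $1M iff the predictor
    expected one-boxing; the transparent box always holds $1k."""
    predicted_one_box = one_box if predictor_right else not one_box
    opaque = 1_000_000 if predicted_one_box else 0
    transparent = 1_000
    return opaque if one_box else opaque + transparent

# With a perfect predictor: one-boxers average $1M, two-boxers $1k.
one_box_avg = newcomb_payoff(True)
two_box_avg = newcomb_payoff(False)
```

Conditioning on the decision, one-boxers walk away with $1M and two-boxers with $1k, which is exactly the statistical pattern EDT reacts to.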

BrienneYudkowsky's Shortform

Here’s what Wikipedia has to say about monographs.

“A monograph is a specialist work of writing… or exhibition on a single subject or an aspect of a subject, often by a single author or artist, and usually on a scholarly subject… Unlike a textbook, which surveys the state of knowledge in a field, the main purpose of a monograph is to present primary research and original scholarship ascertaining reliable credibility to the required recipient. This research is presented at length, distinguishing a monograph from an article.”

I was honestly a bit surprised how well you managed to pull the exact moment from my childhood where I learned the word 'monograph'. I read every page of a beautiful red book that contained all of the Sherlock Holmes stories, and I distinctly recall the line about having written a monograph on the subject of cigar ash, and being able to discern the different types.

habryka (3d): I really like this concept. It currently feels to me like a mixture between a fact post [https://www.lesswrong.com/posts/Sdx6A6yLByRRs8iLY/fact-posts-how-and-why] and an essay [http://www.paulgraham.com/essay.html].
Hazard's Shortform Feed

From Gwern's about page:

I personally believe that one should think Less Wrong and act Long Now, if you follow me.

Possibly my favorite catch-phrase ever :) What do I think is hiding there?

• Think Less Wrong
  • Self-anthropology: "Why do you believe what you believe?"
  • Hugging the Query and not sinking into confused questions
  • Litany of Tarski
  • Notice your confusion: "Either the story is false or your model is wrong"
• Act Long Now
  • Cultivate habits and practice routines that seem small/trivial on a day/week/month timeline, but will result in you
Hazard (4d): What am I currently doing to Act Long Now? (Dec 4th 2019)
* Switching to Roam [http://roamresearch.com/]: Though it's still in development and there are a lot of technical hurdles to this being a long now move (they don't have good import/export, it's all cloud hosted and I can't have my own backups), putting ideas into my Roam network feels like long now organization for maximized creative/intellectual output over the years.
* Trying to milk a lot of exploration out of the next year before I start work, hopefully giving myself springboards to more things at points in the future where I might not have had the energy to get started / make the initial push.
* Being kind.
* Arguing Politics* With My Best Friends [https://www.lesswrong.com/posts/n4ukoQzkgbAqpzqb5/argue-politics-with-your-best-friends]

What am I currently doing to think Less Wrong?
* Writing more has helped me hone my thinking.
* Lots of progress on understanding emotional learning [https://www.lesswrong.com/s/BP8vfvg5RhXsBERX9] (or more practically, how to do emotional unlearning), allowing me to get to a more even-keeled center from which to think and act.
* Getting better at ignoring the bottom line [https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line] to genuinely consider what the world would be like for alternative hypotheses.
mr-hire (3d): This is a great list! I'd be curious about things you are currently doing to act short now and think more wrong as well. I often find I get a lot out of such lists.

Act Short Now

• Sleeping in
• Flirting more

Think More Wrong

• I no longer buy that there's a structural difference between math/the formal/a priori and science/the empirical/a posteriori.
• Probability theory feels sorta lame.
Matt Goldenberg's Short Form Feed

As part of the Athena Rationality Project, we've recently launched two new prototype apps that may be of interest to LWers.

Virtual Akrasia Coach

The first is a Virtual Akrasia Coach, which comes out of a few months of studying various interventions for akrasia, then testing the resulting ~25 habits/skills through internet-based lessons to refine them. We then took the resulting flowchart for dealing with akrasia, and created a "Virtual Coach" that can walk you through a work session, ensuring your work is focused, productive and enjoyab...

hamnox's Shortform

I could discuss everything within a few very concrete examples. A concrete example tends to create a working understanding in a way mathematical abstraction fails to. I want to give my readers real knowledge, so I do often insist on describing concepts in the world without numbers or equations or proofs.

However, math exists for a reason.

Some patterns generalize so strongly that you simply cannot communicate the breadth of their applications in concrete examples. You have to describe the shape of such a pattern by constraint. To do otherwise would render it a handful of independent parlor tricks instead of one sharp and heavy blade.

cousin_it's Shortform

A fun problem I'm trying to figure out: how to make the PADsynth algorithm faster.

The idea of the algorithm is to simulate an infinite choir of voices. Imagine a sound whose fundamental isn't exactly a sine wave with frequency f, but a spread of frequencies in a narrow Gaussian around f. The second harmonic is a twice wider Gaussian around 2f, and so on. The amplitudes of harmonics can fall off as 1/n, 1/n^2, or something else. It sounds very pleasant, like a smooth choir.

The obvious way to synthesize such a sound is by IFFT. But I don't like it, because I

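For concreteness, here is a minimal numpy sketch of that IFFT approach (the parameter names and the cents-based bandwidth formula are my assumptions, loosely following the published PADsynth description, not cousin_it's code):

```python
import numpy as np

def padsynth(sample_rate=44100, n=2**18, f=261.63, bandwidth_cents=40.0,
             n_harmonics=32, falloff=1.0, seed=0):
    """Sketch of PADsynth: Gaussian-spread harmonics, random phases, one IFFT."""
    rng = np.random.default_rng(seed)
    bins_hz = np.arange(n // 2 + 1) * sample_rate / n
    spectrum = np.zeros(n // 2 + 1)
    for h in range(1, n_harmonics + 1):
        fh = h * f
        # Bandwidth in Hz grows with the harmonic, so the h-th Gaussian is h times wider.
        bw = (2.0 ** (bandwidth_cents / 1200.0) - 1.0) * fh
        spectrum += (1.0 / h ** falloff) * np.exp(-((bins_hz - fh) ** 2) / (2.0 * bw ** 2))
    # Random phases give the smeared, infinite-choir character; IFFT back to time domain.
    phases = rng.uniform(0.0, 2.0 * np.pi, n // 2 + 1)
    wave = np.fft.irfft(spectrum * np.exp(1j * phases), n)
    return wave / np.max(np.abs(wave))  # normalize to [-1, 1]
```

The result is a loopable wavetable of n samples, and the O(n log n) IFFT (plus the O(n · n_harmonics) spectrum build) is presumably the cost being optimized.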

Could convolution work?

EDIT: confused why I am downvoted. Don't we want to encourage giving obvious (and obviously wrong) solutions to short form posts?

eigen's Shortform

Has anyone re-read the Sequences? Did you find value in doing so?

Further, I do think the comments on each of the essays are worthy of reading, something I did not do the first time. I can pinpoint a few comments from people in this community on the essays which were very insightful! I wonder if I lost something by not participating in it or by not having read all the comments when I was reading the sequences.


It would be nice to have a "comment synthesis" that is written sufficiently long after the debate ended (not sooner than one month after publishing the original article?).

By the way, if you do this for many articles in the Sequences, perhaps you could also afterwards join those reactions into one big "community reaction to the Sequences", as a new article where people could read it all in one place.

Viliam (3d): Reading the entire Sequences with all comments seems like an enormous waste of time; that's a ton of text. Your time would be better spent reading a few other books, I think. That's just my opinion, though; see other comments.
Zack_M_Davis (6d): I'd love to see exercises for "Lonely Dissent" [https://www.lesswrong.com/s/M3TJ2fTCzoQq66NBJ/p/CEGnJBHmkcwPTysb7].
Hazard's Shortform Feed

Sketch of a post I'm writing:

"Keep your identity small" by Paul Graham $$\cong$$ "People get stupid/unreasonable about an issue when it becomes part of their identity. Don't put things into your identity."

"Do Something vs Be Someone" John Boyd distinction.

I'm going to think about this in terms of "What is one's main strategy to meet XYZ needs?" I claim that "This person got unreasonable because their identity was under attack" is more a situation of "This person is panicking at the p...

Yesterday I read the first 5 articles on Google for "why arguments are useless". It seems pretty in the zeitgeist that "when people have their identity challenged you can't argue with them." A few of them stopped there and basically declared communication to be impossible if identity is involved; a few of them circuitously hinted at learning to listen and find common ground. A reason I want to get this post out is to add to the "here's why identity doesn't have to be a stop sign" camp.

ozziegooen's Shortform

One idea I'm excited about is that predictions can be made of prediction accuracy. This seems pretty useful to me.

## Example

Say there's a forecaster Sophia who's making a bunch of predictions for pay. She uses her predictions to make a meta-prediction of her total prediction-score on a log-loss scoring function (on all predictions except her meta-predictions). She says that she's 90% sure that her total loss score will be between -5 and -12.

The problem is that you probably don't think you can trust Sophia unless she has a lot of experience
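As a concrete sketch of the setup (assuming numpy; the forecasts and outcomes are made up for illustration):

```python
import numpy as np

def total_log_score(probs, outcomes):
    """Sum of log scores: log(p) if the event happened, log(1 - p) if not."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=bool)
    return float(np.sum(np.where(outcomes, np.log(probs), np.log(1.0 - probs))))

# Sophia's object-level forecasts and the eventual outcomes (hypothetical)
probs    = [0.9, 0.7, 0.8, 0.6, 0.95]
outcomes = [1,   1,   0,   1,   1]
score = total_log_score(probs, outcomes)

# Her meta-prediction: 90% sure the total score lands in [-12, -5]
inside_interval = -12.0 <= score <= -5.0
```

The meta-prediction is then itself a forecast that can be checked (and scored) once the object-level predictions resolve.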

"I'd be willing to bet $1,000 with anyone that the eventual total error of my forecasts will be less than the 65th percentile of my specified predicted error."

I think this is equivalent to applying a non-linear transformation to your proper scoring rule. When things settle, you get paid S(p) both based on the outcome of your object-level prediction p, and your meta prediction q(S(p)).

Hence:

S(p)+B(q(S(p)))

where B is the "betting scoring function".

This means getting the scoring rules to work while preserving properness will be trick...
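One relevant baseline: before the betting transformation B is applied, the plain log score is proper, i.e. your expected score is maximized by reporting the true probability. A quick numeric sketch (assuming numpy):

```python
import numpy as np

q = 0.7                                   # true probability of the event
grid = np.linspace(0.01, 0.99, 99)        # candidate reported probabilities
# Expected log score of reporting p when the event occurs with probability q
expected = q * np.log(grid) + (1 - q) * np.log(1 - grid)
best_report = grid[np.argmax(expected)]   # maximized at p = q
```

The question is whether the combined payout S(p) + B(q(S(p))) retains this incentive to report honestly.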