Shortform Content [Beta]

MikkW's Shortform

When a republic is considering reforming the way it elects holders of power, there are certain desirable criteria the chosen method should satisfy: officeholders should be agreeable to all citizens, not just to a fraction of them, since maximizing agreeableness will select for competence and the desire to do right by the people; in bodies with many members (e.g. a legislature, but not a singular executive such as a president or prime minister), the various viewpoints of the voters should be proportionately represented by the various members of the body; and ... (read more)

One potential criticism of this method is the appeal to precedent: while party lists (modulo the similarity scores, which seem like a straightforward and uncontroversial improvement) have been used in this way with much success in the Nordic countries, approval voting is (somewhat surprisingly IMO) not well established. As far as governments go, I only know of St. Louis and Fargo, ND using approval voting; that is, two municipalities. One could reasonably object that we don't have much empirical data on how approval voting works in the real world.

One response is ... (read more)

frontier64's Shortform

The future may have a use for frozen people from the current era: historical humans may be useful as an accurate basis for interpreting the legal documents of our time.

Original public meaning is a fairly modern mode of legal interpretation of the US Constitution. Its basis is that the language of the Constitution should be interpreted according to what the text originally meant when it was drafted and amended into the Constitution. A similar mode of interpretation is used, less commonly, for statutes. It's likely that this mode of interp... (read more)

Ramiro P.'s Shortform

LW is quoted (in a kind of flattering way) in Simon Dedeo's awesome piece in the latest Nautil.us issue. For spoiler lovers:

[...] Rationality is my ticket out. The only reason I can trust you is that you seem rational enough to talk to. But now you’re telling me that rationality is just a layer on top of the System—it’s just as irrational as the people I’m trying to escape. I don’t know which is worse: being duped by someone else’s priors, or being a biological machine.

Teacher: Don’t go too far. You’re a smart kid—you can iterate faster than most. You can ma

... (read more)
MichaelA's Shortform

Problems in AI risk that economists could potentially contribute to

List(s) of relevant problems

... (read more)

Recently I was also trying to figure out what resources to send to an economist, and couldn't find a list that existed either! The list I came up with is subsumed by yours, except:
- Questions within Some AI Governance Research Ideas
- "Further Research" section within an OpenPhil 2021 report: https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth
- The AI Objectives Institute just launched, and they may have questions in the future 

MikkW's Shortform

A commonly given reason why Nordic countries tend to rank highly as desirable places to live is that the people there are supported by a robust welfare system. In America, I've often heard it said that similar systems shouldn't be implemented because they are government programs, and (the argument goes) government shouldn't be trusted.

This suggests that government itself is a potentially important point of comparison between the Nordic countries and the US. Are there features that differ between the American and Nordic governments (keep in mind that there... (read more)

TurnTrout's shortform feed

Does anyone have tips on how to buy rapid tests in the US? Not seeing any on US Amazon, not seeing any in person back where I'm from. Considering buying German tests. Even after huge shipping costs, it'll come out to ~$12 a test, which is sadly competitive with US market prices.

Wasn't able to easily find tests on the Mexican and Canadian Amazon websites, and other EU countries don't seem to have them either. 

I've been able to buy from the CVS website several times in the past couple months, and even though they're sold out online now, they have some (sparse) in-store availability listed.  Worth checking there, Walgreens, etc. periodically.

MikkW's Shortform

For a long time, I found the words "clockwise" and "counterclockwise" confusing, because they are so similar to each other, and "counterclockwise" is a relatively long word at 4 syllables, much longer than similarly common words.

At some point in time, I took to calling them "dexter" and "winstar", from the Latin »dexter« and Middle English »winstre«, meaning "right" and "left", respectively. I like these words more than the usual "clockwise", but of course, new words aren't worth much if others don't know them, so this is a PSA that these are words that I ... (read more)

Taleuntum: Is it that intuitive to you that you should name the rotating object's direction using the movement of the top of the object? I think I would get confused with your words after a while. I just use "positive" and "negative" direction.
2Measure2dIs "positive" equivalent to clockwise (clocks) or counterclockwise (cartesian coordinates)?

Counterclockwise; I've never heard anyone use it for clockwise.
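
For concreteness, a standard worked check (my addition, not from the thread): the usual 2D rotation matrix for a positive angle sends (1, 0) to (cos θ, sin θ), which for small positive θ lies slightly above the x-axis, i.e. the rotation is counterclockwise in Cartesian coordinates.

```latex
% Positive-angle rotation in the standard math/Cartesian convention.
\[
R(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{pmatrix},
\qquad
R(\theta)\begin{pmatrix}1\\0\end{pmatrix}
= \begin{pmatrix}\cos\theta\\ \sin\theta\end{pmatrix}.
\]
```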

adamzerner's Shortform

I suspect that the term "cognitive" is often over/misused.

Let me explain what my understanding of the term is. I think of it as "a disagreement with behaviorism". If you think about how psychology progressed as a field, first there was Freudian stuff that wasn't very scientific. Then behaviorism emerged as a response to that, saying "Hey, you have to actually measure stuff and do things scientifically!" But behaviorists didn't think you could measure what goes on inside someone's head. All you could do is measure what the stimulus is and then how the human... (read more)

adamzerner: Hm, can you think of any examples of cognitive biases that aren't about beliefs? You mention that the term "cognitive" also has to do with perception. When I hear "perception" I think sight, sound, etc. But biases in things like sight and sound feel to me like they would be called illusions, not biases.
JBlack: The first one to come to mind was Recency Bias, but maybe I'm just paying that one more attention because it came up recently. Having noticed that bias in myself, I consulted an external source (https://en.wikipedia.org/wiki/List_of_cognitive_biases) and checked that rather a lot of them are about preferences, perceptions, reactions, attitudes, attention, and lots of other things that aren't beliefs. They do often misinform beliefs, but many of the biases themselves seem to be prior to belief formation or evaluation.

Ah, those examples have made the distinction between biases that misinform beliefs and biases of beliefs clear. Thanks!

As someone who seems to understand the term better than I do, I'm curious whether you share my impression that the term "cognitive" is often misused. As you say, it refers to a pretty broad set of things, and I feel like people use the term "cognitive" when they're actually trying to point to a much narrower set of things.

Beth Barnes's Shortform

When can models report their activations?

Related to call for research on evaluating alignment

Here's an experiment I'd love to see someone run (credit to Jeff Wu for the idea, and William Saunders for feedback):

Finetune a language model to report the activation of a particular neuron in text form.

E.g., you feed the model a random sentence that ends in a full stop. Then the model should output a number from 1-10 that reflects a particular neuron's activation.

We assume the model will not be able to report the activation of a neuron in the final layer, even i... (read more)
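
A minimal sketch of how the data-generation step for such an experiment might look, assuming GPT-2 via the HuggingFace transformers library as a stand-in model; the layer index, neuron index, and the 1-10 bucketing below are illustrative choices, not details from the post:

```python
# Hedged sketch (illustrative, not the original experiment's code):
# record one neuron's activation on a sentence and turn it into a
# finetuning target of the form "<sentence> Neuron activation: <1-10>".
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

LAYER, NEURON = 6, 373  # arbitrary choices for illustration

captured = {}
def hook(module, inputs, output):
    # output has shape (batch, seq_len, hidden); take the chosen neuron at the last token
    captured["act"] = output[0, -1, NEURON].item()

handle = model.transformer.h[LAYER].mlp.register_forward_hook(hook)

def make_example(sentence: str) -> str:
    """Build one finetuning example: the sentence plus its bucketed activation."""
    with torch.no_grad():
        model(**tokenizer(sentence, return_tensors="pt"))
    score = 1 / (1 + math.exp(-captured["act"]))  # squash to (0, 1); scaling is a free choice
    bucket = min(10, max(1, round(score * 10)))   # map to a 1-10 label
    return f"{sentence} Neuron activation: {bucket}"

print(make_example("The cat sat on the mat."))
handle.remove()
```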

TurnTrout: Surely there exist correct fixed points, though? (Although probably not that useful, even if feasible.)

You mean a fixed point of the model changing its activations as well as what it reports? I was thinking we could rule out the model changing the activations themselves by keeping a fixed base model.

Buck's Shortform

I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.

I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.

But I don’t think that this prediction is true: I... (read more)

For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.

These both seem pretty common, so I'm curious about the correlation that you've observed. Is it mainly based on people you know personally? In that case I expect the correlation not to hold amongst the wider population.

Also, a big effect which probably doesn't show up much amongst the people you know: younger people seem more altruistic (or at least signal more altruism) and also seem to have worse epistemics than older people.

Viliam: Ah. I meant, I would like to see a group that has the sanity level of a typical rationalist, and the productivity level of these super-agenty irrationalists. (Instead of having to choose between "sane with lots of akrasia" and "awesome but insane".)
Pattern: Hm. Maybe there's something to be gained from navigating 'trade-offs' differently? I thought perpetual motion machines were impossible (because thermodynamics), aside from 'launch something into space, pointed away from stuff it would crash into', though I'd read that 'trying to do so is a good way to learn about physics'. But I didn't really try because I thought it'd be pointless. And then this happened: https://en.wikipedia.org/wiki/Time_crystal

MikkW's Shortform

There is probably overlap between the matter of aligning AI and the matter of aligning governments.

MikkW's Shortform

I dreamt up the following single-winner voting system in the car while driving to Eugene, Oregon on vacation. I make no representation that it is any good, nor that it's better than anything currently known or in use, nor that it's worth your time to read this.

Commentary and rationale will be explained at the bottom of this post.

The system is a 2-round system. The first round uses an approval ballot, and the second round asks voters to choose between two candidates.

([•] indicates a constant that can be changed during implementation)

Round One:

  1. Tally all a
... (read more)

Alternative: Something like Condorcet voting, where voters receive a random subset of pairs to compare. For a simple analysis, the number of pairs could be 1. (Or instead of pairs, a voter could be asked to choose 'the best'.)

[This comment is no longer endorsed by its author]
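
A rough sketch of what that alternative could look like in code, assuming each voter scores candidates on some utility scale and is shown exactly one random pair; the aggregation here (counting pairwise wins and taking the maximum) is my simplification rather than a full Condorcet tally:

```python
# Hedged sketch of the random-pair comparison idea; all names are illustrative.
import random
from collections import defaultdict
from itertools import combinations

def random_pairwise_election(candidates, voter_preferences, rng=random):
    """voter_preferences: list of dicts mapping candidate -> utility score."""
    wins = defaultdict(int)
    pairs = list(combinations(candidates, 2))
    for prefs in voter_preferences:
        a, b = rng.choice(pairs)            # each voter compares one random pair
        winner = a if prefs[a] >= prefs[b] else b
        wins[winner] += 1                   # crude aggregation: total pairwise wins
    return max(candidates, key=lambda c: wins[c])

# Toy usage with random voter utilities.
cands = ["A", "B", "C"]
voters = [{"A": random.random(), "B": random.random(), "C": random.random()}
          for _ in range(1000)]
print(random_pairwise_election(cands, voters))
```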
MikkW's Shortform

This Generative Ink post talks about curating GPT-3, creating much better output than it normally would give, turning it from quite often terrible to usually profound and good. I'm testing out doing the same with this post, choosing one of many branches every few dozen words.

For a 4x reduction in speed, I'm getting very nice returns on coherence and brevity. I can actually pretend like I'm not a terrible writer! Selection is a powerful force, but more importantly, continuing a thought in multiple ways forces you to actually make sure you're saying thin... (read more)

It occurs to me that this is basically Babble & Prune adapted to be a writing method. I like Babble & Prune.

MikkW: This post was written in 5 blocks, and I wrote 4 (= 2^2) branches for each block, for 5 * 2 = 10 bits of curation, or 14.5 words per bit of curation. As it happens, I always used the final branch for each block, so it was more the effects of revision and consolidation than selection effects that contributed to the end result of this exercise.
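
As a quick check of the arithmetic above (my reconstruction from the stated figures): 4 branches per block is log2(4) = 2 bits of selection per block, and 14.5 words per bit over 10 bits implies roughly 145 words total.

```python
# Minimal sketch of the "bits of curation" arithmetic described above.
import math

blocks = 5
branches_per_block = 4
bits = blocks * math.log2(branches_per_block)   # 5 * 2 = 10 bits of curation
words_per_bit = 14.5                            # figure quoted in the comment
print(bits, bits * words_per_bit)               # 10.0 bits, ~145 words total (inferred)
```
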
Vanessa Kosoy's Shortform

I propose a new formal desideratum for alignment: the Hippocratic principle. Informally the principle says: an AI shouldn't make things worse compared to letting the user handle them on their own, in expectation w.r.t. the user's beliefs. This is similar to the dangerousness bound I talked about before, and is also related to corrigibility. This principle can be motivated as follows. Suppose your options are (i) run a Hippocratic AI you already have and (ii) continue thinking about other AI designs. Then, by the principle itself, (i) is at least as good as... (read more)

Charlie Steiner: Agree with the first section, though I would like to register my sentiment that although "good at selecting but missing logical facts" is a better model, it's still not one I'd want an AI to use when inferring my values. I think my point is that if "turn off the stars" is not a primitive action, but is a set of states of the world that the AI would overwhelmingly like to go to, then the actual primitive actions will get evaluated based on how well they end up leading to that goal state. And since the AI is better at evaluating than us, we're probably going there. Another way of looking at this claim is that I'm telling a story about why the safety bound on quantilizers gets worse when quantilization is iterated. Iterated quantilization has much worse bounds than quantilizing over the iterated game, which makes sense if we think of games where the AI evaluates many actions better than the human.
Vanessa Kosoy: I think you misunderstood how the iterated quantilization works. It does not work by the AI setting a long-term goal and then charting a path towards that goal s.t. it doesn't deviate too much from the baseline over every short interval. Instead, every short-term quantilization optimizes for the user's evaluation at the end of that short-term interval.

Ah. I indeed misunderstood, thanks :) I'd read "short-term quantilization" as quantilizing over short-term policies evaluated according to their expected utility. My story doesn't make sense if the AI is only trying to push up the reported value estimates (though that puts a lot of weight on these estimates).
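
For readers unfamiliar with the background concept, here is a minimal sketch of generic one-step quantilization (the textbook version, not Vanessa Kosoy's construction): sample actions from a base distribution, then choose uniformly among the top q fraction as ranked by the proxy utility.

```python
# Hedged sketch of a generic one-step quantilizer; all names are illustrative.
import random

def quantilize(base_sampler, proxy_utility, q=0.1, n_samples=1000, rng=random):
    """Sample n_samples actions from the base policy and return a uniform
    draw from the top q fraction as ranked by the proxy utility."""
    samples = [base_sampler() for _ in range(n_samples)]
    samples.sort(key=proxy_utility, reverse=True)
    top = samples[:max(1, int(q * n_samples))]
    return rng.choice(top)

# Toy usage: the base policy proposes numbers near 0; the proxy prefers values near 3.
action = quantilize(lambda: random.gauss(0, 1), lambda x: -abs(x - 3))
print(action)
```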

Rafael Harth's Shortform

Keeping stock of and communicating what you haven't understood is an underrated skill/habit. It's very annoying to talk to someone and think they've understood something, only to realize much later that they haven't. It also makes conversations much less productive.

It's probably more of a habit than a skill. There certainly are some contexts where the right thing to do is pretend that you've understood everything even though you haven't. But on net, people do it way too much, and I'm not sure to what extent they're fooling themselves.

MikkW's Shortform

A personal anecdote which illustrates the difference between living in a place that uses choose-one voting (i.e. FPTP) to elect its representatives, and one that uses a form of proportional representation:

I was born as a citizen of both the United States and the Kingdom of Denmark, with one parent born in the US, and one born in Denmark. Since I was born in the States with Danish blood, my Danish citizenship was provisional until age 22, with a particular process being required to maintain my citizenship after that age to demonstrate sufficient connection ... (read more)

MikkW: Don't both the resources needed to run a government and the resources a government can receive in taxes grow linearly with the size of a country? Or do you have different size dynamics in mind?
Pattern: I was thinking that 'size dynamics' seem like 'a more obvious reason for delay' than 'diverse ethnic makeup'. Not 'this dynamic makes a lot of sense' but 'this other dynamic would make more sense'.

Gotcha. My main explanation is just that the American political framework is old, having been around since the start of the modern democracy movement, when voting theory wasn't yet something people thought about; that, plus the fact that the particular historical reasons that led many countries to adopt proportional representation didn't play out to the same degree in the US.

supposedlyfun's Shortform

Prediction: In a month, if we look at vaccine doses administered per day in the U.S., the FDA's approval of Comirnaty will not be reflected in a subsequent increase, even a temporary one, exceeding 10%. Confidence: 80%

Subsequent evidence suggests I had the right idea but was overly precise in my predictions or should have tried to predict the effect over a longer period of time to avoid extreme but temporary outcomes:

In the two weeks since the Food and Drug Administration approved Pfizer's COVID-19 vaccine, the US's average weekly vaccination rate has declined 38%.

supposedlyfun: Initial evidence suggests I was wrong: https://abcnews.go.com/Health/americans-vaccinated-full-fda-approval-pfizer-covid-vaccine/story?id=79750505

Alex Ray's Shortform

AGI technical domains

When I think about trying to forecast technology for the medium term future, especially for AI/AGI progress, it often crosses a bunch of technical boundaries.

These boundaries are interesting in part because they're thresholds where my expertise and insight falls off significantly.

Also interesting because they give me topics to read about and learn.

A list which is probably neither comprehensive, nor complete, nor all that useful, but just writing what's in my head:

  • Machine learning research - this is where a lot of the tip-of-the-spear o
... (read more)