ryan_b

Comments

The 2020 Review: Preliminary Voting

I expect the quadratic voting not to be very different from the 1-4-9 system, but I favor including quadratic voting again even if that is the case. I have two actual reasons for this:

  1. It's a cool mechanism, with flexible levels of engagement, and this is a good way to practice using it. If we don't make options like this available when voting opportunities arise, we can't expect them to ever appear in critical arenas like elections or governance.
  2. The more posts there are, the more valuable being able to fine-tune our votes becomes, on the assumption that the number of quality posts scales with the number of posts overall (which I strongly expect). Since there are more posts this year than last, more granular voting is worth more than it was, and I want to capture that additional value (see the sketch after this list).
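To make the granularity point concrete, here is a minimal sketch of the usual quadratic-cost rule, under which casting v votes on a post costs v² points. The budget, post names, and numbers below are hypothetical illustrations; the review's actual parameters may differ.

```python
# Minimal sketch of quadratic vote costs. The budget and post names are
# hypothetical illustrations, not the review's actual parameters.

def quadratic_cost(votes: int) -> int:
    """Casting `votes` votes on a single post costs votes**2 points."""
    return votes ** 2

BUDGET = 500  # hypothetical total points available to one voter

allocation = {"post_a": 4, "post_b": 2, "post_c": 1}  # votes cast per post
spent = sum(quadratic_cost(v) for v in allocation.values())
print(f"spent {spent} of {BUDGET} points")  # -> spent 21 of 500 points
```

The 1-4-9 system is the v = 1, 2, 3 prefix of this same rule (costs of 1, 4, and 9 points), so the extra flexibility of full quadratic voting only shows up when you want to push past three votes or spread a budget across many posts, which is exactly the situation as the number of posts grows.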

Ha! This is a good one!

The part of the book that got skimmed is titled 1984.

Quadratic Voting and Collusion

I have not read this one, thank you for the link!

Quadratic Voting and Collusion

From the MACI link, my objection is a generalized version of this:

Problems this does not solve

  • A key-selling attack where the recipient is inside trusted hardware or a trustworthy multisig
  • An attack where the original key is inside trusted hardware that prevents key changes except to keys known by an attacker

This is the level where trust is a problem in most real elections, not the voter level. I also note this detail:

It’s assumed that  is a smart contract that has some procedure for admitting keys into this registry, with the social norm that participants in the mechanism should only act to support admitting keys if they verify two things

Emphasis mine. In total this looks like it roughly says "Assuming we trust everyone involved, we can eliminate some of the incentive to breach that trust by eliminating certain information."

That is a cool result on the technical merits, but doesn't seem to advance the pragmatic goal of finding a better voting system.

Quadratic Voting and Collusion

I agree collusion is not a showstopper: individual people very rarely bother to try anything dishonest, and even when they do it isn't effective. Political parties, meanwhile, will simply disseminate recommended spending plans; preventing that would require something like absolute power over all communication, wielded by an entity over which no political party has any influence.

The truly secret voting suggestion is possibly the most awful idea I have ever heard with respect to voting. While individual voters rarely commit fraud or do anything else inappropriate with their votes, a very common and highly successful method of cheating an election is for the people who tally the votes to simply declare victory for one candidate or the other. If we cannot prove who anyone actually voted for, we cannot prove who actually won at all.

Why I am no longer driven

A note on the metaphor of sprint, marathon, and hike: the one you wound up at is the only pace associated with carrying any load.

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

I am struck by a few elements of this conversation, which this post helped confirm do indeed stick out the way I thought they did (weigh this lightly if at all; I'm speaking from the motivated peanut gallery here).

A. Eliezer's commentary around proofs has a whiff of Brouwer's intuitionism about it to me, on two levels: first, the consistent "this is not what math is really about, and we are missing the fundamental point in a way that will cripple us" tone; second, and on a more technical level, it comes very close to the intuitionist attitude toward the law of the excluded middle. That is to say, Eliezer is saying pretty directly that what we need is P, and not-not-P is an unacceptable substitute because it is weaker.
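For readers who haven't met the intuitionist point before, the textbook facts behind the analogy (this is background, not a claim from the conversation itself) are:

```latex
% Double negation in intuitionistic logic: introduction is free,
% elimination is not.
\begin{align*}
  P &\vdash_{\mathrm{int}} \lnot\lnot P   && \text{(always derivable)} \\
  \lnot\lnot P &\nvdash_{\mathrm{int}} P  && \text{(not derivable in general)}
\end{align*}
```

Adding double-negation elimination (or, equivalently, the law of the excluded middle) as an axiom schema collapses this back to classical logic, which is why not-not-P really is a strictly weaker deliverable than P in this frame.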

B. That being said, I think Steve Omohundro's observations about the provability of individual methods wouldn't be dismissed in the counterfactual world where they didn't exist; rather, I expect Eliezer would have included some line about how, to top it all off, we don't even have the ability to prove our methods mean what we say they do, so even if we crack the safety problem we can still fuck it up at the level of a logical typo.

C. The part about researcher incentives being bad in a way that drives too much progress, and the lament that corporations aren't more amenable to secrecy around progress, seems directly actionable and requires literally only money. The solution is to found a ClosedAI (naturally not named anything to do with AI), set those incentives, and then go around outbidding the FacebookAIs of the world for talent that is dangerous in the wrong hands. This has even been done before, and you can tell it will work because of the name: Operation Paperclip.

I really think Eliezer and co. should spend more time wish-listing about this, and then it should be solidified into a more actionable plan. Under entirely-likely circumstances, it would be easy to get money from the defense and intelligence establishments to do this, resolving the funding problem.

Where did the 5 micron number come from? Nowhere good. [Wired.com]
  1. This article is a wild ride.
  2. They do not jest about the difficulty of acquiring the book (Airborne Contagion and Air Hygiene: An Ecological Study of Droplet Infections). It has no DOI number; Worldcat confirms it was digitized in 2009 but it must have been a weird method because it doesn't get referenced like other old books I've searched for. I did find at least one review that said the book was to airborne disease as the pumphandle investigation was to waterborne disease, which is about the highest conceivable endorsement. Put the damn thing back into print, Harvard!
  3. Katie Randall's historical research.
  4. Access to PDF versions of a few articles co-authored by Linsey Marr:
    1. The indoors influenza article from 2011.
    2. Letter published in Science, Oct 2020.
    3. Minimizing indoor transmission of COVID, Sept 2020.
    4. A review in Science from Aug 2021.
  5. Almost everything by Wells (William Firth Wells) and co. is unavailable.
    1. Only the first page of Wells' rabbit tuberculosis experiment, 1948.
    2. The guinea pig and UV study, done by Wells' student Richard Riley, 1962.

I have examined none of these in depth, but the publications all appear to be real and to make the reported claims. However, I notice that once you start from Wells, information about this was pretty widespread in the 2010-2019 timeframe. We had plenty of time not to screw this one up.

I feel like agencies that make recommendations to the public, either as a matter of routine or in times of crisis, should have a historian of science on staff whose job is to discover and maintain the intellectual history of those recommendations. That way we will know how to update them in light of whatever the current crisis is.

ryan_b's Shortform

I also have a notion this would help with things like the renewal of old content by making it incremental. For example, there has been a low-key wish for the Sequences to be revised and updated, but they are huge, the task has proved too daunting for anyone to volunteer to tackle alone, and Eliezer is a busy man. With a tool like this, the community could divide the work into comment-sized increments, and once a critical mass was reached someone could transform the post into an updated version without carrying the whole burden themselves. It also solves the problem of being too dependent on one person's interpretations.

ryan_b's Shortform

I want to be able to emphasize how to make a great comment, and therefore a great contribution to the ongoing discussion. Some people have a norm of identifying good comments, but that doesn't help as much with how to make them, or with what the thought process looks like. It would also be tedious to do this for every comment; the workload would be impossible.

What if there were some kind of nomination process, where if I see a good comment I could flag it in such a way that the author is notified I would like to see a meta-comment about how they came to write it in the first place?

I already enjoy meta-posts that explain other posts, and the meta-comments during our annual review where people comment on their own posts. The ability to easily request such a thing, in a way that doesn't compete for space with other commentary, would be cool.
