Multicore

I lurk and tag stuff.

Comments

Things that probably actually fit into your interests:

A Sensible Introduction to Category Theory

Most of what 3blue1brown does

Videos that I found intellectually engaging but that are far outside the subjects you listed:

Cursed Problems in Game Design

Luck and Skill in Games

Disney's FastPass: A Complicated History 

The Congress of Vienna

Building a 6502-based computer from scratch (playlist)

(I am also a jan Misali fan)

The preview-on-hover for those Manifold links shows a 404 error. Not sure if this is Manifold's fault or LW's fault.

One antifeature I see promoted a lot is "It doesn't track your data". And it does seem to manage to be the main selling point, all on its own, for products like DuckDuckGo, Firefox, and PinePhone.

The major difference from the game and movie examples is that these products have fewer competitors, with few or none sharing this particular antifeature.

Antifeatures work as marketing if a product is unique or almost unique in its category for having a highly desired antifeature. If there are lots of other products with the same antifeature, the antifeature alone won't sell the product. But the same is true of regular features. You can't convince your friends to play a game by saying "it has a story" or "it has a combat system" either.

On the first read I was annoyed at the post for criticizing futurists for being too certain in their predictions while it also threw out, and refused to grade, any prediction that expressed uncertainty, on the grounds that saying something "may" happen is unfalsifiable.

On reflection these two things seem mostly unrelated, and for the purpose of establishing a track record, "may" predictions do seem strictly worse than either predicting confidently (which allows scoring the percentage of predictions that came true) or predicting with a probability (which none of these futurists did, but which allows drawing a calibration curve).
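
As a toy illustration of those two gradable formats (all numbers here are made up, not from the post):

```python
# Grading confident predictions vs. probabilistic predictions.
# All data below is hypothetical, purely to show the two scoring schemes.

from collections import defaultdict

# Confident predictions: grade as the fraction that came true.
confident_outcomes = [True, True, False, True]
print(f"{sum(confident_outcomes) / len(confident_outcomes):.0%} right")  # 75% right

# Probabilistic predictions: (stated probability, did it happen?).
probabilistic = [(0.9, True), (0.9, True), (0.9, False),
                 (0.6, True), (0.6, False), (0.1, False)]

# Bucket by stated probability and compare stated vs. observed frequency;
# plotting these pairs gives a calibration curve.
buckets = defaultdict(list)
for p, happened in probabilistic:
    buckets[p].append(happened)
for p, outcomes in sorted(buckets.items()):
    print(f"said {p:.0%}: happened {sum(outcomes) / len(outcomes):.0%} of the time")
```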

Yes. The one I described is the one the paper calls FairBot. It also defines PrudentBot, which looks for a proof that the other player cooperates with PrudentBot and a proof that it defects against DefectBot. PrudentBot defects against CooperateBot.
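
For concreteness, here is a rough sketch of all four of these agents. Literal proof search over PA isn't something you can run, so this substitutes the standard Kripke-frame trick from the provability logic GL: "provable at world i" means "true at every world j < i", evaluated on a linear frame at a depth past the formulas' modal depth. The framing and every name below (provable, AGENTS, play, ...) are my own illustration, not code from the paper, but for these matchups it reproduces the paper's outcomes:

```python
# Proof-based prisoner's dilemma agents, approximated with linear
# Kripke-frame semantics for the provability logic GL.

from functools import lru_cache

N = 10  # evaluate at a world deeper than any formula's modal depth

def provable(stmt, world):
    # PA-provability: stmt holds at every earlier world.
    return all(stmt(j) for j in range(world))

def provable_pa1(stmt, world):
    # Provability in PA+1 (PA + Con(PA)): like PA-provability, but world 0,
    # the terminal world where PA looks inconsistent, is excluded.
    return all(stmt(j) for j in range(1, world))

# Each agent maps (opponent_name, world) -> True (cooperate) / False (defect).
# lru_cache keeps the well-founded recursion over smaller worlds cheap.

@lru_cache(maxsize=None)
def cooperate_bot(opp, world):
    return True

@lru_cache(maxsize=None)
def defect_bot(opp, world):
    return False

@lru_cache(maxsize=None)
def fair_bot(opp, world):
    # Cooperate iff it's provable that the opponent cooperates with me.
    return provable(lambda j: AGENTS[opp]("fair_bot", j), world)

@lru_cache(maxsize=None)
def prudent_bot(opp, world):
    # Cooperate iff provably the opponent cooperates with me, and PA+1
    # proves the opponent defects against DefectBot.
    coop = provable(lambda j: AGENTS[opp]("prudent_bot", j), world)
    punish = provable_pa1(lambda j: not AGENTS[opp]("defect_bot", j), world)
    return coop and punish

AGENTS = {"cooperate_bot": cooperate_bot, "defect_bot": defect_bot,
          "fair_bot": fair_bot, "prudent_bot": prudent_bot}

def play(a, b):
    move = lambda acts: "C" if acts else "D"
    return move(AGENTS[a](b, N)), move(AGENTS[b](a, N))

print(play("fair_bot", "fair_bot"))          # ('C', 'C'): Loebian cooperation
print(play("fair_bot", "defect_bot"))        # ('D', 'D')
print(play("prudent_bot", "fair_bot"))       # ('C', 'C')
print(play("prudent_bot", "cooperate_bot"))  # ('D', 'C'): exploits CooperateBot
```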

The part about two Predictors playing against each other reminded me of Robust Cooperation in the Prisoner's Dilemma, where two agents with the algorithm "If I find a proof that the other player cooperates with me, cooperate, otherwise defect" are able to mutually prove cooperation and cooperate.

If we use that framework, Marion plays "If I find a proof that the Predictor fills both boxes, two-box, else one-box" and the Predictor plays "If I find a proof that Marion one-boxes, fill both, else only fill box A". I don't understand the math very well, but I think in this case neither agent finds a proof, and the Predictor fills only box A while Marion takes only box B - the worst possible outcome for Marion.

Marion's third conditional might correspond to Marion only searching for proofs in PA, while the Predictor searches for proofs in PA+1, in which case Marion will not find a proof, the Predictor will, and then the Predictor fills both boxes and Marion takes only box B. But in this case clearly Marion has abandoned the ability to predict the Predictor and has given the Predictor epistemic vantage over her.
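
The same Kripke-frame trick as in the FairBot sketch above can check the "neither finds a proof" guess. The encoding of the two strategies here is mine, so treat the result as suggestive rather than definitive:

```python
# Marion vs. the Predictor, with "provable at world i" = "true at every
# world j < i". The encoding of the two strategies is my own guess.

from functools import lru_cache

@lru_cache(maxsize=None)
def marion_two_boxes(world):
    # "If I find a proof that the Predictor fills both boxes, two-box,
    #  else one-box."
    return all(predictor_fills_both(j) for j in range(world))

@lru_cache(maxsize=None)
def predictor_fills_both(world):
    # "If I find a proof that Marion one-boxes, fill both, else only box A."
    return all(not marion_two_boxes(j) for j in range(world))

print(marion_two_boxes(10))      # False: Marion takes only box B
print(predictor_fills_both(10))  # False: the Predictor fills only box A
```

Under this encoding neither proof search succeeds, matching the guess: the Predictor fills only box A and Marion takes only box B.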

I think in a lot of people's models, "10% chance of alignment by default" means "if you make a bunch of AIs, 10% chance that all of them are aligned, 90% chance that none of them are aligned", not "if you make a bunch of AIs, 10% of them will be aligned and 90% of them won't be".

And the 10% estimate just represents our ignorance about the true nature of reality; it's already true either that alignment happens by default or that it doesn't, we just don't know which yet.
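
A quick way to see the difference between the two readings, with hypothetical numbers:

```python
# Two readings of "10% chance of alignment by default", for 5 AIs.
import random

N_AIS = 5

# Reading 1: one unknown fact about reality, shared by every AI.
# A single draw settles alignment-by-default for all of them at once.
reality_is_kind = random.random() < 0.10
reading_1 = [reality_is_kind] * N_AIS

# Reading 2 (the one argued against above): a 10% rate per AI,
# modeled here as independent draws.
reading_2 = [random.random() < 0.10 for _ in range(N_AIS)]

print(reading_1)  # all True in ~10% of runs, all False in the rest
print(reading_2)  # a mixture, ~10% of entries True on average
```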

I generally disagree with the idea that fancy widgets and more processes are the main thing keeping the LW wiki from being good. I think the main problem is that not a lot of people are currently contributing to it. 

The things that discourage me from contributing more are:

- There are a lot of pages. If there are 700 bad pages and I write one really good page, there are still 699 bad pages.

- I don't have a good sense of which pages are most important. If I put a bunch of effort into a particular page, is that one that people are going to care about?

- I don't get much feedback about whether anyone saw the page after I edited it - karma for edits basically just comes from the tag dashboard and the frontpage activity feed.

So the improvements I would look for would be things like:

- Expose view counts for wiki pages somewhere.

- Some sort of bat-signal on the tag dashboard for when a page is getting a lot of views but still has a bunch of TODO flags set.

- Big high-quality wiki page rewrites get promoted to the frontpage or something.

- Someone with authority actually goes through and sets the "High Priority" flag on, say, 20 pages that they know are important and neglected.

- Some sort of event or bounty to drive more participation.
