technicalities

Comments

The Best Software For Every Need

Ooh, that's more intense than I realised. There might be plugins for yEd, but I don't know them. Maybe Tetrad?

The Best Software For Every Need

I love Sketchviz for 10-second prototypes, but it requires the DOT language, and if you need very specific label placements it's a nightmare.
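(For anyone who hasn't met DOT: it's just a small text format for declaring nodes and edges, and the layout engine decides where things go, which is why precise label placement gets painful. A minimal sketch, with made-up node names, of generating DOT from Python's graphviz package and pasting the output into Sketchviz:)

```python
# Minimal sketch: emit DOT text with the Python "graphviz" package,
# then paste the printed source into Sketchviz (or render locally).
from graphviz import Digraph

g = Digraph("pipeline")                 # a directed graph
g.node("raw", label="Raw data")         # declare nodes...
g.node("clean", label="Cleaned data")
g.edge("raw", "clean", label="tidy()")  # ...and an edge between them

print(g.source)  # the DOT source Sketchviz expects
```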

For using a mouse, yEd is good. Exports to GraphML for version control.

We have some evidence that masks work

GiveWell's fine!

Thanks again for caring about this.

We have some evidence that masks work

Sounds fine. Just noticed they have a cloth-mask and a surgical-mask treatment. Take the mean?

We have some evidence that masks work

Great! Comment below if you like this wording and this can be our bond:

"Gavin bets 100 USD to GiveWell, to Mike's 100 USD to GiveWell that the results of NCT04630054 will show a median reduction in Rt > 15.0 % for the effect of a whole population wearing masks [in whatever venues the trial chose to study]."

Fire Law Incentives

This is an interesting counterpoint (though I'd like to see a model of CO2 cost vs. thinning cost if you have one), and it's funny we happen to have such a qualified person on the thread. But your manner is needlessly condescending, and, around here, brandishing credentials as a club will seriously undermine you rather than buttress you.

Critiques of the Agent Foundations agenda?

Stretching the definition of 'substantial' further:

Beth Zero was an ML researcher and Sneerclubber with some things to say. Her blog is down, unfortunately, but here's her collection of critical people. Here's a flavour of her thoughtful Bulverism. Her post on the uselessness of Solomonoff induction, and the dishonesty of pushing it as an answer outside of philosophy, was pretty good.

Sadly most of it is against foom, against short timelines, against longtermism, rather than anything specific about the Garrabrant or Demski or Kosoy programmes.

Critiques of the Agent Foundations agenda?

Nostalgebraist (2019) sees it as equivalent to solving large parts of philosophy: a noble but quixotic quest. (He also argues against short timelines but that's tangential here.)

Here is what this ends up looking like: a quest to solve, once and for all, some of the most basic problems of existing and acting among others who are doing the same. Problems like “can anyone ever fully trust anyone else, or their future self, for that matter?” In the case where the “agents” are humans or human groups, problems of this sort have been wrestled with for a long time using terms like “coordination problems” and “Goodhart’s Law”; they constitute much of the subject matter of political philosophy, economics, and game theory, among other fields.

The quest for “AI Alignment” covers all this material and much more. It cannot invoke specifics of human nature (or non-human nature, for that matter); it aims to solve not just the tragedies of human coexistence, but the universal tragedies of coexistence which, as a sad fact of pure reason, would befall anything that thinks or acts in anything that looks like a world.

It sounds misleadingly provincial to call such a quest “AI Alignment.” The quest exists because (roughly) a superhuman being is the hardest thing we can imagine “aligning,” and thus we can only imagine doing so by solving “Alignment” as a whole, once and forever, for all creatures in all logically possible worlds. (I am exaggerating a little in places here, but there is something true in this picture that I have not seen adequately talked about, and I want to paint a clear picture of it.)

There is no doubt something beautiful – and much raw intellectual appeal – in the quest for Alignment. It includes, of necessity, some of the most mind-bending facets of both mathematics and philosophy, and what is more, it has an emotional poignancy and human resonance rarely so close to the surface in those rarefied subjects. I certainly have no quarrel with the choice to devote some resources, the life’s work of some people, to this grand Problem of Problems. One imagines an Alignment monastery, carrying on the work for centuries. I am not sure I would expect them to ever succeed, much less to succeed in some specified timeframe, but in some way it would make me glad, even proud, to know they were there.

I do not feel any pressure to solve Alignment, the great Problem of Problems – that highest peak whose very lowest reaches Hobbes and Nash and Kolmogorov and Gödel and all the rest barely began to climb in all their labors...

Scott wants an aligned AI to save us from Moloch; I think I'm saying that alignment would already be a solution to Moloch.

Rationalists from the UK -- what are your thoughts on Dominic Cummings?

Huh, works for me. Anyway I'd rather not repeat his nasty slander but "They're [just] a sex cult" is the gist.
