Arepo

Honoring Petrov Day on LessWrong, in 2020

It doesn't matter whether you'd have been hypothetically willing to do something for them. As I said on the Facebook thread, you did not consult with them. You merely informed them they were in a game, which, given the social criticism Chris has received, had real world consequences if they misplayed. In other words, you put them in harm's way without their consent. That is not a good way to build trust.

Honoring Petrov Day on LessWrong, in 2020

The downvotes on this comment seem ridiculous to me. If I email 270 people to tell them I've carefully selected them for some process, I cannot seriously presume they will give up >0 of their time to take part in it. 

Any such sacrifice they make is a bonus, so if they do give up >0 time, it's absurd to ask that they give up even more time to research the issue.

Any negative consequences are on the person who set up the game. Adding the justification that 'I trust you' does not suddenly make the recipient more obligated to the spammer.

The Case for The EA Hotel

My impression is that many similar projects are share houses or other flat hierarchies. IMO a big advantage of the model here is a top-down approach, where we (the trustees/manager) view it as a major part of our job to limit and mitigate interpersonal conflicts, zero-sum status games, etc.

[link] Choose your (preference) utilitarianism carefully – part 1

Whatever you call it, they've got to identify some alternative, even if only tacitly by following some approximation of it in their daily life.

[link] Choose your (preference) utilitarianism carefully – part 1

I would like to write an essay about that eventually, but I figured persuading PUs of the merits of HU was lower hanging fruit.

For what it's worth, I have a lot of sympathy with your scepticism - I would rather (and believe it possible to) build up a system resembling ethics without reference to normativity, 'oughts', or any of their associated baggage. I think the trick will be to properly understand the overlap of ethics and epistemology, both of which face similar questions (how do we non-question-beggingly 'ground' 'factual' questions?), though people disproportionately emphasise those questions in the case of ethics.

[ETA] It's also hard to pin down what the null hypothesis would be. Calling it 'nihilism' of any kind is just defining the problem away. For example, if you just decide you want to do something nice for your friend - in the sense of something beneficial for her, rather than just picking an act that will give you warm fuzzies - then your presumption of what category of things would be 'nice for her' implicitly judges how to group states of the world. If you also feel like some things you might do would be nicer for her than others, then you're judging how to order states of the world.

This already has the makings of a 'moral system', even though there's not a 'thou shalt' in sight. If you further think that how she'll react to whatever you do for her can corroborate or refute your judgement of what things are nice(r than others) for her, your system seems to have, if not a 'realist' element, at least one that isn't purely antirealist/subjectivist. It's not utilitarianism (yet), but it seems to be heading in that sort of direction.

xkcd on the AI box experiment

How do we know EY isn't doing the same?

'Effective Altruism' as utilitarian equivocation.

‘A charity that very efficiently promoted beauty and justice’ would still be a utilitarian charity (where the form of util defined utility as beauty and justice), so if that’s not EA, then EA does not = utilitarianism, QED.

Also, as Ben Todd and others have frequently pointed out, many non-utilitarian ethics subsume the value of happiness. A deontologist might want more happiness and less suffering, but feel that he also had a personal injunction against violating certain moral rules. So long as he didn't violate those rules, he might well want to promote welfare as efficiently as possible.

What should a college student do to maximize future earnings for effective altruism?

I'd guess these effects are largely not causation, but correlation caused by conscientiousness/ambition causing both double majors and higher earnings.

Unless you're certain of this, or have some reason to suspect a factor pulling in the other direction, this still seems to suggest a higher expected payoff from doing a double major.
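A minimal sketch of that expected-value point, with hypothetical numbers (the earnings gap and the causal share below are assumptions for illustration, not estimates from the thread):

```python
# Illustrative only: both numbers are assumptions, not data from the thread.
observed_gap = 10_000   # assumed average earnings gap for double majors, $/year
causal_share = 0.2      # assumed fraction of that gap actually caused by the second major

expected_causal_gain = observed_gap * causal_share
print(f"Expected annual gain attributable to the double major: ${expected_causal_gain:,.0f}")
# Unless the causal share is exactly zero (or a hidden factor makes it negative),
# the expected payoff of adding the double major stays positive.
```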

Discussion of LW going on in felicifia

I've written a full response to your comments on Felicifia (I'm not going to discuss this in three different venues), but...

your opponent's true rejection seems to be "cryonics does not work"

This sort of groundless speculation about my beliefs (and its subsequent upvoting success), a) in a thread where I’ve said nothing about them, b) where I’ve made no arguments to whose soundness the eventual success/failure of cryo would be at all relevant, and c) where the speculator has made remarks that demonstrate he hasn’t even read the arguments he’s dismissing (among other things a reductio ad absurdum to an ‘absurd’ conclusion which I’ve already shown I hold), does not make me more confident that the atmosphere on this site supports proper scepticism.

I.e. you're projecting.

Reply to Holden on 'Tool AI'

Assuming you accept the reasoning, 90% seems quite generous to me. What percentage of complex computer programmes, when run for the first time, exhibit behaviour the programmers hadn't anticipated? I don't have much of an idea, but my guess would be close to 100%. If so, the question is how likely unexpected behaviour is to be fatal. For any programme that will eventually gain access to the world at large and quickly become AI++, that seems (again, no data to back this up - just an intuitive guess) pretty likely, perhaps almost certain.
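A minimal sketch of that chain of estimates, with both probabilities as explicit assumptions rather than anything argued for above:

```python
# Back-of-the-envelope sketch; both probabilities are assumptions, not figures from the post.
p_unanticipated = 0.99             # assumption: nearly all complex programs surprise their authors on first run
p_fatal_given_unanticipated = 0.8  # assumption: 'pretty likely' for an AI++ with access to the world at large

p_catastrophe = p_unanticipated * p_fatal_given_unanticipated
print(f"Implied P(catastrophe) ~ {p_catastrophe:.2f}")  # ~0.79, i.e. well beyond the 90%-safe framing
```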

For any parameter of human comfort (e.g. 293 degrees Kelvin, 60% water, 40-hour working weeks), a decimal point misplaced by even one place seems like it would destroy the economy at best and life on earth at worst.

If Holden’s criticism is appropriate, the best response might be to look for other options rather than making a doomed effort to build FAI – for example trying to prevent the development of AI anywhere on earth, at least until we can self-improve enough to keep up with it. That might have a low probability of success, but if FAI has a sufficiently low probability of success, it would still seem like a better bet.
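The comparison can be framed as a toy expected-value calculation; the probabilities below are placeholders, chosen only to show how a low-probability option can still be the better bet:

```python
# Toy comparison; all of these numbers are placeholders, not estimates from the comment.
p_fai_success = 0.01         # assumed chance a direct FAI effort succeeds
p_prevention_success = 0.05  # assumed chance of delaying AI until we can keep up with it
value_of_good_outcome = 1.0  # normalised value of a good outcome

ev_fai = p_fai_success * value_of_good_outcome
ev_prevention = p_prevention_success * value_of_good_outcome
print(f"EV(attempt FAI) = {ev_fai:.2f}, EV(prevent AI) = {ev_prevention:.2f}")
# Under these assumed numbers the 'low probability of success' option still wins;
# with different numbers it wouldn't, which is all the comparison turns on.
```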
