Kenny


Comments

Wouldn't it be better to accept being contractually bound and then at least have the opportunity to whistleblow (even if that means accepting the legal consequences)?

Or do you think that they have some kind of leverage by which the labs would agree to NOT contractually bind them? I'd expect the labs to just not allow them to evaluate the model at all were ARC to insist on this.

I'm definitely not against reading your (and anyone else's) blog posts, but it would be friendlier to at least outline or excerpt some of the post here too.

It looks like you didn't (and maybe can't) enter the ASCII art in the form Bing needs to "decode" it? For one, I'd expect line breaks after the opening code block tag and before the closing one, and also between each 'line' of the art.

If you can, try entering new lines with <kbd>Shift</kbd>+<kbd>Enter</kbd>. That should insert a new line without <kbd>Enter</kbd> being interpreted as 'send message'.
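For example, something like this (just a sketch of the layout I mean, assuming Bing expects ordinary fenced code blocks; the art here only spells 'HI'), with the code block tags on their own lines and a line break between each row of the art:

````
```
#   #  ###
#   #   #
#####   #
#   #   #
#   #  ###
```
````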

I really like David's writing generally but this 'book' is particularly strong (and pertinent to us here on this site).

The second section, What is the Scary kind of AI?, is a very interesting and (I think) useful alternative perspective on the risks that 'AI safety' does and (arguably) should focus on, e.g. "diverse forms of agency".

The first chapter of the third ('scenarios') section, At war with the machines, provides a (more) compelling version of a somewhat common argument, i.e. 'AI is (already) out to get us'.

The second detailed scenario, in the third chapter, triggered my 'absurdity heuristic' hard. The following chapter points out that the absurdity was deliberate – bravo David!

The rest of the book is a surprisingly comprehensive synthesis of a lot of insights from LW, the greater 'Modern Rationalist' sphere, and David's own work (much of which is closely related and pertinent to the other two sets of insights). I am not 'fully sold' on 'Mooglebook AI doom', but I have definitely updated fairly strongly towards it.

This seems like the right trope:

> That's why I used a fatal scenario, because it very obviously cuts all future utility to zero

I don't understand why you think a decision resulting in some person's or agent's death "cuts all future utility to zero". Why do you think choosing one's death is always a mistake?

I think I'd opt to put the original title in quotation marks in a post here, to indicate that it's not a 'claim' being made (by me).

IIRC, RDIs (and I would guess EARs) vary quite significantly among the various organizations that calculate/estimate/publish them. That might be related to the point ChristianKl seemed to be trying to make. (Tho I don't know whether 'iron' is one of the nutrients for which this is, or was, the case.)

I can't tell which parts are ChatGPT's output and which are your prompts or commentary.

I don't think 'chronic fatigue syndrome' is a great example of what the post discusses because 'syndrome' is already a clearly technical (e.g. medical) word. Similarly, 'myalgic encephalitis' is (for most listeners or readers) not a phrase made up of common English words. Both examples read much more clearly as medical or technical terms. 'chronic fatigue' would be a better example (if it were widely used) as it would conflate the unexplained medical condition with anything else that might have the same effects (like 'chronic overexertion').
