fin
New research group on AI. Previously Longview Philanthropy; FHI. I do a podcast called Hear This Idea. finmoorhouse.com.

Comments
when will LLMs become human-level bloggers?
Answer by fin · Mar 15, 2025 · 32 points

There are some social reasons for writing and reading blogs.

One reason is that “a blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox”. I expect to continue to value finding new people who share my interests after AI starts writing better blog posts than me, which could be very soon. I'm less sure about whether this continues to be a good reason to write them, since I imagine blog posts will become a less credible signal of what I'm like.

Another property that makes me want to read a blog or blogger is the audience: I value that it's likely my peers will also have read what I'm reading, so I can discuss it. This gives the human bloggers some kind of first-mover advantage, because it might only be worth switching your attention to the AI bloggers if the rest of the audience coordinates to switch with you. Famous bloggers might then switch into more of a curation role.

To some extent I also intrinsically care about reading true autobiography (the same reason I might intrinsically care about watching stunts performed by real humans, rather than CGI or robots).

I think these are relatively minor factors, though, compared to the straightforward quality of reasoning and writing.

Preparing for the Intelligence Explosion
fin · 6mo · 30 points

Yes.

Why is Toby Ord's likelihood of human extinction due to AI so low?
Answer by fin · Apr 13, 2022 · 50 points

As Buck points out, Toby's estimate of P(AI doom) is closer to the 'mainstream' than MIRI's, and close enough that "so low" doesn't seem like a good description.

I can't really speak on behalf of others at FHI, of course, but I don't think there is some 'FHI consensus' that is markedly higher or lower than Toby's estimate.

Also, I just want to point out that Toby's 1/10 figure is not for human extinction; it is for existential catastrophe caused by AI, which includes scenarios that don't involve extinction (forms of 'lock-in'). His estimate for extinction caused by AI is therefore lower than 1/10.

Ethics in Many Worlds
fin · 4y · 20 points

Yes, I'm almost certain it's too 'galaxy brained'! But does the case rely on entities outside our light cone? Aren't there many 'worlds' within our light cone? (I literally have no idea, you may be right, and someone who knows should intervene)

I'm more confident that this needn't relate to the literature on infinite ethics, since I don't think any of this relies on infinities.

Ethics in Many Worlds
fin · 4y · 20 points

Thanks, this is useful.

I'm still mystified by the Born rule
fin · 4y* · 30 points

There are some interesting and tangentially related comments in the discussion of this post (incidentally, the first time I've been 'ratioed' on LW).

Inner Alignment in Salt-Starved Rats
fin · 5y · 10 points

Thanks, really appreciate it!

Embedded Interactive Predictions on LessWrong
fin · 5y · 50 points

Was wondering the same thing — would it be possible to set others' answers as hidden by default on a post until the reader makes a prediction?

Inner Alignment in Salt-Starved Rats
fin · 5y · 40 points

I interviewed Kent Berridge a while ago about this experiment and others. If folks are interested, I wrote something about it here, mostly trying to explain his work on addiction. You can listen to the audio on the same page.

Ethics in Many Worlds
fin · 5y · 10 points

Got it, thanks very much for explaining.

Posts
Podcast on “AI tools for existential security” — transcript · 11 points · 4mo · 0 comments
Preparing for the Intelligence Explosion · 78 points · 6mo · 17 comments
The Dangers of Mirrored Life · 121 points · 9mo · 9 comments
Announcing a contest: EA Criticism and Red Teaming · 17 points · 3y · 1 comment
Effective Ideas is announcing a $100,000 blog prize · 45 points · 3y · 1 comment
Ethics in Many Worlds · 8 points · 5y · 24 comments
Review and Summary of 'Moral Uncertainty' · 11 points · 5y · 7 comments