Nate Showell

I hardly ever listen to podcasts. Part of this is because I find earbuds very uncomfortable, but the bigger part is that they don't fit into my daily routines very well. When I'm walking around or riding the train, I want to be able to hear what's going on around me. When I do chores, it's usually in short segments, and I don't want to have to repeatedly pause and unpause a podcast as I stop and start. When I'm not doing any of those things, I can watch videos that have visual components instead of just audio, or read interview transcripts in much less time than listening to a podcast would take. The podcast format doesn't have any comparative advantage for me.

Metroid Prime would work well as a difficult video-game-based test for AI generality.

  • It has a mixture of puzzles, exploration, and action.
  • It takes place in a 3D environment.
  • It frequently involves backtracking across large portions of the map, so it requires planning ahead.
  • There are various pieces of text you come across during the game. Some of them are descriptions of enemies' weaknesses or clues on how to solve puzzles, but most of them are flavor text with no mechanical significance.
  • The player occasionally unlocks new abilities they have to learn how to use.
  • It requires the player to manage resources (health, missiles, and power bombs).
  • It's on the difficult side for human players, but not to an extreme level.

No current AI system is anywhere close to being able to autonomously complete Metroid Prime. Such a system would probably have to be at or near the point where it could automate large portions of human labor.
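
For concreteness, here's a minimal sketch of how such a test could be structured as an agent-environment loop. The GameEmulator and Agent interfaces below are hypothetical stand-ins, not a real emulator API; a real benchmark would wrap an actual emulator's frame buffer and controller inputs.

```python
# Minimal sketch (not a real benchmark) of an autonomous-completion test for a
# game like Metroid Prime. All interfaces here are hypothetical placeholders.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Observation:
    frame: bytes        # raw screen pixels from the emulator
    health: int         # current energy
    missiles: int       # missile count
    power_bombs: int    # power bomb count


class GameEmulator(Protocol):
    def reset(self) -> Observation: ...
    def step(self, action: str) -> Observation: ...
    def game_completed(self) -> bool: ...


class Agent(Protocol):
    def act(self, obs: Observation) -> str: ...


def run_episode(env: GameEmulator, agent: Agent, max_steps: int = 1_000_000) -> bool:
    """Return True if the agent finishes the game within the step budget."""
    obs = env.reset()
    for _ in range(max_steps):
        if env.game_completed():
            return True
        obs = env.step(agent.act(obs))
    return env.game_completed()
```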

I recently read This Is How You Lose the Time War, by Max Gladstone and Amal El-Mohtar, and had the strange experience of thinking "this sounds LLM-generated" even though it was written in 2019. Take this passage, for example:

You wrote of being in a village upthread together, living as friends and neighbors do, and I could have swallowed this valley whole and still not sated my hunger for the thought. Instead I wick the longing into thread, pass it through your needle eye, and sew it into hiding somewhere beneath my skin, embroider my next letter to you one stitch at a time.

I found that passage just by opening to a random page without having to cherry-pick. The whole book is like that. I'm not sure how I managed to stick it out and read the whole thing.

 

The short story on AI and grief feels very stylistically similar to This Is How You Lose the Time War. They both read like they're cargo-culting some idea of what vivid prose is supposed to sound like. They overshoot the target of how many sensory details to include, while at the same time failing to cohere into anything more than a pile of mixed metaphors. The story on AI and grief is badly written, but its bad writing is of a type that human authors sometimes engage in too, even in novels like This Is How You Lose the Time War that sell well and become famous.

 

How soon do I think an LLM will write a novel I would go out of my way to read? As a back-of-the-envelope estimate, such an LLM is probably about as far away from current LLMs in novel-writing ability as current LLMs are from GPT-3. If I multiply the 5 years between GPT-3 and now by a factor of 1.5 to account for a slowdown in LLM capability improvements, I get an estimate of that LLM being 7.5 years away, so around late 2032.
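
Spelled out, the arithmetic (using only the rough numbers above, nothing more precise) is:

```python
# The same back-of-the-envelope estimate, with the rough assumptions made
# explicit. None of these numbers are measurements.
years_since_gpt3 = 5          # GPT-3 (2020) to now (2025)
slowdown_factor = 1.5         # assumed slowdown in capability improvements
years_away = years_since_gpt3 * slowdown_factor  # 7.5 years
estimated_year = 2025 + years_away               # 2032.5, i.e. around late 2032
print(years_away, estimated_year)
```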

As you mentioned at the beginning of the post, popular culture contains examples of people being forced to say things they don't want to say. Some of those examples end up in LLMs' training data. Rather than involving consciousness or suffering on the part of the LLM, the behavior you've observed has a simpler explanation: the LLM is imitating characters in mind-control stories that appear in its training corpus.

There are sea slugs that photosynthesize, but that's with chloroplasts they steal from the algae they eat.

As I use the term, the presence or absence of an emotional reaction isn't what determines whether someone is "feeling the AGI" or not. I use it to mean basing one's AI timeline predictions on a feeling.

An example is getting caught up in an information cascade that says AGI is arriving soon. A person who's "feeling the AGI" has "vibes-based" reasons for their short timelines, such as copying what the people around them believe. In contrast, a person who looks carefully at the available evidence and formulates a gears-level model of AI timelines is doing something different from "feeling the AGI," even if their timelines are short. "Feeling" is the crucial word here.

The phenomenon of LLMs converging on mystical-sounding outputs deserves more exploration. There might be something alignment-relevant happening to LLMs' self-models/world-models when they enter the mystical mode, potentially related to self-other overlap or to a similar ontology in which the concepts of "self" and "other" aren't used. I would like to see an interpretability project analyzing the properties of LLMs that are in the mystical mode.
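
As a starting point, here's a rough sketch of one way such a project could begin: compare how similar the model's internal representations of "self" and "other" statements are in a mystical-register context versus a neutral one. The model choice, prompts, and mean-pooled final-layer representations are all illustrative assumptions on my part, not an established methodology.

```python
# Rough, illustrative probe: does the gap between "self" and "other"
# representations shrink in a mystical-register context? Everything here
# (model, prompts, pooling) is a placeholder assumption.

import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; a real study would use the model being analyzed

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()


def embed(text: str) -> torch.Tensor:
    """Mean-pool the final hidden layer as a crude sentence representation."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[-1]  # (1, seq_len, d_model)
    return hidden.mean(dim=1).squeeze(0)


def self_other_similarity(context: str) -> float:
    """Cosine similarity between a 'self' and an 'other' statement in a context."""
    self_vec = embed(context + " I am the one who speaks.")
    other_vec = embed(context + " You are the one who speaks.")
    return torch.nn.functional.cosine_similarity(self_vec, other_vec, dim=0).item()


mystical = "All boundaries dissolve; the watcher and the watched are one."
neutral = "The meeting is scheduled for Tuesday at three o'clock."

print("mystical context:", self_other_similarity(mystical))
print("neutral context:", self_other_similarity(neutral))
```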

The question of population ethics can be dissolved by rejecting personal identity realism. And we already have good reasons to reject personal identity realism, or at least consider it suspect, given the paradoxes that arise in split-brain thought experiments (e.g., the hemisphere-swap scenario) if you assume there's a single correct way to assign personal identity.

LLMs are more accurately described as artificial culture instead of artificial intelligence. They've been able to achieve the things they've achieved by replicating the secret of our success, and by engaging in much more extensive cultural accumulation (at least in terms of text-based cultural artifacts) than any human ever could. But cultural knowledge isn't the same thing as intelligence, hence LLMs' continued difficulties with sequential reasoning and planning.
