johnswentworth

Sequences

From Atoms To Agents
"Why Not Just..."
Basic Foundations for Agent Models
Framing Practicum
Gears Which Turn The World
Abstraction 2020
Gears of Aging
Model Comparison

Comments

Terminological note: something which does not buy ample time is not a pivotal act. Eliezer introduced the term to mean a specific thing, which he summarized as:

(as of late 2021) researchers use 'pivotal' and 'pivotal act' to refer to good events that upset the current gameboard - events that decisively settle a win, or drastically increase the probability of a win.

That same page also talks a bunch about how sticking to that definition is important, because there will predictably be lots of pressure to water the term down.

This should be a top-level post.

Sorry, I skimmed and didn't get your main idea at the time. A three-sentence summary upfront would help a lot.

Y'know, typical usage of "trust" feels related to the "believing vs believing in" thing.

We can keep seeking the perfect worldview forever, and we'll never find one. The answer to how to make the best choice every time. The answer to moral dilemmas. The answer to social issues, personal issues, well-being issues. No worldview will be able to output the best answer in every circumstance.

Sounds like a skill issue.

I'm reminded of a pattern:

  1. Someone picks a questionable ontology for modeling biological organisms/neural nets - for concreteness, let’s say they try to represent some system as a decision tree.
  2. Lo and behold, this poor choice of ontology doesn’t work very well; the modeler requires a huge amount of complexity to decently represent the real-world system in their poorly-chosen ontology. For instance, maybe they need a ridiculously large decision tree or random forest to represent a neural net to decent precision.
  3. The modeler concludes that the real world system is hopelessly complicated (i.e. fractal complexity), and no human-interpretable model will ever capture it to reasonable precision.

… and in this situation, my response is “It’s not hopelessly complex, that’s just what it looks like when you choose the ontology without doing the work to discover the ontology”.
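To make the decision-tree example in step 2 concrete, here is a minimal toy sketch (mine, not from the original comment; it assumes numpy and scikit-learn are available): a small random tanh network plays the role of the real system, and decision trees of increasing depth try to imitate it, printing how many leaves each tree needs and how well it fits.

```python
# Toy illustration (an assumption-laden sketch, not the commenter's experiment):
# approximate a tiny random 2-layer tanh network with decision trees of
# increasing depth, and watch how the tree size grows for a given fit quality.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# A small "ground truth" network: 10 inputs -> 32 tanh hidden units -> 1 output.
W1, b1 = rng.normal(size=(10, 32)), rng.normal(size=32)
W2, b2 = rng.normal(size=(32, 1)), rng.normal(size=1)

def net(X):
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

X_train = rng.normal(size=(20_000, 10))
X_test = rng.normal(size=(5_000, 10))
y_train, y_test = net(X_train), net(X_test)

for depth in (4, 8, 12, 16):
    tree = DecisionTreeRegressor(max_depth=depth).fit(X_train, y_train)
    rel_mse = np.mean((tree.predict(X_test) - y_test) ** 2) / np.var(y_test)
    print(f"depth={depth:2d}  leaves={tree.get_n_leaves():6d}  relative MSE={rel_mse:.3f}")
```

The exact numbers don't matter; the point is that both the tree's size and its residual error reflect the mismatch between the chosen ontology (axis-aligned splits) and the system being modeled (a smooth, distributed computation).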

There is a generalized version of this pattern, beyond just the "you don't get to choose the ontology" problem:

  • Someone latches on to a particular strategy to solve some problem, or to solve problems in general, without doing the work to discover a strategy which works well.
  • Lo and behold, the strategy does not work.
  • The person concludes that the real world is hopelessly complex/intractable/ever-changing, and no human will ever be able to solve the problem or to solve problems in general.

My generalized response is: it's not impossible, you just need to actually do the work to figure it out properly.
