Oooh yeah dude I think you want to un-buy the obsidian. That is literally glass (volcanic glass). You want chert!
I don't think I've ever had one fall off in a bag! The only time I can recall them falling off/catching is if I start flipping through the book and the page facing the "back" of the dart slips into the tiny fold and pops it off.
There are several stories of him being able to read a book once and then repeat it back, word for word, until he was told to stop.
This is an aside, but I roll to disbelieve this.
Ooh, I love this.
By contrast, if "it's hard to even think of how experiments would be relevant to what I'm doing," you have precisely zero means of ever determining that your theories are inappropriate for the question at hand.
Here, you've gotten too hyperbolic about what I said. When I say "experiments", I don't mean "any contact with reality". And when I said "what I'm doing", I didn't mean "anything I will ever do". Some people I talk to seem to think it's weird that I never run PyTorch, and that's the kind of thing where I can't think of how it would be relevant to what I'm currently doing.
When trying to formulate conjectures, I am constantly fretting about whether various assumptions match reality well enough. And when I do have a theory that is at the point where it's making strong claims, I will start to work out concrete ways to apply it.
But I don't even have one yet, so there's not really anything to check. I'm not sure how long people are expecting this to take, and this difference in expectation might be one of the implicit things driving the confusion. As many theorems as there are that end up in the dustbin, there is even more pre-theorem work that ends up in the dustbin. I've been at this for three and change years, and I would not be surprised if it takes a few more years. But the entire point is to apply it, so I can certainly imagine conditions under which we end up finding out whether the theory applies to reality.
I am not personally working on "equipping AI with the means of detecting and predictively modeling agency in other systems", but I have heard other people talk about that cluster of ideas. I think it's in-scope for agent foundations.
I'm not very confident about this, but it's my current impression. Happy to have had it flagged!
...I also do not use "reasoning about idealized superintelligent systems as the method" of my agent foundations research. Certainly there are examples of this in agent foundations, but it is not the majority. It is not the majority of what Garrabrant or Demski or Ngo or Wentworth or Turner do, as far as I know.
It sounds to me like you're not really familiar with the breadth of agent foundations. Which is perfectly fine, because it's not a cohesive field yet, nor is the existing work easily understandable. But I think you should aim for your statements to be more calibrated.
You buy them in quantities of like 50 at a time, and they come in a little tin. So I just keep the tin of them in a drawer, and when I start a new book I stick a dart in it. But they're also super cheap so I wouldn't care about losing one.
FWIW I currently think it's bad practice to paste LLM output into a forum. It's like pasting the Google search results for something. Anyone can ask an LLM, and the reason I'm here reading comments on a forum is to talk to humans, who have separate identities, reputations, and tons of cultural context of the forum.