Alex_Altair

Sequences: Entropy from first principles

Alex_Altair's Shortform · 7 points · 3y · 48 comments

Comments (sorted by newest)
Should you make stone tools?
Alex_Altair · 2d

FWIW I currently think it's bad practice to paste LLM output in a forum. It's like pasting the Google search results for something. Anyone can ask an LLM, and the reason I'm here reading comments on a forum is to talk to humans, who have separate identities, reputations, and tons of the forum's cultural context.

Should you make stone tools?
Alex_Altair · 2d

Oooh yeah dude I think you want to un-buy the obsidian. That is almost literally glass. You want chert!

Somebody invented a better bookmark
Alex_Altair · 3d

I don't think I've ever had one fall off in a bag! The only time I can recall them falling off/catching is if I start flipping through the book and the page facing the "back" of the dart slips into the tiny fold and pops it off.

Von Neumann's Fallacy and You
Alex_Altair · 3d

> There are several stories of him being able to read a book once and then repeat it back, word for word, until he was told to stop.

This is an aside, but I roll to disbelieve this.

Alex_Altair's Shortform
Alex_Altair · 10d

Ooh, I love this.

Agent foundations: not really math, not really science
Alex_Altair · 11d

> By contrast, if "it's hard to even think of how experiments would be relevant to what I'm doing," you have precisely zero means of ever determining that your theories are inappropriate for the question at hand.

Here, you've gotten too hyperbolic about what I said. When I said "experiments", I didn't mean "any contact with reality". And when I said "what I'm doing", I didn't mean "anything I will ever do". Some people I talk to seem to think it's weird that I never run PyTorch, and that's the kind of thing where I can't think of how it would be relevant to what I'm currently doing.

When trying to formulate conjectures, I am constantly fretting about whether various assumptions match reality well enough. And when I do have a theory that is at the point where it's making strong claims, I will start to work out concrete ways to apply it.

But I don't even have one yet, so there's not really anything to check. I'm not sure how long people are expecting this to take, and this difference in expectation might be one of the implicit things driving the confusion. For all the theorems that end up in the dustbin, there is even more pre-theorem work that ends up there. I've been at this for three-and-change years, and I would not be surprised if it takes a few more. But the entire point is to apply it, so I can certainly imagine conditions under which we end up finding out whether the theory applies to reality.

Agent foundations: not really math, not really science
Alex_Altair · 11d

I am not personally working on "equipping AI with the means of detecting and predictively modeling agency in other systems", but I have heard other people talk about that cluster of ideas. I think it's in-scope for agent foundations.

Agent foundations: not really math, not really science
Alex_Altair · 11d

I'm not very confident about this, but it's my current impression. Happy to have had it flagged!

Agent foundations: not really math, not really science
Alex_Altair · 11d

...I also do not use "reasoning about idealized superintelligent systems as the method" of my agent foundations research. Certainly there are examples of this in agent foundations, but it is not the majority. It is not the majority of what Garrabrant or Demski or Ngo or Wentworth or Turner do, as far as I know.

It sounds to me like you're not really familiar with the breadth of agent foundations. Which is perfectly fine, because it's not a cohesive field yet, nor is the existing work easily understandable. But I think you should aim for your statements to be more calibrated.

Somebody invented a better bookmark
Alex_Altair · 12d

You buy them in quantities of like 50 at a time, and they come in a little tin. So I just keep the tin of them in a drawer, and when I start a new book I stick a dart in it. But they're also super cheap so I wouldn't care about losing one.

Posts

Apply for the 2025 Dovetail fellowship · 42 points · 15d · 2 comments
Agent foundations: not really math, not really science · 109 points · 15d · 24 comments
The Inheritors: a book review · 73 points · 16d · 4 comments
Somebody invented a better bookmark · 169 points · 18d · 22 comments
Should you make stone tools? · 169 points · 4d · 46 comments
ChatGPT is the Daguerreotype of AI · 42 points · 25d · 2 comments
Report & retrospective on the Dovetail fellowship · 26 points · 6mo · 3 comments
Come join Dovetail's agent foundations fellowship talks & discussion · 24 points · 7mo · 0 comments
Towards building blocks of ontologies · 29 points · 7mo · 0 comments
Work with me on agent foundations: independent fellowship · 59 points · 1y · 5 comments

Wikitag Contributions

Cellular automata · 3y · (+306)
Dynamical systems · 3y
Quantilization · 3y
Solomonoff induction · 3y
Kolmogorov Complexity · 3y · (-37)
Kolmogorov Complexity · 3y · (+42/-25)
Agent · 13y · (+5/-42)
Computing Overhang · 13y · (+111/-1)