Alex_Altair
Comments (sorted by newest)
Entropy from first principles
Alex_Altair's Shortform · 3y
henryaj's Shortform
Alex_Altair · 5d

It's not national, so I might just go with BloWriMo.
Should you make stone tools?
Alex_Altair · 14d

FWIW I currently think it's bad practice to paste LLM output in a forum. It's like pasting the Google search results for something. Anyone can ask an LLM, and the reason I'm here reading comments on a forum is to talk to humans, who have separate identities, reputations, and tons of cultural context of the forum.
Should you make stone tools?
Alex_Altair · 14d

Oooh yeah dude I think you want to un-buy the obsidian. That is almost literally glass. You want chert!
Somebody invented a better bookmark
Alex_Altair · 16d

I don't think I've ever had one fall off in a bag! The only time I can recall them falling off/catching is if I start flipping through the book and the page facing the "back" of the dart slips into the tiny fold and pops it off.
Von Neumann's Fallacy and You
Alex_Altair · 16d

"There are several stories of him being able to read a book once and then repeat it back, word for word, until he was told to stop."

This is an aside, but I roll to disbelieve this.
Alex_Altair's Shortform
Alex_Altair · 23d

Ooh, I love this.
Agent foundations: not really math, not really science
Alex_Altair · 23d

"By contrast, if 'it's hard to even think of how experiments would be relevant to what I'm doing,' you have precisely zero means of ever determining that your theories are inappropriate for the question at hand."

Here, you've gotten too hyperbolic about what I said. When I say "experiments", I don't mean "any contact with reality". And when I said "what I'm doing", I didn't mean "anything I will ever do". Some people I talk to seem to think it's weird that I never run PyTorch, and that's the kind of thing where I can't think of how it would be relevant to what I'm currently doing.

When trying to formulate conjectures, I am constantly fretting about whether various assumptions match reality well enough. And when I do have a theory that is at the point where it's making strong claims, I will start to work out concrete ways to apply it.

But I don't even have one yet, so there's not really anything to check. I'm not sure how long people are expecting this to take, and this difference in expectation might be one of the implicit things driving the confusion. For as many theorems as there are that end up in the dustbin, there is even more pre-theorem work that ends up in the dustbin. I've been at this for three and change years, and I would not be surprised if it takes a few more years. But the entire point is to apply it, so I can certainly imagine conditions under which we end up finding out whether the theory applies to reality.
Agent foundations: not really math, not really science
Alex_Altair · 23d

I am not personally working on "equipping AI with the means of detecting and predictively modeling agency in other systems", but I have heard other people talk about that cluster of ideas. I think it's in-scope for agent foundations.
Agent foundations: not really math, not really science
Alex_Altair · 23d

I'm not very confident about this, but it's my current impression. Happy to have had it flagged!
Agent foundations: not really math, not really science
Alex_Altair · 23d

...I also do not use "reasoning about idealized superintelligent systems as the method" of my agent foundations research. Certainly there are examples of this in agent foundations, but it is not the majority. It is not the majority of what Garrabrant or Demski or Ngo or Wentworth or Turner do, as far as I know.

It sounds to me like you're not really familiar with the breadth of agent foundations. Which is perfectly fine, because it's not a cohesive field yet, nor is the existing work easily understandable. But I think you should aim for your statements to be more calibrated.
Posts

Apply for the 2025 Dovetail fellowship · 1mo
Agent foundations: not really math, not really science · 1mo
The Inheritors: a book review · 1mo
Somebody invented a better bookmark · 1mo
Should you make stone tools? · 16d
ChatGPT is the Daguerreotype of AI · 1mo
Report & retrospective on the Dovetail fellowship · 6mo
Come join Dovetail's agent foundations fellowship talks & discussion · 7mo
Towards building blocks of ontologies · 7mo
Work with me on agent foundations: independent fellowship · 1y
Wikitag Contributions

Cellular automata · 3 years ago (+306)
Dynamical systems · 3 years ago
Quantilization · 3 years ago
Solomonoff induction · 3 years ago
Kolmogorov Complexity · 3 years ago (-37)
Kolmogorov Complexity · 3 years ago (+42/-25)
Agent · 13 years ago (+5/-42)
Computing Overhang · 13 years ago (+111/-1)