Theoretical Computer Science MSc student at the University of [Redacted] in the United Kingdom.
I'm an aspiring alignment theorist; my research vibes are descriptive formal theories of intelligent systems (and their safety properties) with a bias towards constructive theories.
I think it's important that our theories of intelligent systems remain rooted in the characteristics of real world intelligent systems; we cannot develop adequate theory from the null string as input.
There is a not-insignificant sense of guilt, of betraying my 2023 self and the ambitions I had before.
And I don't want to just end up doing irrelevant TCS research that only a few researchers in a niche field will ever care about.
It's not high impact research.
And it's mostly just settling: I get the sense that I enjoy theoretical research, I don't currently feel poised to contribute to the AI safety problem, and I seem to have an unusually good opportunity (at least it appears so to my limited understanding) to pursue a boring TCS PhD in some niche field that few people care about.
I don't think I'll be miserable pursuing the boring TCS PhD, or that I won't enjoy it, or anything of the sort. It's just not directly contributing to what I wanted to contribute to. It's somewhat sad, and it's undignified (but less undignified than the path I thought I was on at various points in the last 15 months).
I still want to work on technical AI safety eventually.
I feel like I'm on a path much further from being directly useful in 2025 than the one I felt I was on in 2023.
And taking a detour to do a TCS PhD that isn't directly pertinent to AI safety (current plan) feels like not contributing.
Cope is that becoming a strong TCS researcher will make me better poised to contribute to the problem, but short timelines could make this path less viable.
[Though there's nothing saying I can't try to work on AI on the side even if it isn't the focus of my PhD.]
I think LW is a valuable intellectual hub and community.
I haven't been an active participant of late, but it's still a service I occasionally find myself relying on explicitly, and I prefer the world where it continues to exist.
[I donated $20. I'm unemployed, and this is a nontrivial fraction of my disposable income.]
o1's reasoning trace also does this for different languages (IIRC I've seen Chinese, Japanese, and other languages I don't recognise/recall), usually an entire paragraph rather than a single word, but when I translated them they seemed to make sense in context.
This is not a rhetorical question :) What do you mean by "probability" here?
Yeah, since posting this question:
I have updated towards thinking that it's genuinely not obvious what exactly "probability" is supposed to mean here.
And once you pin down an unambiguous interpretation of probability, the problem dissolves.
I had a firm notion in mind of what I thought probability meant, but Rafael Harth's answer made me much less confident that it was the right notion of probability for the question.
I have not read all of them!
My current position is basically: actually, I'm less confident and now unsure.
Harth's framing was presented as an argument re: the canonical Sleeping Beauty problem.
And the question I need to answer is: "should I accept Harth's frame?"
I am at least convinced that it is genuinely a question about how we define probability.
There is still a disconnect though.
While I agree with the frequentist answer, it's not clear to me how to backpropagate this into a Bayesian framework.
Suppose I treat myself as identical to all other agents in the reference class.
I know that my reference class will do better if we answer "tails" when asked about the outcome of the coin toss.
But it's not obvious to me that there is anything to update from when trying to do a Bayesian probability calculation.
There being many more observers in the tails world doesn't seem, to me, to alter these probabilities at all:
- P(waking up)
- P(being asked questions)
- P(...)
By stipulation my observational evidence is the same in both cases.
And I am not compelled by the assumption that I should be randomly sampled from all observers.
That there are many more versions of me in the tails world does not by itself seem to raise the probability of my witnessing the observational evidence, since by stipulation all versions of me witness the same evidence.
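To spell out the calculation I have in mind (just a sketch, writing E for "I wake up and am asked the question", which by stipulation happens with certainty in both worlds): P(E | heads) = P(E | tails) = 1, so P(heads | E) = P(E | heads)·P(heads) / [P(E | heads)·P(heads) + P(E | tails)·P(tails)] = (1 · 1/2) / (1 · 1/2 + 1 · 1/2) = 1/2. The extra awakenings in the tails world only move this number if I add some further self-locating assumption, which is exactly the assumption I'm not compelled by.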
I'm curious: how does your conception of probability account for logical uncertainty?
So in this case, I agree that if this experiment is repeated many times and every Sleeping Beauty created answered tails, the reference class of Sleeping Beauty agents would have many more correct answers than if the experiment is repeated many times and every Sleeping Beauty created answered heads.
I think there's something tangible here and I should reflect on it.
I separately think, though, that if the actual outcome of each coin flip were recorded, there would be a roughly equal split between heads and tails.
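To make that tension concrete, here's a minimal simulation sketch assuming the canonical setup (fair coin, one awakening on heads, two on tails); the function and variable names are just illustrative:

```python
import random

def simulate(n_experiments: int = 100_000) -> None:
    """Canonical Sleeping Beauty setup: one fair coin flip per experiment;
    heads -> one awakening, tails -> two awakenings."""
    tails_experiments = 0
    total_awakenings = 0
    correct_if_always_tails = 0  # awakenings at which answering "tails" is correct

    for _ in range(n_experiments):
        coin = random.choice(["heads", "tails"])
        awakenings = 2 if coin == "tails" else 1
        total_awakenings += awakenings
        if coin == "tails":
            tails_experiments += 1
            correct_if_always_tails += awakenings

    # Per-experiment frequency of tails: ~0.5 (the recorded coin flips are evenly split).
    print("tails frequency per experiment:", tails_experiments / n_experiments)
    # Per-awakening accuracy of always answering "tails": ~2/3.
    print("accuracy per awakening        :", correct_if_always_tails / total_awakenings)

simulate()
```

Both numbers are frequencies; they just count over different reference classes (experiments vs. awakenings), which seems to be exactly where the definitional question bites.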
And when I was thinking through the question before, it was always about the actual outcome of the coin flip, not about what strategy maximises monetary payoffs under even bets.
While I do think betting odds aren't convincing re: actual probabilities (you can just have asymmetric payoffs on equally probable, mutually exclusive, and jointly exhaustive events), the "reference class of agents being asked this question" framing seems like a more robust rebuttal.
I want to take some time to think on this.
Strong upvoted because this argument genuinely makes me think I might be wrong here.
Much less confident now, and mostly confused.
When I saw the title I thought the post would be a refutation of the material scarcity thesis; I found myself disappointed that it is not.