This reminds me of a lot of discussions I've had with people where we seem to be talking past each other, but can't quite pin down what the disagreement is.
Usually we just end up talking about something else instead that we both seem to derive value from.
It seems to me that the constraints of reality are implicit. I don't think "it can be done by a human" is satisfied by a method requiring time travel with a very specific form of paradox resolution. It sounds like you're arguing that the Church-Turing thesis is simply worded ambiguously.
It looks like Deutschian CTCs are similar to a computer that can produce all possible outputs in different realities, then selectively destroy the realities that don't solve the problem. It's not surprising that you could solve the halting problem in such a framework.
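To make that picture concrete, here is a toy classical sketch of the "branch everywhere, keep only the consistent branches" structure. It is not a CTC or a halting-problem solver; the search space (candidate factors of a made-up number) is finite, and the example is only meant to show how post-selection turns a search into a single verification step.

```python
# Toy classical analogue of "guess in every branch, then keep only the consistent branches".
# This is only a sketch of the post-selection structure described above, not an actual
# CTC (and certainly not a halting-problem solver): the guess space here is finite,
# so ordinary brute force can stand in for the branching.

def postselect_factor(n: int):
    """Branch over every candidate answer, then discard the branches that fail the check."""
    branches = range(2, n)                             # each "reality" guesses a different factor
    consistent = [g for g in branches if n % g == 0]   # destroy the realities that don't work out
    return consistent[0] if consistent else None

print(postselect_factor(91))  # prints 7: the search collapses into a single verification step
```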
Our symbolic conception of numbers is already logarithmic, as order of magnitude corresponds to the number of digits. I think an estimate of a product based on an imaginary slide rule would be roughly equivalent to estimating based on the number of digits and the first digit.
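As a rough illustration of that equivalence (the numbers 347 and 82 are made up, and reading the logs to two decimal places is just a stand-in for slide-rule precision):

```python
import math

# Sketch of the claim: estimating a product from digit counts plus leading digits is
# roughly the same as adding logarithms on a slide rule, since the number of digits
# of x is floor(log10(x)) + 1.

def slide_rule_estimate(x, y):
    # a slide rule effectively adds logs read off to about two decimal places
    return 10 ** (round(math.log10(x), 2) + round(math.log10(y), 2))

def digit_estimate(x, y):
    # approximate each number by (first digit) * 10^(number of digits - 1), then multiply
    def approx(n):
        s = str(n)
        return int(s[0]) * 10 ** (len(s) - 1)
    return approx(x) * approx(y)

print(347 * 82)                      # 28454 (exact)
print(slide_rule_estimate(347, 82))  # ~28184
print(digit_estimate(347, 82))       # 24000, same order of magnitude
```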
Similar to point 2: I find that reading a book in the morning helps my mood. Particularly a physical fiction book.
I've definitely noticed the pattern of habits seeming to improve my life without them feeling like they are improving my life. On a similar note, a lot of habits seem easy to maintain while I'm doing them and obviously beneficial, but when I stop I have no motivation to continue. I don't know why that is, but my hope is that if I notice this hard enough it will become easier for me to recognize that I should do the thing anyway.
I read some of the post and skimmed the rest, but this seems to broadly agree with my current thoughts about AI doom, and I am happy to see someone fleshing out this argument in detail.
[I decided to dump my personal intuition about AI risk below. I don't have any specific facts to back it up.]
It seems to me that there is a much larger possibility space of what AIs can/will get created than the ideal superintelligent "goal-maximiser" AI put forward in arguments for AI doom.
The tools that we have depend more on the specific details of the underlying mechanics, and on how we can wrangle them to do what we want, than on our prior beliefs about how we would expect the tools to behave. I imagine that if you lived before aircraft and imagined a future in which humans could fly, you might think that humans would be flapping giant wings or be pedal-powered or something. While it would be great for that to exist, the limitations of the physics we know how to use require a different kind of mechanism, with different strengths and weaknesses from what we would think of in advance.
There's no particular reason to think that the practical technologies available will lead to an AI capable of power-seeking, just because power-seeking is a side effect of the "ideal" AI that some people want to create. The existing AI tools, as far as I can tell, don't provide much evidence in that direction. Even if a power-seeking AI is eventually practical to create, it may be far from the default and by then we may have sufficiently intelligent non-power-seeking AI.
Perhaps they could be next to the "Reply" button, and fully contained in the comment's container?
The answer is pretty clear with Bayes' Theorem. The world in which the coin lands heads and you get the card has probability 0.0000000005, and the world in which the coin lands tails has probability 0.5. Thus you live in a world with a prior probability of 0.5000000005, so the probability of the coin being heads is 0.0000000005/0.5000000005, or a little under 1 in a billion.
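Spelled out as Bayes' theorem, assuming (as the numbers above imply) that the card shows up with probability $10^{-9}$ given heads and with certainty given tails:

$$P(\text{heads}\mid\text{card}) = \frac{P(\text{card}\mid\text{heads})\,P(\text{heads})}{P(\text{card}\mid\text{heads})\,P(\text{heads}) + P(\text{card}\mid\text{tails})\,P(\text{tails})} = \frac{10^{-9}\cdot 0.5}{10^{-9}\cdot 0.5 + 1\cdot 0.5} = \frac{1}{10^9 + 1}$$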
Given that the worst-case outcome of losing the bet is admitting you can't pay and losing some credibility, you and Adam should take the bet. If you want to (or have to) actually commit to paying, then you have to decide whether you would completely screw over 1 alternate self so that a billion selves can have a bit more money. Given that $100 would not really make a difference to my life in the long run, I think I would not take the bet in this scenario.
Personally I would be interested in a longer post about whatever you have to say about the battery and battery design. You could make a sequence, so that it can be split into multiple posts.