I share your confusion about the first two, but I believe that the question about the subjective passage of time can be dissolved, unless I am misunderstanding your question.
As you mention, the laws of physics are reversible, but entropy gives an arrow of time. That arrow of time is why your brain encodes memories of the past rather than the future. You are not a subjective observer outside of time: you are always experiencing the current moment from the perspective of your brain at that moment. Thus you always have the experience of being in a brain that remembers the past, and that remembers the immediately prior moment as the immediate past. So your experience will always be that the current moment has moved forward in time from the past.

As a thought experiment, imagine that the 4D block universe exists and that some process evaluates the slices at which subjective experience happens. Now imagine that instead of moving forward in time, that process moves backward, from the "end" of time to the Big Bang. What would your subjective experience be in that case? It would still be that you are traveling forward in time, because as you experienced each moment, you would do so with the memory of the past moment, not of the moment the process evaluated before it (which is the temporally later moment).

There is therefore no question here except what gives rise to qualia, and perhaps whether the block-universe view is correct, or whether the universe is actually an evolving system in which only some "now" exists at each point in space.
(In reality, I don't think a block universe makes sense. While I don't understand what gives rise to qualia, all evidence says that it is tied to the execution of the "thinking" algorithm of my brain. A block universe would have no "execution" and so I think would have no qualia unless qualia exist eternally at all places in the block universe where there is a conscious being.)
The author, @Max Harms, is working on a high-quality AI-read audiobook version. He had hoped to release it at the same time as the book, but is currently planning an early-2026 release. There is a prediction market for "When will the Red Heart audiobook come out?", and there is a preview on YouTube.
There are existing cryptographic protocols for "coin flipping"; you should use one of those rather than rolling your own. See https://en.wikipedia.org/wiki/Commitment_scheme#Coin_flipping
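To make the idea concrete, here is a minimal sketch of commit-reveal coin flipping in the style of Blum's protocol, using a SHA-256 hash with a random nonce as the commitment. This is an illustration of the structure, not a vetted implementation; a production scheme needs a careful security analysis of the commitment's hiding and binding properties.

```python
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a bit. Returns (commitment, nonce); the nonce is kept
    secret until the reveal step and makes the commitment hiding."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + bytes([bit])).digest()
    return digest, nonce

def verify(commitment: bytes, nonce: bytes, bit: int) -> bool:
    """Check that (nonce, bit) opens the given commitment."""
    return hashlib.sha256(nonce + bytes([bit])).digest() == commitment

# 1. Alice picks a random bit and sends Bob only the commitment.
alice_bit = secrets.randbelow(2)
alice_commitment, alice_nonce = commit(alice_bit)

# 2. Bob, having seen only the (hiding) commitment, announces his bit.
bob_bit = secrets.randbelow(2)

# 3. Alice reveals her nonce and bit; Bob checks she didn't change them.
assert verify(alice_commitment, alice_nonce, alice_bit)

# 4. The shared fair coin is the XOR of the two bits: neither party
#    could bias it without breaking the commitment.
result = alice_bit ^ bob_bit
```

Because Alice is bound to her bit before seeing Bob's, and Bob never sees Alice's bit before choosing his own, neither side can steer the XOR, which is the whole point of using a commitment here.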
A browser doesn't count as a significant business integration?
I worry that this paper and article amount to safety-washing. They imply that existing safety techniques for LLMs are appropriate for more powerful systems, and they apply a safety mindset from other domains to AI in an inappropriate way. I do think AI safety can learn from other fields, but only from fields that involve an intelligent adversarial agent. Studying whether failure modes are correlated doesn't matter when you have an intelligent adversary who can make normally uncorrelated failure modes happen at the same time. If one is thinking only about current systems, then perhaps such an analysis would be helpful, but both the paper and the article fail to call that out.
Most of this is interesting but unsurprising. Having reflected on it for a bit, I do find one thing surprising: it is very strange that Ilya doesn't know who is paying his lawyers. Worse, he assumes that it is OpenAI and is apparently fine with that. I'm surprised he isn't concerned about a conflict of interest. I assume he has enough money that he could hire his own lawyers if he wanted. I would expect him to hire at least one lawyer himself to ensure that his own interests are represented and to check the work of the other lawyers.
I signed the statement. My concern, which you don't address, is that I think the statement should call for a prohibition on AGI, not just ASI. I don't think there is any meaningful sense in which we can claim that particular developments are likely to lead to AGI but definitely won't lead to ASI. History has shown that any time narrow AI reaches human level, it is already superhuman. Indeed, if one imagines that tomorrow one had a true AGI (I won't define AGI here, but imagine an uploaded human that never needs to sleep or rest), then all one would need to do to make ASI is add more hardware to accelerate its thinking or run parallel copies.
As a professional software developer with 20+ years of experience who has repeatedly tried AI coding assistants and gotten consistently poor results, I am skeptical of even your statement that "The average over all of Anthropic for lines of merged code written by AI is much less than 90%, more like 50%." 50% seems way too high. Or, if it is accurate, then most of that code consists of extraneous changes that aren't part of the core code that executes. For example, I've seen what I believe to be AI-generated code where 3/4 of the API endpoints were unused. They existed only because the AI assumed that the REST endpoint for each entity ought to support every action, even though that didn't make sense in this case.
I think there is a natural tendency for AI enthusiasts to overestimate the amount of useful code they are getting from AI. If we were going to make any statements about how much code was generated by AI at any organization, I think we would need much better data than I have ever seen.
Have you considered that the policies are working correctly for most people with a "normie" communication style? I agree that they should be clearer. However, when I read your description of what they are saying, I think the rule makes sense. It isn't that everything must be entirely fragrance-free; the intended rule seems to be "nothing strongly scented." For example, I've met women who use scented shampoo but whose shampoo you don't notice even up close, and I've met women whose shampoo you can smell from three feet away. The organizers are basically asking that people use reasonable judgment. That may not be sufficient for extremely sensitive people, but it will address a lot of the problem, and by having it in their code of conduct, they can ask people to leave if it becomes a problem.
Your Beantown Stomp statement seems to be the proper way to communicate the actually intended policy.
I don't understand what you mean.
The human brain takes time to process sensory signals, so the qualia experienced are slightly delayed from when the sensory input that gave rise to them entered the brain. In that sense, experience happens over time. But at any moment, there are only the qualia being experienced at that moment. How could it be otherwise? If you then say that you recall that the qualia you had just before were different, well, that recollection is itself a different instant in time, one in which you are experiencing the recall of a memory from your brain.