US laws:
Well, firstly, note that the RAISE Act has $1M–$3M escalating fines explicitly built in.
But yes, my understanding is that if you pay up, you can carry on. However, companies can be fined "per violation", and the law does not precisely define what that means. So the Attorney General can argue that each missed requirement / each non-compliant model is a separate violation. Or, if they're feeling very aggressive, the AG could try to say that each day of non-compliance is a separate violation, but that may not hold up in court.
This is where injunctive relief could come in, to directly order a company to retract a model. As noted, this was only explicitly included in the RAISE Act[1], and that provision looks set to be removed. However, AGs can often get injunctions even without explicit statutory language.
EU AI Act:
No, the consequences will escalate. Fines can be periodic, the maximum fine is huge, and the length of the infraction is a factor in determining the penalty.
And on top of that, companies have a duty to take “immediate corrective actions”. The Commission (with advice from the AI Office) can issue administrative orders requiring the withdrawal of a model, and ignoring such an order can expose a company to very heavy fines or criminal penalties.
Except on whistleblower protections, where SB 53 explicitly includes injunctive relief.
Perhaps it could use time-since-frontpaged only if the karma is below some threshold.
On LessWrong, the frontpage algorithm down-weights older posts based on the time-since-posted, not the time-since-frontpaged. So, if a post doesn't get frontpaged until a few days after posting, then it's unlikely to get many views.
LessWrong has an autofrontpager that works a reasonable amount of the time. Otherwise, posts have to be manually frontpaged by a person. In my experience, this was always quite quick, but my most recent post was not frontpaged until 3 days after it was posted, so AFAICT it never actually appeared on the frontpage (unless you clicked "Load More").
I think the solution is to downweight posts based on the time-since-frontpaged.
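To make that concrete, here is a minimal sketch assuming an HN-style decay formula. I don't know the exact formula or field names LessWrong actually uses, so `frontpagedAt`, the exponent, and the offset are all illustrative:

```typescript
// Hypothetical hotness score with HN-style time decay.
// The real LessWrong formula and schema may differ; this only illustrates
// swapping time-since-posted for time-since-frontpaged.
interface Post {
  karma: number;
  postedAt: Date;       // when the post was published
  frontpagedAt?: Date;  // when a person (or the autofrontpager) frontpaged it
}

function hotnessScore(post: Post, now: Date = new Date()): number {
  // Proposed change: start the decay clock at frontpaging, not posting.
  const decayStart = post.frontpagedAt ?? post.postedAt;
  const ageHours = (now.getTime() - decayStart.getTime()) / (1000 * 60 * 60);
  return post.karma / Math.pow(ageHours + 2, 1.15);
}
```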
Strictly speaking, the version that passed the Senate has now been signed into law.
But to answer your question, no: this post describes what the bill will look like after the agreed-upon "chapter amendments" have been implemented.
Footnote 4 mentions one of the changes that is expected to be made.
I would like this post better if it didn't talk about consciousness / internal experience. The issue is whether the LLM has internal algorithms that are somehow similar / isomorphic to those in human brains.
As it stands, this post implicitly assumes the Camp 2[1] view of consciousness, which I and many others find deeply sus. But the arguments the post puts forward are still relevant from a Camp 1 point of view; they just answer a question about algorithms, not about qualia.
Quoting the key section of the linked post:
Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. In other words, once we've explained the full causal chain that ends with people uttering the sounds kon-shush-nuhs, we've explained all the hard observable facts, and the idea that there's anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.
Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there's nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Therefore, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.
Even if all value is in computations (e.g. everyone lives inside simulations), wouldn't you have the same problems, just one level down? The physical world may be a type of computation, and the computations may resemble the physical world.
So if you wanted to live forever (level 2) or have a mechanical mind / body (level 5), would you have to leave Earth?
Datapoint: I spoke to one Horizon fellow a couple of years ago and they did not care about x-risk.
Nice! I would like to see a visual showing the full decision tree. I think that would be even better for clarifying the different views of consciousness.
Or, even better: at the time when a post is frontpaged, check whether it will actually appear on the frontpage. If it is too old and has too little karma to be seen, then use the time-since-frontpaged.
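Reusing the hypothetical `Post` type and scoring formula from the sketch above, that check might look something like this (the visibility test is simplified to a single cutoff score; a real implementation would compare against the posts currently shown on the frontpage):

```typescript
// Run once, at the moment a post is frontpaged, to decide which timestamp
// the decay should use. All names and thresholds are illustrative.
function chooseDecayStart(
  post: Post,
  frontpagedAt: Date,
  frontpageCutoffScore: number  // score of the last post that is still visible
): Date {
  // Score the post as if the decay had been running since it was posted.
  const ageHours =
    (frontpagedAt.getTime() - post.postedAt.getTime()) / (1000 * 60 * 60);
  const scoreIfUnchanged = post.karma / Math.pow(ageHours + 2, 1.15);

  // If it would already be invisible (too old, too little karma),
  // restart the clock at frontpaging; otherwise keep the current behaviour.
  return scoreIfUnchanged < frontpageCutoffScore ? frontpagedAt : post.postedAt;
}
```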