Strictly speaking, the version that passed the Senate has now been enacted into law.
But to answer your question, no, this post describes what the bill will be like after the agreed-upon "chapter amendments" have been implemented.
Footnote 4 mentions one of the changes that is expected to be made.
I would prefer it if this post didn't talk about consciousness / internal experience. The issue is whether the LLM has some internal algorithms that are somehow similar / isomorphic to those in human brains.
As it stands, this post implicitly assumes the Camp 2[1] view of consciousness, which I and many others find to be deeply sus. But the arguments the post puts forward are still relevant from a Camp 1 point of view; they just answer a question about algorithms, not about qualia.
Quoting the key section of the linked post:
> Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. In other words, once we've explained the full causal chain that ends with people uttering the sounds kon-shush-nuhs, we've explained all the hard observable facts, and the idea that there's anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.
>
> Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there's nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Therefore, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.
Even if all value is in computations (e.g., everyone lives inside simulations), wouldn't you have the same problems, just one level down? The physical world may be a type of computation, and the computations may resemble the physical world.
So if you wanted to live forever (level 2) or have a mechanical mind / body (level 5), would you have to leave Earth?
Datapoint: I spoke to one Horizon fellow a couple of years ago and they did not care about x-risk.
Nice! I would like to see a visual showing the full decision tree. I think that would be even better for clarifying the different views of consciousness.
> It doesn't matter what IQ they have or how rational they were in 2005
This is a reference to Eliezer, right? I really don't understand why he's on Twitter so much. I find it quite sad to see one of my heroes slipping into the ragebait Twitter attractor.
To clarify, the primary complaint from my perspective is not that they published the report a month after external deployment per se, but that the timing of the report indicates they did not perform thorough pre-deployment testing (and did no external testing at all).
And the focus on pre-deployment testing is not really due to any opinion about the relative benefits of pre- vs. post-deployment testing, but because they committed to doing pre-deployment testing, so it's important that they in fact do pre-deployment testing.
On LessWrong, the frontpage algorithm down-weights older posts based on the time-since-posted, not the time-since-frontpaged. So, if a post doesn't get frontpaged until a few days after posting, then it's unlikely to get many views.
LessWrong has an autofrontpager that works a reasonable amount of the time. Otherwise, posts have to be manually frontpaged by a person. In my experience, manual frontpaging has always been quite quick, but my most recent post was not frontpaged until 3 days after it was posted, so AFAICT it never actually appeared on the frontpage (unless you clicked "Load More").
I think the solution is to down-weight posts based on the time-since-frontpaged.
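For concreteness, here's a toy sketch of the difference. This is not LessWrong's actual ranking code; the HN-style decay formula, the `gravity` exponent, and the karma numbers are all illustrative assumptions.

```python
from datetime import datetime, timedelta

def decay_score(karma: float, reference_time: datetime, now: datetime,
                gravity: float = 1.15) -> float:
    """Toy HN-style time-decay score: the fresher reference_time is, the higher the rank.

    Illustrative only; the offset and exponent are made up, not LessWrong's real formula.
    """
    age_hours = (now - reference_time).total_seconds() / 3600
    return karma / (age_hours + 2) ** gravity

now = datetime(2024, 1, 10, 12, 0)
posted_at = now - timedelta(days=3)       # post published 3 days ago
frontpaged_at = now - timedelta(hours=2)  # but only frontpaged 2 hours ago
karma = 40

# Current behaviour: decay measured from time-since-posted, so the post is already buried.
print(round(decay_score(karma, posted_at, now), 2))      # ~0.28
# Proposed fix: decay measured from time-since-frontpaged, so it competes like a fresh post.
print(round(decay_score(karma, frontpaged_at, now), 2))  # ~8.12
```

Under any decay rule of roughly this shape, switching the reference time from posting to frontpaging is enough to give late-frontpaged posts a normal run on the frontpage.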