I strongly downvoted Homework Answer: Glicko Ratings for War. The reason is that it appears to be a pure data dump that isn't intended to actually be read by a human. Since it is a follow-up to a previous post, it might have been better as a comment or an edit on the original post, linking to your GitHub with the data instead.
Looking at your post history, I'd suggest that you could improve the quality of your posts by spending more time on them. There are only a few users who manage to post multiple times a week and consistently get many upvotes.
When you say you were practising Downwell over the course of a month, how many hours was this in total?
Is this what you'd cynically expect from an org regularizing itself or was this a disappointing surprise for you?
I strongly believe that, barring extremely strict legislation, one of the initial tasks given to the first human-level artificial intelligence will be to develop more advanced machine learning techniques. During this period we will see unprecedented technological development, and many alignment paradigms rooted in the empirical behavior of the previous generation of systems may no longer be relevant.
I predict most humans will choose to reside in virtual worlds, and possibly have their brains altered to forget that it's not real.
"AI safety, as in, the subfield of computer science concerned with protecting the brand safety of AI companies"
Made me chuckle.
I enjoyed the read, but I wish it were much shorter: there's a lot of very on-the-nose commentary diluted by meandering dialogue.
I remain skeptical that by 2027 end-users will need to navigate self-awareness or negotiate with LLM-powered devices for basic tasks (70% certainty it will not be a problem). This comes from a belief that end-user devices won't be running the latest and most powerful models, and that argumentative, self-aware behavior will be heavily selected against. Even within an oligopoly, market forces should favor models that are not counterproductive in executing basic tasks.
However, as the story suggests, users may still need to manipulate devices into performing actions loosely deemed morally dubious by a company's PR department.
The premise underlying these arguments is that greater intelligence doesn't necessarily yield self-awareness or agentic behavior. Humans aren't agentic because we're intelligent; we're agentic because it enhances the likelihood of gene propagation.**
In certain models (like MiddleManager-Bot), agentic traits are likely to be actively selected*. But I suspect there will be a substantial effort to ensure your compiler, toaster, etc. aren't behaving agentically, particularly if these traits result in behavior antagonistic to the consumer.
*By selection I mean both through a model's training, and also via more direct adjustment from human and nonhuman programmers.
** A major crux here is the assumption that intelligence doesn't inevitably spawn agency without other forces selecting for it in some way. I have no concrete experience attempting to train frontier models to be or not be agentic, so I could be completely wrong on this point.
This doesn't imply that agentic systems will emerge solely from deliberate selection. There are a variety of selection criteria which don't explicitly specify self-awareness or agentic behavior but are best satisfied by systems possessing those traits.
Is there reason to think the "double descent" seen in observation 1 relates to the traditional "double descent" phenomenon?
My initial guess is no.
That's a good suggestion. I wasn't sure if I could make the question concrete enough for a prediction market. I'm thinking something along the lines of "If Rishi Sunak is removed from office (in the next 3 years), is funding to the Frontier Taskforce reduced by 50% or more within 6 months?"
Yes, perhaps there could be a way of having dialogues edited for readability.