Archetypal Transfer Learning (ATL) is a proposal by @whitehatStoic for what the author argues is a fine-tuning approach that "uses archetypal data" to "embed Synthetic Archetypes". These Synthetic Archetypes are derived from patterns that models assimilate from archetypal data, such as artificial stories. The method yielded a shutdown activation rate of 57.33% in the GPT-2-XL model after fine-tuning... (read more)
AI Evaluations focus on experimentally assessing the capabilities, safety, and alignment of advanced AI systems. These evaluations can be divided into two main categories: behavioral and understanding-based... (read more)
Encultured AI is a for-profit public benefit corporation working to make AI safer and healthier for human beings... (read more)
Poor Air Quality can reduce cognitive functioning[1] and lifespans,[2] and the techniques that improve air quality are also useful for getting rid of aerosolized respiratory pathogens. Improving air quality can be an impactful global health intervention.[3] Many members of the LessWrong community have also put effort into improving the air quality of their own homes or offices, as an implication of instrumental rationality... (read more)
Bureaucracy... (read more)
Focuses on the intersection of frontier AI agents and traditional infrastructure security, including exploit detection, system persistence, and hardware-level attributability
do you still think these are possible to build/define? do you know about any relevant papers?
I have been building something like this as an exploration of better-controllable default developer choices for agent programming primitives, as opposed to "assistant persona + completions API for everything".
A reasoning step is "logically valid" when that kind of step never produces a false conclusion from true premises. For example, in algebra, "Add 2 to both sides of the equation" is valid because it only produces true equations from true equations, while "Divide both sides by x" is invalid because x might be 0. For instance, with x = 0 and y = 2, the equation "2x = (y+1)x" is true while "2 = y + 1" is false; but "2x + 2 = (y+1)x + 2" will be true in every semantic model where the original equation is true.
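As a quick sanity check of this example, here is a small illustrative Python sketch (the function names are ours, not from the original text) showing that adding 2 to both sides preserves truth at x = 0, y = 2, while dividing by x does not:

```python
# Check the algebra example numerically: "add 2 to both sides" preserves
# truth, while "divide both sides by x" can fail because x may be 0.

def original_eq(x, y):          # 2x = (y+1)x
    return 2 * x == (y + 1) * x

def after_adding_2(x, y):       # 2x + 2 = (y+1)x + 2
    return 2 * x + 2 == (y + 1) * x + 2

def after_dividing_by_x(x, y):  # 2 = y + 1 (only legitimate when x != 0)
    return 2 == y + 1

x, y = 0, 2
print(original_eq(x, y))          # True
print(after_adding_2(x, y))       # True: the valid step preserved truth
print(after_dividing_by_x(x, y))  # False: the invalid step did not
```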
More generally in life, there's a question of "did you execute each local step of reasoning correctly", which can be considered apart from "did you arrive at the correct conclusion". Validity is a local property of a reasoning step or sequence; we can (and should) evaluate each step's validity separately from whether we agree with the premises or end up agreeing with the conclusion. For near-logical domains, this asks "Does the next proposition follow (with very high probability, given other things usually believed about the world or explicitly introduced as premises) from the previous proposition?" For probabilistic reasoning, informal validity asks, "Given everything else believed or introduced as a premise, is this next step adjusting probabilities by the right amount?" or "Does this kind of reasoning step in general produce well-calibrated conclusions from well-calibrated premises?"
E.g., consider why the ad hominem fallacy should be seen as "invalid" or a "locally invalid reasoning step" from this viewpoint. Suppose you start out with well-calibrated probabilities (things you say "60%" for happen around 60% of the time). You assign 60% probability that the sky is blue. Then somebody says, "Yeah, well, people who believe in blueskyism are ugly" and you nod and adjust your credence in blueskyism down to 40%. Your odds just went from 3:2 to 2:3, so by Bayes's Rule you should have heard evidence with a likelihood ratio of 4:9 to produce that probability shift. Unless you actually believe that false propositions are 225% as likely as true propositions to be believed by ugly people, you should expect that updating on an ad hominem argument will produce ill-calibrated conclusions in expectation from well-calibrated premises.
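The arithmetic behind the 4:9 and 225% figures, as a small illustrative Python sketch:

```python
from fractions import Fraction

# Credence in blueskyism before and after hearing the ad hominem.
prior = Fraction(60, 100)        # 60%, i.e. odds of 3:2
posterior = Fraction(40, 100)    # 40%, i.e. odds of 2:3

prior_odds = prior / (1 - prior)              # 3/2
posterior_odds = posterior / (1 - posterior)  # 2/3

# Bayes's Rule in odds form: posterior odds = prior odds * likelihood ratio.
likelihood_ratio = posterior_odds / prior_odds
print(likelihood_ratio)      # 4/9: the evidence strength the update implies
print(1 / likelihood_ratio)  # 9/4 = 2.25, the "225%" figure in the text
```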
The ML Alignment & Theory Scholars (MATS) Program is an independent research and educational seminar program that provides emerging researchers with mentorship, talks & workshops, research support, and connections with the SF Bay Area and London AI safety research communities.
ATOW (2026-04-03), Moore et al. (2026) is probably the best academic account of LLM-induced psychosis. They "analyze logs of conversations with LLM chatbots from 19 users who report having experienced psychological harms from chatbot use", where the users mostly came from "a support group for such chatbot users."
Although slavery is usually involuntary and involves coercion, there are also cases where people voluntarily enter into slavery, for example to pay a debt or to earn money due to poverty.
CDT agents don't consider the logical impacts of their decision algorithms' outputs when choosing actions, only the physical consequences of their physical act. Whenever a CDT agent is put in a situation where it has to make a decision, it considers multiple hypotheticals, one for each decision it could make. In a CDT agent, the only difference between these hypotheticals is the physical act in the moment of that act, and what happens physically / causally downstream from that. This means that when CDT is faced with something trying to predict its actions, CDT imagines its decision having no effect on the prediction of that decision.
Scalable oversight is an approach to the problem of providing reliable supervision of outputs from AIs, even as they become smarter than humans. Often groups of weaker AIs supervise a stronger AI, or AIs are set in a debate with each other.
People used to refer to scalable oversight as a set of AI alignment techniques, but these techniques usually work at the level of incentives given to the AIs, and have less to do with architecture.
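As a purely hypothetical sketch (not a protocol from the text above), the debate setup can be thought of as letting a weaker judge evaluate competing arguments instead of answering the question directly:

```python
# Toy sketch of the debate-style setup: two stronger "debater" models argue
# opposite answers, and a weaker "judge" only has to compare their arguments.
# Everything here is a stand-in; real systems would use LLM calls.

def debate(question, debater, judge):
    argument_yes = debater(question, stance="yes")
    argument_no = debater(question, stance="no")
    # The judge never answers the question itself; it only adjudicates
    # between the two finished arguments, which is assumed to be easier.
    return judge(question, argument_yes, argument_no)

def stub_debater(question, stance):
    return f"Case for '{stance}' on: {question}"

def stub_judge(question, arg_yes, arg_no):
    return "yes" if len(arg_yes) >= len(arg_no) else "no"

print(debate("Is the proof correct?", stub_debater, stub_judge))
```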
The name was suggested by niplav in “AI companies are unlikely to make high-assurance safety cases if timelines are short”.
When CDT makes its decisions, it only thinks it controls things causally downstream of its actions. UDT, by contrast, is choosing as if it controls every part of reality that is logically downstream of its logical output. This allows it to determine a wide range of other facts across the universe that are logically correlated with itself, like what is or has been reliably predicted about its present decision, or what other agents sufficiently similar to itself will choose. Son of CDT is somewhere in the middle. It acts as if it controls only the things logically correlated with its actions that are causally downstream of its moment of original creation.
If a Son of CDT agent goes on to create further agents, all of those agents will have the same magic moment. They will all care about whether or not Omega's knowledge of them is causally downstream of the moment the CDT agent first wrote Son-of-CDT code.
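As an illustration of the contrast (a toy Newcomb-style calculation with made-up payoffs, not anything from the original text): CDT holds the predictor's guess fixed while imagining each act, whereas UDT treats the prediction as following from its own decision.

```python
# Toy Newcomb-style payoffs: an opaque box holds $1,000,000 iff the predictor
# predicted "one-box"; a transparent box always holds $1,000.
BIG, SMALL = 1_000_000, 1_000

def payoff(action, predicted):
    big = BIG if predicted == "one-box" else 0
    return big + (SMALL if action == "two-box" else 0)

def cdt_value(action, predicted):
    # CDT: the prediction is not causally downstream of the act,
    # so it is held fixed while the hypothetical acts vary.
    return payoff(action, predicted)

def udt_value(action):
    # UDT: the prediction is logically downstream of the agent's output,
    # so choosing an action also "chooses" what was predicted.
    return payoff(action, predicted=action)

for action in ("one-box", "two-box"):
    print(action,
          "CDT (prediction fixed):", cdt_value(action, predicted="one-box"),
          "UDT:", udt_value(action))
# CDT prefers two-boxing whatever the fixed prediction is; UDT one-boxes.
```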
In doxastic modal logic, the statement "P is a hyperstition" is written as □P→P. Modal reasoners that satisfy Löb's Theorem believe all personal hyperstitions. This can cause some problems for modal embedded agents. Löbian cooperation works by making mutual cooperation a collective hyperstition.
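A minimal formal sketch (standard material, written in LaTeX here for concreteness) of why a Löbian reasoner that proves "P is a hyperstition" also comes to believe P:

```latex
% "P is a hyperstition": if P is believed, then P is true.
\text{Hyperstition}(P) \;\equiv\; (\Box P \rightarrow P)

% Loeb's Theorem for a provability/belief modality:
\vdash \Box(\Box P \rightarrow P) \rightarrow \Box P

% So if the reasoner proves the hyperstition claim, it proves P;
% and then, by the hyperstition itself, P actually holds.
```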
The main problems with CEV include, firstly, the great difficulty of implementing such a program - “If one attempted to write an ordinary computer program using ordinary computer programming skills, the task would be a thousand lightyears beyond hopeless.” Secondly, the possibility that human values may not converge. Yudkowsky considered CEV obsolete almost immediately after its publication in 2004. He states that there's a "principled distinction between discussing CEV as an initial dynamic of Friendliness, and discussing CEV as a Nice Place to Live" and that his essay was essentially conflating the two definitions.

We used to have a feature for crossposting to the EA Forum. It caused a lot of bugs that were difficult to deal with and didn't feel like it was pulling its weight, so we removed it in the latest update.
hey Chris and Mick! wanna include Atlas Computing? we're a fieldbuilding org scoping problems in AGI risk so that recruiting expertise to lead orgs working on them is easier.
we're also hiring: https://atlascomputing.org/jobs
our one-pager is here:
https://docs.google.com/document/d/1v9yVAkfnjrFwsp3jH5aYTwfwjVBsNYND/edit?usp=sharing&ouid=109085206565751232228&rtpof=true&sd=true