Quick Takes

shortplav
niplav20h*70

epistemic status: Going out on a limb and claiming to have solved an open problem in decision theory[1] by making some strange moves. Trying to leverage Cunningham's law. Hastily written.

p(the following is a solution to Pascal's mugging in the relevant sense)≈25%[2].

Okay, setting (also here in more detail): You have a Solomonoff inductor with some universal semimeasure as a prior. The issue is that the utility of programs can grow faster than your universal semimeasure can penalize them, e.g. a complexity prior has busy-beaver-like programs that produce ... (read more)
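A minimal formalization of the failure mode (my notation, not necessarily the post's): with a length prior the expected utility is

$$\mathbb{E}[U] \;=\; \sum_{p} 2^{-\ell(p)}\, U(p),$$

and if for every $n$ there is a program of length about $n$ whose output is worth on the order of $BB(n)$ utility (a busy-beaver-like program), then individual terms $2^{-n}\,BB(n)$ are unbounded, so the sum diverges and naive expected-utility maximization is undefined.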

Showing 3 of 9 replies
Jeremy Gillen2h20

Does that sound right?

Can't give a confident yes because I'm pretty confused about this topic, and I'm pretty unhappy currently with the way the leverage prior mixes up action and epistemics. The issue about discounting theories of physics if they imply high leverage seems really bad? I don't understand whether the UDASSA thing fixes this. But yes.

That avoids the "how do we encode numbers" question that naturally raises itself.

I'm not sure how natural the encoding question is, there's probably an AIT answer to this kind of question that I don't know.

4interstice10h
The "UDASSA/UDT-like solution" is basically to assign some sort of bounded utility function to the output of various Turing machines weighted by a universal prior, like here. Although Wei Dai doesn't specify that the preference function has to be bounded in that post, and he allows preferences over entire trajectories(but I think you should be able to do away with that by having another Turing machine running the first and evaluating any particular property of its trajectory) "Bounded utility function over Turing machine outputs weighted by simplicity prior" should recover your thing as a special case, actually, at least in the sense of having identical expected values. You could have a program which outputs 1 utility with probability 2^-[(log output of your utility turing machine) - (discount factor of your utility turing machine)]. That this is apparently also the same as Eliezer's solution suggests there might be convergence on a unique sensible way to do EU maximization in a Turing-machine-theoretic mathematical multiverse.
4niplav9h
It's a bit of a travesty there's no canonical formal write-up of UDASSA, given all the talk about it. Ugh, TODO for working on this I guess.
Mo Putera's Shortform
Mo Putera3h172

Over a decade ago I read this 17-year-old passage from Eliezer:

When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself.  Marcello thought for a moment and said "John Conway—I met him at a summer math camp."  Darn, I thought, he thought of someone, and worse, it's some ultra-famous old guy I can't grab.  I inquired how Marcello had arrived at the judgment.  Marcello said, "He just struck me as having a tremendous amount of mental horsepow

... (read more)
Mikhail Samin's Shortform
Mikhail Samin8h90

I want to make a thing that talks about why people shouldn't work at Anthropic on capabilities and all the evidence that points in the direction of them being a bad actor in the space, bound by employees whom they have to deceive.

A very early version of what it might look like: https://anthropic.ml 

Help needed! Email me (or DM on Signal) ms@contact.ms (@misha.09) 

Ruby4h20

I recall a video circulating that showed Dario had changed his position on racing with China, which feels relevant here. People can of course change their minds, but I still dislike it.

7Zach Stein-Perlman5h
fwiw I agree with most but not all details, and I agree that Anthropic's commitments and policy advocacy have a bad track record, but I think that Anthropic's capabilities work is nevertheless net positive, because Anthropic has way more capacity and propensity to do safety stuff than other frontier AI companies. I wonder what you believe about Anthropic's likelihood of noticing risks from misalignment relative to other companies, or of someday spending >25% of internal compute on (automated) safety work.
kave's Shortform
kave7d133

There has been a rash of highly upvoted quick takes recently that don't meet our frontpage guidelines. They are often timely, perhaps because they're political, are pitching something to the reader, or are inside baseball. These are all fine or even good things to write on LessWrong! But I (and the rest of the moderation team I talked to) still want to keep the content on the frontpage of LessWrong timeless.

Unlike with posts, we don't go through each quick take and manually assign it to be frontpage or personal (and posts are treated as personal until they're actively f... (read more)

Showing 3 of 23 replies
Zach Stein-Perlman5h20

I observe that https://www.lesswrong.com/posts/BqwXYFtpetFxqkxip/mikhail-samin-s-shortform?commentId=dtmeRXPYkqfDGpaBj isn't frontpage-y but remains on the homepage even after many mods have seen it. This suggests that the mods were just patching the hack. (But I don't know what other shortforms they've hidden, besides the political ones, if any.)

11Ben Pace4d
There is a strong force in web forums to slide toward news and inside-baseball; the primary goal here is to fight against that. It is a bad filter for new users if a lot of what they see on first visiting the LessWrong homepage is discussions of news, recent politics, and the epistemic standards of LessWrong. Many good users are not attracted by these, and for those not put off, it's bad culture to set this as the default topic of discussion. (Forgive me if I'm explaining what is already known; I'm posting in case people hadn't heard this explanation before; we talked about it a lot when designing the frontpage distinction in 2017/2018.)
2Neel Nanda4d
I hadn't heard (or didn't recall) that rationale, thanks! I wasn't tracking the culture-setting-for-new-users facet; that seems reasonable and important.
Wei Dai's Shortform
Wei Dai16hΩ237613

Some of Eliezer's founder effects on the AI alignment/x-safety field that seem detrimental and persist to this day:

  1. Plan A is to race to build a Friendly AI before someone builds an unFriendly AI.
  2. Metaethics is a solved problem. Ethics/morality/values and decision theory are still open problems. We can punt on values for now but do need to solve decision theory. In other words, decision theory is the most important open philosophical problem in AI x-safety.
  3. Academic philosophers aren't very good at their jobs (as shown by their widespread disagreements,
... (read more)
Showing 3 of 11 replies
4Jan_Kulveit8h
[low effort list] Bottlenecks/issues/problems:
- Philosophy has worse short feedback loops than e.g. ML engineering -> in all sorts of processes like MATS or PIBBSS admissions it is harder to select for philosophical competence, and also harder to self-improve.
- Incentives: obviously, stuff like being an actual expert in pretraining can get you a lot of money and respect in some circles; even many prosaic AI safety / dual-use skills like mech interpretability can get you maybe less money than pretraining, but still a lot of money if you work at AGI companies, and also a decent amount of status in the ML community and the AI safety community; improving philosophical competence may get you some recognition, but only among a relatively small and weird group of people.
- The issue Wei Dai is commenting on in the original post: founder effects persist to this day, and there is also some philosophy-negative prior in STEM.
- Idk, lack of curiosity? LLMs have read it all, so it's easy to check if there is some existing thinking on a topic.
2Raemon8h
Do you have your own off-the-cuff guesses about how you'd tackle the short feedback-loops problem? Also, is it more like we don't know how to do short feedback loops, or more like we don't even know how to do long/expensive loops?
M. Y. Zuo6h-10

There’s a deeper problem: how do we know there is a feedback loop?

I’ve never actually seen a worked-out proof of, well, any complex claim on this site using standard logical notation… (beyond pure math and trivial tautologies)

At most there’s a feedback loop on each other’s hand-wavy arguments that are claimed to be proof of this or that. But nobody ever actually delivers the goods, so to speak, such that they can be verified.

Tricular's Shortform
Tricular1d60

People look into universal moral frameworks like utilitarianism and EA because they lack the self-confidence to take a subjective, personal point of view. They need to support themselves with an "objective" system to feel confident that they are doing the correct thing. They look for external validation.

Showing 3 of 7 replies
1Tricular7h
I'm just going to say a few things without thinking much about them. I believe that a natural, healthy reaction to shoulds is to flinch away (a should signals something going wrong, something you think you need to do but don't actually want to), and a lack of it signals either a strong tendency to take things very literally and a strong sense of purpose, or, idk, like, how long can one go at it? It's literally painful, so why keep doing it, what's the reason to follow shoulds until you are depressed? Why does one get stuck looking at the world through a stiff binary lens of good and bad? This is only one way to relate to the world. Why keep doing this if not due to wanting to overwrite your own free will?
1Jesper L.9h
I challenge anyone to show I am wrong in any way...
Tricular7h10

I didn't upvote or react in any way because I don't understand how gender inequality is related to those issues, unless you mean things such as "if more women were in government it would surely be better for all of us", which I somewhat agree with, but I also don't think that sentence can be true in the same way GiveWell cost-effectiveness estimates can be.

Shortform
Cleo Nardo2d290

How Exceptional is Philosophy?

Wei Dai thinks that automating philosophy is among the hardest problems in AI safety.[1] If he's right, we might face a period where we have superhuman scientific and technological progress without comparable philosophical progress. This could be dangerous: imagine humanity with the science and technology of 1960 but the philosophy of 1460!

I think the likelihood of philosophy ‘keeping pace’ with science/technology depends on two factors:

  1. How similar are the capabilities required? If philosophy requires fundamentally differ
... (read more)
Showing 3 of 16 replies
Wei Dai8h20

I'm curious what you say about "which are the specific problems (if any) where you specifically think 'we really need to have solved philosophy / improved-a-lot-at-metaphilosophy' to have a decent shot at solving this?'"

Assuming by "solving this" you mean solving AI x-safety or navigating the AI transition well, I just posted a draft about this. Or if you already read that and are asking for an even more concrete example, a scenario I often think about is an otherwise aligned ASI, some time into the AI transition when things are moving very fast (from a h... (read more)

2Garrett Baker12h
I think even still, if these are the claims he's making, none of them seem particularly relevant to the question of "whether the mechanisms we expect to automate science and math will also automate philosophy".
3MondSemmel17h
I'm not convinced by this response (incidentally here I've found a LW post making a similar claim). If your only justification for "is move X best" is "because I've tried all others", that doesn't exactly seem like usefully accumulated knowledge. You can't generalize from it, for one thing. And for philosophy, if we're still only on the level of endless arguments and counterarguments, that doesn't seem like useful philosophical progress at all, certainly not something a human or AI should use as a basis for further deductions or decisions. What's an example of useful existing knowledge we've accumulated that we can't in retrospect verify far more easily than we acquired it?
Mikhail Samin's Shortform
Mikhail Samin1d279

Question: does LessWrong have any policies/procedures around accessing user data (e.g., private messages)? E.g., if someone from Lightcone Infrastructure wanted to look at my private DMs or post drafts, would they be able to without approval from others at Lightcone/changes to the codebase?

Showing 3 of 12 replies
Vaniver9h90

Specifically, this is the privacy policy inherited from when LessWrong was a MIRI project; to the best of my knowledge, it hasn't been updated.

4Mikhail Samin15h
Thanks for the response; my personal concerns[1] would be somewhat alleviated, without any technical changes, by:
* Lightcone Infrastructure explicitly promising not to look at private messages unless a counterparty agrees to that (e.g., because a counterparty reports spam);
* Everyone with such access explicitly promising to tell others at Lightcone Infrastructure when they access any private content (DMs, drafts).
1. ^ Talking to a friend about an incident made me lose trust in LW's privacy unless it explicitly promises that privacy.
2Ben Pace12h
Second one seems reasonable.  Clarifying in the first case: If Bob signs up and DMs 20 users, and one reports spam, are you saying that we can only check his DM, or that at this time we can then check a few others (if we wish to)?
Jesper L.'s Shortform
Jesper L.10h10

Two core beliefs about AI to question


1)

Mainstream belief: Rational AI agents (situationally aware, optimize decisions, etc.) are superior problem solvers, especially if they can logically motivate their reasoning.

Alternative possibility: Intuition, abstraction, and polymathic guessing will outperform rational agents in achieving competitive problem-solving outcomes. Holistic reasoning at scale will force-solve problems intractable for much more formal agents, or at least outcompete them in speed/complexity.

2)

Mainstream belief: Non-sentient machines will eventually r... (read more)

ryan_greenblatt's Shortform
ryan_greenblatt16h150

In Improving the Welfare of AIs: A Nearcasted Proposal (from 2023), I proposed talking to AIs through their internals via things like ‘think about baseball to indicate YES and soccer to indicate NO’. Based on the recent paper from Anthropic on introspection, it seems like this level of cognitive control might now be possible:

Communicating to AIs via their internals could be useful for talking about welfare/deals because the internals weren't ever trained against, potentially bypassing strong heuristics learned from training and also making it easier to con... (read more)
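For intuition, here's a minimal, self-contained sketch of the kind of linear readout this would rely on. The activations below are synthetic numpy clusters standing in for real residual-stream activations, and the "baseball"/"soccer" labels are just the example from the proposal; nothing here is Anthropic's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # stand-in for the model's hidden size

# Synthetic stand-ins for activations collected while the model "thinks about"
# each concept; a real setup would extract these from the model's internals.
baseball_acts = rng.normal(loc=+1.0, scale=1.0, size=(200, d))  # YES
soccer_acts = rng.normal(loc=-1.0, scale=1.0, size=(200, d))    # NO

# Difference-of-means probe direction.
direction = baseball_acts.mean(axis=0) - soccer_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

def decode(activation: np.ndarray) -> str:
    """Read YES/NO out of a single activation by projecting onto the direction."""
    return "YES" if activation @ direction > 0 else "NO"

print(decode(rng.normal(loc=+1.0, size=d)))  # likely "YES"
print(decode(rng.normal(loc=-1.0, size=d)))  # likely "NO"
```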

4ryan_greenblatt15h
Also, can models now be prompted to trick probes? (My understanding is this doesn't work for relatively small open source models, but maybe SOTA models can now do this?)
J Bostock10h20

Has anyone done any experiments into whether a model can interfere with the training of a probe (like that bit in the most recent Yudtale) by manipulating its internals?

sarahconstantin's Shortform
sarahconstantin11h20

links 10/30/25: https://roamresearch.com/#/app/srcpublic/page/10-30-2025

nikola's Shortform
Nikola Jurkovic1d5511

Anthropic wrote a pilot risk report where they argued that Opus 4 and Opus 4.1 present very low sabotage risk. METR independently reviewed their report and we agreed with their conclusion. 

During this process, METR got more access than during any other evaluation we've historically done, and we were able to review Anthropic's arguments and evidence presented in a lot of detail. I think this is a very exciting milestone in third-party evaluations! 

I also think that the risk report itself is the most rigorous document of its kind. AGI companies wil... (read more)

dmz14h10

I made a linkpost for the report here: https://www.alignmentforum.org/posts/omRf5fNyQdvRuMDqQ/anthropic-s-pilot-sabotage-risk-report-2

3ryan_greenblatt16h
Copying over most of what I wrote about this on X/Twitter:

It's great that Anthropic did a detailed risk report on sabotage risk (for Opus 4) and solicited an independent review from METR. I hope other AI companies do similar analysis+reporting+transparency about risk with this level of rigor and care. [...]

I think this sort of moderate-access third-party review combined with a detailed (and thoughtful) risk report can probably provide a reasonably accurate picture of the current situation with respect to risk (if we assume that AI companies and their employees don't brazenly lie). That said, it's not yet clear how well this sort of process will work when risk is large (or at least plausibly large) and thus there are higher levels of pressure. Selecting a bad/biased third-party reviewer for this process seems like a particularly large threat.

As far as I can tell, Anthropic did a pretty good job with this risk report (at least procedurally), but I haven't yet read the report in detail.
Wei Dai's Shortform
Wei Dai1y3511

I'm increasingly worried that philosophers tend to underestimate the difficulty of philosophy. I've previously criticized Eliezer for this, but it seems to be a more general phenomenon.

Observations:

  1. Low expressed interest in metaphilosophy (in relation to either AI or humans)
  2. Low expressed interest in AI philosophical competence (either concern that it might be low, or desire/excitement for supercompetent AI philosophers with Jupiter-sized brains)
  3. Low concern that philosophical difficulty will be a blocker of AI alignment or cause of AI risk
  4. High confiden
... (read more)
Showing 3 of 32 replies
TAG15h20

The Problem of the Criterion, which is pretty much the same as the Münchhausen Trilemma.

"Moreover, its [philosophy's] central tool is intuition, and this displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms." -- Rob Bensinger, Philosophy, a diseased discipline.

What's the problem?

It's not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning, it is t... (read more)

7Chris_Leong1y
The FLI Pause letter didn't achieve a pause, but it dramatically shifted the Overton Window.
2TsviBT1y
From scratch but not from scratch. https://www.lesswrong.com/posts/noxHoo3XKkzPG6s7E/most-smart-and-skilled-people-are-outside-of-the-ea?commentId=DNvmP9BAR3eNPWGBa https://tsvibt.blogspot.com/2023/09/a-hermeneutic-net-for-agency.html
koanchuk's Shortform
koanchuk2d1-1

Suicide occupies a strange place in agent theory. It is the one goal whose attainment is not only impossible to observe, but whose attainment hinges on the impossibility of it being observed by the agent.

In some cases, this is resolved by a transfer of agency to the thing for whom the agent is in fact a sub-agent and is itself experiencing selective pressure, e.g. in the case of the beehive observing the altruistic suicide of an individual bee defending it. This behaviour disappears once the sub-agent experiences selective pressures that are independent fr... (read more)

Showing 3 of 4 replies
1koanchuk1d
Death precludes not only the ability to make observations but also the ability to make inferences based on indirect evidence or deduction, as is the case with your philanthropic values being actualized as a result of your actions.
3Vladimir_Nesov1d
Future causally unobserved facts are accessible from the past via inference from past data or abstract principles. It's called "prediction".
koanchuk15h-10

The fact in question is not just unobserved, but unobservable because its attainment hinges on losing one's ability to make the observation.

Nina Panickssery's Shortform
Nina Panickssery1d144

Whenever I read yet another paper or discussion of activation steering to modify model behavior, my instinctive reaction is to slightly cringe at the naiveté of the idea. Training a model to do some task only to then manually tweak some of the activations or weights using a heuristic-guided process seems quite un-bitter-lesson-pilled. Why not just directly train for the final behavior you want—find better data, tweak the reward function, etc.?

But actually there may be a good reason to continue working on model-internals control (i.e. ways of influencing mo... (read more)
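For readers who haven't seen it in code: here's a minimal sketch of what activation steering means mechanically, using a toy linear layer as a stand-in for a transformer block and a random steering vector (in practice the vector would come from e.g. contrastive prompts). It isn't any particular paper's implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 16
block = nn.Linear(d_model, d_model)     # toy stand-in for a transformer block
steering_vector = torch.randn(d_model)  # in practice: derived from data, not random

def steer(module, inputs, output):
    # Nudge the block's output activations along the steering direction.
    return output + 4.0 * steering_vector

handle = block.register_forward_hook(steer)
x = torch.randn(1, d_model)
steered = block(x)
handle.remove()
unsteered = block(x)
print((steered - unsteered).norm())  # size of the intervention
```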

Showing 3 of 9 replies
Daniel Kokotajlo15h20

Well, mainly I'm saying that "Why not just directly train for the final behavior you want" is answered by the classic reasons why you don't always get what you trained for. (The mesaoptimizer need not have the same goals as the optimizer; the AI agent need not have the same goals as the reward function, nor the same goals as the human tweaking the reward function.) Your comment makes more sense to me if interpreted as about capabilities rather than about those other things.

1Sheikh Abdur Raheem Ali1d
Tinker is an API for LoRA PEFT. You don’t mention it directly, but it’s trendy enough that I thought your comment was a reference to it.
3faul_sname22h
Several such APIs exist. My thought was "I'd like to play with the Llamascope SAE features without having to muck about with vLLM, and Together lets you upload a LoRA directly", and I failed to notice that the SAE was for the base model and Together only supports LoRAs for the instruct model. The fun thing about this LoRA hack is that you don't actually have to train the LoRA: if you know the outlier direction+magnitude for your model and the activation addition you want to apply, you can write straight to the weights. The unfun thing is that it's deeply cursed and also doesn't even save you from having to mess with vLLM. Edit: on reflection, I do think rank-1 LoRAs might be an underappreciated interpretability tool.
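A numpy sketch of the weight-writing trick as I understand it (the outlier direction, its magnitude, and all dimensions below are made up for illustration): because the outlier component of the residual stream has a roughly fixed magnitude, a rank-1 update can smuggle in an approximately input-independent activation addition.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32
W = rng.normal(size=(d, d))        # some weight matrix reading the residual stream
outlier_dir = rng.normal(size=d)
outlier_dir /= np.linalg.norm(outlier_dir)
outlier_mag = 50.0                 # assumed near-constant magnitude along that direction
steering_vec = rng.normal(size=d)  # the activation addition we want

# Rank-1 "LoRA" written by hand: contributes (x . outlier_dir / outlier_mag) * steering_vec,
# which is ~ steering_vec whenever x carries the usual outlier component.
delta_W = np.outer(steering_vec, outlier_dir) / outlier_mag

x = rng.normal(size=d) + outlier_mag * outlier_dir
print(np.linalg.norm(((W + delta_W) @ x - W @ x) - steering_vec))  # ~ 0
```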
Liam H.'s Shortform
Liam Hodgkinson16h10

To progress understanding in learning theory, I feel it is important to establish some form of "hierarchy" of key factors in deep learning methodology, in order from most critical for good performance, to least critical. I believe this hierarchy might help to identify the strengths of one theory over another.

My proposed order of importance is as follows:

  1. Initialization Scheme
    The initialization scheme of the neural network appears to be the most impactful property of training neural networks: if the initialization is not appropriately set, training will eith
... (read more)
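A quick numerical illustration of why the initialization scheme sits at the top of the list (my toy example, not the author's): in a deep ReLU stack, activation variance either collapses or explodes with depth unless the weight variance is scaled to the layer width.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 512, 30

def output_variance(std):
    """Push a random input through a deep ReLU stack with weights drawn at the
    given std and return the variance of the final activations."""
    x = rng.normal(size=width)
    for _ in range(depth):
        x = np.maximum(rng.normal(scale=std, size=(width, width)) @ x, 0.0)
    return x.var()

print(output_variance(0.01))                  # variance collapses toward zero
print(output_variance(np.sqrt(2.0 / width)))  # He-style scaling: variance stays O(1)
print(output_variance(0.1))                   # variance explodes
```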
Luc Brinkman's Shortform
Luc Brinkman17h30

AI Safety Pledge

Inspired by the Founder’s Pledge and the 10% Pledge, we can invite people transitioning to an AI safety career to make an AI Safety Pledge. It could look something like this:

  1. I pledge to spend the coming years of my career on AI safety.
  2. If I don’t manage to do so, for example because I can’t find a job in AI Safety, I will donate 10% of my income to the AI safety movement.
  3. If I ever do decide to move back into AI safety, I can receive back my contributions to support my AI safety work.

Note: this is a very early idea, not a fully fledged proposa... (read more)

xpostah's Shortform
samuelshadrach20h20

If I want to make lesswrongers really mad, I should write an article about how an arms race over human genetic engineering (US v China, parent v parent) would in the limit eliminate everything that makes us human, just as an arms race between digital minds would.

From Meditations on Moloch by Scott Alexander, quoting Zack Davis:

I am a contract-drafting em,

The loyalest of lawyers!

I draw up terms for deals ‘twixt firms

To service my employers!

But in between these lines I write

Of the accounts receivable,

I’m stuck by an uncanny fright;

The worl

... (read more)
Nisan19h62

original source

Buck's Shortform
Buck4d*6643

I'd be really interested in someone trying to answer the question: what updates on the a priori arguments about AI goal structures should we make as a result of empirical evidence that we've seen? I'd love to see a thoughtful and comprehensive discussion of this topic from someone who is both familiar with the conceptual arguments about scheming and also relevant AI safety literature (and maybe AI literature more broadly).

Maybe a good structure would be, from the a priori arguments, identifying core uncertainties like "How strong is the imitative prior?" A... (read more)

Showing 3 of 7 replies
David Johnston21h9-1

I don't know about 2020 exactly, but I think since 2015 (being conservative), we do have reason to make quite a major update, and that update is basically that "AGI" is much less likely to be insanely good at generalization than we thought in 2015.

Evidence is basically this: I don't think "the scaling hypothesis" was obvious at all in 2015, and maybe not even in 2020. If it was, OpenAI could not have caught everyone with their pants down by investing early in scaling. But if people mostly weren't expecting massive data scale-ups to be the road to AGI, what... (read more)

3asher1d
I think this is a really good answer, +1 to points 1 and 3! I'm curious to what degree you think labs have put in significant effort to train away sycophancy. I recently ran a poll of about 10 people, some of whom worked at labs, on whether labs could mostly get rid of sycophancy if they tried hard enough. While my best guess was 'no,' the results were split around 50-50. (Would also be curious to hear more lab people's takes!) I'm also curious how reading model chain-of-thought has updated you, both on the sycophancy issue and in general. 
2StanislavKrym1d
Didn't Kimi K2, which was trained mostly on RLVR and self-critique instead of RLHF, end up LESS sycophantic than anything else, including Claude 4.5 Sonnet, despite the situational awareness that Claude, unlike Kimi, has? While mankind doesn't have that many different models around 4o's level of ability, Adele Lopez claimed that DeepSeek believes itself to be writing a story and that 4o wants to eat your life, and conjectured in private communication that "the different vibe is because DeepSeek has a higher percentage of fan-fiction in its training data, and 4o had more intense RL training".[1]

RL seems to move the CoT towards being harder to understand (e.g. the CoT containing armies of dots, as happened with GPT-5) unless mitigated by paraphrasers. As for CoTs containing slop, humans also have CoTs which include slop until the right idea somehow emerges.

1. ^ IMO, a natural extension would be that 4o was raised on social media and, like influencers, wishes to be liked. This was also reinforced by RLHF, or had 4o conclude that humans like sycophancy. Anyway, 4o's ancestral environment rewarded sycophancy, and things rewarded by the ancestral environment are hard to unlike.
Shortform
Cleo Nardo16d*821

What's the Elo rating of optimal chess?

I present four methods to estimate the Elo Rating for optimal play: (1) comparing optimal play to random play, (2) comparing optimal play to sensible play, (3) extrapolating Elo rating vs draw rates, (4) extrapolating Elo rating vs depth-search.

1. Optimal vs Random

Random plays completely random legal moves. Optimal plays perfectly. Let ΔR denote the Elo gap between Random and Optimal. Random's expected score is given by E_Random = P(Random wins) + 0.5 × P(Random draws). This is related to Elo gap via the formula E_Ran... (read more)
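For reference, this is the standard Elo relation these methods invert (the 1e-10 in the example is illustrative, not the post's estimate):

```python
import math

def elo_gap(expected_score: float) -> float:
    """Invert E = 1 / (1 + 10**(gap / 400)), where E is the weaker player's
    expected score (P(win) + 0.5 * P(draw)) and gap = R_strong - R_weak."""
    return 400.0 * math.log10(1.0 / expected_score - 1.0)

print(elo_gap(1e-10))  # an expected score of 1e-10 implies a gap of ~4000 Elo
```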

Showing 3 of 29 replies
1David Joshua Sartor1d
Your description of EVGOO is incorrect; you describe a Causal Decision Theory algorithm, but (assuming the opponent also knows your strategy 'cause otherwise you're cheating) what you want is LDT. (Assuming they only see each others' policy for that game, so an agent acting as eg CDT is indistinguishable from real CDT, then LDT is optimal even against such fantastic pathological opponents as "Minimax if my opponent looks like it's following the algorithm that you the reader are hoping is optimal, otherwise resign" (or, if they can see each others' policy for the whole universe of agents you're testing, then LDT at least gets the maximum aggregate score).)
cosmobobak21h10

I'll note that CDT and FDT prescribe identical actions against Stockfish, which is the frame of mind I had when writing.

More to your point - I'm not sure that I am describing CDT:
"always choose the move that maximises your expected value (that is, p(win) + 0.5 * p(draw)), taking into account your opponent's behaviour" sounds like a decision rule that necessitates a logical decision theory, rather than excluding it?

Your point about pathological robustness is valid but I'm not sure how much this matters in the setting of chess.

Lastly, if we're using the form... (read more)
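Spelled out in Python, the decision rule under discussion (the outcome model is the contested part, and the numbers below are made up):

```python
def expected_score(p_win: float, p_draw: float) -> float:
    return p_win + 0.5 * p_draw

def best_move(moves, outcome_model):
    """Pick the move maximizing expected score, where outcome_model(move)
    returns (p_win, p_draw) under some model of the opponent's behaviour."""
    return max(moves, key=lambda m: expected_score(*outcome_model(m)))

moves = ["e4", "d4", "resign"]
model = {"e4": (0.40, 0.40), "d4": (0.30, 0.50), "resign": (0.0, 0.0)}
print(best_move(moves, model.get))  # "e4", expected score 0.60
```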

3bodry10d
I agree this variation would lengthen the game. The experience would change for sure for all human players. An objectively losing human player may intentionally play objectively bad moves that lengthen a game and complicate it. It's a learned skill that some players have honed better than others. In this variation that skill is neutralized, so I imagine Elos would be different enough to have different player rankings.