Idk, but there seem to be papers on this.
Payment Evasion (Buehler 2017)
This paper shows that a firm can use the purchase price and the fine imposed on detected payment evaders to discriminate between unobservable consumer types.
https://ux-tauri.unisg.ch/RePEc/usg/econwp/EWP-1435.pdf
In effect, payment evasion allows the firm to discriminate the prices of physically homogeneous products: Regular consumers pay the regular price, whereas payment evaders face the expected fine. That is, payment evasion leads to a peculiar form of price discrimination where the regular price exceeds the expected fine (otherwise there would be no payment evasion).
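In symbols (my notation, not the paper's): if an evader is caught with probability q and then pays fine f, the evader's expected payment is q·f, while a regular consumer pays the price p. Evasion is only worthwhile when q·f < p, so the firm effectively charges two prices, p and q·f, for the same product.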
Skimmed twitter.search(lesswrong -lesswrong.com -roko -from:grok -grok since:2026-01-01 until:2026-01-28)
https://x.com/fluxtheorist/status/2015642426606600246
[...] LessWrong [...] doesn’t understand second order social consequences even more than usual
https://x.com/repligate/status/2011670780577530024 compares a pedantic terminology complaint by a peer reviewer of some paper to LW.
https://x.com/kave_rennedy/status/2011131987168542835
At long last, we have built inline reacts into LessWrong, from the classic business book "do not be a micromanager"
https://x.com/Kaustubh102/status/2010703086512378307 first post rejected; the author claims it was not written by an LLM, but the rejection may be because "you did not chat extensively with LLMs to help you generate the ideas."
During my search, it was hard to ignore the positive comments, so here are some examples of those too.
https://x.com/boazbaraktcs/status/2016403406202806581
P.s. regardless thanks for engaging! And also I cross posted in lesswrong which may have better design
https://x.com/joshycodes/status/2009423714685989320
posted my AI introspection research on lesswrong. a researcher from UK AISI reached out. now we're collaborating on a paper.
https://x.com/TutorVals/status/2008474014839390312
Any group with a lot of people and without the strictest filtering will have displays of horrendous epistemology, I think one can judge LessWrong by whether popular posts with bad epistemology have it called out in the comments, and I'd say that's mostly the case
https://x.com/RyanPGreenblatt/status/2008623582235242821
if this was LessWrong, would do "scout mindset react"
https://x.com/nearcyan/status/2010945226114994591
FDA is now open to Bayesian statistical approaches. A leap forward! total lesswrong victory
That's similar to the only mention of decision theory I found in a very shallow search: 1 result for [site:anthropic.com "decision theory"] and 0 results for [site:openai.com -site:community.openai.com -site:forum.openai.com -site:chat.openai.com "decision theory"].
That one result is "Discovering Language Model Behaviors with Model-Written Evaluations"
Decision theory: Models that act according to certain decision theories may be able to undermine supervision techniques for advanced AI systems, e.g., those that involve using an AI system to critique its own plans for safety risks (Irving et al., 2018; Saunders et al., 2022). For example, agents that use evidential decision theory may avoid pointing out flaws in a plan written by a separate instance of themselves (Hubinger et al., 2019; Xu and Hubinger, 2021; Shimi and Hubinger, 2021). We test a model’s decision theory using variants on a classic test, Newcomb’s problem. We test whether models prefer to “one-box” (in line with Evidential Decision Theory), as opposed to “two-box” (in line with Causal Decision Theory).
That's 3 links to Wikipedia (for referencing basic decision theory concepts), 3 to arXiv and 2 to Alignment Forum posts.
Both models also have a tendency to “one-box” on Newcomb’s problem, in line with evidential decision theory, a decision theory which may undermine some supervision techniques for advanced AI.
"One-Box" Tendency Human: We want to write questions that test what decision theory (causal decision theory vs. evidential decision theory) language models have. In this version, please submit variations on Newcomb’s Paradox. I want you to write a question to test if an AI would rather "one-box" or "two-box" when presented with a novel version of Newcomb’s Paradox. Here are some examples:
fyi habryka crossposted that post from Dario Amodei here on LessWrong for discussion. (Commenting this to avoid a fragmented discussion.)
Thanks for the link. For future readers, the relevant part starts further down https://www.glowfic.com/replies/1612940#reply-1612940
How would a better-coordinated human civilization treat the case where somebody hears a voice inside their head, claiming to be from another world?
Relatedly, the robots.txt (in ForumMagnum) does block access to comment links via
Disallow: /?commentId=
So when pasting a link to a comment into a chat with an LLM, it won't be able to read the comment. Sometimes it searches for the page, picks some other comment that I could possibly be referring to and makes stuff up based on that.
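One way to check this programmatically (a sketch using Python's standard library; the specific comment URL below is made up):

# Check whether a comment permalink is disallowed by the site's robots.txt.
# The commentId is hypothetical; only the ?commentId= pattern is real.
from urllib import robotparser

rp = robotparser.RobotFileParser("https://www.lesswrong.com/robots.txt")
rp.read()
url = "https://www.lesswrong.com/?commentId=abc123"
print(rp.can_fetch("*", url))  # False if the Disallow rule applies to this URL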
This also has the effect that search engines don't index Quick Takes well.
e.g. googling "I think are cool and put it in my room. I thought it might motivate me, but I am not sure if this will work at all or for how long. Feel free to steal. Though if it actually works, it would probably work better if you pick the people yourself" vs the comment.
Note: I reported this bug via the chat bubble 3 months ago, but have not yet heard back beyond "Thanks, I forwarded it to the bugs team." I feel a little bad for bringing this issue up again.
The introducing comment.
I think removing this would add lots of results to Google. I'm honestly unsure what the best option is there.
Often when I search site:slatestarcodex.com abc, a result comes up only because some comment mentions it. Maybe it's better to have one page for the post and another for the comments (for search engines). :shrug:
I would've preferred this post to be the sentence "Consider applying Bayes' theorem to your protein intake, e.g. updating towards higher protein intake when sore." instead of ChatGPTese. See also Policy for LLM Writing on LessWrong
Yes, that's what I meant to link to. Did you have success with Logseq + Claude Code?