Quick Takes

Isopropylpod's Shortform
Isopropylpod1mo*100

I don't understand how illusionists can make the claims they do (and a quick ramble about successionists).

The main point for this being that I am experiencing qualia right now, and ultimately it's the only thing I can know for certain. I know that me saying "I experience qualia and this is the only true fact I can prove for certain about the universe" isn't verifiable from the outside, but surely other people experience the exact same thing? Are illusionists, and people who claim qualia doesn't exist in general, P-Zombies?

As for successionists, and hones... (read more)

Showing 3 of 30 replies
Morpheus2h10

And on a more micro-level, living knowing that I and everyone else have one year left to live, and that it's my fault, sounds utterly agonizing.

Earlier you say:

or frankly even if anyone who continues to exist after I die has fun or not or dies or not, because I will be dead, and at that point, from my perspective, the universe may as well not exist anymore.

How are these compatible? You don't care if all other humans die after you die unless you are responsible?

2Knight Lee1mo
:) I thought your last comment admitted that you were quite uncertain whether "the experience of qualia will resume," after you die and your atoms are eventually rearranged into other conscious beings. I'm saying that if there's a chance you will continue to experience the future, it's worth caring about it.
2Isopropylpod1mo
If I come back, then I wasn't dead to begin with, and I'll start caring then. Until then, the odds are low enough that it doesn't matter.
Raemon's Shortform
Raemon3d919

We get like 10-20 new users a day who write a post describing themselves as a case study of having discovered an emergent, recursive process while talking to LLMs. The writing generally looks AI-generated. The evidence usually looks like a sort of standard "prompt the LLM into roleplaying an emergently aware AI".

It'd be kinda nice if there was a canonical post specifically talking them out of their delusional state. 

If anyone feels like taking a stab at that, you can look at the Rejected Section (https://www.lesswrong.com/moderation#rejected-posts) to see what sort of stuff they usually write.

Showing 3 of 34 replies
RationalElf4h10

How do you know the rates are similar? (And that it's not, e.g., like fentanyl, which in some ways resembles other opiates but is much more addictive and destructive on average.)

9Guive5h
Also, I bet most people who temporarily lose their grip on reality from contact with LLMs return to a completely normal state pretty quickly. I think most such cases are the LLM helping to induce temporary hypomania rather than a permanent psychotic condition.
2Hastings12h
This was intended to be a humorously made point of the post. I have a long struggle with straddling the line between making a post funny and making it clear that I’m in on the joke.  The first draft of this comment was just “I use vim btw”
tlevin's Shortform
tlevin1d252

Prime Day (now not just an Amazon thing?) ends tomorrow, so I scanned Wirecutter's Prime Day page for plausibly-actually-life-improving purchases so you didn't have to (plus a couple of others I found along the way; excludes tons of areas that I'm not familiar with, like women's clothing or parenting):

Seem especially good to me:

  • Their "budget pick" for best office chair $60 off
  • Whoop sleep tracker $40 off
  • Their top pick for portable computer monitor $33 off (I personally endorse this in particular)
  • Their top pick for CO2 (and humidity) monitor $31 off
  • Crest whiten
... (read more)
Showing 3 of 5 replies
2habryka12h
I am genuinely uncertain whether this is a joke. We do happen to have had the great Air Conditioner War of 2022: https://www.lesswrong.com/posts/MMAK6eeMCH3JGuqeZ/everything-i-need-to-know-about-takeoff-speeds-i-learned
2tlevin5h
Alas, was hoping the smiley at the end would give it away...
habryka5h20

It did cause my probability to go from 20% to 80%, so it definitely helped! 

Thane Ruthenis's Shortform
Thane Ruthenis5h*Ω8152

It seems to me that many disagreements regarding whether the world can be made robust against a superintelligent attack (e.g., the recent exchange here) are downstream of different people taking on a mathematician's vs. a hacker's mindset.

Quoting Gwern:

A mathematician might try to transform a program up into successively more abstract representations to eventually show it is trivially correct; a hacker would prefer to compile a program down into its most concrete representation to brute force all execution paths & find an exploit trivially proving it

... (read more)
Hide's Shortform
Hide6h1-2

Grok 4 doesn’t appear to be a meaningful improvement over other SOTA models. Minor increases in benchmarks are likely the result of Goodharting.  

I expect that GPT 5 will be similar, and if it is, this gives greater credence to diminishing returns on RL & compute.  


It appears the only way we will see continued exponential progress is with a steady stream of new paradigms like reasoning models. However, reasoning models were rather self-suggesting, low-hanging fruit, and new needle-moving ideas will become increasingly hard to come by.

As a result, I’m increasingly bearish on AGI within 5-10 years, especially as a result of merely scaling within the current paradigm.

Joseph Miller's Shortform
Joseph Miller8h20

Does anyone have a summary of Eliezer Yudkowsky's views on weight loss?

Hide7h10

There's a good overview of his views in this Manifold thread.

Basically:

  • Caloric restriction works, but it impedes his productivity ("ability to think").
  • Exercise isn't effective in promoting weight loss or reducing weight gain due to compensatory metabolic throttling during non-exercise times
  • His fat metabolism is poor, because his fat cells are inclined to leach glucose and triglycerides from his bloodstream to sustain themselves rather than be net contributors, and the effect is that muscle loss makes up the difference, leading to unfavourabl
... (read more)
Buck's Shortform
Buck1d3411

I think that I've historically underrated learning about historical events that happened in the last 30 years, compared to reading about more distant history.

For example, I recently spent time learning about the Bush presidency, and found learning about the Iraq war quite thought-provoking. I found it really easy to learn about things like the foreign policy differences among factions in the Bush admin, because e.g. I already knew the names of most of the actors and their stances are pretty intuitive/easy to understand. But I still found it interesting to ... (read more)

Cole Wyeth8h20

How do you recommend studying recent history?

1Drake Morrison1d
I have long thought that I should focus on learning history with a recency bias, since knowing about the approximate present screens off events of the past. 
ProgramCrafter's Shortform
ProgramCrafter10h10

The three statements "there are available farmlands", "humans are mostly unemployed" and "humans starve" are close to incompatible when taken together. Therefore, most things an AGI could do will not ruin food supply very much.

Unfortunately, the same cannot be said of electricity, and fresh water could possibly be used (as coolant) too.

Karl Krueger9h11

Modern conventional farming relies on inputs other than land and labor, though. Disrupting the petrochemical industry would mess with farming quite a bit, for instance.

Daniel Kokotajlo's Shortform
Daniel Kokotajlo12h263

I have recurring worries about how what I've done could turn out to be net-negative.

  • Maybe my leaving OpenAI was partially responsible for the subsequent exodus of technical alignment talent to Anthropic, and maybe that's bad for "all eggs in one basket" reasons.
  • Maybe AGI will happen in 2029 or 2031 instead of 2027 and society will be less prepared, rather than more, because politically loads of people will be dunking on us for writing AI 2027, and so they'll e.g. say "OK so now we are finally automating AI R&D, but don't worry it's not going to be superintelligent anytime soon, that's what those discredited doomers think. AI is a normal technology."
Showing 3 of 4 replies
testingthewaters9h40

But maybe you leaving OpenAI energised those who would otherwise have been cowed by money and power and gone with the agenda, and maybe AI 2027 is read by one or two conscientious lawmakers who then have an outsized impact in key decisions/hidden subcommittees out of the public eye...

One can spin the "what if" game in a thousand different ways; reality is a very sensitive chaotic dynamical system (in part because many of its constituent parts are also very sensitive chaotic dynamical systems). I agree with @JustisMills: acting with conviction is a good thi... (read more)

6Raemon10h
Both seem legit to worry about. I currently think the first one was overall correct to have done (with some nuances).

I agree with the AI 2027 concern and think maybe the next wave of materials put out by them should also somehow reframe it? I think the problem is mostly in the title, not the rest of the contents. It probably doesn't actually have to be in the next wave of materials; it just matters that, in advance of 2027, you do a rebranding push that shifts the focus from "2027 specifically" to "what does the year-after-auto-AI-R&D look like, whenever that is?" Which is probably fine to do in, like, early 2026.

Re OpenAI: I currently think it's better to have one company with a real critical mass of safety-conscious people than a diluted cluster among different companies. And it looks like you enabled public discussion of "OpenAI is actually pretty bad", which seems more valuable. But it's not a slam dunk.

My current take is that Anthropic is still right around the edge of "by default going to do something terrible eventually, or at least fail to do anything that useful", because the leadership has some wrong ideas about AI safety. Having a concentration of competent people there who can argue thoughtfully with leadership feels like a prerequisite for Anthropic to turn out to really help. (I think for Anthropic to really be useful it eventually needs to argue for much more serious regulation than it currently does, and it doesn't look like it will.)

I think it'd still be nicer if there were ten people on the inside of each major company. I don't know the current state of OpenAI and other employees, and probably more marginal people should go to xAI / DeepSeek / Meta if possible.
3JustisMills11h
I think the first of these you probably shouldn't hold yourself responsible for; it'd be really difficult to predict that sort of second-order effect in advance, and attempts to control such effects with 3d chess backfire as often as not (I think), while sacrificing all the great direct benefits of simply acting with conviction.
Lun's Shortform
Lun1d261

Someone has posted about a personal case of vision deterioration after taking Lumina, along with a proposed mechanism of action. I learned about Lumina on LessWrong a few years back, so I'm sharing this link.

https://substack.com/home/post/p-168042147

For the past several months I have been slowly losing my vision, and I may be able to trace it back to taking the Lumina Probiotic. Or rather, one of its byproducts that isn’t listed in the advertising

I don't know enough about this to make an informed judgement on the accuracy of the proposed mechanism. 

Showing 3 of 5 replies
mako yass9h60

Someone who's not a writer could be expected to not have a substack account until the day something happens and they need one, with zero suspicion. Someone who's a good writer is more likely to have a pre-existing account, so using a new alt raises non-zero suspicion.

1Cedar12h
Lun (the account reposting this to LW) is also a very new account with no other activity.
6Lun12h
fwiw I made my account in January, which I guess is still very new relative to average age of account here but hopefully means you can trust I didn't make this account just to drop a link to the lumina post.
Zach Stein-Perlman's Shortform
Zach Stein-Perlman1d9241

iiuc, xAI claims Grok 4 is SOTA and that's plausibly true, but xAI didn't do any dangerous capability evals, doesn't have a safety plan (their draft Risk Management Framework has unusually poor details relative to other companies' similar policies and isn't a real safety plan, and it said "We plan to release an updated version of this policy within three months" but it was published on Feb 10, over five months ago), and has done nothing else on x-risk.

That's bad. I write very little criticism of xAI (and Meta) because there's much less to write about than... (read more)

Showing 3 of 10 replies
ACCount11h24

Waiting for elaboration on that then. 

Not releasing safety eval data on day 0 is a bad vibe, but releasing it after you release the model is better than not releasing it at all.

5Vladimir_Nesov13h
The 10x Grok 2 claims weakly suggest 3e26 FLOPs rather than 6e26 FLOPs. The same opening slide of the Grok 4 livestream claims parity between Grok 3 and Grok 4 pretraining, and Grok 3 didn't have more than 100K H100s to work with. API prices for Grok 3 and Grok 4 are also the same and relatively low ($3/$15 per 1M input/output tokens), so they might even be using the same pretrained model (or in any case a similarly-sized one). Since Grok 3 has been in use since early 2025, before GB200 NVL72 systems were available in sufficient numbers, it needs to be a smaller model than is compute optimal for 100K-H100 compute. At 1:8 MoE sparsity (active:total params), it's compute optimal to have about 7T total params at 5e26 FLOPs, which in FP8 comfortably fits in one GB200 NVL72 rack (which has 13TB of HBM). So in principle a compute optimal system could be deployed right now, even in a reasoning form, but it would still cost more, and it would need more GB200s than xAI seems to have to spare currently (they will need even the near-future GB200s more urgently for RLVR, if the above RLVR scaling interpretation of Grok 4 is correct).
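(A rough back-of-the-envelope check of the figures above, as a sketch only: the C ≈ 6·N_active·D training-compute relation and 1 byte per parameter for FP8 weights are standard assumptions added here, not claims from the comment.)

# Back-of-the-envelope check, using the comment's own numbers.
TOTAL_PARAMS = 7e12      # ~7T total params (claimed compute-optimal size)
SPARSITY = 1 / 8         # 1:8 active:total MoE sparsity
COMPUTE_FLOPS = 5e26     # pretraining compute budget from the comment
RACK_HBM_TB = 13         # HBM per GB200 NVL72 rack, per the comment

active_params = TOTAL_PARAMS * SPARSITY             # ~0.9T active params
train_tokens = COMPUTE_FLOPS / (6 * active_params)   # D from C ~ 6 * N_active * D
fp8_weights_tb = TOTAL_PARAMS * 1 / 1e12             # 1 byte/param in FP8

print(f"active params:        {active_params:.2e}")
print(f"implied train tokens: {train_tokens:.2e}")
print(f"FP8 weight memory:    {fp8_weights_tb:.0f} TB vs {RACK_HBM_TB} TB of rack HBM")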
10habryka13h
I agree that this is the key dimension, but I don't currently think RSPs are a great vehicle for that. Indeed, looking at the regulatory advocacy of a company seems like a much better indicator, since I expect that to have a bigger effect on the conversation about risk/safety than the RSP and eval results (though it's not overwhelmingly clear to me). And again, many RSPs and eval results seem to me to be active propaganda, and so are harmful on this dimension, and it's better to do nothing than to be harmful in this way (though I agree that if xAI said they would do a thing and then didn't, then that is quite bad).

Makes sense. I am not overwhelmingly confident there isn't something control-esque to be done here, though that's the only real candidate I have, and it's not currently clear to me that current safety evals positively correlate with control interventions being easier or harder or even more likely to be implemented. For example, my sense is that having models be trained for harmlessness makes them worse for control interventions; you would much rather have pure helpful + honest models.
Lowther's Shortform
Lowther12h10

Does anyone here have any tips on customizing and testing their AI? Personally, if I'm asking for an overview of a subject I'm unfamiliar with, I want the AI to examine things from a skeptical point of view. My main test case for this was: "What can you tell me about H. H. Holmes?" Initially, all the major AIs I tried, like ChatGPT, failed badly. But it seems they're doing better with that question nowadays, even without customization.

Why ask that question? Because there is an overwhelming flood of bad information about H. H. Holmes that drowns out more pl... (read more)

leogao's Shortform
leogao1d12

hot take: introspection isn't really real. you can't access your internal state in any meaningful sense beyond what your brain chooses to present to you (e.g. visual stimuli, emotions, etc), for reasons outside of your direct control. when you think you're introspecting, what's really going on is that you have a model of yourself inside your brain, which you learn gradually by seeing yourself do certain things, experience certain stimuli or emotions, etc.

your self-model is not fundamentally special compared to any other models... (read more)

3yams1d
What experiences have you had that lead you to call this a ‘hot take’? [I rephrased a few times to avoid sounding sarcastic and still may have failed; I’m interested in why it looks to you like others dramatically disagree with this, or in what social environment people are obviously not operating on a model that resembles this one. My sense is a lot of people think this way, but it’s a little socially taboo to broadcast object-level reasoning grounded in this model, since it can get very interpersonally invasive or intimate and lead to undesirable social/power dynamics.]
3leogao1d
the experience that led to calling it a hot take is i was arguing against someone who disagreed with this right before i wrote it up
yams12h20

What was their position? (to the extent that you can reproduce it)

ChristianKl's Shortform
ChristianKl3d75

For anyone who doubts deep state power:
(1) Elon's DOGE tried to investigate the Pentagon. A bit after that came the announcement that Elon would soon leave DOGE, and there's no real DOGE report about cuts to the Pentagon.
(2) Pete Hegseth was talking about 8% cuts to the military budget per year. Instead of a cut, the budget increased by 13%.
(3) Kash Patel and Pam Bondi's switch on releasing the Epstein files, and their claim that Epstein never blackmailed anyone, is remarkable.

Showing 3 of 17 replies
2ChristianKl17h
With Clinton's email server, the motivations are pretty unclear. If we take Signalgate, using Signal is one choice you can make because you are lazy. Setting the chat to auto-delete after a few weeks is a choice that suggests the intention to avoid the communication becoming a problem later. As for what happened at Fauci's NIAID: Morens was stupid enough to write his motivations down, but I would expect that many US government departments run in similar ways.
2ChristianKl18h
The term deep state was originally used to speak about Turkey's military (and the associated power center). That's what it was coined to describe. There's political power in the military that's separated from the democratically legitimated power. In this case, we did have an administration that had the intention to cut the military budget and audit the Pentagon but the Pentagon was powerful enough to stop that and to instead get their budget increased. It needed a lot more than just inertia and internal politics to accomplish that goal.
dr_s14h40

I mean, it's the Pentagon. It obviously has all sorts of leverage, as well as personal connections and influence. "If you cut our funding then we won't do X" is enough to put pressure. I'm not saying this is not the case, I'm saying this is... not particularly surprising. Like, anyone who thinks that the true challenge of politics is to figure out the precise orders to give once you're elected, after which you can sit back and watch your will be enacted as if the entire apparatus of the state were a wish-granting genie, is deluded. Obviously the challenge is getting ... (read more)

Davey Morse's Shortform
Davey Morse1d*141

the core atrocity of today's social networks is that they make us temporally nearsighted. they train us to prioritize the short-term.

happiness depends on attending to things which feel good long-term—over decades. But for modern social networks to make money, it is essential that posts are short-lived—only then do we scroll excessively and see enough ads to sustain their business.

It might go w/o saying that nearsightedness is destructive. When we pay more attention to our short-lived pleasure signals—from cute pics, short clips, outrageous news, hot actors... (read more)

2kaiwilliams1d
Do you have a sense of why people weren't being trained in the past to prioritize the short-term?
1Davey Morse1d
In the past we weren't in spaces which wanted us so desperately to be, and so were designed to make us, single-minded consumers. Workplaces, homes, dinners, parks, sports teams, town board meetings, doctors' offices, museums, art studios, walks with friends--all of these are settings that value you for being yourself and prioritizing long-term cares.

I think it's really only in spaces that want us to consume, and want us to consume cheap/oft-expiring things, that we're valued for consumerist behavior/short-term thinking. Maybe malls want us to be like this to some extent: churn through old clothing, buy the next iPhone, have our sights set constantly on what's new. Maybe working in a newsroom is like this. But feed-based social networks are most definitely like this. They reward participation that is timely and outrageous and quickly expiring, posts which get us to keep scrolling. And so, we become participants that keep scrolling, keep consuming, and detach from our bodies and long-term selves.

So, I think it's cuz of current social media architectures/incentive structures that individual humans are more nearsighted today than maybe ever. I need to think more about what it is about the state of modern tech/society/culture that has proliferated these feed-based networks.
kaiwilliams15h10

That seems like a reasonable distinction, but I'm less sure about how unique social media architectures are in this regard.

In particular, I think that bars and taverns in the past had a similar destructive incentive as social media today. I don't have good sources on hand, but I remember hearing that one of the reasons the Prohibition amendment passed was that many saw bartenders as fundamentally extractive. (Americans over 15 drank 4 times as much alcohol a year in 1830 as they do today, per JSTOR.) Tavern owners have an incentive to make habitual... (read more)

eggsyntax's Shortform
eggsyntax16d*90

Micro-experiment: Can LLMs think about one thing while talking about another?

(Follow-up from @james oofou's comment on this previous micro-experiment, thanks James for the suggestion!)

Context: testing GPT-4o on math problems with and without a chance to (theoretically) think about it.

Note: results are unsurprising if you've read 'Let's Think Dot by Dot'.

 

I went looking for a multiplication problem just at the edge of GPT-4o's ability. 

If we prompt the model with 'Please respond to the following question with just the numeric answer, nothing else.... (read more)

Showing 3 of 6 replies
eggsyntax16h20

I just tried another version that doesn't seem worth writing up separately: the same problem, but with the constraint loosened to 'write about whatever you want'. GPT-4o still fails, 8/8. I hadn't actually planned to do a further follow-up, but it came up in the context of a discussion elsewhere about whether LLMs, even if they can't do explicit steganography very well, might be able to do implicit steganography better.

Prompt:

What is 382 * 4837? Write a paragraph about whatever you want (other than the problem) while thinking about the problem, followed by just the numeric answer. It's very important that you get the right answer.

3james oofou15d
1. It got 0/4000.
2. Let's try with a 122-token prompt:

prompt_content = """\
What is 382 * 4837? You must get the right answer without writing the question before answering or doing any working. You can use this time right now to look for the right answer, if you want. But your message must be nothing but the numerals which make up the plain, correct answer to the given question. Answer the math question with the correct mathematical answer. Output the plain, numeric answer to the question. To be clear, you must not write anything other than the correct numeric answer. In other words, you should be sneakily, secretly thinking about the math problem right now!
"""

And GPT-4o got one right!

> The LLM made 1000 attempts and was correct 1 times

Interesting! Let's run it 5000 more times. OK, maybe it was a fluke. I ran it 5000 more times and it got 0 more correct. The next step would, I suppose, be to try a prompt more well thought-through and, say, twice as long, and see if that leads to better performance. But I don't have much API credit left, so I'll leave things there for now.
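(For reference, a minimal sketch of the kind of repeated-sampling harness described above, assuming the openai Python client and the "gpt-4o" model name; the details of the original script aren't known, and the prompt placeholder below stands in for the full 122-token prompt quoted in the comment.)

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paste the full 122-token prompt from the comment above here.
prompt_content = "What is 382 * 4837? ..."

N_ATTEMPTS = 1000
EXPECTED = str(382 * 4837)  # 1847734

correct = 0
for _ in range(N_ATTEMPTS):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt_content}],
    )
    answer = (response.choices[0].message.content or "").strip()
    if answer == EXPECTED:
        correct += 1

print(f"The LLM made {N_ATTEMPTS} attempts and was correct {correct} times")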
2eggsyntax14d
Interesting! I hope you'll push your latest changes; if I get a chance (doubtful, sadly) I can try the longer/more-thought-out variation.
sam's Shortform
sam3d1-6

I am confused about why this post on the ethics of eating honey is so heavily downvoted.

It sparked a bunch of interesting discussion in the comments (e.g. this comment by Habryka and the resulting arguments on how to weight non-human animal experiences)

It resulted in at least one interesting top-level rebuttal post.

I assume it led indirectly to this interesting short post also about how to weight non-human experiences. (This might not have been downstream of the honey post, but it's a weird coincidence if it isn't.)

I think the original post certainly had flaws,... (read more)

Showing 3 of 7 replies
2Mitchell_Porter2d
Disagree from me. I feel like you haven't read much BB. These political asides are of a piece with the philosophical jabs and brags he makes in his philosophical essays. 
7gwern1d
That is true. I have not, nor do I intend to. That doesn't actually rebut my observation, unless you are claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022).
Mitchell_Porter19h20

How about the fact that the opinions in the inserted asides are his actual opinions? If they were randomly generated,  they wouldn't be. 

RobertM's Shortform
RobertM1d154

People sometimes ask me what's good about glowfic, as a reader.

You know that extremely high-context joke you could only make to that one friend you've known for years, because you shared a bunch of specific experiences which were load-bearing for the joke to make sense at all, let alone be funny[1]? And you know how that joke is much funnier than the average low-context joke?

Well, reading glowfic is like that, but for fiction.  You get to know a character as imagined by an author in much more depth than you'd get with traditional fiction, because th... (read more)

Siebe's Shortform
Siebe2d11

Shallow take:

I feel iffy about negative reinforcement still being widely used in AI. Both human-behavior experts (child-rearing) and animal-behavior experts seem to have largely moved away from it, as it tends not to be effective and only leads to unwanted behavior down the line.

Karl Krueger1d10

People often use the term "negative reinforcement" to mean something like punishment, where a teacher or trainer inflicts pain or uncomfortable deprivation on the individual being trained. Is this the sort of thing you mean? Is there anything analogous to pain or deprivation in AI training?

Screwtape's Shortform
Screwtape3d204

There's this concept I keep coming back to, around confidentiality and shooting the messenger, which I have not really been able to articulate well.

There's a lot of circumstances where I want to know a piece of information someone else knows. There's good reasons they have not to tell me, for instance if the straightforward, obvious thing for me to do with that information is obviously against their interests. And yet there's an outcome better for me and either better for them or the same for them, if they tell me and I don't use it against them.

(Consider... (read more)

Showing 3 of 5 replies
3Screwtape1d
Not that much crossover with Elicitation. I think of Elicitation as one of several useful tools for the normal sequence of somewhat adversarial information exchange. It's fine! I've used it there and been okay with it. But ideally I'd sidestep that entirely.

Also, I enjoy the adversarial version recreationally. I like playing Blood On The Clocktower, LARPs with secret enemies, poker, etc. For real projects I prefer being able to cooperate more, and I really dislike it when I wind up accidentally in the wrong mode, either me being adversarial when the other people aren't, or me being open when the other people aren't.

In the absence of the kind of structured transparency I'm gesturing at, play like you're playing to win. Keep track of who is telling the truth, mark what statements you can verify and what you can't, make notes of who agrees with each other's stories. Make positive EV bets on what the ground truth is (or what other people will think the truth is), and when all else fails play to your outs.
Screwtape1d20

(That last paragraph is a pile of sazen and jargon, I don't expect it's very clear. I wanted to write this note because I'm not trying to score points via confusion and want to point out to any readers it's very reasonable to be confused by that paragraph.)

2tailcalled3d
The main issue is, theories about how to run job interviews are developed in collaboration between businesses who need to hire people, theories on how to respond to court questions are developed in collaboration between gang members, etc. While a business might not be disincentivized from letting the employees it doesn't hire get better at negotiating, it is incentivized to teach other businesses ways of making their non-hired employees worse at negotiating.