All of DirectedEvolution's Comments + Replies

Over the last six months, I've grown more comfortable writing posts that I know will be downvoted. It's still frustrating. But I used to feel intensely anxious when it happened, and now, it's mostly just a mild annoyance.

The more you're able to publish your independent observations, without worrying about whether others will disagree, the better it is for community epistemics.

Changing a person's strongly-held belief is difficult. They may not be willing to spend the time it would take to address all your arguments. They might not be capable of understanding them. And they may be motivated to misunderstand.

An alternative is to give them short, fun intros to the atomic ideas and evidence for your argument, without revealing your larger aim. Let them gradually come to the right conclusion on their own.

The art of this approach is motivating why the atomic ideas are interesting, without using the point you're trying to make as the m... (read more)

If I want to change minds...

... how many objections will I have to overcome?

... from how many readers?

... in what combinations?

... how much argument will they tolerate before losing interest?

... and how individually tailored will they have to be?

... how expensive will providing them with legible evidence be?

... how equipped are they to interpret it accurately?

A: “If I were going to Paris, where would be the best place to get a baguette?” B: “Oh! You’re going to Paris?”

I've done B's conversational move plenty of times, and I am fully capable of understanding conditionals!

If A is asking me this, the most plausible inference is that this is a playful way of telling me that they're going to Paris, and they want to get my opinions on what I enjoyed while I was there. My first reaction might be surprise to learn that (plausibly) they are planning a trip there, so I want to establish that with more certainty. This is ... (read more)

2mako yass6h
Yes, if I did B, I might even say something like "I'm not sure what you mean" (not in those words, though others might use them), but what I would mean is that I'm not sure what your intentions are in asking, and I am way more interested in that than in guessing about baguettes. The mismatch in interest is so acute that if you don't answer my question, I don't think it would be especially mean of me to decline to answer yours.

That's true! However, I would feel weird and disruptive trying to ask ChatGPT questions when working alongside coworkers in the lab.

Here is a quote from the same text that I think is more apt to the point you are making about apparent shortcomings in ET Jaynes' interpretation of more general agentic behavior:

Of course, for many purposes we would not want our robot to adopt any of these more ‘human’ features arising from the other coordinates. It is just the fact that computers do not get confused by emotional factors, do not get bored with a lengthy problem, do not pursue hidden motives opposed to ours, that makes them safer agents than men for carrying out certain tasks.

To readers of this post, I would like to note that a small number of people on the forum appear to be strong-downvoting my posts on this subject shortly after they are published. I don't know specifically why, but it is frustrating.

For those of you who agree or disagree with my post, I hope you will choose to engage and comment on it to help foster a productive discussion. If you are a person who has chosen to strong-downvote any of the posts in this series, I especially invite you to articulate why - I precommit that my response will be somewhere between "thank you for your feedback" and something more positive and engaged than that.

Thoughts on Apple Vision Pro:

  • The price point is inaccessibly high.
  • I'm generally bullish on new interfaces to computing technology. The benefits aren't always easy to perceive until you've had a chance to start using them.
  • If this can sit on my head and allow me to type or do calculations while I'm working in the lab, that would be very convenient. Currently, I have to take my gloves off and put them back on to use my phone, and the office space with my laptop is a 6-minute round trip from the lab.
  • I can see an application that combines voice-to-text and AI in a way that makes it fe
... (read more)
2rhollerith_dot_com2d
Sure, but an audio-only interface can be done with an iPhone and some Airpods; no need for a new interface.
2[comment deleted]2d
2[comment deleted]2d

Conservatism says "don't be first, keep everything the same." This is a fine, self-consistent stance.

A responsible moderate conservative says "Someone has to be first, and someone will be last. I personally want to be somewhere in the middle, but I applaud the early adopters for helping me understand new things." This is also a fine, self-consistent stance.

Irresponsible moderate conservatism endorses "don't be first, and don't be last," as a general rule, and denigrates those who don't obey it. It has no answers for who ought to be first and last. But for ... (read more)

While I agree with you that Jaynes' description of how loss functions operate in people does not extend to agents in general, the specific passage you have quoted reads strongly to me as if it's meant to be about humans, not generalized agents.

You claim that Jaynes' conclusion is that "agents with similar goal specifications are in conflict, because the specified objective (for food, energy, status, whatever) binds to an agent's own state, not a world-model." But this isn't true. His conclusion is specifically about humans.

I want to reinforce that I'm not ... (read more)

4Zack_M_Davis1d
Thanks. We don't seem to have a "That's fair" or "Touché" react (which seems different and weaker than "Changed my mind").

I’m anticipating writing several posts on this topic in the coming weeks on the EA forum. I just want to flag that I think your questions about how to think about and value reputation are important, that the EA community is rife with contradictory ideas and inadequate models on this topic, and that we can do a lot better by getting a grip on this subject. I don’t have all the answers, but right now it seems like people are afraid to even talk about the issue openly.

1jenn5d
I share your sense that EAs should be thinking about reputation a lot more. A lot of the current thinking has also been very reactive/defensive, and I think that's due both to external factors and to the fact that the community doesn't realize how valuable an actually good reputation can be - though Nathan is right that it's not literally priceless. Still, I'd love to see the discourse develop in a more proactive direction.

I wouldn’t be surprised if a lot of EAs see my takes here as a slippery slope to warm glow thinking and wanton spending that needs to be protected against.

 

I didn't have this reaction at all. The four lessons you present are points about execution, not principles. IMO a lot of these ideas are cheap or free while being super high-value. We can absolutely continue our borg-like utilitarianism and coldhearted cost-benefit analysis while projecting hospitality, building reputation, conserving slack, and promoting inter-institutional cooperation!

But I do t... (read more)

9Scrooge Mcduck3d
Agree little could be done then. But since then, I've noticed the community has an attitude of "Well I'll just keep an eye out next time" or "I'll be less trusting next time" or something. This is inadequate; we can do better. I'm offering decision markets [https://manifold.markets/group/crypto-exchange-risk] that will make it harder for frauds to go unnoticed, prioritizing crypto (still experimenting with criteria). But when I show EAs these, I'm kind of stunned by the lack of interest. As if their personal judgment is supposed to be less corruptible at detecting fraud than a prediction market. This has been very alarming for me to see. But who knows -- riffing off the post, maybe that just means prediction markets haven't built up enough reputation for LW/EA to trust them.
7jenn6d
Thanks for your super thought out response! I agree with all of it, especially the final paragraph about making EA more human-compatible. Also, I really love this passage: Yes. You get me :')

The most common anti-eugenics stance I encounter is also opposed to epilogenics. From this point of view, parents choosing to select for desirable traits in their offspring using advanced medical technology is wasteful, immoral and gross. They have roughly the same feelings about epilogenics (including for intelligence) as they have about cosmetic plastic surgery. To them, a natural and traditional trajectory of healthy human lifespan is ideal - we should maintain our health via diet and exercise, try not to care too much about superficial traits like appe... (read more)

I think that if there is an objective morality, then you can use your concern about self-congratulatory narratives as a starting point. What moral view is leading you to think there’s any problem at all with enjoying a self-congratulatory narrative? Once you’ve identified it, you can figure out what other moral positions it might imply.

1ymeskhout7d
It's not any specific moral view that causes me concern, but rather how the entire edifice is organized. There's likely no end to the specific moral principles I can cite that hit me on a visceral level and that I, purportedly, feel very strongly about. This includes the revulsion against slavery, which in my case partly extends even to consensual labor arrangements with a significant imbalance of power (most prominently, the hospitality industry in poor countries serving a much richer clientele). But when I contemplate how genetically identical I am to people from just a few hundred years ago, and when I see what those people felt strongly about, it's reasonable for this to generate some pause on my end. Because what it demonstrates to me is that even the core visceral disgust I reflexively feel can't be trusted. Nevertheless, I still rely heavily on this disgust reflex, but with the understanding that I'm engaging in a fiction for lack of a better alternative.
2TAG8d
It doesn't have to be a moral objection. It's circular argumentation, so it already goes against epistemic norms.

Even that 0.69%-acceptable statistic may be a political maneuver. A year or two ago, I found a meta-analysis of AI healthcare diagnostics that found about this level of acceptability in the literature.

Where it becomes political is that a prestigious doctor friend unsympathetic to AI diagnosis used this statistic to blow off the whole field, rather than becoming interested in the tiny fraction of acceptable research. That's political on its own, and it also makes you wonder whether researchers set their quality bar to get the result they wanted.

Nevertheless, it IS discouraging that only about 276 of 40,000 papers would be acceptable.

I think it's a complex question. For example, people debate whether porn is harmful or helpful:

  • Morally or practically
  • In the short vs. long term
  • To the actors
  • Directly to the viewer
  • To the viewer's partner
  • To culture as a whole
  • For intrinsic reasons or because of how it intersects with the rest of our culture
  • Universally, on average, or in specific circumstances
  • Whether it's actually harmful/helpful or just a suboptimal/better way to express sexuality

If you get specific enough about these questions, it may be possible to ask meaningful scientific or moral questio... (read more)

8sanxiyn8d
As an example of a question specific enough to be answerable by science, there is Is Pornography Use Associated with Sexual Difficulties and Dysfunctions among Younger Heterosexual Men? [https://pubmed.ncbi.nlm.nih.gov/25816904/] (2015). It begins: The answer is no. As far as I know, this was among the first studies powerful enough to answer this question. Well done, science! Of course, nobody listens to science. Compare the introduction above with another introduction written 4 years later, from Is Pornography Use Related to Erectile Functioning? [https://pubmed.ncbi.nlm.nih.gov/30621919/] (2019). The shift in tone is palpable, and you can just feel the powerlessness researchers feel about the situation.

Yes, I agree that if "practical problem in your life" did not include "looking good" or "goes with my other clothes" as design parameters then you'd probably end up in a situation like that. I succeeded at avoiding this problem because I specifically set out to find pants that were good for biking and looked like professional work pants (fortunately I already had some that did). This can be useful: it puts a sharp constraint on the shirts I buy, requiring them to look good with these specific pants. That limitation can be helpful in making the overwhelming number of choices manageable.

I agree with the perspective you're laying out here. These days, I take a slightly more concrete approach to choosing my wardrobe. It still fits the perspective, but the thought process is different.

To decide what to buy, I think about a specific purpose in my life for which I need clothes, and I try to get as specific as possible.

For example, I just started a new job, and I wanted to buy some new clothes for it. Because I already had plenty of suitable shirts, I started to think about the requirements for optimal pants for this application.

  • I bike to work,
... (read more)
4Gordon Seidoh Worley8d
This is how I used to buy clothes. At least in my case, I got some hard advice from a friend: I was picking pieces of clothing that were fine in isolation but didn't really come together to create a look/fit that was "me", which made me look unintentional and thus less good. It also made it too easy to optimize for function at the expense of form, to the point of picking things that met great functional requirements but looked bad, like technical hiking pants that met tons of needs other than looking good or fitting my body well. To actually look put together, I realized I needed to take a more global approach to optimizing my clothes.

I understand your point is that material circumstances control the moral ideas prevalent in a culture, and that these cultural ideals in turn control individual beliefs and actions. Our morality and that of our ancestors is therefore determined largely by material circumstances.

Alongside this deterministic framework, you are arguing for a Dawkins-style selfish-meme explanation for which cultural ideas survive and flourish. Specifically, you are arguing that historical material circumstances favored the survival of a pro-slavery, pro-war morality, while mode... (read more)

3ymeskhout8d
Thank you for your reply; you accurately summarized my argument. Regarding "objective" morality, I honestly would have no idea where to start, especially as an atheist. I was hoping my post would prompt some responses pushing back on this particular point. I'm not opposed to working to discover what this "objective" morality would be, but given my argument above, there should be significant caution that we're not just stumbling into yet another self-congratulatory narrative. Along that same line, nothing I wrote above should imply that I lack curiosity; hopefully I established quite the opposite.

Many commenters seem to be reading this post as implying something like slavery and violence being good or at least morally okay... I read it as a caution similar to the common points of "how sure are you that you would have made the morally correct choice if you had been born as someone benefiting from slavery back when it was a thing" combined with "the values that we endorse are strongly shaped by self-interest and motivated cognition"

 

I don't agree with your characterization of the post's claims. The title is synonymous with "morality is arbitrary... (read more)

5ymeskhout8d
Kaj_Sotala's interpretation of my post is absolutely correct. There's an obvious reason I did not encourage anyone to view themselves as the victims; my assumption is that no one needs any convincing to agree that being raped and murdered is awful. The second reason I did not focus on that perspective is that my argument was about how memeplexes achieve dominance, which means that the opinions of the historically oppressed are by definition irrelevant.

Based on the evident historical record, without the environmentally deleterious bounty fossil fuels facilitated, most of us would be conjuring up creatively compelling excuses for why forcing your neighbor to work for free is the Moral thing to do.

 

I can't speak to every era, but in the Middle Ages, about 75% of us would have been serfs: not tradeable individually, but bound to a plot of purchaseable land. No way most of us would have been spending our time innovating arguments for the morality of slavery.

Arguments for the morality of slavery come do... (read more)

Another UI note - the scrollbar is so thin it is hard to use.

2Measure13d
Also, when scrolled fully to either end, there remains a large space of the background color that makes it look like the bar is not fully at the end.

I'm honestly not sure if this system would be:

  • Harmful, mostly replacing high-quality comments with modest-quality reacts
    • Very harmful, with interest in the site draining away as commenting becomes abnormal
  • Helpful, with silence or low-quality comments (which could include inflammatory comments) replaced with modest-quality reacts
    • Very helpful, as the continuum of ability to engage escalates people into interactions they'd otherwise have skipped, as authors see that apparently unseen comments actually have a lot of eyeballs on them, and leading to a positive f
... (read more)

Just noting a point of confusion - if changing minds is a social endeavor having to do with personal connection, why is it necessary to get people to engage System 2/Central Route thinking? Isn’t the main thing to get them involved in a social group where the desired beliefs are normal and let System 1/Peripheral Route thinking continue to do its work?

1Seth Herd12d
I think you are absolutely correct that getting someone involved in a social group where everyone already has those ideas would be better at changing minds. But that's way harder than getting someone to have a ten-minute conversation. In fact, it's so hard that I don't think it's ever been studied experimentally. Hopefully I'm wrong and there are at least limited studies, but I've looked for them and not found them (~5 years ago). I'd frame it this way: what you're doing in that interview is supplying the motivation to do System 2 thinking. The Socratic method is about asking people the same questions they'd ask themselves if they cared enough about that topic, and had the reasoning skills to reach the truth.
7AnthonyC12d
If I understand correctly I think it's more that system 1/peripheral route thinking can get someone to affectively endorse an idea without forming a deeper understanding of it, whereas system 2/central route thinking can produce deeper understanding, but many (most?) people need to feel sufficiently psychologically and socially safe/among friends to engage in that kind of thinking.

I would pay about $5/month for a version of Twitter that was read-only. I want a window, not a door.

1Pat Myron12d
https://github.com/zedeus/nitter [https://github.com/zedeus/nitter]
2Viliam13d
I could imagine this functionality implemented as a simple browser plugin or script. Just hide the input box. (No idea whether it already exists.) Would be useful for many social networks.

And I’m not sure about the scales being an icon for “seems borderline.” Some sort of fuzzy line or something might be more appropriate. Scales make me think “well measured.”

4Measure14d
I would like something like a block pushed halfway off a ledge.
3mako yass14d
I think a tilde would do better, yeah.

The support icon looks at first glance like a garbage can although I can tell it’s meant to be a pillar.

I think with this system you will end up with too many large, difficult, and uncatchy jumps. Plus, similar phone numbers will sound similar, which is not what you want.

3Rana Dexsin14d
That's what I did, though with do on 1 and mi′ on 0 (treating it as 10). I'm not sure what similarity metric is most relevant here, but in case of near-collisions, small additions of supporting rhythm or harmony could help nudge some sequences apart; swing time would give added contrast to even vs odd positions, for instance. Anyway, it sounds like this might not generalize well across people… Aside: this is a time when I'm really appreciating two-axis voting. The vote-signal of “that's mildly interesting but really doesn't seem like it'd work” is very useful to see in a compact form, even though the written responses contain similar information in more detail.

How does that work with 10 available digits and only 7 scale notes? Do three digits become accidentals or something?

2jefftk15d
Maybe use more than one octave of range? So if we wanted to do it in Am we'd turn 123-456-7890 into A3 B3 C4 - D4 E4 F4 - G4 A4 B4 G3
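
For concreteness, here is a minimal sketch of that digit-to-note mapping in Python. The dictionary simply encodes jefftk's example assignment above (treating 0 as dropping to the G below the tonic is my reading of his example); actually playing the notes would need a MIDI or synth layer, which is out of scope here.

# Map digits to the A natural minor scale, per jefftk's example.
SCALE = {
    "1": "A3", "2": "B3", "3": "C4", "4": "D4", "5": "E4",
    "6": "F4", "7": "G4", "8": "A4", "9": "B4", "0": "G3",
}

def phone_to_notes(number: str) -> list[str]:
    """Convert a phone number to note names, skipping dashes and spaces."""
    return [SCALE[d] for d in number if d.isdigit()]

print(phone_to_notes("123-456-7890"))
# ['A3', 'B3', 'C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4', 'G3']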

I did this for a while, but then returned it and just started opening the windows more often, especially when it felt stuffy.

Steelman as the inverse of the Intellectual Turing Test

The Intellectual Turing Test (ITT) checks if you can speak in such a way that you convincingly come across as if you believe what you're saying. Can you successfully pose as a libertarian? As a communist?

Lately, the ITT has been getting boosted over another idea, "steelmanning," which I think of as "arguing against the strongest version of an idea," the opposite of weakmanning or strawmanning.

I don't think one is better than the other. I think that they're tools for different purposes.

If I'm doi... (read more)

2Vladimir_Nesov17d
I think ITT is most useful for practicing privately, as a method for systematically developing intellectual understanding of arguments. Practicing it publicly is somewhat useless (though a good sanity check) and requires a setup where claims so channeled are not taken as your own beliefs. Unlike ITT, steelmanning is not aiming for accurate understanding, so it's much less useful for intellectual understanding of the actual points. It's instead a mode of taking inspiration from something you don't consider good or useful, and running away with whatever gears survive the analogy to what you do see as good or useful. Steelmanning is great at opposing aversion to associating with a thing that appears bad or useless, and making some intellectual use of it, even if it's not for the intended purpose and lossy on intended nuance.

I came across GreyZone Health today, thought it might be relevant:

GreyZone Health
Hope for Difficult to Diagnose, Rare, and Complex Medical Conditions


Facing a Misdiagnosis, or Having No Diagnosis at All?
With our exceptional patient advocate service, GreyZone Health helps patients like you with difficult to diagnose, rare, and complex medical conditions. GreyZone Health finds answers and improves your quality of life. Based in Seattle, Washington, our professional patient advocates serve patients around Washington state and around the world, both virtually a

... (read more)

My suggestion would be to start by focusing on hypotheses that your illness has a single cause that is short-term - a matter of minutes, hours, or at most a day - and that it's reliable: do X, and Y happens, almost every time. These assumptions are easiest to rule out and do not require elaborate tracking. You may also want to focus on expanding your hypothesis space if you haven't already - food, exercise, sleep, air quality, pets, genetic and hormonal issues, and chronic infections are all worth looking at.
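
To illustrate the kind of simple check this enables, here is a minimal sketch in Python with entirely hypothetical diary data: compare how often the symptom follows the suspected trigger versus how often it appears without it.

# Hypothetical diary entries: (exposed_to_X, symptom_Y) for each day.
days = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

def symptom_rate(entries):
    # Booleans sum as 0/1, giving the fraction of days with the symptom.
    return sum(symptom for _, symptom in entries) / len(entries)

exposed = [d for d in days if d[0]]
unexposed = [d for d in days if not d[0]]
print(f"P(Y | X) = {symptom_rate(exposed):.2f}")       # 0.75
print(f"P(Y | no X) = {symptom_rate(unexposed):.2f}")  # 0.25

A reliable, short-term, single cause should show a large gap between those two numbers; anything murkier points toward the more complex hypotheses discussed next.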

As you noticed, testing more complex hy... (read more)

This is a staged prompt, with the first stage initiating the conversation and stages 2 and 3 coming after GPT-4’s first and second replies respectively.

First stage:

You are a science fiction writer giving instructions to a genie to get it to write you a science fiction novel capable of winning the Hugo award. However, you know that genies often misconstrue wishes, so your wish needs to be detailed, conceptually bulletproof, and covering all facets of what makes a science fiction novel great. You also only get three wishes, so it has to be a good prompt. Fi... (read more)

Story (first try, no edits. prompt in a reply to this comment)

Chapter 1: The Last Sunrise

The horizon wore an orange-red hue, a token of farewell from the sun. It was the last sunrise Jonas would ever witness, the last that his biological eyes would capture and transmit to his fleshy, mortal brain. Tomorrow, he would wake up inside a machine.

A sigh escaped his lips, a whisper in the morning air. He sat on the edge of the roof, feet dangling four stories above the city, staring at the kaleidoscope of colors. The city was waking up, the sounds of the waking w... (read more)

7DirectedEvolution22d
This is a staged prompt, with the first stage initiating the conversation and stages 2 and 3 coming after GPT-4’s first and second replies respectively. First stage: You are a science fiction writer giving instructions to a genie to get it to write you a science fiction novel capable of winning the Hugo award. However, you know that genies often misconstrue wishes, so your wish needs to be detailed, conceptually bulletproof, and covering all facets of what makes a science fiction novel great. You also only get three wishes, so it has to be a good prompt. First, simulate the science fiction author’s thought process in coming up with a good prompt. Then have them make a wish to the genie for the sci fi novel using the prompt. Second stage: Now, have the genie write the first chapter of the novel based on the science fiction author’s wish. Third stage: have the genie continue writing chapter 1 to the end of the chapter.
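
For anyone who wants to reproduce this staged setup against the API rather than the chat interface, here is a minimal sketch. It assumes the official OpenAI Python client (v1+); the model name is illustrative, and the stage prompts are abbreviated from the comment above.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The three stages from the comment above, abbreviated.
stages = [
    "You are a science fiction writer giving instructions to a genie ...",
    "Now, have the genie write the first chapter of the novel based on "
    "the science fiction author's wish.",
    "Have the genie continue writing chapter 1 to the end of the chapter.",
]

messages = []
for stage in stages:
    messages.append({"role": "user", "content": stage})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})  # keep the history
    print(reply[:200], "...")

The key point is that each stage is sent only after the previous reply has been appended, so the model sees the whole conversation so far, just as it does in the chat interface.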

I thought she was going to start disseminating seeds and sprouting vines in the end. This made me laugh out loud.

And in one:

print('\n'.join(['Fizz' * (i % 3 == 0) + 'Buzz' * (i % 5 == 0) or str(i) for i in range(1, 101)]))

ChatGPT does it in two:

for i in range(1, 101):
    print("Fizz" * (i % 3 == 0) + "Buzz" * (i % 5 == 0) or i)

4DirectedEvolution22d
And in one:

My 3-line FizzBuzz in python:

for i in range(1, 101):
    x = ["", "Fizz"][i % 3 == 0] + ["", "Buzz"][i % 5 == 0]
    print([x, i][len(x) == 0])

2DirectedEvolution22d
ChatGPT does it in two:

Up front: I am biased against extreme diets like water-only fasts. I can see a use case in carefully medically supervised settings, such as for a cancer treatment, and I know that some religious practitioners use them. I've never tried them and have never been morbidly obese.

The only truly relevant paper I found was a case study of a woman whose 40-day water fast caused thiamine deficiency, which led to her developing a severe neurological disorder called Wernicke's encephalopathy (source).

The academic literature on prolonged water-on... (read more)

I'll consider that! Thanks MondSemmel.

Back on my laptop, so I can quote conveniently. First, I went back and read the Tale of Alice Almost more carefully, and found I had misinterpreted it. So I will go back and edit my original comment that you were responding to.

Second, Viliam's point is that "ok but slightly worse than current group average" behavior has "potential to destroy your group" if you "keep adding people who are slightly below the average... thus lowering the average," lowering the bar indefinitely.

Viliam is referencing a mathematical truth that may or may not be empirically rel... (read more)

2Raemon24d
I’d been a bit confused at your earlier reaction to the post, this makes more sense to me.

Your comment is a response to my rejection of the claims in Alice Almost that a good way to improve group quality is to publicly humiliate below-average performers.

Specifically, you say that praising the improvement of the lower-performing members fails to counter Viliam's proposal to stop evaporative cooling by kicking out or criticizing low performers.

So I read you and Viliam as rejecting the idea that a combination of nurture and constructive criticism is the most important way to promote high group performance, and that instead, kicking out or publicly ... (read more)

1M. Y. Zuo24d
I'm really unsure how you read that into my comment. I'll spell it out step by step and let's see where the confusion is: Only one sentence out of many was quoted. This usually indicates on LW that the replier wants to address a specific portion, for some reason or another. If I wanted to address all your claims, I probably would have quoted the whole comment or left it unquoted, following the usual practice on LW. Your one sentence was: My view is: People are capable of correct interpretation some fraction of the time. Some fraction of that fraction will result in them 'taking away the right message'. These ratios are unknown but cumulatively will be well under 100% in any real-life scenario I can think of. Therefore, Viliam's point follows. And so on.

I'm responding to Raemon's link to the Tale of Alice Almost, which is what I thought you were referring to as well. If you haven't read it already, it emphasizes the idea that holding up members of a group who are slightly below the group average as negative examples can somehow motivate an improvement in the group. Your response made me think you were advocating doing this in order to ice out low-performing members. If that's wrong, then sorry for making false assumptions - my comment can mainly be treated as a response to the Tale of Alice Almost.

1M. Y. Zuo25d
Is there some part of my original comment that you do not understand?

The fundamental premise of trying to have a group at all is that you don’t exclusively care about group average quality. Otherwise, the easiest way to maximize that would be to kick out everybody except the best member.

So given that we care about group size as well as quality, kicking out or pushing away low performers is already looking bad. The natural place to start is by applying positive reinforcement for participating in the group, and only applying negative pressures, like holding up somebody as a bad example, when we’re really confident this is a h... (read more)

1M. Y. Zuo25d
What are you talking about? I'm referring to Viliam's point that a common scenario is that the original members leave once the group average quality declines below a threshold.

I think the post is describing a real problem (how to promote higher standards in a group that already has very high standards relative to the general population). I would like to see a different version framed around positive reinforcement. Constructive criticism is great, but it's something we always need to improve at, even the best of us.

People are capable of correctly interpreting the context of praise and taking away the right message. If Alice is a below-average fighter pilot, and her trainer praises her publicly for an above-average (for Alice) flight... (read more)

1M. Y. Zuo25d
The rate of success of this is not anywhere near 100%. So for group dynamics, where members have a finite level of patience, this really doesn't prevent Viliam's point: every new member being ever so slightly below the level of the previous one leads to evaporative cooling of the original members after a few dozen iterations.

I’m not sure that “group average” is always the metric we want to improve. My intuition is that we want to think of most groups as markets, and supply and demand for various types of interaction with particular people varies from day to day. Adding more people to the market, even if they’re below average, can easily create surplus to the benefit of all and be desirable.

Obviously even in real markets it’s not always beneficial to have more entrants, I think mainly because of coordination costs as the market grows. So in my model, adding extra members to the group is typically good as long as they can pay for their own coordination costs in terms of the value they provide to the group.
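
To make the "pay for their own coordination costs" test concrete, here is a toy model in Python; all numbers are illustrative, and the pairwise-link cost structure is my assumption, not part of the comment above.

def group_value(member_values, cost_per_pair=0.05):
    # Total value: sum of member contributions minus pairwise coordination costs.
    n = len(member_values)
    coordination_cost = cost_per_pair * n * (n - 1) / 2
    return sum(member_values) - coordination_cost

current = [1.0] * 10           # ten members of average value 1.0
newcomer = 0.7                 # below the group average

print(group_value(current))               # 7.75
print(group_value(current + [newcomer]))  # 7.95 -- a net gain

The newcomer lowers the group average but still raises total group value, because their contribution (0.7) exceeds their marginal coordination cost (0.05 x 10 = 0.5).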

Yeah, I think this is an important explanation for why (in my preferred image), we’d find the faeries hiding under the leaves in the faerie forest.

To avoid behavior that’s costly to police, or shortcomings that are hard to identify, and also to attract virtues that are hard to define, we rely in part on private, reputation- and relationship-based networks.

These types of ambiguous bad behaviors are what I had in mind when I wrote “predatory,” but of course they are not necessarily so easy to define as such. They might just be uncomfortable, or sort of “icky... (read more)

I don't think this is an adequate account of the selection effects we find in the job market. Consider:

  • We don't expect people to disappear from the job market just because they're the best. They disappear from the market when they've found a match, so that they and their counterpart become confident in the relationship's durability and shift from explore to exploit, investing in the relationship for the long term. The pool of candidates comprises both those who outgrew their previous position or relationship and those who got fired or dumped. Insofa
... (read more)
2Viliam23d
Yeah, it does not work absolutely. As you say, sometimes the incompetent people and companies disappear from the market; and sometimes, for various reasons, the competent ones are looking for something new. Yet I would say that in my personal experience there seems to be a correlation: the bad jobs I had were often those where I responded to a printed job announcement (even if I responded to multiple postings and chose the job that seemed relatively best among them), and the good jobs I had were often those where people I knew actively approached me saying "hey, I have a great job, and we are currently looking for someone just like you". (Or in one case, it was me calling my friends and asking: "hey, where are you working currently? is it an okay place? are you hiring?".) From the opposite perspective, I have interviewed a few job candidates whose CVs seemed impressive, but whose actual skills were somewhere around the Hello-World level. So it seems to me that responding to job announcements is indeed a lemon market for both sides.