Changing a person's strongly held belief is difficult. They may not be willing to spend the time it would take to address all your arguments. They might not be capable of understanding them. And they may be motivated to misunderstand.
An alternative is to give them short, fun intros to the atomic ideas and evidence for your argument, without revealing your larger aim. Let them gradually come to the right conclusion on their own.
The art of this approach is motivating why the atomic ideas are interesting, without using the point you're trying to make as the m...
If I want to change minds...
... how many objections will I have to overcome?
... from how many readers?
... in what combinations?
... how much argument will they tolerate before losing interest?
... and how individually tailored will they have to be?
... how expensive will providing them with legible evidence be?
... how equipped are they to interpret it accurately?
A: “If I were going to Paris, where would be the best place to get a baguette?” B: “Oh! You’re going to Paris?”
I've done B's conversational move plenty of times, and I am fully capable of understanding conditionals!
If A is asking me this, the most plausible inference is that this is a playful way of telling me that they're going to Paris, and they want to get my opinions on what I enjoyed while I was there. My first reaction might be surprise to learn that (plausibly) they are planning a trip there, so I want to establish that with more certainty. This is ...
That's true! However, I would feel weird and disruptive trying to ask ChatGPT questions when working alongside coworkers in the lab.
Here is a quote from the same text that I think is more apt to the point you are making about apparent shortcomings in ET Jaynes' interpretation of more general agentic behavior:
Of course, for many purposes we would not want our robot to adopt any of these more ‘human’ features arising from the other coordinates. It is just the fact that computers do not get confused by emotional factors, do not get bored with a lengthy problem, do not pursue hidden motives opposed to ours, that makes them safer agents than men for carrying out certain tasks.
To readers of this post, I would like to note that a small number of people on the forum appear to be strong-downvoting my posts on this subject shortly after they are published. I don't know specifically why, but it is frustrating.
For those of you who agree or disagree with my post, I hope you will choose to engage and comment on it to help foster a productive discussion. If you are a person who has chosen to strong-downvote any of the posts in this series, I especially invite you to articulate why - I precommit that my response will be somewhere between "thank you for your feedback" and something more positive and engaged than that.
Thoughts on Apple Vision Pro:
Conservatism says "don't be first, keep everything the same." This is a fine, self-consistent stance.
A responsible moderate conservative says "Someone has to be first, and someone will be last. I personally want to be somewhere in the middle, but I applaud the early adopters for helping me understand new things." This is also a fine, self-consistent stance.
Irresponsible moderate conservatism endorses "don't be first, and don't be last," as a general rule, and denigrates those who don't obey it. It has no answers for who ought to be first and last. But for ...
While I agree with you that Jaynes' description of how loss functions operate in people does not extend to agents in general, the specific passage you have quoted reads strongly to me as if it's meant to be about humans, not generalized agents.
You claim that Jaynes' conclusion is that "agents with similar goal specifications are in conflict, because the specified objective (for food, energy, status, whatever) binds to an agent's own state, not a world-model." But this isn't true. His conclusion is specifically about humans.
I want to reinforce that I'm not ...
I’m anticipating writing several posts on this topic on the EA Forum in the coming weeks. I just want to flag that I think your questions about how to think about and value reputation are important, that the EA community is rife with contradictory ideas and inadequate models on this topic too, and that we can do a lot better by getting a grip on this subject. I don’t have all the answers, but right now it seems like people are afraid to even talk about the issue openly.
You inspired me to write this up over at EA forum, where it’s getting a terrible reception :D All the best ideas start out unpopular?
I wouldn’t be surprised if a lot of EAs see my takes here as a slippery slope to warm glow thinking and wanton spending that needs to be protected against.
I didn't have this reaction at all. The four lessons you present are points about execution, not principles. IMO a lot of these ideas are cheap or free while being super high-value. We can absolutely continue our borg-like utilitarianism and coldhearted cost-benefit analysis while projecting hospitality, building reputation, conserving slack, and promoting inter-institutional cooperation!
But I do t...
The most common anti-eugenics stance I encounter is also opposed to epilogenics. From this point of view, parents choosing to select for desirable traits in their offspring using advanced medical technology is wasteful, immoral and gross. They have roughly the same feelings about epilogenics (including for intelligence) as they have about cosmetic plastic surgery. To them, a natural and traditional trajectory of healthy human lifespan is ideal - we should maintain our health via diet and exercise, try not to care too much about superficial traits like appe...
I think that if there is an objective morality, then you can use your concern about self-congratulatory narratives as a starting point. What moral view is leading you to think there’s any problem at all with enjoying a self-congratulatory narrative? Once you’ve identified it, you can figure out what other moral positions it might imply.
Even that 0.69%-acceptable statistic may be a political maneuver. A year or two ago I found a meta-analysis of AI healthcare diagnostics that reported about this level of acceptability in the literature.
Where it becomes political is that a prestigious doctor friend unsympathetic to AI diagnosis used this statistic to blow off the whole field, rather than to become interested in the tiny fraction of acceptable research. Which is political on its own, and also has to make you wonder if researchers set their quality bar to get the result they want.
Nevertheless it IS discouraging that only about 276 of 40,000 papers (0.69%) would be acceptable.
I think it's a complex question. For example, people debate whether porn is harmful or helpful:
If you get specific enough about these questions, it may be possible to ask meaningful scientific or moral questio...
Yes, I agree that if "practical problem in your life" did not include "looking good" or "goes with my other clothes" as design parameters then you'd probably end up in a situation like that. I succeeded at avoiding this problem because I specifically set out to find pants that were good for biking and looked like professional work pants (fortunately I already had some that did). This can be useful: it puts a sharp constraint on the shirts I buy, requiring them to look good with these specific pants. That limitation can be helpful in making the overwhelming number of choices manageable.
I agree with the perspective you're laying out here. These days, I take a slightly more concrete approach to choosing my wardrobe. It still fits the perspective, but the thought process is different.
To decide what to buy, I think about a specific purpose in my life for which I need clothes, and I try to get as specific as possible.
For example, I just started a new job, and I wanted to buy some new clothes for it. Because I already had plenty of suitable shirts, I started to think about the requirements for optimal pants for this application.
I understand your point is that material circumstances control the moral ideas prevalent in a culture, and that these cultural ideals in turn control individual beliefs and actions. Our morality and that of our ancestors is therefore determined largely by material circumstances.
Alongside this deterministic framework, you are arguing for a Dawkins-style selfish-meme explanation for which cultural ideas survive and flourish. Specifically, you are arguing that historical material circumstances favored the survival of a pro-slavery, pro-war morality, while mode...
Many commenters seem to be reading this post as implying something like slavery and violence being good or at least morally okay... I read it as a caution similar to the common points of "how sure are you that you would have made the morally correct choice if you had been born as someone benefiting from slavery back when it was a thing" combined with "the values that we endorse are strongly shaped by self-interest and motivated cognition"
I don't agree with your characterization of the post's claims. The title is synonymous with "morality is arbitrary...
Based on the evident historical record, without the environmentally deleterious bounty fossil fuels facilitated, most of us would be conjuring up creatively compelling excuses for why forcing your neighbor to work for free is the Moral thing to do.
I can't speak to every era, but in the Middle Ages, about 75% of us would have been serfs: not tradeable individually, but bound to a plot of purchasable land. No way most of us would have been spending our time innovating arguments for the morality of slavery.
Arguments for the morality of slavery come do...
I'm honestly not sure if this system would be:
Just noting a point of confusion - if changing minds is a social endeavor having to do with personal connection, why is it necessary to get people to engage System 2/Central Route thinking? Isn’t the main thing to get them involved in a social group where the desired beliefs are normal and let System 1/Peripheral Route thinking continue to do its work?
I would pay about $5/month for a version of Twitter that was read-only. I want a window, not a door.
And I’m not sure about the scales being an icon for “seems borderline.” Some sort of fuzzy line or something might be more appropriate. Scales make me think “well measured.”
The support icon looks at first glance like a garbage can although I can tell it’s meant to be a pillar.
I think with this system you will end up with too many large, difficult, uncatchy jumps. Plus, similar phone numbers will sound similar, which is not what you want.
How does that work with 10 available digits and only 7 scale notes? Do three digits become accidentals or something?
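For concreteness, here's a minimal Python sketch of one way the mapping could work (my own guess at the system, not something proposed above): digits 1-7 land on the C-major scale, and the three leftover digits 8, 9, and 0 become accidentals.

# Hypothetical mapping: digits 1-7 on the C-major scale,
# digits 8, 9, 0 assigned to sharps as the three "accidentals".
SCALE = {
    '1': 'C', '2': 'D', '3': 'E', '4': 'F',
    '5': 'G', '6': 'A', '7': 'B',
    '8': 'C#', '9': 'D#', '0': 'F#',
}

def phone_to_melody(number):
    # Keep only digits, then look up each one's note name.
    return [SCALE[d] for d in number if d.isdigit()]

print(phone_to_melody('867-5309'))  # ['C#', 'A', 'B', 'G', 'E', 'F#', 'D#']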
I did this for a while, but then returned it and just started opening the windows more often, especially when it felt stuffy.
Steelman as the inverse of the Intellectual Turing Test
The Intellectual Turing Test (ITT) checks if you can speak in such a way that you convincingly come across as if you believe what you're saying. Can you successfully pose as a libertarian? As a communist?
Lately, the ITT has been getting boosted over another idea, "steelmanning," which I think of as "arguing against the strongest version of an idea," the opposite of weakmanning or strawmanning.
I don't think one is better than the other. I think that they're tools for different purposes.
If I'm doi...
I came across GreyZone Health today, thought it might be relevant:
...GreyZone Health
Hope for Difficult to Diagnose, Rare, and Complex Medical Conditions
Facing a Misdiagnosis, or Having No Diagnosis at All?
With our exceptional patient advocate service, GreyZone Health helps patients like you with difficult to diagnose, rare, and complex medical conditions. GreyZone Health finds answers and improves your quality of life. Based in Seattle, Washington, our professional patient advocates serve patients around Washington state and around the world, both virtually a
My suggestion would be to start by focusing on hypotheses that your illness has a single cause that is short-term, like a matter of minutes, hours, or at most a day. And also that it’s reliable: do X, and Y happens almost every time. These assumptions are easiest to rule out and do not require elaborate tracking. You may also want to focus on expanding your hypothesis space if you haven’t already - food, exercise, sleep, air quality, pets, genetic and hormonal issues, and chronic infections are all worth looking at.
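As a sketch of how cheap that kind of test can be (my own illustration with made-up data, not anything from the post): log each observation window as a (did X, got Y) pair and compare the rate of Y with and without X.

# Minimal tally for a reliable "do X, and Y happens" hypothesis.
log = [(True, True), (True, True), (False, False),
       (True, False), (False, True), (True, True)]  # made-up data

def rate(pairs):
    # Fraction of windows where Y occurred.
    return sum(y for _, y in pairs) / len(pairs) if pairs else 0.0

with_x = [p for p in log if p[0]]
without_x = [p for p in log if not p[0]]
print(f"rate of Y given X:   {rate(with_x):.2f}")     # 0.75 on this toy log
print(f"rate of Y without X: {rate(without_x):.2f}")  # 0.50 on this toy log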
As you noticed, testing more complex hy...
This is a staged prompt, with the first stage initiating the conversation and stages 2 and 3 coming after GPT-4’s first and second replies respectively.
First stage:
You are a science fiction writer giving instructions to a genie to get it to write you a science fiction novel capable of winning the Hugo award. However, you know that genies often misconstrue wishes, so your wish needs to be detailed, conceptually bulletproof, and covering all facets of what makes a science fiction novel great. You also only get three wishes, so it has to be a good prompt. Fi...
Story (first try, no edits. prompt in a reply to this comment)
Chapter 1: The Last Sunrise
The horizon wore an orange-red hue, a token of farewell from the sun. It was the last sunrise Jonas would ever witness, the last that his biological eyes would capture and transmit to his fleshy, mortal brain. Tomorrow, he would wake up inside a machine.
A sigh escaped his lips, a whisper in the morning air. He sat on the edge of the roof, feet dangling four stories above the city, staring at the kaleidoscope of colors. The city was waking up, the sounds of the waking w...
I thought she was going to start disseminating seeds and sprouting vines in the end. This made me laugh out loud.
And in one:
print('\n'.join(['Fizz' * (i % 3 == 0) + 'Buzz' * (i % 5 == 0) or str(i) for i in range(1, 101)]))
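(For readers puzzling over the trick here: multiplying a string by a boolean yields either the string or '', and since '' is falsy, the or falls back to str(i).)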
ChatGPT does it in two:
for i in range(1, 101):
print("Fizz" * (i % 3 == 0) + "Buzz" * (i % 5 == 0) or i)
My 3-line FizzBuzz in Python:
for i in range(1, 101):
x = ["", "Fizz"][i%3==0] + ["", "Buzz"][i%5==0]
print([x, i][len(x)==0])
Up front: I am biased against extreme diets like water-only fasts. I can see a use case in carefully medically supervised settings, such as for a cancer treatment, and I know that some religious practitioners use them. I've never tried them and have never been morbidly obese.
The only truly relevant paper I found was a case study of a woman whose 40-day water fast caused thiamine deficiency, which led to her developing a severe neurological disorder called Wernicke's encephalopathy (source).
The academic literature on prolonged water-on...
Back on my laptop, so I can quote conveniently. First, I went back and read the Tale of Alice Almost more carefully, and found I had misinterpreted it. So I will go back and edit my original comment that you were responding to.
Second, Villiam's point is that "ok but slightly worse than current group average" behavior has "potential to destroy your group" if you "keep adding people who are slightly below the average... thus lowering the average," lowering the bar indefinitely.
Villiam is referencing a mathematical truth that may or may not be empirically rel...
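To make the arithmetic concrete (a toy model of my own, not something Villiam wrote): if each newcomer scores a fixed delta below the current average, each addition lowers the average by delta/(size+1). The drops shrink, but their sum is a harmonic series, so the average keeps falling, just more and more slowly.

# Toy model: start with 10 members averaging 100, then keep adding
# a newcomer who scores 1 point below the *current* average.
avg, size, delta = 100.0, 10, 1.0
for k in range(1, 1001):
    newcomer = avg - delta
    avg = (avg * size + newcomer) / (size + 1)  # falls by delta/(size+1)
    size += 1
    if k in (1, 10, 100, 1000):
        print(f"after {k:4d} additions: average = {avg:.2f}")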
Your comment is a response to my rejection of the claims in Alice Almost that a good way to improve group quality is to publicly humiliate below-average performers.
Specifically, you say that praising the improvement of the lower-performing members fails to counter Villiam’s proposal to stop evaporative cooling by kicking out or criticizing low performers.
So I read you and Villiam as rejecting the idea that a combination of nurture and constructive criticism is the most important way to promote high group performance, and that instead, kicking out or publicly ...
I’m responding to Raemon’s link to the Tale of Alice Almost, which is what I thought you were referring to as well. If you haven’t read it already, it emphasizes the idea that holding up members of a group who are slightly below the group average as negative examples can somehow motivate an improvement in the group. Your response made me think you were advocating doing this in order to ice out low-performing members. If that’s wrong, then sorry for making false assumptions - my comment can mainly be treated as a response to the Tale of Alice Almost.
The fundamental premise of trying to have a group at all is that you don’t exclusively care about group average quality. Otherwise, the easiest way to maximize that would be to kick out everybody except the best member.
So given that we care about group size as well as quality, kicking out or pushing away low performers is already looking bad. The natural place to start is by applying positive reinforcement for participating in the group, and only applying negative pressures, like holding up somebody as a bad example, when we’re really confident this is a h...
I think the post is describing a real problem (how to promote higher standards in a group that already has very high standards relative to the general population). I would like to see a different version framed around positive reinforcement. Constructive criticism is great, but improvement is something we always need, even the best of us.
People are capable of correctly interpreting the context of praise and taking away the right message. If Alice is a below-average fighter pilot, and her trainer praises her publicly for an above-average (for Alice) flight...
I’m not sure that “group average” is always the metric we want to improve. My intuition is that we want to think of most groups as markets, and supply and demand for various types of interaction with particular people varies from day to day. Adding more people to the market, even if they’re below average, can easily create surplus to the benefit of all and be desirable.
Obviously even in real markets it’s not always beneficial to have more entrants, I think mainly because of coordination costs as the market grows. So in my model, adding extra members to the group is typically good as long as they can pay for their own coordination costs in terms of the value they provide to the group.
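As a toy version of that model (my own made-up numbers, purely illustrative): give each member a constant contribution and charge coordination costs per pair of members. Net value then peaks at a finite size, and a marginal member is worth adding exactly when their contribution covers the marginal coordination cost they create.

# Toy model: linear member value vs. pairwise coordination costs.
v, c = 10.0, 0.2  # made-up: value per member, cost per pair

def net_value(n):
    # Total value minus cost over all n*(n-1)/2 pairs.
    return v * n - c * n * (n - 1) / 2

best = max(range(1, 200), key=net_value)
print(best, net_value(best))  # peak near n = v/c + 1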
Yeah, I think this is an important explanation for why (in my preferred image), we’d find the faeries hiding under the leaves in the faerie forest.
To avoid behavior that’s costly to police, or shortcomings that are hard to identify, and also to attract virtues that are hard to define, we rely in part on private, reputation- and relationship-based networks.
These types of ambiguous bad behaviors are what I had in mind when I wrote “predatory,” but of course they are not necessarily so easy to define as such. They might just be uncomfortable, or sort of “icky...
I don't think this is an adequate account of the selection effects we find in the job market. Consider:
Over the last six months, I've grown more comfortable writing posts that I know will be downvoted. It's still frustrating. But I used to feel intensely anxious when it happened, and now, it's mostly just a mild annoyance.
The more you're able to publish your independent observations, without worrying about whether others will disagree, the better it is for community epistemics.