The big difference between AI and these technologies is that we're worried about adversarial behavior by the AI.
A more direct analogy would be if Wright & co had been worried that airplanes might "decide" to fly safely until humanity had invented jet engines, then "decide" to crash them all at once. Nuclear bombs do have a direct analogy - a Dr. Strangelove-type scenario in which, after developing an armamentarium in an ostensibly carefully-controlled manner, some madman (or a defect in an automated launch system) triggers an all-out nuclear attac...
No, but it and its competitors do somehow exist... Why isn't there something similar for paywalled websites?
Good thoughts in general, I'm about where you are - VR is overall headed in a direction where I'm really excited to use it.
Disagree on the device looking cool. It looks like a snorkeling mask, which is still better than the blindfolded look of the Meta Quest.
What it might achieve is being acceptable in public. Google Glass failed because people perceived wearers as potential perverts, photographing people surreptitiously. If Apple can make people perceive AVP wearers as people who are "having more fun than you are on the plane" - i.e. get people intrigued about other people's use of the technology rather than intimidated by it - that will be a win for the company (and its customers).
I am sort of surprised that there's no equivalent of "spotify for websites." It's easy for me to imagine a service offering an ad-supported and paid subscription that streams otherwise-paywalled websites to you, distributing the revenue as a fraction of clicks or something like that, and only displaying ads on the websites to the ad-supported tier of users. Is there some enormous technological or security hurdle that makes this much harder to do for streaming websites than for streaming music?
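To make the revenue model concrete, here's a minimal sketch of the pro-rata split by clicks described above, analogous to how music streaming splits subscription revenue. The function name, site names, and all figures are hypothetical, purely for illustration:

```python
# Hypothetical sketch: distribute a subscriber's fee to paywalled sites
# in proportion to that subscriber's clicks. All numbers are made up.

def split_revenue(pool_cents, clicks_by_site):
    """Split a revenue pool across sites proportionally to click counts."""
    total_clicks = sum(clicks_by_site.values())
    return {
        site: pool_cents * clicks / total_clicks
        for site, clicks in clicks_by_site.items()
    }

# e.g. a $10 subscription (1000 cents) split across three paywalled sites
payouts = split_revenue(1000, {"site_a": 50, "site_b": 30, "site_c": 20})
```

The accounting, at least, seems trivial; whatever barrier exists is presumably in licensing, not in the math.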
I’d also add that female labor force participation rates will move these numbers around some. Their calculations assume all countries have 50% female participation when calculating income, when it actually varies from 11%-85% or so.
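A toy calculation shows how much that flat-50% assumption can move the income estimates. All figures below (wage, population) are hypothetical; only the participation rates come from the range mentioned above:

```python
# Toy illustration (wage and population figures are hypothetical):
# estimated total income earned by women scales linearly with the
# assumed female labor force participation rate.

def female_income(avg_wage, working_age_women, participation_rate):
    """Total income earned by women under a given participation rate."""
    return avg_wage * working_age_women * participation_rate

assumed = female_income(30_000, 1_000_000, 0.50)  # flat 50% assumption
low = female_income(30_000, 1_000_000, 0.11)      # low end of actual range
high = female_income(30_000, 1_000_000, 0.85)     # high end of actual range
```

Since the estimate is linear in the participation rate, assuming 50% overstates the low-participation countries by more than 4x and understates the high-participation ones by 40%.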
Just wanted to chime in and say that for weeks before reading your post, I'd also been interpreting Tyler's behavior on AI in exactly the same way you describe here. Thanks for expressing it so well.
(3) isn't about AI so I don't think Zvi's model explains that. If we ignore (1) and (2), then the one example we're left with (which may or may not be badly reasoned) isn't good enough evidence to say that somebody "just consistently makes badly-reasoned statements."
The whole point of capitalism is that the people who have and direct money are the ones who can make good decisions about how it should be used. When you see firsthand that high-level decision-making is a farce, where does that leave you?
I actually don't see capitalism as being fueled primarily by good decision-making in the C suite. Instead, I think that there's significant uncertainty around decisions at all levels of the company, and many limitations to their courses of action. Many, many businesses fail because of this.
But an existing company has been ...
Relatedly, Bill Gates’ article wasn’t that bad. Sure, there are some inaccuracies if you’re reading it strictly. But it’s not meant to be read strictly. It’s basically marketing material aimed at a very large crowd, which, as discussed above, requires using phrasing that gets the point across, not phrasing that is scientifically accurate when dissected.
There are inaccuracies in the article, period. It would be embarrassing for an engineer to make the mistake of conflating Celsius and Kelvin when comparing boiling point ratios, as in th...
but rather to lay out an opposing position
I understand that this is your aim. I guess what I am saying is that it does not seem like a good aim to me relative to the aim of countering specific, quotable arguments and generally making an effort to contextualize your arguments in some specific strain of discourse. I.e. cite something and respond to something.
One important reason why is that your argument is not specific or evidence-backed or carefully defined enough to agree or disagree with it, without asking you lots of additional clarifying questions. The...
Can you say more about your experiences that led you to your view of job creation? It's not often that I talk to people who've had the chance to personally observe the behavior of big investors and powerful executives closely enough to overturn the fundamentals of economics. It might be interesting to hear some of those stories - it's hard for me to envision how the personal observations of an individual could provide such powerful evidence against basic market economic principles that they'd overturn one's whole worldview. I haven't had this experience, and nobody I know has either, so from my point of view you are an individual with a very unusual point of view based on very unusual experiences.
Well, it's hard to know what specifically you're objecting to if you won't link to them. Citing the arguments you're criticizing is a pretty basic norm of discourse, unless you want to make an argument that these arguments are so well known that the source doesn't matter. But I don't think that is the case here, and even if it were, you still have to lay out your version of those arguments cleanly and clearly.
The part that I'm most skeptical of is your claim about the dynamics of job creation. You offer a vision in which a fixed number of rich people creat...
I think it's great that you took a stand to present your independent observations and relay some information people here may not have encountered on the subject, especially since political discourse is a minor LW taboo. This is good for epistemics IMO.
The key argument for why we'd predict 18+ Western fascisms in the next decade is that we should by default extrapolate current trends, while rejecting the appearance of stability.
I find this contradictory. Why are current derivatives of fascism something we should view as "stable" and likely to hold constant o...
I think it would be helpful if you supplied specific YIMBY articles and quoted from them to illustrate your disagreements. As it is, I have so many "what do you mean by this, exactly"-type questions about your original article that I wouldn't know where to start.
Mosquito bites itch because the mosquito injects saliva into your skin. Saliva contains mosquito antigens, foreign particles that your body has evolved to attack with an inflammatory immune response that causes itching. The compound histamine is a key signaling molecule used by your body to drive this reaction.
For a mosquito to avoid provoking this reaction, it would have to either avoid leaving compounds inside your body, or mutate those compounds so that they do not provoke an immune response. The human immune system is an adv...
Epistemic activism
I think LW needs better language to talk about efforts to "change minds." Ideas like asymmetric weapons and the Dark Arts are useful but insufficient.
In particular, I think there is a common scenario where:
The V-Dem Institute's tracker shows that, after widespread growth in democracy during the 1980s and 90s, many more countries are now becoming autocratic than democratic, especially weighted by population:
The V-Dem tracker you show doesn't show "widespread growth in democracy during the 1980s and 90s." It shows a giant explosion all of a sudden starting in 1991, when the Soviet Union collapsed, and then an equally sudden reversion from about 2000-2004.
It's tracking autocratization, the sign of the direction of change, rather than the magnitude of autocracy ...
When I'm reasoning about a practical chemistry problem at work, I'm usually thinking in terms of qualitative mechanisms:
Models do not need to be exactly true in order to produce highly precise and useful inferences. Instead, the objective is to check the model’s adequacy for some purpose. - Richard McElreath, Statistical Rethinking
Lightly edited for stylishness
I use ChatGPT as a starting point to investigate hypotheses to test at my biomedical engineering job on a daily basis. On certain problems, I am able to independently approach the level of understanding of a chemist with many years of experience, although his familiarity with our chemical systems and his education make him faster to arrive at the same result. This is a lived example of the phenomenon in which AI improves the performance of lower-tier performers more than higher-tier performers (I am a recent MS grad, h...
I pasted your question into Bing Chat, and it returned Python code to export your PredictionBook data to a CSV file. Haven't tested it. Might be worth looking into this approach?
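For reference, the data-to-CSV half of such a script is simple; the part worth double-checking in any generated code is the fetching step (authentication, pagination). Here's a minimal sketch of the writing half - note the field names below are hypothetical placeholders, not PredictionBook's actual schema:

```python
import csv

# Hypothetical field names -- check these against your actual data.
FIELDS = ["description", "created_at", "deadline", "mean_confidence", "outcome"]

def predictions_to_csv(predictions, path):
    """Write a list of prediction dicts to a CSV file.

    `predictions` is whatever list of records the fetching step produced;
    fields not listed in FIELDS are silently dropped.
    """
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(predictions)
```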
Over the last six months, I've grown more comfortable writing posts that I know will be downvoted. It's still frustrating. But I used to feel intensely anxious when it happened, and now, it's mostly just a mild annoyance.
The more you're able to publish your independent observations, without worrying about whether others will disagree, the better it is for community epistemics.
Changing a person's strongly-held belief is difficult. They may not be willing to spend the time it would take to address all your arguments. They might not be capable of understanding them. And they may be motivated to misunderstand.
An alternative is to give them short, fun intros to the atomic ideas and evidence for your argument, without revealing your larger aim. Let them gradually come to the right conclusion on their own.
The art of this approach is motivating why the atomic ideas are interesting, without using the point you're trying to make as the m...
If I want to change minds...
... how many objections will I have to overcome?
... from how many readers?
... in what combinations?
... how much argument will they tolerate before losing interest?
... and how individually tailored will they have to be?
... how expensive will providing them with legible evidence be?
... how equipped are they to interpret it accurately?
A: “If I were going to Paris, where would be the best place to get a baguette?” B: “Oh! You’re going to Paris?”
I've done B's conversational move plenty of times, and I am fully capable of understanding conditionals!
If A is asking me this, the most plausible inference is that this is a playful way of telling me that they're going to Paris, and they want to get my opinions on what I enjoyed while I was there. My first reaction might be surprise to learn that (plausibly) they are planning a trip there, so I want to establish that with more certainty. This is ...
That's true! However, I would feel weird and disruptive trying to ask ChatGPT questions when working alongside coworkers in the lab.
Here is a quote from the same text that I think is more apt to the point you are making about apparent shortcomings in ET Jaynes' interpretation of more general agentic behavior:
Of course, for many purposes we would not want our robot to adopt any of these more ‘human’ features arising from the other coordinates. It is just the fact that computers do not get confused by emotional factors, do not get bored with a lengthy problem, do not pursue hidden motives opposed to ours, that makes them safer agents than men for carrying out certain tasks.
To readers of this post, I would like to note that a small number of people on the forum appear to be strong-downvoting my posts on this subject shortly after they are published. I don't know specifically why, but it is frustrating.
For those of you who agree or disagree with my post, I hope you will choose to engage and comment on it to help foster a productive discussion. If you are a person who has chosen to strong-downvote any of the posts in this series, I especially invite you to articulate why - I precommit that my response will be somewhere between "thank you for your feedback" and something more positive and engaged than that.
Thoughts on Apple Vision Pro:
Conservatism says "don't be first, keep everything the same." This is a fine, self-consistent stance.
A responsible moderate conservative says "Someone has to be first, and someone will be last. I personally want to be somewhere in the middle, but I applaud the early adopters for helping me understand new things." This is also a fine, self-consistent stance.
Irresponsible moderate conservatism endorses "don't be first, and don't be last," as a general rule, and denigrates those who don't obey it. It has no answers for who ought to be first and last. But for ...
While I agree with you that Jaynes' description of how loss functions operate in people does not extend to agents in general, the specific passage you have quoted reads strongly to me as if it's meant to be about humans, not generalized agents.
You claim that Jaynes' conclusion is that "agents with similar goal specifications are in conflict, because the specified objective (for food, energy, status, whatever) binds to an agent's own state, not a world-model." But this isn't true. His conclusion is specifically about humans.
I want to reinforce that I'm not ...
I’m anticipating writing several posts on this topic in the coming weeks on the EA forum. I just want to flag that I think your questions about how to think about and value reputation are important, that the EA community is rife with contradictory ideas and inadequate models on this topic, and that we can do a lot better by getting a grip on this subject. I don’t have all the answers, but right now it seems like people are afraid to even talk about the issue openly.
You inspired me to write this up over at EA forum, where it’s getting a terrible reception :D All the best ideas start out unpopular?
I wouldn’t be surprised if a lot of EAs see my takes here as a slippery slope to warm glow thinking and wanton spending that needs to be protected against.
I didn't have this reaction at all. The four lessons you present are points about execution, not principles. IMO a lot of these ideas are cheap or free while being super high-value. We can absolutely continue our borg-like utilitarianism and coldhearted cost-benefit analysis while projecting hospitality, building reputation, conserving slack, and promoting inter-institutional cooperation!
But I do t...
The most common anti-eugenics stance I encounter is also opposed to epilogenics. From this point of view, parents choosing to select for desirable traits in their offspring using advanced medical technology is wasteful, immoral and gross. They have roughly the same feelings about epilogenics (including for intelligence) as they have about cosmetic plastic surgery. To them, a natural and traditional trajectory of healthy human lifespan is ideal - we should maintain our health via diet and exercise, try not to care too much about superficial traits like appe...
I think that if there is an objective morality, then you can use your concern about self-congratulatory narratives as a starting point. What moral view is leading you to think there’s any problem at all with enjoying a self-congratulatory narrative? Once you’ve identified it, you can figure out what other moral positions it might imply.
Even that 0.69%-acceptable statistic may be a political maneuver. I found a meta-analysis a year or two ago of AI healthcare diagnostics that found about this level of acceptability in the literature.
Where it becomes political is that a prestigious doctor friend unsympathetic to AI diagnosis used this statistic to blow off the whole field, rather than to become interested in the tiny fraction of acceptable research. Which is political on its own, and also has to make you wonder if researchers set their quality bar to get the result they want.
Nevertheless it IS discouraging that only about 276 of 40,000 papers would be acceptable.
I think it's a complex question. For example, people debate whether porn is harmful or helpful:
If you get specific enough about these questions, it may be possible to ask meaningful scientific or moral questio...
Yes, I agree that if "practical problem in your life" did not include "looking good" or "goes with my other clothes" as design parameters then you'd probably end up in a situation like that. I succeeded at avoiding this problem because I specifically set out to find pants that were good for biking and looked like professional work pants (fortunately I already had some that did). This can be useful: it puts a sharp constraint on the shirts I buy, requiring them to look good with these specific pants. That limitation can be helpful in making the overwhelming number of choices manageable.
I agree with the perspective you're laying out here. These days, I take a slightly more concrete approach to choosing my wardrobe. It still fits the perspective, but the thought process is different.
To decide what to buy, I think about a specific purpose in my life for which I need clothes, and I try to get as specific as possible.
For example, I just started a new job, and I wanted to buy some new clothes for it. Because I already had plenty of suitable shirts, I started to think about the requirements for optimal pants for this application.
I understand your point is that material circumstances control the moral ideas prevalent in a culture, and that these cultural ideals in turn control individual beliefs and actions. Our morality and that of our ancestors is therefore determined largely by material circumstances.
Alongside this deterministic framework, you are arguing for a Dawkins-style selfish-meme explanation for which cultural ideas survive and flourish. Specifically, you are arguing that historical material circumstances favored the survival of a pro-slavery, pro-war morality, while mode...
Many commenters seem to be reading this post as implying something like slavery and violence being good or at least morally okay... I read it as a caution similar to the common points of "how sure are you that you would have made the morally correct choice if you had been born as someone benefiting from slavery back when it was a thing" combined with "the values that we endorse are strongly shaped by self-interest and motivated cognition"
I don't agree with your characterization of the post's claims. The title is synonymous with "morality is arbitrary...
Based on the evident historical record, without the environmentally deleterious bounty fossil fuels facilitated, most of us would be conjuring up creatively compelling excuses for why forcing your neighbor to work for free is the Moral thing to do.
I can't speak to every era, but in the middle ages, about 75% of us would have been serfs: not tradeable individually, but bound to a plot of purchasable land. No way most of us would have been spending our time innovating arguments for the morality of slavery.
Arguments for the morality of slavery come do...
I'm honestly not sure if this system would be:
Just noting a point of confusion - if changing minds is a social endeavor having to do with personal connection, why is it necessary to get people to engage System 2/Central Route thinking? Isn’t the main thing to get them involved in a social group where the desired beliefs are normal and let System 1/Peripheral Route thinking continue to do its work?
I would pay about $5/month for a version of Twitter that was read-only. I want a window, not a door.
And I’m not sure about the scales being an icon for “seems borderline.” Some sort of fuzzy line or something might be more appropriate. Scales make me think “well measured.”
How in-depth have you looked at the studies about declining performance in doctors with age? An obvious alternative hypothesis is that doctors gain skill as they age, and therefore tend to take on higher-risk patients and procedures with worse outcomes. I am not saying that's what's going on here - I'd just like to know if this is something you've looked into.