A summary of every Replacing Guilt post

Something something akrasia maybe? Or some of the other stuff in that wiki's "see also" section?

A Quick Ontology of Agreement

Maybe John Nerst's erisology is the "dual" to your essay here, since it's basically the study of disagreement. There's also a writeup in The Atlantic, and a podcast episode with Julia Galef. Quoting Nerst:

By “disagreement” I don’t mean the behavior of disagreeing. I mean the plain fact that people have different beliefs, different tastes, and react differently to things.

I find this endlessly interesting. A person that disagrees with me must have a different mind in some way. Can that difference be described? Explained? What do such differences say about the contingent nature of my own mind? Can this different mind to some extent be simulated inside my own? Can I understand what it feels like to think like someone else?

That’s one part. How we negotiate these differences is also interesting. How do we communicate our beliefs to each other? How do we interpret, model and counter others’ beliefs? How, and how well, does language work as a medium for connecting and comparing mind with mind, and with reality? Negotiating the differences — including trying to reshape minds in your own image through argumentation and rhetoric — tends to result in coordination and organization of ideas and beliefs across groups of people.

From one perspective it doesn’t matter so much if an idea is in a single person or distributed across many; the study of disagreement is perhaps best thought of as the study of differences and dissonances between ideas and systems of ideas and how they affect and are affected by the individual and collective mechanisms by which ideas are shaped and organized inside and among minds.

As I see it, this doesn’t exactly match any particular existing discipline, even though there’s plenty of relevant research and knowledge already. Psychologists and political scientists study opinions, anthropologists and historians study differences in how and what people think across space and time, philosophers study how concepts work, and machine learning specialists come up with ways to create them from data. Rhetoricians know how to argue convincingly, economists know what incentives we face when doing so, and biologists know why those things are incentives at all. And so it goes, for a dozen more disciplines (feel free to complain that I’ve overlooked yours). All these fields are relevant for understanding disagreement, but there’s no institutional structure for integrating it into a cross-disciplinary body of knowledge fit for public consumption.

I particularly liked A Deep Dive into the Harris-Klein Controversy, although it's long (9,000 words), and I often find myself thinking of The Signal and the Corrective and Decoupling Revisited, which describe frequently-occurring failure modes in online discourse between smart, well-meaning people.

Relationship Advice Repository

The 'Resources' section lists How to Talk So Kids Will Listen and Listen So Kids Will Talk [book] -- I also enjoyed weft's Book Review: How To Talk So Little Kids Will Listen. The latter book is by Joanna Faber and Julie King; Joanna is the daughter of Adele Faber, who co-wrote the former with Elaine Mazlish. Quoting weft:

The core principles are the same, but the update stands on its own. Where the original "Kids" acts more like a workbook, asking the reader to self-generate responses, "Little Kids" feels more like it's trying to download a response system into your head via modeling and story-telling. I personally prefer this system better, because the workbook approach feels like it's only getting to my System 2 (sorry for the colloquialism). Meanwhile being surrounded with examples and stories works better for me to fully integrate a new mode of interaction.

I too prefer examples and stories to self-generated responses, so I thought it'd be a useful complement for others like weft and me.

Humans are very reliable agents

I'm guessing you're referring to Brian Potter's post Where Are The Robotic Bricklayers?, which to me is a great example of reality being surprisingly detailed. Quoting Brian:

Masonry seemed like the perfect candidate for mechanization, but a hundred years of limited success suggests there’s some aspect to it that prevents a machine from easily doing it. This makes it an interesting case study, as it helps define exactly where mechanization becomes difficult - what makes laying a brick so different than, say, hammering a nail, such that the latter is almost completely mechanized and the former is almost completely manual?

Slow motion videos as AI risk intuition pumps

This reminds me of Eliezer's short story That Alien Message, which is told from the other side of the speed divide. There's also Freitas' "sentience quotient" idea, which upper-bounds information-processing rate per unit mass at SQ +50 (it's a log scale -- for reference, human brains are around +13, all neuronal brains cluster within a few points of that, vegetative SQ is around -2, etc.).
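For concreteness, Freitas defines SQ as the base-10 log of information-processing rate (bits/s) divided by mass (kg). A minimal sketch of that calculation; the rate and mass figures below are rough, illustrative estimates, not precise measurements:

```python
import math

def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
    """Freitas' SQ: log10 of information-processing rate per unit mass."""
    return math.log10(bits_per_second / mass_kg)

# Rough, illustrative numbers:
human_brain = sentience_quotient(1e13, 1.5)       # ~10^13 bits/s, ~1.5 kg -> about +13
plant = sentience_quotient(0.1, 10)               # slow vegetative signaling -> about -2
upper_bound = sentience_quotient(1.36e50, 1.0)    # Bremermann's limit per kg -> about +50
```

The +50 ceiling falls out of Bremermann's limit on computation per unit mass, which is why no physical system can sit much above it on this scale.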

What are some smaller-but-concrete challenges related to AI safety that are impacting people today?

Perhaps I'm missing something (I don't work in AI research), but isn't the obvious first stop Amodei et al.'s Concrete Problems in AI Safety? Apologies if you already know about this paper and meant something else.

The Problem With The Current State of AGI Definitions

I concur with your last paragraph, and see it as a special case of rationalist taboo (taboo "AGI"). I'd personally like to see a set of AGI timeline questions on Metaculus where only the definitions differ. I think it would be useful for the same forecasters to see how their timeline predictions vary by definition; I suspect there would be a lot of personal updating to resolve emergent inconsistencies (extrapolating from my own experience, and also from ACX prediction market posts IIRC), and it would be interesting to see how those personal updates behave in the aggregate. 

An inquiry into the thoughts of twenty-five people in India

I'm reminded of Sarah Constantin's Humans Who Are Not Concentrating Are Not General Intelligences. A quote that resonates with my own experience:

I’ve noticed that I cannot tell, from casual conversation, whether someone is intelligent in the IQ sense.

I’ve interviewed job applicants, and perceived them all as “bright and impressive”, but found that the vast majority of them could not solve a simple math problem. The ones who could solve the problem didn’t appear any “brighter” in conversation than the ones who couldn’t.

I’ve taught public school teachers, who were _incredibly_ bad at formal mathematical reasoning (I know, because I graded their tests), to the point that I had not realized humans could be that bad at math — but it had _no_ effect on how they came across in friendly conversation after hours. They didn’t seem “dopey” or “slow”, they were witty and engaging and warm.

I’ve read the personal blogs of intellectually disabled people — people who, by definition, score poorly on IQ tests — and _they_ don’t read as any less funny or creative or relatable than anyone else.

Whatever ability IQ tests and math tests measure, I believe that lacking that ability doesn’t have _any_ effect on one’s ability to make a good social impression or even to “seem smart” in conversation.

Category Theory Without The Baggage

Just wondering -- did you ever get around to writing this post? I've bounced off many Yoneda explainers before, but I have a high enough opinion of your expository ability that I'm hopeful yours might do it for me.

The Meaninglessness of it all

You may be interested in Kevin Simler's essay A Nihilist's Guide to Meaning, which is a sort of graph-theory flavored take on meaning and purpose. I was pleasantly surprised to see how much mileage he got out of his working definition, how many examples of meaningful vs not-meaningful things it explains:

A thing X will be perceived as meaningful in context C to the extent that it's connected to other meaningful things in C.
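The definition is circular on purpose: meaning is a fixed point of connections to other meaningful things, much like PageRank. A toy sketch of that idea as iterated propagation over a graph; the graph, node names, and damping constant are all invented for illustration:

```python
def meaning_scores(graph, iterations=50, damping=0.85):
    """Toy fixed-point computation: a node's meaning is (mostly)
    the average meaning of its neighbors, PageRank-style."""
    scores = {node: 1.0 for node in graph}
    for _ in range(iterations):
        new = {}
        for node, neighbors in graph.items():
            neighbor_avg = (sum(scores[n] for n in neighbors) / len(neighbors)
                            if neighbors else 0.0)
            new[node] = (1 - damping) + damping * neighbor_avg
        scores = new
    return scores

# Invented example: "career" connects to several meaningful things,
# while isolated "doodling" converges to the low baseline score.
graph = {
    "family": ["career", "community"],
    "career": ["family", "community", "craft"],
    "community": ["family", "career"],
    "craft": ["career"],
    "doodling": [],
}
scores = meaning_scores(graph)
```

Under this sketch, well-connected nodes converge toward high scores while disconnected ones settle at the baseline, which matches Simler's examples of meaningful vs not-meaningful activities.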
