There are critical gaps in the accessibility and affordability of mental health services worldwide: In some countries, you have to wait years for therapy; in others, health insurance covers at most one session per month; in still others, therapy for particular conditions like cluster B personality disorders is virtually nonexistent.
We want to leverage LLMs to fill these gaps and complement regular therapy. Our product is in development. We've based it on Gemini and want it to interface with widely used messaging apps, so users can interact with it like they would with a friend or coach.
I’ve previously founded or worked for several charities and spent a few years earning to give to support work on invertebrate welfare and s-risks from AI.
You can get up to speed on my thinking at Impartial Priorities.
Do you just mean that there are some dimensions along which it's unique among the items in the list (each of them is probably unique along some dimensions), or do you mean that it's not pathological and so shouldn't be on the list?
Ty! Yeah, I think I'm trying to figure out whether I have such a noun and, if so, whether it's the same as the self, i.e. the question you don't want to weigh in on.
I observe processes in me that take the shape of:
Insofar as these evaluations, the chill one and the threatening one, have to do with me, this body/algorithm/identity, I've been referring to them as my self. I think that's already a narrower definition than the widely used one, which I should switch to once I get a fuller picture of everything else that is also self.
When I look for the homunculus, I look at the moment between ranking and execution. But in most everyday situations, there's nothing there.
Willpower also doesn't map clearly to anything. If I want to finish part of a software project some evening despite being tired, because someone else depends on me, I have this part that tells me I should finish it because someone depends on me, and that becomes part of the ranking procedure. Previously, the threat that I'd self-punish if I didn't deliver was also part of the ranking procedure.
It's hard for me to tell whether my implicit model of a door knob is that it's hard to turn, rather than that I have trouble turning it. Maybe? When it comes to taste, “X tastes bad” has always (or as far as I can remember) seemed like a linguistic shortcut to me rather than a meaningful statement about the external world. Accepting moral antirealism, i.e. also seeing “X is morally bad” as a linguistic shortcut, is something I only became convinced of 10 years ago, when a friend of mine got me to consider it seriously for the first time.
If someone were to tell me, “You owe me!” and it's plausible, I think that would cause some kind of stir in me that a statement like, “You're toxic!” no longer causes. The first seems to still connect to something that is clearly not veridical but that I'm still reifying, whereas I don't seem to believe in the second anymore.
So idk, I seem to be in some kind of messy state where it's super hard for me to find this homunculus, and where I and my self are fairly distinct to me but not fully; they still blur into each other in some contexts.
Now I don't doubt that other people have a clear homunculus like that. I'm often puzzled by the reluctance some people display to accept that we might be in a simulation, or that copies of them would feel the same as they do, or that AIs can be conscious in the same sense they are. So I am a bit weird (though not by LW standards), but figuring out in exactly what ways my perception differs from the conventional one eludes me.
Oh, until 2013 I had a process that narrated all my decisions. The ranking and execution happened as always, but there was this separate process that observed the ranking and execution and, by trial and error, tried to construct narratives of why the execution was the one that it was. These narratives usually made some sense, but in extreme situations they were often clearly self-deceptive. I stopped doing that in late August 2013. Maybe that was a kind of homunculus illusion?
I'm almost done reading your sequence, and I looove it! Lots of awesome insights! Especially the application to BPD (and by extension other PDs) is very interesting to me!
I have the same problem. I can't find anything that might correspond to this homunculus, and I remember befuddlement when I first discovered that some people find it unintuitive that there could be multiple digital copies of themselves. But I'm not in a no-self state.
That's another interesting hypothesis!
In James Fallon's book (from 2013), it sounds like – though no one really knows – traits associated with psychopathy (like the so-called “warrior gene”) are likely epigenetic, which is what I mention above. So preventing wars should gradually get rid of it.
But people with all the genetic predispositions toward psychopathy can still grow up to become perfectly prosocial folks given a good-enough, peaceful, loving upbringing. I know some. No affective empathy, no guilt, etc., but they would be quite disappointed in themselves if they harmed someone, sort of like how we're disappointed if we don't study and fail an exam.
Then again I imagine that even in a perfectly peaceful society, psychopathy, neurologically/genetically, will continue to exist at some low rate.
Does this include all donors in the calculation or are there hidden donors?
Donors have a switch in their profiles where they can determine whether they want to be listed or not. The top three in the private, complete listing are Jaan Tallinn, Open Phil, and the late Future Fund, whose public grants I've imported. The total ranking lists 92 users.
But I don't think that's core to understanding the step down. I went through the projects around the threshold before I posted my last comment, and I think it's really the 90% cutoff that causes it, not a big donor who has donated to the first 22 but not to the rest.
There are plenty of projects in the tail that have also received donations from a single donor with a high score – but more or less only that, so that donor has > 90% influence over the project and will be ignored until more donors register donations to it.
Ok so the support score is influenced non-linearly by donor score.
By the inverse rank in the ranking that is sorted by the score. So the difference between the top donor and the 2nd top donor is 1 in terms of the influence they have.
It displays well for me!
TL;DR: Great question! I think it mostly means that we don't have enough data to say much about these projects. So donors who've made early donations to them can register those donations and boost the project score.
Taken together: our top donors have (by design) the greatest influence over project scores, but they are also at greater risk of ending up with > 90% influence over a given project score, especially if the project has so far not found many other donors who are ready to register their donations. So the contributions of top donors are also at greater risk of being ignored until more donors confirm the top donors' donation decisions.
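In case a sketch helps, here's roughly the mechanic in Python. This is a simplified illustration, not our actual implementation; the function names and the exact aggregation are made up, but it shows the inverse-rank influence and the 90% cutoff as described above.

```python
# Simplified illustration of the mechanic described above, not GiveWiki's
# actual code; names and the exact aggregation are made up.

def donor_influences(donor_scores):
    """Assign each donor an influence weight equal to their inverse rank in the
    ranking sorted by donor score (top donor gets the largest weight, and
    adjacent ranks differ by exactly 1)."""
    ranked = sorted(donor_scores, key=donor_scores.get, reverse=True)
    n = len(ranked)
    return {donor: n - i for i, donor in enumerate(ranked)}

def project_support(project_donors, influences, cutoff=0.9):
    """Sum the influence weights of a project's donors, but ignore any donor
    who would account for more than `cutoff` of the total until more donors
    register donations to the project."""
    weights = {d: influences[d] for d in project_donors if d in influences}
    total = sum(weights.values())
    if total == 0:
        return 0
    return sum(w for w in weights.values() if w / total <= cutoff)

influences = donor_influences({"alice": 120, "bob": 80, "carol": 30})
# A project backed only by the top donor scores 0 until a second donor registers.
print(project_support({"alice"}, influences))           # 0 (alice has 100% > 90%)
print(project_support({"alice", "carol"}, influences))   # 4 (3 + 1, both kept)
```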
"GiveWiki" as the authority for the picker, to me, implied that this was from a broader universe of giving, and this was the AI Safety subset.
Could be… That's not so wrong either. We rather artificially limited it to AI safety for the moment to have a smaller, more sharply defined target audience. It also had the advantage that we could recruit our evaluators from our own networks. But ideally I'd like to find owners for other cause areas too and then widen the focus of GiveWiki accordingly. The other cause area where I have a relevant network is animal rights, but we already have ACE there, so GiveWiki wouldn't add so much on the margin. One person is interested in potentially finding someone to take responsibility for a global coordination/peace-building branch, or taking it on themselves, but they probably won't have the time. That would be excellent though!
No biggie, but I'm sad there isn't more discussion about donations to AI safety research vs more prosaic suffering-reduction in the short term.
Indeed! Rethink Priorities has made some progress on that. I need to dig into the specifics more to see whether I need to update on it. The particular parameters they discuss in the article haven't been that relevant to my reasoning, but it's quite possible that animal rights wins out even more clearly on the basis of the parameters I've been using.
Gosh, yeah… Is that what's called the watcher? Can I even literally watch this process unfold, or is it by necessity what I'm doing now: looking at my memories and trying to collect and put on a timeline everything that I remember?
If D is the decision process, then perhaps during the process there can only be D whereas afterwards I can have S(D) and thus become aware of D? I feel like I'm always looking at an “echo” of the real thing when I'm writing comments like the above.