riceissa

I am Issa Rice. https://issarice.com/

riceissa's Comments

The Epistemology of AI risk

As should be clear, this process can, after a few iterations, produce a situation in which most of those who have engaged with the arguments for a claim beyond some depth believe in it.

This isn't clear to me, given the model in the post. If a claim is false and there are sufficiently many arguments for the claim, then it seems like everyone eventually ends up rejecting the claim, including those who have engaged most deeply with the arguments. The people who engage deeply "got lucky" by hearing the most persuasive arguments first, but eventually they also hear the weaker arguments and counterarguments to the claim, so they end up at a level of confidence where they don't feel they should bother investigating further. These people can even have more accurate beliefs than the people who dropped out early in the process, depending on the cutoff that is chosen.
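
To make this concrete, here is a toy simulation of the dynamic I have in mind. Every modeling choice in it (the argument strengths, additive updating in log-odds space, the single drop-out threshold) is my own assumption for illustration, not the exact model from the post:

```python
import random

# Toy simulation of the dynamic described above. Every modeling choice here
# (argument strengths, additive log-odds updating, the drop-out rule) is my
# own assumption for illustration, not the exact model from the post.

random.seed(0)

# A false claim: a few persuasive arguments in its favor, but the full pool
# of arguments and counterarguments is net evidence against it.
ARGUMENTS = [2.0, 1.5, 1.0] + [-0.5] * 20   # log-odds shifts
DROP_OUT = -1.0                             # stop investigating below this

def run_agent():
    """Hear the arguments in a random order, updating additively in log-odds
    space; stop investigating once credence falls below DROP_OUT.
    Returns (number of arguments heard, final log-odds)."""
    order = random.sample(ARGUMENTS, len(ARGUMENTS))
    log_odds, heard = 0.0, 0
    for shift in order:
        log_odds += shift
        heard += 1
        if log_odds < DROP_OUT:
            break
    return heard, log_odds

agents = [run_agent() for _ in range(10_000)]
deepest = sorted(agents)[-1000:]   # the 10% of agents who engaged longest

favoring = sum(log_odds > 0 for _, log_odds in agents)
print(f"agents whose final credence favors the claim: {favoring / len(agents):.1%}")
print(f"mean final log-odds among the deepest 10% of engagers: "
      f"{sum(lo for _, lo in deepest) / len(deepest):+.2f}")
```

Under these made-up numbers no agent ends up favoring the claim: even the agents who got lucky with persuasive arguments early and so engaged longest eventually hear enough of the weaker arguments and counterarguments to fall below the drop-out threshold.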

Moral public goods

If I didn't make a calculation error, the nobles in general recommend a tax of up to 100*max(0, 1 - (the factor by which peasants outnumber nobles)/(the factor by which each noble is richer than each peasant))%, which is equivalent to 100*max(0, 2 - 1/(the fraction of total wealth collectively owned by the nobles))%. With the numbers given in the post, this produces 100*max(0, 1 - 1000/10000)% = 90%. But for example with a billion times as many peasants as nobles, and each noble a billion times richer than each peasant, the nobles collectively recommend no tax. When I query my intuitions, though, these two situations don't feel different. I like the symmetry in "Each noble cares about as much about themselves as they do about all peasants put together", and I'm wondering if there's some way to preserve that while making the tax percentage match my intuitions better.
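
For reference, here is the algebra behind the claimed equivalence, written out in symbols of my own choosing (not from the post): let r be the factor by which peasants outnumber nobles, w the factor by which each noble is richer than each peasant, and f the fraction of total wealth owned by the nobles.

```latex
\begin{align*}
  f &= \frac{n w}{n w + r n} = \frac{w}{w + r}
    && \text{(nobles' share of wealth, with $n$ nobles and $rn$ peasants)} \\
  2 - \frac{1}{f} &= 2 - \frac{w + r}{w} = 1 - \frac{r}{w}, \\
  \text{tax} &= 100 \cdot \max\!\left(0,\ 1 - \frac{r}{w}\right)\%
             = 100 \cdot \max\!\left(0,\ 2 - \frac{1}{f}\right)\%.
\end{align*}
```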

The Alignment-Competence Trade-Off, Part 1: Coalition Size and Signaling Costs

I find it interesting to compare this post to Robin Hanson's "Who Likes Simple Rules?". In your post, when people's interests don't align, they have to switch to a simple/clear mechanism to demonstrate alignment. In Robin Hanson's post, people's interests "secretly align", and it is the simple/clear mechanism that isn't aligned, so people switch to subtle/complicated mechanisms to preserve alignment. Overall I feel pretty confused about when I should expect norms/rules to remain complicated or become simpler as groups scale.

I am a little confused about the large group sizes for some of your examples. For example, the vegan one doesn't seem to depend on a large group size: even among one's close friends or family, one might not want to bother explaining all the edge cases for when one will eat meat.

Open & Welcome Thread - January 2020

I noticed that the parliamentary model of moral uncertainty can be framed as trying to import a "group rationality" mechanism into the "individual rationality" setting, to deal with subagents/subprocesses that appear in the individual setting. But usually when the individual rationality vs group rationality topic is brought up, it is to talk about how group rationality is much harder/less understood than individual rationality (here are two examples of what I mean). I can't quite explain it, but I find it interesting/counter-intuitive/paradoxical that given this general background, there is a reversal here, where a solution in the group rationality setting is being imported to the individual rationality setting. (I think this might be related to why I've never found the parliamentary model quite convincing, but I'm not sure.)

Has anyone thought about this, or more generally about transferring mechanisms between the two settings?

Judgment Day: Insights from 'Judgment in Managerial Decision Making'

I'm curious how well you are doing in terms of retaining all the math you have learned. Can you still prove all or most of the theorems in the books you worked through, or do all or most of the exercises in them? How much of it still feels fresh in your mind versus something much vaguer that you can only recall in broad strokes? Do you have a reviewing system in place, and if so, what does it look like?

Open & Welcome Thread - December 2019

Comments like this one and this one come to mind, but I have no idea if those are what you're thinking of. If you could say more about what you mean by "updating/changing after the week", what the point he was trying to make was, and more of the context (e.g. was it about academia? or an abstract decision in some problem in decision theory?), then I might be able to locate it.

We run the Center for Applied Rationality, AMA

I had already seen all of those quotes/links, all of the quotes/links that Rob Bensinger posts in the sibling comment, as well as this tweet from Eliezer. I asked my question because those public quotes don't sound like the private information I referred to in my question, and I wanted insight into the discrepancy.

We run the Center for Applied Rationality, AMA

I have seen/heard from at least two sources something to the effect that MIRI/CFAR leadership (and Anna in particular) has very short AI timelines and a high probability of doom (and apparently holds these beliefs with high confidence). Here is the only public example that I can recall seeing. (Of the two examples I can specifically recall, this is not the better one, but the other was not posted publicly.) Is there any truth to these claims?

We run the Center for Applied Rationality, AMA

What are your thoughts on Duncan Sabien's Facebook post which predicts significant differences in CFAR's direction now that he is no longer working for CFAR?

We run the Center for Applied Rationality, AMA

Back in April, Oliver Habryka wrote:

Anna Salamon has reduced her involvement in the last few years and seems significantly less involved with the broader strategic direction of CFAR (though she is still involved in some of the day-to-day operations, curriculum development, and more recent CFAR programmer workshops). [Note: After talking to Anna about this, I am now less certain of whether this actually applies and am currently confused on this point]

Could someone clarify the situation? (Possible sub-questions: Why did Oliver get this impression? Why was he confused even after talking to Anna? To what extent and in what ways has Anna reduced her involvement in CFAR in the last few years? If Anna has reduced her involvement in CFAR, what is she spending her time on instead?)
