riceissa

I am Issa Rice. https://issarice.com/

Comments

Bucket Errors

You have my permission!

Bucket Errors

I see, thank you for the response!

Bucket Errors

I am curious what you think of my old comment here that I made on Anna's post (some related discussion here).

How to get people to produce more great exposition? Some strategies and their assumptions

For me, the thing that distinguishes exposition from teaching is that in exposition one is supposed to produce some artifact that does all the work of explaining something, whereas in teaching one is allowed to jump in and e.g. answer questions or "correct course" based on student confusion. This ability to "use a knowledgeable human" in the course of explanation makes teaching a significantly easier problem (though still a very interesting one!). It also means, though, that scaling teaching would require scaling the creation of knowledgeable people, which is the very thing we are trying to solve. Can we make use of just one knowledgeable human, and somehow produce an artifact that can scalably "copy" this knowledge to other humans? -- that's the exposition problem. (This framing is basically Bloom's 2 sigma problem.)

How to get people to produce more great exposition? Some strategies and their assumptions

That's very exciting to me! I personally study how science has worked and failed historically, and epistemic progress and vigilance in general, in order to make alignment go faster and better, so I'll be interested to discuss exposition as a science with you (and maybe give feedback on your follow-up posts if you want. ;) )

Cool! I just shared my draft post with you that goes into detail about the "exposition as science" strategy; if that post seems interesting to you, I'd be happy to discuss more with you (or you can just leave comments on the post if that is easier). 

AGI Ruin: A List of Lethalities

Doesn't do what? I understand Eliezer to be saying that he figured out AI risk via thinking things through himself (e.g., writing a story that involved outcome pumps; reflecting on orthogonality and instrumental convergence; etc.), rather than being argued into it by someone else who was worried about AI risk. If Eliezer didn't do that, there would still presumably be someone prior to him who did that, since conclusions and ideas have to enter the world somehow. So I'm not understanding what you're modeling as ridiculous.

My understanding of the history is that Eliezer did not realize the importance of alignment at first, and that he only did so later after arguing about it online with people like Nick Bostrom. See e.g. this thread. I don't know enough of the history here, but it also seems logically possible that Bostrom could have, say, only realized the importance of alignment after conversing with other people who also didn't realize the importance of alignment. In that case, there might be a "bubble" of humans who together satisfy the null string criterion, but no single human who does.

The null string criterion does seem a bit silly nowadays, since I think the people who would have satisfied it would instead have read about AI risk earlier, on e.g. LessWrong. So they wouldn't even get the chance to live to age ~21 and see whether they spontaneously invent the ideas.

AMA: I was unschooled for most of my childhood, then voluntarily chose to leave to go to a large public high school.

Would you say you are traumatized/did unschooling traumatize you/did attending the public high school and college traumatize you?

AMA: I was unschooled for most of my childhood, then voluntarily chose to leave to go to a large public high school.

Do you have a sense of where your anxiety/distractability/"minor mental health problems" came from?

AMA: I was unschooled for most of my childhood, then voluntarily chose to leave to go to a large public high school.

What was the chain of events leading up to you discovering LessWrong/the rationality community?
