[Cross-posted to the EA Forum] TL;DR: I propose a form of conditional supplemental income (CSI) for humans once AIs can do basically all work. This would be on top of a universal basic income (UBI), and people would earn the CSI by doing things that help improve their overall well-being....
[Crossposted to the EA Forum here.] TL;DR: I present initial work towards creating a “conscience calculator” that could be used to guard-rail an AGI to make decisions in pursuing its goal(s) as if it had a human-like conscience. A list of possible conscience breaches is presented with two lexical levels,...
Lamini recently posted a paper explaining their “memory tuning” methodology of using a Mixture of Memory Experts to significantly reduce LLM hallucinations over a limited domain of knowledge. They describe using this technique with a Mistral 2 open-source LLM to achieve 95% accuracy on a text-to-SQL query task for...
[Note: this is a slightly edited version of an essay I entered into the AI Impacts essay contest on the Automation of Wisdom and Philosophy - entries due July 14, 2024. Crossposted to the EA Forum.] TL;DR A few possible scenarios are put forth to explore the likely ranges of...
TL;DR: This is an update on my progress towards creating an “ethics calculator” that could be used to help align an AGI to act ethically. In its first iteration, the calculator uses a utilitarian framework with “utility” measured in terms of value as net “positive” experiences, with the value of...
(If you work for a company that’s trying to develop AGI, I suggest you not publicly answer this question, lest the media get ahold of it.) (Let’s assume you’ve “aligned” this AGI and done significant sandbox testing before you let it loose with its first task(s). If you’d like to...