I've been thinking a lot about exploiting the capitalistic tendencies of most social systems to solve difficult social issues, on a theoretical level (not very well, unfortunately).
I think most ideas prescribe more, less, or the same amount of something. Instead of fighting the capitalistic tendency to produce more and more by calling for less and less, we should push for more and more, collapse the existing system (and future systems like it), and force a better, more equitable system to form. How to push without plainly causing harm/destroying the entire system/in a realisti... (Read more)
I live in Iran, and here people strongly believe in Avicenna’s humorism (or what popular culture takes it to be, anyway). It is believed at the level of “common sense.” For example, if you eat fish, milk, broccoli, and tomato sauce, all of which are “cold,” you’re supposed to balance that out by eating walnuts and dates. My personal impression is that there is probably some truth to this simplistic model of nutrition, since I see a lot of anecdotal evidence for it, but I’d like to see what the science says on the subject.
Note that the humorism believed in here (Iran) is not a strawma... (Read more)
Artificial intelligence defeated a pair of professional StarCraft II players for the first time in December 2018. Although this was generally regarded as an impressive achievement, it quickly became clear that not everybody was satisfied with how the AI agent, called AlphaStar, interacted with the game, or with how its creator, DeepMind, presented it. Many observers complained that, in spite of DeepMind’s claims that it performed at speeds similar to humans, AlphaStar was able to control the game with greater speed and accuracy than any human, and that this was why it prevailed.
Although... (Read more)
A false dilemma is of the form “It’s either this, or that. Pick one!” It tries to make you choose from a limited set of options, when, in reality, more options are available. With that in mind, what’s wrong with the following examples?
Ex. 1: You either love the guy or hate him
Counterargument 1: “Only a Sith deals in absolutes!”
Counterargument 2: I can feel neutral towards the guy
Ex. 2: You can only ad... (Read more)
In the last post, we discussed a common problem in arguments that Prove Too Much. In this post, we’ll generalize that problem to help determine useful categories. But before we go on, what’s wrong with these arguments?
Ex. 1 [Stolen from slatestarcodex]
“A few months ago, a friend confessed that she had abused her boyfriend. I was shocked, because this friend is one of the kindest and gentle... (Read more)
When I'm learning a new skill, there's a technique I often use to quickly pick up the basics without getting drowned in the plethora of resources that exist. I've found that just 3 resources that cover the skill from 3 separate viewpoints (along with either daily practice or a project) are enough to quickly get all the pieces I need to learn the new skill.
I'm partial to books, so I've called this The 3 Books Technique, but feel free to substitute books for courses, mentors, or videos as needed.
The "What" book is used as r... (Read more)
The "paradox of tolerance" is a continually hot topic, but I've not seen it framed as a member in a category of fallacies where a principle is conceptualized as either absolute or hypocritical and the absolute conception then rejected as self-contradictory or incoherent. Other examples of commonly absolutized principles are pacifism, pluralism, humility, openness, specific kinds of freedoms, etc.
I've been provisionally calling it the 'false self-contradiction fallacy', meaning a specialized case of black-and-white fallacy as applied to ethical, moral or practical ... (Read more)
Epistemic status: Not a historian of science, but I have thought fairly extensively about meetups. Kind of making this up as I go along, almost certainly missing important points.
Other meta: Written all in one sitting, to not let the perfect be the enemy of the good. No one proofread it, so hopefully there aren't sentences that just cut off in the middle. Also, forgive my excessive use of scare quotes.
tl;dr: The difference between historical salons and LW meetups is that meetups do not feel like the place where progress is made. They’re not doing research or publishing anything. I... (Read more)
Growing up as an aspiring javelin thrower in Kenya, the young Julius Yego was unable to find a coach: in a country where runners command the most prestige, mentorship was practically nonexistent. Determined to succeed, he instead watched YouTube recordings of Norwegian Olympic javelin thrower Andreas Thorkildsen, taking detailed notes and attempting to imitate the fine details of his movements. Yego went on to win gold at the 2015 World Championships in Beijing and silver at the 2016 Rio de Janeiro Olympics, and he holds the third-longest javelin throw on record. He acquired a coach only six mon... (Read more)
Format warning: This post has somehow ended up consisting primarily of substantive endnotes. It should be fine to read just the (short) main body without looking at any of the endnotes, though. The endnotes elaborate on various claims and distinctions and also include a much longer discussion of decision theory.
Thank you to Pablo Stafforini, Phil Trammell, Johannes Treutlein, and Max Daniel for comments on an initial draft.
When discussing normative questions, many members of the rationalist community identify as anti-realists. But normative anti-realism seems to me to be in tension with some o... (Read more)
Suppose that 1% of the world’s resources are controlled by unaligned AI, and 99% of the world’s resources are controlled by humans. We might hope that at least 99% of the universe’s resources end up being used for stuff-humans-like (in expectation).
Jessica Taylor argued for this conclusion in Strategies for Coalitions in Unit-Sum Games: if the humans divide into 99 groups, each of which acquires influence as effectively as the unaligned AI, then by symmetry each group should end up with as much influence as the AI, i.e. the 99 groups together should end up with 99% of the influence.
This argument rests on what I... (Read more)
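The symmetry argument above can be sketched numerically. This is a minimal illustrative model (mine, not from the post), assuming final influence in the unit-sum game is simply proportional to each player's effectiveness at acquiring it:

```python
# Illustrative sketch of the symmetry argument in a unit-sum influence game.
# Assumption (mine): each player's final share of influence is proportional
# to how effectively they acquire it.

def final_shares(effectiveness):
    """Each player's share of the unit-sum influence pie."""
    total = sum(effectiveness)
    return [e / total for e in effectiveness]

# One unaligned AI plus 99 human groups, each exactly as effective as the AI.
players = [1.0] * 100          # index 0 = the AI, indices 1..99 = human groups
shares = final_shares(players)

ai_share = shares[0]           # 1/100, by symmetry
human_share = sum(shares[1:])  # ~0.99: the 99 groups together hold 99%
print(ai_share)
print(human_share)
```

Under these assumptions the AI's share is exactly 1/100 and the human groups collectively hold the remaining 99%, matching the conclusion the post attributes to the symmetry argument.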
Do people think we could make a singleton (or achieve global coordination and preventative policing) just by imitating human policies on computers? If so, this seems pretty safe to me.
Some reasons for optimism: 1) these could be run much faster than a human thinks, and 2) we could make very many of them.
Acquiring data: put a group of people in a house with a computer. Show them things (images, videos, audio files, etc.) and give them a chance to respond at the keyboard. Their keyboard actions are the actions, and everything between actions is an observation. Then learn the policy of the group ... (Read more)
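The data-acquisition setup described above amounts to recording (observation, action) trajectories for imitation learning. A minimal sketch of one way that data might be structured (field names and types are my own assumptions, not the author's):

```python
# Hypothetical sketch: storing keyboard actions and the observations between
# them as trajectories, suitable for later imitation ("behavioral cloning").
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    observation: bytes   # everything shown between actions (images, video, audio, ...)
    action: str          # the group's keyboard input at this step

@dataclass
class Trajectory:
    steps: List[Step] = field(default_factory=list)

    def record(self, observation: bytes, action: str) -> None:
        self.steps.append(Step(observation, action))

# Example session: the group is shown things and responds at the keyboard.
traj = Trajectory()
traj.record(b"<image bytes>", "yes")
traj.record(b"<video bytes>", "option 3")
print(len(traj.steps))  # 2
```

A learned policy would then map observation histories to the next keyboard action; the sketch only covers the logging side.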
To do effective differential technological development for AI safety, we'd like to know which combinations of AI insights are more likely to lead to FAI vs UFAI. This is an overarching strategic consideration which feeds into questions like how to think about the value of AI capabilities research.
As far as I can tell, there are actually several different stories for how we may end up with a set of AI insights which makes UFAI more likely than FAI, and these stories aren't entirely compatible with one another.
Note: In this document, when I say "FAI", I mean any superintelligent system which do... (Read more)
- Ask for your drink without a straw.
- Unplug your microwave when not in use.
- Bring a water bottle to events.
- Stop using air conditioning.
- Choose products that minimize packaging.
I've recently heard people advocate for all of these, generally in the form of "here are small things you can be doing to help the planet." In the EA Facebook group someone asked why we haven't tried to make estimates so we can prioritize among these. Is it more important to reuse containers, or to buy locally made soap?
I think the main reason we haven't put a lot of work into quantifying the impacts of t... (Read more)
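The kind of prioritization estimate the question asks for could be structured like this. Every number below is a placeholder chosen only to show the shape of the comparison, not a real impact figure:

```python
# Sketch of a Fermi-style comparison of small environmental habits.
# ALL NUMBERS ARE PLACEHOLDERS -- the point is the structure (impact per
# occasion x occasions per year), not the values.

habits = {
    "skip the straw":        {"kg_co2e_per_use": 0.001, "uses_per_year": 100},
    "unplug idle microwave": {"kg_co2e_per_use": 0.01,  "uses_per_year": 365},
    "skip air conditioning": {"kg_co2e_per_use": 5.0,   "uses_per_year": 90},
}

def annual_impact(habit):
    return habit["kg_co2e_per_use"] * habit["uses_per_year"]

# Rank habits by estimated annual impact, largest first.
for name, habit in sorted(habits.items(), key=lambda kv: -annual_impact(kv[1])):
    print(f"{name}: ~{annual_impact(habit):.1f} kg CO2e / year")
```

Even with crude inputs, a table like this makes it obvious when two habits differ by orders of magnitude, which is usually all a prioritization decision needs.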
It is a relatively intuitive thought that if a Bayesian agent is uncertain about its utility function, it will act more conservatively until it has a better handle on what its true utility function is.
This might be deeply flawed in a way that I'm not aware of, but I'm going to point out a way in which I think this intuition is slightly flawed. For a Bayesian agent, a natural measure of uncertainty is the entropy of its distribution over utility functions (the distribution over which possible utility function it thinks is the true one). No matter how uncertain a Bayesian agent is abou... (Read more)
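One way to see why entropy alone needn't induce caution: a hypothetical example (mine, not the author's) where the agent has maximal entropy over two candidate utility functions, yet both candidates agree on the best action, so the agent acts just as decisively as if it were certain:

```python
# Hypothetical illustration: maximal uncertainty (by entropy) over utility
# functions without any resulting conservatism.
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two candidate utility functions over actions {a, b}; both rank "a" best.
candidates = {
    "u1": {"a": 10.0, "b": 0.0},
    "u2": {"a": 9.0,  "b": 1.0},
}
belief = {"u1": 0.5, "u2": 0.5}   # maximum-entropy belief over the candidates

print(entropy(belief.values()))   # 1.0 bit: the agent is maximally uncertain

# Expected utility of each action under the belief.
expected = {act: sum(belief[u] * candidates[u][act] for u in candidates)
            for act in ("a", "b")}
best = max(expected, key=expected.get)
print(best)  # "a": entropy over utility functions didn't make the agent cautious
```

The entropy measures uncertainty over *which* utility function is true, not over *what to do*; when the candidates agree about actions, that uncertainty has no behavioral consequence.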
Where I write up some small ideas I've been having that may eventually become their own top-level posts. I'll start by populating it with a few ideas I've posted as Twitter/Facebook thoughts.