There's already a well-written history of part of EVE Online (I've only read the first one): https://www.amazon.com/dp/B0962ZVWPG
Metaphilosophy
I appreciate you sharing many of the same philosophical interests as me (and giving them a signal boost here), but for the sake of clarity / good terminology, I think all the topics you list under this section actually belong to object-level philosophy, not metaphilosophy.
I happen to think metaphilosophy is also extremely interesting/important, and you can see my latest thoughts on it at Some Thoughts on Metaphilosophy (which also links to earlier posts on the topic) if you're interested.
Surely there are more prediction markets you'd want to serve as a liquidity provider on. Like, markets on longevity approaches, on intelligence augmentation, on nuclear fusion, on Alzheimer's cures, on the effects of gene drives to eliminate malaria, etc.
Fair enough. It just felt like this list didn't contain the most impactful interventions, even accounting for constraints. I'm confused about what you're optimizing for, so I suppose it is eccentric. Also, what's up with "$mio" and "$bio" instead of "$mil" and "$bil"?
ohmygodthatlojbanbabyissocute! —but anyway, I don't think you need to be raised speaking a new language for a good one to have a large effect on your ability to think.
I find it weird that people call it the "Sapir-Whorf hypothesis", as if there's an alternative way people can robustly learn to think better. Engineering a language isn't really about the language; it's about trying to rewrite the way we think. LessWrong, like some academic disciplines, has had decent success with this on the margin, I'd say—and the phrase "on the margin" is itself a good example of a recent innovation that's marginally helped us think better.
There seems to be a trend that breakthrough innovations often arise from somebody trying to deeply understand and reframe the simplest & most general constituents of whatever field they're working in. At least it fits with my own experience and the experience of others I've read. I think it's fairly common advice in math research especially.
The reason I'm enthusiastic about the idea of creating a conlang is that all natural languages have built up a large amount of dependency debt, which makes it very difficult to adapt them to fit whatever specialised purposes we try to use them for. Just like with large code projects, it gets increasingly expensive to refactor the base when it needs to be adapted to e.g. serve novel purposes.[1]
For language, you also face the problem that even if you've correctly identified a Pareto improvement in theory, you can't just tell people and expect them to switch to your system. Unless they do it at the same time (atomic commit), there's always going to be a cost (confusion, misunderstanding, embarrassment, etc.) associated with trying to push for the change. And people won't be willing to try unless they expect that other people expect it to work.
Those are some of the reasons I expect natural languages to be very suboptimal relative to what's possible, and just from this I would expect them to be easy to improve upon for people who've studied cognition to the extent that LessWrongers have—iff those changes could be coordinated on. For that, we first need a proof of concept. It's not that it's futile or pointless—just that nobody's tried. Lojban doesn't count, and while Ithkuil is probably the closest, it doesn't have the right aims. You'd really be willing to spend only ~$40M on it?
Let's say you're trying to rewrite a very basic function that was there from the beginning, but you notice that 40 other functions depend on it. The worst-case complexity of trying to refactor it isn't limited to those 40 functions: even if you only have to adapt 10 of them to fit your new ontology, those might have further dependencies you have to control for. When the dependencies are obscure, it can get whac-a-mole-y: for each change you consider, you have to search a branching graph of dependencies to check for new problems.
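To make that concrete, here is a toy sketch (the function names and the graph are invented for illustration, not taken from any real codebase) of how checking a single changed function fans out into a search over the reverse-dependency graph:

```python
from collections import deque

# Toy reverse-dependency graph: maps a function to the functions that call it.
# Purely illustrative; the names are made up for this sketch.
dependents = {
    "parse_date": ["format_log", "schedule_job"],
    "format_log": ["write_report"],
    "schedule_job": ["run_pipeline"],
}

def functions_to_review(changed: str) -> set[str]:
    """Breadth-first search: every caller reachable from the changed
    function may need to be checked, and may pull in further callers."""
    seen = set()
    queue = deque([changed])
    while queue:
        current = queue.popleft()
        for caller in dependents.get(current, []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

print(functions_to_review("parse_date"))
# e.g. {'format_log', 'schedule_job', 'write_report', 'run_pipeline'} (set order varies)
```

In the worst case the whole graph has to be walked, which is what makes the process whac-a-mole-y when the dependencies are obscure.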
Language is just worse because A) you have to coordinate a change with many more people, and B) very few words have "internal definitions" that make it easy to predict the consequences of intervening on them. Words usually have magnetic semantics/semiotics, where if you try to shift the meaning of one word, the meaning of other words will often 1) move in to fill the gap, 2) be dragged along by association, 3) be displaced, or 4) be pushed outward by negative association.
If your plan for being a trillionaire unconditionally is "maximize EA-style utility to others", then your plan for being a trillionaire, conditional on not having EA as a primary goal, should be "maximize EA-style utility to the extent that the conditions permit it". Since you are allowed to do things that incidentally help others, you should maximize the incidental benefit your choices provide to others.
If the conditions require that you do things that benefit yourself or that you would find amusing, you should go down the list of things that benefit yourself or that you would find amusing and choose the ones with the greatest incidental benefit to others. So snowball fights should be right out.
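As a toy formalization of that rule (the options and the numbers below are invented purely for illustration): filter to what the conditions permit, then take whatever maximizes incidental benefit to others.

```python
# Each option: (name, permitted_by_the_conditions, incidental_benefit_to_others)
# All names and numbers are made up for illustration.
options = [
    ("snowball fight with 2,000 people", True, 0.1),
    ("fund a conlang child-raising experiment", True, 3.0),
    ("donate directly to an EA org", False, 10.0),  # ruled out by the conditions
]

permitted = [opt for opt in options if opt[1]]
best = max(permitted, key=lambda opt: opt[2])
print(best[0])  # fund a conlang child-raising experiment
```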
Disclaimer: I am not an EA, I am just taking the reasoning to its logical conclusion and don't endorse it.
cross-posted from niplav.site
I sometimes like to engage in idle speculation. One of those speculations is: "If someone came up to me and told me that they would give me a lot of money, but only under the condition that I would spend most of it on unconventional and interesting projects, and that I would be forbidden from giving it to Effective Altruist organizations narrowly defined, what would I do? The projects wouldn't be disallowed from accidentally having positive consequences, of course."
The following is a result of this speculation. Many of the ideas might be of questionable morality; I hope it's clear I would think a bit more about them if I were to actually put them into practice (which I won't, since I don't have that type of money, nor am I likely to get hold of it myself anytime soon).
Lots of these ideas aren't mine, and I have tried to attribute them wherever I could find the source. I guess that if they were implemented (not sure whether that's possible: legality & all that), I'd very likely become very unpopular in polite society. But the resulting discourse would absolutely be worth it.
Culture
Language
We know that people can use a constructed language as their native tongue, as there are >1k native Esperanto speakers in the world. But I do not know of any examples of raising a child primarily on a language engineered to exceed the bounds of natural language, the closest being this video. So it would be interesting to pay some new parents (ideally both already speakers of the engineered language) to raise a child with that language. The difficulty of achieving this depends on how hard the target language is to learn, and how many speakers there are: Toki Pona should be easiest (allegedly has ~100 speakers), followed by Lojban (hard to learn, has ~15 speakers) and Láadan (perhaps easier to learn, but less developed, and there are negligibly many speakers (and therefore likely none willing to raise a child)); Kēlen would be quite difficult (since there are probably no fluent speakers, and speakers would need to be trained), and Ithkuil is probably impossible, as even the creator can't speak it fluently.

I don't know what price parents would put on raising one of their children primarily in the constructed language, which might in the highest case be several hundred thousand dollars per year: if we have two children in different families per language, and pick Toki Pona, Láadan, Lojban and Kēlen, at $200k per family per year until the child is 18 years old, we pay $200,000 · 2 · 4 · 18 = $28.8 mio. We know that children can be bilingual, so the danger of inability to communicate can basically be excluded—and since money is not a huge issue, one could offer a ~$10 mio. insurance against worst-case outcomes. If we assume that worst-case outcomes are possible but unlikely (5%), we pay (in expectation) 4 · 0.05 · $10 mio. = $2 mio., for a total of $30.8 mio.
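For transparency, the arithmetic above can be reproduced as follows (a minimal sketch; the per-family payment, the 5% worst-case probability, and the $10 mio. insurance payout are the figures assumed in the text):

```python
# Reproduces the cost estimate from the paragraph above.
payment_per_family_per_year = 200_000  # dollars
families_per_language = 2              # two children, in different families
languages = 4                          # Toki Pona, Láadan, Lojban, Kēlen
years = 18

base_cost = payment_per_family_per_year * families_per_language * languages * years
# 200_000 * 2 * 4 * 18 = 28_800_000

insurance_payout = 10_000_000  # worst-case insurance, as assumed in the text
p_worst_case = 0.05            # assumed probability of a worst-case outcome
expected_insurance = languages * p_worst_case * insurance_payout
# 4 * 0.05 * 10_000_000 = 2_000_000

total = base_cost + expected_insurance
print(f"${total / 1_000_000:.1f} mio.")  # $30.8 mio.
```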
Art
Science
Metascience
Other
It's not clear that Nauru is the best choice here. While it probably is the smallest nation state that can conceivably be bought (I don't think there is any realistic (or unrealistic) amount of money for which Vatican City could be acquired), it is not very fertile, and has only limited freshwater reserves, relying mostly on rainwater. The highest point is only 71 metres above sea level, which means that a large part of the island might be at risk of going under water with rising sea levels. ↩︎
"Yes, I want housing costs to be AS HIGH AS POSSIBLE! MWAHAHAHAHAH!" ↩︎
Rest in peace. ↩︎
Indeed, there is some evidence that Auckland Island was briefly settled by Polynesians 600-700 years ago. ↩︎
Maybe I'm lacking in imagination, but this implies that Polynesians can survive for weeks on the open ocean, can reliably find their way back home if need be, and are adventurous enough to just sail out onto the open ocean in the hopes of finding new islands. This seems extremely crazy to me. ↩︎
Another method of finding and moving to Antarctica would be from Tierra del Fuego to Siffrey Point, which is much closer (~1,030 km). I'm not sure whether this is more or less likely: the Yahgan people have lived in Tierra del Fuego for ~8k years, which would give far more time for extensive exploration, and Prime Head is likely warmer and more hospitable than the rest of Antarctica, but I believe that the Polynesians were much better at spending long durations of time at sea, and at finding far-away land from subtle cues. ↩︎
Since the experiment would solely involve prostitution, my best guess (80%) is that it would be significantly more difficult to find a similar number of female participants. ↩︎
I'd like to hear feedback on what people believe the right amounts of money would be for indifference between membership in the two groups + participation. ↩︎