There's the epistemic discount rate (e.g. the probability per year that the simulation gets shut down) and the value discount rate (e.g. you do the most fun things first, so each year of life is worth less as you get older).
Asking "What value discount rate should be applied?" is a category error: "should" statements are about actions taken in service of values, not about the values themselves.
As for "What epistemic discount rate should be applied?", that depends on things like the probability of death/extinction per year.
Why do we have offices?
They seem expensive, and not useful for jobs that can apparently be done remotely.
status: to integrate
epistemic status: a thought I just had
EtA: for those that are not familiar with the concept of moral trade, check out: https://concepts.effectivealtruism.org/concepts/moral-trade/
Epistemic status: thinking out loud
The term "weirdness points" puts a different framing on the topic.
I'm thinking maybe I/we should also do this for "recommendation points".
The amount I'm willing to bet depends both on how important it seems to me and how likely I think the other person will appreciate it.
The way I currently try to spend my recommendation points is pretty fat-tailed, because I see someone's attention as scarce, so I want to keep it for things I think are really important, and the importance I assign to information is pretty fat-tailed. I'll som...
current intuitions for personal longevity interventions, in order of priority (cryo first for older people):
- sleep well
- lifelogging
- research mind-readers
- investing to buy therapies in the future
- staying home
- sign up for cryo
- paying a cook / maintaining a low weight
- hiring Wei Dai to research modal immortality
- paying a trainer
- preserving stem cells
- moving near a cryo facility
- having someone watch you to detect when you die
- funding plastination research
EtA: maybe lucid dreaming to remember your dreams; some drugs (bacopa?) to improve memory retention
also not really important in the long run, but sleeping less to experience more
Suggestion for retroactive prizes: award the prize to the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved (given money is probably not worth much to most dead people). "Undervalued" meaning the amount the post is worth minus the amount the writer received.
Topic: Can we compute back the Universe to revive everyone?
Quality / epistemic status: I'm just noting this here for now. Language might be a bit obscure, and I don't have a super robust/formal understanding of this. Extremely speculative.
This is a reply to: https://www.reddit.com/r/slatestarcodex/comments/itdggr/is_there_a_positive_counterpart_to_red_pill/g5g3y3a/?utm_source=reddit&utm_medium=web2x&context=3
The map can't be larger than the territory. So you need a larger territory to scan your region of interest: a scanner can't scan itself...
Topic: AI adoption dynamic
So an AI currently seems more expensive to train, but less expensive to use (as is probably obvious to most of you).
Of course, trained humans are better than GPT-3. And this comparison has other limitations. But I still find it interesting.
generalising from what a friend proposed to me: don't aim at being motivated to do [desirable habit], aim at being addicted to (/obsessed with) doing [desirable habit] (i.e. having difficulty not doing it). I like this framing; relying on always being motivated feels harder to me
(I like that advice, but it probably doesn't work for everyone)
I can pretty much only think of good reasons for having generally pro-entrapment laws. Not every kind of trap, but some kinds of traps seem robustly good. E.g. I'd set traps for situations that are likely to happen in real life, and that show unambiguous criminal intent.
It seems like a cheap and effective way to deter crime and identify people at risk of criminal behavior.
I've only thought about this for a bit though, so maybe I'm missing something.
x-post with Facebook: https://www.facebook.com/mati.roy.09/posts/10158763751484579
Philosophical zombies are creatures that are exactly like us, down to the atomic level, except they aren't conscious.
Complete philosophical zombies go further. They too are exactly like us, down to the atomic level, and aren't conscious. But they are also purple spheres (except we see them as if they weren't), they want to maximize paperclips (although they act and think as if they didn't), and they are very intelligent (except they act and think as if they weren't).
I'm just saying this because I find it funny ^^. I think consciousness is harder (for us) to reduce than shapes, preferences, and intelligence.
topic: lifelogging as life extension
which formats should we preserve our files in?
I think it should be:
- open source and popular (to increase chances it's still accessible in the future)
- resistant to data degradation: https://en.wikipedia.org/wiki/Data_degradation (thanks to Matthew Barnett for bringing this to my attention)
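One cheap guard against silent data degradation, whatever format you pick, is keeping a checksum manifest next to the archive and re-verifying it periodically. A minimal sketch (assuming a simple SHA-256 manifest file; the function names are just illustrative, not any particular archival tool):

```python
import hashlib
from pathlib import Path

def file_digest(path: Path, algo: str = "sha256") -> str:
    """Hash a file in chunks so large archives don't need to fit in RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(root: Path, manifest: Path) -> None:
    """Record a digest for every file under `root`."""
    lines = [f"{file_digest(p)}  {p.relative_to(root)}"
             for p in sorted(root.rglob("*")) if p.is_file()]
    manifest.write_text("\n".join(lines) + "\n")

def verify_manifest(root: Path, manifest: Path) -> list:
    """Return the files whose current digest no longer matches the manifest."""
    corrupted = []
    for line in manifest.read_text().splitlines():
        digest, rel = line.split("  ", 1)
        if file_digest(root / rel) != digest:
            corrupted.append(rel)
    return corrupted
```

Checksums only detect bit rot; to actually repair it you'd also want redundant copies or parity data (e.g. something like PAR2).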
epistemic status: idea
Backup Day. Day where you commit all your data to blu-rays in a secure location.
When could that be?
Perihelion is at the beginning of the year. But maybe it would be better to have it on a day that commemorates some event relevant to us.
I feel like I have slack. I don't need to work much to be able to eat; if I don't work for a day, nothing viscerally bad happens in my immediate surroundings. This allows me to think longer term and take on altruistic projects. But on the other hand, I feel like every movement counts, like there's no looseness in the system. Every wasted move is costly. A recurrent thought I've had in the past weeks is: there's no slack in the system.
Today is Schelling Day. You know where and when to meet for the hangout!
imagine having a physical window that allowed you to look directly in the past (but people in the past wouldn't see you / the window). that would be amazing, right? well, that's what videos are. with the window it feels like it's happening now, whereas with videos it feels like it's happening in the past, but it's the same
tattoo idea: I won't die in this body
in Toki Pona: ale pini mi li ala insa e sijelo ni
direct translation: life's end (that is) mine (will) not (be) inside body this
EtA: actually I got the Toki Pona wrong; see: https://www.reddit.com/r/tokipona/comments/iyv2r2/correction_thread_can_your_sentences_reviewed_by/
When you're sufficiently curious, everything feels like a rabbit hole.
Challenge me by saying a very banal statement ^_^
People say we can't bet on the apocalypse. But what about taking on debt? The person who thinks the probability of apocalypse is higher would accept a higher interest rate on their debt, since by the time repayment comes due there might be no one to whom the money is worth anything, or the money itself might not be worth much.
I guess there are also reasons to want more money during a global catastrophe, and there are also reasons to not want to keep money for great futures (see: https://matiroy.com/writings/Consume-now-or-later.html), so that wouldn't actually work.
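As a toy sketch of the mechanism (assuming a risk-neutral lender and that the loan is repaid only in the non-apocalypse worlds; this ignores the caveats above):

```python
def breakeven_rate(p_apocalypse: float, risk_free_rate: float) -> float:
    """Interest rate at which a risk-neutral lender is indifferent:
    repayment happens with probability (1 - p_apocalypse), and the
    expected return must match the risk-free rate, so
    (1 - p) * (1 + r) = 1 + r_f  =>  r = (1 + r_f) / (1 - p) - 1."""
    return (1 + risk_free_rate) / (1 - p_apocalypse) - 1

# e.g. 10% chance of apocalypse, 2% risk-free rate
# -> the borrower should accept roughly a 13.3% rate
rate = breakeven_rate(0.10, 0.02)
```

So the interest-rate premium the borrower accepts would, in principle, reveal their implied probability of apocalypse.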
meta - LessWrong
have people predict whether they will upvote a post just based on the title
There's a post, I think by Robin Hanson on Overcoming Bias, that says people care about what their peers think of them, but that we can hack our brains into doing awesome things by making our reference group the elites of the future. I can't find this post. Do you have a link?
might be useful for people to have a personal wiki where they take notes, instead of everyone taking notes in private Gdocs
status: to do / to integrate
A Hubble Brain: a brain using all the resources present in the equivalent of a Hubble volume.
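For a very rough sense of scale (assuming Bremermann's limit of ~1.36e50 bit-operations per second per kg, and ~1.5e53 kg of ordinary matter in the observable universe; both are loose back-of-envelope numbers, not settled figures):

```python
# Rough order-of-magnitude sketch; both constants are loose assumptions.
BREMERMANN_LIMIT_OPS_PER_S_PER_KG = 1.36e50  # ~c^2 / h
HUBBLE_VOLUME_MASS_KG = 1.5e53               # ~ordinary matter in the observable universe

# Upper bound on the Hubble Brain's computation rate
hubble_brain_ops_per_s = BREMERMANN_LIMIT_OPS_PER_S_PER_KG * HUBBLE_VOLUME_MASS_KG
print(f"~{hubble_brain_ops_per_s:.0e} bit-ops per second")  # on the order of 1e103
```

This ignores dark matter, energy conversion losses, and communication delays across the volume, so it's only an upper bound.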
I want to look into roleplay in animals, but Google is giving me animal roleplay, which is interesting too, but not what I'm looking for right now 😅
I wonder how much roleplay there is in the animal kingdom. I wouldn't be surprised if there was very little.
Maybe if you're able to roleplay, then you're able to communicate?? Like, roleplay might require a theory of mind, because you're imagining yourself in someone else's body.
Maybe you can teach words to an animal without a theory of mind, but they'll be more like levers for them: for them, saying "bana...
I remember someone in the LessWrong community (I think Eliezer Yudkowsky, but maybe Robin Hanson or someone else, or maybe only rationalist-adjacent; maybe in an article or a podcast) saying that people who believe in "UFOs" (or people who believe in unproven conspiracy theories) would stop being so enthusiastic about them if they became actually known to be true, with good evidence. Does anyone know what I'm referring to?
sometimes I see people say "(doesn't) believe in science" when in fact they should say "(doesn't) believe in scientists"
or, more precisely, "relative credence in the institutions trying to do science"
hummm, I think I prefer the expression 'skinsuit' to 'meatbag'. feels more accurate, but am not sure. what do you think?
I just realized my System 1 was probably anticipating our ascension to the stars to start in something like 75-500 years.
But actually, colonizing the stars could be millions of subjective years away if we go through an em phase (http://ageofem.com/). On the other hand, we could also have finished spreading across the cosmos in only a few subjective decades if I get cryopreserved and the aestivation hypothesis is true (https://en.wikipedia.org/wiki/Aestivation_hypothesis).
I created a Facebook group to discuss moral philosophies that value life in and of itself: https://www.facebook.com/groups/1775473172622222/
For non-human animal brains, I would compare them to the baseline of individuals in the...
The original Turing test has a human evaluator.
Other evaluators I think would be interesting include: the AI passing the test, a superintelligent AI, and an omniscient maximally-intelligent entity (except without the answer to the test).
Thought while reading this thread.
Category: Weird life optimization
One of my ear canals is shaped differently. When I was young, my mother would tell me that this one was harder to clean, and that ze couldn't see my eardrum. This ear accumulates wax more easily. A few months ago, I decided to let it get blocked.
Obvious possible cognitive bias here is the "just world" bias: if something bad happens often enough, I'll start to think it's good.
But here are benefits this has for me:
When sleeping, I can put my good ear on the pillow, and this now isolates me from sound pretty well.