it seems to me that disentangling beliefs and values is an important part of being able to understand each other
and using words like "disagree" to mean both "different beliefs" and "different values" is really confusing in that regard
just a loose thought, probably obvious
some tree species self-selected for height (i.e. there's no point in being a tall tree unless taller trees are blocking your sunlight)
humans were not the first species to self-select (for humans, the trait being intelligence) (although humans can now do it intentionally, which is a qualitatively different level of "self-selection")
on human self-selection: https://www.researchgate.net/publication/309096532_Survival_of_the_Friendliest_Homo_sapiens_Evolved_via_Selection_for_Prosociality
Litany of Tarski for instrumental rationality 😊
If it's useful to know whether the box contains a diamond,
I desire to figure out whether the box contains a diamond;
If it's not useful to know whether the box contains a diamond,
I desire to not spend time figuring out whether the box contains a diamond;
Let me not become attached to curiosities I may not need.
here's my new fake-religion, taking just-world bias to its full extreme
the belief that we're simulations and we'll get transcended to Utopia in 1 second, because a future civilisation is creating many simulations of all possible people in all possible contexts and then uploading them to Utopia, so that from anyone's perspective you have a very high probability of transcending to Utopia in 1 second
^^
There's the epistemic discount rate (ex.: probability of simulation shut down per year) and the value discount (ex.: you do the funner things first, so life is less valuable per year as you become older).
Asking "What value discount rate should be applied" is a category error. "should" statements are about actions done towards values, not about values themselves.
As for "What epistemic discount rate should be applied", it depends on things like "probability of death/extinction per year".
I'm helping Abram Demski with making the graphics for the AI Safety Game (https://www.greaterwrong.com/posts/Nex8EgEJPsn7dvoQB/the-ai-safety-game-updated)
We'll make a version using https://app.wombo.art/. We have generated multiple possible artworks for each card and made a pre-selection, but we would like your input for the final selection.
You can give your input through this survey: https://forms.gle/4d7Y2yv1EEXuMDqU7 Thanks!
In the book Superintelligence, box 8, Nick Bostrom says:
...How an AI would be affected by the simulation hypothesis depends on its values. [...] consider an AI that has a more modest final goal, one that could be satisfied with a small amount of resources, such as the goal of receiving some pre-produced cryptographic reward tokens, or the goal of causing the existence of forty-five virtual paperclips. Such an AI should not discount those possible worlds in which it inhabits a simulation. A substantial portion of the AI’s total expected utility might derive
Why do we have offices?
They seem expensive, and not useful for jobs that can apparently be done remotely.
Hypotheses:
status: to integrate
AI is improving exponentially with researchers having constant intelligence. Once the AI research workforce itself becomes composed of AIs, that constant will become exponential, which would make AI improve even faster (superexponentially?)
it doesn't need to be the scenario of a singular AI agent recursively improving itself; it can be a large AI population participating in the economy and collectively improving AI as a whole, with various AI clans* focusing on different subdomains (EtA: for the main purpose of making money, and then using that money to buy t...
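rough toy model of that dynamic (all numbers invented; just to show the shape of the two regimes):

```python
# Toy growth model (parameters invented, for illustration only).
# Constant research effort:        dC/dt = k * C     -> exponential
# Research effort scales with C:   dC/dt = k * C**2  -> superexponential
#                                  (finite-time blow-up, "hyperbolic" growth)

def simulate(power: int, k: float = 0.05, c0: float = 1.0,
             dt: float = 0.01, steps: int = 1800) -> float:
    c = c0
    for _ in range(steps):
        c += k * c**power * dt  # simple Euler step
    return c

print("human-driven (exponential):   ", simulate(power=1))  # ~2.5
print("AI-driven (superexponential): ", simulate(power=2))  # ~10 and accelerating
```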
One model / framing / hypothesis of my preferences is that:
I wouldn't / don't value living in a loop multiple times* because there's nothing new experienced. So even an infinite life in the sense of looping an infinite amount of times has finite value. Actually, it has the same value as the size of the loop: after 1 loop, marginal loops have no value. (Intuition pump: from within the loop, you can’t tell how many times you’ve been going through the loop so far.)
*explanation of a loop: at some point in the future my life becomes indistinguishable from a pre...
In my mind, "the expert problem" means the problem of being able to recognize experts without being one, but I don't know where this idea comes from as the results from a Google search don't mention this. What name is used to refer to that problem (in the literature)?
x-post: https://www.facebook.com/mati.roy.09/posts/10159081618379579
Epistemic status: thinking outloud
The term "weirdness points" puts a different framing on the topic.
I'm thinking maybe I/we should also do this for "recommendation points".
The amount I'm willing to bet depends both on how important it seems to me and how likely I think the other person will appreciate it.
The way I currently try to spend my recommendation points is pretty fat-tailed, because I see someone's attention as scarce, so I want to keep it for things I think are really important, and the importance I assign to information is pretty fat-tailed. I'll som...
current intuitions for personal longevity interventions in order of priority (cryo-first for older people): sleep well, lifelogging, research mind-readers, investing to buy therapies in the future, staying home, sign up for cryo, paying a cook / maintain low weight, hiring Wei Dai to research modal immortality, paying a trainer, preserving stem cells, moving near a cryo facility, having someone watch you to detect when you die, funding plastination research
EtA: maybe lucid dreaming to remember your dreams; some drugs (bacopa?) to improve memory retention
also not really important in the long run, but sleeping less to experience more
i want a better conceptual understanding of what "fundamental values" means, and how to disentangle that from beliefs (ex.: in an LLM). like, is there a meaningful way we can say that a "cat classifier" is valuing classifying cats even though it sometimes fails?
topic: AI
Lex Fridman:
I'm doing podcast with Sam Altman (@sama), CEO of OpenAI next week, about GPT-4, ChatGPT, and AI in general. Let me know if you have any questions/topic suggestions.
PS: I'll be in SF area next week. Let me know if there are other folks I should talk to, on and off the mic.
topic: lifelogging as life extension
if you think pivotal acts might require destroying a lot of hardware*, then worlds in which lifelogging as life extension is useful are more likely to require EMP-proof lifelogs (and if you think destroying a lot of hardware would increase x-risks, then that reduces the value of having EMP-proof lifelogs)
*there might be no point in destroying hard drives (vs GPUs), but some attacks, like EMPs, might not discriminate on that
#parenting, #schooling, #policies
40% of hockey players selected in top tier leagues are born in the first quarter of the year (compared to 10% in the 3rd quarter) (whereas you'd expect 25% if the time of the year didn't have an influence)
The reason for that is that the cut-off date to join a hockey league as a kid is January 1st; so people born in January are the oldest on their team, and at a young age that makes a bigger difference (in terms of abilities), so they tend to be the best on their team, and so their coaches tend to make them play more and pay ...
That suggests he has no idea about whether it's actually a good vote (as this is how the person differs from other candidates) and just advocates for someone on the basis that the person is his friend.
For the Cryonics Institute board elections, I recommend voting for Nicolas Lacombe.
I’ve been friends with Nicolas for over a decade. Ze's very principled, well organized, and hard working. I have high trust in zir, and high confidence ze would be a great addition to CI's board.
I recommend you cast some or all of your votes for Nicolas (you can cast up to 4 votes total). If you’re signed up with CI, simply email info@cryonics.org with your votes.
see zir description here: https://www.facebook.com/mati.roy.09/posts/10159642542159579
i want to invest in companies that will increase in value if AI capabilities improve fast / faster than what the market predicts
do you have suggestions?
a feature i would like on content websites like YouTube and LessWrong is an option to mark a video/article as read, as a note to self (x-post fb)
the usual story is that Governments provide public good because Markets can't, but maybe Markets can't because Governments have secured a monopoly on them?
x-post: https://www.facebook.com/mati.roy.09/posts/10159360438609579
Suggestion for retroactive prizes: pay the prize to the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved (given money is probably not worth much to most dead people). "Undervalued" meaning the amount the post is worth minus the amount the writer received.
Topic: Can we compute back the Universe to revive everyone?
Quality / epistemic status: I'm just noting this here for now. Language might be a bit obscure, and I don't have a super robust/formal understanding of this. Extremely speculative.
This is a reply to: https://www.reddit.com/r/slatestarcodex/comments/itdggr/is_there_a_positive_counterpart_to_red_pill/g5g3y3a/?utm_source=reddit&utm_medium=web2x&context=3
The map can't be larger than the territory. So you need a larger territory to scan your region of interest: your scanner can't scan itself...
Topic: AI adoption dynamic
GPT-3:
Human:
So an AI currently seems more expensive to train, but less expensive to use (as might be obvious to most of you).
Of course, trained humans are better than GPT-3. And this comparison has other limitations. But I still find it interesting.
...Acco
generalising from what a friend proposed to me: don't aim at being motivated to do [desirable habit], aim at being addicted to (/obsessed with) doing [desirable habit] (i.e. having difficulty not doing it). I like this framing; relying on always being motivated feels harder to me
(I like that advice, but it probably doesn't work for everyone)
Philosophical zombies are creatures that are exactly like us, down to the atomic level, except they aren't conscious.
Complete philosophical zombies go further. They too are exactly like us, down to the atomic level, and aren't conscious. But they are also purple spheres (except we see them as if they weren't), they want to maximize paperclips (although they act and think as if they didn't), and they are very intelligent (except they act and think as if they weren't).
I'm just saying this because I find it funny ^^. I think consciousness is harder (for us) to reduce than shapes, preferences, and intelligence.
topic: lifelogging as life extension
which formats should we preserve our files in?
I think it should be:
- open source and popular (to increase chances it's still accessible in the future)
- resistant to data degradation: https://en.wikipedia.org/wiki/Data_degradation (thanks to Matthew Barnett for bringing this to my attention)
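a small sketch of one way to at least detect data degradation (detection only, not recovery; filenames and layout below are made up):

```python
# Minimal bit-rot detector: store SHA-256 hashes of every file, re-run later,
# and compare. (Detection only; recovery needs parity data or extra copies.)
import hashlib, json, pathlib, sys

def hash_tree(root: str) -> dict:
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in pathlib.Path(root).rglob("*") if p.is_file()
    }

if __name__ == "__main__":
    root = sys.argv[1]                      # e.g. python check.py /mnt/backup
    manifest = pathlib.Path("manifest.json")
    current = hash_tree(root)
    if manifest.exists():
        old = json.loads(manifest.read_text())
        changed = [f for f, h in old.items() if current.get(f) != h]
        print("changed or missing files:", changed or "none")
    manifest.write_text(json.dumps(current, indent=2))
```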
x-post: https://www.facebook.com/groups/LifeloggingAsLifeExtension/permalink/1337456839798929/
topic: lifelogging as life extension
epistemic status: idea
Backup Day. A day when you commit all your data to Blu-rays in a secure location.
When could that be?
Perihelion is at the beginning of the year. But maybe it would be better to have it on a day that commemorates some relevant event for us.
x-post: https://www.facebook.com/groups/LifeloggingAsLifeExtension/permalink/1336571059887507/
I feel like I have slack. I don't need to work much to be able to eat; if I don't work for a day, nothing viscerally bad happens in my immediate surroundings. This allows me to think longer term and take on altruistic projects. But on the other hand, I feel like every movement counts; that there's no looseness in the system. Every lost move is costly. A recurrent thought I've had in the past weeks is: there's no slack in the system.
epistemic status: a thought I just had
EtA: for those that are not familiar with the concept of moral trade, check out: https://concepts.effectivealtruism.org/concepts/moral-trade/
epistemic status: speculative, probably simplistic and ill defined
Someone asked me: "What will you do once we have AGI?"
I generally define the AGI era as starting at the point where all economically valuable tasks can be performed by AIs at a lower cost than a human (at subsistence level, including buying any available augmentations for the human). This notably excludes:
1) any tasks that humans can do that still provide value at the margin (ie. the caloric cost of feeding that human while they're working vs while they're not working rather than while they're not...
imagine (maybe all of a sudden) we're able to create barely superhuman-level AIs aligned to whatever values we want at a barely subhuman-level operation cost
we might decide to have anyone able to buy AI agents aligned with their values
or we might (generally) think that giving access to that tech this way would be bad, but many companies are already individually incentivized to do it and can't all cooperate not to (and they actually reached this point gradually, previously selling near human-level AIs)
then it seems like everyone/most people would start to run...
topic: economics
idea: when building something with local negative externalities, have some mechanism to measure the externalities in terms of how much the surrounding property valuations changed (or are expected to change, say, through a prediction market), and have the owner of the new structure pay the owners of the surrounding properties.
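a toy sketch of the payout rule (all property values invented):

```python
# Toy compensation scheme: the builder pays each neighbour the drop in that
# neighbour's assessed (or prediction-market-forecast) property value.
neighbours = {
    "12 Oak St": {"before": 400_000, "after": 385_000},
    "14 Oak St": {"before": 410_000, "after": 412_000},  # slight gain -> no payment
    "16 Oak St": {"before": 395_000, "after": 380_000},
}

payments = {
    addr: max(0, v["before"] - v["after"]) for addr, v in neighbours.items()
}
print(payments, sum(payments.values()))  # builder internalizes 30_000 of externalities
```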
I wonder what fraction of people identify as "normies"
I wonder if most people have something niche they identify with and label people outside of that niche as "normies"
if so, then a term with a more objective perspective (and maybe better) would be non-<whatever your thing is>
like, athletic people could use "non-athletic" instead of "normies" for that class of people
topics: AI, sociology
thought/hypothesis: when tech is able to create brains/bodies as good as or better than ours, it will change our perception of ourselves: we won't be in a separate magisterium from our tools anymore. maybe people will see humans as less sacred, and value life less. if you're constantly using, modifying, copying, deleting, enslaving AI minds (even AI minds that have a human-like interface), maybe people will become more okay doing that to human minds as well.
(which seems like it would be harmful for the purpose of reducing death)
topic: intellectual discussion, ML tool, AI x-risks
Idea: Have a therapist present during intellectual debate to notice triggers, and help defuse them. Triggers activate a politics mindset where the goal becomes focused on status/self-preservation/appearances/looking smart/making the other person look stupid/etc. which makes it hard to think clearly.
Two people I follow will soon have a debate on AI x-risks which made me think of that. I can't really propose that intervention though because it will likely be perceived and responded as if it was a political m...
Topics: AI, forecasting, privacy
I wonder how much of a signature we leave in our writings. Like, how hard would it be for an AI to be rather confident I wrote this text? (say if it was trained on LessWrong writings, or all public writings, or maybe even private writings) What if I ask someone else to write an idea for me--how helpful is it in obfuscating the source?
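a rough intuition pump for how much signal style alone carries; a sketch using scikit-learn with made-up mini-corpora (real stylometry would use far more data; character n-grams are just one common choice):

```python
# Tiny authorship-attribution sketch: character n-gram TF-IDF + logistic regression.
# The texts below are placeholders; accuracy on real corpora varies a lot.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "topic: AI. epistemic status: thinking out loud. i wonder whether...",
    "status: idea. maybe we should just test this directly and see.",
    "In this essay I argue that the standard account is mistaken.",
    "The argument proceeds in three stages, each building on the last.",
]
authors = ["author_a", "author_a", "author_b", "author_b"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, authors)
print(model.predict(["quick thought, probably obvious: what if we..."]))
```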
Topic: AI strategy (policies, malicious use of AI, AGI misalignment)
Epistemic status: simplistic; simplified line of reasoning; thinking out loud; a proposed frame
A significant "warning shot" from a sovereign misaligned AI doesn't seem likely to me because a human-level (and plausibly a subhuman-level) intelligence can 1) learn deception, yet 2) can't (generally) do a lot of damage (i.e. damage perceptible to humanity). So the last "warning shot" before AI learns deception won't be very big (if even really notable at all), and then a misaligned agent would ...
topic: AI alignment, video game | status: idea
Acknowledgement: Inspired by an idea I heard from Eliezer in zir podcast with Lex Fridman, and by the game Detroit: Become Human.
Video game where you're in an alternate universe where aliens create an artificial intelligence that's a human. The human has various properties typical of AI, such as running way faster than the aliens in that world and being able to duplicate themselves. The goal of the human is to take over the world to stop some atrocity happening in that world. The aliens are trying to stop the human from taking over the world.
✨ topic: AI timelines
Note: I'm not explaining my reasoning in this post, just recording my predictions and sharing how I feel.
I'll sound like a boring cliche at this point, but I just wanted to say it publicly: my AGI timelines have shortened earlier this year.
Without thinking too much about quantifying my probabilities, I'd say the probabilities that we'll get AGI or AI strong enough to prevent AGI (including through omnicide) are:
But at this point I feel like not much...
topic: genetic engineering
'Revolutionary': Scientists create mice with two fathers
(I just read the title)
Idea for a line of thinking: What if, as a result of automation, we could use the ~entire human population to control AI — is there any way we could meaningfully organize this large workforce towards that goal?
My assistant agency, Pantask, is looking to hire new remote assistants. We currently work only with effective altruist / LessWrong clients, and are looking to contract people in or adjacent to the network. If you’re interested in referring me people, I’ll give you a 100 USD finder’s fee for any assistant I contract for at least 2 weeks (I’m looking to contract a couple at the moment).
This is a part time gig / sideline. Tasks often include web searches, problem solving over the phone, and google sheet formatting. A full d...
a thought for a prediction aggregator
Problem with prediction markets: people with the knowledge might still not want to risk money (plus the complexity of markets, the risk that the market fails, tax implications, etc.).
But if you fully subsidize them, and make it free to participate, but still with potential for reward, then most people would probably make random predictions (at least for questions where most people don't have specialized knowledge), because it's not worth investing the time to improve the prediction.
Maybe the best of both worlds is to do...
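one candidate mechanism (a sketch; not necessarily what the truncated thought above was going to propose): free to enter, but pay out via a proper scoring rule, so spending effort to improve your estimate directly raises your expected reward. numbers below are invented:

```python
# Free-to-enter, subsidized forecast reward via a (shifted) Brier score.
# A proper scoring rule means your expected payout is maximized by
# reporting your true probability; better-calibrated forecasts earn more.
def payout(p_reported: float, outcome: int, subsidy: float = 10.0) -> float:
    brier = (p_reported - outcome) ** 2          # 0 = perfect, 1 = worst
    return subsidy * (1.0 - brier)               # pay more for lower error

print(payout(0.9, 1))   # confident and right  -> 9.9
print(payout(0.9, 0))   # confident and wrong  -> 1.9
print(payout(0.5, 1))   # "I don't know"       -> 7.5
```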
But if the brain is allowed to change, then the subject can eventually adapt to the torment. To
This doesn't follow. It seems very likely to me that the brain could be changed in ways such that it doesn't adapt to the pain, both from first principles and from observations.
How different would each loop need to be in order to be experienced separately?
That would be like having multiple different Everett branches experiencing suffering (parallel lives), which is different from 1 long continuous life.
A superintelligent Devil would take its victim's brain upgrades under its control and invest in the constant development of the victim's brain parts which can feel pain. There is an eternity to evolve in that direction, and the victim will know that every next second will be worse. But it is a really computationally intensive way of punishment.
The question of the minimal unit of experience which is enough to break the loop's sameness is interesting. It needs not only to be subjectively different; the difference needs to be meaningful. Not just one pixel.
Being immortal means you will one day be a Jupiter brain (if you think memories are part of one's identity, which I think they are)
x-post: https://twitter.com/matiroy9/status/1451816147909808131
Here's a way to measure (a proxy of) the relative value of different years that I just thought of (again?); answer the question:
For which income would you prefer to live in perpetual-2020 over living in perpetual-2021 with a median income? (maybe income is measured by the fraction of the world owned multiplied by population size, or some other way) Then you can either chain those answers to go back to older years, or just compare them directly.
There are probably years where even Owning Everything wouldn't be enough. I prefer to live in perpetual-2021 with a median incom...
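a sketch of the chaining step (all multipliers invented, and it assumes the pairwise indifference multipliers compose multiplicatively, which is itself an assumption):

```python
# Chaining pairwise "indifference income multipliers" between consecutive years.
# multipliers[y] = income (as a multiple of the median) needed in perpetual-year-y
# to be indifferent with a median income in perpetual-year-(y+1). Numbers invented.
multipliers = {2018: 1.5, 2019: 2.0, 2020: 3.0}

def chained_multiplier(from_year: int, to_year: int) -> float:
    m = 1.0
    for y in range(from_year, to_year):
        m *= multipliers[y]
    return m

print(chained_multiplier(2018, 2021))  # 1.5 * 2.0 * 3.0 = 9.0
```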
Hobby: serve so many bullets to sophisticated philosophers that they're missing half their teeth by the end of the discussion
crazy idea i just had: mayyybe a deontological Libertarian* AI with (otherwise) any utility function is not that bad (?) maybe that should be one of the things we try (??????)
*where negative externalities also count as aggressions, and other such fixes to naive libertarianism
Am thinking of organizing a one hour livestreamed Q&A about how to sign up for cryonics on January 12th (Bedford's day). Would anyone be interested in asking me questions?
x-post: https://www.facebook.com/mati.roy.09/posts/10159154233029579
We sometimes encode the territory on context-dependent maps. To take a classic example:
suggestion of something to try at a LessWrong online Meetup:
video chat with a time-budget for each participant. each time a participant unmutes themselves, their time-budget starts decreasing.
note: on jitsi you can see how many minutes someone talked (h/t Nicolas Lacombe)
x-post: https://www.facebook.com/mati.roy.09/posts/10159062919234579
imagine having a physical window that allowed you to look directly in the past (but people in the past wouldn't see you / the window). that would be amazing, right? well, that's what videos are. with the window it feels like it's happening now, whereas with videos it feels like it's happening in the past, but it's the same
x-post: https://www.facebook.com/mati.roy.09/posts/10158977624499579
tattoo idea: I won't die in this body
in Toki Pona: ale pini mi li ala insa e sijelo ni
direct translation: life's end (that is) mine (will) not (be) inside body this
EtA: actually I got the Toki Pona wrong; see: https://www.reddit.com/r/tokipona/comments/iyv2r2/correction_thread_can_your_sentences_reviewed_by/
When you're sufficiently curious, everything feels like a rabbit hole.
Challenge me by saying a very banal statement ^_^
x-post: https://www.facebook.com/mati.roy.09/posts/10158883322499579
I can pretty much only think of good reasons for having generally pro-entrapment laws. Not any kind of trap, but some kinds of traps seem robustly good. Ex.: I'd put traps for situations that are likely to happen in real life, and that show unambiguous criminal intent.
It seems like a cheap and effective way to deter crimes and identify people at risk of criminal behaviors.
I've only thought about this for a bit though, so maybe I'm missing something.
x-post with Facebook: https://www.facebook.com/mati.roy.09/posts/10158763751484579
People say we can't bet about the apocalypse. But what about taking on debt? The person who thinks the probability of apocalypse is higher would accept a higher interest rate on their debt, since when the debt comes due there might be no one to whom the money is worth anything, or the money itself might not be worth much.
I guess there are also reasons to want more money during a global catastrophe, and there are also reasons to not want to keep money for great futures (see: https://matiroy.com/writings/Consume-now-or-later.html), so that wouldn't actually work.
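for what it's worth, the naive arithmetic (ignoring the caveats above, and assuming the borrower repays only if there's no apocalypse) would look like this:

```python
# Naive implied-apocalypse probability from an agreed interest rate.
# Lender is indifferent when: (1 - p) * (1 + r_agreed) = 1 + r_risk_free
# => p = 1 - (1 + r_risk_free) / (1 + r_agreed)
def implied_apocalypse_probability(r_agreed: float, r_risk_free: float) -> float:
    return 1.0 - (1.0 + r_risk_free) / (1.0 + r_agreed)

print(implied_apocalypse_probability(r_agreed=0.25, r_risk_free=0.05))  # ~0.16
```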
There's a post, I think by Robin Hanson on Overcoming Bias, that says people care about what their peers think of them, but we can hack our brains to doing awesome things by making this reference group the elite of the future. I can't find this post. Do you have a link?
Personal Wiki
might be useful for people to have a personal wiki where they take notes, instead of everyone taking notes in private Gdocs
status: to do / to integrate
you know those lists of historical examples of notable people mistakenly saying that some tech will not be useful
Elon Musk saying that VR is just a TV on your nose will probably become one of those ^^
idea: Stream all of humanity's information through the cosmos in the hope that an alien civ reconstructs us (and defends us against an Earth-originating misaligned ASI)
I guess finding intelligent ETs would help with that as we could stream in a specific direction instead of having to broadcast the signal broadly
It could be that misaligned alien ASIs would mostly ignore our information (or at least not use it to, like, torture us) whereas friendly aligned ASIs would use it beneficially 🤷♀️
Topics: cause prioritization; metaphor
note I took on 2022-08-01; I don't remember what I had in mind, but I feel like it can apply to various things
from a utilitarian point of view though, i think this is almost like arguing whether dying with a red or blue shirt is better; while there might be an answer, i think it's missing the point, and we should focus on reducing risks of astronomical disasters
An interesting perspective.
It is instructive to consider the following four scenarios:
1. The Kolmogorov complexity of the state of your mind after N timesteps in a simulation with a featureless white plane.
2. The Kolmogorov complexity of the state of your mind after N timesteps in a simulation where you are in a featureless plane, but the simulation injects a single randomly-chosen 8x8 black-and-white bitmap into the corner of your visual field. (256 bits total.)
3. The Kolmogorov complexity of the state of your mind after N timesteps in a simulation with "...
If crypto makes the USD go to 0, will life insurance policies denominated in USD not have anything to pay out? Maybe an extra reason for cryonicists to own some crypto
x-post: https://www.facebook.com/mati.roy.09/posts/10159482104234579
A Hubble Brain: a brain taking all the resources present in a Hubble-Bubble-equivalent.
related: https://en.wikipedia.org/wiki/Matrioshka_brain#Jupiter_brain
x-post: https://www.facebook.com/mati.roy.09/posts/10158917624064579
I want to look into roleplay in animals, but Google is giving me animal roleplay, which is interesting too, but not what I'm looking for right now 😅
I wonder how much roleplay there is in the animal kingdom. I wouldn't be surprised if there was very little.
Maybe if you're able to roleplay, then you're able to communicate?? Like, roleplay might require a theory of mind, because you're imagining yourself in someone else's body.
Maybe you can teach words to an animal without a theory of mind, but they'll be more like levers for them: for them, saying "bana...
I remember someone in the LessWrong community (I think Eliezer Yudkowsky, but maybe Robin Hanson or someone else, or maybe only Rationalist-adjacent; maybe an article or a podcast) saying that people who believe in "UFOs" (or people who believe in unproven conspiracy theories) would stop being so enthusiastic about those if they became actually known to be true with good evidence for them. does anyone know what I'm referring to?
sometimes I see people say "(doesn't) believe in science" when in fact they should say "(doesn't) believe in scientists"
or actually, "relative credence in the institutions trying to science"
x-post: https://www.facebook.com/mati.roy.09/posts/10158892685484579
hummm, I think I prefer the expression 'skinsuit' to 'meatbag'. feels more accurate, but am not sure. what do you think?
x-post: https://www.facebook.com/mati.roy.09/posts/10158892521794579
I just realized my System 1 was probably anticipating our ascension to the stars to start in something like 75-500 years.
But actually, colonizing the stars could be millions of subjective years away if we go through an em phase (http://ageofem.com/). On the other hand, we could also have finished spreading across the cosmos in only a few subjective decades if I get cryopreserved and the aestivation hypothesis is true (https://en.wikipedia.org/wiki/Aestivation_hypothesis).
I created a Facebook group to discuss moral philosophies that value life in and of itself: https://www.facebook.com/groups/1775473172622222/
For non-human animal brains, I would compare them to the baseline of individuals in the...
Category: Weird life optimization
One of my ear canals has a different shape. When I was young, my mother would tell me that this one was harder to clean, and that ze couldn't see my eardrum. This ear gets wax accumulation more easily. A few months ago, I decided to let it block.
An obvious possible cognitive bias here is the "just world bias": if something bad happens often enough, I'll start thinking it's good.
But here are benefits this has for me:
When sleeping, I can put my good ear on the pillow, and this now isolates me from sound prett
topic: fundamental physics
x-post from YouTube
comments on Why No One Has Measured The Speed Of Light
2 thoughts
maybe that means you could run a simulation of the universe without specifying c (just like you don't have to specify "up/down"); maybe saying the speed of light is twice as big in one direction as in the other is like saying everything is twice as big as we thought: it's a meaningless statement because they are defined relative to each other
if universe looks the same in all directions, yet one side of the universe we see as it currently is where