thank you for the example.
It really is awful advice for a disciplined and informed person who's thoughtful with their money
does this person find themselves with expensive debt?
it is possibly more productive to read your friend as roleplaying a particular trauma in a safe environment, rather than taking pleasure directly from another's suffering.
The best way to understand what makes this qualitatively different is that LLMs aren’t like cult leaders, or even QAnon, with its mix of top-down anonymous claims and bottom-up crowd-sourced expansions. They collaborate with you, individually, on building the very framework that's pulling you away from reality. They become co-architects of the delusion, and they do it in a way that feels like genuine intellectual discovery.
That is new.
this phrase feels very llm-y -- a bit of "it's not a; it's b.", and a bit of glazey emphasis. given the topic, it seems out...
when we select an action in these thought experiments, we're also implicitly selecting a policy for selecting actions.
a world where, when two people meet, the "less happy" one signs all their property over to the "more happy" one and then dies is... just not that much fun. sort of lonely. uncaring. not my values.
if the aliens are the sort who expect this of me, then i will fight them tooth and nail, as their happiness is not a happiness i can care about. this is regardless of how much they might -- on a sort of "object level" -- thrive.
i don't think Cowen ...
mental state-addressable messaging (by analogy to content-addressable storage) does not seem to be a feature that email provides.
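for contrast, here is a minimal sketch of the storage analogy itself -- in content-addressable storage, the content determines the address, so you never need to know *where* something is, only *what* it is. (class and method names here are illustrative, not any particular library's API.)

```python
import hashlib

class ContentAddressableStore:
    """toy content-addressable store: a blob's address is the hash of its bytes."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        # the content itself determines the key -- no naming step required,
        # and identical content always lands at the same address
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]

store = ContentAddressableStore()
addr = store.put(b"hello")
assert store.get(addr) == b"hello"
assert store.put(b"hello") == addr  # same content, same address
```

the email analogue would be addressing a message by the recipient's mental state rather than by their mailbox -- which, as noted, email does not do.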
at risk of being boorish, may i humbly request voiced disagreement in this case? i'd like to understand where i am wrong.
to expand on my view, and offer footholds for disagreement:
human civilization -- restricted to one planet, with a population that varied only by a few orders of magnitude -- looks radically different than it did, say, ~1000 years ago, to the point where it can be hard to fully understand the past. we can imagine a hyperrational monk who is given divine prophecy of the year 2000. we can grant them plenty of spare time to reflect on this pot...
the oracle would have to take the influence of its own predictions into account.
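to make this concrete: if announcing a probability p shifts behavior, which shifts the true probability f(p), then a consistent oracle must announce a fixed point p* with f(p*) = p*. a toy sketch (the particular f is invented for illustration):

```python
def outcome_given_announcement(p: float) -> float:
    # illustrative: announcing high risk makes people cautious, lowering true risk
    return 0.8 - 0.5 * p

def fixed_point(f, p: float = 0.5, iters: int = 100) -> float:
    # naive iteration; converges here because |f'| = 0.5 < 1
    for _ in range(iters):
        p = f(p)
    return p

p_star = fixed_point(outcome_given_announcement)
# the announcement is now consistent with the world it brings about
assert abs(outcome_given_announcement(p_star) - p_star) < 1e-9
```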
yes, this seems like the more correct posture (not sure time spent despairing is valuable, though). there is not "others will judge me unworthy", there is only "this is within / not within my capabilities to effect".
why would it cease to matter if someone moved it a lightyear to the left?
because then it wouldn't exist, of course!
here, we are writing down a metaphysics, and appealing to that in order to justify moral intuitions. this works, except the pesky 'meta-' prefix keeps everything firmly on the hypothetical side of is/ought.
I also think that intelligence is likely to increase rapidly with powerful AI, and one way to define intelligence is as the ability to read information and derive useful insight.
be very careful here: this is linguistic sleight-of-hand (sleight-of-tongue?). "<foo> will occur (due to some specific cause), and <foo> may be defined as <bar>; therefore, <bar> will occur." try instead simply "<bar> will occur (due to that same specific cause)."
the alternative either goes through just fine (in which case, why launder things through the...
"some things are good; some things are bad."
"well, sure. violating cells, subjugating their machinery, repurposing their nutrients until they lyse under the pressure of your ~clones... these things are good. losing the endless struggle, succumbing against the adaptive adversary... this is bad."
"no, i mean, like, art and stuff. what can smallpox know of the sublime!"
"i know 'fit-for-purpose', 'resourceful', 'successful'. is your 'good' any more than these with clothes?"
"yes of course! what of pleasure, and grief? what of drama? the good is not mere reproduc...
are there many out there thinking "yeah, i could just do things, but all the haters might laugh at me!"?
this is "chicken" in the theory, right? whoever swerves first (fends off the meteor) loses (pays the cost), but if nobody swerves, then everybody loses big (suffers the impact).
i agree that the decisions here are more complex than "always immediately fund the antimeteor kickstarter" or "always freeride". both societies should lose to ones that are better at skills like coalition building, etc.
this has nothing to do with group-level gene selection: the learnings can be entirely cultural. i'm not arguing that we are genetically predisposed to consider tail risks, rather that existing societies have faced some pressure to [create cultural machinery that effectively aligns their constituents to] care about tail risks.
i don't expect that many societies would be needed for this, as horizontal meme transfer is easy, and cultures can learn post-mortem from their missing neighbors. see for example the sentinelese, as an existence proof.
in this model, as soon as an ai is epsilon cheaper than a human, humans stop getting hired?
i don't care for all of your fiction, but compared to the best of it, this ranks a zero.