Measure

Posts

3 · Measure's Shortform · 5y · 10

Wikitag Contributions

No wikitag contributions to display.

Comments

Temporarily Losing My Ego
Measure · 2d · 20

Just like your mind can only see the rightside-up-stairs (w/ a blue wall closer to you) or the upside-down-stairs (w/ a green wall closer to you) (but never both of them at the same time)

If I unfocus my eyes, I can see double with a different mode in each eye.

On Fleshling Safety: A Debate by Klurl and Trapaucius.
Measure · 4d · 73

(Those two had, specifically, asked an automatic result-filtering algorithm to select that fleshling of the highest discernible intelligence class up to measurement noise, whose Internet traces suggested the greatest ability to quickly adapt to being seized by aliens without disabling emotional convulsions. And if this was, itself, an odd sort of request-filter by fleshling standards -- liable to produce strange and unexpected correlations to its oddness -- neither of those two aliens had any way to know that.)

My read was that natural human variation plus a few dozen bits of optimization was sufficient explanation.

Assessing Far UVC Positioning
Measure · 6d · 40

But have you considered... pointing them all at a disco ball?

AI #138 Part 2: Watch Out For Documents
Measure · 15d · 20

The question is whether restrictions on AI speech violate the first amendment rights of users or developers

I'm assuming this means restrictions on users/developers being legally allowed to repeat AI-generated text, rather than restrictions built into the AI on what text it is willing to generate.

Sublinear Utility in Population and other Uncommon Utilitarianism
Measure · 18d · 20

Either I'm misunderstanding what you wrote, or you didn't mean to write what you did.

Suppose A is a human and B is a shrimp.

The value of adding a shrimp to a world where A exists is small.

The value of replacing the shrimp with A is large.

Notes on the need to lose
Measure · 25d · 20

Related: Trying to Try

LLMs one-box when in a "hostile telepath" version of Newcomb's Paradox, except for the one that beat the predictor
Measure · 25d · 20

Could this be the result of a system prompt telling them that the CoT isn't exposed, similar to how they denied that events after their knowledge cutoff could have occurred?

Non-Dualism and AI Morality
Measure · 2mo · 20

Related: https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment

Quality Precision
Measure · 2mo · 50

Both of my comments were about the thought experiment at the end of the post:

You are given a moral dilemma, either a million people will get an experience worth 100 utility points each, or a million + 1 people will get 99 utility points each. The first option gets you more utility total, but if we take the second option we get one more person served and nobody else can even tell the difference.
