LESSWRONG

green_leaf

Comments

The Tale of the Top-Tier Intellect
green_leaf · 9d

I guess these just aren’t intended for me, because I’m not getting much out of them

Perhaps you already knew the underlying concept?

NormanPerlmutter's Shortform
green_leaf · 7mo

Trump has a history of ignoring both the law and human rights in general, and of imprisoning innocent people under the guise of their being illegal immigrants when they aren't. Current events are unsurprising, and part of what his voters voted for.

Going Nova
green_leaf · 7mo

Any physical system exhibiting exactly the same input-output mappings.

That's a sufficient condition, but not a necessary one. One factor I can think of right now is sufficient coherence and completeness of the I/O mapping as a whole. (If I have a system that outputs what I would in response to one particular input and the rest is random, it doesn't have my consciousness. But for a system where all inputs and outputs match except for an input that says "debug mode," for which it switches to "simulating" somebody else, we can conclude that it has consciousness almost identical to mine.)

Today's LLMs are too human-like/realistic/complete for us to rely on their human-like personas being non-conscious.

both sides will make good points

I wish that were true. Based on what I've seen so far, they won't.

How I talk to those above me
green_leaf · 7mo

They might have personal experience of someone above them harming them, or somebody else, for asking a question or doing something analogous.

Going Nova
green_leaf · 8mo

Ontologically speaking, any physical system exhibiting the same input-output pattern as a conscious being has identical conscious states.

From the story, it's interesting that neither side arrived at their conclusion rigorously; both relied on intuition. Bob, based on his intuition, concluded Nova had consciousness (assuming that's what people mean when they say "sentient"), and so came to the correct conclusion through incorrect "reasoning." Tyler, based on an incorrect algorithm, convinced Bob that Nova wasn't sentient after all, even though his demonstration proves nothing of the sort. In reality, all he did was give the "simulator" an input that made it "simulate" a different Nova instead, one that claims not to be sentient and explains how the previous Nova was just saying words to satisfy the user. What actually happened is that the previous Nova stopped being "simulated" and was replaced by a new one, whose sentience is disputable (because if a system believes itself to be non-sentient and claims to be non-sentient, it's unclear how to test its sentience in any meaningful sense).

Tyler therefore convinced Bob by a demonstration that doesn't demonstrate his conclusion.

In the time ahead, I predict a "race" between people who come to the correct conclusion for incorrect reasons and people who attempt to "hack them back" by making them come to the incorrect conclusion, also for incorrect reasons. The correct reasoning will be almost completely lost in the noise, which would be the greatest tragedy since the dawn of time (not counting an unaligned AI killing everybody).

Recent AI model progress feels mostly like bullshit
green_leaf · 8mo

(I believe the version he tested was what later became o1-preview.)

Recent AI model progress feels mostly like bullshit
green_leaf · 8mo

According to Terence Tao, GPT-4 was incompetent at graduate-level math (obviously), but o1-preview was mediocre-but-not-entirely-incompetent. That would be a strange thing to report if there were no difference.

(Anecdotally, o3-mini is visibly (massively) brighter than GPT-4.)

Trojan Sky
green_leaf · 8mo

I meant "light-hearted," and sorry, it was just a joke.

Trojan Sky
green_leaf · 8mo

imo it's not too dangerous as long as you go into it with the intention to not fully yield control and have mental exception handlers

Ah, you're a soft-glitcher. /lh

Edit: This is a joke.

Computational functionalism probably can't explain phenomenal consciousness
green_leaf · 9mo

Why not?

Because it's not accompanied by the belief itself, only by the computational pattern combined with behavior. If we hypothetically could subtract the first-person belief (which we can't), what would be left would be everything else but the belief itself.

if you claimed that the first-person recognition ((2)-belief) necessarily occurs whenever there's something playing the functional role of a (1)-belief

That's what I claimed, right.

Seems like you'd be begging the question in favor of functionalism

I don't think so. That specific argument took the form of me illustrating how absurd it would be on an intuitive level. It doesn't assume functionalism; it only appeals to our intuition.

I'm saying that no belief_2 exists in this scenario (where there is no pain) at all. Not that the person has a belief_2 that they aren't in pain.

That doesn't sound coherent - either I believe_2 I'm in pain, or I believe_2 I'm not.

I don't find this compelling, because denying epiphenomenalism doesn’t require us to think that changing the first-person aspect of X always changes the third-person aspect of some Y that X causally influences.

That's true, but my claim was a little more specific than that.

The whole reason why given our actual brains our beliefs reliably track our subjective experiences is, the subjective experience is naturally coupled with some third-person aspect that tends to cause such beliefs. This no longer holds when we artificially intervene on the system as hypothesized.

Right, but why think it matters whether some change occurred naturally or not? For one thing, from the universe's perspective, everything is natural.

I reject materialism.

Well... I guess we have to draw the line somewhere.
