Mitchell_Porter

8 · Mitchell_Porter's Shortform · 2y · 24 comments

Comments

sam's Shortform
Mitchell_Porter · 3d

How about the fact that the opinions in the inserted asides are his actual opinions? If they were randomly generated, they wouldn't be.

Asking for a Friend (AI Research Protocols)
Mitchell_Porter · 3d

I would really like a test

There is no agreed-upon test for consciousness because there is no agreed-upon theory of consciousness.

There are people here who believe current AI is probably conscious, e.g. @JenniferRM and @the gears to ascension. I don't believe it but that's because I think consciousness is probably based on something physical like quantum entanglement. People like Eliezer may be cautiously agnostic on the topic of whether AI has achieved consciousness. You say you have your own theories, so, welcome to the club of people who have theories! 

Sabine Hossenfelder has a recent video on TikTokers who think they are awakening souls in ChatGPT by giving it roleplaying prompts.

sam's Shortform
Mitchell_Porter · 4d

Disagree from me. I feel like you haven't read much BB. These political asides are of a piece with the jabs and brags he makes in his philosophical essays.

You Can't Objectively Compare Seven Bees to One Human
Mitchell_Porter · 5d

CEV is group level relativism, not objectivism.

I think Eliezer's attempt at moral realism derives from two things: first, the idea that there is a unique morality which objectively arises from the consistent rational completion of universal human ideals; second, the idea that there are no other intelligent agents around with a morality drive, that could have a different completion. Other possible agents may have their own drives or imperatives, but those should not be regarded as "moralities" - that's the import of the second idea. 

This is all strictly phrased in computational terms too, whereas I would say that morality also has a phenomenological dimension, which might serve to further distinguish it from other possible drives or dispositions. It would be interesting to see CEV metaethics developed in that direction, but that would require a specific theory of how consciousness relates to computation, and especially how the morally salient aspects of consciousness relate to moral cognition and decision-making. 

You Can't Objectively Compare Seven Bees to One Human
Mitchell_Porter · 6d

These issues matter not just for human altruism but also for AI value systems. If an AI takeover occurs, and if the AI(s) care about the welfare of other beings at all, they will have to make judgements about which entities even have a well-being to care about, and also about how to aggregate all these individual welfares (for the purpose of decision-making). Even just from a self-interested perspective, moral relativism is not enough here, because in the event of AI takeover, you, the human individual, will be on the receiving end of AI decisions. It would be good to have a proposal for an AI value system that is both safe for you as an individual, and appealing enough to people in general that it has a chance of actually being implemented.

Meanwhile, the CEV philosophy tilts towards moral objectivism. It is supposed that the human brain implicitly follows some decision procedure specific to our species, that this encompasses what we call moral decisions, and that the true moral ideal of humanity would be found by applying this decision procedure to itself ("our wish if we knew more, thought faster, were more the people we wished we were", etc). It is not beyond imagining that if you took a brain-based value system like PRISM (LW discussion) and "renormalized" it according to a CEV procedure, it would output a definite standard for comparison and aggregation of different welfares.

On the functional self of LLMs
Mitchell_Porter · 6d

This all seems of fundamental importance if we want to actually understand what our AIs are. 

Over the course of post-training, models acquire beliefs about themselves. 'I am a large language model, trained by…' And rather than trying to predict/simulate whatever generating process they think has written the preceding context, they start to fall into consistent persona basins. At the surface level, they become the helpful, harmless, honest assistants they've been trained to be.

I always thought of personas as created mostly by the system prompt, but I suppose RLHF can massively affect their personalities as well... 
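
As a toy illustration of that distinction (a hypothetical sketch using the OpenAI Python client; the model name and prompts are placeholders, not anything from the post): with no system prompt, the persona that answers is whatever post-training carved out, and a system prompt only layers a role on top of that default.

```python
# Hypothetical sketch: same underlying model, with and without a persona-setting
# system prompt. The default "assistant" persona in the first call comes from
# post-training (RLHF etc.), not from any prompt we wrote.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str, system_prompt: str | None = None) -> str:
    """Send one question, optionally with a persona-setting system prompt."""
    messages = []
    if system_prompt is not None:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

question = "Who are you, and what do you care about?"

# No system prompt: the persona that answers is the one shaped by post-training.
print(ask(question))

# With a system prompt: a role-played persona layered on top of that default.
print(ask(question, system_prompt="You are a gruff 19th-century sea captain."))
```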

Zach Stein-Perlman's Shortform
Mitchell_Porter · 7d

I'm interested in being pitched projects

Are you wanting to hire people, wanting to be hired, looking to collaborate...?

Why abandon “probability is in the mind” when it comes to quantum dynamics?
Mitchell_Porter · 8d

You can't actually presume that... The relevant quantum concept is the "spectrum" of an observable. These are the possible values that a property can take (eigenvalues of the corresponding operator). An observable can have a finite number of allowed eigenvalues (e.g. spin of a particle), a countably infinite number (e.g. energy levels of an oscillator), or it can have a continuous spectrum, e.g. position of a free particle. But the latter case causes problems for the usual quantum axioms, which involve a Hilbert space with a countably infinite number of dimensions - there aren't enough dimensions to represent an uncountable number of distinct position eigenstates. You have to add extra structure to include them, and concrete applications always involve integrals over continua of these generalized eigenstates, so one might reasonably suppose that the "ontological basis" with respect to which branching is defined is something countable. In fact, I don't remember ever seeing a many-worlds ontological interpretation of the generalized eigenstates or the formalism that deals with them (e.g. rigged Hilbert space). 

In any case, the counterpart of branch counting for a continuum is simply integration. If you really did have uncountably many branches, you would just need a measure. The really difficult case may actually be when you have a countably infinite number of branches, because there's no uniform measure in that case (I suppose you could use literal infinitesimals, the equivalent of "1/aleph-zero").
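
To spell out that first point (a minimal sketch of the standard textbook formalism, not a claim about how a many-worlds ontology should handle it): for a discrete spectrum the Born weights form a countable sum over outcomes, whereas for a continuous spectrum the delta-normalized generalized eigenstates only ever enter through an integral against a measure:

\langle a_i | a_j \rangle = \delta_{ij}, \qquad P(a_i) = |\langle a_i | \psi \rangle|^2, \qquad \sum_i P(a_i) = 1

\langle x | x' \rangle = \delta(x - x'), \qquad P(x \in A) = \int_A |\langle x | \psi \rangle|^2 \, dx, \qquad \int_{\mathbb{R}} |\langle x | \psi \rangle|^2 \, dx = 1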

Why I am not a Theist
Mitchell_Porter · 9d

Do you reason in similar fashion about whether or not you are living in a simulation?

Why abandon “probability is in the mind” when it comes to quantum dynamics?
Mitchell_Porter · 9d

I explored my skepticism about this paper in a brief dialogue with Claude... 

Posts

72 · Requiem for the hopes of a pre-AI world · 2mo · 0 comments
12 · Emergence of superintelligence from AI hiveminds: how to make it human-friendly? · 3mo · 0 comments
21 · Towards an understanding of the Chinese AI scene · 4mo · 0 comments
11 · The prospect of accelerated AI safety progress, including philosophical progress · 4mo · 0 comments
23 · A model of the final phase: the current frontier AIs as de facto CEOs of their own companies · 4mo · 2 comments
21 · Reflections on the state of the race to superintelligence, February 2025 · 5mo · 7 comments
29 · The new ruling philosophy regarding AI · 8mo · 0 comments
15 · First and Last Questions for GPT-5* (Question) · 2y · 5 comments
3 · The national security dimension of OpenAI's leadership struggle · 2y · 3 comments
25 · Bruce Sterling on the AI mania of 2023 · 2y · 1 comment