Of course, Loqi's suggestion could turn out to be less optimal than the harder-to-accept presentation.
While the approach you suggest could provide a more subjectively negative experience, the resulting cognitive dissonance could cause the utterance to gain more attention from the brain as a more aberrant occurrence among its stimuli, and as a result be deemed worthy of further analysis and consideration.
I am generally in favor of delivering notions I believe to be helpful in a manner in which they can and will be accepted. In some cases, however, others are able, and more likely, to accept a less-than-pleasant delivery. This is contingent upon the audience, of course, as well as on how much you know about your audience. In the absence of such knowledge, the gentler approach seems advisable.
I'd take this differently.
I would at least hope you are claiming that there is, in fact, a choice, whether or not the subjective experience of the moment provides any indication of that choice.
Stated differently, perhaps you are claiming that the possibility of choice exists for all people, whether or not a given person is aware of it or capable of taking advantage of it; that a person can alter him- or herself so as to gain the opportunity to choose in such situations.
Loqi's feedback seems to me to suggest that individuals who do not believe they have such a "possibility of choice" could have a more positive phenomenological experience of your assertion, and as a result be more likely to integrate the belief into their own belief set and [presumably] gain an advantage from encountering it.
That is, I read Loqi not as rejecting your assertion but as suggesting a manner by which it can be improved.
It is simply less demanding to choose a small set of ideas to support, or to oppose, than to understand both sides and perform the even more difficult reconciliation of the differentiated concepts.
For example: the individual versus society. Individuals are by definition part of the collection of people that constitutes a society, and societies do not exist except where there are individuals. The greater utility lies in choices that serve both individuals and societies to their greatest interests, but it is much easier to communicate about the importance of one over the other. The false division, and belief in the contention between the two, is the problem and the distraction rather than the solution.
If intelligence is efficient optimization across domains, then satisfying the utility of a greater set of domains requires greater intelligence. Increase the number of sides, or the complexity of the considerations, and you reduce the population that can grasp or support the initiatives or arguments, and as a result you reduce your success as a candidate. This, of course, is the difficulty of improving beyond the current steady state.
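As a toy illustration of that claim (my own sketch, not anything from the thread; `search_cost` and its parameters are hypothetical): if "success" means clearing a bar in every domain simultaneously, a blind search pays roughly multiplicatively for each domain added, so more domains demand a more capable optimizer.

```python
import random

def search_cost(num_domains, threshold=0.5, seed=0, max_tries=10**6):
    """Count random-search samples needed to find one candidate that
    scores above `threshold` in every domain at once.
    Each domain score is an independent uniform draw in [0, 1)."""
    rng = random.Random(seed)
    for tries in range(1, max_tries + 1):
        if all(rng.random() > threshold for _ in range(num_domains)):
            return tries
    return max_tries

# Averaged over many seeds, the cost grows roughly as 2**num_domains
# when threshold = 0.5: each extra domain halves the success rate.
def avg_cost(k, trials=200):
    return sum(search_cost(k, seed=s) for s in range(trials)) / trials
```

Under these toy assumptions, satisfying four domains already costs an order of magnitude more search than satisfying one, which is one way to read the point about complexity shrinking the pool of minds that can hold the whole argument.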
The simplest reason to care about Friendly AI is that we are going to be coexisting with AI, and so we should want it to be something we can live with.
I'd like to suggest that it is also important that a friendly AGI want to live with us. I'd further suggest that this is part of why efforts such as LW are important.
As wedrifid seemed to imply in the original reply, the actual discovery that "there is cognitive activity present," from the given link, is the key piece of knowledge for opening the exploration of what the self is.
Thanks for the further context.
I was originally impressed (and continue to be) by diegocaleiro's open self-presentation (awesome!) and hoped merely to provide, at best, some sense of dependable-enough structure for accelerated progress past pitfalls in which I had previously been slowed.
While the analytical ideation is pleasant, relevant, and useful, the emotive or experiential consequences seem equally relevant and vital as a catalyst that can either foster or inhibit the evolution we are attempting to partake in as sentient beings. We can choose our preferred modes or aspects, but that does not deny that our persons are broader, or that each aspect has strengths to offer and weaknesses to avoid.
A reference to further reading I should do would be likewise appreciated.
I would merely suggest, qualitative assignments aside, that it is enough to ward off the nihilistic mind states that can occur as one possible result of abandoning cached selves.
I am curious whether there is a link or further explanation you can provide to help me more easily understand your objections and why you hold them. I'm not interested in defending Descartes or his body of work, but his is one of the earliest and better-known accounts I have encountered (being relatively not-well-read) of that particular strain of thought, which was itself an important part of my formative years. Care to provide?
In my experience of similar-seeming shifts of person, what you are experiencing is the "instability" that is to become your new (and more) "stable" state. It will provide advantages and disadvantages and is, in my estimation, a more optimal but longer-term strategy for the living of life.
Regarding your concerns about the emerging new morality, I think you'll find over time that you come full circle. There are, experientially, more options before you than you previously allowed yourself; some of those options, however, are better than others. In the end, given the shared nature of existence, your own most selfish interests will bear a relationship to the greatest selfish interests of the other sentiences in that existence. There is some trickiness in that last statement, but I stand by it. As you come around this "full circle," I suggest you'll find not only that you approach your previous state in a sense, but that this state will be supported by a greater appreciation of, awareness of, and capability in how to better attain your goals.
Enjoy the exploration of your possible person states!
I am Erik Erikson. By day I currently write patents and proofs of concept in the field of enterprise software. My chosen studies included the neuro- and computer sciences, in pursuit of the understanding needed to produce generally intelligent entities of equal-to-or-greater-than-human intelligence, less our human limitations. I most distinctly began my "rationalist" development around the age of ten, when I came to doubt all truth, including my own existence. I am forever in debt to the "I think, therefore I am" idiom as my first piece of knowledge. I happened upon LW through singularity.org and appreciate the efforts here. Of particular interest to me is improved consideration of the goal I have formulated for AI (really, for any sentient entity): the manifested unification of all ideals. I was pleased to find this related to the formulation of intelligence that appears commonly accepted here: "cross-domain optimization." However, I have also been concerned for some time about a mechanical bias that may be implicit: it seems clear that a system which functions through growth (the establishment of connections) driven by correlated signals would be inherently, and of concern incorrectly, biased toward favoring the unification concept.
Regarding pieces of oneself, consider the ideas of IFS (Internal Family Systems). "Parts" can be said to attend to different concerns, and if one part can distract from the others, then an opportunity to maximize utility across dimensions may be missed. One might also suggest that attending to only one concern over time can result in a slight movement toward disintegration, as feelings about the "ignored" concerns grow increasingly strong. Integration or alignment, with every part joining a cooperative council, is often considered a goal, and personification can help some people achieve it more peaceably. I personally found that the suggestion to personify felt weird and false.