All of AlexLundborg's Comments + Replies

Open Thread March 28 - April 3, 2016

Whether or not momentary death is necessary for multiverse immortality depends on which view of personal identity is correct. According to empty individualism, it should not matter that you know you will die: you will still "survive", just without remembering having died, as if that memory had been erased.

qmotus (5y, +1): I think the point is that if extinction is not immediate, then the whole civilisation can't exploit big-world immortality to survive; every single member of that civilisation would still survive in their own piece of reality, but alone.
Meetup : First meetup in Lund

Do you mean Monday or Tuesday? :)

kotrfa (6y, 0): I replied in that thread. And I think I messaged you on Facebook; you should have my message in "Others".
kotrfa (6y, 0): I was thinking about today, Tuesday, but that seems too short notice for others. Is there a date that would suit you? For example, Wednesday the 11th next week?
Meetup : First meetup in Lund

Kotrfa never turned up, but another LWer did and we had a nice discussion! When is the next meeting? :)

kotrfa (6y, 0): I am so sorry for not showing up at the meetup. I got stuck on a train from the east for several hours. I should at least have posted here once I knew I couldn't make it. I am still really looking forward to meeting you guys. How about meeting on November 3 (Tuesday)?
Meetup : First meetup in Lund

Count me in! I'm 20, also skinny and curly-haired :)

[link] Essay on AI Safety

You write that the orthogonality thesis "...states that beliefs and values are independent of each other", whereas Bostrom writes that it states that almost any level of intelligence is compatible with almost any values. Isn't that a deviation? Could you motivate the choice of words here? Thanks.

From The Superintelligent Will: "...the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal."

My recent thoughts on consciousness

Should we value outside entities as much as we do ourselves? Why?

Nate Soares recently wrote about problems with using the word "should" that I think are relevant here, if we assume meta-ethical relativism (if there are no objective moral shoulds). I think his post "Caring about something larger than yourself" could be valuable in providing a personal answer to the question, if you accept meta-ethical relativism.

Elo (6y, 0): Again a downvote, and it's not clear why.
My recent thoughts on consciousness

The very notion of something being "out there" independent of us is itself a mental model we use to explain our perceptions.

Yes, I think that's right: the conviction that something exists in the world is also an (unconscious) judgement made by the mind, and it could be mistaken. However, when we want to explain why we have the perceptual data, and its regularities, it makes sense to attribute it to external causes, though this conviction could perhaps also be mistaken. The underpinnings of rational reasoning seem to bottom out in unconsciously fo... (read more)

eternal_neophyte (6y, 0): Solipsism is not really workable, due to changes in perceptual data that you cannot predict. Even if you're hallucinating, the data factory is external to the conscious self. So assuming an "objective reality" (whether generated by physics or by DMT) is nothing to apologize for.
Open Thread, Jun. 22 - Jun. 28, 2015

The same animation studio also made this fairly accurate and entertaining introduction to (parts of) Bostrom's argument, though I don't know what to make of their (subjective) probabilities for the possible outcomes.