Alex Beyman

linktr.ee/alexbeyman

Ah yes, the age-old struggle: "Don't listen to them, listen to me!" In Deuteronomy 4:2 Moses declares, "You shall not add to the word which I am commanding you, nor take away from it, that you may keep the commands of the Lord your God which I command you." And yet Christianity, Islam, and Mormonism all followed anyway, each adding to it.

A conspiracy theory about Jeffrey Epstein has 264 votes currently: https://www.lesswrong.com/posts/hurF9uFGkJYXzpHEE/a-non-magical-explanation-of-jeffrey-epstein

How commonly are arguments on LessWrong aimed at specific users? Sometimes, certainly. But it seems the rule, rather than the exception, that articles here dissect commonly encountered lines of thought, absent any attribution. Are they targeting "someone not in the room"? Do we need to put a face to every position?

By the by, "They're making cognitive errors" is an insultingly reductive way to characterize, for instance, the examination of value hierarchies and how awareness of them, versus unawareness, influences both our reasoning and our appraisal of our fellow man's morals.

When I tried, it didn't work. I don't know why. I agree with the premise of your article, having noticed that phenomenon in journalism myself before. I suppose when I say truth, I don't mean the same thing you do, because what you describe is selective and offered with dishonest intent.

"Saying you put the value of truth above your value of morality on your list of values is analogous to saying you put your moral of truth above your moral of values; it's like saying bananas are more fruity to you than fruits."

I'm not sure I understand your meaning here. Do you mean that truth and morality are one and the same, or that one is a subset of the other?

"Where does non-misleadingness fall on your list of supposedly amoral values such as truth and morality? Is non-misleadingness higher than truth or lower?"

Surely to be truthful is to be non-misleading...?

>"Perhaps AIs would treat humans like humans currently treat wildlife and insects, and we will live mostly separate lives, with the AI polluting our habitat and occasionally demolishing a city to make room for its infrastructure, etc."

Planetary surfaces are actually not a great habitat for AI. Earth in particular has a lot of moisture, weather, ice, mud, etc. that poses challenges for mechanical self-replication. The asteroid belt is much better suited. I hope this will mean AI and human habitats won't overlap, and that AI would not want the Earth's minerals, simply because the same minerals are available without the difficulty of entering and exiting a powerful gravity well.

I suppose I was assuming non-wrapper AI, and should have specified that. The premise is that we've created an authentically conscious AI.

>"Humans are not wrapper-minds."


Aren't we? In fact, doesn't evolution consistently produce minds which optimize for survival and reproduction? Sure, we're able to overcome mortal anxiety long enough to commit suicide. But survival and reproduction are strongly enough ingrained as instinctual goals that we're still here to talk about it, 3 billion years on.

Bad according to whose priorities, though? Ours, or the AI's? That was the point of this article: whether our interests or the AI's ought to take precedence, and whether we're being objective in deciding that.
