


It seems to me that this is related to the idea of roles. If you don't see yourself as being responsible for handling emergencies, you probably won't do anything about them, hoping someone else will. But if you do see yourself as being the person responsible for handling a crisis situation, then you're a lot more likely to do something about it, because you've taken that responsibility upon yourself.

It's a particularly nuanced response to both take that kind of responsibility for a situation and then, after carefully evaluating the options, decide that the best course is to do nothing, since inaction conflicts with that cultivated need to respond. That said, it could easily be a better choice than the alternative of making a probably-bad decision on the spur of the moment with incomplete information. Used properly, it's a level above the position of decisive but unplanned action... though on the surface, it can be hard to distinguish from the default bystander position of passing off responsibility.

I think I understand. There is something of what you describe here that resonates with my own past experience.

I myself was always much smarter than my peers; this isolated me, as I grew contemptuous of the weakness I found in others, an emotion I often found difficult to hide. At the same time, though, I was not perfect; the ease with which I was able to do many things led me to insufficient conscientiousness, and the usual failures arising from that. These failures would lead to bitter cycles of guilt and self-loathing, as I found the weakness I so hated in others exposed within myself.

Like you, I've found myself becoming more functional over time, as my time in university gives me a chance to repair my own flaws. Even so, it's hard, and not entirely something I've been able to do on my own... I wouldn't have been able to come this far without having sought, and received, help. If you're anything like me, you don't want to seek help directly; that would be admitting weakness, and at the times when you hurt the worst, you'd rather do anything, rather hurt yourself, rather die, than admit to your weakness and allow others to see how flawed you are.

But ignoring your problems doesn't make them go away. You need to do something about them. There are people out there who are willing to help you, but they can't do so unless you make the first move. You need to take the initiative in seeking help; and though it will seem like the hardest thing you could do... it's worth it.

Not necessarily. Cosmic rays are just radiation above a particular (high) energy threshold. So if it interprets everything along those lines, it's just seeing everything purely in terms of that spectrum... in other words, 'normal, uninteresting background case, free of cosmic rays'. So things that don't register high enough to be cosmic rays, like itself, parse as meaningless random fluctuations... presumably, if it were 'intelligent', it would think that it existed for no reason, as a matter of random chance, like any other case of background radiation below the cosmic-ray threshold, without losing any ability to perceive or understand cosmic rays.


As a former Objectivist, I understand the point being made.

That said, I no longer agree... I now believe that Ayn Rand made an axiom-level mistake. Existence is not Identity. To assume that Existence is Identity is to assume that all things have concrete properties, which exist and can therefore be discovered. This is demonstrably false; at the fundamental level of reality, there is uncertainty. Quantum-level effects inherent in existence preclude the possibility of absolute knowledge of all things; there are parts of reality that are actually unknowable.

Moreover, we as humans do not have absolute knowledge of things. Our knowledge is limited, as is the information we're able to gather about reality. We don't have the ability to gather all relevant information to be certain of anything, nor the luxury of postponing decision-making while we gather that information. We need to make decisions sooner than that, and we need to make them knowing that our knowledge will always be imperfect.

Accordingly, I find that a better axiom would be "Existence is Probability". I'm not a good enough philosopher to fully extrapolate the consequences of that... but I do think if Ayn Rand had started with a root-level acknowledgement of fallibility, it would've helped to avoid a lot of the problems she wound up falling into later on.

Also, welcome, new person!

Yeah, that happens too. The best argument I've gotten in support of the position is that they feel they are able to reasonably interpret the will of God through scripture, and thus instructions 'from God' that run counter to that must be false. So it's not quite the same as their own moral intuition versus a divine command, but rather their own scriptural learning used as a factor in judging the authenticity of a divine command.


This argument really isn't very good. It works on precisely none of the religious people I know, because:

A: They don't believe that God would tell them to do anything wrong.

B: They believe in Satan, who they are quite certain would tell them to do something wrong.

C: They also believe that Satan can lie to them and convincingly pretend to be God.

Accordingly, any voice claiming to be God while also telling them to do something they feel is evil must be Satan trying to trick them, and is disregarded. They actually think like that, and can quote relevant scripture to back their position, often from memory. This is probably better than a belief framework that would let them go out and start killing people if the right impulse struck them, but it's also not a worldview that can be moved by this sort of argument.

Well, that gets right to the heart of the Friendliness problem, now doesn't it? Mother Brain is the machine that can program, and she reprogrammed all the machines that 'do evil'. It is likely, then, that the first machine that Mother Brain reprogrammed was herself. If a machine is given the ability to reprogram itself, and uses that ability to make itself decide to do things that are 'evil', is the machine itself evil? Or does the fault lie with the programmer, for failing to take into account the possibility that the machine might change its utility function? It's easy to blame Mother Brain; she's a major antagonist in her timeline. It's less easy to think back to some nameless programmer behind the scenes, considering the problem of coding an intelligent machine, and deciding how much freedom to give it in making its own decisions.

In my view, Lucca is taking personal responsibility with that line. 'Machines aren't capable of evil', (they can't choose to do anything outside their programming). 'Humans make them that way', (so the programmer has the responsibility of ensuring their actions are moral). There are other interpretations, but I'd be wary of any view that shifts moral responsibility to the machine. If you, as a programmer, give up any of your moral responsibility to your program, then you're basically trying to absolve yourself of the consequences if anything goes wrong. "I gave my creation the capacity to choose. Is it my fault if it chose evil?" Yes, yes it is.

My point in posting it was that UFAI isn't 'evil', it's badly programmed. If an AI proves itself unfriendly and does something bad, the fault lies with the programmer.

This. It took a while to build that foundation, and a lot of contemplation in deciding what needed to be there... but once built, it's solid, and not given to reorganization on a whim. That's not because I'm closed-minded or anything; it's because something like a belief that the evidence provided by your own senses is valid really is fundamental to believing anything else at all. Not believing that implies not believing a whole host of other things, and develops into some really strange philosophies. As a philosophical position, this is called 'empiricism', and it's actually more fundamental than belief in only the physical world (i.e., disbelief in spiritual phenomena, 'materialism'), because you need a thing that says what evidence is considered valid before you can have a thing that says 'and based on this evidence, I conclude'.
