All of the-citizen's Comments + Replies

True! I was actually trying to be funny in (4), though apparently I need more work.

mistakes that are too often made by those with a philosophical background rather than the empirical sciences: the reasoning by analogy instead of the building and analyzing of predictive models

While there are quite a few exceptions, most actual philosophy is not done through metaphors and analogies. Some people may attempt to explain philosophy that way, while others with a casual interest in philosophy might not know the difference, but few actual philosophers I've met are silly enough not to know an analogy is an analogy. Philosophy and empirical sci... (read more)

Suffering and AIs

Disclaimer - For the sake of argument this post will treat utilitarianism as true, although I do not necessarily think that.

One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to have pain, fear, hope and pleasure of some kind. It might be reasonable to expect in such cases the unpleasant tasks might result in some form of suffering. Added to this problem is the fact that a finite lifespan and an approaching termination/shutdown ... (read more)

Killer robots with no pain or fear of death would be much easier to fight off than ones that have pain and fear of death. It's not that they'd get distracted and lose focus on fighting when they're injured or in danger; it's that they won't avoid getting injured or killed. It's a lot easier to kill someone if they don't mind it if you succeed.

That seems like an interesting article, though I think it is focused on the issue of free-will and morality which is not my focus.

Suffering and AIs

Disclaimer - Under utilitarianism suffering is an intrinsically bad thing. While I am not a utilitarian, many people are, and I will treat it as true for this post because it is the easiest approach for this issue. Also, apologies if others have already discussed this idea, which seems quite possible.

One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to have pain, fear, hope and pleasure of some kind. It might be reasonable to ex... (read more)

It does seem familiar.

Yeah I think you're right on that one. Still, I like and share his moral assumption that my-side-ism is harmful because it distorts and is often opposed to the truth in communication.

I retracted an earlier incorrect assertion and then edited to make this one instead. Not sure how that works exactly...

His gratuitous imposition of his own moral assumptions is worse.

I don't see the problem with moral assumptions, as long as they are clear and relevant. I think generally the myside effect is a force that stands against truth-seeking - I guess it's a question of definition whether you consider that to be irrational or not. People that bend the truth to suit themselves distort the information that rational people use for decision making, so I think it's relevant.

*The lack of an "unretract" feature is annoying.

[This comment is no longer endorsed by its author]
There's not a problem if he is aware of the dependency on his moral premises and discloses that dependency to his readers. I don't see evidence of either.

Yeah. Interesting that in my inbox, it is not showing as retracted.

Yes, some humans seem to have adopted this view where intelligence moves from being a tool with instrumental value to being intrinsically/terminally valuable. I often find the justification for this to be pretty flimsy, though quite a few people seem to have this view. Let's hope an AGI doesn't, lol.

Thanks for the reply. That makes more sense to me now. I agree with a fair amount of what you say. I think you'd have a sense from our previous discussions why I favour physicalist approaches to the morals of a FAI, rather than idealist or dualist, regardless of whether physicalism is true or false. So I won't go there. I pretty much agree with the rest.

EDIT> Oh just on the deep ecology point, I believe that might be solvable by prioritising species based on genetic similarity to humans. So basically weighting humans highest and other species less so based ... (read more)

I think you have an idea from our previous discussions why I don't think your physicalism, etc., is relevant to ethics.

Thanks, that's informative. I'm not entirely sure what your own position is from your post, but I agree with what I take your implication to be - that a rationally discoverable set of ethics might not be as sensible a notion as it sounds. But on the other hand human preference satisfaction seems a really bad goal - many human preferences in the world are awful - take a desire for power over others, for example. Otherwise human society wouldn't have wars, torture, abuse etc etc. I haven't read up on CEV in detail, but from what I've seen it suffers from a confusion that... (read more)

That wasn't the point I thought I was making. I thought I was making the point that the idea of tractable sets of moral truths had been sidelined rather than sidestepped... that it had been neglected on the basis of a simplification that has not been delivered. Having said that, I agree that discoverable morality has the potential downside of being inconvenient to, or unfriendly for, humans: the one true morality might be some deep ecology that required a much lower human population, among many other possibilities. That might have been a better argument against discoverable morality than the one actually presented. Most people have a preference for not being the victims of war or torture. Maybe something could be worked up from that. I've seen comments to the effect that it has been abandoned. The situation is unclear.

The stability under self-modification is a core problem of AGI generally, isn't it? So isn't that an effort to solve AGI, not safety/friendliness (which would be fairly depressing given its stated goals)? Does MIRI have a way to define safety/friendliness that isn't derivative of moral philosophy?

Additionally, many human preferences are almost certainly not moral... surely a key part of the project would be to find some way to separate the two. Preference satisfaction seems like a potentially very unfriendly goal...

Rob Bensinger (9y):
If you want to build an unfriendly AI, you probably don't need to solve the stability problem. If you have a consistently self-improving agent with unstable goals, it should eventually (a) reach an intelligence level where it could solve the stability problem if it wanted to, then (b) randomly arrive at goals that entail their own preservation, then (c) implement the stability solution before the self-preserving goals can get overwritten. You can delegate the stability problem to the AI itself. The reason this doesn't generalize to friendly AI is that this process doesn't provide any obvious way for humans to determine which goals the agent has at step (b).

For the record, my current position is that if MIRI doesn't think it's central, then it's probably doing it wrong.

But perhaps moral philosophy is important for a FAI? Like for knowing right and wrong so we can teach/build it into the FAI? Understanding right and wrong in some form seems really central to FAI?

Rob Bensinger (9y):
There may be questions in moral philosophy that we need to answer in order to build a Friendly AI, but most MIRI-associated people don't think that the bulk of the difficulty of Friendly AI (over generic AGI) is in generating a sufficiently long or sufficiently basic list of intuitively moral English-language sentences. Eliezer thinks the hard part of Friendly AI is stability under self-modification; I've heard other suggestions to the effect that the hard part is logical uncertainty, or identifying how preference and motivation are implemented in human brains. The problems you need to solve in order to convince a hostile human being to become a better person, or to organize a society, or to motivate yourself to do the right thing, aren't necessarily the same as the problems you need to solve to build the brain of a value-conducive agent from scratch.
MIRI makes the methodological proposal that it simplifies the issue of friendliness (or morality or safety) to deal with the whole of human value, rather than identifying a morally relevant subset. Having done that, it concludes that human morality is extremely complex. In other words, the payoff in terms of methodological simplification never arrives, for all that MIRI relieves itself of the burden of coming up with a theory of morality. Since dealing with human value in total is in absolute terms very complex, the possibility remains open that identifying the morally relevant subset of values is relatively easier (even if still difficult in absolute terms) than designing an AI to be friendly in terms of the totality of value, particularly since philosophy offers a body of work that seeks to identify simple underlying principles of ethics. The idea of a tractable, rationally discoverable set of ethical principles is a weaker form of, or lead into, one of the most common objections to the MIRI approach: "Why doesn't the AI figure out morality itself?".

What do you feel is bad about moral philosophy? It looks like you dislike it because you place it next to anthropomorphic thinking and technophobia.

Rob Bensinger (9y):
It's appropriate to anthropomorphize when you're dealing with actual humans, or relevantly human-like things. Someone could legitimately research issues surrounding whole brain emulations, or minor variations on whole brain emulations. Likewise, moral philosophy is a legitimate and important topic. But the bulk of MIRI's attention doesn't go to ems or moral philosophy.

I'll leave these two half-baked ideas here in case they're somehow useful:

DO UNTIL - Construct an AI to perform its utility function until an undesirable failsafe condition is met. (Somehow) make the utility function not take the failsafes into account when calculating utility (can it be made blind to the failsafes somehow? Force the utility function to exclude their existence? Make lack of knowledge about failsafes part of the utility function?) Failsafes could be every undesirable outcome we can think of, such as the human death rate exceeding X, biomass reduct... (read more)
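As a toy illustration of the first idea, a failsafe monitor can sit outside the utility function, so that the utility calculation never references the failsafe conditions. All names, thresholds, and the one-dimensional "planner" below are hypothetical stand-ins, not a workable safety design:

```python
# Toy sketch of "DO UNTIL": the agent optimizes a failsafe-blind utility
# function, while a separate monitor halts the loop the moment any
# failsafe condition is met. Thresholds are hypothetical examples.

def utility(state):
    # Failsafe-blind utility: only task progress matters.
    return -abs(state["task_progress"] - 100)

FAILSAFES = [
    lambda s: s["human_death_rate"] > 0.01,  # hypothetical threshold
    lambda s: s["biomass"] < 0.9,            # hypothetical threshold
]

def run_agent(state, max_steps=1000):
    for _ in range(max_steps):
        # The monitor, not the agent, checks the failsafes.
        if any(check(state) for check in FAILSAFES):
            return "halted by failsafe"
        # Stand-in for planning: pick the step that maximizes the
        # failsafe-blind utility.
        step = max([1, -1], key=lambda d: utility(
            {**state, "task_progress": state["task_progress"] + d}))
        state["task_progress"] += step
        if utility(state) == 0:
            return "task complete"
    return "step limit reached"
```

Of course, this sketch dodges the hard part raised in the parentheses above: a sufficiently capable optimizer could still infer the failsafes from their effects on the world, and blindness inside the utility function doesn't by itself prevent that.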

Wouldn't most AGI goals disregard property rights unless it was explicitly built in? And if it was built in, wouldn't an AGI just create a situation (e.g. progressive blackmail or deception or something) where we wanted to sell it the universe for a dollar?

The risk of course is the AI predicting that it's nested in this sort of environment and finding a way to signal to observers. Even if it's blind to the other layers it might try it just in case. What you want is to develop a way for the simulated world environment to detect a harmful intelligence explosion and send a single bit binary communication "out of the box" to indicate that it has occurred. Then you can shut it down and keep trying multiple instances until you get a success at this level of safety. I guess you can then slowly expand the ... (read more)
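The shut-down-and-retry loop described here can be sketched roughly as follows. Everything is hypothetical: `run_simulation` stands in for a boxed simulated-world run (with the detection of a harmful intelligence explosion simulated by chance), and the returned bit is the only channel out of the box:

```python
import random

# Toy sketch of the "one bit out of the box" protocol: each boxed run
# reports a single hazard bit; hazardous instances are shut down and
# the search retries with fresh instances until one stays safe.

def run_simulation(seed):
    # Stand-in for a boxed simulated-world run. Returns 1 if the
    # in-simulation monitor detected a harmful intelligence explosion
    # (modeled here as a 70% chance), else 0.
    random.seed(seed)
    return 1 if random.random() < 0.7 else 0

def search_for_safe_instance(max_tries=100):
    for seed in range(max_tries):
        hazard_bit = run_simulation(seed)
        if hazard_bit == 1:
            continue  # shut down this instance and retry
        return seed   # a run that stayed safe at this level
    return None
```

The design choice the comment is pointing at is that the one-bit channel keeps the boxed AI from smuggling out a persuasive message, at the cost of telling observers almost nothing about *why* a run went wrong.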

Thanks for the comment. I will reply as follows:

  • Knowing how things could go wrong gives useful knowledge about scenarios/pathways to avoid
  • Our knowledge of how to make things go right is not zero

My intention with the article is to draw attention to some broader non-technical difficulties in implementing FAI. One worrying theme in the responses I've gotten is a conflation between knowledge of AGI risk and building a FAI. I think they are separate projects, and that success of the second relies on comprehensive prior knowledge of the first. Apparently MIRI's approach doesn't really acknowledge the two as separate.

Thanks for the reply.

If we change the story as you describe I guess the moral of the story would probably become "investigate thoroughly". Obviously Bayesians are never really certain - but deliberate manipulation of one's own map of probabilities is unwise unless there is an overwhelmingly good reason (your hypothetical would probably be one - but I believe in real life we rarely run into that species of situation).

The story itself is not the argument, but an illustration of it - it is "a calculation of the instrumentality of various option... (read more)

Your engagement here is insincere. You argue by cherry-picking and distorting my statements. You simply ignore the explanations given and say "you haven't given justification", and then you give off-hand vague answers to my own queries and state "already answered". I'm done with this.

Not much I can think of that we can do about that, except provide a system with disincentives for harmful behaviour. What we can easily correct is the possibility of meaning well but making mistakes due to self-deception. This post attempts to examine one instance of that.

Cheers, that all seems to make sense. I wonder if the Basilisk, with its rather obvious flaws, actually provides a rather superb illustration of how memetic hazard works in practice, and in doing so provides a significant opportunity for improving how we handle it.

Thanks for the feedback.

On top of that, it actually ends in the optimal outcome.

Just to clarify, no, it doesn't. It's implied that 20 deaths are worse than 5 for the consequentialist protagonist.

The analogy is problematic ... bizarre and unrealistic example.

Thanks for the feedback. It seems people have been split on this. Others have also found the analogy problematic. On the other hand an analogy doesn't usually attempt to provide proof, but to illustrate the structure of an argument in an understandable way. I don't think it's bizarre if you thin... (read more)

What in particular was wrong with his handling of this incident? I'm not aware of all the details of his handling, so it's an honest question.

Most obviously, the Streisand effect means that any effort used to silence a statement might as well have been used to shout it from the hilltops. The Basilisk is very heavily discussed despite its obvious flaws, in no small part because of the context of being censored. If we're actually discussing a memetic hazard, that's the exact opposite of what we want. There are also some structural and community outreach issues that resulted from the effort and weren't terribly good. Yudkowsky's discussed the matter from his perspective here (warning: wall of text). ((On the upside, we don't have people intentionally discussing more effective memetic hazards in the open in contexts of developing stronger ones, nor trying to build intentional decision theory traps. There doesn't seem to be enough of a causative link to consider this a benefit to the censorship, though.))

There aren't any inherently, unchangeably, mental concepts.

From what I can observe in your position it seems like you are treating consciousness in exactly this way. For example, could you explain how it could possibly be challenged by evidence? How could it change or be refined if we say "introspection therefore consciousness"?

The very fact of introspection indicates that consciousness, by a fairly minimal definition, is existent.

I don't see how this follows. As there are a whole host of definitions of consciousness, could you explicitl... (read more)

I have put forward the existence of introspection as evidence for the existence of consciousness. It is therefore logically possible for the existence of consciousness to be challenged by the non-existence of introspection. It's not actually possible, because introspection actually exists. The empirical claim that consciousness exists is supported by the empirical evidence, like any other. (Not empirical in your gerrymandered sense, of course, but empirical in the sense of not being a priori or tautologous.)

Already answered: again, consciousness =def self-awareness; introspection =def self-awareness. Is the ability to introspect not an unusual property? Are we actually differing, apart from your higher level of vagueness?

Person B can tell what person B is thinking, as well. That is important. Who said anything about disembodied thought? So what is the actual contradiction? Why a discrete phenomenon? Is a historical association enough to make an inconsistency?

I have given a detailed explanation as to why consciousness is not an inherently mental concept. You need to respond to that, and not just repeat your claim. False. Here is the explanation again: "Both versions are naive. The explanatory process doesn't start with a perfect set of concepts... reality isn't pre-labelled. The explanatory process starts with a set of 'folk' or prima facie concepts, each of which may be retained, modified, or discarded as things are better understood. You can't start from nothing, because you have to be able to state your explanandum; you have to state which phenomenon you are trying to explain. But having to have a starting point does not prejudice the process forever, since the modification and abandonment options are available. For instance, the concept of phlogiston was abandoned, whereas the concept of the atom was modified to no longer require indivisibility. Heat is a favourite example of a reductive explanation. The concept of heat as something to be explained was retained..."

Cheers, now that we've narrowed down our differences that's some really constructive feedback. I think I intended it primarily as an illustration and assumed that most people in this context would probably already agree with that perspective, though this could be a bad assumption and it probably makes the argument seem pretty sloppy in any case. It'll definitely need refinement, so thanks.

EDIT> My reply attracted downvotes? Odd.

What part do you think was forced? So far quite a few others have said they didn't mind that part so much, and that actually the second section bothered them. I'll probably make future alterations when I have spare time.

Jungle? Didn't we live on the savannah?

LOL it was just a turn of phrase.

And forming groups for survival, it seems just as plausible that we formed groups for availability of mates.

Genetically speaking mate-availability is a component of survival. My understanding of the forces that increased group size is that they are more complex than either of these (big groups win conflicts for territory, but food availability (via tool use) and travel speed are limiting factors I believe - big groups only work if you can access a lot of food and move on before... (read more)

Yes, and I think it's a good suggestion. I think I can phrase my real objection better now. My objection is that I don't think this article gives any evidence for that suggestion. The historical storytelling is a nice illustration, but I don't think it's evidence. I don't think it's evidence because I don't expect evolutionary reasoning at this shallow a depth to produce reliable results. Historical storytelling can justify all sorts of things, and if it justifies your suggestion, that doesn't really mean anything to me. A link to a more detailed evolutionary argument written by someone else, or even just a link to a Wikipedia article on the general concept, would have changed this. But what's here is just evolutionary/historical storytelling like I've seen justifying all sorts of incorrect conclusions, and the only difference is that I happen to agree with the conclusion. If you just want to illustrate something that you expect your readers to already believe, this is fine. If you want to convince anybody you'd need a different article.

Thanks for the useful suggestion. This appears to be emerging as a consensus. I'll probably either tidy up the second section or cut it when I have time.

LOL. Well I agree with your first three sentences, but I'd also add that we systematically underestimate the costs of false beliefs because (1) at the point of deception we cannot reliably predict future instances in which the self-deceptive belief will become a premise in a decision, and (2) in instances where we make an instrumentally poor decision due to a self-deception, we often receive diminished or no feedback (we are unaware of the dead stones floating down the river).

LW appears to be mixed on the "truthiness should be part of instrumental rationality" issue.

It seems we disagree on the compartmentalising issue. I believe self-deception can't easily be compartmentalised in the way you describe because we can't accurately predict, in most cases, where our self-deception might become a premise in some future piece of reasoning. By its nature, we can't correct at the later date, because we are unaware that our belief is wrong. What's your reasoning regarding compartmentalizing? I'm interested in case I am overlook... (read more)

I guess my argument is that when people can't see an immediate utility for the truth, they can become lazy or rationalise that a self-deception is acceptable. This occurs because truth is seen as useful rather than essential or at least essential in all but the most extreme circumstances. I think this approach is present in the "truth isn't everything" interpretation of instrumental rationality. The systematised winning isn't intended to comprise this kind of interpretation, but I think the words it uses evokes too much that's tied into a problematic engagement with the truth. That's where I currently sit on the topic in any case.

Thanks for the group selection link. Unfortunately I'd have to say, to the best of my non-expert judgement, that the current trends in the field disagree somewhat with Eliezer in this regard. The 60s group selection was definitely overstated and problematic, but quite a few biologists feel that this resulted in the idea being ruled out entirely in a kind of overreaction to the original mistakes. Even Dawkins, who's traditionally dismissed group selection, acknowledged it may play more of a role than he previously thought. So it's been refined and is making... (read more)

Really? I was not aware of that trend in the field, maybe I should look into it. Well, at least I understand you now.

Well I've done Map & Territory and have skimmed through random selections of other things. Pretty early days I know! So far I've not run into anything particularly objectionable for me or conflicting with any of the decent philosophy I've read. My main concern is this truth as incidental thing. I just posted on this topic:

Ah, I think you may have gotten the wrong idea when I said truth was incidental: that a thing is incidental does not stop it from being useful and a good idea; it is just not a goal in and of itself. Fortunately, no-one here is actually suggesting active self-deception as a viable strategy. I would suggest reading Terminal Values and Instrumental Values. Truth seeking is an instrumental value, in that it is useful to reach the terminal values of whatever your actual goals are. So far as I can tell, we actually agree on the subject for all relevant purposes. You may also want to read The Tragedy of Group Selectionism.

Cheers for the comment. I think I perhaps should have made the river self-deception less deliberate, to create a greater link between it and the "winning" mentality. I guess I'm suggesting that there is a little inevitable self-deception incurred in the "systematised winning" and general "truth isn't everything" attitudes that I've run into so far in my LW experience. Several people have straight-up told me truth is only incidental in the common LWers' approach to instrumental rationality, though I can see there are a range of views.

The truth indeed is only incidental, pretty much by the definition of instrumental rationality when truth isn't your terminal goal. But surely the vast majority agree that the truth is highly instrumentally valuable for almost all well-behaved goals? Finding out the truth is pretty much a textbook example of an instrumental goal which very diverse intelligences would converge to.

I think I'd be quite interested to know what % of CFAR people believe that rationality ought to include a component of "truthiness". Anything that could help on that?

I like the exploration of how emotions interact with rationality that seems to be going on over there.

For me over-analysis would be where further analysis is unlikely to yield practically improved knowledge of options to solve the problem at hand. I'd probably treat this as quite separate from bad analysis or the information supplied by instinct and emotion. In a sense then emotions wouldn't come to bear on the question of over-analysis generally. However, I'd heartily agree with the proposition that emotions are a good topic of exploration and study becaus... (read more)

CFAR has its own private mailing list, which isn't available to individuals who haven't attended a CFAR event before. As a CFAR alumnus, though, I can ask them your questions on your behalf. If I get a sufficient response, I can summarize their insight in a discussion post. I believe CFAR alumni are 40% active Less Wrong users, and 60% not. The base of CFAR, i.e. its staff, may have a substantially different perspective from its hundreds of workshop members that compose the broader community.

Thanks for the interesting comments. I've not been on LW for long, and so far I'm being selective about which sequences I'm reading. I'll see how that works out (or will I? lol).

I think my concern with the truthiness part of what you say is that there is an assumption that we can accurately predict the consequences of deciding to hold an untrue belief. I think that's rarely the case. We are rarely given personal corrective evidence, though, because it's the nature of self-deception that we're oblivious to the fact that we've screwed up. Applying a general rule of truthiness is a far more effective approach imo.

Agreed, a general rule of truthiness is definitely a very effective approach and probably the most effective approach, especially once you've started down the path. So far as I can tell stopping halfway through is... risky in a way that never having started is not. I only recently finished the sequences myself (apart from the last half of QM). At the time of starting I thought it was essentially the age old trade off between knowledge and happy ignorance, but it appears at some point of reading the stuff I hit critical mass and now I'm starting to see how I could use knowledge to have more happiness than if I was ignorant, which I wasn't expecting at all. Which sequences are you starting with? By the way, I just noticed I screwed up on the survey results: I read the standard deviation as the range. IQ should be mean 138.2 with SD 13.6, implying 95% are above 111 and 99% above 103.5. It changes my first argument a little, but I think the main core is still sound.
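The corrected survey figures above can be sanity-checked with a quick normal-distribution computation (a minimal sketch using the reported mean and SD; both stated thresholds come out as conservative lower bounds):

```python
from statistics import NormalDist

# Reported LW survey figures: mean IQ 138.2, SD 13.6.
iq = NormalDist(mu=138.2, sigma=13.6)

frac_above_111 = 1 - iq.cdf(111)      # ~0.977, so "95% above 111" is conservative
frac_above_103_5 = 1 - iq.cdf(103.5)  # ~0.995, so "99% above 103.5" is conservative
```

Both fractions exceed the quoted 95% and 99%, so the correction holds up (111 is exactly two SDs below the mean).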

I think you've got a good point regarding having as many virtues as possible.

On the idea of perfection being dystopic, this reminds me of an argument I sometimes hear along the lines of "evil is good because without evil, good would just be normal", which I don't find very convincing. Still, I guess a society and its people should always focus on betterment of themselves, and perfection is probably better thought of as an idealised goal than some place we arrive at.

This isn't productive. As you've insisted on a long and highly critical response again, I sadly feel I need to briefly reply.

What you need to disprove is the claim that physicalists employ the concept of consciousness

No. I merely show that the core claims of physicalism and this use of consciousness are incompatible. That's what I've done. Whether some particular physicalists choose to employ some concept is a different question and a massive red herring, as I already said.

One ought to provide evidence for extraordinary claims,

Whereas ordinary clai... (read more)

They don't need the re-presentation of existing evidence. They disagree about some things, and they agree enough to be talking about the same thing. Disagreement requires commonalities, otherwise it's just miscommunication.

I didn't say you were eliminativist. I said you were shoehorning three categories into two categories. What is your response to that?

Both versions are naive. The explanatory process doesn't start with a perfect set of concepts... reality isn't pre-labelled. The explanatory process starts with a set of "folk" or prima facie concepts, each of which may be retained, modified, or discarded as things are better understood. You can't start from nothing, because you have to be able to state your explanandum; you have to state which phenomenon you are trying to explain. But having to have a starting point does not prejudice the process forever, since the modification and abandonment options are available. For instance, the concept of phlogiston was abandoned, whereas the concept of the atom was modified to no longer require indivisibility. Heat is a favourite example of a reductive explanation. The concept of heat as something to be explained was retained, but the earlier, non-reductive explanation of heat as a kind of substance was abandoned, in favour of identifying heat with molecular motion. Since molecular motion exists, heat exists, but it doesn't exist separately - dualistically - from everything else. This style of explanation is what non-eliminative physicalists, the type 2 position, are aiming at.

Your background assumptions are wrong. There aren't any inherently, unchangeably, mental concepts. If you can reduce something to physics, like heat, then it's physical. You don't know in advance what you can reduce. The different positions on the nature of consciousness are different guesses or bets on the outcome. Non-eliminative physicalism, the type 2 position, is a bet on the outcome that consciousness will be identified with some physical process.

I don't 100% follow your comment, but I find the content of those links interesting. Care to expand on that thought at all?

Sometimes we might really, actually be over-analyzing things, and what our true goals are may be better discovered by paying more attention to what System 1 is informing us of. If we don't figure this out for ourselves, it might be other rational people who tell us about this. If someone says that, how are we supposed to tell if what they're saying is:

  • someone trying to genuinely help us solve our problem(s) in a rational way, or
  • someone dismissing attempts at analyzing a problem at all?

It can only be one or the other. Now, someone might not have read Less Wrong, but that doesn't preclude them from noticing when we really are over-analyzing a problem. When someone responds like this, how are we supposed to tell if they're just strawmanning rationality, or really trying to help us achieve a more rational response? This isn't some rhetorical question for you. I've got the same concerns as you, and I'm not sure how to ask this particular question better. Is it a non-issue? Am I using confusing terms?

I think I'm broadly supportive of your approach. The only problem I can see is that most people think it's better to try to do stuff, as opposed to getting better at doing stuff. Rationality is a very generalised and very long-term approach and payoff. Still, I'd not reject your approach at this point.

Another issue I find interesting is that several people have commented recently on LW that (instrumental) rationality isn't about knowing the truth but simply achieving goals most effectively. They claim this is the focus of most LWers too. As if "truthiness" is only a tool that can even be discarded when necessary. I find that view curious.

I'm not sure they're wrong, to be honest (assuming an average cross-section of people). Rationality is an extremely long-term approach and payoff; I am not sure it would even work for the majority of people, and if it does I'm not sure if it reaches diminishing returns compared to other strategies. The introductory text (the sequences) is 9,000 pages long and the supplementary texts (Kahneman, Ariely, etc.) take it up to 11,000. I'm considered a very fast reader and it took me 3 unemployed months of constant reading to get through. For a good period of that time I was getting a negative return; I became a worse person. It took a month after that to end up net positive. I don't want to harp on about unfair inherent advantages, but I just took a look at the survey results from last year and the lowest IQ was 124.6. This stuff could be totally ineffective for average people and we would have no way of knowing. Simply being told the best path for self-improvement or effective action by someone who was a rationalist, or just someone who knows what they're doing, a normal expert in whatever field, may well be more optimal for a great many people. Essentially data-driven life coaching. I can't test this hypothesis one way or the other without attempting to teach an average person rationalism, and I don't know if anyone has done that, nor how I would find out if they had.

So far as instrumental rationality not being at its core about truth, to be honest I broadly agree with them. There may be a term in my utility function for truth, but it is not a large term, not nearly so important as the term for helping humanity or the one for interesting diversions. I seek truth not as an end in itself, but because it is so damn useful for achieving other things I care about. If I were in a world where my ignorance would save a life with no downside while my knowledge had no long-term benefit, then I would stay ignorant. If my ignorance was a large enough net benefit to me and others, I would keep it.

Cheers for the mention. I still haven't worked out whether Divergent is meant to be a dystopia or a utopia (somewhere in between, I think?). It's an interesting world.

I think it's dystopic that they see the virtues as rivalrous instead of cooperative (wouldn't you want someone to have as many virtues as possible, and to 'graduate' from various groups?). The post-apocalypse part is hard to measure: less alienation, but also less trade. I would suggest, though, that a real teen dystopia is one in which everything is perfect and you are not needed, and so the existence of an obvious defect that you can change (and become important by doing so) seems like a component of a teen utopia.

Yes, that's a fairly good point, and I don't know any easy way around it either. Looking at the worlds of business, government, politics etc., measurement would come down to fairly subjective ideas about moral goodness.

I suppose you could formulate an approach along the lines of experimental psychology, where you could deliberately design experiments with clear-cut good/bad group outcomes. So get a bunch of people to be leaders in an experiment where their goal was to minimise their group members (including themselves) getting hit in the head with something unpleasan... (read more)

:-( I'm disappointed at this outcome. I think you're mentally avoiding the core issue here - but I guess my saying so is not going to achieve much. I'll answer some of your points quickly and make this my last post in this subthread.

What you need is evidence that monists don't or can't or shouldn't believe in consciousness or subjectivity or introspection. Evidence that dualists do is not equivalent.

You're twisting my claim. Someone can't disprove a pure concept or schema - asking them to do so is a red herring. Instead one ought to prove ra... (read more)

I didn't say anything about disproving a concept. What you need to disprove is the claim that physicalists employ the concept of consciousness. That is not the concept itself. One ought to provide evidence for extraordinary claims. In order to support your claim about consciousness, you have made an identical claim about subjectivity, which is equally in need of support, and equally unsupported. That is going in circles. What is the second claim even asserting? That "subjective" is a term used by dualists? Yes. That it is only used by dualists? No. I've lost count of the number of physicalists who have informed me of the "fact" that morality is subjective... See your own claims that the MENTAL can be explained physically, below. How do you know? As it happens, the meaning of "subjective" is closer to "private mental event" than it is to "non-physical mind stuff". You haven't really argued against that, since vague claims that the two terms have a common origin, or are used by the same people, don't establish synonymity. As opposed to what? "Subjective" has a primarily epistemological meaning... you know that, right? Epistemology is largely orthogonal to ontology... you know that, right? If your computer tells you it is low on memory, is that not empirical? In any case, coming up with an idiosyncratic definition of "empirical" that is narrower than the definition actual physicalists and scientists use proves nothing. !!! There are no thoughts happening to you? Whatever that means. No, dualism is not having different categories, or philosophers would be arguing about Pepsi-Coke dualism. Dualism is about ONTOLOGICAL categories. I would summarize that as hopelessly vague. You need to argue that point. I can't see any connection at all. Define the subject as the perceiver of states of affairs external to itself (like the observer in physics)... where is the immaterial mind there? I've never heard them do that. There are reasons why one wouldn't expect a combination o

Ok thanks for this comment.

studying philosophy for 35 years

Stealthy appeal to authority, but ok. I can see you're a good philosopher, and I wouldn't seek to question your credibility as one, but I do wish to call this particular position into question, and I hope you'll come with me on this :-)

Who told you that introspection implies separation of mind and body?

I wrote on this topic at uni, but you'll have to forgive me if I haven't got proper sources handy...

"The sharp distinction between subject and object corresponds to the distinction, i... (read more)

I am trying not to appeal to authority. I like unconventional claims. I also like good arguments. I am trying to get you to give a good argument for your unconventional claim. Both. Well, the claim that consciousness is ontologically fundamental is a dualist/idealist claim. The claim that consciousness exists at all isn't. You don't seem to put much weight on the qualification "ontologically fundamental". What you need is evidence that monists don't or can't or shouldn't believe in consciousness or subjectivity or introspection. Evidence that dualists do is not equivalent. There's an article on SEP. Where did you see the definition? In any case, introspection is widely used in psychology. There are plenty, because of the way it is self-awareness or higher-order thought. Its use in dualism doesn't counteract that... particularly as it is not exclusive of its use in physicalism. Says who? You haven't demonstrated that any concepts are inherently dualist, and physicalists clearly do use terms like consciousness. Here's an experiment: stand next to someone. Without speaking, think about something hard to guess. Ask them what it is. If they don't know, you have just proved you have private thoughts, of which you are aware.

I am surprised that a significant group of people think that rationality is inclusive of useful false beliefs. Wouldn't we call LW an effectiveness forum, rather than a rationalist forum, in that case?

That basically means that sometimes the person who seeks the truth doesn't win.

I think you're reading too much into that one quite rhetorical article, but I acknowledge he prioritises "winning" quite highly. I think he ought to revise that view. Trying to win with false beliefs risks not achieving your goals while remaining oblivious to that fact. Lik... (read more)

Often they use “instrumental rationality” for that meaning and “epistemic rationality” for the other one. Searching this site for epistemic instrumental returns some relevant posts.

Agreed, but I think they'd have some correlation, and I strongly suspect their absence would predict corruptibility.
