All of shminux's Comments + Replies

Thoughtfully engaging with the existing body of literature might help. Show that you understand the claims, the counter-claims, the arguments for and against. Show that your argument is novel and interesting, not something that has been already put forward and critiqued numerous times. Basically, whatever makes a good scientific paper.

Isn't the field oversaturated? 

I think not. Maybe circuits-style mechanistic interpretability is though. I generally wouldn't try dissuading people from getting involved in research on most AIS things. 

Are you saying that outside experts were better at understanding potential consequences in these cases? I have trouble believing it.

Why though? How does understanding the physics that makes nukes work help someone understand their implications? Game theory seems a much better background than physics to predict the future in this case. For example, the idea of mutually assured destruction as a civilizing force was first proposed by Wilkie Collins, an English novelist and playwright.

Other than the printing press, do you have other members of the reference class you are constructing, where outside holistic experts are better at predicting the consequences of a new invention than the inventors themselves?

Every other important technological breakthrough. The Internet and nuclear weapons are specific examples if you want any.

the question of what is actually moral, beyond what you have been told is moral

that is what a moral realist would say

To say there is a question is not to insist it has an answer.

Like most things, it is sometimes helpful, sometimes harmful, sometimes completely benign, depending on the person, the type, the amount and the day of the week. There is no "consensus" because the topic is so heterogeneous. What is your motivation for asking?

Note that if you have a little bit extra to spend, you can outsource some of the dimensions to experts. For example, those with a sense of style can offer you options you'd not have thought of yourself. The same applies to functionality and comfort (different experts though).

Exterminating humans can be done without acting on humans directly. We are fragile meatbags, easily destroyed by an inhospitable environment. For example:

  • Raise CO2 levels to cause a runaway greenhouse effect (hard to do quickly though).
  • Use up enough oxygen in the atmosphere to make breathing impossible, through some runaway chemical or physical process.

There have been plenty of discussions on "igniting the atmosphere" as well.

I am not confidently claiming anything, not really an expert... But yeah, I guess I like the way you phrased it. The more disparity there is in intelligence, the less extra noise matters. I do not have a good model of it though. Just feels like more and more disparate dangerous paths appear in this case, overwhelming the noise.

sudo -i · 7d
Fair enough! For what it’s worth, I think the reconstruction is probably the more load-bearing part of the proposal.

If a plan is adversarial to humans, the plan's executor will face adverse optimization pressure from humans, and adverse optimization pressure complicates error correction.

I can see that working when the entity is at the human level of intelligence or less. Maybe I misunderstand the setup, and this is indeed the case. I can't imagine that it would work on a superintelligence...

sudo -i · 7d
Is your claim that the noise-borne asymmetric pressure away from treacherous plans disappears in above-human intelligences? I could see it becoming less material as intelligence increases, but the intuition should still hold in principle.

I thought it was a sort of mundane statement that morality is a set of evolved heuristics that make cooperation rather than defection possible, even when it is ostensibly against the person's interests in the moment. 

Basically, a resolution of Parfit's hitchhiker problem is to introduce morality into the setup: it is immoral not to pick up a dying hitchhiker, and it is dishonorable to renege on the promise to pay. If you dig into the decision-theoretic logic of it, you can figure out that in a repeated Parfit's hitchhiker setup you are better off picking up/paying up, but humans are not great at that, so evolutionarily we ended up with morality as a crutch.
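The decision-theoretic logic can be sketched with a toy simulation (all payoff numbers here are invented for illustration): a driver who remembers whether a hitchhiker paid last time stops picking up renegers, so in the repeated setup the payer comes out far ahead.

```python
# Toy payoffs (made-up numbers): being rescued is worth far more
# than the fee; being left in the desert is catastrophic.
RESCUE_VALUE = 1000
FEE = 100
DESERT_COST = -1000

def play_rounds(pays_up: bool, rounds: int) -> int:
    """Repeated Parfit's hitchhiker: the driver remembers whether
    this hitchhiker paid last time, and only picks up payers."""
    total = 0
    reputation_good = True  # the driver's memory of the hitchhiker
    for _ in range(rounds):
        if reputation_good:
            total += RESCUE_VALUE
            if pays_up:
                total -= FEE
            else:
                reputation_good = False  # reneged: never rescued again
        else:
            total += DESERT_COST
    return total

payer = play_rounds(True, 10)    # rescued every round, pays every round
reneger = play_rounds(False, 10) # rescued once for free, then left to die
print(payer, reneger)  # 9000 -8000
```

Paying in every round dominates as soon as the game repeats, which is the sense in which morality works as a crutch for agents who cannot do the decision theory explicitly.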

Evolutionary, and other naturalistic accounts, aren't quite a slam dunk, because they leave the open question -- the question of what is actually moral, beyond what you have been told is moral -- open. A society might tell its members to pillage and enslave other societies, and that would be good for the society, but it can still be criticised from a universalistic perspective. The question of what is the prevailing, de facto morality is different to the question of what is the best-adapted form of the prevailing morality, given the constraints a society is under. But that question is itself different to the question of the ideal morality without any material constraints -- though note that such a morality could be a "luxury belief", an unimplementable ideal.

Brittleness: Since treacherous plans may require higher precision than benign plans, treacherous plans should be more vulnerable to noise.

I wonder where this statement is coming from? I'd assume the opposite, most paths lead to bad outcomes by default, making a plan work as intended is what requires higher precision.

sudo -i · 7d
"Most paths lead to bad outcomes" is not quite right. For most plan specification languages (let's say human-developed, but this is not a crux), most syntactically valid plans in that language would not substantially permute the world state when executed.

I'll begin by noting that over the course of writing this post, the brittleness of treacherous plans became significantly less central. However, I'm still reasonably convinced that the intuition is sound. If a plan is adversarial to humans, the plan's executor will face adverse optimization pressure from humans, and adverse optimization pressure complicates error correction.

Consider the case of a sniper with a gun that is loaded with 50% blanks and 50% lethal bullets, such that the ordering of the blanks and lethals is unknown to the sniper. Let's say his goal is to kill a person on the enemy team. If the sniper is shooting at an enemy team equipped with counter-snipers, he is highly unlikely to succeed (<50%); in fact, he is quite likely to die. Without the counter-snipers, the fact that his gun is loaded with 50% blanks suddenly becomes less material. He could always just take another shot.

I claim that our world resembles the world with counter-snipers. The counter-snipers in the real world are humans who do not want to be permanently disempowered.
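The sniper intuition can be checked with a quick Monte Carlo sketch. All the probabilities here are invented for illustration: half the rounds are blanks, a live round only kills with probability 0.6, and each miss gives any counter-snipers a 0.6 chance of eliminating the shooter before his next shot.

```python
import random

def kill_prob(counter_snipers: bool, trials: int = 100_000,
              shots: int = 10) -> float:
    """Monte Carlo estimate of the sniper's chance of a kill.
    All numbers are made up: 50% of rounds are blanks, a live round
    hits with probability 0.6, and any miss lets counter-snipers
    (if present) locate and eliminate him with probability 0.6."""
    random.seed(0)
    kills = 0
    for _ in range(trials):
        for _ in range(shots):
            if random.random() < 0.5 and random.random() < 0.6:
                kills += 1          # live round, on target
                break
            if counter_snipers and random.random() < 0.6:
                break               # sniper located and eliminated
    return kills / trials

print(kill_prob(counter_snipers=True))   # roughly 0.42: likely fails
print(kill_prob(counter_snipers=False))  # roughly 0.97: just keep shooting
```

With counter-snipers present, the noise in the gun compounds with the adverse optimization pressure; without them, the sniper can retry until the noise washes out.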

If someone says "I believe that the probability of cryonic revival is 7%", what useful information can you extract from it, beyond "this person has certain beliefs"? Of course, if you consider them an authority on the topic, you can decide whether 7% is enough for you to sign up for cryonics. Or maybe you know them to be well calibrated on a variety of subjects they have expressed probabilistic views on, including topics with so many unknowns that they would need some special ineffable insight to be well calibrated. I am skeptical that there is a reference class like this that includes cryonic revival, where one can be considered well calibrated.

It is, at the very least, interesting that people signed up for cryonics tend to give lower estimates for the probability of future revival than the general population. This may give useful insight into the state of the field ("If you haven't looked into it, the odds are probably worse than you think."), into variance in human decision making ("How much do you value increased personal longevity, really?"), and into how the field should strive to educate, market, and grow.

It could also be interesting and potentially insightful to see how those numbers have changed over time. Even if the numbers themselves are roughly meaningless, any trends in them may reflect advancement of the field, or better marketing, or change in the population signing up or considering doing so. If I had strong reason to think that there were encouraging trends in odds of revival, as well as cost and public acceptance, that would increase my odds of signing up. After all, under most non-catastrophic-future scenarios, and barring personal disasters likely to prevent preservation anyway, I'm much more likely to die in the 2050s-2080s than before that, and be preserved with that decade's technologies, which means compounding positive trends vs. static odds can make a massive difference to me.

OTOH, if we're not seeing such improvement yet but there's reason to think we will, then waiting a few years could greatly reduce my costs (relative to early adopters) without dramatically increasing my odds of dying before signing up. (If we're really lucky and sane in the coming decades there's a small chance preservation of some sort will be considered standard healthcare practice by the time I die, but I don't put much weight on that.)
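The compounding point is easy to see with toy arithmetic (every number here is invented): even a modest annual improvement in revival odds dwarfs a static estimate over the decades before preservation.

```python
# Toy illustration (all numbers invented): compare static revival odds
# against odds that compound a few percent per year until preservation.
static_odds = 0.05
annual_improvement = 1.07        # hypothetical 7%/year improvement
years_until_preservation = 40

trending_odds = min(1.0, static_odds * annual_improvement ** years_until_preservation)
print(f"static: {static_odds:.2f}, trending: {trending_odds:.2f}")
```

Under these made-up assumptions the trending estimate ends up more than ten times the static one, which is why the trajectory of the field can matter more than any present-day point estimate.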

Clarke's quote is apt, but the rest of the article does not hold all that well together. All you can say about cryonics is that it arrests the decay at the cost of destroying some structures in the process. Whether what is left is enough for eventual reversal, whether biological or technological, is a huge unknown whose probability you cannot reasonably estimate at this time. All we know is that the alternative (natural decomposition) is strictly worse. If someone gives you a concrete point estimate probability of revival, their estimate is automatically untrustworthy. We do not have anywhere close to the amount of data we need to make a reasonable guess.

Your comment creates a misleading impression of my article. Nowhere do I say experts can give a point probability of success. On the contrary, I frequently reject that idea. I also find it silly when people say the probability of AI destroying humans is 20%, or 45%, or whatever. You don't provide any support for the claim that "the rest of the article doesn't hold all that well together", so I'm unable to respond usefully.
This goes strongly against probabilistic forecasting. It seems a wrong principle to me.

Eliezer discussed it multiple times, quite recently on Twitter and on various podcasts. Other people did, too. 

Yes, agents whose inner model is counting possible worlds, assigning probabilities and calculating expected utility can be successful in a wider variety of situations than someone who always picks 1. No, thinking like "an entity that 'acts like they have a choice'" does not generalize well, since "acting like you have a choice" leads you to CDT and two-boxing.

You don't know enough to accurately decide whether there is a high risk of extinction. You don't know enough to accurately decide whether a specific measure you advocate would increase or decrease it. Use epistemic modesty to guide your actions. Being sure of something you cannot derive from first principles, but only from parroting select other people's arguments, is a good sign that you are not qualified.

One classic example is the environmentalist movement accelerating anthropogenic global climate change by being anti-nuclear energy. If you think you are smarter now about AI dangers than they were back then about climate, it is a red flag.

But AI doomers do think there is a high risk of extinction. I am not saying a call to violence is right: I am saying that not discussing it seems inconsistent with their worldview.

I suggest (well, my partner does) including those you like as a part of a diverse vegan diet. Oat milk is nominally processed and enriched, but it is not a central example of "processed foods" by any means. There are many vegan options that are enriched with vitamins and minerals to cover nearly everything that humans get from eggs, milk products and meats, most people can find something they like with a bit of trying. Of course, there are always those who are allergic, sensitive, unable to process well, or supertasters that need something special. I am not talking about these cases.

None of this is relevant. I don't like the "realityfluid" metaphor, either. You win because you like the number 1 more than number 2, or because you cannot count past 1, or because you have a fancy updateless model of the world, or because you have a completely wrong model of the world which nonetheless makes you one-box. You don't need to "act like you have a choice" at all. 

The difference between an expected utility maximizer using updateless decision theory and an entity who likes the number 1 more than the number 2, or who cannot count past 1, or who has a completely wrong model of the world which nonetheless makes it one-box, is that the expected utility maximizer wins in scenarios outside of Newcomb's problem, where you may have to choose $2 instead of $1, or count quantities larger than 1, or believe true things. Similarly, an entity that "acts like they have a choice" generalizes well to other scenarios, whereas these other possible entities don't.

There is no "ought" or "should" in a deterministic world of perfect predictors. There is only "is". You are an algorithm and Omega knows how you will act. Your inner world is an artifact that gives you an illusion of decision making. The division is simple: one-boxers win, two-boxers lose, the thought process that leads to the action is irrelevant.
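That division can be made concrete by enumerating the possible worlds with the standard Newcomb payoffs (the `accuracy` parameter is an addition here, generalizing the perfect predictor to an imperfect one):

```python
def newcomb_ev(one_box: bool, accuracy: float = 1.0) -> float:
    """Enumerate the possible worlds of Newcomb's problem and return
    expected payoff. Standard payoffs: $1M in the opaque box iff the
    predictor foresaw one-boxing; $1,000 always in the clear box."""
    ev = 0.0
    for predicted_one_box, p in [(True, accuracy if one_box else 1 - accuracy),
                                 (False, 1 - accuracy if one_box else accuracy)]:
        opaque = 1_000_000 if predicted_one_box else 0
        payoff = opaque if one_box else opaque + 1_000
        ev += p * payoff
    return ev

print(newcomb_ev(one_box=True))   # 1000000.0 with a perfect predictor
print(newcomb_ev(one_box=False))  # 1000.0
```

With a perfect (or even merely decent) predictor, the enumeration says one-boxers walk away richer, regardless of the thought process that got them there.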

One-boxers win because they reasoned in their head that one-boxers win because of updateless decision theory or something so they "should" be a one-boxer. The decision is predetermined but the reasoning acts like it has a choice in the matter (and people who act like they have a choice in the matter win.) What carado is saying is that people who act like they can move around the realityfluid tend to win more, just like how people who act like they have a choice in Newcomb's problem and one-box in Newcomb's problem win even though they don't have a choice in the matter.

I addressed a general question like that in 

Basically, guardrails exist for a reason, and you are generally not smart enough to predict the consequences of removing them. This applies to most suggestions of the form "why don't we just <do some violent thing> to make the world better". There are narrow exceptions where breaking a guardrail has actual rather than imaginary benefits, but finding them requires a lot of careful analysis and modeling.

Isn't preventing the extinction of the human race one of those exceptions?

My partner is vegan, and it seems like there is nothing special one needs to do to stay healthy, just eat everything (vegan) in moderation, like veggies, legumes, fruits, nuts etc. Most processed products like oat milk, soy milk, impossible meat, beyond meat, daiya cheese are enriched with whatever supplements are needed already unless one is specifically susceptible to some deficiencies.

Sounds like you're suggesting eating those processed, fortified foods? Lots of people avoid those or just don't like them, so knowing they're valuable is important information.

Individual humans are not aligned at all, see "power corrupts". Human societies are somewhat aligned with individual humans, in the sense that they need humans to exist and keep the society going, and those "unaligned" disappear pretty quickly. I do not see any alignment difference between totalitarian and democratic regimes, if you measure alignment by the average happiness of society. I don't disagree that human misalignment has only moderate effects because of various limits on their power.

Something is very very hard if we see no indication of it happening naturally. Thus FTL is very very hard, at least without doing something drastic to the universe as a whole... which is also very very hard. On the other hand,

 "hacking" the human brain, using only normal-range inputs (e.g. regular video, audio), possible, for various definitions of hacking and bounds on time and prior knowledge

is absolutely trivial. It happens to all of us all the time to various degrees, without us realizing it. Examples: falling in love, getting brainwashed, getting...

Max H · 9d
Not sure what you mean by "happening naturally". There are lots of inventions that are the result of human activity which we don't observe anywhere else in the universe - an internal combustion engine or a silicon CPU do not occur naturally, for example. But inventing these doesn't seem very hard in an absolute sense.

Yes, and I think that puts certain kinds of brain hacking squarely in the "possible" column. The question is then how tractable, and to what degree is it possible to control this process, and under what conditions. Is it possible (even in principle, for a superintelligence) to brainwash a randomly chosen human just by making them watch a short video? How short?

I think it's a very useful perspective, sadly the commenters do not seem to engage with your main point, that the presentation of the topic is unpersuasive to an intelligent layperson, instead focusing on specific arguments.

The focus of the post is not on this fact (at least not in terms of the quantity of written material). I responded to the arguments made because they comprised most of the post, and I disagreed with them. If the primary point of the post was "The presentation of AI x-risk ideas results in them being unconvincing to laypeople", then I could find reason in responding to this, but other than this general notion, I don't see anything in this post that expressly conveys why (excluding troubles with argumentative rigor, and the best way to respond to this I can think of is by refuting said arguments).
There is, of course, no single presentation, but many presentations given by many people, targeting many different audiences. Could some of those presentations be improved? No doubt. I agree that the question of how to communicate the problem effectively is difficult and largely unsolved. I disagree with some of the specific prescriptions (i.e. the call to falsely claim more-modest beliefs to make them more palatable for a certain audience), and the object-level arguments are either arguing against things that nobody[1] thinks are core problems[2] or are missing the point[3].

1. Approximately.

2. Wireheading may or may not end up being a problem, but it's not the thing that kills us. Also, that entire section is sort of confused. Nobody thinks that an AI will deliberately change its own values to be easier to fulfill; goal stability implies the opposite.

3. Specific arguments about whether superintelligence will be able to exploit bugs in human cognition or create nanotech (which... I don't see any arguments against here, except for the contention that nothing was ever invented by a smart person sitting in an armchair, even though of course an AI will not be limited in its ability to experiment in the real world if it needs to) are irrelevant. Broadly speaking, the reason we might expect to lose control to a superintelligent AI is that achieving outcomes in real life is not a game with an optimal solution the way tic-tac-toe is, and the idea that something more intelligent than us will do better at achieving its goals than other agents in the system should be your default prior, not something that needs to overcome a strong burden of proof.

Did your model change in the last 6 months or so, since the GPTx takeover? If so, how? Or is it a new model? If so, can you mentally go back to pre-GPT-3.5 and construct the model then? Basically, I wonder which of your beliefs changed since then.

Your question seems to focus mainly on my timeline model and not my alignment model, so I shall focus on explaining how my model of the timeline has changed. My timeline shortened from about four years (mean probability) to my current timeline of about 2.5 years (mean probability) since the GPT-4 release. This was because of two reasons:

  • A gut-level update on GPT-4's capability increases: we seem quite close to human-in-the-loop RSI.
  • A more accurate model for bounds on RSI. I had previously thought that RSI would be more difficult than I think it is now.

The latter is more load-bearing than the former, although my predictions for how soon AI labs will achieve human-in-the-loop RSI create an upper bound on how much time we have (assuming no slowdown), which is quite useful when making your timeline.

Well, if we only have one try, extra time does not help, unless alignment is only an incremental extra on AI, and not a comparably hard extra effort. If we have multiple tries, yes, there is a chance. I don't think that at this point we have enough clue as to how it is likely to go. Certainly LLMs have been a big surprise.

I think it might be useful to consider the framing of being an embedded agent in a deterministic world (in Laplace's demon sense). There is no primitive "should", only an emergent one. The question to ask in that setup is "what kind of embedded agents succeed, according to their internal definition of success?" For example it is perfectly rational to believe in God in a setup in a situation where this belief improves your odds of success, for some internal definition of success. If one's internal definition of success is different, fighting religious dogma...

Are you positing that the argument "we only have one try to get it right" is incorrect? Or something else?

Cole Wyeth · 14d
Not really. To be clear, I am criticizing the argument Eliezer tends to make. There can be flaws in that argument and we can still be doomed. I am saying his stated confidence is too high because even if alignment is as hard as he thinks, A.I. itself may be harder than he thinks, and this would give us more time to take alignment seriously. In the second scenario I outlined (say, scenario B) where gains to intelligence feed back into hardware improvements but not drastic software improvements, multiple tries may be possible. On the whole I think that this is not very plausible (1/3 at most), and the other two scenarios look like they only give us one try. 

Not "CDT does not make sense", but any argument that fights a hypothetical such as "the predictor knows what you will do" is silly. EDT does that sometimes. I don't understand FDT (not sure anyone does, since people keep arguing about what it predicts), so maybe it fares better. Two-boxing in a perfect predictor setup is a classic example. You can change the problem, but it will not be the same problem. The 11-doses outcome is not a possibility in the Moral Newcomb's. I've been shouting into the void for a decade that all you need to do is enumerate the worlds, assign pro...

A one-paragraph summary to start your post would really be helpful. A long and convoluted story without an obvious carrot at the end is not a way to invite engagement.

Ivan Ordonez · 17d
Thank you for the suggestion! I have added a one-paragraph summary at the start. I hope this improves things a bit.

I assume you are not actually trying to save money or energy this way, since the savings if any would be minuscule, but are doing a calculation for fun. In that case a simple rule of thumb is likely to give you all the savings you want, such as closing the door whenever the time is indeterminate and/or longer than, say, 10 seconds in expectation.
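That rule of thumb falls out of a toy break-even calculation (every number here is invented): compare the steady loss while the door stands open against the cost of one extra open/close air-exchange cycle.

```python
# Toy model with invented numbers: is it cheaper to hold the fridge
# door open for t seconds, or to close it now and reopen later?
AIR_EXCHANGE_COST_J = 4_000   # full air swap from one open/close cycle
OPEN_LOSS_W = 300             # steady loss while the door stands open

def hold_open_cheaper(seconds: float) -> bool:
    """True if holding the door open for `seconds` wastes less energy
    than closing it now and opening it again later."""
    return OPEN_LOSS_W * seconds < AIR_EXCHANGE_COST_J

threshold = AIR_EXCHANGE_COST_J / OPEN_LOSS_W
print(f"break-even: about {threshold:.0f} seconds")
```

With these made-up parameters the break-even point is on the order of ten seconds, which is why a fixed "close it past ~10 seconds" heuristic captures essentially all the (tiny) savings.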

Ah, thank you, that makes sense. I agree that we definitely need some opaque entity to do these two operations. Though maybe not as opaque as magic, unless you consider GPT-4 magic. As you say, "GPT-4 can do all of the magic required in the problem above." In which case you might as well call everything an LLM does "magic", which would be fair, but not really illuminating.

GPT-4 analysis, for reference:

One possible decision tree for your problem is:

graph TD
A[Will it rain?] -->|Yes| B[Throw party inside]
A -->|No| C[Throw party outside]
B --> D[Enjoym...

I probably should have listened to the initial feedback on this post along the lines that it wasn't entirely clear what I actually meant by "magic" and was possibly more confusing than illuminating, but, oh well. I think that GPT-4 is magic in the same way that the human decision-making process is magic: both processes are opaque, we don't really understand how they work at a granular level, and we can't replicate them except in the most narrow circumstances. One weakness of GPT-4 is it can't really explain why it made the choices it did. It can give plausible reasons why those choices were made, but it doesn't have the kind of insight into its motives that we do.

I am confused as to what work the term "magic" does here. Seems like you use it for two different but rather standard operations: "listing possible worlds" and "assigning utility to each possible world". Is the "magic" part that we have to defer to a black-box human judgment there?

Short answer, yes, it means deferring to a black-box. Longer answer, we don't really understand what we're doing when we do the magic steps, and nobody has succeeded in creating an algorithm to do the magic steps reliably. They are all open problems, yet humans do them so easily that it's difficult for us to believe that they're hard. The situation reminds me back when people thought that object recognition from images ought to be easy to do algorithmically, because we do it so quickly and effortlessly. Maybe I'm misunderstanding your specific point, but the operations of "listing possible worlds" and "assigning utility to each possible world" are simultaneously "standard" in the sense that they are basic primitives of decision theory and "magic" in the sense that we haven't had any kind of algorithmic system that was remotely capable of doing these tasks until GPT-3 or -4.
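One way to see the division of labor: the standard expected-utility skeleton is a few lines, while the two "magic" steps are passed in as opaque black boxes (a human judge, or perhaps an LLM). The party example and all its numbers below are made up.

```python
from typing import Callable

def decide(actions: list[str],
           list_possible_worlds: Callable[[str], list[tuple[float, str]]],
           assign_utility: Callable[[str], float]) -> str:
    """Standard decision-theory skeleton: pick the action with the
    highest expected utility. The two 'magic' steps are black-box
    callables: one maps an action to (probability, world) pairs,
    the other scores a world."""
    def expected_utility(action: str) -> float:
        return sum(p * assign_utility(world)
                   for p, world in list_possible_worlds(action))
    return max(actions, key=expected_utility)

# Toy black boxes for the party-planning example (invented numbers):
worlds = lambda a: [(0.3, f"{a}, rain"), (0.7, f"{a}, sun")]
utility = lambda w: {"inside, rain": 5, "inside, sun": 3,
                     "outside, rain": 0, "outside, sun": 10}[w]
print(decide(["inside", "outside"], worlds, utility))  # outside
```

Everything algorithmically easy lives in `decide`; everything hard (and, until recently, beyond any algorithm) is hidden inside `worlds` and `utility`.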

I guess even without symmetry, if one assumes finite interaction time and nearest-neighbor-only interaction, an analog of the light cone emerges from these two assumptions. As in, the Nth neighbor is unaffected until the time Nt, where t is the characteristic interaction time. But I assume you are claiming something much less trivial than that.
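That emergent light cone is easy to see in a toy one-dimensional lattice with nearest-neighbor-only updates (a minimal sketch of the idea, not anything from the post under discussion):

```python
def propagate(steps: int, size: int = 11) -> list[list[int]]:
    """Nearest-neighbor-only update on a 1D lattice: a site becomes
    disturbed only if it or an adjacent site was already disturbed.
    A perturbation starts in the middle and can therefore spread at
    most one site per time step."""
    state = [0] * size
    state[size // 2] = 1
    history = [state[:]]
    for _ in range(steps):
        state = [1 if any(state[max(0, i - 1):i + 2]) else 0
                 for i in range(size)]
        history.append(state[:])
    return history

for row in propagate(3):
    print(row)
# The N-th neighbor stays at 0 until step N: an emergent "light cone".
```

No symmetry is assumed anywhere; locality plus a finite interaction time per step is enough to bound the propagation speed.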

I'm wondering if you are reinventing lattice waves, phonons, and maybe even phase transitions in the Ising model.

Phase transitions are definitely on the todo list of things to reinvent. Haven't thought about lattice waves or phonons; I generally haven't been assuming any symmetry (including time symmetry) in the Bayes net, which makes such concepts trickier to port over.

I'm sure there is a sweet spot. Having 5 different definitions of reality is not it.

OP is primarily describing different things that people mean by "existing", not prescribing them. 
That's relative to your concerns. I could add a sixth. There's something to be said for ontological parsimony, and there's something to be said for explanatory comprehensiveness. They are both values, so there is no completely objective resolution. I could add that, even if you are not interested in social constructs, like money or morals, they are interested in you.

You can discuss most topics without bringing the notion of reality into the argument. Replace "true" with "accurate", where "accurate" relates to predictions a model makes. Then all your reality zoo collapses into that one point.

Jim Pivarski · 22d
If you replace "true" with "accurate," what does "accurate" mean? I would have thought that "accurate" means that the distance between the model result and the true result is small, so it contains a notion of truth and a notion of distance.
If you have a narrower definition of truth, you can do less with it.

I definitely agree with that, and there is a clear pattern of this happening on LW among the newbie AI Doomers.

(I assume you meant your quote unspoilered? Since it is clearly visible.)

In general, this is a very good heuristic, I agree. If you think there is a low-hanging fruit everyone is passing on, it is good to check or inquire, usually privately and quietly, whether anyone else noticed it, before going for it. Sometimes saying out loud that the king has no clothes is equivalent to shouting in the Dark Forest. Once in a while, though, there is indeed low-hanging fruit. Telling the two situations apart is the tricky part.

Yeah I think in isolation the quote is not a spoiler.

This is cute, but has nothing to do with superintelligent AI though. The whole point is that you will not recognize that you are being manipulated and then you are dead. Trying to be "on the lookout" is naive at best. Remember AI can model you better than you can model yourself. If something much smarter than you is intent on killing you, you are as good as dead.

I agree with all these considerations and the choice not being straightforward. It gets even more complicated when one goes deeper into the weeds of J. S. Mill's version of utilitarianism. I guess my original point, expressed less radically, is that assuming that higher IQ is automatically better is far from obvious.

A few points:

"It is unethical to donate to effective-altruist charities, since giving away money will mean that your life becomes less happy."

Oh come on, this is an informed personal choice, not something your parents decided for you, why would you even put the two together.

Your logic would seem to go beyond "don't use embryo selection to boost IQ, have kids the regular way instead".

I said or implied nothing of the sort! Maybe you can select for both intelligence and emotional stability, I don't know. Just don't focus on one trait and assume it is an indisp...

Jackson Wagner · 1mo
Thanks for all these clarifications; sorry if I came off as too harsh.

"Yes, so would I! Again, when it is a personal informed choice, the situation is entirely different." -- It seems to me like in the case of the child (who, having not been born yet, cannot decide either way), the best we can do is guess what their personal informed choice would be. To me it seems likely that the child might choose to trade off a bit of happiness in order to boost other stats (relative to my level of happiness and other stats, and depending of course on how much that lost happiness is buying). After all, that's what I'd choose, and the child will share half my genes!

To me, the fact that it's not a personal choice is unfortunate, and I take your point -- forcing /some random other person/ to donate to EA charities would seem unacceptably coercive. (Although I do support the idea of a government funded by taxes.) But since the child isn't yet born, the situation is intermediate between "informed personal choice" vs coercing a random guy. In this intermediate situation, I think choosing based on my best guess of the unborn child's future preferences is the best option. Especially since it's unclear what the "default" choice should be -- selecting for IQ, selecting against IQ, or leaving IQ alone (and going with whatever level of IQ and happiness is implied by the genes of me and my partner) all seem like they have an equal claim to being the default. Unless I thought that my current genes were shaped by evolution to be at the optimal tradeoff point already, which (considering how much natural variation there is among people, and the fact that evolution's values are not my values) seems unlikely to me.

Agreed that it is possible that IQ --> less happiness, for most people / on average, even though that strikes me as unlikely. It would be great to see more research that tries to look at this more closely and in various ways. And totally agreed that this would be a tough

Well, I think we are in agreement, and it all comes down to evaluating expected happiness. Maybe one can select for both intelligence and happiness, but that does not seem to be covered in the OP, which seems like a pretty big omission, just assuming that intelligence is an unquestionable positive on a personal scale.

So I agree with your general point that it is important to consider negative pleiotropy between traits. However, in the specific case of happiness and intelligence, the first two studies I found from googling suggest that happiness and intelligence are positively correlated.[1][2]

Here's a meta-analysis of 23 studies that found no correlation between intelligence and happiness at an individual level but a strong correlation at the country level.

So I think that unless you're dealing with much stronger techniques than simple embryo selection, this is not a concern... (read more)
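The meta-analysis result mentioned above — no correlation at the individual level but a strong one at the country level — is an instance of how correlations can differ across levels of aggregation. A minimal sketch with made-up numbers (the population sizes, means, and spreads here are purely illustrative, not taken from the studies):

```python
import random

random.seed(0)

# Made-up numbers: 10 "countries" whose mean IQ and mean happiness rise
# together, while within each country the two traits vary independently.
countries = []
for c in range(10):
    base_iq = 90 + 2 * c
    base_happy = 4.0 + 0.3 * c
    people = [(base_iq + random.gauss(0, 15), base_happy + random.gauss(0, 1.5))
              for _ in range(200)]
    countries.append(people)

def corr(pairs):
    # Pearson correlation, computed by hand to stay dependency-free
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

individuals = [p for country in countries for p in country]
country_means = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                 for c in countries]

print("individual-level r:", round(corr(individuals), 2))   # weak
print("country-level r:  ", round(corr(country_means), 2))  # strong
```

Because the within-country noise is large relative to the between-country spread, the pooled individual-level correlation stays weak while the country-level means line up almost perfectly — the same qualitative pattern the meta-analysis reports.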

3 · Jackson Wagner · 1mo
Wait, it seems like those last two points would totally change the argument!  Consider:

* "It is unethical to donate to effective-altruist charities, since giving away money will mean that your life becomes less happy.  It may benefit society as a whole and lead to greater happiness overall.  But it does not change the argument: donations are unethical because the donation makes your own life worse."  This seems crazy to me??  If anything it seems like many would consider it unethical to keep the money for yourself.
* Your logic would seem to go beyond "don't use embryo selection to boost IQ, have kids the regular way instead".  It seems to extend all the way to "you should use embryo selection to deliberately hamstring IQ, in the hopes of birthing a smiling idiot".  Am I thus obligated to try and damage my child's intelligence?  (Perhaps for instance by binge-drinking during pregnancy, if I can't afford IVF?)
* It also seems like the child's preferences would matter to this situation.  For instance, personally, I am a reasonably happy guy; I wouldn't mind sacrificing some of my personal life happiness in order to become more intelligent.  (Actually, since I also consider myself a reasonably smart guy, what I would really like is to sacrifice some happiness in order to become more hardworking / conscientious / ambitious.  A little more of a "Type-A" high-achieving neurotic... not too much, of course, but just a little in that direction.  I think this would improve my material circumstances since I'd work harder, and it would also be better for the world since I'd be producing more societal value.  Having a slightly more harried and tumultuous inner life seems like an acceptable price to pay; I know lots of people who are more Type-A than I am, and they seem alright.)  I would hate for someone to paternalistically say to me: "Nope, you would be happier if you were even more of a lazy slacker, an... (read more)

Yeah, that is definitely not uncommon. But also, like with a dumb dog, it is easier to "end up being content and happy due to luck" when your aspirations and goals are moderate.

Frank is dumb. You can reward him for being smart all you want, but that's just not gonna get him to take any community college classes. He is content with his life. It works for him. He enjoys his job, loves his family, owns his home, and just isn't interested in change.

He sure sounds smart, or at least life-smart. He knows what he wants, he achieved it, and he is happy. He may not get far on the Raven's Progressive Matrices test, but that test does not affect his ability to achieve what he wants, or even to live in harmony with himself.

2 · Adam Zerner · 1mo
Sometimes people end up being content and happy due to luck though, and that is what I was going for with Frank.

I am not a compatibilist, so this is not my answer, but Sean Carroll says, in his usual fashion, that free will is an emergent phenomenon, akin to Dennett's intentional stance. This AMA has an in-depth discussion; I bolded his definition at the very end.

whether you’re a compatibilist or an incompatibilist has nothing at all to do with whether the laws of physics are deterministic. I cannot possibly emphasize this enough. What matters is that there are laws. Whether those laws are deterministic

... (read more)
Yes. It's a conceptual issue to do with what "free will" means... and a physicist would have no special insight into that. "Making choices" is setting the bar very low indeed. I don't think Carroll understands libertarians too well. There are a number of main concerns about free will:

1. Concerns about conscious volition: whether your actions are decided consciously or unconsciously.
2. Concerns about moral responsibility, punishment and reward.
3. Concerns about "elbow room": the ability to "have done otherwise", regret about the past, whether and in what sense it is possible to change the future.

Determinism also needs to be distinguished from predictability, which has no bearing at all on the existence of determinism, or free will. A universe that unfolds deterministically is a universe that can be predicted by an omniscient being which can both capture a snapshot of all the causally relevant events and have a perfect knowledge of the laws of physics. The existence of such a predictor, known as Laplace's demon, is not a prerequisite for the actual existence of determinism; it is just a way of explaining the concept. It is not contradictory to assert that the universe is deterministic but unpredictable. If you are unable to make predictions in a deterministic universe, it is still deterministic, and you still lack the ability to have done otherwise in the libertarian sense, so the existence of free will still depends on whether that is conceptually important, which can't be determined by predictability. Predictability does not matter in itself; it matters insofar as it relates to determinism.
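The "deterministic but unpredictable" point can be made concrete with the logistic map, a textbook example of deterministic chaos: the same snapshot always yields the same future, yet an imperceptible error in the snapshot makes long-range prediction fail. A toy sketch (the specific starting value and error size are arbitrary illustrations):

```python
def logistic(x, r=4.0):
    # A fully deterministic update rule: the "laws of physics" of this toy universe.
    return r * x * (1 - x)

def trajectory(x0, steps=100):
    # Unfold the deterministic dynamics from an initial snapshot x0.
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

t1 = trajectory(0.2)           # the true history
t2 = trajectory(0.2)           # same snapshot: determinism gives the same future
t3 = trajectory(0.2 + 1e-10)   # a demon whose snapshot is off by one part in 10^10

print(t1 == t2)                                   # identical trajectories
print(max(abs(a - b) for a, b in zip(t1, t3)))    # prediction error grows large
```

The first print shows determinism (same laws plus same snapshot equals same future); the second shows that a Laplace's-demon-grade prediction still requires a perfect snapshot, since the tiny initial error is amplified until the forecast is useless.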

Yes, and I think it is worse than that. Even existence in the map is not clear-cut. As I said in the other comment, do dragons exist in the map? In what sense? Do they also exist in the territory, given that you can go and buy a figurine of one?

Yeah, I was a bit vague there; definitely worth going deeper. One would start by comparing societies that survive/thrive with those that do not, and comparing the prevailing ethics and how they respond to external and internal changes. Basically, "moral philosophy" would be more useful as a descriptive observational science, not a prescriptive one. I guess in that sense it is more like decision theory. And yes, it interfaces with psychology, education and whatnot.
