Comments
AGO · 1y · 10

I’ve often heard and seen the divide people are talking about, where people involved in the tech world are a lot more skeptical than your average non-tech friend. I’m curious what people think is the reason for this. Is the main claim that people in tech have been lulled into a false sense of security by familiarity? Or perhaps that they look down on safety concerns as coming from a lay audience scared by Terminator or vague sci-fi ideas without understanding the technology deeply enough?

AGO · 1y · 10

This was an interesting read. There were two steps in the reasoning that really tripped me up, though. The first seems to me to be a flaw; the second, I’d love to see more info on.

1) You seem very quick to dismiss the factor of “Jewish culture” without giving it the same inspection as any other theory. Ashkenazi Jewish culture is definitely not the same as Sephardi Jewish culture (or any other type), so just noting that this effect seems to be primarily Ashkenazi isn’t enough to invalidate the theory. Not to mention the other factors you bring up later about how the regions associated with Ashkenazi Jews were the main ones fostering enough Jewish wealth and development to be conducive to this kind of academic intelligence. And the simple fact that Ashkenazim make up 80% of the world’s Jews would account for this effect skewing Ashkenazi. (I’m not sure whether the numbers were different during the period in question, but I figure they were probably similar; Ashkenazim are often discussed as the large majority.) And the remainder are not mostly Sephardi: more are Mizrahi (Jews who stayed in the Middle East and North Africa), a group which did not have remotely the same opportunities to go into academia as the European Jews.

In quick support of culture as a theory, Judaism overall is a religion which encourages far more doubt and questioning than many other religions, especially Christianity. It’s not exactly shocking that in a religion like Christianity, where doubt is sin or originates with the devil, the skepticism necessary for good research might not come as naturally. Conversely, in a culture where rabbis spend hundreds of years debating minutiae just for the sake of debating, and where a regular practice of studying Torah is to look for interesting patterns, loopholes, or alternate interpretations just to see things from all angles, careers in research might seem appealing. Plus, there may be an argument to be made about how a religion where faith tends to be a little less central than it is in Christianity might encourage better thinkers, or about what kinds of work ethics are valued in different religious cultures, though it starts to get a little messy and circular at this point.

Plus, there is other evidence of religious culture shaping educational choices and wealth even when the genetic factor doesn't serve as an appropriate alternative theory. The whole “Protestant work ethic” is highly controversial and probably not worth considering as its own theory, but there are some more promising studies which seem to indicate interesting differences in educational outcomes for women of different religious groups (the focus on women is because of the theory that women are much more significantly impacted by religious norms due to religious gender roles). This study shows high performance both for Jews and for "Liberal and Eastern Religions" (clarified to be the rather broad category of UU, agnostics, Buddhists, etc.), which share some of the cultural points I mentioned above. It also delves into a few more cultural explanations for other differences observed. This example is a bit crude and doesn't have the ideal methodology for eliminating other factors, but it and others like it seem to point to something cultural that we'd be failing to appropriately consider if we dismissed culture so incredibly quickly as irrelevant. I haven't done a deep dive into the research here, but at bare minimum it seems to be more than nothing and worth thinking about for a few moments longer than it takes to say "but we don't see this in Sephardim, so it's not cultural."

The genetic points are certainly interesting, but the cultural theory ought to be given proper consideration too. 

2) Do no other genetic diseases which impair people physically but not mentally have the effect of increased IQ? It seems intuitive to me that people with certain types of physical impairment would seek out more cognitive careers to ensure financial stability, as very little else would be readily available to them. It seems similarly intuitive that physical impairment leads people to focus more on mental and intellectual pursuits. Higher IQ would correlate with these efforts. But I’m absolutely no expert. A very quick Google search showed a correlation between gout and high IQ, which as far as I know is not associated with Jews. But if someone has more clarity on this, I would appreciate it!

AGO · 1y · 10

I second this. In my comment, this is why I wanted to ask more about what's meant by "observer" in the definition. An individual mind/perspective (regardless of computational power) being able to predict an action is different from "predictability" by a theoretical simulation of the universe.

That said, if we do define free will as predictability by a fellow human observer, then we could absolutely have free will of that type. We don't even really need proof of that; we can just observe the plethora of evidence that people rarely predict each other's actions perfectly.

AGO · 1y · 10

I do like the idea of coming up with a good way to quantify the degree of deterministic free will. While it's not necessarily a useful concept in terms of actionability, when did that ever stop curiosity? I think we can fairly reasonably estimate that this degree of free will is very, very low.

In response to defining types of free will, I'd personally propose "experiential free will" and "deterministic free will." The former refers to the more common usage: when someone says "I have free will" outside of a rigorous philosophical debate, they usually mean "I experience life in such a way that I feel I can make at least some conscious choices about what actions to take." This is pretty hard to dispute; people do tend to feel this way. This definition of free will may well be an illusion, but that illusion is very much experientially real and worth discussing. "Deterministic free will" seems like a better term for what you're talking about: the idea that free will is a spectrum where the higher the certainty with which your actions can be predicted, the less free will you have.

AGO · 1y · 10

"If in a deterministic universe, no observer B can 100% correctly predict the behavior of subject A, except when B is in the future of A, we can say that subject A has free will."

This strikes me as a very unintuitive definition of free will.

We often talk about free will experientially; defining it based on a specific external observer seems odd to me. But before I critique that in any way, I'd love more clarification on what you mean by "observer." Is this anything capable of prediction (e.g., a faithful simulation of the universe)?

But more importantly, I think "100% correctly" is doing the bulk of the work here. I fully agree with your claim that if we define free will in a manner similar to this, we will never reach it. But really, very little outside of statements within self-contained axiomatic systems can ever be held to the standard of 100% certainty. If your concept of free will hinges on the realistically minuscule chance that a random event will alter your decisions substantively, then I ask whether this conception still resembles anything like the idea of "free will" as we tend to think of it.

Overall, I concede your claim follows from your definition. But I question the usefulness of such a definition in the first place. I think we can all agree that we cannot have 100% predictive certainty. The question is more whether or not we want to call that shred of uncertainty "free will." Semantically, I think it's confusing to call this "free will" when that is not usually the intended meaning of the phrase, but ultimately the decision is somewhat arbitrary as our experience remains the same regardless. 

AGO · 1y · 10

B (at least B as I intended him) is trying to create consistent general principles that minimize that inevitable repugnancy. I definitely agree that it is entirely impossible to get rid of it, but some take the attitude of “then I’ll have to accept some repugnancy to have a consistent system” rather than “I shall abandon consistency and maintain my intuition in those repugnant edge cases.”

Perhaps I wasn’t clear, but that was at least the distinction I intended to convey.