Alexander's Shortform

by Alexander, 27th Sep 2021, 16 comments

[Minor spoiler alert] I've been obsessed with Dune lately. I watched the movie and read the book and loved both. Dune contains many subtle elements of rationality and x-risks despite the overall mythological/religious theme. Here are my interpretations: the goal of the Bene Gesserit is to selectively breed a perfect Bayesian who can help humanity find the Golden Path. The Golden Path is the narrow set of futures that don't result in an extinction event. The Dune world is mysteriously and powerfully seductive.


I just came across Lenia, a continuous generalisation of Conway's Game of Life. There is a video by Neat AI explaining and showcasing Lenia. Pretty cool!

Fascinating question, Carmex. I am interested in the following space configurations:

  1. Conservation: when a lifeform dies, its constituents should not disappear from the system but should dissipate back into the background space.
  2. Chaos: the background space should not be empty. It should have some level of background chaos (e.g. dispersive forces) mimicking our physical environment.

I'd imagine that you'd have to encode a kind of variational free energy minimisation to enable robustness against chaos.

I might play around with the simulation on my local machine when I get the chance.
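To make the two configurations above concrete, here is a minimal toy sketch (not actual Lenia; the averaging "kernel", time step, and noise model are placeholder assumptions of mine) of a grid update that conserves total mass while injecting background chaos:

```python
import numpy as np

def step(state, rng, dt=0.1, noise_scale=0.01):
    # Crude local-averaging "kernel" on a toroidal grid (a stand-in for Lenia's smooth kernel).
    neighbourhood = (
        np.roll(state, 1, 0) + np.roll(state, -1, 0) +
        np.roll(state, 1, 1) + np.roll(state, -1, 1)
    ) / 4.0

    # Conservation: diffusive flow toward the local average redistributes "mass"
    # without creating or destroying it, so dead structures dissipate back into the background.
    flow = dt * (neighbourhood - state)

    # Chaos: zero-mean background noise perturbs everything while keeping total mass fixed.
    chaos = noise_scale * rng.standard_normal(state.shape)
    chaos -= chaos.mean()

    return state + flow + chaos

rng = np.random.default_rng(0)
world = rng.random((64, 64))
for _ in range(200):
    world = step(world, rng)
print(world.sum())  # stays (numerically) equal to the initial total mass
```

A real experiment would swap in Lenia's smooth kernel and growth function; the point is only that conservation and background chaos can be imposed at the level of the update rule.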

A common criticism of rationality I come across rests upon the absence of a single, ultimate theory of rationality.

Their claim: the various theories of rationality offer differing assertions about reality and, thus, differing predictions of experiences.

Their conclusion: convergence on objective truth is impossible, and rationality is subjective (which I think is a false conclusion to draw).

I think that this problem is congruent to Moral Uncertainty. What is the solution to this problem? Does a parliamentary model similar to that proposed by Bostrom and Ord make sense here? I am sure this problem has been talked about on LessWrong or elsewhere. Please direct me to where I can learn more about this!

I would like to understand this problem better and improve my argument against the aforementioned conclusion.

An unrelated musing: Improving arguments for a particular side is dangerous, but I think a safe alternative is improving gears for a particular theory. The difference is that refinement of a theory is capable of changing its predictions in unanticipated ways. This can well rob it of credence as it's balanced against other theories through prediction of known facts.

Put another way, gears more directly influence understanding of what a theory says and predicts (the internal hypothetical picture), not its credence (the relation of the theory to reality). So they can be a safe enough distance above the bottom line not to be mangled by it, and have the potential to force it to change, even if it's essentially written down in advance.

Thank you for the thoughtful response Vladimir.

I should have worded that last sentence differently. I agree with you that the way I phrased it sounds like I had already written the conclusion at the bottom of my sheet of paper.

I am interested in a solution to the problem. There exist several theories of epistemology and decision theory, and we do not know which is "right." Would a parliamentary approach solve this problem?

This is not an answer to my question but a follow-up elaboration.

This quote by Jonathan Rauch from The Constitution of Knowledge attempts to address this problem:

Francis Bacon and his followers said that scientific inquiry is characterized by experimentation; logical positivists, that it is characterized by verification; Karl Popper and his followers, by falsification. All of them were right some of the time, but not always. The better generalization, perhaps the only one broad enough to capture most of what reality-based inquirers do, is that liberal science is characterized by orderly, decentralized, and impersonal social adjudication. Can the marketplace of persuasion reach some sort of stable conclusion about a proposition, or tackle it in an organized, consensual way? If so, the proposition is grist for the reality-based community, whether or not a clear consensus is reached.

However, I don't find it satisfying. Rauch focuses on persuasion and ignores explanatory power. It reminds me of this claim from The Enigma of Reason:

Whereas reason is commonly viewed as a superior means to think better on one’s own, we argue that it is mainly used in our interactions with others. We produce reasons in order to justify our thoughts and actions to others and to produce arguments to convince others to think and act as we suggest.

I will stake a strong claim: lasting persuasion is the byproduct of good explanations. Assertions that achieve better map-territory convergence or are more effective at achieving goals tend to be more persuasive in the long run. Galileo's claim that the Earth moved around the Sun was not persuasive in his day. Still, it has achieved lasting persuasion because it is a map that reflects the territory more accurately than preceding theories.

It might very well be the case that the competing theories of rationality all boil down to Bayesian optimality, i.e., generating hypotheses and updating the map based on evidence. However, not everyone is satisfied with that theory. I keep seeing the argument that rationality is subjective because there isn't a single theory, and therefore convergence on a shared understanding of reality is impossible.

A parliamentary model, with delegates for the competing theories weighted in proportion to some metric (e.g. track record of predictive accuracy?), explicitly asserts that rationality is not dogmatic: rationality is not contingent on the existence of a single, ultimate theory. This way, the aforementioned arguments against rationality dissolve in their own contradictions.
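To illustrate what I mean, here is a minimal sketch of such a weighted aggregation; the theory names, weights, and preference numbers are all made up for illustration, not a real proposal:

```python
from collections import defaultdict

def parliament_vote(delegations, options):
    """Aggregate recommendations from competing theories, weighted by track record.

    delegations: list of (theory_name, weight, preference) tuples, where weight is
    e.g. a normalised score of past predictive accuracy, and preference maps each
    option to how strongly that theory endorses it.
    """
    totals = defaultdict(float)
    for _name, weight, preference in delegations:
        for option in options:
            totals[option] += weight * preference.get(option, 0.0)
    return max(options, key=lambda o: totals[o])

# Hypothetical delegates: the weights are stand-ins for track records, not real data.
delegations = [
    ("bayesian_epistemology", 0.5, {"act_A": 0.9, "act_B": 0.1}),
    ("frequentist_heuristics", 0.3, {"act_A": 0.4, "act_B": 0.6}),
    ("some_other_theory", 0.2, {"act_A": 0.2, "act_B": 0.8}),
]
print(parliament_vote(delegations, ["act_A", "act_B"]))  # -> "act_A"
```

The open question, of course, is what the weighting metric should be and how preferences across theories are made commensurable.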

Rationality is the quality of ingredients of cognition that work well. As long as we don't have cognition figured out, including sufficiently general formal agents based on decision theory that's at the very least not in total disarray, there is also no clear notion of rationality. There's only the open problem of what it should be, some conjectures as to the shape it might take, and particular examples of cognitive tools that seem to work.

Is bias within academia ever actually avoidable?

Let us take the example of Daniel Dennett vs David Chalmers. Dennett calls philosophical zombies an "embarrassment," while Chalmers continues to double down on his conclusion that consciousness cannot be explained in purely physical terms. If Chalmers conceded and switched teams, he would become "just another philosopher," while Dennett would achieve an academic victory.

As an aspiring world-class philosopher, you have little incentive to adopt the dominant view, because if you do, you become just another ordinary philosopher. By adopting a radically different stance, you establish an entirely new "school" and place yourself at its helm. Meanwhile, it would take considerably more effort to reach the helm of the more well-established schools, e.g. physicalism and compatibilism.

Thus, motivated skepticism and motivated reasoning seem to me to be completely unavoidable in academia.

Are you sure that's an argument for it being completely unavoidable, or just an argument that our current incentive structures are not very good?

It surely is an incentive structure problem. However, I am uncertain about the extent to which incentive structures can be "designed". They seem to come about as a result of thousands of years of gene-culture coevolution.

Peer review has a similar incentive misalignment. Why would you spend a month reviewing someone else's paper when you could write your own instead? Scott Aaronson made this point during one of his AMAs, but he didn't attempt to offer a solution.

Do we need more academics who agree with the status quo? If you reframe your point as "academia selects for originality," it wouldn't seem like such a bad thing. Research requires applied creativity: creating new ideas that are practically useful. A researcher who concludes that the existing solution to a problem is the best is only marginally useful.

The debate between Chalmers and Dennett is practically useful because it lays out the boundaries of the dispute and explores both sides of the argument. Chalmers is naturally more of a contrarian and Dennett more of a small-c conservative; people fit into these natural categories without much motivation from institutional incentives.

The creative process can be split into idea generation and idea evaluation. Some people are good at generating wacky, out-there ideas, and others are better at judging the quality of said ideas. As De Bono has argued, it's best for there to be some hygiene between the two due to the different kinds of processing required. I think there's a family resemblance here with exploration-exploitation trade-offs in ML.
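For concreteness, the trade-off I have in mind looks roughly like an epsilon-greedy bandit (a toy sketch with arbitrary numbers, not a claim about how research actually works): exploration plays the role of idea generation, exploitation the role of idea evaluation.

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Pick an index: explore a random option with probability epsilon,
    otherwise exploit the option with the highest current estimate."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))  # exploration ~ generating a wacky idea
    return max(range(len(estimates)), key=estimates.__getitem__)  # exploitation ~ judging by current estimates

# Toy usage: current quality estimates for three candidate "ideas".
estimates = [0.2, 0.5, 0.3]
chosen = epsilon_greedy(estimates)
```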

TL;DR I don't think that incentives are the only constraint faced by academia. It's also difficult for individual people to be the generators and evaluators of their own ideas, and both processes are necessary.

Do rational communities undervalue idea generation because of their focus on rational judgement?

You make excellent points. The growth of knowledge is ultimately a process of creativity alternating with criticism, and I agree with you that idea generation is underappreciated. Outlandish ideas are met with ridicule most of the time.

This passage from Quantum Computing Since Democritus by Scott Aaronson captures this so well:

[I have changed my attitudes towards] the arguments of John Searle and Roger Penrose against “strong artificial intelligence.” I still think Searle and Penrose are wrong on crucial points, Searle more so than Penrose. But on rereading my 2006 arguments for why they were wrong, I found myself wincing at the semi-flippant tone, at my eagerness to laugh at these celebrated scholars tying themselves into logical pretzels in quixotic, obviously doomed attempts to defend human specialness. In effect, I was lazily relying on the fact that everyone in the room already agreed with me – that to these (mostly) physics and computer science graduate students, it was simply self-evident that the human brain is nothing other than a “hot, wet Turing machine,” and weird that I would even waste the class's time with such a settled question. Since then, I think I’ve come to a better appreciation of the immense difficulty of these issues – and in particular, of the need to offer arguments that engage people with different philosophical starting-points than one's own.

I think we need to strike a balance between the veracity of ideas and tolerance of their outlandishness. This topic has always fascinated me, but I don't know of a concrete criterion for effective hypothesis generation. The simplicity criterion of Occam's Razor is OK, but it is not the be-all and end-all.

What kinds of incentives are we referring to here? I'm assuming it's the psychological levels of Maslow's hierarchy (ego, status, socialisation, curiosity, desire for truth, etc.) rather than the physical ones (hunger, thirst, physical security, anxiety due to threats to the previous), because all incentives can be analysed and consciously designed, imo.
