Alexander

Life is about joyful explorations.

Comments

Open & Welcome Thread October 2021

Hello Stephie, I set my Goodreads profile to private a while back because of spam. I understand your concerns. I assure you that I do not write hate speech in my reviews or engage in any other kind of misconduct. Thanks for reaching out!

Alexander's Shortform

Fascinating question, Carmex. I am interested in the following space configurations:

  1. Conservation: when a lifeform dies, its constituents should not disappear from the system but should dissipate back into the background space.
  2. Chaos: the background space should not be empty. It should have some level of background chaos mimicking our physical environment.

I'd imagine that you'd have to encode a kind of variational free energy minimisation to enable robustness against chaos.

I might play around with the simulation on my local machine when I get the chance.
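
To make those two requirements concrete, here is a minimal sketch of the kind of thing I might try: a Lenia-style continuous cellular automaton in Python. It is not Lenia's exact rules; the kernel, growth parameters, the crude mass-conservation step, and the noise term are all my own assumptions, added to illustrate the "conservation" and "chaos" ideas above.

```python
# A rough, Lenia-inspired sketch (not Lenia's exact rules). Two hypothetical
# tweaks illustrate the ideas above: total "mass" is kept roughly constant
# (conservation), and a small noise term injects background chaos.
import numpy as np

rng = np.random.default_rng(0)
N = 128                                              # grid size (toroidal world)
A = rng.random((N, N)) * (rng.random((N, N)) < 0.1)  # sparse initial state in [0, 1]

# Smooth ring-shaped neighbourhood kernel, normalised to sum to 1.
R = 10
y, x = np.ogrid[-R:R + 1, -R:R + 1]
r = np.sqrt(x**2 + y**2) / R
K = np.exp(-((r - 0.5) ** 2) / 0.02) * (r <= 1)
K /= K.sum()

def convolve(A, K):
    """Circular convolution via FFT, so the world wraps around."""
    Kpad = np.zeros_like(A)
    kh, kw = K.shape
    Kpad[:kh, :kw] = K
    Kpad = np.roll(Kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(A) * np.fft.fft2(Kpad)))

def step(A, dt=0.1, mu=0.15, sigma=0.015, noise=0.001):
    U = convolve(A, K)                                    # neighbourhood potential
    growth = 2 * np.exp(-((U - mu) ** 2) / (2 * sigma**2)) - 1
    A_new = np.clip(A + dt * growth, 0, 1)
    A_new += noise * rng.random(A.shape)                  # background chaos
    # Crude conservation: rescale so total mass stays roughly constant, so
    # losses in one region are offset elsewhere rather than simply vanishing.
    A_new *= A.sum() / max(A_new.sum(), 1e-12)
    return np.clip(A_new, 0, 1)

for _ in range(100):
    A = step(A)
```

Whether conservation should instead be local (dead matter flowing to neighbouring cells) rather than this global rescaling is exactly the sort of thing I'd want to experiment with.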

Alexander's Shortform

I just came across Lenia, a continuous generalisation of Conway's Game of Life. There is a video by Neat AI explaining and showcasing Lenia. Pretty cool!

The Neglected Virtue of Scholarship

This post reminded me of this quote from Bertrand Russell's epic polemic A History of Western Philosophy:

It is noteworthy that modern Platonists, almost without exception, are ignorant of mathematics, in spite of the immense importance that Plato attached to arithmetic and geometry, and the immense influence that they had on his philosophy. This is an example of the evils of specialization: a man must not write on Plato unless he has spent so much of his youth on Greek as to have had no time for the things that Plato thought important.

Apprenticeship Online

Excellent points. With the right legal structure, it is possible to make work more open.

Have you come across Joseph Henrich's books on cultural evolution, by any chance? He talks extensively about cultural learning. His books convinced me that cultural learning is what sets humanity apart from other animals. He cites plenty of empirical research showing that human babies outshine other primate babies primarily in their ability to learn from others.

I work in the software industry (safe to assume you do, too, given you follow Andy Matuschak?). My company has something called "shadowing," which is basically when you sit in on meetings with someone more senior and watch them do their work. It is hugely underutilized in my experience, and I think that is primarily an incentive misalignment problem: I suspect the more senior members would feel burdened by facilitating shadowing for juniors.

The recent book "Software Engineering at Google", curated by Titus Winters, Tom Manshreck, and Hyrum Wright, dedicates a significant portion to mentorship and to giving juniors room to grow. Giving juniors menial work and not putting thoughtful effort into developing them is a big mistake many companies make.

Apprenticeship Online

I would love to watch a livestream of a top AI researcher doing their job. I wish someone from MIRI would do that. It would be awesome to get a feel for what AI alignment research is actually like in practice.

Apprenticeship Online

Relevant to the question of how we can make it scalable for novices to enter workspaces: Stephen Wolfram has released livestreams on YouTube of his days at work.

https://youtu.be/XSO4my8mTs8

Given that Wolfram does much of his work in the open, he can record it and put it out there. However, most workers and executives wouldn't be able to do that as easily, given red tape and NDAs.

Alexander's Shortform

This is not an answer to my question but a follow-up elaboration.

This quote by Jonathan Rauch from The Constitution of Knowledge attempts to address this problem:

Francis Bacon and his followers said that scientific inquiry is characterized by experimentation; logical positivists, that it is characterized by verification; Karl Popper and his followers, by falsification. All of them were right some of the time, but not always. The better generalization, perhaps the only one broad enough to capture most of what reality-based inquirers do, is that liberal science is characterized by orderly, decentralized, and impersonal social adjudication. Can the marketplace of persuasion reach some sort of stable conclusion about a proposition, or tackle it in an organized, consensual way? If so, the proposition is grist for the reality-based community, whether or not a clear consensus is reached.

However, I don't find it satisfying. Rauch focuses on persuasion and ignores explanatory power. It reminds me of this claim from The Enigma of Reason:

Whereas reason is commonly viewed as a superior means to think better on one’s own, we argue that it is mainly used in our interactions with others. We produce reasons in order to justify our thoughts and actions to others and to produce arguments to convince others to think and act as we suggest.

I will stake a strong claim: lasting persuasion is the byproduct of good explanations. Assertions that achieve better map-territory convergence or are more effective at achieving goals tend to be more persuasive in the long run. Galileo's claim that the Earth moved around the Sun was not persuasive in his day. Still, it has achieved lasting persuasion because it is a map that reflects the territory more accurately than preceding theories.

It might very well be the case that the competing theories of rationality all boil down to Bayesian optimality, i.e., generating hypotheses and updating the map based on evidence. However, not everyone is satisfied with that theory. I keep seeing the argument that rationality is subjective because there isn't a single theory, and therefore convergence on a shared understanding of reality is impossible.

A parliamentary model, in which each competing theory gets delegates in proportion to some metric (e.g. its track record of predictive accuracy?), explicitly asserts that rationality is not dogmatic: rationality is not contingent on the existence of a single, ultimate theory. This way, the aforementioned arguments against rationality dissolve in their own contradictions.
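
To make that concrete, here is a minimal sketch in Python under one assumption of my own: each theory is treated as a forecaster, its delegate share is proportional to its historical accuracy (one minus its mean Brier score), and the parliament's verdict on a new question is the weighted average of the theories' forecasts. This is only an illustration of the idea, not Bostrom and Ord's actual proposal.

```python
# Toy "parliament" of competing theories of rationality.
# Delegate shares are proportional to each theory's track record of
# prediction accuracy; the verdict is a weighted vote on a new question.
import numpy as np

def delegate_weights(past_forecasts, outcomes):
    """Weights proportional to accuracy, measured as 1 minus the mean Brier score."""
    past_forecasts = np.asarray(past_forecasts, dtype=float)  # shape: (theories, questions)
    outcomes = np.asarray(outcomes, dtype=float)              # shape: (questions,)
    brier = ((past_forecasts - outcomes) ** 2).mean(axis=1)
    accuracy = 1.0 - brier
    return accuracy / accuracy.sum()

def parliament_verdict(weights, new_forecasts):
    """Aggregate probability on a new question as a weighted vote."""
    return float(np.dot(weights, np.asarray(new_forecasts, dtype=float)))

# Three hypothetical theories with track records on four past yes/no questions.
past = [[0.9, 0.2, 0.8, 0.7],   # theory A
        [0.6, 0.5, 0.6, 0.5],   # theory B
        [0.3, 0.9, 0.2, 0.4]]   # theory C
outcomes = [1, 0, 1, 1]

w = delegate_weights(past, outcomes)
print(w)                                        # delegate share per theory
print(parliament_verdict(w, [0.8, 0.6, 0.3]))   # parliament's answer to a new question
```

No single theory has to be "the" theory; the parliament simply keeps reapportioning delegates as track records accumulate.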

Alexander's Shortform

Thank you for the thoughtful response, Vladimir.

I should have worded that last sentence differently. I agree with you that, the way I phrased it, it sounds like I had already written my conclusion at the bottom of my sheet of paper.

I am interested in a solution to the problem. There exist several theories of epistemology and decision theory, and we do not know which is "right." Would a parliamentary approach solve this problem?

Alexander's Shortform

A common criticism of rationality I come across rests upon the absence of a single, ultimate theory of rationality.

Their claim: the various theories of rationality offer differing assertions about reality and, thus, differing predictions of experiences.

Their conclusion: convergence on objective truth is impossible, and rationality is subjective (which I think is a false conclusion to draw).

I think that this problem is analogous to Moral Uncertainty. What is the solution to this problem? Does a parliamentary model similar to that proposed by Bostrom and Ord make sense here? I am sure this problem has been discussed on LessWrong or elsewhere. Please direct me to where I can learn more about this!

I would like to improve my argument against the aforementioned conclusion and to understand this problem better.
