All of Alexander's Comments + Replies

How do you write original rationalist essays?

I have a similar goal. It might be helpful to share my journey so far and my plans for continuing in this direction.

My primary motivation is that reading and writing are foundational skills, and I will directly or indirectly improve many other aspects of my world by enhancing both. Exploring the world of knowledge and creating new knowledge are fun activities in themselves.

I find it challenging to write without material that offers an underlying framework. Therefore, I decided to practice writing book reviews to overcome this obstacle, b... (read more)

Which song do you think is perfect? Why?

Ok, this is too good. I wish I understood Russian.

Which song do you think is perfect? Why?

That's a lovely song. The beat is so rich. The instrumental version is also excellent.

I love groovy songs. One of my favourite songs over the past year has been Glamour Profession by Steely Dan. It is insanely groovy.

Reply by Yiar: Glad you liked it! 🙂 Yeah, I love it when they add instrumental versions 😃 What a great song! 😃 Thanks for recommending! Well if you like groove, here's a little playlist I put together of groovy goodies: https://music.apple.com/se/playlist/groovy/pl.u-pRomsWXGVP0?l=en
Which song do you think is perfect? Why?

I definitely don't think any human-made song is perfect. (Do you claim a superintelligent AI would be incapable of improving it? If yes, I question your models; if not, in what sense is it perfect?)

This is one of the funniest things I've read on LessWrong in recent times. I laughed out loud.

Which song do you think is perfect? Why?

Wait, is Aha! by Pentatonix entirely beatboxed? That's incredible. It is a very beautiful song!

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

This is very insightful and matches my personal experience and the experiences of some friends:

Sociology is less inherently subjective and meta than psychology, having intersubjectively measurable properties such as events in human lifetimes and social network graph structures.

I have not done much meditation myself, but some friends who've gone very deep into that rabbit hole reported that too much meta-cognition made them hyper-aware to an unhealthy degree.

I have noticed myself oscillating between learning how to make my cognition more effective (intro... (read more)

Worst Commonsense Concepts?

You're right. As romeostevensit pointed out, "commonsense ideas rarely include information about the domain of applicability." My issue with the self-fulfilling prophecy is that it gets misapplied, but I don't think it is an irretrievably bad idea.

This insightful verse from the Tao Te Ching is an exemplary application of the self-fulfilling prophecy:

If you don't trust the people, you make them untrustworthy.

It explicitly describes a feedback loop.

Reply by Yoav Ravid: You can add it to Self Fulfilling/Refuting Prophecies [https://www.lesswrong.com/tag/self-fulfilling-refuting-prophecies] as an example.
Worst Commonsense Concepts?

Local Optimisation Leads to Global Optimisation

The idea that if everyone takes care of themselves and acts in their own parochial best interest, then everyone will be magically better off sounds commonsensical but is fallacious.

Biological evolution, as Dawkins has put it, is an example of a local optimisation process that "can drive a population to extinction while constantly favouring, to the bitter end, those competitive genes destined to be the last to go extinct."

Parochial self-interest is indirectly self-defeating, but I keep getting presented with the same commonsense-sounding and magical argument that it is somehow :waves-hands: a panacea.
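To make the fallacy concrete, here is a toy sketch (entirely my own illustration, not from any source above): a greedy hill-climber that only ever takes locally better steps. Which peak it ends up on is decided by its starting basin, not by which peak is actually higher.

```python
# Toy illustration (my own): greedy local optimisation on a landscape with two
# unequal peaks. A climber that starts in the wrong basin locks onto the lower
# peak "to the bitter end", because no single local step ever improves matters.

def f(x: float) -> float:
    # Two unequal peaks: global maximum near x = -1.44, local maximum near x = 1.38.
    return -(x**4 - 4 * x**2 + 0.5 * x)

def hill_climb(x: float, step: float = 1e-3, max_iters: int = 100_000) -> float:
    for _ in range(max_iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break  # no locally better move, so the climber stops here
    return x

for start in (-0.5, 0.5):
    peak = hill_climb(start)
    print(f"start={start:+.1f} -> x={peak:+.2f}, f={f(peak):.2f}")
# start=-0.5 -> x=-1.44, f=4.71  (found the global peak)
# start=+0.5 -> x=+1.38, f=3.30  (stuck on the lesser peak)
```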

Sasha Chapin on bad social norms in rationality/EA

writing and publishing fan fiction

That made me chuckle. Or writing some of the funniest philosophical humour I've read.

I don't understand the view that "rationalists" are emotionless and incapable of appreciating aesthetics. I haven't seen much evidence to back this claim beyond anecdotes. If anything, people who see reality more clearly can see more of its beauty. As Feynman put it, a scientist can see more beauty in the world than an artist because the scientist can see the surface-level beauty as well as the beauty in the layers of abstraction all the wa... (read more)

Sasha Chapin on bad social norms in rationality/EA

You make good points. Toxicity is relative to some standard. A set of social norms that are considered toxic from the perspective of, say, a postmodern studies department (where absolute non-offensiveness is prime) might be perfectly healthy from the perspective of a physics research department (where objectivity is prime). It's important to ask, "Toxic according to whom, and with respect to what?"

Emile Durkheim asked his readers to imagine what would happen in a “society of saints.” There would still be sinners because “faults which appear venial to the layman” would there create scandal.

Sasha Chapin on bad social norms in rationality/EA

This was highly insightful. Thanks for sharing.

How would we go about disincentivizing this drift towards undesirable social norms? This seems like a situation in which individuals acting in their parochial best interest (virtue signalling, gaining social acceptance, over-identifying with an ideal) is detrimental to the group as a whole—and ultimately to the individuals whose identity has become defined by the group. I’m reminded of this quote from The Greatest Show on Earth by Dawkins:

Natural selection […] chooses between rival individuals within a populat... (read more)

How would we go about disincentivizing this drift towards undesirable social norms?

Perhaps it could be useful if we had some high-status members in the community, who would sometimes very visibly do something non-rational, non-effective, non-altruist, just because it is fun for them.

As an extreme thought experiment, imagine Eliezer Yudkowsky writing and publishing fan fiction. LOL

Quadratic Voting and Collusion

Thanks for writing this. You clearly explained the collusion problem.

Is there much experimental/real-world data out there around how QV performs in practice?

We have laws against insider trading. Similarly, we can create laws against collusion (we already have anti-collusion laws for 1P1V), at least to discourage it. Eliminating collusion and insider trading is impossible, but laws nevertheless do a good job of discouraging such actions.
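To spell out the arithmetic of why collusion pays, here is a toy calculation (my own, assuming the standard QV rule that casting n votes costs n² voice credits):

```python
# Toy calculation (mine), assuming the standard quadratic rule: n votes cost
# n^2 credits. A faction that spreads its votes across colluding accounts stays
# on the cheap early part of each account's quadratic cost curve.

def qv_cost(votes: int) -> int:
    return votes ** 2

single_voter = qv_cost(10)                       # one account, 10 votes: 100 credits
colluders = sum(qv_cost(1) for _ in range(10))   # ten accounts, 1 vote each: 10 credits
print(single_voter, colluders)                   # the same 10 votes at a tenth of the cost
```

This is why QV's efficiency guarantees are so sensitive to identity and coordination in a way that 1P1V is not.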

I read Radical Markets recently and liked it. It’s written by Eric Posner and Glen Weyl, who are experts on mechanism design.... (read more)

Reply by BrandonKMLee: There might be some clues in "Standard Voting Power Indexes Do Not Work: An Empirical Analysis", which claims population^0.9 is closer to US political reality: http://www.stat.columbia.edu/~gelman/research/published/gelmankatzbafumi.pdf However, a major critique of this is that the American political system is not as diverse as the European system.
Worst Commonsense Concepts?

I tend to agree, but only to an extent. To the best of our understanding, cognition is a process of predictive modelling. Prediction is an intrinsic property of the brain that never stops. A misprediction (usually) causes you to attend to the error and update your model.

Suppose we define science as any process that achieves better map-territory convergence (i.e. minimises predictive error). In that case, it is uncontroversial to say that we are all, necessarily, engaged in the scientific process at all times, whether we like it or not. Defining science this way, it... (read more)
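To make the "minimise predictive error" definition concrete, here is a minimal sketch (my own toy model, not drawn from the predictive-processing literature):

```python
# Minimal toy model (mine) of error-driven updating: the "map" is one number,
# and each misprediction nudges it toward the observed "territory".

def update(prediction: float, observation: float, lr: float = 0.3) -> float:
    error = observation - prediction   # the misprediction that grabs attention
    return prediction + lr * error     # revise the model in proportion to it

model = 0.0
for observation in [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]:  # the territory shifts midway
    model = update(model, observation)
    print(round(model, 2))  # the map converges, then re-converges after the shift
```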

Reply by Anon User: Right, something like "Some objective truths are outside of science's purview" might have been a slightly better phrasing, but as the goal is to stay at the commonsense level, trying to parse this more precisely is probably out of scope anyway, so we can as well stay concise...
Worst Commonsense Concepts?

Self-Fulfilling Prophecy

The idea is that if you think about something, then it is more likely to happen because of some magical and mysterious "emergent" feedback loopiness and complex chaotic dynamics and other buzzwords.

This idea has some merit (e.g. if your thoughts motivate you to take effective actions). I don't deny the power of ideas. Ideas can move mountains. Still, I've come across many people who overstate and misapply the concept of a self-fulfilling prophecy.

I was discussing existential risks with someone, and they confidently said, "The soluti... (read more)

Reply by abramdemski: I don't want to do without the concept. I agree that it is abused, but I would simply contest whether those cases are actually self-fulfilling. So maybe what I would point to, as the bad concept, would be the idea that most beliefs are self-fulfilling. However, in my experience, this is not common enough that I would label it "common sense". Although it certainly seems to be something like a human mental predisposition (perhaps due to confirmation bias, or perhaps due to a confusion of cause and effect, since by design, most beliefs are true).
Chaos Induces Abstractions

Excellent post, John. If I understood correctly, the general idea here is that capturing initial conditions accurately is hard, and any minuscule error in the initial conditions can become amplified in the future of this system. However, the laws of physics govern all physical systems, and they assert that certain states are impossible regardless of initial conditions.

The assertions made by the laws of physics make it relatively easy to predict specific properties about the system's future irrespective of initial conditions (i.e. initial conditions are bei... (read more)
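Both halves of that point fit in a few lines (my own illustration, using the logistic map as a stand-in chaotic system):

```python
# My own illustration with the logistic map (chaotic at r = 4): a 1e-9 error in
# the initial condition is amplified until the trajectories are unrelated, yet
# the law-like property "the state stays in [0, 1]" holds for any start.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-9
for _ in range(60):
    a, b = logistic(a), logistic(b)

print(abs(a - b))         # order ~1: the tiny initial error has been amplified
assert 0.0 <= a <= 1.0    # this prediction needed no initial conditions at all
```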

Being the (Pareto) Best in the World

Good points by both of you. I like the idea of discovering unknown unknowns.

I should've clarified what I meant by 'useful'. The broader point I was going for is that you can always become Pareto 'better' by arbitrarily choosing to compete along evermore dimensions. As you said, once we define a goal, then we can decide whether competing along one more dimension is better than doing something else or not.
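Here is the dominance logic in miniature (my own sketch): adding a dimension where you happen to do anything at all can put you back on a Pareto frontier, which is why "Pareto best" only means something once a goal fixes which dimensions matter.

```python
# My own sketch: Pareto dominance gets harder as dimensions are added, so you
# can always rejoin some frontier by choosing to compete on one more axis.

def dominates(a: tuple, b: tuple) -> bool:
    """True if a is at least as good everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

me, expert = (3, 2), (9, 5)
print(dominates(expert, me))           # True: in 2-D, I am strictly off the frontier

me3, expert3 = (3, 2, 7), (9, 5, 1)    # add a third axis where I happen to be better
print(dominates(expert3, me3))         # False: one extra dimension and I'm back on a frontier
```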

Being the (Pareto) Best in the World

Firstly, I wonder how this would apply to the “meta-ness” of skills. The first kind of dimensionality is for the distinct skills, e.g. macroeconomics, tennis, cooking, etc. Another kind of dimensionality is for how meta the skills are, i.e. how foundational and widely applicable they are across a skills “hierarchy”. If you choose to improve the more foundational skills (e.g. computing, probabilistic reasoning, interpersonal communication), then you'll be able to achieve really high dimensionality by leveraging those foundational skills efficiently across many ... (read more)

I can choose to read the Wikipedia overviews of 1,000,000 different fields, which will allow me to reach the Pareto frontier in this 1,000,000-dimensional graph. However, this isn’t practically useful.

That... actually sounds extremely useful, this is a great idea. The closest analogue I've done is read through a college course catalogue from cover to cover, which was extremely useful. Very good way to find lots of unknown unknowns.

Alexander's Shortform

[Minor spoiler alert] I've been obsessed with Dune lately. I watched the movie and read the book and loved both. Dune contains many subtle elements of rationality and x-risks despite the overall mythological/religious theme. Here are my interpretations: the goal of the Bene Gesserit is to selectively breed a perfect Bayesian who can help humanity find the Golden Path. The Golden Path is the narrow set of futures that don't result in an extinction event. The Dune world is mysteriously and powerfully seductive.

Speaking of Stag Hunts

I think you are getting at something here, Duncan. I've become interested in the following question lately: "How should rationalists conduct themselves if their goal is to promote rationality?" Now, I understand that promoting rationality is not every rationalist's top priority, hence I stated that condition explicitly.

I've been thoroughly impressed by how Toby Ord conducts himself in his writings and interviews. He is kind, respectful, reassuring and most importantly, he doesn't engage in fear-mongering despite working on x-risks. In his EA interview, he ... (read more)

Book Review: Being You by Anil Seth

Do they need to prove beyond reasonable doubt that the intent was in the mind of Eva0 before duplication?

That's gnarly.

Another aspect that I'm led to contemplate is the ease of collusion with your clone. It's reasonable to believe that Eva1 would collude with Eva2 more easily than with an entirely different person.

Book Review: Being You by Anil Seth

Excellent points. I hadn’t given much thought to the psychological vs external sameness of selfhood.

One is naturally led to wonder how such a dilemma would be dealt with in legal proceedings. Your assertion about the lack of an external, objective criterion for the sameness of selfhood implies that if Eva1 committed a crime, then we cannot reasonably convict Eva2 for it.

Reply by JBlack: Yes, legal identity is an even bigger can of worms. Even in the real world you can already lose your continuity of "legal identity" in some corner cases. Being able to duplicate people would just make it even messier. Do duplicates "inherit" into some sort of joint ownership of property? Is the property divided like inheritance? Are they new people entirely with no claims on property at all? What about citizenship? If Eva0 committed a crime, should we hold both Eva1 and Eva2 responsible for it? If after duplication Eva2 committed a crime that strongly benefits Eva1, but killed herself before conviction, can the prosecution go after Eva1? Do they need to prove beyond reasonable doubt that the intent was in the mind of Eva0 before duplication? Being able to "merge" mind states would make it very much messier still.
Book Review: Being You by Anil Seth

Indeed! The good regulator theorem certainly gives concreteness to the abstract notion of a map. I find clarity in viewing intelligence/consciousness as analogous to the processes of mapmaking—walking around, surveying the territory, recording observations, and so on—rather than simply the map.  In my view, this analogy to mapmaking makes more explicit the relationship between physical processes and intelligence/consciousness and the ever-changing nature of the map. I find it a little mind-blowing to conceptualise the map as the territory modelling it... (read more)

Book Review: Being You by Anil Seth

I love the title of that paper. Formalising falsification for theories of consciousness is exactly what the consciousness space needs to maximise signal and minimise noise. Thank you for sharing it! I’m going to give that paper a read. I’m very curious about how J R Hanson defines “consciousness”. To falsify a theory, we first need to be precise about what it must predict.

I am fairly certain that Anil Seth did not mention either of these incisive knock-downs of IIT in the book, but I could've missed it. The reason I'm fairly certain is that Seth spoke about IIT with admiration and approval. I'm sure he would've updated otherwise.

LessWrong is providing feedback and proofreading on drafts as a service

This is super cool! I cannot wait to give this a go on my next post. Having access to a high-quality feedback loop is such a powerful way to improve our writing and hence our thinking.

Assuming this results in better-quality posts on LessWrong, I wonder whether more posts will be promoted to Frontpage as a result, or whether Frontpage promotion will be made more stringent.

[The rest of this comment is off-topic, feel free to ignore it.]

I also wonder how this feature will scale as LessWrong continues to grow.

More broadly, I've been... (read more)

Book Review: Being You by Anil Seth

On further thought, although the quote you shared is relevant, it is not exactly the part of the book I was referring to. I was referring to the teleportation thought experiment in chapter 8, "Expect Yourself":

One day, there’s a hitch. The vaporisation module in London malfunctions and Eva – the Eva who is in London, anyway – feels like nothing’s happened and that she’s still in the transportation facility. A minor inconvenience. They’ll have to reboot the machine and try again, or maybe leave it until the following day. But then a technician shuf... (read more)
Reply by JBlack: The "selfhood" relation doesn't necessarily have to be symmetric or transitive, but the term is used as if it is, and I think this causes a lot of problems in discussion. Eva1 and Eva2 likely both consider Eva0 (who walked into the machine) to be their past self, but that doesn't mean that they must automatically consider themselves to be the same person as each other. It also doesn't mean that Eva0 would agree with one or both of them. I also think there is not any objective, external way to determine this relation: it's purely psychological. However, if I think further into a future where people could copy themselves, and later psychologically integrate both sets of memories, behaviour, and so on, then Eva1 and Eva2 in such a world may well consider themselves to be the same person as each other, and also some future Eva3, Eva4, and so on. The thought of this few-minutes branch of herself not contributing to her future self's memories might not be so horrible, but I don't think she'd merely take the word of a technician that Eva2 actually exists to carry on her survival.
Book Review: Being You by Anil Seth

Very insightful comment, Steven. Putting it that way, I agree with you that the quantum fluctuations (most likely) don’t matter for our experiences.

I was indeed nitpicking, but the broader point I'm interested in is about the futility of thought experiments that ignore the constraints imposed by physics rather than about whether quantum fluctuations play a role in how consciousness works.

This quote from Frank Wilczek claims that we have yet to attribute any high-level phenomena to quantum fluctuations:

Consistency requires the metric field to be a quantum fi... (read more)
Book Review: Being You by Anil Seth

If I recall correctly, I was first introduced to the map-territory meme via LessWrong, and I've found it a useful idea in that it has helped me conceptualise the world and my place in it more clearly (as far as I can tell). I hear with great interest that you, too, have found this perspective insightful!

[The following are speculative ramblings.]

I wonder what the limits of map-territory convergence are and what those limits tell us about the limits of intelligence. Is complete convergence possible? Or is the limit determined by computational irreducibility ... (read more)

Reply by Ape in the coat: Well, of course I was already familiar with the map-territory distinction, and while insightful itself, it wasn't the insight I grasped from that paragraph. The new insight is a deeper understanding of the degree to which consciousness is functionally necessary for human behaviour. Literally as necessary as thermostats for an air conditioning system. Also, while understanding that I have maps of reality in my consciousness, I suppose I wasn't explicitly thinking that my consciousness is itself a map.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I sought a lesson we could learn from this situation, and your comment captured such a lesson well.

This is reminiscent of the message of the Dune trilogy. Frank Herbert warns about society's tendencies to "give over every decision-making capacity" to a charismatic leader. Herbert said in 1979:

The bottom line of the Dune trilogy is: beware of heroes. Much better rely on your own judgment, and your own mistakes.

Open & Welcome Thread October 2021

Hello Stephie, I set my goodreads profile to private a while back because of spam. I understand your concerns. I assure you that I do not write hate speech in my reviews or engage in any other kind of misconduct. Thanks for reaching out!

Alexander's Shortform

Fascinating question, Carmex. I am interested in the following space configurations:

  1. Conservation: when a lifeform dies, its constituents should not disappear from the system but should dissipate back into the background space.
  2. Chaos: the background space should not be empty. It should have some level of background chaos (e.g. dispersive forces) mimicking our physical environment.

I'd imagine that you'd have to encode a kind of variational free energy minimisation to enable robustness against chaos.

I might play around with the simulation on my local machine when I get the chance.

Alexander's Shortform

I just came across Lenia, which is a modernisation of Conway's Game of Life. There is a video by Neat AI explaining and showcasing Lenia. Pretty cool!
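For anyone curious what the update rule looks like, here is a minimal Lenia-style step (my own simplification of Bert Chan's system; the kernel and growth parameters below are illustrative rather than canonical). States are continuous in [0, 1]; each step convolves the grid with a ring kernel and nudges every cell by a smooth growth function of that local potential.

```python
# A minimal Lenia-style update (my simplification; parameters are illustrative).
import numpy as np
from scipy.signal import convolve2d

def ring_kernel(radius: int = 10) -> np.ndarray:
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    d = np.sqrt(x**2 + y**2) / radius
    k = np.exp(-((d - 0.5) ** 2) / 0.02) * (d <= 1)  # soft ring at half the radius
    return k / k.sum()

def growth(u: np.ndarray, mu: float = 0.15, sigma: float = 0.015) -> np.ndarray:
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma**2)) - 1.0  # in [-1, 1]

def step(world: np.ndarray, kernel: np.ndarray, dt: float = 0.1) -> np.ndarray:
    u = convolve2d(world, kernel, mode="same", boundary="wrap")  # local potential
    return np.clip(world + dt * growth(u), 0.0, 1.0)

rng = np.random.default_rng(0)
world = rng.random((128, 128)) * (rng.random((128, 128)) < 0.1)  # sparse random soup
kernel = ring_kernel()
for _ in range(100):
    world = step(world, kernel)
print(world.sum())  # total "mass" is not conserved by this rule
```

Note that this rule does not conserve total "mass", which is exactly the conservation property I wished for in the comment above.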

The Neglected Virtue of Scholarship

This post reminded me of this quote from Bertrand Russell's epic polemic A History of Western Philosophy:

It is noteworthy that modern Platonists, almost without exception, are ignorant of mathematics, in spite of the immense importance that Plato attached to arithmetic and geometry, and the immense influence that they had on his philosophy. This is an example of the evils of specialization: a man must not write on Plato unless he has spent so much of his youth on Greek as to have had no time for the things that Plato thought important.

Apprenticeship Online

Excellent points. With the proper juridical structure, it is possible to make work more open.

Have you come across Joseph Henrich's books on cultural evolution by any chance? He talks extensively about cultural learning. His books convinced me that cultural learning sets humanity apart from other animals. He cites plenty of empirical research showing that human babies outshine other primate babies primarily in their ability to learn from others.

I work in the software industry (safe to assume you do, too, given you follow Andy Matuschak?). My company has som... (read more)

Reply by Henrik Karlsson: Yeah, I'm fairly well versed in Henrich's work. I trained as an anthropologist and was at one point discussing how I could join his lab for my Ph.D. – but then yeah, I got sidetracked into software development. And now I run an art museum. So yeah, life is weird. The incentives are tricky. Because there is a real cost to shadowing and mentoring, and especially in a culture where people frequently change employer it is hard to justify allowing it to slow down productivity. Is that the same incentive misalignment you refer to, or do you mean something else? How do you think one should go about it?
Apprenticeship Online

I would love to watch a livestream of a top AI researcher doing their job. I wish someone from MIRI would do that. It would be awesome to get a feel for what AI alignment research is actually like in practice.

Apprenticeship Online

Relevant to the question of how we can make it scalable for novices to enter workspaces: Stephen Wolfram has released livestreams on YouTube of his days at work.

https://youtu.be/XSO4my8mTs8

Given that most of Wolfram's work is open source, he can record his work and put it out there. However, most workers and executives wouldn't be able to do that as easily, given red tape and NDAs.

Reply by Henrik Karlsson: That is a great film recommendation! I just watched Andy Matuschak write notes, and it was the first full-length film I've sat through this year. There's something absolutely mesmerizing about watching someone skilled perform knowledge work (or handicraft for that matter – my three-year-old loves to watch people do ceramics on YouTube). About the last point: open source is much easier for that reason. But the same models that are being developed in the open domain can be exported to closed domains, don't you think? There are some examples – Ray Dalio livestreams within Bridgewater, for example – and there is a rich history of apprenticeship models in industry; in Germany and Switzerland especially it seems to work fine along a glass-box pattern. It is just trickier outside of open source, and needs another financial and juridical structure around it.
Alexander's Shortform

This is not an answer to my question but a follow-up elaboration.

This quote by Jonathan Rauch from The Constitution of Knowledge attempts to address this problem:

Francis Bacon and his followers said that scientific inquiry is characterized by experimentation; logical positivists, that it is characterized by verification; Karl Popper and his followers, by falsification. All of them were right some of the time, but not always. The better generalization, perhaps the only one broad enough to capture most of what reality-based inquirers do, is that liberal scie... (read more)
Reply by Vladimir_Nesov: Rationality is the quality of ingredients of cognition that work well. As long as we don't have cognition figured out, including sufficiently general formal agents based on decision theory that's at the very least not in total disarray, there is also no clear notion of rationality. There's only the open problem of what it should be, some conjectures as to the shape it might take, and particular examples of cognitive tools that seem to work.
Alexander's Shortform

Thank you for the thoughtful response, Vladimir.

I should have worded that last sentence differently. I agree with you that the way I phrased it sounds like I had already written the bottom line on my sheet of paper.

I am interested in a solution to the problem. There exist several theories of epistemology and decision theory, and we do not know which is "right." Would a parliamentary approach solve this problem?

Alexander's Shortform

A common criticism of rationality I come across rests upon the absence of a single, ultimate theory of rationality.

Their claim: the various theories of rationality offer differing assertions about reality and, thus, differing predictions of experiences.

Their conclusion: Convergence on objective truth is impossible, and rationality is subjective. (Which I think is a false conclusion to draw).

I think that this problem is congruent to Moral Uncertainty. What is the solution to this problem? Does a parliamentary model similar to that proposed by Bostrom and Or... (read more)
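To make the parliamentary idea concrete, here is a minimal sketch (entirely my own and hypothetical): each theory holds delegates in proportion to a weight tracking its predictive track record, the parliament's verdict is the weighted average of the theories' predictions, and a theory's weight shrinks when it predicts badly.

```python
# Entirely my own, hypothetical sketch of a "parliament of theories": voting
# weights track predictive accuracy, so no single theory has to be the right one.

theories = {"falsificationism": 1.0, "bayesianism": 1.0, "positivism": 1.0}

def parliament(predictions: dict) -> float:
    total = sum(theories.values())
    return sum(theories[t] * p for t, p in predictions.items()) / total

def settle(predictions: dict, outcome: int, eta: float = 0.5) -> None:
    for t, p in predictions.items():
        theories[t] *= 1 - eta * abs(outcome - p)  # shrink weight by prediction error

preds = {"falsificationism": 0.9, "bayesianism": 0.7, "positivism": 0.2}
print(parliament(preds))   # 0.6: the parliament's verdict before the outcome
settle(preds, outcome=1)   # the proposition turned out true
print(theories)            # positivism's delegation shrinks the most
```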

Reply by Alexander: This is not an answer to my question but a follow-up elaboration. This quote by Jonathan Rauch from The Constitution of Knowledge attempts to address this problem: […] However, I don't find it satisfying. Rauch focuses on persuasion and ignores explanatory power. It reminds me of this claim from The Enigma of Reason, stating: […] I will stake a strong claim: lasting persuasion is the byproduct of good explanations. Assertions that achieve better map-territory convergence or are more effective at achieving goals tend to be more persuasive in the long run. Galileo's claim that the Earth moved around the Sun was not persuasive in his day. Still, it has achieved lasting persuasion because it is a map that reflects the territory more accurately than preceding theories. It might very well be the case that the competing theories of rationality all boil down to Bayesian optimality, i.e., generating hypotheses and updating the map based on evidence. However, not everyone is satisfied with that theory. I keep seeing the argument that rationality is subjective because there isn't a single theory, and therefore convergence on a shared understanding of reality is impossible. A parliamentary model with delegates corresponding to the competing theories being proportional to some metric (e.g. track record of prediction accuracy?) explicitly asserts that rationality is not dogmatic; rationality is not contingent on the existence of a single, ultimate theory. This way, the aforementioned arguments against rationality dissolve in their own contradictions.
Reply by Vladimir_Nesov: An unrelated musing: Improving arguments for a particular side is dangerous [https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line], but I think a safe alternative is improving gears [https://www.lesswrong.com/tag/gears-level] for a particular theory. The difference is that refinement of a theory is capable of changing its predictions in unanticipated ways. This can well rob it of credence as it's balanced against other theories through prediction of known facts. In another way, gears more directly influence understanding of what a theory says and predicts, the internal hypothetical picture, not its credence, the relation of the theory to reality. So they can be a safe enough distance above the bottom line not to be mangled by it, and have the potential to force it to change, even if it's essentially written down in advance.
Open & Welcome Thread October 2021

I am considering using Goodreads to manage my bookshelves electronically. But for reviews, I plan to post links to my LessWrong reviews to avoid spending time formatting text for both editors. Formatting text for Goodreads is rather effortful.

I have found the reviews and the discussions on Goodreads to be, on average, more concerned with persuasion than explanation.

Additionally, Goodreads would benefit significantly from a more effective voting system. You can only upvote, so people with a large following tend to dominate, regardless of the veracity or eloquence of what they write.

A review of Steven Pinker's new book on rationality

Funny how the top-rated review of this book on Goodreads ignores everything Pinker says about cognitive biases and probabilistic reasoning and claims that "There are no objective facts; such things are self-contradictory" as a strawman rebuttal. If that statement were true, it would itself be an objective fact, which is a contradiction.

I find it astonishing that people continue to conflate "rationality" with "objective facts" when the modern meaning of rationality acknowledges that the map is not the territory.

Open & Welcome Thread October 2021

This is my Goodreads profile (removed link for privacy given this is the public internet). You are welcome to add me as a friend if you use Goodreads.

I am considering posting book reviews on LessWrong instead of Goodreads because I love the software quality here, especially the WYSIWYG editor. Goodreads is still stuck on an HTML editor from 1993. However, given the high epistemic standards on LessWrong, I will be slower to post here. I never expect anyone to ask me to provide a source over at Goodreads, but here I had better be rigorous and prudent with what I say, which is a good thing!

Reply by Yoav Ravid: You can post on both if you want. Crossposting is common and accepted here.
Open & Welcome Thread October 2021

Hello,

My name is Alexander, and I live and work as a software engineer in Australia. I studied the subtle art of computation at university and graduated some years ago. I don't know the demographics of LessWrong, but I don't imagine myself unique around here.

I am fascinated by the elegance of computation. It is stunning that we can create computers to instantiate abstract objects and their relations using physical objects and their motions and interactions.

I have been reading LessWrong for years but only recently decided to start posting and contributing t... (read more)

Reply by ChristianKl: If you don't mind sharing the link to your profile, I'm curious about your Goodreads reviews.
Alexander's Shortform

You make excellent points. The growth of knowledge is ultimately a process of creativity alternating with criticism, and I agree with you that idea generation is underappreciated. Outlandish ideas are met with ridicule most of the time.

This passage from Quantum Computing Since Democritus by Scott Aaronson captures this so well:

[I have changed my attitudes towards] the arguments of John Searle and Roger Penrose against “strong artificial intelligence.” I still think Searle and Penrose are wrong on crucial points, Searle more so than Penrose. But on rereadin... (read more)
Alexander's Shortform

It surely is an incentive structure problem. However, I am uncertain to what extent incentive structures can be "designed". They seem to come about as a result of thousands of years of culture-gene coevolution.

Peer review has a similar incentive misalignment. Why would you spend a month reviewing someone else's paper when you could write your own instead? This point was made by Scott Aaronson during one of his AMAs, but he didn't attempt to offer a solution.

Reply by RationalRomantic: Do we need more academics who agree with the status quo? If you reframe your point as "academia selects for originality," it wouldn't seem such a bad thing. Research requires applied creativity: creating new ideas that are practically useful. A researcher who concludes that the existing solution to a problem is the best is only marginally useful. The debate between Chalmers and Dennett is practically useful because it lays out the boundaries of the dispute and explores both sides of the argument. Chalmers is naturally more of a contrarian and Dennett more of a small-c conservative; people fit into these natural categories without too much motivation from institutional incentives. The creative process can be split into idea generation and idea evaluation. Some people are good at generating wacky, out-there ideas, and others are better at judging the quality of said ideas. As De Bono has argued, it's best for there to be some hygiene between the two due to the different kinds of processing required. I think there's a family resemblance here with exploration-exploitation trade-offs in ML. TL;DR: I don't think that incentives are the only constraint faced by academia. It's also difficult for individual people to be the generators and evaluators of their own ideas, and both processes are necessary. Do rational communities undervalue idea generation because of their focus on rational judgement?
Reply by Samuel Shadrach: What kinds of incentives are we referring to here? I'm assuming it's on the psychological level of Maslow's hierarchy (ego, status, socialisation, curiosity, desire for truth, etc.) rather than physical (hunger, thirst, physical security, anxiety due to threat to the previous). Because all incentives can be analysed and consciously designed imo.
Explanations as Hard to Vary Assertions

Incidentally, Popper also thought that you couldn't falsify a theory unless we have a non-ad hoc alternative that explains the data better.


This is so interesting. Do you know where I can read more about this? Conjectures and Refutations?

Explanations as Hard to Vary Assertions

Good points. Several chapters in Rationality: A-Z were dedicated to this. According to Max Tegmark's speculations, all mathematically possible universes exist, and we happen to be in one described by a simple Standard Model. I suspect that the question of why simple explanations are so effective in this universe is unanswerable, but it is still fun to speculate about.

Good points about the lack of emphasis on hypothesis-formation within the Bayesian paradigm. Eliezer talks about this a little in Do Scientists Already Know This Stuff?

Sir Roger Penrose—a... (read more)
Reply by TAG: What does "effective" mean? If you are using a simplicity criterion to decide between theories that are already known to be predictive, as in Solomonoff induction, then simplicity doesn't buy you any extra predictiveness.
Explanations as Hard to Vary Assertions

In a literal sense, Eliezer said, "The roots of knowledge are in observation." If we took this statement in isolation to Deutsch, he would vehemently disagree and tell us, "No, we interpret observations through explanatory theories." However, I don't think Eliezer and Deutsch disagree here. Both agree that there is a map and a territory and that the map comprises models, i.e., explanatory theories.

Explanations as Hard to Vary Assertions

I agree with you here. I made a mistake, but on the bright side, I learnt a lot about the generalised form of Bayes' theorem, which applies to all possible hypotheses. This was also how Eliezer explained the relationship between the posterior and the numerator in Decoherence is Falsifiable and Testable. I was trying to simplify the relationship between Bayes' theorem and Deutsch's criterion for good explanations for the sake of the post, but I oversimplified.

I still think that Bayes' theorem and Deutsch's criterion for good explanation are compatibl... (read more)
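A toy worked example of that dependence on alternatives (my own numbers): in the general form P(H|E) = P(E|H)P(H) / Σᵢ P(E|Hᵢ)P(Hᵢ), the denominator sums over all rival hypotheses, so whether a piece of evidence sinks a theory depends on what alternatives are on the table.

```python
# Toy worked example (my own numbers) of why alternatives matter in the general
# form of Bayes' theorem: P(H|E) = P(E|H)P(H) / sum_i P(E|H_i)P(H_i).

def posterior(likelihoods: dict, priors: dict, h: str) -> float:
    evidence = sum(likelihoods[k] * priors[k] for k in priors)  # P(E)
    return likelihoods[h] * priors[h] / evidence

priors = {"A": 0.5, "B": 0.5}

vague_rival = {"A": 0.01, "B": 0.02}   # rival B barely predicts the anomaly E
print(posterior(vague_rival, priors, "A"))   # ~0.33: A survives the anomaly

sharp_rival = {"A": 0.01, "B": 0.50}   # rival B sharply predicts E
print(posterior(sharp_rival, priors, "A"))   # ~0.02: the same evidence now sinks A
```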

Reply by dedz: It seems that he makes the same mistake in that post (though he makes it clear in the rest of the essay that alternatives matter). You paraphrased him right. Incidentally, Popper also thought that you couldn't falsify a theory unless we have a non-ad hoc alternative that explains the data better.
Explanations as Hard to Vary Assertions

I tend to agree. It isn't easy to generalise what constitutes a successful explanation, especially as one goes higher up the layers of abstraction (as you've put it) or further out into realms that are infeasible to test.

What do you think is an elegant way to define the phenomenon of explanation that is more general than "hard-to-vary assertions about reality"?

Reply by Charlie Steiner: I'm not sure there's a neat form. Consider the explanation of why a mirror flips left and right but not up and down. Maxwell's equations predict mirrors just fine, but that's certainly not what people (well, most people) want from this explanation. Even if we try to be elegant, we'll probably have to say complicated words like "the listener's understanding".
Alexander's Shortform

Is bias within academia ever actually avoidable?

Let us take the example of Daniel Dennett vs David Chalmers. Dennett calls philosophical zombies an "embarrassment," while Chalmers continues to double down on his conclusion that consciousness cannot be explained in purely physical terms. If Chalmers conceded and switched teams, he would become "just another philosopher," while Dennett would achieve an academic victory.

As an aspiring world-class philosopher, you have little incentive to adopt the dominant view because if you do you will become just another... (read more)

Reply by JBlack: Are you sure that's an argument for it being completely unavoidable, or just an argument that our current incentive structures are not very good?