I sometimes worry that ideas are prematurely rejected because they are not guaranteed to work, rather than because they are guaranteed not to work. In the end it might turn out that zero ideas are actually guaranteed to work, leaving us with an assortment of not-guaranteed-to-work ideas, all underdeveloped because some possible failure mode was found early and the idea was abandoned.

I didn't want to derail the OP with a philosophical digression, but I was somewhat startled by how difficult I found it to think at all without at least some kind of implicit "inner dimensionality reduction." In other words, this framing let me put a label on a mental operation I was performing almost constantly, but without any awareness.

I snuck in a few edge-case spatial metaphors, tongue-in-cheek, just to show how common they really are.

You could probably generalize the post into something like "Try being more thoughtful about the metaphors you employ in communication," but this framing singles out a specific class of metaphor that is easier to notice.

Totally get where you're coming from, and we appreciate the feedback. I personally regard memetics as an important concept to factor into a big-picture-accurate epistemic framework; the landscape of ideas is dynamic and adversarial. I view postmodernism as a specific application of memetics, or, historically speaking, memetics as a generalization of postmodernism. Memetics avoids the infinite regress of postmodernism by not really having an opinion about "truth." Egregores are a decent handle on the feedback-loop dynamics of the idea landscape, though I think there are risks in reifying egregores as entities.

My high-level take is that CFAR's approach to rationality training has been epistemics-first and the Guild's approach has been instrumental-first. (Let me know if this doesn't reflect reality from your perspective.) In our general approach, you gradually improve your epistemics in the course of improving your immediate objective circumstances, guided by your own implicit local wayfinding intuition. In other words, you work on whatever current-you judges to be currently critical and achievable. This may mean spending some energy pursuing goals that haven't been rigorously linked to an epistemically grounded basis, and that future-you won't endorse, but at least this way folks are getting in the reps, as it were. It's vastly better than not having a rationality practice at all.

In my role as an art critic, I have recently been noticing how positively people have reacted to stuff like Top Gun: Maverick, a film which is exactly what it appears to be: aggressively surface-level, just executing skillfully on a concept. This sort of thing makes me directionally agree that the age of meta and irony may be waning. Hard times push people toward focusing on concrete measurables, which you could probably call "modernist."

To be clear ... it's random silly hats, whatever hats we happen to have on hand. Not identical silly hats. Also, this is not really a load-bearing element of our strategy. =)

This sort of thing is so common that I would go so far as to say it is the norm, rather than the exception. Our proposed antidote to this class of problem is to attend the monthly Level Up Sessions and to make a habit of regularly taking inventory of the bugs (problems and inefficiencies) in your day-to-day life, then selectively solving the most crucial ones. This approach starts from the mundane and gradually builds up your environment and habits, until you're no longer relying entirely on your "tricks."

You may be right, but I would suggest looking through the full list of workshops and courses. I was merely trying to give an overall sense of the flavor of our approach, not an exhaustive list. The Practical Decision-Making course would be an example of content that is distinctly "rationality-training" content. Despite the frequent discussions of abstract decision theory that crop up on LessWrong, practically nobody is actually able to draw up a decision tree for a real-world problem, and it's a valuable skill and mental framework.
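For concreteness, here is a minimal sketch of what that skill can look like in practice, with entirely made-up numbers for a hypothetical job-choice problem (this is my own illustration, not material from the course):

```python
# Minimal expected-value decision tree with illustrative, made-up numbers.
# Each option leads to chance outcomes, written as (probability, payoff) pairs.
decision = {
    "stay at current job": [(1.0, 70_000)],                   # certain outcome
    "join startup":        [(0.3, 150_000), (0.7, 40_000)],   # big upside, real risk
    "freelance":           [(0.5, 90_000), (0.5, 55_000)],
}

def expected_value(outcomes):
    """Sum the probability-weighted payoffs along one branch of the tree."""
    return sum(p * payoff for p, payoff in outcomes)

for option, outcomes in decision.items():
    print(f"{option}: EV = {expected_value(outcomes):,.0f}")

best = max(decision, key=lambda option: expected_value(decision[option]))
print("Highest expected value:", best)
```

The point is less the arithmetic than the discipline of making your options, probabilities, and payoffs explicit before comparing them.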

I would also mention that a big part of the benefit of the cohort is to have "rationality buddies" off whom you can bounce your struggles. Another Curse of Smart is thinking that you need to solve every problem yourself.

Partly as a hedge against technological unemployment, I built a media company based on personal appeal. An AI will be able to bullshit about books and movies "better" than I can, but maybe people will still want to listen to what a person thinks, because it's a person. In contrast, nobody prefers the opinion of a human on optimal ball-bearing dimensions over the opinion of an AI.

If you can find a niche where a demand will exist for your product strictly because of the personal, human element, then you might have something.

shminux is right that the very concept of a “business” will likely lack meaning too far into an AGI future.

I actually feel pretty confident that your former habit of drinking coffee until 4 pm was a highly significant contributor to your low energy: caffeine has a half-life of roughly five hours, so an afternoon cup was chronically demolishing your sleep quality every single night you did this. You probably created a cycle where you felt like you needed an afternoon coffee because you were tired from sleeping so badly … because of the previous afternoon coffee.
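For a rough sense of the arithmetic, here is a sketch assuming simple exponential decay and a ~5-hour half-life (a commonly cited midpoint; individual metabolism varies a lot, and the dose is just an illustrative guess):

```python
# Residual caffeine from an afternoon coffee, assuming exponential decay
# with a ~5-hour half-life. All numbers are illustrative assumptions.
HALF_LIFE_H = 5.0
DOSE_MG = 150.0  # roughly one strong cup of coffee

def remaining(dose_mg, hours_elapsed, half_life_h=HALF_LIFE_H):
    """Caffeine still in the body after hours_elapsed, via half-life decay."""
    return dose_mg * 0.5 ** (hours_elapsed / half_life_h)

# Coffee at 4 pm, bedtime at 11 pm (7 hours later), wake at 7 am (15 hours):
print(f"{remaining(DOSE_MG, 7):.0f} mg still active at 11 pm")  # ~57 mg
print(f"{remaining(DOSE_MG, 15):.0f} mg still active at 7 am")  # ~19 mg
```

Under those assumptions, well over a third of a 4 pm cup is still circulating at an 11 pm bedtime.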

I suggest people in this position first run the experiment of cutting out all caffeine after noon, before taking the harder step of cutting it out entirely.
