What if Harry was a scientist? What would you do if the universe had magic in it? 
A story that conveys many rationality concepts, helping to make them more visceral, intuitive and emotionally compelling.

Recent Discussion

Previously: https://www.lesswrong.com/posts/bZ2w99pEAeAbKnKqo/optimal-exercise

Firstly, do the basic epistemics hold up? As far as I know, yes. The basic idea that lifting twice a week and doing cardio twice a week adds up to a level of exercise that gets you the vast majority of the benefits compared to extreme athletes holds up, especially once you take reverse-causality adjustments into account (survivorship bias from the genetic gifts of the extreme). Nothing I've encountered since has cast much doubt on this main takeaway.

What updates have I had, then, both in personal experience and in giving training advice to others as well as any research that has come out since then?

  1. A greater emphasis on injury prevention, as the dis-utility from injuries vastly outweighs positive effects from chasing numbers. This one was sadly

Vlad Sitalo:
You talk about it in snippets here and there, but I'd love for you to share your full up-to-date strength training program recommendation! 
It's mostly the same; it's just the exercise selection that has changed slightly:

- Upper body push: dumbbell standing press, incline press, pushups, push press
- Upper body pull: dumbbell row, chinups
- Lower body push: step ups (full ROM), front squat
- Lower body pull: RDL, hyperextensions, one-legged bridges
- Accessory: hip abductor with band, face pull, body saws

I do one or two from each depending on mood, usually 3x8-12.

Thank you! Surprised to see front squats and RDL given your comments about avoiding powerlifts, are these variations less injury-prone vs their more standard options?

This is a linkpost for https://fatebook.io

Fatebook is a website that makes it extremely low friction to make and track predictions.

It's designed to be very fast - just open a new tab, go to fatebook.io, type your prediction, and hit enter. Later, you'll get an email reminding you to resolve your question as YES, NO, or AMBIGUOUS.

It's private by default, so you can track personal questions and give forecasts that you don't want to share publicly. You can also share questions with specific people, or publicly.

Fatebook syncs with Fatebook for Slack - if you log in with the email you use for Slack, you’ll see all of your questions on the website.

As you resolve your forecasts, you'll build a track record - Brier score, Relative Brier score, and see your calibration chart. You can...
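For reference, the Brier score mentioned above is just the mean squared error between your probabilistic forecasts and the binary outcomes. A minimal sketch (the function name is mine, not Fatebook's API):

```python
def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 for YES, 0 for NO.
    0.0 is a perfect score; always guessing 50% scores 0.25."""
    assert len(forecasts) == len(outcomes) and forecasts
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident correct forecast contributes little: (0.9 - 1)^2 = 0.01.
# A confident wrong one is punished hard: (0.9 - 0)^2 = 0.81.
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # (0.01 + 0.09 + 0.04) / 3 ≈ 0.0467
```

Lower is better; a calibration chart then bins forecasts by stated probability and compares each bin's average forecast to its empirical resolution rate.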

Just a thought: I'm uncomfortable with only being able to sign up via a Google account. I can get over it personally, but I'm probably not the only one; for some people this will be an insurmountable hurdle that stops them from getting started. I dunno how many in actuality, but there are definitely bubbles where it's normal not to have used a Google service for years.

Alas, I dunno what alternative sign-up would be quickest and easiest to implement.

Just a short post to highlight an issue with debate on LW; I have recently been involved, with some interest, in the debate on covid-19 origins here. User viking_math posted a response which I was keen to reply to, but it is not possible for me to respond to that debate (or any other), because the LW site has rate-limited me to one comment per 24 hours since my recent comments are at -5 karma or less.

So, I feel that I should highlight that one side of the debate (my side) is simply not going to be here. I can't prosecute a debate like this. 

This is funnily enough an example of brute-force manufactured consensus - there will be a debate, people will make points on their side...

You can still write posts, and it doesn't look like brute-force manufactured consensus to me. Your original post got over 200 karma, which seems pretty high for a censorship attempt (whether intentional or not).

(We check for "downvoter count within window", not all-time.)
Oh, I am an idiot, you are right. I got misled by the variable name. Then yeah, this seems pretty good to me (and it seems like it should prevent basically all instances of one or two people having a grudge against someone causing them to be rate-limited).
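To make the distinction concrete: the check described above counts distinct downvoters within a recent window, not all-time. Here is a hypothetical sketch of that logic; this is an illustration of the idea, not LessWrong's actual code, and all names and thresholds are made up:

```python
from datetime import datetime, timedelta

def distinct_recent_downvoters(votes, now, window=timedelta(days=7)):
    """votes: list of (voter_id, timestamp) downvotes on a user's recent comments.
    Counts distinct users who downvoted within the window, not all-time."""
    return len({voter for voter, ts in votes if now - ts <= window})

def should_rate_limit(votes, now, min_downvoters=3):
    # Requiring several *distinct* recent downvoters means one or two people
    # with a grudge cannot trigger the limit on their own.
    return distinct_recent_downvoters(votes, now) >= min_downvoters

now = datetime(2024, 1, 10)
votes = [
    ("alice", datetime(2024, 1, 9)),   # in window
    ("alice", datetime(2024, 1, 8)),   # same voter, counted once
    ("bob",   datetime(2024, 1, 5)),   # in window
    ("carol", datetime(2023, 1, 1)),   # outside window, ignored
]
print(distinct_recent_downvoters(votes, now))  # 2 distinct recent downvoters
print(should_rate_limit(votes, now))           # False: below the threshold
```

The key design choice is the set comprehension keyed on voter id: repeated downvotes from the same account within the window still count as one downvoter.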
Seth Herd:
That makes more sense, thanks. This is placing a high bar on the tone of comments. But the culture of collegiality is valuable in a subtle and powerful way, so I'd probably endorse it.

We have long been waiting for a version of this story, where someone hacks together the technology to use Generative AI to work the full stack of the dating apps on their behalf, ultimately finding their One True Love.

Or at least, we would, if it turned out he is Not Making This Up.

Fun question: Given he is also this guy, does that make him more or less credible?

Alas, something being Too Good to Check does not actually mean one gets to not check it, in my case via a Manifold Market. The market started trading around 50%, but has settled down at 15% after several people made strong detailed arguments that the full story did not add up, at minimum he was doing some recreations afterwards.

Which is...

The WWII generation is negligible in 2024. The actual effect is partly the inverted demographic pyramid (older population means more women than men even under normal circumstances), and partly that even young Russian men die horrifically often:

At 2005 mortality rates, for example, only 7% of UK men but 37% of Russian men would die before the age of 55 years

And for that, a major culprit is alcohol (leading to accidents and violence, but also literally drinking oneself to death).

Among the men who don't self-destruct, I imagine a large fraction have already b... (read more)

Especially targeting working-memory, long-term memory, and conceptual understanding.

This thread is exclusively for things like highly-risky self-gene-therapy, or getting a brain-computer interface surgically implanted. No "get more sleep" or "try melatonin" here.

(If the idea is really good/anti-inductive, you might DM or email it to me instead.)

Answer by gilch, Feb 21, 2024

I recently saw What's up with psychonetics?. It seems like a kind of meditation practice, but one focused on gaining access to and control of mental/perceptual resources. Not sure how risky this is, but the linked text had some warnings about misuse. It might be applicable to working or long-term memory, and specifically talks about conceptual understanding ("pure meanings") as a major component of the practice.

People often parse information through an epistemic consensus filter. They do not ask "is this true", they ask "will others be OK with me thinking this is true". This makes them very malleable to brute force manufactured consensus; if every screen they look at says the same thing they will adopt that position because their brain interprets it as everyone in the tribe believing it.

- Anon, 4Chan, slightly edited

Ordinary people who haven't spent years of their lives thinking about rationality and epistemology don't form beliefs by impartially tallying up evidence like a Bayesian reasoner. Whilst there is a lot of variation, my impression is that the majority of humans we share this Earth with use a completely different algorithm for vetting potential beliefs: they just believe some...

But gain of function is a new invention - it only really started in 2011 and funding was banned in 2014, then the moratorium was lifted in 2017. The 2011-2014 period had little or no coronavirus gain of function work as far as I am aware. So coronavirus gain of function from a lab could only have occurred after say 2010 and was most likely after 2017 when it had the combination of technology and funding. 

Ralph Baric's lab was doing work that he thought would fall under the gain-of-function ban in 2014. He published the paper where Fauci said in front ... (read more)


Sudden Savant Syndrome is a rare phenomenon in which an otherwise normal person gets some kind of brain injury and immediately develops a new skill. The linked article tells the story of a 40-year-old guy who banged his head against a wall while swimming and woke up with a huge talent for playing piano (relevant video). Now, I've spent 15 years in formal music training, and I can assure you that nobody can fake that kind of talent without spending years in actual piano practice.

Here's the story of another guy who banged his head and became a math genius; you can find several other stories like that. And maybe most puzzling of all is this paper, describing a dozen cases of sudden...

Did you see the question on Psychonetics yet? I'm wondering if these ideas can be connected. Could someone learn a savant skill through Psychonetic practice? Has the Psychonetic community tried?

TL;DR: Scaling labs have their own alignment problem analogous to AI systems, and there are some similarities between the labs and misaligned/unsafe AI. 


Major AI scaling labs (OpenAI/Microsoft, Anthropic, Google/DeepMind, and Meta) are very influential in the AI safety and alignment community. They put out cutting-edge research because of their talent, money, and institutional knowledge. A significant subset of the community works for one of these labs. This level of influence is beneficial in some aspects. In many ways, these labs have strong safety cultures, and these values are present in their high-level approaches to developing AI – it’s easy to imagine a world in which things are much worse. But the amount of influence that these labs have is also something to be cautious about. 

The alignment community...

See also this much older and closely related post by Thomas Woodside: Is EA an advanced, planning, strategically-aware power-seeking misaligned mesa-optimizer?

  1. Probably there will be AGI soon -- literally any year now.
  2. Probably whoever controls AGI will be able to use it to get to ASI shortly thereafter -- maybe in another year, give or take a year.
  3. Probably whoever controls ASI will have access to a spread of powerful skills/abilities and will be able to build and wield technologies that seem like magic to us, just as modern tech would seem like magic to medievals.
  4. This will probably give them godlike powers over whoever doesn't control ASI.
  5. In general there's a lot we don't understand about modern deep learning. Mo
... (read more)