benjamin.j.campbell

And it goes deeper. Because what if Mickey never actually woke up, and the brooms had been keeping him asleep the whole time? The Sleeping Beauty problem is actually present in quite a lot of Disney media where the MC goes to sleep. It's also a theme in Mulan. Maybe she never went to war and made her parents proud. It may well have been a dream she just didn't wake up from.

Thank you! This is such an important crux post, and it really gets to the bottom of why the world is still so far from perfect, even though it feels like we've been improving it FOREVER. My only critique is that it could have been longer.

It's worse than that. I've been aware of this since I was a teenager, but apparently there's no amount of correction that's enough. These days I try to avoid making decisions that will be affected in either direction by limerence, or I pre-commit firmly to a course of action and then trust that, even if I want to update the plan, I'm going to regret not doing what I pre-committed to earlier.

Seconded. The perfect level of detail: un-put-down-able, while still explaining everything well enough to be gripping and clearly understood.

Those are some extreme outliers for age. Was that self-reported, or some kind of automated information gathering related to their Positly profiles?

This is targeted at all 3 groups:

  • Every year, our models of consciousness and machine learning grow more powerful, and better at performing the same forms of reasoning as humans.
  • Every year, the amount of computing power we can throw at these models ratchets ever higher.
  • Every year, each human's baseline capacity for thinking and reasoning remains exactly the same.

There is a time coming in the next decade or so when we will have released a veritable swarm of different genies that are able to understand and improve themselves better than we can. At that point, the genies will not be going back in the bottle, so we can only pray they like us.

By this stage of their careers, they already have those bits of paper. MIRI are asking people who don't a priori highly value alignment research to jump through extra hoops they haven't already cleared, for what they probably perceive as a slim chance at a job outside their wheelhouse. I know a reasonable number of hard-science academics, and I don't know any who would put that amount of effort into an application for a job they expected to attract many more-qualified applicants. The very phrasing makes it sound like MIRI expect hundreds of applicants and are trying to be exclusive. If nothing else is changed, that should be.

I gave this an upvote because it is directly counter to my current belief about how relative and absolute pitch work and interact with each other. I agree that if someone's internalised absolute pitch can consistently identify out-of-tune notes, even after minutes of repetition, this is a strong argument against my position. On the other hand, maybe they do produce one internal reference note of set frequency, and comparing known intervals against it returns "out of tune" every time. I can see either story being true, but I would like to hunt down more information on which of these models is more accurate.

I think your suggestion is effectively what everyone with absolute pitch is actually doing, if the reports from the inside I've heard are accurate. It's definitely how I would start converting my relative pitch proficiency into absolute pitch.

I know what you mean, and I think that, similar to what Richard Kennaway says below, we need to teach people new to the sequences and to exotic decision theories not to drive off a cliff because of a thread they couldn't resist pulling.

I think we really need something in the sequences about how to tell whether your wild-seeming idea is remotely likely, i.e. a "How to Trust Your SatNav" post. The basic content of the post: remember to stay grounded, and ask how likely this wild new framework really is. Ask others who can understand and assess your theory, and if they say you're getting some things wrong, take them very seriously. This doesn't mean you can't follow your own convictions; it just means you should do so in a way that minimises potential harm.

Now, having read the content you're talking about, I think a person needs to already be pretty far gone epistemically before this info hazard can "get them," and I mean both the original idea-haver and those who receive it via transmission. But I think it will still help very new readers not to drive off so many cliffs. It's almost like some of them want to, which is... its own class of concerns.
