benjamin.j.campbell

Comments

Limerence Messes Up Your Rationality Real Bad, Yo

It's worse than that. I've been aware of this since I was a teenager, but apparently no amount of correction is enough. These days I try to avoid making decisions that will be affected in either direction by limerence, or I pre-commit firmly to a course of action and then trust that, even if I later want to update the plan, I'll regret not doing what I pre-committed to.

The Brain That Builds Itself

Seconded. It hits the perfect level of detail to be un-put-down-able while still explaining everything thoroughly enough to be gripping and well understood.

An inquiry into the thoughts of twenty-five people in India

Those are some extreme outliers for age. Was that self-reported, or some kind of automated information gathering related to their Positly profiles?

[$20K in Prizes] AI Safety Arguments Competition

This is targeted at all 3 groups:

  • Every year, our machine learning models grow more powerful and better at performing the same forms of reasoning as humans.
  • Every year, the amount of computing power we can throw at these models ratchets ever higher.
  • Every year, each human's baseline capacity for thinking and reasoning remains exactly the same.

There is a time coming in the next decade or so when we will have released a veritable swarm of different genies that are able to understand and improve themselves better than we can. At that point, the genies will not be going back into their bottles, so we can only pray they like us.

Have You Tried Hiring People?

By this stage of their careers, they already have those bits of paper. MIRI are asking people who don't a priori highly value alignment research to jump through extra hoops they haven't already cleared, for what they probably perceive as a slim chance of a job outside their wheelhouse. I know a reasonable number of hard-science academics, and I don't know any who would put that amount of effort into an application for a job they expected to attract many more-qualified applicants. The very phrasing makes it sound like they expect hundreds of applicants and are trying to be exclusive. If nothing else is changed, that should be.

How would you learn absolute pitch?

I gave this an upvote because it runs directly counter to my current belief about how relative and absolute pitch work and interact with each other. I agree that if someone's internalised absolute pitch can consistently identify out-of-tune notes, even after minutes of repetition, this is a strong argument against my position. On the other hand, maybe they do produce one internal reference note of set frequency, and comparing known intervals against it returns "out of tune" every time. I can see either story being true, but I would like to hunt down some more information on which of these models is more accurate.

How would you learn absolute pitch?

I think your suggestion is effectively what everyone with absolute pitch is actually doing, if the reports from the inside I've heard are accurate. It's definitely how I would start converting my relative-pitch proficiency into absolute pitch.

Yitz's Shortform

I know what you mean, and I think that, as Richard Kennaway says below, we need to teach people new to the sequences and to exotic decision theories not to drive off a cliff because of a thread they couldn't resist pulling.

I think we really need something in the sequences about how to tell whether your wild-seeming idea is remotely likely, i.e. a "How to Trust Your SatNav" post. The basic content: remember to stay grounded, and ask how likely this wild new framework really is. Ask others who can understand and assess your theory, and if they say you're getting some things wrong, take them very seriously. This doesn't mean you can't follow your own convictions; it just means you should do it in a way that minimises potential harm.

Now, having read the content you're talking about, I think a person needs to already be pretty far gone epistemically before this infohazard can "get them," and I mean both the original idea-haver and those who receive it via transmission. But I think it would still help very new readers not to drive off so many cliffs. It's almost like some of them want to, which is... its own class of concerns.

Yitz's Shortform

It's great that you have that satnav. I worry about people like me. I worry about being incapable of leaving those thoughts alone until I've pulled the thread enough to be sure I should ignore it. In other words, if I think there's a chance something like that is true, I do want to trust the satnav, but I also want to be sure my "big if true" discovery genuinely isn't true.

Of course, a good inoculation against this has been reading some intense blogs by people who've adopted alternative decision theories, and watching those lead them down really scary paths.

I worry "there but for the grace of chance go I." But that's not quite right, and being able to read that content and not go off the deep end myself is evidence that maybe my satnav is functioning just fine after all.

I suspect I'm talking about the same exact class of infohazard as mentioned here. I think I know what's being veiled and have looked it in the eye.

Omicron variolation?

Within reason, I can see how it might be wise for you. I think the largest uncertainty this question hinges upon is whether hospitals in your area have the capacity to treat you if your case is unexpectedly bad. You can get a good sense of this by monitoring available ICU beds in the immediate/short term, but beyond a week it's hard to know.

And here's a perhaps more important question, though one far harder to model: will hospitals in my area have more or less capacity to treat me later, if I just catch it at the naturally occurring rate?

I'm in NSW, Australia, so even though Omicron is somewhat milder, I'm not inclined to catch it right now. All the hospitals within a reasonable range are getting full and making hard choices. So if I'm going to choose an ideal time to contract it, I'd have to shoot for just after the Omicron wave burns itself out. I'll get the Pfizer booster in February (after the initial 2x AZ mid-2021), but that's a month later than I was hoping for.

Thinking about hindsight counterfactuals on what I could have done differently is confusing in its own way (yes, I agree hindsight counterfactuals are verboten as actual evidence). I should have booked my booster before everyone else here realised Omicron was going to significantly change the situation, or that could have been a good time to inoculate by contracting it. Unfortunately, by the time I could possibly have had enough information to know it was a good idea, it was too late to beat the rush of cases. I'm not sure that dynamic would have been in play where you live, but it fascinates me that 3-4 weeks was long enough to resolve a lot of the very salient unknowns, perhaps almost enough to be sure about when you'd want to catch it. I don't think we yet know when people in different locations should contract Covid intentionally. My inner jury is out, but I have the sense that I missed the best time I could have caught it.
