niplav

I operate by Crocker's rules.

Courtesy of the programming language checklist.

So you're proposing a new economic policy. Here's why your policy will not work.

Your policy depends on science/theorizing that:
    ☐ has been replicated only once
    ☐ has failed to replicate
    ☐ for which there exist no replication attempts
    ☐ was last taken seriously sometime around 1900
    ☐ requires a dynamic stochastic general equilibrium model with 200 free parameters to track reality perfectly
Your policy would:
    ☐ disincentivize good things
    ☐ incentivize bad things
    ☐ both
    ☐ be wildly unpopular, even though you think it's the best thing since sliced bread (it's not)
    ☐ You seem to think that taking options away from people helps them
Your policy just reinvents
    ☐ land-value taxes, but worse
    ☐ universal basic income, but worse
    ☐ price discrimination, but worse
    ☐ demand subsidy, but worse
    ☐ demand subsidy, better, but that's still no excuse
    ☐ Your policy sneakily redistributes money from poor to rich people
    ☐ Your policy only works if every country on earth accepts it at the same time
    ☐ You actually have no idea what failure/success of your policy would look like
You claim it fixes
    ☐ climate change
    ☐ godlessness
    ☐ police violence
    ☐ wet socks
    ☐ teenage depression
    ☐ rising rents
    ☐ war
    ☐ falling/rising sperm counts/testosterone levels
You seem to assume that
    ☐ privatization always works
    ☐ privatization never works
    ☐ your country will never become a dictatorship
    ☐ your country will always stay a dictatorship
    the cost of coordination is
        ☐ negligible
        ☐ zero
        ☐ negative
        ☐ Your policy is a Pareto-worsening
In conclusion,
    ☐ You have copied and mashed together some good ideas with some mediocre ideas
    ☐ You have not even tried to understand basic economics/political science/sociology concepts
    ☐ Living under your policy is an adequate punishment for advocating for it

One could introduce 🌵 for such users.

I like more.

Some people definitely say they believe climate change will kill all humans.

Answer by niplav, May 19, 2023

In a current caffeine self-experiment I'm tracking the following variables:

  • Meditation performance using Meditavo (unfortunately only exportable using the premium version)
    • Mindfulness
    • Concentration
  • Cognition using Anki flashcards scores, exported from the Collection.anki2 by using sqlite3
  • Mood via Moodpatterns
    • Happiness, Contentment and Stress
    • I use the additional interested/disinterested spectrum for tracking horniness
  • Creativity and productivity for the day via a simple self-written script that activates via scron
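The Anki export mentioned above can be done with a short query: `revlog` is the table in Anki's SQLite collection file that stores one row per review, with `id` holding the review timestamp in epoch milliseconds and `ease` the button pressed (1 = Again … 4 = Easy). A minimal sketch, using an in-memory database with made-up fixture rows for illustration; for a real export you would connect to the `Collection.anki2` file instead of `":memory:"`:

```python
import sqlite3

# Sketch: compute mean ease per day from Anki's revlog table.
# The fixture rows below are invented; connect to your actual
# Collection.anki2 file to export real review scores.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE revlog (id INTEGER, ease INTEGER)")
con.executemany(
    "INSERT INTO revlog VALUES (?, ?)",
    [
        (1_700_000_000_000, 3),  # two reviews on one day...
        (1_700_000_001_000, 4),
        (1_700_086_400_000, 2),  # ...one review the next day
    ],
)

rows = con.execute(
    """
    SELECT date(id / 1000, 'unixepoch') AS day,
           avg(ease) AS mean_ease,
           count(*)  AS reviews
    FROM revlog
    GROUP BY day
    ORDER BY day
    """
).fetchall()
con.close()
print(rows)
```

Dividing `id` by 1000 converts milliseconds to seconds so SQLite's `date(..., 'unixepoch')` can bucket reviews by calendar day.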

More here. Maybe you could try to blind yourself, e.g. have half of your injections be saline and the other half testosterone? (I don't know much about how testosterone is administered; maybe the benefits accrue slowly enough that randomization doesn't make sense.)
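The blinding idea above could be sketched as follows; this is a hypothetical helper, not anyone's actual protocol, which generates a shuffled label key so you don't know which pre-filled syringe contains which substance until you unblind at analysis time:

```python
import random

def blinded_schedule(n_doses, seed=None):
    """Return a shuffled list of (label, contents) pairs with a 50/50
    split between 'saline' and 'testosterone' (hypothetical helper)."""
    rng = random.Random(seed)
    contents = ["saline"] * (n_doses // 2)
    contents += ["testosterone"] * (n_doses - n_doses // 2)
    rng.shuffle(contents)
    # Label syringes by position only; store the key away from daily use.
    return [(f"syringe-{i:02d}", c) for i, c in enumerate(contents)]

key = blinded_schedule(10, seed=42)
print([label for label, _ in key])  # only labels are visible day-to-day
```

Keep the full `key` sealed (e.g. with a friend) and consult it only after the data are collected.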

Happy to respond to more questions as well, but I haven't finished my experiment yet so I can't provide any code for analysis.

Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.

He usually describes himself as a decision theorist when asked for a description of his job.

Man I think I am providing value to the world by posting and commenting here. If it cost money I would simply stop posting here, and not post anywhere else.

The value flows in both directions. I'm fine not getting paid but paying is sending a signal of "what you do here isn't appreciated".

(Maybe I'd feel differently if the money were paid out to particularly good posters? But then, Goodhart's law.)

I encourage you to fix the mistake. (I can't guarantee that the fix will be incorporated, but for something this important it's worth a try).

I notice I am confused. How do you violate an axiom (completeness) without behaving in a way that violates completeness? I don't think you need an internal representation.

Elaborating more, I am not sure how you could even display behavior that violates completeness. If you're given a choice between only two universe-histories, and your preferences are incomplete over them, what do you do? As soon as you reliably act to choose one over the other, for any such pair, you have algorithmically revealed complete preferences.

If you don't reliably choose one over the other, what do you do then?

  • Choose randomly? But then I'd guess you are again Dutch-bookable. And according to which distribution?
  • Your choice is undefined? That seems both kinda bad and also Dutch-bookable to me, tbh. Also, I don't see the difference between this and random choice (short of going up in flames, which would constitute a third, hitherto unassumed option).
  • Go away/refuse the trade &c.? But this is denying the premise! You only have the two universe-histories to choose between! I think what happens with humans is that they are often incomplete over very low-ranking worlds and are instead searching for policies to find high-ranking worlds while not choosing. I think incompleteness might be fine if there are two options you can guarantee to avoid, but with adversarial dynamics that becomes more and more difficult.
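The "choose randomly and get Dutch-booked" worry above can be made concrete with a toy money pump. The setup is invented for illustration: the agent strictly prefers outcome A over A-minus-a-fee, but is incomparable between A and B and between A-minus-a-fee and B, and flips a coin on incomparable pairs. An adversary then offers the trade sequence A → B → A-minus-a-fee, which sometimes walks the agent back to where it started, minus the fee:

```python
import random

FEE = 1.0  # value lost going from "A" to "A_minus"

def accepts_trade(current, offered):
    """Toy agent policy: accept strict improvements, reject strict
    worsenings, flip a coin on incomparable pairs."""
    strictly_better = {("A_minus", "A")}  # only comparable pair
    if (current, offered) in strictly_better:
        return True
    if (offered, current) in strictly_better:
        return False
    return random.random() < 0.5  # incomparable: choose randomly

def run_pump(trials=10_000):
    """Adversary offers A -> B, then B -> A_minus. Both trades are
    individually acceptable (coin flips), but accepting both loses FEE."""
    losses = 0.0
    for _ in range(trials):
        holding = "A"
        if accepts_trade(holding, "B"):
            holding = "B"
            if accepts_trade(holding, "A_minus"):
                losses += FEE
    return losses / trials

print(run_pump())  # ≈ 0.25: both coin flips land "accept" 1/4 of the time
```

Each trade is accepted with probability 1/2, so the agent loses the fee about a quarter of the time; the expected loss per round is FEE/4, which is the Dutch-book flavor of the argument, under these toy assumptions.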

To clarify the comment for @tjaffee, a superintelligence could do the following

  • Use invalid but extremely convincing arguments that make the "doomer"[1] change his mind. This appears realistic because sometimes people become convinced of false things through invalid argumentation[2].
  • Give a complete plan for an aligned superintelligence, shutting itself down in the process, creating a true and probably convincing argument (this is vanishingly unlikely).

  1. Ugh. ↩︎

  2. Like maybe once or twice. ↩︎
