Mati_Roy

Comments

AGI Predictions

So the following, for example, don't count as "existential risk caused by AGI", right?

  • many AIs
    • an economy run by advanced AIs amplifying negative externalities, such as pollution, leading to our demise
    • an em world with minds evolving to the point of being non-valuable anymore ("a Disneyland without children")
    • a war by transcending uploads
  • narrow AI
    • a narrow AI killing all humans (e.g. by designing grey goo, a virus, etc.)
    • a narrow AI eroding trust in society until it breaks apart
  • an AGI as an intermediary cause, but not the ultimate cause
    • a simulation shutdown because our AI didn't have a decision theory for acausal cooperation
    • an AI convincing a human to destroy the world
Straight-edge Warning Against Physical Intimacy

Strong like! For me, this is an important consideration for preserving my identity, staying productive and mentally sharp, and remaining as independent as I want. :)

Predictions made by Mati Roy in early 2020

The prediction is (emphasis added):

At least one other CRISPR baby will be born by January 2030.

Does the article you linked mention a second one? (I doubt it, because I looked into this after that article was published, and even wrote a wiki page on it.)

Mati_Roy's Shortform

There's the epistemic discount rate (e.g. the probability of a simulation shutdown per year) and the value discount rate (e.g. you do the most fun things first, so life is less valuable per year as you get older).

Asking "What value discount rate should be applied?" is a category error. "Should" statements are about actions taken toward values, not about the values themselves.

As for "What epistemic discount rate should be applied", it depends on things like "probability of death/extinction per year".
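
To make the epistemic discount concrete, here is a minimal sketch in code; the 1% annual probability is a made-up illustration, not a claim about the actual risk level:

```python
# Epistemic discounting: value realized t years from now is weighted by
# the probability that the world still exists to realize it.
P_EXTINCTION_PER_YEAR = 0.01  # made-up illustrative number

def epistemic_discount_factor(years: int) -> float:
    """Probability of surviving `years` more years, i.e. the weight
    applied to value realized that far in the future."""
    return (1 - P_EXTINCTION_PER_YEAR) ** years

for t in (1, 10, 50):
    print(f"year {t}: weight {epistemic_discount_factor(t):.3f}")
```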

Mati_Roy's Shortform

Suggestion for retroactive prizes: award the prize to the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved (given money is probably not worth much to most dead people). "Undervalued" meaning the amount the post is worth minus the amount the writer received.

Announcing the Forecasting Innovation Prize

I'm curious, though: do you have thoughts on what a proposal would look like?

Suggestion: paying the prize to the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved. "Undervalued" meaning the amount the post is worth minus the amount the writer received.

What features would you like a prediction platform to have?

2) a) probability mass distribution over time and some other value

I would like to be able to easily make predictions on a question such as "What will be the price of neuropreservation at Alcor in 2030?", but for many years at the same time.

I was thinking about what would be a good way to do that; here's a thought.

Instead of plotting probability mass over price for a specific year, we could plot price over years for a specific probability.

So to take the same example, the question could become: "For each year over the coming century, what price is there a 50% chance that Alcor will charge more than?" You could repeat the same question for the 10% and 90% marks.

Or you could just assume a specific distribution, like a normal distribution. Then you only have 2 curves to make:

  • What's the mean of Alcor's expected prices over the coming century?
  • What's the standard deviation of Alcor's expected prices over the coming century?

That way, you can easily get a probability mass distribution over price at every point in time.
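
As a rough sketch of how a platform could store this, assuming (hypothetically) normal distributions and made-up curve values:

```python
# Sketch: a forecaster submits two curves (mean and standard deviation of
# the predicted price for each year); the platform then recovers a full
# normal distribution over price for any year. All numbers are made up.
from statistics import NormalDist

# Hypothetical submitted curves: year -> mean price, year -> std of price
mean_curve = {2030: 90_000, 2050: 120_000, 2100: 200_000}
std_curve = {2030: 15_000, 2050: 30_000, 2100: 80_000}

def price_distribution(year: int) -> NormalDist:
    """Distribution over Alcor's price implied by the two curves."""
    return NormalDist(mu=mean_curve[year], sigma=std_curve[year])

# Recover point probabilities and the 10%/50%/90% marks for 2050:
dist = price_distribution(2050)
print(f"P(price > $150k in 2050) = {1 - dist.cdf(150_000):.2f}")
print([round(dist.inv_cdf(q)) for q in (0.10, 0.50, 0.90)])
```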

x-post: https://www.metaculus.com/questions/935/discussion-topic-what-features-should-metaculus-add/#comment-44545

Taking Ideas Seriously is Hard

Nitpick:

If you’re taking compounding seriously, you’d learn the skills with the greatest return first.

I don't see how that follows. Whether you multiply your initial value by 1.3 before 1.1 or the other way around, the end result is the same, since multiplication is commutative.

Edit: ah, maybe you meant to learn the skill that unlocks the most opportunities for further learning.
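
A minimal check of that commutativity point, with made-up multipliers:

```python
import math

# One-shot multiplicative returns commute: the final value doesn't
# depend on the order in which the multipliers are applied.
initial = 100.0
a = initial * 1.3 * 1.1  # learn the 1.3x skill first
b = initial * 1.1 * 1.3  # learn the 1.1x skill first
assert math.isclose(a, b)  # both come out to ~143.0
```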
