Nebu


Comments

Would you like me to debug your math?

as they code I notice nested for loops that could have been one matrix multiplication.


This seems like an odd choice for your primary example.

  • Is the primary concern that a sufficiently smart compiler could take your matrix multiplication and turn it into a vectorized instruction?
    • Is it only applicable in certain languages then? E.g. do JVM languages typically enable vectorized instruction optimizations?
  • Is the primary concern that a single matrix multiplication is more maintainable than nested for loops?
    • Is it only applicable in certain domains then (e.g. machine learning)? Most of my data isn't modelled as matrices, so would I need some nested for loops anyway to populate a matrix to enable this refactoring?

Is it perhaps worth writing a (short?) top-level post with a worked-out example of the refactoring you have in mind, and why matrix multiplication would be better than nested for loops?
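
To make concrete what I'm picturing, here is a minimal sketch (in Python with NumPy, which I'm assuming is representative of the kind of setting you had in mind) of nested loops collapsed into a single matrix multiplication:

```python
import numpy as np

def transform_loops(data, weights):
    """Apply a linear transform with explicit nested for loops."""
    n, d_in = data.shape
    d_out = weights.shape[1]
    out = np.zeros((n, d_out))
    for i in range(n):
        for j in range(d_out):
            for k in range(d_in):
                out[i, j] += data[i, k] * weights[k, j]
    return out

def transform_matmul(data, weights):
    """The same computation expressed as one matrix multiplication."""
    return data @ weights

data = np.random.rand(100, 8)
weights = np.random.rand(8, 3)
assert np.allclose(transform_loops(data, weights), transform_matmul(data, weights))
```

If this matches what you meant, my question is whether the win is the speed of the vectorized `@`, the readability, or both, and whether it still applies when the data doesn't already live in an array.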

‘Maximum’ level of suffering?
Answer by Nebu · Jul 21, 2020

For something to experience pain, some information needs to exist (e.g. in the mind of the sufferer, informing them that they are experiencing pain). There are known information limits, e.g. https://en.wikipedia.org/wiki/Bekenstein_bound or https://en.wikipedia.org/wiki/Landauer%27s_principle

These limits are related to entropy, space, energy, etc., so if you further assume the universe is finite (or, perhaps equivalently, that the malicious agent can only access a finite portion of the universe due to e.g. speed-of-light limits), then there is an upper bound on the amount of information possible, which implies an upper bound on the amount of pain possible.
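
As a rough back-of-the-envelope illustration of the Bekenstein bound (my own numbers, chosen only for concreteness):

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def bekenstein_bound_bits(radius_m, energy_j):
    """Upper bound (in bits) on the information contained in a sphere
    of the given radius enclosing the given total energy."""
    return 2 * math.pi * radius_m * energy_j / (hbar * c * math.log(2))

# Example: a sphere roughly the size of a human head (~0.1 m radius)
# with the mass-energy of about 1 kg (E = m * c^2).
print(f"{bekenstein_bound_bits(0.1, 1.0 * c**2):.3e} bits")  # ~2.6e42 bits
```

The inputs are arbitrary; the point is only that any finite radius and finite energy yield a finite number of bits, and hence a bound on how much suffering can be represented.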

Operationalizing Newcomb's Problem

Yeah, which I interpret to mean you'd "lose" (where getting $10 is losing and getting $200 is winning). Hence this is not a good strategy to adopt.

Operationalizing Newcomb's Problem
99% of the time for me, or for other people?

99% for you (see https://wiki.lesswrong.com/wiki/Least_convenient_possible_world )

More importantly, when the fiction diverges by that much from the actual universe, it takes a LOT more work to show that any lessons are valid or useful in the real universe.

I believe the goal of these thought experiments is not to figure out whether you should, in practice, sit in the waiting room or not (honestly, nobody cares what some rando on the internet would do in some rando waiting room).

Instead, the goal is to provide unit tests for different proposed decision theories, as part of research on developing self-modifying, superintelligent AI.

12020: a fine future for these holidays

Any recommendations for companies that can print and ship the calendar to me?

Operationalizing Newcomb's Problem

Okay, but then what would you actually do? Would you leave before the 10 minutes is up?

Operationalizing Newcomb's Problem
why do I believe that its accuracy for other people (probably mostly psych students) applies to my actions?

Because historically, in this fictional world we're imagining, when psychologists have said that a device's accuracy was X%, it turned out to be within 1% of X%, 99% of the time.

Overcoming Akrasia/Procrastination - Volunteers Wanted

I really should get around to signing up for this, but...

How much background technical knowledge do LW readers have?

It seems the survey is now closed, so I cannot take it at the time I'm seeing this post.

suppose Bob is trying to decide to go left or right at an intersection. In the moments where he is deciding to go either left or right, many nearly identical copies in nearly identical scenarios are created. They are almost entirely all the same, and if one Bob decides to go left, one can assume that 99%+ of Bobs made the same decision.

I don't think this assumption is true (and thus perhaps you need to put more effort into checking/arguing that it's true, if the rest of your argument relies on it). In the moments where Bob is trying to decide whether to go left or right, there is no a priori reason to believe he would choose one side over the other: he's still deciding.

Bob is composed of particles with quantum properties. For each such property, there is no a priori reason to assume it contributes (on average) more strongly to causing Bob to decide to go left than to go right.

For each quantum property of each particle, an alternate universe is created where that property takes on some value. In a tiny proportion of these universes (though still infinitely many of them), "something weird" happens, like Bob spontaneously disappearing, Bob spontaneously becoming Alice, the left and right paths disappearing and leaving Bob stranded, etc. We'll ignore these possibilities for now.

In some of the remaining "normal" universes, the properties of the particles have proceeded in such a way as to trigger Bob to think "I should go Left", and in the other "normal" universes, they have proceeded in such a way as to trigger Bob to think "I should go Right". There is no a priori reason to think that the proportion of the first type of universe is higher or lower than the proportion of the second type. That is, being maximally ignorant, you'd expect about 50% of Bobs to go left and 50% to go right.
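
To illustrate that "maximally ignorant" step with a toy model (my own sketch, not something from the quoted post): if the prior over each branch's left-weight is symmetric under swapping left and right, the expected fraction of Bobs going left comes out to one half.

```python
import random

def expected_left_fraction(num_branch_points=10_000, seed=0):
    """Toy model: each branch point gets a 'left' weight drawn from a prior
    that is symmetric under swapping left and right; with no information
    favouring either side, the average left-weight comes out near 0.5."""
    rng = random.Random(seed)
    left_weights = [rng.random() for _ in range(num_branch_points)]
    return sum(left_weights) / num_branch_points

print(expected_left_fraction())  # roughly 0.5
```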

Going a bit more meta, if MWI is true, then decision theory "doesn't matter" instrumentally to any particular agent. No matter what arguments you (in this universe) provide for one decision theory being better than another, there exists an alternate universe where you argue for a different decision theory instead.
