torekp

I think you can avoid the Reddit user's criticism if you go for an intermediate risk-averse policy. On that policy, there being at least one world without catastrophe is highly important, but additional good worlds also count more heavily than a standard utilitarian would say, up until good worlds approach about half (1/e?) of the total weight under the Born rule.
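One way to make "intermediate risk aversion" concrete is to apply a concave weighting to the Born measure of good worlds, so the first increments of "at least one world without catastrophe" dominate. A minimal sketch; the exponential weighting function is my illustration, not something from the post or the comment:

```python
import math

def standard_value(p_good):
    # Standard (risk-neutral) utilitarian: value is linear in the
    # Born measure of worlds without catastrophe.
    return p_good

def risk_averse_value(p_good):
    # Intermediate risk-averse policy: concave in p_good, so early
    # increments of good-world measure count for more than late ones.
    # 1 - exp(-e * p) is one illustrative choice: its marginal value
    # falls to 1/e of the initial rate exactly at p = 1/e.
    return 1 - math.exp(-math.e * p_good)

for p in (0.0, 0.1, 1 / math.e, 0.9):
    print(f"p={p:.2f}  linear={standard_value(p):.3f}  "
          f"concave={risk_averse_value(p):.3f}")
```

The concave curve sits above the linear one for small measures of good worlds and flattens out later, which matches the comment's idea that value saturates somewhere around the 1/e mark.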

However, the setup seems to assume that there is little enough competition that "we" can choose a QRNG approach without being left behind. You touch on related issues when discussing costs, but this merits separate consideration.

"People on the autistic spectrum may also have the experience of understanding other people better than neurotypicals do."

I think this casts doubt on the alignment benefit. It seems a priori likely that an AI, lacking the relevant evolutionary history, will be in an exaggerated version of the autistic person's position. The AI will need an explicit model. If in addition the AI has superior cognitive abilities to the humans it's working with - or expects to become superior - it's not clear why simulation would be a good approach for it. Yes, that works for humans, with their hardware accelerators and their clunky explicit modeling, but...

I read you as saying that simulation is what makes preference satisfaction a natural thing to do. If I misread, please clarify.

Update:  John Collins says that "Causal Decision Theory" is a misnomer because (some?) classical formulations make subjunctive conditionals, not causality as such, central.  Cited by the Wolfgang Schwarz paper mentioned by wdmcaskill in the Introduction.

I have a terminological question about Causal Decision Theory.

Most often, this [causal probability function] is interpreted in counterfactual terms (so P(S_A) represents something like the probability of S coming about were I to choose A) but it needn't be.

Now it seems to me that causation is understood to be asymmetric, i.e. we can have at most one of "A causes B" and "B causes A".  In contrast, counterfactuals are not asymmetric: "if I chose A then my simulation would also do so" and "if my simulation chose A then I would also do so" are both true.  Brian Hedden's Counterfactual Decision Theory seems like a version of FDT.

Maybe I am reading the quoted sentence without taking context sufficiently into account, and I should understand "causal counterfactual" where "counterfactual" was written.  Still, in that case, I think it's worth noting that asymmetry is a distinguishing mark of CDT in contrast to FDT.
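The contrast between the two dependence relations can be put formally. A minimal sketch, where the arrow symbols (a causal arrow and Lewis-style box-arrow for the counterfactual conditional) are my notation, not the paper's:

```latex
% Causation: at most one direction can hold between distinct events.
\forall A\, \forall B:\ (A \rightsquigarrow B) \rightarrow \neg\,(B \rightsquigarrow A)

% Counterfactual dependence carries no such constraint. Both of
%   (\text{I choose } A) \mathrel{\Box\!\!\rightarrow} (\text{my simulation chooses } A)
%   (\text{my simulation chooses } A) \mathrel{\Box\!\!\rightarrow} (\text{I choose } A)
% can be true together, which is the feature FDT-style theories exploit.
```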

I love #38

A time-traveller from 2030 appears and tells you your plan failed. Which part of your plan do you think is the one ...?

And I try to use it on arguments and explanations.

Right, you're interested in syntactic measures of information, more than physical ones. My bad.

the initial conditions of the universe are simpler than the initial conditions of Earth.

This seems to violate a conservation of information principle in quantum mechanics.

On #4, which I agree is important, there seems to be some explanation left implicit or left out.

#4: Middle management performance is inherently difficult to assess. Maze behaviors systematically compound this problem.

But middle managers who are good at producing actual results will therefore want to decrease mazedom, in order that their competence be recognized.  Is it, then, that incompetent people will be disproportionately attracted to - and capable of crowding others out from - middle management?  That they will be attracted is a no-brainer, but that they will crowd others out seems to depend on further conditions not specified.  For example, if an organization lets people advance in two ways, one through middle management, another through technical fields, then it naturally diverts the competent away from middle management.  But short of some such mechanism, it seems that mazedom in middle management is up for grabs.

When I read

To be clear, if GNW is "consciousness" (as Dehaene describes it), then the attention schema is "how we think about consciousness". So this seems to be at the wrong level! [...] But it turns out, he wants to be one level up!

I thought, thank goodness, Graziano (and steve2152) gets it. But in the moral implications section, you immediately start talking about attention schemas rather than simply attention. Attention schemas aren't necessary for consciousness or sentience; they're necessary for meta-consciousness. I don't mean to deny that meta-consciousness is also morally important, but it strikes me as a bad move to skip right over simple consciousness.

This may make little difference to your main points. I agree that "There are (presumably) computations that arguably involve something like an 'attention schema' but with radically alien properties." And I doubt that I could see any value in an attention schema with sufficiently alien properties, nor would I expect it to see value in my attentional system.

how to quote

Paste text into your comment and then select/highlight it. Formatting options will appear, including a quote button.
