All of lukstafi's Comments + Replies

Rodney Brooks' predictions made at the beginning of 2018:

Is veganism morally correct?

Moral anti-realists do not claim that people don't have preferences. Rather, they claim that there are no preference-assumption-free facts regarding comparisons between preference systems. Therefore moral anti-realists will not seek such facts. Moral realists may seek such facts in order to improve or correct their preferences.

Is veganism morally correct?

The implications of moral anti-realism for action revolve around pursuing facts to feed into terminal preference updates.

Do you mind explaining further?
Is veganism morally correct?

"if anti-realism is true, it doesn't matter [to us] what we do" -- that's false. Whether something does matter to us is a fact independent of whether something ought to matter to us.

Rationality is a norm. A rational agent will believe what they rationally-ought to believe, and that includes any implications of anti-realism for action.
Reinforcement Learning Study Group

I advise using JAX instead of Tensorflow.
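To illustrate the recommendation, here is a minimal sketch of what JAX makes concise: automatic differentiation of an ordinary Python function via `jax.grad`. The `loss` function and the numbers are purely illustrative.

```python
import jax
import jax.numpy as jnp

def loss(w):
    # A toy quadratic loss; jax.grad differentiates it automatically.
    return jnp.sum((w - 3.0) ** 2)

grad_loss = jax.grad(loss)       # gradient function, obtained in one line
g = grad_loss(jnp.array([0.0]))  # gradient at w = 0
```

The same pattern extends to `jax.jit` and `jax.vmap` for compilation and batching, which is much of why one might prefer it over Tensorflow's graph-building style.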

Pain is not the unit of Effort

I recently read David Goggins' "Can't Hurt Me". On one level it does glorify superhuman pain tolerance. But a constructive perspective on such attitudes is that they illustrate courage. Do not tolerate pain, laugh at it! Do not tense under a cold shower, relax into it. Do not bear problems, solve them.

I strongly believe there are many ways to respond to difficult sensory inputs, that we can learn new responses, and people like David Goggins exemplify some of this, but there are even more extreme examples, such as self-immolation by Buddhist monks. I learned a first step on the path in myself, in a very trivial way, by watching my cats go out into the rain. They completely ignored it. So I tried that, instead of hunching up when I went out into the rain. And the suffering - in response to the same sensory input - immediately went away. This turned out to generalise to some of the inputs during workouts, although by no means all of them. DOMS, for example, appears as a sign of a hard workout the day before, rather than as pain per se. I sincerely doubt I could get even to the Goggins level, but this is clearly a learnable thing. So maybe at least some of those who appear to experience pain and suffering are actually experiencing something else, but describing what - to you - would be pain?
Industrial literacy

The fridge / the freezer!

AlphaStar: Impressive for RL progress, not for AGI progress

Would you consider MuZero an advance in causal reasoning? Despite intentionally not representing causality / explicit model dynamics, it supports hypothetical reasoning via state tree search.

Do you think there's a chance of MuZero - AlphaStar crossover?

What Evidence Is AlphaGo Zero Re AGI Complexity?

The general tool: the residual-network variant of convolutional NNs, plus MCTS-like variable-depth tree search. Prerequisites: the input can be presented as K layers of N-D data (where N=1,2,3... not too large), and the action space is discrete. If the actions are not discrete, an additional small module would be needed to quantize the action space based on the neural network's action priors.
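The quantization module mentioned above could be sketched as follows. This is a hypothetical illustration, not any published architecture: a continuous 1-D action interval is binned, and the bins are ranked by a stand-in for the network's action prior (here a toy Gaussian-shaped function) so the tree search only expands the most promising candidates.

```python
import math

def quantize_actions(low, high, n_bins):
    """Discretize a continuous 1-D action interval into n_bins actions,
    each represented by its bin midpoint."""
    width = (high - low) / n_bins
    return [low + width * (i + 0.5) for i in range(n_bins)]

def top_k_actions(actions, prior, k):
    """Keep the k discrete actions ranked highest by the network's
    action prior, for the tree search to expand."""
    return sorted(actions, key=prior, reverse=True)[:k]

# Toy stand-in for a learned action prior, peaking near action = 0.3.
def toy_prior(a):
    return math.exp(-((a - 0.3) ** 2) / 0.01)

acts = quantize_actions(-1.0, 1.0, 20)
best = top_k_actions(acts, toy_prior, 3)
```

A real module would refine the binning adaptively around the prior's modes, but the principle — discretize, then prune by prior — is the same.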

Robert Aumann on Judaism

Perhaps a satisfactory answer can be found in "Jewish Philosophy as a Guide to Life: Rosenzweig, Buber, Levinas, Wittgenstein" by Hilary Putnam (who seemed to me to be a reasonable philosopher, but converted to Judaism). I've just started listening to its audiobook version, prompted by this post.

What attracts smart and curious young people to physics? Should this be encouraged?

At high-school level, physics has perhaps the richest tightly-knit concept structures.

Learn (and Maybe Get a Credential in) Data Science

Including signaling "thanks" to the university. :-)

an ethical puzzle about brain emulation

Reminds me of the error -- on a charitable reading, of the characters, but perhaps of the author -- in "Permutation City". There's no such thing as out-of-order simulation.

Does the simulation argument even need simulations?

Only in an objective modal sense. Beliefs are probabilistic constraints over observations anticipated given a context. So in the example with stars moving away, the stars are still observables, because there is a counterfactual context in which we observe them from nearby (by traveling with them, etc.).

Does the simulation argument even need simulations?

(1) It's totally tongue-in-cheek. (2) By "modern" I don't mean "contemporary", I mean "from Descartes onwards". (3) By "notes" I mean criticisms. (4) The point is that I see responses to the simulation (a.k.a. demon) argument recurring in philosophy.

Ahh, that one makes a difference in connotation. There certainly seems to be more of that than I would judge worthwhile.
Does the simulation argument even need simulations?

Modern philosophy is just a set of notes on the margins of Descartes' "Meditations".

That is the most damning criticism of philosophy I have ever seen.
Thought experiment: The transhuman pedophile

Persons do not have fixed value systems anyway. A value system is a partly-physiologically-implemented theory of what is valuable (good, right, etc.) One can recognize a better theory and try to make one's habits and reactions fit to it. Pedophilia is bad if it promotes a shallower reaction to a young person, and good if it promotes a richer reaction, it depends on particulars of brain-implementing-pedophilia. Abusing anyone is bad.

Eudaimonic Utilitarianism

For the Nazis' hatred of Jews to be rational, it is not necessary that there be reasons for hating Jews, only that the reasons for not hating Jews do not outweigh the reasons for hating Jews. But their reasons for hating Jews are either self-contradictory or, when properly worked out, in fact support not hating Jews.


Let me get this straight. You want to promote the short-circuiting of the mental circuit of promotion?

Theists are wrong; is theism?

If God created the universe, then that's some evidence that He knows a lot. Not overwhelming evidence, since some models of creation might not require the creator to know much.

How I Am Productive

Set up automatic filters.

Reality is weirdly normal

As a function of how long the universe will exist? ETA: a short period of time might be significantly located.

This reminds me of a website where you can check whether the Earth has been destroyed.
Reality is weirdly normal

The absurd claim is "there is nothing you ought to do or ought to not do". The claim "life is tough" is not absurd. ETA: existentialism in the absurdist flavor (as opposed to for example the Christian flavor) is a form of value anti-realism which is not nihilism. It denies that there are values that could guide choices, but puts intrinsic value into making choices.

Reality is weirdly normal

I would still be curious how much I can get out of life in billions of years.

You are curious now. You might become less curious after the first billion years. :-) PS. I'm not advocating deathism. Just saying that any period of time is insignificant, when compared to a much larger period of time. The billion-year happy intergalactic civilization is still absurdly tiny compared to the eternity of Boltzmann-brain-infested Darkness that follows... PPS. Or we could just taboo the word "absurd".
How sure are you that brain emulations would be conscious?

I do not strongly believe the claim, just lay it out for discussion. I do not claim that experiences do not supervene on computations: they have observable, long-term behavioral effects which follow from the computable laws of physics. I just claim that in practice, not all processes in a brain will ever be reproduced in WBEs due to computational resource constraints and lack of relevance to rationality and the range of reported experiences of the subjects. Experiences can be different yet have roughly the same heterophenomenology (with behavior diverging only statistically or over long term).

How sure are you that brain emulations would be conscious?

Isn't it sufficient for computationalism that WBEs are conscious and that experience would be identical in the limit of behavioral identity? My intent with the claim is to weaken computationalism -- accommodate some aspects of identity theory -- but not to directly deny it.

You seem to be suggesting that there are properties of the system that are relevant for the quality of its experiences, but are not computational properties. To get clearer on this, what kind of physical details do you have in mind, specifically?
How sure are you that brain emulations would be conscious?

The truth of the claim, or the degree of difference? The claim is that identity obtains in the limit, i.e. in any practical scenario there wouldn't be identity between experiences of a biological brain and WBE, only similarity. OTOH identity between WBEs can obviously be obtained.

The claim then rules out computationalism.
How sure are you that brain emulations would be conscious?

The relevant notion of consciousness we are concerned with is technically called phenomenal experience. Whole Brain Emulations will necessarily leave out some of the physical details, which means the brain processes will not unfold in exactly the same manner as in biological brains. Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness. I expect we will learn more about consciousness to address the broader and more interesting issue of what kinds and degrees of consciousness are possible.

That would depend on the granularity of the WBE, which has not been specified, and on the nature of the supervenience of experience on brain states, which is unknown.
What Bayesianism taught me

I'd like to add that if the curriculum distinguishes between "probability" and "statistics", it is taught in the "probability" class. Much later, the statistics class has a "frequentist" part and a "Bayesian" part.

[Link] Cosmological Infancy

The inflationary multiverse is essentially infinite. But if you take a slice through (a part of) the multiverse, there are far more young universes. The proportion of universes of a given age decreases exponentially with age (as in a memoryless distribution). This resolves the doomsday paradox (because our universe is very young relative to its lifespan).
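The claim can be checked numerically with a memoryless (exponential) age distribution. The mean age tau here is an illustrative unit, not a cosmological estimate:

```python
import math

def age_density(a, tau=1.0):
    """Relative density of universes of age a under a memoryless
    (exponential) age distribution with mean age tau."""
    return math.exp(-a / tau) / tau

def fraction_younger_than(t, tau=1.0):
    """Fraction of universes with age below t (the exponential CDF)."""
    return 1.0 - math.exp(-t / tau)

# Young universes dominate: the density at age 0 is e^2 times
# the density at age 2*tau.
ratio = age_density(0.0) / age_density(2.0)
```

Under this distribution, more than half of all universes are younger than ~0.7 tau, which is the sense in which a randomly sampled observer should expect to find their universe young.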

Another argument to similar effect would be to consider a measure over possible indices. Indices pointing into old times would be less probable -- by needing... (read more)

Responses to Catastrophic AGI Risk: A Survey

I'm glad to see Mark Waser cited and discussed, I think he was omitted in a former draft but I might misremember. ETA: I misremembered, I've confused it with which has an explicitly narrower focus.

Living in the shadow of superintelligence

We should continue growing so that we join the superintelligentsia.

Although I wouldn't say this, I don't see how my comment contradicts this.

Let's take "the sexual objectification of women in some advertisement" as an example. Do you mean that sexual objectification takes place when the actress feels bad about playing in an erotic context, and agreed only because of commercial incentive, or something similar? ETA: I guess objectification generally means not treating someone as a person. With a focus on this explication, objectification in (working on) a film (advertisement is a short film) would be when the director does not collaborate with the actors, but rather is authoritarian in ... (read more)

I see it this way. It is "objectification" when it's used to attract attention. It's "for the purpose of appreciation" when it's used to enrich emotional reaction (usually of the aesthetic evaluation, but sometimes of the moral evaluation). So it is hard to say just by the content, but if the content is both erotic and boring it's objectification.

Erm... what are you talking about? Sexual objectification is done by one person to another.
Learning programming: so I've learned the basics of Python, what next?

I mean learning Prolog in the way it would be taught in a "Programming Languages" course, not as an attempt at facilitating AI. Two angles are important here: (1) programming paradigm features: learning the concept of late-bound / dataflow / "logical" variables. is an OK substitute. (2) logic, which is also something to be taught in a "Programming Languages" context, not (only) in AI context. With Prolog, this means learning about SLD-resolution and perhaps making some bro... (read more)

Learning programming: so I've learned the basics of Python, what next?

OCaml is my favorite language. At some point you should also learn Prolog and Haskell to have a well-rounded education.

I'm not sure knowing Prolog is actually useful, and I speak as someone who has been teaching Prolog as part of an undergraduate AI course for the last few years, and who learned it way back when Marseilles Prolog didn't even support negative numbers and I had to write my own code just to do proper arithmetic. (I'm not responsible for the course design, I'm just one of the few people in the department who knows Prolog.) Functional languages, imperative languages, object-oriented languages, compiled languages, interpreted languages: yes. Even some acquaintance with assembler and the techniques that are used to execute all the other languages, just so you know what the machine is really doing. But Prolog remains a tiny, specialised niche, and I'm inclined to agree with what Paul Graham says of it: it's a great language for writing append, but after that it's all downhill.
Normative uncertainty in Newcomb's problem

Actually, the ratio alone is not sufficient, because there is a reward for two-boxing related to "verifying whether Omega was right" -- if Omega is right "a priori", then I see no point in two-boxing above 1:1. I think the poll would be more meaningful if 1 stood for $1. ETA: actually, "verifying" or "being playful" might mean, for example, tossing a coin to decide.

[This comment is no longer endorsed by its author]
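The payoff ratio the poll abstracts over can be made concrete with a toy expected-value model. The model ("Omega predicts correctly with probability p") and the function names are illustrative; the payoffs are the standard Newcomb amounts.

```python
def ev_one_box(p, big=1_000_000):
    # A one-boxer finds box B full whenever Omega predicted
    # correctly, i.e. with probability p.
    return p * big

def ev_two_box(p, small=1_000, big=1_000_000):
    # A two-boxer always gets the small box, plus B only when
    # Omega mispredicted (probability 1 - p).
    return small + (1 - p) * big
```

On this model one-boxing already wins once p exceeds about 0.5005, which is why the comment's point about side-rewards for "verifying" Omega matters: only such extra terms can justify two-boxing at better ratios.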
Mahatma Armstrong: CEVed to death.

An interesting problem with CEV is demonstrated in chapter 5, "On the Rationality of Preferences", of Hilary Putnam's "The Collapse of the Fact/Value Dichotomy and Other Essays". The problem is that a person might assign value to a choice of preference, underdetermined at a given time, being made of her own free will.

Is a paperclipper better than nothing?

I agree with your premise, I should have talked about moral progress rather than CEV. ETA: one does not need a linear order for the notion of progress, there can be multiple "basins of attraction". Some of the dynamics consists of decreasing inconsistencies and increasing robustness.

Mahatma Armstrong: CEVed to death.

I agree. In case it's not clear, my opinion is that an essential part of being a person is developing one's value system. It's not something that you can entirely outsource because "the journey is part of the destination" (but of course any help one can get matters) and it's not a requirement for having ethical people or AI. ETA: i.e. having a fixed value system is not a requirement for being ethical.

Mahatma Armstrong: CEVed to death.

The last forbidden transition would be the very last one, since it's outright wrong while the previous ones do seem to have reasons behind them.

Mahatma Armstrong: CEVed to death.

Valuing everything means you want to go as far from nothingness as you can get. You value more types being instantiated over fewer types being instantiated.

Mahatma Armstrong: CEVed to death.

By letting people evolve their values at their own pace, within ethical boundaries.

There are certain problems in attempting to program such a procedure, however, and that is precisely the problem at hand.
Mahatma Armstrong: CEVed to death.

I'm with you up to 6. Having a terminal value on everything does not mean that the final consistent evaluation is uniform over everything, because instrumental values come into play -- some values cancel out and some add up. But it does mean that you have justifications to make before you start destroying stuff.

Prisoner's Dilemma (with visible source code) Tournament

Is each participant limited to submitting a single program? Have you considered "team mode", where the results of programs from a single team are summed up?

Yes. No. A team can work together to submit one program. But I don't see the point of adding up the scores of separate programs like that.