All Comments

Book review: Rethinking Consciousness

Well, I wasn't nitpicking you. Friedenbach was asserting locality+determinism. You are asserting locality+nondeterminism, which is OK.

Book review: Rethinking Consciousness

I am strongly disinclined to believe (as I think David Chalmers has suggested) that there’s a notion of p-zombies, in which an unconscious system could have exactly the same thoughts and behaviors as a conscious one, even including writing books about the philosophy of consciousness, for reasons described here and elsewhere.

Again: Chalmers doesn't think p-zombies are actually possible.

If I believe (1), it seems to follow that I should endorse the claim “if we have a complete explanation of the meta-problem of consciousness, then there is nothing left to explain regarding the hard problem of consciousness”.

That doesn't follow from (1). It would follow from the claim that everyone is a zombie, because then there would be nothing to consciousness except false claims to be conscious. However, if you take the view that reports of consciousness are caused by consciousness per se, then consciousness per se exists and needs to be explained separately from reports and behaviour.

Exploring safe exploration
A particular prediction I have now, though weakly held, is that episode boundaries are weak and permeable, and will probably be obsolete at some point. There's a bunch of reasons I think this, but maybe the easiest to explain is that humans learn and are generally intelligent, and we don't have episode boundaries.
Given this, I think the "within-episode exploration" and "across-episode exploration" relax into each other, and (as the distinction of episode boundaries fades) turn into the same thing, which I think is fine to call "safe exploration".

My main reason for making the separation is that in every deep RL algorithm I know of there is exploration-that-is-incentivized-by-gradient-descent and exploration-that-is-not-incentivized-by-gradient-descent, and it seems like these should be distinguished. Currently, due to episode boundaries, these cleanly correspond to within-episode and across-episode exploration respectively, but even if episode boundaries become obsolete I expect the question of "is this exploration incentivized by the (outer) optimizer" to remain relevant. (Perhaps we could call this outer and inner exploration, where outer exploration is the exploration that is not incentivized by the outer optimizer.)

I don't have a strong opinion on whether "safe exploration" should refer to just outer exploration or both outer and inner exploration, since both options seem compatible with the existing ML definition.
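One way to make the distinction concrete (a toy sketch of my own, using a two-armed bandit and a REINFORCE-style softmax policy, not anything from the post): the entropy bonus below is exploration the outer optimizer explicitly incentivizes, while the epsilon-greedy noise is exploration it does not.

```python
# Toy illustration (assumed setup, not from the post): a 2-armed bandit with a
# softmax policy. Epsilon-greedy dithering is exploration NOT incentivized by the
# gradient; the entropy bonus is exploration the gradient itself incentivizes.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.7])   # hypothetical reward means for the two arms
logits = np.zeros(2)                # softmax policy parameters

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

EPS, LR, ENT_COEF = 0.1, 0.05, 0.01

for step in range(2000):
    probs = softmax(logits)

    # Exploration not incentivized by the optimizer: epsilon-greedy noise
    # injected from outside; nothing in the update below rewards doing this.
    if rng.random() < EPS:
        arm = int(rng.integers(2))
    else:
        arm = int(rng.choice(2, p=probs))

    reward = rng.normal(true_means[arm], 0.1)

    # Exploration incentivized by the optimizer: an entropy bonus in the
    # objective, so the gradient step itself pushes the policy to stay stochastic.
    # (The epsilon-greedy sampling makes this REINFORCE estimate slightly
    # off-policy; fine for a sketch.)
    grad_logp = -probs.copy()
    grad_logp[arm] += 1.0                                # d log pi(arm) / d logits
    entropy = -(probs * np.log(probs)).sum()
    grad_entropy = -probs * (np.log(probs) + entropy)    # d H(pi) / d logits
    logits += LR * (reward * grad_logp + ENT_COEF * grad_entropy)

print("final policy:", softmax(logits))
```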

Book review: Rethinking Consciousness

What is specifically ruled out by tests of Bell's inequalities is the conjunction of (local, deterministic). The one thing we know is that the two things you just asserted are not both true. What we don't know is which is false.

I think you're nitpicking here. While we don't know the fundamental laws of the universe with 100% confidence, I would suggest that based on what we do know, they are extremely likely to be local and non-deterministic (as opposed to nonlocal hidden variables). Quantum field theory (QFT) is in that category, and adding general relativity doesn't change anything except in unusual extreme circumstances (e.g. microscopic black holes, or the Big Bang—where the two can't be sensibly combined). String theory doesn't really have a meaningful notion of locality at very small scales (Planck length, Planck time), but at larger scales in normal circumstances it approaches QFT + classical general relativity, which again is local and non-deterministic. (So yes, probably our everyday human interactions have nonlocality at a part-per-googolplex level or whatever, related to quantum fluctuations of the geometry of space itself, but it's hard to imagine that this would matter for anything.)

(By non-deterministic I just mean that the Born rule involves true randomness. In Copenhagen interpretation you say that collapse is a random process. In many-worlds you would say that the laws of physics are deterministic but the quasi-anthropic question "what branch of the wavefunction will I happen to find myself in?" has a truly random answer. Either way is fine; it doesn't matter for this comment.)

Reality-Revealing and Reality-Masking Puzzles

It might be useful to know that I'm not that sold on a lot of singularity stuff, and the parts of rationality that have affected me the most are some of the more general thinking principles. "Look at the truth even if it hurts" / "Understanding tiny amounts of evo and evo psych ideas" / "Here's 18 different biases, now you can tear down most people's arguments".

It was those ideas (a mix of the naive and sophisticated form of them) + my own idiosyncrasies that caused me a lot of trouble. So that's why I say "rationalist memes". I guess that if I bought more singularity stuff I might frame it as "weird but true ideas".

Can we always assign probabilities?

Just a passing thought here. Is probability really the correct term? I wonder if what we do in these types of cases is more an assessment of our confidence in our ability to extrapolate from past experience into new, and often completely different, situations.

If so, that is really not a probability about the event we're thinking about -- though perhaps it could be seen as one about our ability to make "wild" guesses (and yes, that is hyperbole) about stuff we don't really know anything about. Even there I'm not sure probability is the correct term.

With regard to the supernatural things, that tends to be something of a hot button for a lot of people, I think. Perhaps a better casting would be things we have some faith in -- which tend to be things we must infer rather than have any real evidence providing proof. I think these change over time -- we've had faith in a number of theories that were later proven -- electrons, for example, or other subatomic particles.

But then what about dark matter and energy? The models seem to say we need them, but as yet we cannot find them. So we have faith in the model and look to prove that faith was justified by finding the dark stuff. But one might ask why we have that faith rather than being skeptical of the model, even while acknowledging it has proven of value and helped expand knowledge. I think we can have a better discussion about faith in this context (perhaps) than if we get into religion and supernatural subjects (though arguably we should treat them the same as the faith we have in other models, to my view).

How to Escape From Immoral Mazes
As George Carlin says, some people need practical advice. I didn't know how to go about providing what such a person would need, on that level. How would you go about doing that?

The solution is probably not a book. Many books have been written on escaping the rat race that could be downloaded in the next 5 minutes, yet people don't, and if some do in reaction to this comment they probably won't get very far.

Problems that are this big and resistant to being solved are not waiting for some lone genius to find the 100,000 word combination that will drive a stake right through the middle. What this problem needs most is lots of smart but unexceptional people hacking away at the edges. It needs wikis. It needs offline workshops. It needs case studies from people like you so it feels like a real option to people like you.

Then there's the social and financial infrastructure part of the problem. Things such as:

  • Finding useful things for people to do outside of salaried work that don't feel like sitting at the kids table. (See: every volunteer role outside of open source.)
  • Establishing intellectual networks outside of the high cost of living/rat race cities. (Not necessarily out of cities in general.)
  • Developing things that make it cheaper to maintain a comfortable standard of living at a lower level of income.
  • Finding ways to increase productivity on household tasks so it becomes economically practical to do them yourself rather than outsource them.
Reality-Revealing and Reality-Masking Puzzles

I want a similarly clear-and-understood generalization of the “reasoning vs rationalizing” distinction that applies also to processes spread across multiple heads. I don’t have that yet. I would much appreciate help toward this.

I feel like Vaniver's interpretation of self vs. no-self is pointing at a similar thing; would you agree?

I'm not entirely happy with any of the terminology suggested in that post; something like "seeing your preferences realized" vs. "seeing the world clearly" would in my mind be better than either "self vs. no-self" or "design specifications vs. engineering constraints".

In particular, Vaniver's post makes the interesting contribution of pointing out that while "reasoning vs. rationalization" suggests that the two would be opposed, seeing the world clearly vs. seeing your preferences realized can be opposed, mutually supporting, or orthogonal. You can come to see your preferences more realized by deluding yourself, but you can also deepen both, seeing your preferences realized more because you are seeing the world more clearly.

In that ontology, instead of something being either reality-masking or reality-revealing, it can

  • A. Cause you to see your preferences more realized and the world more clearly
  • B. Cause you to see your preferences more realized but the world less clearly
  • C. Cause you to see your preferences less realized but the world more clearly
  • D. Cause you to see your preferences less realized and the world less clearly

But the problem is that a system facing a choice between several options has no general way to tell whether some option it could take is actually an instance of A, B, C or D, or if there is a local maximum that means that choosing one possibility increases one variable a little, but another option would have increased it even more in the long term.

E.g. learning about the Singularity makes you see the world more clearly, but it also makes you see that fewer of your preferences might get realized than you had thought. But then the need to stay alive and navigate the Singularity successfully pushes you into D, where you are so focused on trying to invest all your energy into that mission that you fail to see how this prevents you from actually realizing any of your preferences... but since you see yourself as being very focused on the task and ignoring "unimportant" things, you think that you are doing A while you are actually doing D.

How to Escape From Immoral Mazes

First to be clear I have not closely read all the series or even this one completely -- just feeling sick today so not focused. However, I did have a thought I wanted to get out. May have been well addressed already.

It seems that we are perhaps missing an element here. Is it possible that, even if one is working in a moral maze when viewed from the whole-corporate-structure level, the various levels within it don't really impose the same problems? We're treating this as a setting where the whole is one large pond. But what if, rather than one large pond, what we actually have is a collection of connected smaller ponds, and the maze really only applies in some of them, and at the collection-of-ponds level?

Is there something of a potential fallacy-of-composition error here? The whole is a moral maze, but many of the ponds it is composed of lack that character?

If so then it may well be possible to escape the maze without having to quit the job.

Book review: Rethinking Consciousness

Postulating hard emergence requires a non-local postulate.

That is not obvious.

Book review: Rethinking Consciousness

Taking (2) to its logical conclusion seems to imply that we live in a deterministic block universe,

That was not implied by (2) as stated, and isn't implied by physics in general. Both the block universe and determinism are open questions (and not equivalent to each other).

One of the chief problems here is that physics, so far as we can tell, is entirely local.

[emph. added]

Nope. What is specifically ruled out by tests of Bell's inequalities is the conjunction of (local, deterministic). The one thing we know is that the two things you just asserted are not both true. What we don't know is which is false.
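For concreteness, the standard statement here is the CHSH form of Bell's inequality: any local deterministic (local hidden-variable) model must satisfy the bound below, while quantum mechanics predicts, and experiments observe, violations up to Tsirelson's bound.

```latex
% CHSH form of Bell's inequality, with E(a,b) the correlation of outcomes
% at detector settings a and b. Local hidden-variable (local deterministic)
% models satisfy
\[
  S \;=\; \bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \;\le\; 2,
\]
% while quantum mechanics allows values up to Tsirelson's bound,
\[
  S \;\le\; 2\sqrt{2},
\]
% and experimentally S > 2 is observed, ruling out (local AND deterministic).
```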

Reality-Revealing and Reality-Masking Puzzles

I like your example about your math tutoring, where you "had a fun time” and “[weren’t] too results driven” and reality-masking phenomena seemed not to occur.

It reminds me of Eliezer talking about how the first virtue of rationality is curiosity.

I wonder how general this is. I recently read the book “Zen Mind, Beginner’s Mind,” where the author suggests that difficulty sticking to such principles as “don’t lie,” “don’t cheat,” “don’t steal,” comes from people being afraid that they otherwise won’t get a particular result, and recommends that people instead… well, “leave a line of retreat” wasn’t his suggested ritual, but I could imagine “just repeatedly leave a line of retreat, a lot” working for getting unattached.

Also, I just realized (halfway through typing this) that cousin_it and Said Achmiz say the same thing in another comment.

Can we always assign probabilities?

There are a lot of different types of question, and probabilities don't seem to mean the same thing across them.

There are definitely a lot of different types of questions. There are also definitely multiple interpretations of probability. (This post presumes a Bayesian/subjectivist interpretation of probability, but a major contender is the frequentist view.) And it's definitely possible that there are some types of questions where it's more common, empirically speaking, to use one interpretation of probability than another, and possibly where that's more useful too. But I'm not aware of it being the case that probabilities just have to mean a different thing for different types of questions. If that's roughly what you meant, could you expand on that? (That might go to the heart of the claim I'm exploring the defensibility of in this post, as I guess I'm basically arguing that we could always assign at least slightly meaningful subjective credences to any given claim.)

If instead you meant just that "a 0.001% chance of god being real" could mean either "a 0.001% chance of precisely the Judeo-Christian God being real, in very much the way that religion would expect" or "a 0.001% chance that any sort of supernatural force at all is real, even in a way no human has ever imagined at all", and that those are very different claims, then I agree.

Can we always assign probabilities?

The possibility of a god existing doesn't equate, to me, to seeing if a possible thing exists or not, but rather whether the set of concepts are in any way possible. This is a question about the very nature of reality, and I'm pretty sure that reality is weird enough that the question falls far short of having any real meaning.

I don't understand the last half of that last sentence. But as for the rest, if I'm interpreting you correctly, here's how I'd respond:

The probability of a god existing is not necessarily equal to the probability of "the set of concepts [being] in any way possible" (or we might instead say something like "it being metaphysically possible", "the question even being coherent", or similar). Instead, it's less than or equal to that probability. That is, a god can indeed only exist if the set of concepts are in any way possible, but it seems at least conceivable that the set of concepts could be conceivable and yet it still happen to be that there's no god.

And in any case, for the purposes of this post, what I'm really wondering about is not what the odds of there being a god are, but rather whether and how we can arrive at meaningful probabilities for these sorts of claims. So I'd then also ask whether and how we can arrive at a meaningful probability for the claim "It is metaphysically possible/in any way possible that there's a god" (as a separate claim to whether there is a god). And I'd argue we can, through a process similar to the one described in this post.

To sketch it briefly, we might think about previous concepts that were vaguely like this one, and whether, upon investigation, they "turned out to be metaphysically possible". We might find they never have ("yet"), but that that's not at all surprising, even if we assume that those claims are metaphysically possible, because we just wouldn't expect to have found evidence of that anyway. In which case, we might be forced to either go for way broader reference classes (like "weird-seeming claims", or "things that seemed to violate occam's razor unnecessarily"), or abandon reference class forecasting entirely, and lean 100% on inside-view type considerations (like our views on occam's razor and how well this claim fits with it) or our "gut feelings" (hopefully honed by calibration training). I think the probability we assign might be barely meaningful, but still more meaningful than nothing.
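To make that slightly more concrete (illustrative numbers only, nothing from the post): one way to do this kind of combination is to start from a reference-class base rate and shift it in log-odds space by an inside-view adjustment.

```python
# Hedged sketch of the procedure sketched above: a (broad, assumed) reference-class
# base rate, nudged by an inside-view adjustment expressed in log-odds.
# All numbers are made up for illustration.
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Base rate from a very broad reference class, e.g. "weird-seeming claims that
# turned out to be metaphysically possible".
base_rate = 0.01

# Inside-view considerations (e.g. how badly the claim seems to violate Occam's
# razor), expressed as a log-odds shift; negative = less likely than the base rate.
inside_view_shift = -3.0

credence = sigmoid(logit(base_rate) + inside_view_shift)
print(f"barely meaningful, but more than nothing: {credence:.5f}")
```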

Moloch Hasn’t Won

Happy to delete the word 'you' there since it's doing no work. Not going to edit this version, but will update OP and mods are free to fix this one. Also took opportunity to do a sentence break-up.

As for saying explicitly that slavery is bad, well, pretty strong no. I'm not going to waste people's time doing that, nor am I going to invite further concern trolling, or the implication that when I do not explicitly condemn something it means I might secretly support it or something. If someone needs reassurance that someone talking about slavery as one of the horrible things also opposes a less horrible form of slavery, then they are not the target audience.

Can we always assign probabilities?

Your comment made me realise that I skipped over the objection that the questions are too ambiguous to be worth engaging with. I've now added a paragraph to fix that:

To me, and presumably most LessWrong readers, the most obvious response to these questions is to dissolve them, or to at least try to pin the questioner down on definitions. And I do think that's very reasonable. But in this post I want to put my (current) belief that "we can always assign probabilities to propositions (or at least use something like an uninformative prior)" to a particularly challenging test, so from here on I'll assume we've somehow arrived at a satisfactorily precise understanding of what the question is actually meant to mean.

I think the reason why I initially skipped over that without noticing I'd done so was that:

  • this post was essentially prompted by the post from Chris Smith with the "Kyle the atheist" example
  • Smith writes in a footnote "For the benefit of the doubt, let’s assume everyone you ask is intelligent, has a decent understanding of probability, and more or less agrees about what constitutes an all-powerful god."
  • I wanted to explore whether the idea of it always being possible to assign probabilities could stand up to that particularly challenging case, without us having to lean on the (very reasonable) strategy of debating the meaning of the question. I.e., I wanted to see if, if we did agree on the definitions, we could still come to meaningful probabilities on that sort of question (and if so, how).

But I realise now that it might seem weird to readers that I neglected to mention the ambiguity of the questions, so I'm glad your comment brought that to my attention.

What is Success in an Immoral Maze?

'Rat race' is a highly related concept. It's mostly a subset, I think, although your view of the term may vary. Rat race illustrates the idea that when all the workers try harder to get ahead of other workers, everyone does lots more work, often to no useful end, without people on net getting ahead. Or, alternatively, that you do all this work just to stay in place. It certainly has implications of 'what I am doing doesn't actually matter' and also 'what I am doing is a zero-sum game', which implies the first thing.

Can we always assign probabilities?

As, basically, an atheist, my response to the question 'Is there an all-powerful god?' is to ask: is that question actually meaningful? Is it akin to asking, 'is there an invisible pink unicorn?', or 'have you stopped beating your wife yet?'. To wit, a mu situation https://en.wikipedia.org/wiki/Mu_(negative) .

There are a lot of different types of question, and probabilities don't seem to mean the same thing across them. Sometimes those questions are based on fuzzy semantics that require interpretation, and may not necessarily correspond to a possible state of affairs.

The possibility of a god existing doesn't equate, to me, to seeing if a possible thing exists or not, but rather whether the set of concepts are in any way possible. This is a question about the very nature of reality, and I'm pretty sure that reality is weird enough that the question falls far short of having any real meaning.

Reality-Revealing and Reality-Masking Puzzles

Thanks; you naming what was confusing was helpful to me. I tried to clarify here; let me know if it worked. The short version is that what I mean by a "puzzle" is indeed person-specific.

A separate clarification: on my view, reality-masking processes are one of several possible causes of disorientation and error; not the only one. (Sort of like how rationalization is one of several possible causes of people getting the wrong answers on math tests; not the only one.) In particular, I think singularity scenarios are sufficiently far from what folks normally expect that the sheer unfamiliarity of the situation can cause disorientation and errors (even without any reality-masking processes; though those can then make things worse).

What is Success in an Immoral Maze?

How does that relate to all that was said (and sung) about 'rat race'?

Reality-Revealing and Reality-Masking Puzzles

The difficulties above were transitional problems, not the main effects.

Why do you say they were "transitional"? Do you have a notion of what exactly caused them?

Book review: Rethinking Consciousness

Interesting!

We also need (I would think) for the experience of consciousness to somehow cause your brain to instruct your hands to type "cogito ergo sum". From what you wrote, I'm sorta imagining physical laws plus experience glued to it ... and that physical laws without experience glued to it would still lead to the same nerve firing pattern, right? Or maybe you'll say physical laws without experience is logically impossible? Or what?

Bay Solstice 2019 Retrospective

I have a bunch of comments on this:

  1. I really liked the bit. Possibly because I've been lowkey following his efforts.
  2. He looks quite good, and I like the beard on him.
  3. I've always thought that his failed attempts at researching weight loss and applying what he learned were a counterexample to how applicable LW/EY rationality is. Glad to see he solved it when it became more important.
  4. Eliezer clearly gets too much flak in general, and especially in this case. It's not like I haven't criticised him, but come on.
  5. Regarding "several people’s reaction was, ‘Why is this guy talking to me like I’m his friend, I don’t even know him’": Really? Fine, you don't know him, but if you don't know EY and are at a rationalist event, why would you be surprised by not knowing a speaker? From the public's reaction to his opening it should've been clear most people did know him.
  6. I'm not against the concept of triggering - some stuff can be triggering, including eating disorders, but like this? Can a person not talk at all about weight gain/loss? Is the solstice at all LW-related if things can't be discussed even at their fairly basic (and socially accepted) level? Please, if you hated it give a detailed response as to why. I'm genuinely curious.
Reality-Revealing and Reality-Masking Puzzles

A couple people asked for a clearer description of what a “reality-masking puzzle” is. I’ll try.

JamesPayor’s comment speaks well for me here:

There was the example of discovering how to cue your students into signalling they understand the content. I think this is about engaging with a reality-masking puzzle that might show up as "how can I avoid my students probing at my flaws while teaching" or "how can I have my students recommend me as a good tutor" or etc.

It's a puzzle in the sense that it's an aspect of reality you're grappling with. It's reality-masking in that the pressure was away from building true/accurate maps.

To say this more slowly:

Let’s take “tinkering” to mean “a process of fiddling with a [thing that can provide outputs] while having some sort of feedback-loop whereby the [outputs provided by the thing] impacts what fiddling is tried later, in such a way that it doesn’t seem crazy to say there is some ‘learning’ going on.”

Examples of tinkering:

  • A child playing with legos. (The “[thing that provides outputs]” here is the [legos + physics], which creates an output [an experience of how the legos look, whether they fall down, etc.] in reply to the child’s “what if I do this?” attempts. That output then affects the child’s future play-choices some, in such a way that it doesn’t seem crazy to say there is some “learning” happening.)
  • A person doodling absent-mindedly while talking on the phone, even if the doodle has little to no conscious attention;
  • A person walking. (Since the walking process (I think) contains at least a bit of [exploration / play / “what happens if I do this?” -- not necessarily conscious], and contains some feedback from “this is what happens when you send those signals to your muscles” to future walking patterns)
  • A person explicitly reasoning about how to solve a math problem
  • A family member A mostly-unconsciously taking actions near another family member B [while A consciously or unconsciously notices something about how B responds, and while A has some conscious or unconscious link between [how B responds] and [what actions A takes in future]].

By a “puzzle”, I mean a context that gets a person to tinker. Puzzles can be person-specific. “How do I get along with Amy?” may be a puzzle for Bob and may not be a puzzle for Carol (because Bob responds to it by tinkering, and Carol responds by, say, ignoring it). A kong toy with peanut butter inside is a puzzle for some dogs (i.e., it gets these dogs to tinker), but wouldn’t be for most people. Etc.

And… now for the hard part. By a “reality-masking puzzle”, I mean a puzzle such that the kind of tinkering it elicits in a given person will tend to make that person’s “I” somehow stupider, or in less contact with the world.

The usual way this happens is that, instead of the tinkering-with-feedback process gradually solving an external problem (e.g., “how do I get the peanut butter out of the kong toy?”), the tinkering-with-feedback process is gradually learning to mask things from part of the person’s own mind (e.g. “how do I not-notice that I feel X”).

This distinction is quite related to the distinction between reasoning and rationalization.

However, it differs from that distinction in that “rationalization” usually refers to processes happening within a single person’s mind. And in many examples of “reality-masking puzzles,” the [process that figures out how to mask a bit of reality from a person’s “I”] is spread across multiple heads, with several different tinkering processes feeding off each other and the combined result somehow being partially about blinding someone.

I am actually not all that satisfied by the “reality-revealing puzzles” vs “reality-masking puzzles” ontology. It was more useful to me than what I’d had before, and I wanted to talk about it, so I posted it. But… I understand what it means for the evidence to run forwards vs backwards, as in Eliezer’s Sequences post about rationalization. I want a similarly clear-and-understood generalization of the “reasoning vs rationalizing” distinction that applies also to processes spread across multiple heads. I don’t have that yet. I would much appreciate help toward this. (Incremental progress helps too.)

How to Identify an Immoral Maze

When you say "the military" do you mean "the US military" here? I would be surprised if that's a consistent phenomenon across the different militaries that exist.

Toon Alfrink's sketchpad

Well, it sounds to me like it's more of a heterarchy than a hierarchy, but yeah.

Bay Solstice 2019 Retrospective

That was excellent, and I quite enjoyed watching it. I’m not going to spoil it for anyone who’s not seen yet, of course, but I just want to say:

Well done, Eliezer!

ACDT: a hack-y acausal decision theory

That's annoying - thanks for pointing it out. Any idea what the issue is?

Reality-Revealing and Reality-Masking Puzzles

I agree with this, based on my experience.

At least one reason for it seems straightforward, though. Whether something is important is a judgment that you have to make, and it’s not an easy one; it’s certainly not obvious what things are important, and you can’t ever be totally certain that you’ve judged importance correctly (and importance of things can change over time, etc.). On the other hand, whether something is interesting (to you!) is just a fact, available to you directly; it’s possible to deceive yourself about whether something’s interesting to you, but not easy… certainly the default is that you just know whether you find something interesting or not.

In other words, self-deception about what’s important is just structurally much more likely than self-deception about what’s interesting.

Reality-Revealing and Reality-Masking Puzzles

To me, doing things because they are important seems to invite this kind of self-deception (and other problems as well), while doing things because they are interesting seems to invite many good outcomes. Don't know if other people have the same experience, though.

Book review: Rethinking Consciousness

Postulating hard emergence requires a non-local postulate. I’m not willing to accept that without testable predictions.

I don’t really see how “ergo sum” is an assumption. If anything, it is a direct inference, but not an assumption. Something exists that is perceiving. Any theory that says otherwise must be incorrect.

What are beliefs you wouldn't want (or would feel apprehensive about being) public if you had (or have) them?

Neat solution, but I feel the dynamic of adviser/advisee is more like "I acquired this piece of wisdom over time through experiences, hardships, etc., so that you may not have to go through all of them." And mostly I think this is what lures people in. So, I think it won't play out the same way, thus defeating the original intention of the solution.

I am okay with people sharing their experiences and wisdom they acquired as a result of their journey, but what irks me is the extrapolation of it without sharing the downsides.

Book review: Rethinking Consciousness

Not surprisingly, I have a few issues with your chain of reasoning.

1. I exist. (Cogito, ergo sum). I'm a thinking, conscious entity that experiences existence at this specific point in time in the multiverse.

Cogito is an observation. I am not arguing with that one. Ergo sum is an assumption, a model. The "multiverse" thing is a speculation.

Our understanding of physics is that there is no fundamental thing that we can reduce conscious experience down to. We're all just quarks and leptons interacting.

This is very much simplified. Sure, we can do reduction, but that doesn't mean we can do synthesis. There is no guarantee that it is even possible to do synthesis. In fact, there are mathematical examples where synthesis might not be possible, simply because the relevant equations cannot be solved uniquely. I made a related point here. Here is an example. Consciousness can potentially be reduced to atoms, but it may also be reduced to bits, a rather different substrate. Maybe there are other reductions possible.

And it is also possible that constructing consciousness out of quarks and leptons is impossible because of "hard emergence". Of the sorites kind. There is no atom of water. A handful of H2O molecules cannot be described as a solid, liquid or gas. A snowflake requires trillions of trillions of H2O molecules together. There is no "snowflakiness" in a single molecule. Just like there is no consciousness in an elementary particle. There is no evidence for panpsychism, and plenty against it.

ACDT: a hack-y acausal decision theory

To other readers: If you see broken image links, try right-click+View Image, or open the page in Chrome or Safari. In my Firefox 71 they are not working.

Please Critique Things for the Review!

Yeah, true, that seems like a fair reason to point out for why there wouldn't be more reviews. Thanks for sharing your personal reasons.

Reality-Revealing and Reality-Masking Puzzles
“Getting out of bed in the morning” and “caring about one’s friends” turn out to be useful for more reasons than Jehovah—but their derivation in the mind of that person was entangled with Jehovah.

Cf: "Learning rationality" and "Hanging out with like-minded people" turn out to be useful for more reasons than AI risk -- but their derivation in the mind of CFAR staff is entangled with AI risk.

Bay Solstice 2019 Retrospective

Could someone explain what the "Eliezer bit" actually was, for those of us who weren't there?

Against Rationalization II: Sequence Recap

So *that's* where that UI lives! I did look for it.

Might go back and convert this into a proper sequence when I get back from Mystery Hunt.

Underappreciated points about utility functions (of both sorts)

Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens -- so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]

That doesn't seem right. The whole point of what I've been saying is that we can write down some simple conditions that ought to be true in order to avoid being exploitable or otherwise incoherent, and then it follows as a conclusion that they have to correspond to a [bounded] utility function. I'm confused by your claim that you're asking about conditions, when you haven't been talking about conditions, but rather ways of modifying the idea of decision-theoretic utility.

Something seems to be backwards here.

I agree, one shouldn't conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions. We'd need

  • an axiom describing what it means for one infinite wager to be "strictly better" than another.
  • an axiom describing what kinds of infinite wagers it is rational to be indifferent towards

I'm confused here; it sounds like you're just describing, in the VNM framework, the strong continuity requirement, or in Savage's framework, P7? Of course Savage's P7 doesn't directly talk about these things, it just implies them as a consequence. I believe the VNM case is similar although I'm less familiar with that.

Then, I would try to find a decisioning-system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function.

That doesn't make sense. If you add axioms, you'll only be able to conclude more things, not fewer. Such a thing will necessarily be representable by a utility function (that is valid for finite gambles), since we have the VNM theorem; and then additional axioms will just add restrictions. Which is what P7 or strong continuity do!
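For reference (a sketch of the standard statements, not anything either commenter wrote above): the baseline VNM continuity, or Archimedean, axiom only constrains finite gambles.

```latex
% VNM continuity (Archimedean) axiom over lotteries A, B, C:
\[
  A \prec B \prec C \;\Longrightarrow\; \exists\, p, q \in (0,1):\quad
  pA + (1-p)C \;\prec\; B \;\prec\; qA + (1-q)C.
\]
% The "strong" continuity / dominance conditions mentioned above (Savage's P7
% plays this role in his framework) extend preference comparisons to gambles
% with infinitely many possible outcomes, and it is those extensions that force
% the representing utility function to be bounded.
```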

Reality-Revealing and Reality-Masking Puzzles

The post mentions problems that encourage people to hide reality from themselves. I think that constructing a 'meaningful life narrative' is a pretty ubiquitous such problem. For the majority of people, constructing a narrative where their life has intrinsic importance is going to involve a certain amount of self-deception.

The post mentioned some of the problems that come from the interaction between these sorts of narratives and learning about x-risks. To me, however, it looks like at least some of the AI x-risk memes themselves are partially the result of reality-masking optimization with the goal of increasing the perceived meaningfulness of the lives of people working on AI x-risk. As an example, consider the ongoing debate about whether we should expect the field of AI to mostly solve x-risk on its own. Clearly, if the field can't be counted upon to avoid the destruction of humanity, this greatly increases the importance of outside researchers trying to help them. So to satisfy their emotional need to feel that their actions have meaning, outside researchers have a bias towards thinking that the field is more incompetent than it is, and to come up with and propagate memes justifying that conclusion. People who are already in insider institutions have the opposite bias, so it makes sense that this debate divides to some extent along these lines.

From this perspective, it's no coincidence that internalizing some x-risk memes leads people to feel that their actions are meaningless. Since the memes are partially optimized to increase the perceived meaningfulness of the actions of a small group of people, by necessity they will decrease the perceived meaningfulness of everyone else's actions.

(Just to be clear, I'm not saying that these ideas have no value, that this is being done consciously, or that the originators of said ideas are 'bad'; this is a pretty universal human behavior. Nor would I endorse bringing up these motives in an object-level conversation about the issues. However, since this post is about reality-masking problems it seems remiss not to mention.)

Reality-Revealing and Reality-Masking Puzzles
It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes.

I don't have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when people running them are smart and competent and well-meaning and want to earn the community's trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it well you have to allocate a lot of skill points to it, which means allocating them away from the organization's core competencies. I've reached the point where I no longer find even gross failures of this kind surprising.

(I think you already appreciate this but it seemed worth saying explicitly in public anyway.)

ialdabaoth is banned

I'm less worried about "cancel culture" becoming a thing on LW than in EA (because we seem naturally more opposed to that kind of thing), but I'm still a bit worried. I think having mods be obligated to explain all non-trivial banning decisions (with specifics instead of just pointing to broad categories like "manipulative") would be a natural Schelling fence to protect against a potential slippery slope, so the costs involved may be worth paying from that perspective.

Bay Solstice 2019 Retrospective

I notice that When I Die is incorrectly listed as requiring guitar, likely because in the spreadsheets the linked musical reference is to the solo guitar-and-voice version I recorded ages ago...but that guitar arrangement is (a) tricky, (b) not terribly conducive to singalong. Thus, I suggest anyone maintaining these sort of spreadsheets change the When I Die musical reference to a youtube link for an a cappella version that gets all the harmony lines in, such as this link:

https://www.youtube.com/watch?v=M7ndK8aIF-I

How to Identify an Immoral Maze

One factor is that the military has a pretty consistent policy of moving officers around to different postings every few years. You never work with the same people very long, except maybe at the very top. This might help enable some of the outrunning-your-mistakes phenomenon mentioned above, but it also probably means you can't develop the kind of interpersonal politics you might see in a big corporation.

Book review: Rethinking Consciousness

develop your own intuitive understanding of everything

I agree 100%!! That's the goal. And I'm not there yet with consciousness. That's why I used the words "annoying and unsatisfying" to describe my attempts to understand consciousness thus far. :-P

You should not be trusting textbook authors when they say that Theorem X is true

I'm not sure you quite followed what I wrote here.

I am saying that it's possible to understand a math proof well enough to have 100% confidence—on solely one's own authority—that the proof is mathematically correct, but still not understand it well enough to intuitively grok it. This typically happens when you can confirm that each step of the proof, taken on its own, is mathematically correct.

If you haven't lived this experience, maybe imagine that I give you a proof of the Riemann hypothesis in the form of 500 pages of equations kinda like this, with no English-language prose or variable names whatsoever. Then you spend 6 months checking rigorously that every line follows from the previous line (or program a computer to do that for you). OK, you have now verified on solely your own authority that the Riemann hypothesis is true. But if I now ask you why it's true, you can't give any answer better than "It's true because this 500-page argument shows it to be true".

So, that's a bit like where I'm at on consciousness. My "proof" is not 500 pages, it's just 4 steps, but that's still too much for me to hold the whole thing in my head and feel satisfied that I intuitively grok it.

  1. I am strongly disinclined to believe (as I think David Chalmers has suggested) that there's a notion of p-zombies, in which an unconscious system could have exactly the same thoughts and behaviors as a conscious one, even including writing books about the philosophy of consciousness, for reasons described here and elsewhere.

  2. If I believe (1), it seems to follow that I should endorse the claim "if we have a complete explanation of the meta-problem of consciousness, then there is nothing left to explain regarding the hard problem of consciousness". The argument more specifically is: Either the behavior in which a philosopher writes a book about consciousness has some causal relation to the nature of consciousness itself (in which case, solving the meta-problem requires understanding the nature of consciousness), or it doesn't (in which case, unconscious p-zombies should (bizarrely) be equally capable of writing philosophy books about consciousness).

  3. I think that Attention Schema Theory offers a complete and correct answer to every aspect of the meta-problem of consciousness, at least every aspect that I can think of.

  4. ...Therefore, I conclude that there is nothing to consciousness beyond the processes discussed in Attention Schema Theory.

I keep going through these steps and they all seem pretty solid, and so I feel somewhat obligated to accept the conclusion in step 4. But I find that conclusion highly unintuitive, I think for the same reason most people do—sorta like, why should any information processing feel like anything at all?

So, I need to either drag my intuitions into line with 1-4, or else crystallize my intuitions into a specific error in one of the steps 1-4. That's where I'm at right now. I appreciate you and others in this comment thread pointing me to helpful and interesting resources! :-)

Reality-Revealing and Reality-Masking Puzzles

The review has definitely had an effect on me looking at new posts, and thinking "which of these would I feel good about including in a Best of the Year Book?" as well as "which of these would I feel good about including in an actual textbook?"

This post is sort of on the edge of "timeless enough that I think it'd be fine for the 2020 Review", but I'm not sure whether it's quite distilled enough to fit nicely into, say, the 2021 edition of "the LessWrong Textbook." (this isn't necessarily a complaint about the post, just noting that different posts can be optimized for different things)

Reality-Revealing and Reality-Masking Puzzles
"timeless content".

It's interesting to think about the review effort in this light. (Also, material about doing group rationality stuff can fit in with timeless content, but less in a oneshot way.)

Underappreciated points about utility functions (of both sorts)

Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens -- so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]. As a result, I went on a tangent about infinity to begin exploring whether my modified notion of a utility function would break in ways that regular ones wouldn't.

Why should one believe that modifying the idea of a utility function would result in something that is meaningful about preferences, without any sort of theorem to say that one's preferences must be of this form?

I agree, one shouldn't conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions. We'd need

  • an axiom describing what it means for one infinite wager to be "strictly better" than another.
  • an axiom describing what kinds of infinite wagers it is rational to be indifferent towards

Then, I would try to find a decisioning-system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function. If it didn't, that'd be interesting. In any case, whatever happens will tell us more about either the structure our preferences should follow or the structure that our rationality-axioms should follow (if we cannot find a system).

Of course, maybe my modification of the idea of a utility function turns out to show such a decisioning-system exists by construction. In this case, modifying the idea of a utility function would help tell me that my preferences should follow the structure of that modification as well.

Does that address the question?

[1] From your post:

We should say instead, preferences are not up for grabs -- utility functions merely encode these, remember. But if we're stating idealized preferences (including a moral theory), then these idealized preferences had better be consistent -- and not literally just consistent, but obeying rationality axioms to avoid stupid stuff. Which, as already discussed above, means they'll correspond to a bounded utility function.
Reality-Revealing and Reality-Masking Puzzles

As I commented elsewhere I think this is great, but there's one curious choice here, which is to treat exposure to The Singularity as a de-conversion experience and loss of faith rather than a conversion experience where one gains faith. The parallel is to someone going from believer to atheist, rather than atheist to believer.

Which in some ways totally makes sense, because rationality goes hand in hand with de-conversion, as the Sequences are quite explicit about over and over again, and often people joining the community are in fact de-converting from a religion (and when and if they convert to one, they almost always leave the community). And of course, because the Singularity is a real physical thing that might really happen and really do all this, and so on.

But I have the system-1 gut instinct that this is actually getting the sign wrong in ways that are going to make it hard to understand people's problem here and how to best solve it.

(As opposed to it actually being a religion, which it isn't.)

From the perspective of a person processing this kind of new information, the fact that the information is true or false, or supernatural versus physical, doesn't seem that relevant. What might be much more relevant is that you now believe that this new thing is super important and that you can potentially have really high leverage over that thing. Which then makes everything feel unimportant and worth sacrificing - you now need to be obsessed with new hugely important thing and anyone who isn't and could help needs to be woken up, etc etc.

If you suddenly don't believe in God and therefore don't know if you can be justified in buying hot cocoa, that's pretty weird. But if you suddenly do believe in God and therefore feel you can't drink hot cocoa, that's not that weird.

People who suddenly believe in God don't generally have the 'get up in the morning' question on their mind, because the religions mostly have good answers for that one. But the other stuff all seems to fit much better?

Or, think about the concept Anna discusses about people's models being 'tangled up' with stuff they've discarded because they lost faith. If God doesn't exist why not [do horrible things] and all that because nothing matters so do what you want. But this seems like mostly the opposite, it's that the previous justifications have been overwritten by bigger concerns.

Reality-Revealing and Reality-Masking Puzzles

This post is great and much needed, and makes me feel much better about the goings-on at CFAR.

It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes. Sometimes these mistakes are disorientation that people know are disruptive and need to be dealt with, but other times I've encountered many who view such things as right and proper, and view not having such a perspective as blameworthy. I even frequently find an undertone of 'if you don't have this orientation something went wrong.'

It's clear from this post that this is not what is happening for Anna/CFAR, which is great news.

This now provides, to me, two distinct things.

One, a clear anchor from which to make it clear that failure to engage with regular life, and failure to continue to have regular moral values and desires and cares and hobbies and so on, is a failure mode of some sort of phase transition that we have been causing. That it is damaging, and it is to be avoided slash the damage contained and people helped to move on as smoothly and quickly as possible.

Two, the framework of reality-revealing versus reality-masking, which has universal application. If this resonates with people it might be a big step forward in being able to put words to key things, including things I'm trying to get at in the Mazes sequence.

Reality-Revealing and Reality-Masking Puzzles

Having a go at pointing at "reality-masking" puzzles:

There was the example of discovering how to cue your students into signalling they understand the content. I think this is about engaging with a reality-masking puzzle that might show up as "how can I avoid my students probing at my flaws while teaching" or "how can I have my students recommend me as a good tutor" or etc.

It's a puzzle in the sense that it's an aspect of reality you're grappling with. It's reality-masking in that the pressure was away from building true/accurate maps.

Having a go at the analogous thing for "disabling part of the epistemic immune system": the cluster of things we're calling an "epistemic immune system" is part of reality and in fact important for people's stability and thinking, but part of the puzzle of "trying to have people be able to think/be agenty/etc" has tended to have us ignore that part of things.

Rather than, say, instinctively trusting that the "immune response" is telling us something important about reality and the person's way of thinking/grounding, one might be looking to avoid or disable the response. This feels reality-masking; like not engaging with the data that's there in a way that moves toward greater understanding and grounding.

Using Expert Disagreement
The lying theory is tricky, as it can explain anything.

The lying theory can explain away any "evidence", but not tell you what the truth is - at best it can tell you where the truth is not.

How to Escape From Immoral Mazes

1: The first 'are' should be 'what if'.

2: Difference is that third-to-last question is about the 'can't afford it' concern, which is distinct from generally being trapped. Could see changing it to be last three, or unifying the notes.

3: Differently. Arcane here means 'complex and obscure details that need to be mastered and done correctly, or it won't work'. Incantation here means 'a thing you say in order to evoke a particular response' in this case a social web pattern.

Toon Alfrink's sketchpad
Any combination of goals/drives could have a (possibly non-linear) mapping which turns them into a single unified goal in that sense, or vice versa.

Yeah, I think that if the brain in fact is mapped that way it would be meaningful to say you have a single goal.

Let me put it more simply: can achieving "self-determination" alleviate your need to eat, sleep, and relieve yourself? If not, then there are some basic biological needs (maintenance of which is a goal) that have to be met separately

Maybe, it depends on how the brain is mapped. I know of at least a few psychology theories which would say things like avoiding pain and getting food are in the service of higher psychological needs. If you came to believe for instance that eating wouldn't actually lead to those higher goals, you would stop.

I think this is pretty unlikely. But again, I'm not sure.

Bay Solstice 2019 Retrospective

So I made my own spreadsheet, which is publicly editable and incorporates every song, poem, story, and speech from the above two repositories.

This looks pretty useful, thanks!

Reality-Revealing and Reality-Masking Puzzles

That's close.

Engaging with CFAR and LW's ideas about redesigning my mind and focusing on important goals for humanity (e.g. x-risk reduction), has primarily - not partially - majorly improved my general competence, and how meaningful my life is. I'm a much better person, more honest and true, because of it. It directly made my life better, not just my abstract beliefs about the future.

The difficulties above were transitional problems, not the main effects.

Reality-Revealing and Reality-Masking Puzzles

Curated, with some thoughts:

I think the question of "how to safely change the way you think, in a way that preserves a lot of commonsense things" is pretty important. This post gave me a bit of a clearer sense of the "Valley of Bad Rationality" problem.

This post also seemed like part of the general project of "Reconciling CFAR's paradigm(s?) with the established LessWrong framework." In this case I'm not sure it precisely explains any parts of CFAR that people tend to find confusing. But it does lay out some frameworks that I expect to be helpful groundwork for that.

I shared some of Ben's confusion re: what point the post was specifically making about puzzles:

I guess this generally connects with my confusion around the ontology of the post. I think it would make sense for the post to be 'here are some problems where puzzling at them helped me understand reality' and 'here are some problems where puzzling at them caused me to hide parts of reality from myself', but you seem to think it's an attribute of the puzzle, not the way one approaches it, and I don't have a compelling sense of why you think that.

There were some hesitations I had about curating it – to some degree, this post is a "snapshot of what CFAR is doing in 2020", which is less obviously "timeless content". The post depends a fair bit on the reader already knowing what CFAR is and how they relate to LessWrong. But the content was still focused on explaining concepts, which I expect to be generally useful.

Reality-Revealing and Reality-Masking Puzzles
I should say, these shifts have not been anything like an unmitigated failure, and I don't now believe were worth it just because they caused me to be more socially connected to x-risk things.

Had a little trouble parsing this, especially the second half. Here's my attempted paraphrase:

I take you to be saying that: 1) the shifts that resulted from engaging with x-risk were not all bad, despite leading to the disorienting events listed above, and 2) in particular, you think the shifts were (partially) beneficial for reasons other than just that they led you to be more socially connected to x-risk people.

Is that right?

A rant against robots
Typically, Legg-Hutter intelligence does not seem to require any "embodied intelligence".

Don't make the mistake of basing your notions of AI on uncomputable formalisms. That mistake has destroyed more minds on LW than probably anything else.

Toon Alfrink's sketchpad
It can be if the basic structure is "I need to get my basic needs taken care of so that I can work on my ultimate goal".

That's a fully generic response though. Any combination of goals/drives could have a (possibly non-linear) mapping which turns them into a single unified goal in that sense, or vice versa.

Let me put it more simply: can achieving "self-determination" alleviate your need to eat, sleep, and relieve yourself? If not, then there are some basic biological needs (maintenance of which is a goal) that have to be met separately from any "ultimate" goal of self-determination. That's the sense in which I considered it obvious we don't have singular goal systems.

Conclusion to the sequence on value learning

I feel like you are trying to critique something I wrote, but I'm not sure what? Could you be a bit more specific about what you think I think that you disagree with?

(In particular, the first paragraph sounds like a statement that I myself would make, so I'm not sure how it is a critique.)

Toon Alfrink's sketchpad

Thanks, I learned something.

Although, for the purposes of this discussion, it seems that Maslow's specific factorization of goals is questionable, not the general idea of a hierarchy of needs. Does that sound reasonable?

How to Escape From Immoral Mazes

Errata:

Are mazes are where our human and/or social capital pays off?

Are mazes where our human and/or social capital pays off?


What's the reason for the difference here?:

I realize some people have already become so trapped in mazes that they cannot walk away.
If you actually can’t walk away, see the last two questions. [2]

If you actually can’t afford to quit, see the last three questions. [3]


Jargon:

incantation
arcane

Are these words being used similarly or differently? (They both seem to be words associated with magic, but that could be a coincidence.)

A rant against robots
that the most powerful algorithms, the ones that would likely first become superintelligent, would be distributed and fault-tolerant, as you say, and therefore would not be in a box of any kind to begin with.

Algorithms don't have a single "power" setting. It is easier to program a single computer than to make a distributed, fault-tolerant system. Algorithms like AlphaGo are run on a particular computer with an off switch, not spread around. Of course, a smart AI might soon load its code all over the internet, if it has access. But it would start in a box.

Go F*** Someone

You're probably right. It would be 10x more useful if it offered some specifics as to what's bad about the post, though. As it is, it's just a differently-shaped downvote.

Reality-Revealing and Reality-Masking Puzzles

Interesting post, although I wish "reality-masking" puzzles had been defined better. Most of this post is about disorientation patterns, or about disabling parts of the epistemic immune system, more than anything directly masking reality.

Also related: Pseudo-rationality

Against Rationalization II: Sequence Recap

Congrats! Note that if you go to the library page and scroll down a bit, you'll find a "create sequence" button, which you can use if you want to create a formal sequence for this. 

(Also happy to help with this if the UI is confusing – we haven't really optimized our sequence UI as much as we'd like)

Impact measurement and value-neutrality verification

Hmm, I somehow never saw this reply, sorry about that.

you get something like Paul's going out with a whimper where our easy-to-specify values win out over our other values [...] it's very important that your AGI not be better at optimizing some of your values over others, as that will shift the distribution of value/resources/etc. away from the real human preference distribution that we want.

Why can't we tell it not to overoptimize the aspects that it understands until it figures out the other aspects?

value-neutrality verification isn't just about strategy-stealing: it's also about inner alignment, since it could help you separate optimization processes from objectives in a natural way that makes it easier to verify alignment properties (such as compatibility with strategy-stealing, but also possibly corrigibility) on those objects.

As you (now) know, my main crux is that I don't expect to be able to cleanly separate optimization and objectives, though I also am unclear whether value-neutral optimization is even a sensible concept taken separately from the environment in which the agent is acting (see this comment).

Reality-Revealing and Reality-Masking Puzzles

I see. I guess that framing feels slightly off to me - maybe this is what you meant or maybe we have a disagreement - but I would say "Helping people not have worse lives after interacting with <a weird but true idea>". 

Like, I think that similar disorienting things would happen if someone really tried to incorporate PG's "Black Swan Farming" into their action space, and indeed many good startup founders have weird lives with weird tradeoffs relative to normal people, which often leads to burnout. "Interacting with x-risk" or "Interacting with the heavy-tailed nature of reality" or "Interacting with AGI" or whatever. Oftentimes stuff humans have only been interacting with in the last 300 years, or in some cases 50 years.

A voting theory primer for rationalists

Congratulations on finishing your doctorate! I'm very much looking forward to the next post in the sequence on multi-winner methods, and I'm especially interested in the metric you mention.

A voting theory primer for rationalists

I think this post should be included in the best posts of 2018 collection. It does an excellent job of balancing several desirable qualities: it is very well written, being both clear and entertaining; it is informative and thorough; and it is in the style of argument preferred on LessWrong, by which I mean it makes use of both theory and intuition in the explanation.

This post adds to the greater conversation by displaying rationality of the kind we are pursuing directed at a big societal problem. A specific example of what I mean, one that distinguishes this post from an overview any motivated poster might write, is the inclusion of Warren Smith's results; Smith is a mathematician from an unrelated field who has no published work on the subject. But he did the work anyway, and it was good work which the author himself expanded on, and now we get to benefit from it through this post. This puts me very much in mind of the fact that this community was primarily founded by an autodidact who was deeply influenced by a physicist writing about probability theory.

A word on one of our sacred taboos: in the beginning it was written that Politics is the Mindkiller, and so it was for years and years. I expect this is our most consistently and universally enforced taboo. Yet here we have a high-quality and very well received post about politics, and of the ~70 comments only one appears to have been mindkilled. This post has great value on the strength of being an example of how to address troubling territory successfully. I expect most readers didn't even consider that this was political territory.

Even though it is a theory primer, it manages to be practical and actionable. Observe how the very method of scoring posts for the review, quadratic voting, is one that is discussed in the post. Practical implications for the management of the community weigh heavily in my consideration of what should be considered important conversation within the community.

Carrying on from that point into its inverse, I note that this post introduced the topic to the community (though there are scattered older references to some of the things it contains in comments). Further, as far as I can tell the author wasn't a longtime community member before this post and the sequence that followed it. The reason this matters is that LessWrong can now attract and give traction to experts in fields outside of its original core areas of interest. This is not a signal of the quality of the post so much as the post being a signal about LessWrong, so there is a definite sense in which this weighs against its inclusion: the post showed up fully formed rather than being the output of our intellectual pipeline.

I would have liked to see (probably against the preferences of most of the community, and certainly against the signals the author would have received as a lurker) the areas where advocacy is happening called out as a specific section. I found them anyway, because they were contained in the disclosures and threaded through the discussion, and by clicking the links, but I suspect that many readers would have missed them. This is especially true for readers less politically interested than I, which is most of them. The obvious reason is for interested people to be able to find it more easily, which matters a lot to problems like this one. The meta-reason is that posts that tread dangerous ground might benefit from directing people somewhere else for advocacy specifically, kind of like a communication-pressure release valve. It speaks to the quality of the post that this wasn't even an issue here, but for future posts on similar topics in a growing LessWrong I expect it to be.

Lastly, I want to observe that the follow-up posts in the sequence are also good, suggesting that this post was fertile ground for more discussion. In terms of additional follow-up: I would like to see this theory deployed at the level of intuition building, in a way similar to how we use markets, Prisoner's Dilemmas, and, more recently, Stag Hunts. I feel like it would be a good, human-achievable counterweight to things like utility functions and value handshakes in our conversation, and would thereby make our discussions more actionable.

Reality-Revealing and Reality-Masking Puzzles

Overall I'm still quite confused, so for my own benefit, I'll try to rephrase the problem here in my own words:

Engaging seriously with CFAR’s content adds lots of things and takes away a lot of things. You can get the affordance to creatively tweak your life and mind to get what you want, or the ability to reason with parts of yourself that were previously just a kludgy mess of something-hard-to-describe. You might lose your contentment with black-box fences and not applying reductionism everywhere, or the voice promising you'll finish your thesis next week if you just try hard enough.

But in general, simply taking out some mental stuff and inserting an equal amount of something else isn't necessarily a sanity-preserving process. This can be true even when the new content is more truth-tracking than what it removed. In a sense people are trying to move between two paradigms -- but often without any meta-level paradigm-shifting skills.

Like, if you feel common-sense reasoning is now nonsense, but you’re not sure how to relate to the singularity/rationality stuff, it's not an adequate response for me to say "do you want to double crux about that?", for the same reason that reading Bible verses isn't adequate advice to a reluctant atheist tentatively hanging around church.

I don’t think all techniques are symmetric, or that there aren't ways of resolving internal conflict which systematically lead to better results, or that you can’t trust your inside view when something superficially pattern matches to a bad pathway.

But I don’t know the answer to the question of “How do you reason, when one of your core reasoning tools is taken away? And when those tools have accumulated years of implicit wisdom, instinctively hill-climbing toward protecting what you care about?”

I think sometimes these consequences are noticeable before someone fully undergoes them. For example, after going to CFAR I had close friends who were terrified of rationality techniques, and who were furious when I suggested they make some creative but unorthodox tweaks to their degree in order to allow more time for interesting side-projects (or, as in Anna's example, finishing one's PhD 4 months earlier). In fact, they were furious even at the mere suggestion of the potential existence of such tweaks. Curiously, these very same friends were also quite high-performing and far above average on Big 5 measures of intellect and openness. They surely understood the suggestions.

There can be many explanations of what's going on, and I'm not sure which is right. But one idea is simply that 1) some part of them had something to protect, and 2) some part correctly predicted that reasoning about these things in the way I suggested would lead to a major and inevitable life up-turning.

I can imagine inside views that might generate discomfort like this.

  • "If AI was a problem, and the world is made of heavy tailed distributions, then only tail-end computer scientists matter and since I'm not one of those I lose my ability to contribute to the world and the things I care about won’t matter."
  • "If I engaged with the creative and principled optimisation processes rationalists apply to things, I would lose the ability to go to my mom for advice when I'm lost and trust her, or just call my childhood friend and rant about everything-and-nothing for 2h when I don't know what to do about a problem."

I don't know how to do paradigm-shifting; or what meta-level skills are required. Writing these words helped me get a clearer sense of the shape of the problem.

(Note: this comment was heavily edited for clarity following some feedback)

Reality-Revealing and Reality-Masking Puzzles

I find the structure of this post very clear, but I'm confused about which are the 'reality-masking' problems that you say you spent a while puzzling over. You list three bullets in that section; let me rephrase them as problems.

  • How to not throw things out just because they seem absurd
  • How to update on bayesian evidence even if it isn't 'legible, socially approved evidence'
  • How to cause beliefs to propagate through one's model of the world

I guess this generally connects with my confusion around the ontology of the post. I think it would make sense for the post to be 'here are some problems where puzzling at them helped me understand reality' and 'here are some problems where puzzling at them caused me to hide parts of reality from myself', but you seem to think it's an attribute of the puzzle, not the way one approaches it, and I don't have a compelling sense of why you think that.

You give an example of teaching people math, and finding that you were training particular bad patterns of thought in yourself (and the students). That's valid, and I expect it's a widespread experience. I personally have done some math tutoring that I don't think had that property, due to background factors that affected how I approached it. In particular, I wasn't getting paid, my mum told me I had to do it (she's a private English teacher who also offers maths, but knows I grok maths better than her), and so I didn't have much incentive to achieve results. I mostly just spoke with kids about what they understood, drew diagrams, etc, and had a fun time. I wasn't too results-driven, mostly just having fun, and this effect didn't occur.

More generally, many problems will teach you bad things if you locally hill-climb or optimise in a very short-sighted way. I remember as a 14 year old, I read Thinking Physics, spent about 5 mins per question, and learned nothing from repeatedly just reading the answers. Nowadays I do Thinking Physics problems weekly, and I spend like 2-3 hours per question. This seems more like a fact about how I approached it than a fact about the thing itself.

Looking up at the three bullets I pointed to, all three of them are important things to get right, that most people could be doing better on. I can imagine healthy and unhealthy ways of approaching them, but I'm not sure what an 'unhealthy puzzle' looks like.

Reality-Revealing and Reality-Masking Puzzles

I found this a very useful post. It feels like a key piece in helping me think about CFAR, but also it sharpens my own sense of what stuff in "rationality" feels important to me. Namely, "Helping people not have worse lives after interacting with rationalist memes".

Underappreciated points about utility functions (of both sorts)
I already said that I think that thinking in terms of infinitary convex combinations, as you're doing, is the wrong way to go about it; but it took me a bit to put together why that's definitely the wrong way.
Specifically, it assumes probability! Fishburn, in the paper you link, assumes probability, which is why he's able to talk about why infinitary convex combinations are or are not allowed (I mean, that and the fact that he's not necessarily arbitrary actions).
Savage doesn't assume probability!

Savage doesn't assume probability or utility, but their construction is a mathematical consequence of the axioms. So although they come later in the exposition, they mathematically exist as soon as the axioms have been stated.

So if you want to disallow certain actions... how do you specify them?

I am still thinking about that, and may be some time.

As a general outline of the situation, you read P1-7 => bounded utility as modus ponens: you accept the axioms and therefore accept the conclusion. I read it as modus tollens: the conclusion seems wrong, so I believe there is a flaw in the axioms. In the same way, the axioms of Euclidean geometry seemed very plausible as a description of the physical space we find ourselves in, but conflicts emerged with phenomena of electromagnetism and gravity, and eventually they were superseded as descriptions of physical space by the geometry of differential manifolds.
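
To spell out the two readings (a minimal formalization of the contrast, not Savage's own notation):

$$ (P1 \wedge P2 \wedge \dots \wedge P7) \;\Rightarrow\; \text{utility is bounded} $$

Modus ponens: accept $P1$–$P7$, therefore accept bounded utility. Modus tollens: reject bounded utility, therefore conclude $\neg(P1 \wedge \dots \wedge P7)$, i.e. at least one of the axioms fails, without the argument itself saying which one.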

It isn't possible to answer the question "which of P1-7 would I reject?" What is needed to block the proof of bounded utility is a new set of axioms, which will no doubt imply large parts of P1-7, but might not imply the whole of any one of them. If and when such a set of axioms can be found, P1-7 can be re-examined in their light.

Go F*** Someone

95%+ of people who drop out of the workforce to raise children are women

Citation needed.

Other than that, you are supporting my general argument by writing from within the very framework that I lay out here. Why is the choice to leave work "destructive"? Why is it OK for a man to depend on a woman for the biological necessities of having a family, but not OK for either partner to depend on the other for the financial necessities?

Accomplished women who drop out to raise families usually don't surrender the spending of money to their husbands (I agree that demanding that they do so is patriarchal and bad). They only surrender the making of the money. The ability to spend money is what lets people build good lives and families, but making money is what contributes to their status*. Post-divorce, it's usually much easier for a woman (particularly an accomplished one) to make money again than it is for a man to have children again.

*At least, their status among some people. I personally care about LW karma more than income :)

Go F*** Someone

I am fairly sure it’s criticism (and I agree with it).

Please Critique Things for the Review!

Also, I haven't voted yet because I don't remember the details of the vast majority of the posts, and don't feel comfortable just voting based on my current general feeling about each post

Reminder here that it's pretty fine to vote proportional to "how good does the post seem" and "how confident you are in that assessment." (i.e. I expect it to improve the epistemic value of the vote if people in your reference class weakly vote on the posts that seem good)

What are beliefs you wouldn't want (or would feel apprehensive about being) public if you had (or have) them?
Stop commoditizing startup wisdom, I feel it creates more failures than successes.

Advice should be pre-registered, so there isn't publication bias from startup founders that succeed?

What long term good futures are possible. (Other than FAI)?

At the moment, human brains are a cohesive whole that optimizes for human values. We haven't yet succeeded in making the machines share our values, and the human brain is not designed for upgrading. The human brain can take knowledge from an external source and use it. External tools follow the calculator model. The human thinks about the big-picture world, and realizes that, as a mental subgoal of designing a bridge, they need to do some arithmetic. Instead of doing the arithmetic themselves, they pass the task on to the machine. In this circumstance, the human controls the big picture, understands what cognitive labor has been externalized, and knows that it will help the human's goals.

If we have a system to which a human can say "go and do whatever is most moral", that's FAI. If we have a calculator-style system where humans specify the power output, weight, material use, radiation output etc. of a fusion plant, and the AI tries to design a fusion plant meeting those specs, that's useful but not nearly as powerful as full ASI. Humans with calculator-style AI could invent molecular nanotech without working out all the details, but they still need an Eric Drexler to spot the possibility.
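
Here is a minimal sketch of what I mean by the calculator model, with made-up names and a made-up candidate library (nothing here is a real design tool): the human owns the spec and the big picture, and the tool only optimizes within the spec it was handed.

```python
# Toy "calculator model": the human keeps the big picture and hands the tool a
# narrow, fully specified subtask, so the human knows exactly what cognitive
# labor has been delegated.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DesignSpec:            # the human-specified constraints
    min_power_mw: float
    max_weight_tonnes: float

@dataclass
class Design:                # a candidate the tool can return
    power_mw: float
    weight_tonnes: float
    cost: float

CANDIDATES = [               # stand-in for whatever design search the tool does
    Design(power_mw=500, weight_tonnes=900, cost=4.0),
    Design(power_mw=650, weight_tonnes=1200, cost=5.5),
    Design(power_mw=800, weight_tonnes=1100, cost=6.0),
]

def calculator_style_tool(spec: DesignSpec) -> Optional[Design]:
    """Optimize only within the spec; no opinion on whether the spec is a good idea."""
    feasible = [d for d in CANDIDATES
                if d.power_mw >= spec.min_power_mw
                and d.weight_tonnes <= spec.max_weight_tonnes]
    return min(feasible, key=lambda d: d.cost) if feasible else None

print(calculator_style_tool(DesignSpec(min_power_mw=600, max_weight_tonnes=1150)))
# Design(power_mw=800, weight_tonnes=1100, cost=6.0)
```

An FAI-style system, by contrast, would be handed the big picture itself ("go and do whatever is most moral") rather than a bounded spec.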

In my model you can make a relativistic rocket, but you can't take a sparrow and upgrade it into something that flies through space at 10% light speed and is still a sparrow. If you're worried that relativistic rockets might spew dangerous levels of radiation, you can't make a safe spacecraft by taking a sparrow and upgrading it to fly at 10% c. (Well, with enough R&D you could make a rocket that superficially resembles a sparrow. Deciding to upgrade a sparrow doesn't make the safety engineering any easier.)

Making something vastly smarter than a human is like making something far faster than a sparrow. Strap really powerful turbojets to the sparrow and it crashes and burns. Attach a human brain to 100X human-brain gradient descent and you get an out-of-control AI system with nonhuman goals. Human values are delicate. I agree that it is possible to carefully unravel what a human mind is thinking and what its goals are, and then upgrade it in a way that preserves those goals, but this requires a deep understanding of how the human mind works. Even granted mind uploading, it would still be easier to create a new mind largely from first principles. You might look at the human brain to figure out what those principles are, in the same way a plane designer looks at birds.

I see a vast space of all possible minds, some friendly, most not. Humans are a small dot in this space. We know that humans are usually friendly. We have no guarantees about what happens as you move away from humans. In fact we know that one small error can sometimes send a human totally mad. If we want to make something that we know is safe, we either need to copy that dot exactly, (ie normal biological reproduction, mind uploading) or we need something we can show to be safe for some other reason.

My point with the Egypt metaphor was that the sentence

Society continues as-is, but with posthuman capabilities.

is incoherent.

Try "the stock market continues as is, except with all life extinct"

Describing the modern world as "like a tribe of monkeys, except with post-monkey capabilities" is either wrong or so vague as to not tell you much.

At the point when the system (upgraded human, AI, whatever you want to call it) is 99% silicon, a stray meteor hits the biological part. If the remaining 99% stays friendly, somewhere in this process you have solved FAI. I see no reason why aligning a 99% silicon being is easier than aligning a 100% silicon being.

Please Critique Things for the Review!

I think if there was a period where every few days a mod would post a few nominated posts and ask people to re-read and re-discuss them, that might have helped to engage people like me more. (Although honestly there's so much new content on LW competing for attention now that I might not have participated much even in that process.)

That's a pretty good idea, might try something like that next year.

the ones that did jump out at me I think I already commented on back when they were first posted and don't feel motivated to review them now.

Not sure how helpful this is, but fwiw: 

I think it's useful for post authors to write reviews basically saying "here is how my thinking has evolved since writing this" and/or "yup, I still just endorse this and think it's great".

In the same way, I think it'd be useful if people who did most of their commenting back in the day wrote a short review that basically says "I still endorse the things I said back then", or "my thinking has changed a bit, here's how." (As I noted elsethread, I think it was also helpful when Vanessa combined several previous comments into one more distilled comment, although obviously that's a bit more work.)

Please Critique Things for the Review!

And yeah, the whole thing feels mostly like work, which can’t help.

This is partly why I haven't done any reviews, despite feeling a vague moral obligation to do so. Another reason is that I wasn't super engaged with LW throughout most of 2018 and few of the nominated posts jumped out at me (as something that I have a strong opinion about) from a skim of the titles, and the ones that did jump out at me I think I already commented on back when they were first posted and don't feel motivated to review them now. Maybe that's because I don't like to pass judgment (I don't think I've written a review for anything before) and when I first commented it was in the spirit of "here are some tentative thoughts I'm bringing up for discussion".

Also, I haven't voted yet because I don't remember the details of the vast majority of the posts, and don't feel comfortable just voting based on my current general feeling about each post (which is probably most strongly influenced by how much I currently agree with the main points it tried to make), and I also don't feel like taking the time to re-read all of the posts. (I think for this reason perhaps whoever's selecting the final posts to go into the book should consider post karma as much or even more than the votes?)

I think if there was a period where every few days a mod would post a few nominated posts and ask people to re-read and re-discuss them, that might have helped to engage people like me more. (Although honestly there's so much new content on LW competing for attention now that I might not have participated much even in that process.)

Exploring safe exploration

Hey Aray!

Given this, I think the "within-episode exploration" and "across-episode exploration" relax into each other, and (as the distinction of episode boundaries fades) turn into the same thing, which I think is fine to call "safe exploration".

I agree with this. I jumped the gun a bit in not really making the distinction clear in my earlier post “Safe exploration and corrigibility,” which I think made it a bit confusing, so I went heavy on the distinction here—but perhaps more heavy than I actually think is warranted.

The problem I have with relaxing within-episode and across-episode exploration into each other, though, is precisely the problem I describe in “Safe exploration and corrigibility”: by default you only end up with capability exploration, not objective exploration—that is, an agent with a goal (i.e. a mesa-optimizer) is only going to explore to the extent that it helps its current goal, not to the extent that it helps it change its goal to be more like the desired goal. Thus, you need to do something else (something that possibly looks somewhat like corrigibility) to get the agent to explore in such a way that helps you collect data on what its goal is and how to change it.

Bay Solstice 2019 Retrospective

Yeah, wanted to basically just echo these points.

Go F*** Someone

I have no idea whether this is intended as a compliment or a criticism.

Conclusion to the sequence on value learning

Seems as if there's some sleight-of-hand going on here. Yes, we can show that any policy that is invulnerable to dutch-booking is equivalent to optimizing some utility function. But you've also shown earlier that "equivalent to optimizing some utility function" is a nearly-vacuous concept. There are plenty of un-dutch-bookable policies which still don't end up paving the universe in utilitronium, for ANY utility function.

Furthermore, I find it easy to imagine human-like value systems which are in fact dutch-bookable; e.g., "I like to play peekaboo with babies" is dutch-bookable between "eyes covered" and "eyes uncovered". So the generalization at the outset of this chapter seems over-broad.
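
To make "dutch-bookable" concrete, here is a toy money-pump sketch of my own (the agent and the prices are invented): an agent that will always pay a little to flip between the two states can be cycled back to where it started while steadily handing money to the bookie.

```python
# Toy money pump against a "peekaboo"-style preference: the agent always values
# switching states and will pay up to 1 unit for the switch, so a bookie who keeps
# offering the flip extracts money while the agent ends up back where it began.

def accepts(current_state: str, offered_state: str, price: float) -> bool:
    """The agent prefers novelty: it pays up to 1 unit to switch states."""
    return offered_state != current_state and price <= 1.0

def run_money_pump(rounds: int = 10, price: float = 0.5) -> float:
    state = "eyes covered"
    bookie_profit = 0.0
    for _ in range(rounds):
        offer = "eyes uncovered" if state == "eyes covered" else "eyes covered"
        if accepts(state, offer, price):
            state = offer            # the flip happens
            bookie_profit += price   # and the agent pays for it
    return bookie_profit

print(run_money_pump())  # 5.0 -- after 10 flips the agent is back in "eyes covered", 5 units poorer
```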

Clarifying The Malignity of the Universal Prior: The Lexical Update

Thanks, that makes sense. Here is my rephrasing of the argument:

Let the 'importance function' take as inputs machines and , and output all places where is being used as a universal prior, weighted by their effect on -short programs. Suppose for the sake of argument that there is some short program computing ; this is probably the most 'natural' program of this form that we could hope for.

Even given such a program, we'll still lose to the aliens: in , directly specifying our important decisions on Earth using will require both and to be fed into , costing bits, then bits to specify us. For the aliens, getting them to be motivated to control -short programs costs bits, but then they can skip directly to specifying us given , so they save bits over the direct explanation. So the lexical update works.

(I went wrong in thinking that the aliens would need to both update their notion of importance to match ours *and* locate our world; but if we assume the 'importance function' exists then the aliens can just pick out our world using our notion of importance)

Why Quantum?

Comments should be indexed by Google. I just went to 5 very old posts with hundreds of comments and randomly searched text-strings from them on Google, and all of them returned a result: 

If anyone can find any comments that are not indexed, please let me know, and I will try to fix it, but it seems (to me) that all comments are indexed for now. 

A rant against robots

This is probably more contentious. But I believe that the concept of "intelligence" is unhelpful and causes confusion. Typically, Legg-Hutter intelligence does not seem to require any "embodied intelligence".

I would rather stress two key properties of an algorithm: the quality of the algorithm's world model and its (long-term) planning capabilities. It seems to me (but maybe I'm wrong) that "embodied intelligence" is not very relevant to world model inference and planning capabilities.

human psycholinguists: a critical appraisal

Thank you for this!

It seems that my ignorance is on display here; the fact that these papers are new to me shows just how out of touch with the field I am. I am unsurprised that 'yes it works, mostly, but other approaches are better' is the answer, and should not be surprised that someone went and did it.

It looks like the successful Facebook AI approach is several steps farther down the road than my proposal, so my offer is unlikely to provide any value beyond the intellectual exercise for me, and I'm probably not actually going to go through with it--by the time the price drops that far, I will want to play with the newer tools.

Waifulabs is adorable and awesome. I've mostly been using style transfers on still-life photos and paintings; converting a human waifu selfie to anime art is on my to-do list, but it has been sitting there for a while.

Are you planning integration with DeepAnime and maybe WaveNet so your perfect waifus can talk? Though you would know if that's a desirable feature for your userbase better than I would...

On the topic, it looks like someone could, today, convert a selfie of a partner into an anime face, train WaveNet on a collection of voicemails, and train a generator on an archive of text-message conversations, so that they could have inane conversations with a robot, with an anime face reading the messages to them with believable mouth movements.

I guess the next step after that would be to analyze the text for inferred emotional content (simple NLP approaches might get really close to the target here; pretty sure they're already built), and warp the voice/eyes for emotional expression (I think WaveNet can do this for voice, if I remember correctly?).

Maybe a deepfake-type approach that transforms the anime girls using a palette of representative emotion faces? I'd be unsurprised if this has already been done, though maybe it's niche enough that it has not been.

This brings to mind an awful idea: In the future I could potentially make a model of myself and provide it as 'consolation' to someone I am breaking up with. Or worse, announce that the model has already been running for two weeks.

I suspect that, today, older-style, still-image-heavy anime could probably be crafted entirely using generators (limited editing of the writing, no animators or voice actors). Is there a large archive of anime scripts somewhere that a generator could train on, or is that data all scattered across privately held archives?

What do you think?

Toon Alfrink's sketchpad
Prioritization of goals is not the same as goal unification.

It can be if the basic structure is "I need to get my basic needs taken care of so that I can work on my ultimate goal".

I think Kaj has a good link on experimental proof for Maslow's Hierarchy.

I also think that it wouldn't be a stretch to call Self-determination theory a "single goal" framework, that goal being "self-determination", which is a single goal made up of 3 separate subgoals which, crucially, must be obtained together to create meaning. (If they could be obtained separately to create meaning, and people were OK with that, then I don't think it would be fair to categorize it as a single-goal theory.)
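
A tiny sketch of my own (illustrative numbers, nothing from SDT itself) of what "must be obtained together" means as a mapping: a non-linear aggregation like min() behaves as a single goal, while a plain average lets the subgoals be pursued separately.

```python
# Non-linear aggregation: the unified goal is only satisfied when all three
# subgoals are, so the weakest subgoal dominates.
def unified_goal(autonomy: float, competence: float, relatedness: float) -> float:
    return min(autonomy, competence, relatedness)

# Contrast: a separable aggregation where subgoals can substitute for each other.
def separable_goals(autonomy: float, competence: float, relatedness: float) -> float:
    return (autonomy + competence + relatedness) / 3

print(unified_goal(0.9, 0.9, 0.1))     # 0.1  -- neglecting one subgoal ruins the whole
print(separable_goals(0.9, 0.9, 0.1))  # ~0.63 -- neglect can be compensated elsewhere
```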

Bay Solstice 2019 Retrospective

Thank you, Nat and Chelsea, for organising the Solstice. It's one of the most meaningful events I go to each year, one that makes me feel like I care about the same things as so many other people I know.

As a second point, this retrospective is really detailed and I feel like I can get a lot of your knowledge from it, and I'm really glad something like this will be around for future solstice organisers to learn from.

Why Quantum?

Apparently LessWrong comments are not indexed by google, so I don’t have a non-time-intensive way of tracking down that comment.

Comments should be indexed by Google (I've seen comments show up in my search results before), but maybe not completely? Can you send a note to the LW team (telling them why you think comments are not being indexed) to see if there's anything they can do about this? In the meantime, have you tried LW's own search feature (the magnifying glass icon at the top)?

Here’s a paper by David Wallace on Deutsch’s decision theory formulation of the Born probabilities

I actually wrote a comment about that back in 2009 but haven't revisited it since. Have you read the response/counterargument I linked to, and still find Wallace's paper compelling?

The Rocket Alignment Problem

Fair, but I expect I've also read those comments buried in random threads. Like, Nate said it here three years ago on the EA Forum.

The main case for [the problems we tackle in MIRI's agent foundations research] is that we expect them to help in a gestalt way with many different known failure modes (and, plausibly, unknown ones). E.g., 'developing a basic understanding of counterfactual reasoning improves our ability to understand the first AGI systems in a general way, and if we understand AGI better it's likelier we can build systems to address deception, edge instantiation, goal instability, and a number of other problems'.

I have a mental model of directly working on problems. But before Eliezer's post, I didn't have an alternative mental model to move probability mass toward. I just funnelled probability mass away from "MIRI is working on direct problems they foresee in AI systems" to "I don't understand why MIRI is doing what it's doing". Nowadays I have a clearer pointer to what technical research looks like when you're trying to get less confused and get better concepts.

This sounds weirdly dumb to say in retrospect, because 'get less confused and get better concepts' is one of the primary ways I think about trying to understand the world these days. I guess the general concepts have permeated a lot of LW/rationality discussion. But at the time I guess I had a concept-shaped hole in my discussion of AI alignment research, and after reading this post I had a much clearer sense of that concept.

Reality-Revealing and Reality-Masking Puzzles

I experienced a bunch of those disorientation patterns during my university years. For example:

  • I would only spend time with people who cared about x-risk as well, because other people seemed unimportant and dull, and I thought I wouldn't want to be close to them in the long run. I would choose to spend time with people even if I didn't connect with them very much, hoping that opportunities to do useful things would show up (most of the time they didn't). And yet I wasn't able to just hang out with these people. I went through maybe a 6 month period where when I met up with someone, the first thing I'd do was list out like 10-15 topics we could discuss, and try to figure out which were the most useful to talk about and in what order we should talk about them. I definitely also turned many of these people off hanging out with me because it was so taxing. I was confused about this at the time. I thought I was not doing it well enough or something, because I wasn't providing enough value to them such that they were clearly having a good time.
  • I became very uninterested in talking with people whose words didn't cache out into a gears-level model of the situation based in things I could confirm or understand. I went through a long period of not being able to talk to my mum about politics at all. She's very opinionated and has a lot of tribal feels and affiliations, and seemed to me to not be thinking about it in the way I wanted to think about it, which was a more first-principles fashion. Nowadays I find it interesting to engage with how she sees the world, argue with it, feel what she feels. It's not the "truth" that I wanted, I can't take in the explicit content of her words and just input them into my beliefs, but this isn't the only way to learn from her. She has a valuable perspective on human coordination, that's tied up with important parts of her character and life story, that a lot of people share.
  • Relatedly, I went through a period of not being able to engage with aphorisms or short phrases that sounded insightful. Now I feel more trusting of my taste in what things mean and which things to take with me.
  • I generally wasn't able to connect with my family about what I cared about in life / in the big picture. I'd always try to be open and honest, and so I'd say something like "I think the world might end and I should do something about it" and they'd think that sounded mad and just ignore it. My Dad would talk about how he just cares that I'm happy. Nowadays I realise we have a lot of shared reference points for people who do things, not because they make you happy or because they help you be socially secure, but because they're right, because they're meaningful and fulfilling, and because it feels like it's your purpose. And they get that, and they know they make decisions like that, and they understand me when I talk about my decisions through that frame.
  • I remember on my 20th birthday, I had 10 of my friends round and gave a half-hour PowerPoint presentation on my life plan. Their feedback wasn't that useful, but I realised like a week later that the talk only contained info about how to evaluate whether a plan was good, and not how to generate plans to be evaluated. I'd just picked the one thing that people talked about that sounded okay under my evaluation process (publishing papers in ML, which was a terrible choice for me; I interacted very badly with academia). It took me a week to notice that I'd not said how to come up with plans. I then realised that I'd been thinking in a very narrow and evaluative way, and not been open to exploring interesting ideas before I could evaluate whether they worked.

I should say, these shifts have not been anything like an unmitigated failure, and I don't now believe they were worth it just because they caused me to be more socially connected to x-risk things or because they were worth it in some Pascal's mugging kind of way. Like, riffing off that last example, the birthday party was followed by us doing a bunch of other things I really liked - my friends and I read a bunch of dialogues from GEB after that (the voices people did were very funny) and ate cake, and I remember it fondly. The whole event was slightly outside my comfort zone, but everyone had a great time, and it was also in the general pattern of me trying to more explicitly optimise for what I cared about. A bunch of the stuff above has led me to form the strongest friendships I have, much stronger than I think I expected I could have. And many other things I won't detail here.

Overall the effects on me personally, on my general fulfilment and happiness and connection to people I care about, have been strongly positive, and I'm glad about this. I take more small social risks, and they pay off bigger. I'm better at getting what I want, getting sh*t done, etc. Here, I'm mostly just listing some of the awkward things I did while at university.
 
