jacobjacob's Comments

2018 Review: Voting Results!

That seems like weak evidence of karma info-cascades: posts with more karma get more upvotes *simply because* they have more karma, in a way which ultimately doesn't correlate with their "true value" (as measured by the review process).

Potential mediating causes include users anchoring on karma, or higher karma capturing a larger share of the userbase's attention (due to various sorting algorithms).

Reality-Revealing and Reality-Masking Puzzles

Overall I'm still quite confused, so for my own benefit, I'll try to rephrase the problem here in my own words:

Engaging seriously with CFAR’s content adds lots of things and takes away a lot of things. You can get the affordance to creatively tweak your life and mind to get what you want, or the ability to reason with parts of yourself that were previously just a kludgy mess of something-hard-to-describe. You might lose your contentment with black-box fences and not applying reductionism everywhere, or the voice promising you'll finish your thesis next week if you just try hard enough.

But in general, simply taking out some mental stuff and inserting an equal amount of something else isn't necessarily a sanity-preserving process. This can be true even when the new content is more truth-tracking than what it removed. In a sense people are trying to move between two paradigms -- but often without any meta-level paradigm-shifting skills.

Like, if you feel common-sense reasoning is now nonsense, but you're not sure how to relate to the singularity/rationality stuff, it's not an adequate response for me to say "do you want to double crux about that?", for the same reason that reading Bible verses isn't adequate advice to a reluctant atheist tentatively hanging around church.

I don’t think all techniques are symmetric, or that there aren't ways of resolving internal conflict which systematically lead to better results, or that you can’t trust your inside view when something superficially pattern matches to a bad pathway.

But I don't know the answer to the question of "How do you reason, when one of your core reasoning tools is taken away? And when those tools have accumulated years of implicit wisdom, instinctively hill-climbing toward protecting what you care about?"

I think sometimes these consequences are noticeable before someone fully undergoes them. For example, after going to CFAR I had close friends who were terrified of rationality techniques, and who were furious when I suggested they make some creative but unorthodox tweaks to their degree, in order to allow more time for interesting side-projects (or, as in Anna's example, to finish a PhD 4 months earlier). In fact, they were furious even at the mere suggestion of the potential existence of such tweaks. Curiously, these very same friends were also quite high-performing and far above average on Big 5 measures of intellect and openness. They surely understood the suggestions.

There can be many explanations of what's going on, and I'm not sure which is right. But one idea is simply that 1) some part of them had something to protect, and 2) some part correctly predicted that reasoning about these things in the way I suggested would lead to a major and inevitable life up-turning.

I can imagine inside views that might generate discomfort like this:

  • "If AI was a problem, and the world is made of heavy tailed distributions, then only tail-end computer scientists matter and since I'm not one of those I lose my ability to contribute to the world and the things I care about won’t matter."
  • "If I engaged with the creative and principled optimisation processes rationalists apply to things, I would lose the ability to go to my mom for advice when I'm lost and trust her, or just call my childhood friend and rant about everything-and-nothing for 2h when I don't know what to do about a problem."

I don't know how to do paradigm-shifting, or what meta-level skills are required. Writing these words helped me get a clearer sense of the shape of the problem.

(Note: this comment was heavily edited for clarity following some feedback)

jacobjacob's Shortform Feed

I saw an ad for a new kind of pants: stylish as suit pants, but flexible as sweatpants. I didn't have time to order them right then. But I saved the link in a new tab in my clothes database -- an Airtable that tracks all the clothes I own.

This crystallised some thoughts about external systems that had been brewing at the back of my mind -- in particular, about the gears-level principles that make some of them useful and powerful.

When I say "external", I am pointing to things like spreadsheets, apps, databases, organisations, notebooks, institutions, room layouts... and distinguishing those from minds, thoughts and habits. (Though this distinction isn't exact, as will be clear below, and some of these ideas are at an early stage.)

Externalising systems offers the following benefits...

1. Gathering answers to unsearchable queries

There are often things I want lists of, which are very hard to Google or research. For example:

  • List of groundbreaking discoveries that seem trivial in hindsight
  • List of different kinds of confusion, identified by their phenomenological qualia
  • List of good-faith arguments which are elaborate and rigorous, though uncertain, and which turned out to be wrong

etc.

Currently, no search engine (other than the human mind) is capable of finding many of these answers, at least if I'm expecting a certain level of quality. For the same reason, researching these lists directly is also very hard.

The only way I can build these lists is by accumulating those nuggets of insight over time.

And the way I make that happen is to have external systems which are ready to capture those insights as they appear.

2. Seizing serendipity

Luck favours the prepared mind.

Consider the following anecdote:

Richard Feynman was fond of giving the following advice on how to be a genius. [As an example, he said that] you have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: "How did he do it? He must be a genius!"

I think this is true far beyond intellectual discovery. In order for the most valuable companies to exist, there must be VCs ready to fund those companies when their founders are toying with the ideas. In order for the best jokes to exist, there must be audiences ready to hear them.

3. Collision of producers and consumers

Wikipedia has a page on "Bayes' theorem".

But it doesn't have a page on things like "The particular confusion that many people feel when trying to apply conservation of expected evidence to scenario X".

Why?

One answer is that more detailed pages aren't as useful. But I think that can't be the entire truth. Some of the greatest insights in science take a lot of sentences to explain (or, even if they have catchy conclusions, they depend on sub-steps which are hard to explain).

Rather, the survival of Wikipedia pages depends on both those who want to edit and those who want to read the page being able to find it. It depends on collisions, the emergence of natural Schelling points for accumulating content on a topic. And that's probably something like exponentially harder to accomplish the longer your thing takes to describe and search for.

Collisions don't just have to occur between different contributors. They must also occur across time.

For example, when I've had 3 different task management systems going, I've sometimes ended up just using a plain document at the end of the day, because I couldn't trust that if I left a task in any one of the systems, future Jacob would return to that same system to find it.

4. Allowing collaboration

External systems allow multiple people to contribute. This usually requires some formalism (a database, mathematical notation, lexicons, ...), and some sacrifice of flexibility (a cost which grows superlinearly as the number of contributors grows).

5. Defining systems extensionally rather than intensionally

These are terms from analytic philosophy. Roughly, the "intension" of the concept "dog" is a description like "a furry, four-legged mammal which evolved to be friendly and cooperative with a human owner". The "extension" of "dog" is simply the set of all dogs: {Goofy, Sunny, Bo, Beast, Clifford, ...}

If you're defining a concept extensionally, you can simply point to examples as soon as you have some fleeting intuitive sense of what you're after, but long before you can articulate explicit necessary and sufficient conditions for the concept.

Similarly, an externalised system can grow organically, before anyone knows what it is going to become.
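As a toy sketch of the distinction in code (the attributes and examples here are made up for illustration, and obviously too crude to really define "dog"):

```python
# Intensional definition: explicit necessary-and-sufficient conditions,
# which you have to be able to articulate up front.
def is_dog_intensional(animal: dict) -> bool:
    return (
        animal.get("furry", False)
        and animal.get("legs") == 4
        and animal.get("friendly_to_humans", False)
    )

# Extensional definition: just a growing set of examples.
# You can start this long before you can state the conditions above.
dogs_extensional = {"Goofy", "Sunny", "Bo", "Beast", "Clifford"}
dogs_extensional.add("Laika")  # grows organically as new cases appear

rex = {"furry": True, "legs": 4, "friendly_to_humans": True}
print(is_dog_intensional(rex))      # True
print("Goofy" in dogs_extensional)  # True
```

The extensional version is the one external systems make cheap: a spreadsheet or database lets you keep appending examples while your explicit definition is still half-formed.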

6. Learning from mistakes

I have a packing list database that I use when I travel. I input some parameters about how long I'll be gone and how warm the location is, and it'll output a checklist of everything I need to bring.

It's got at least 30 items per trip.

One unexpected benefit from this is that whenever I forget something -- sunglasses, plug converters, snacks -- I have a way to ensure I never make that mistake again. I simply add it to my database, and as long as future Jacob uses the database, he'll avoid repeating my error.
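As a rough sketch of the logic (the items and thresholds here are simplified illustrations, not my actual database):

```python
# A parameterised packing list: each rule adds items based on trip
# parameters. Forgetting something on a trip becomes a one-line fix:
# add it here once, and every future checklist includes it.

def packing_list(days, warm):
    items = ["passport", "phone charger", "toothbrush"]  # always packed
    items += ["sunglasses", "sunscreen"] if warm else ["warm jacket", "gloves"]
    if days > 7:
        items.append("laundry bag")
    items.append("socks x" + str(min(days, 7)))
    return items

print(packing_list(days=10, warm=True))
```

The point isn't the code itself; an Airtable formula does the same job. The point is that the list lives outside my head, where it can be patched.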

This is similar to Ray Dalio's Principles. I recall him suggesting that the act of writing down and reifying his guiding wisdom gave him a way to seize mistakes and turn them into a stronger future self.

This is also true for the GitHub repo of the current project I'm working on. Whenever I visit our site and find a bug, I have a habit of immediately filing an issue for it to be solved later. There is a pipeline whereby these real-world nuggets of user experience -- hard-won lessons from interacting with the app "in the field", which you couldn't practically have predicted from first principles -- get converted into a better app. So whenever a new bug is picked up by me or a user, in addition to annoyance, it causes a little flinch of excitement (though the same might not be true for our main developer...). This also relates to the fact that we're dealing in code: any mistake can be fixed in such a way that no future user will ever encounter it.

Key Decision Analysis - a fundamental rationality technique

For some reason, seeing all this concreteness made me more excited about trying this technique, and more likely to actually do so.

Key Decision Analysis - a fundamental rationality technique

I'm curious, could you share more details about what patterns you observed, and which heuristics you actually found yourself using?

Voting Phase of 2018 LW Review (Deadline: Sun 19th Jan)

I voted in category mode, and am some way through fine-tuning in quadratic mode.

What is Life in an Immoral Maze?

It is common for people who quit (based on personal experiences of friends) to have no idea how to actually do real object-level work

I'm quite surprised by this but don't find it entirely implausible.

Concretely, what evidence caused you to believe it? I'm curious about data (anecdotes, studies, experience, ...) rather than models.

What were the biggest discoveries / innovations in AI and ML?

Check the section called "derivations" at http://mediangroup.org/insights: it links to a document attempting to list every conceptual breakthrough in AI of at least a certain significance. There is related discussion of the forecasting implications here: https://ai.metaculus.com/questions/2920/will-the-growth-rate-of-conceptual-ai-insights-remain-linear-in-2019/

[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges

Question 3

It seems like Ozzie is answering at a more abstract level than the one at which the question was asked. There's a difference between "How valuable will it be to answer question X?" (what Ozzie said) and "How outsourceable is question X?" (what Lawrence's question was related to).

I think that outsourceability would be a sub-property of Tractability.

In more detail, some properties I imagine to affect outsourceability, are whether the question:

1) Requires in-depth domain knowledge/experience

2) Requires substantial back-and-forth between question asker and question answerer to get the intention right

3) Relies on hard-to-communicate intuitions

4) Cannot easily be converted into a quantitative distribution

5) Has independent subcomponents which can be answered separately, without relying on each other (related to Lawrence's point about tractability)

[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges

I'll try to paraphrase you (as well as extrapolating a bit) to see if I get what you're saying:

Say you want some research done. The most straightforward way to do so is to just hire a researcher. This "freeform" approach affords a lot of flexibility in how you delegate, evaluate, communicate, reward and aggregate the research. You can build up subtle, shared intuitions with your researchers, and invest a lot of effort in your ability to communicate nuanced and difficult instructions. You can also pick highly independent researchers who are able to make many decisions for themselves in terms of what to research, and how to research it.
By using "amplification" schemes and other mechanisms, you're placing significant restrictions on your ability to do all of those things. Hence you'd better get some great returns to compensate.
But looking through the various ways you might get these benefits, they all seem at best... fine.
Hence the worry is that despite all the bells and whistles, there's actually no magic happening. This is just like hiring a researcher, but a bit worse. This is only "amplification" in a trivial sense.
As a corollary, if your research needs seem to be met by a handful of in-house researchers, this method wouldn't be very helpful to you.

1) Does this capture your views?

2) I'm curious what you think of the sections: "Mitigating capacity bottlenecks" and "A way for intellectual talent to build and demonstrate their skills"?

In particular, I didn't feel like your comment engaged with A) the scalability of the approach, compared to the freeform approach, and B) that it might be used as a "game" for young researchers to build skills and reputation, which seems way harder to do with the freeform approach.
