Director of Research at PAISRI
A decent intuition might be to think about what exploration looks like in human children. Children under the age of 5 but old enough to move about on their own—so toddlers, not babies or "big kids"—face a lot of dangers in the modern world if they are allowed to run their natural exploration algorithm. Heck, I'm not even sure this is a modern problem: in addition to needing to be protected from things they don't understand, like electrical sockets and moving vehicles, toddlers also have to be protected from more traditional dangers they would otherwise definitely check out, like dangerous plants and animals. Of course, since toddlers grow up into capable adult humans, this is a kind of evidence that they are effective enough explorers (even with protections) to become powerful enough to function in society.
Obviously there are a lot of caveats to taking this idea too seriously since I've ignored issues related to human development, but I think it points in the right direction of something everyday that reflects this result.
Thanks, this is a really useful summary to have, since linking back to Bostrom on info hazards is reasonable but not great if you want people to actually read something and understand information hazards rather than bounce off of something explaining the idea. Kudos!
Couple of notes on the song:
I think of applied rationality pretty narrowly, as the skill of applying reasoning norms that maximize returns (those norms happening to have the standard name "rationality"). Of course there's a lot to that, but I also think this framing is a poor one to train all the skills required to "win". To use a metaphor, as requested, it's like the skill of getting really good at reading a map to find optimal paths between points: your life will be better for it, but it also doesn't teach you everything, like how to figure out where you are on the map now or where you might want to go.
tl;dr: read multiple things concurrently so you read them "slowly" over multiple days, weeks, months
When I was a kid, it took a long time to read a book. How could it not: I didn't know all the words, my attention span was shorter, I was more restless, I got lost and had to reread more often, I got bored more easily, and I simply read fewer words per minute. One of the effects of this is that when I read a book I got to live with it for weeks or months as I worked through it.
I think reading like that has advantages. Living with a book for longer gave the ideas it contained more opportunity to bump up against other things in my life. I had more time to think about what I had read when I wasn't reading. I drank in the book more deeply as I worked to grok it. And for books I read for fun, I got to spend more time enjoying them, living with the characters and the author, by having them spread out over time.
As an adult it's hard to preserve this. I read faster and read more than I did as a kid (I estimate I spend 4 hours a day reading on a typical day (books, blogs, forums, etc.), not including incidental reading in the course of doing other things). Even with my relatively slow reading rate of about 200 wpm, I can polish off ~50k words per day, the length of a short novel.
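The back-of-the-envelope arithmetic here is easy to check directly. A minimal sketch, using the rough estimates from the paragraph above (the figures are approximations, not measurements):

```python
# Rough daily reading volume, given the estimates above.
words_per_minute = 200   # relatively slow reading rate
hours_per_day = 4        # typical time spent reading

# Total words read in a day.
words_per_day = words_per_minute * 60 * hours_per_day
print(words_per_day)  # prints 48000, i.e. ~50k words, the length of a short novel
```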
The trick, I find, is to read slowly by reading multiple things concurrently and reading only a little bit of each every day. For books this is easy: I can just limit myself to a single chapter per day. As long as I have 4 or 5 books I'm working on at once, I can spread out the reading of each to cover about a month. Add in other things like blogs and I can spread things out more.
I think this has additional benefits over just getting to spend more time with the ideas. It lets the ideas in each book come up against each other in ways they might otherwise not. I sometimes notice patterns that I might otherwise not have because things are made simultaneously salient that otherwise would not be. And as a result I think I understand what I read better because I get the chance not just to let it sink in over days but also because I get to let it sink in with other stuff that makes my memory of it richer and more connected.
So my advice, if you're willing to try it, is to read multiple books, blogs, etc. concurrently, only reading a bit of each one each day, and let your reading span weeks and months so you can soak in what you read more deeply rather than letting it burn bright and fast through your mind, to be forgotten like a used-up candle.
Welcome to LessWrong!
Given the content of your post, you might find these posts interesting:
A few months ago I found a copy of Staying OK, the sequel to I'm OK—You're OK (the book that probably did the most to popularize transactional analysis), on the street near my home in Berkeley. Since I had previously read Games People Play and hadn't thought about transactional analysis much since, I scooped it up. I've just gotten around to reading it.
My recollection is that Games People Play is the better book (based on what I've read of Staying OK so far). Also, transactional analysis is kind of in the water in ways that are hard to notice, so you are probably already somewhat familiar with some of its ideas, though probably not explicitly in a way you could use to build new models (for example, as far as I can tell, notions of strokes and life scripts were popularized by, if not fully originated within, transactional analysis). So if you aren't familiar with transactional analysis, I recommend learning a bit about it. Although it's a bit dated and we arguably have better models now, it's still pretty useful for noticing patterns in the ways people interact with others and themselves. It's sort of like Metaphors We Live By: the most interesting thing about that book is that it points out the metaphors and gets you to recognize their presence in speech, regardless of whether the general theory is maximally good.
One thing that struck me as I read Staying OK is its discussion of the trackback technique. I can't find anything detailed about it online beyond a very brief summary. It's essentially a multi-step process for dealing with conflicts in internal dialogue, "conflict" here being a technical term referring to crossed communication in the transactional analysis model of the psyche. Or at least that's how it's presented. Looking at it a little closer and reading through examples in the book that are not available online, it's really just poorly explained memory reconsolidation. To the extent it works as a method in transactional analysis therapy, it seems to work because it taps into the same mechanisms described in Unlocking the Emotional Brain.
I think this is interesting both because it shows how we've made progress and because it shows that transactional analysis (along with a lot of other approaches) was also getting at stuff that works, just less effectively, because it had weaker evidence to build on that was more confounded with other possible mechanisms. To me this counts as evidence that building theory on phenomenological evidence can work and is better than nothing, but will be supplanted by work that manages to tie in "objective" evidence.
First, thanks for posting about this even though it failed. Success is built out of failure, and it's helpful to see it so that it's normalized.
Second, I think part of the problem is that there still aren't enough constraints on learning. As others have noticed, this approach mostly seems to weaken the optimization pressure so that the system is slightly less likely to do something we don't want, but it doesn't actively turn the system into one that does the things we do want and avoids those we don't.
Third and finally, what this most reminds me of is impact measures. Not in the specific methodology, but in the spirit of the approach. That might be an interesting approach for you to consider given that you were motivated to look for and develop this approach.
As Stuart previously recognized with the anchoring bias, it's probably worth keeping in mind that any bias is only a "bias" against some normative backdrop. Without some standard for how reasoning was supposed to turn out, there are no biases, only the way things happened to work.
Thus things look confusing around confirmation bias, because it only becomes a bias when it produces reasoning whose results fail to predict reality after the fact. Otherwise it's just correct reasoning based on priors.
Yeah, I think #1 sounds right to me, and there is nothing strange about it.