Raj Thimmiah

Comments

The LessWrong 2018 Book is Available for Pre-order

I think Gumroad allows you to offer more than one format for download.

Are there non-AI projects focused on defeating Moloch globally?

I've actually been wondering about the safety mechanism stuff. If anyone wants to give examples of things actually produced in AI alignment, I'd be interested in hearing about them.

No Really, Why Aren't Rationalists Winning?

Did your friend ever finish that sequence? I'd still be quite interested in seeing it. After reading Chinese Businessmen: Superstition Doesn't Count, I've become very interested in becoming more instrumental.

If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

Agree on this, memory coherence is pretty important. Cramming leads to results sort of like how you can't combine the trig you learned in high school with some physics knowledge: there aren't good connections between the subjects, leaving them relatively siloed.

It requires both effort and actually wanting to learn a thing for the thing to integrate well. We tend to forget easily the things we don't care about (see school knowledge).

If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

On seeing the title of this post again, I'm reminded of an obvious answer: teach people how to decide what to learn for themselves. Sort of like feeding a man for a day vs. teaching him to fish.

I don't think there's a more useful meta thing to learn since that's what you need to figure out everything else for yourself.

If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

Haha, thanks for the rewrite, makes much more sense now.


Completely agree: too easy to cram mindlessly with Anki, I think in large part because of how much work it takes to make cards yourself.

I'm a bit skeptical of the drilling idea because cards taking more than 5 seconds to complete tend to become leeches and aren't the kind of thing you could do long-term, especially with Anki's algorithm. Still worth trying, though; I'd be interested to hear if you or anyone else you know has gotten much benefit from it.
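(As an aside on why those long cards decay: a rough sketch of an SM-2-style scheduler, the family Anki's algorithm descends from, shows how a card that keeps lapsing never builds a long interval and eventually gets flagged as a leech. This is a simplified illustration, not Anki's actual code; the one detail taken from Anki is its default leech threshold of 8 lapses.)

```python
# Rough sketch of an SM-2-style scheduler, to show how repeated lapses
# turn a card into a "leech". Simplified; not Anki's actual implementation.

class Card:
    def __init__(self):
        self.ease = 2.5        # ease factor (Anki starts at 250%)
        self.interval = 1.0    # days until the next review
        self.lapses = 0        # number of times the card was forgotten

    def review(self, remembered: bool) -> bool:
        if remembered:
            self.interval *= self.ease              # interval grows multiplicatively
        else:
            self.lapses += 1
            self.interval = 1.0                     # lapse: interval collapses back
            self.ease = max(1.3, self.ease - 0.2)   # and the card gets "harder"
        return self.is_leech()

    def is_leech(self) -> bool:
        # Anki's default leech threshold is 8 lapses.
        return self.lapses >= 8


# A card you fail about half the time never escapes short intervals and is
# soon flagged as a leech -- which is why slow 5+ second cards hurt long-term.
card = Card()
for remembered in [True, False] * 10:
    if card.review(remembered):
        print(f"leech after {card.lapses} lapses; interval back to {card.interval:.0f} day(s)")
        break
```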

With the thoroughness vs. designer complexity tradeoff, I think all the options with Anki kind of suck (mainly because I don't think they would work for my level of conscientiousness, at least).

If end users make their own cards, they'll give up (or at least most people would, I think; it's not very fun making cards from scratch).

If you design something for end users (possibly with some of the commoncog tacit knowledge stuff), I think it's somewhat beneficial, but you wouldn't get the same coherence boost as making the cards yourself. It's too easy to learn cards without actually integrating them usably. It also seems like a pain to make.

For declarative knowledge, I think the best balance for learning is curating content really well for incremental reading, alongside (very importantly) either coaching* or more material on the meta-skills of knowledge selection, to prevent people from FOMO-memorizing everything. I think with SuperMemo it wouldn't be hard to make a collection of good material for people to go through in a sane, inferential-distance order. Still a fair bit of work for makers, but not hellish.
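(To make "a sane, inferential-distance order" a bit more concrete: if you record which articles assume which others, the ordering is just a topological sort of that prerequisite graph. A hypothetical sketch; the article names and dependencies are made up for illustration.)

```python
# Hypothetical sketch: order reading material so prerequisites come first
# (a topological sort of an invented prerequisite graph).
from graphlib import TopologicalSorter

# article -> set of articles it assumes you've already read (made-up example)
prerequisites = {
    "Bayes' Theorem": set(),
    "Conservation of Expected Evidence": {"Bayes' Theorem"},
    "Mysterious Answers": {"Conservation of Expected Evidence"},
    "Noticing Confusion": {"Mysterious Answers", "Bayes' Theorem"},
}

reading_order = list(TopologicalSorter(prerequisites).static_order())
print(reading_order)
# e.g. ["Bayes' Theorem", "Conservation of Expected Evidence",
#       "Mysterious Answers", "Noticing Confusion"]
```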


I'm very, very, very curious about the tacit knowledge stuff. I still haven't gotten through all of the commoncog articles on tacit knowledge, though I've been going through them for a while, but in terms of instrumental rationality they seem very pragmatic. (I particularly enjoyed his criticism of rationalists in Chinese Businessmen: Superstition Doesn't Count [by which he means, superstition doesn't mess much with instrumentality].) I still have yet to figure out how to put any of it to use.



*While teaching people how to do IR, I've found that direct feedback while they're trying it works well. It took me ages to get any good at IR (5 months to even start after buying SuperMemo, and then another ~3 to be sort of proficient), while I can get someone to my 1-2-month level of proficiency in a single ~2 hour session. It works wonders in areas where you can do lots of trial and error with quick feedback.

If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

Could you rewrite some of the first paragraph? I read it 2-3 times and was still kind of confused.

Funny you linked commoncog; I was about to link that too. Great blog.

If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

Inferential-distance-based knowledge systems would be super cool. There are lots of stats ideas I'd like to engage with, but the ordering is too much of a pain.

The mentor thing is also true, for math in particular I think. Math and physics are the only subjects I'd hesitate to just learn by myself.

If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

Aside from memorizing declarative knowledge, the question of how to acquire tacit knowledge is very interesting.

I don't have any great ideas at the moment (other than adding Hammertime-style practical tests into things), but I think commoncog's blog is very interesting, especially the material on naturalistic decision making. https://commoncog.com/blog/the-tacit-knowledge-series/ (Can't link more specifically, on mobile)

If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

An Anki deck is a bad idea because, as you said: a. formulation, and b. poor coherence (when you're stuffing things other people thought were cool into your brain, they won't connect with the other things in your brain as well as if you'd made the deck yourself).

I think incremental reading with SuperMemo is a decent option. I've taught a few rat-adjacent people SuperMemo, and the ones who have spent time on the Sequences inside it have said it's useful. I'm not sure how to summarize it well, but basically: Anki lets you memorize stuff algorithmically, while incremental reading lets you learn (algorithmically) and then memorize.

I'd be surprised if, after say a year of using IR on the Sequences, you weren't at least a fair bit more instrumental.

(If you want to give it a try, I'll gladly teach you. I don't think there's any more efficient way to process declarative information.)
