Comments

I got into the idea of deliberately developmental organizations (DDOs). Are DDOs a good idea? I still think probably yes, but they're easy to get wrong. What's important is that I spent a lot of time thinking about, and then experimenting with, ways to affect an organization's culture, and thereby came to understand how organizations work.

What have you found in your experiments, in terms of what helps or hurts in developing DDO culture?

If it's all about prediction, why do poor teams still have fans?

2 years later, I'd still be interested in your model if you're willing to share it.

I can't shake the feeling that throughout the book Sowell tries to make a case for a more right-wing/free-market point of view without admitting it, albeit in the most eloquent manner.

Did you find any of his political claims to be dubious?

I really like the idea of doing a pre-mortem here.

Suppose you and I have two different models, and my model is less wrong than yours. Suppose my model assigns a 40% probability to event X, yours assigns 60%, we disagree and bet, and event X happens. If I had an oracle for the true distribution of X, my write-up would just say "this falls into the 40% of cases, as predicted by my model", which doesn't seem very useful. In the absence of an oracle, I would end up writing praise for, and updating towards, your more wrong model, which is obviously not what we want.


This approach might lead to over-updating on single bets. You'd need to record your bets, and the odds on those bets, over time to see how well calibrated you were. If your calibration over time is poor, then you should update your model. Perhaps we can weaken the suggestion in the post to writing a post-mortem on why you may be wrong. Then, when you reflect over multiple bets, you could try to tease out common patterns and deficits in your model-making.
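
To make that concrete, here's a minimal Python sketch (not anyone's actual betting record; the "true" 40% frequency and the two models' numbers are assumed purely for simulation) showing why a single resolved bet tells you little, while a proper scoring rule averaged over many recorded bets does separate the models:

```python
import math
import random

def log_score(p, outcome):
    """Log score for assigning probability p to a binary event."""
    return math.log(p if outcome else 1 - p)

random.seed(0)
true_p = 0.40                   # assumed true frequency of event X (simulation only)
model_a, model_b = 0.40, 0.60   # the two models' stated probabilities

# A single resolved bet: one data point, consistent with either model.
print("single outcome:", random.random() < true_p)

# Many recorded bets: the better-calibrated model gets the higher average score.
n = 1000
outcomes = [random.random() < true_p for _ in range(n)]
avg_a = sum(log_score(model_a, o) for o in outcomes) / n
avg_b = sum(log_score(model_b, o) for o in outcomes) / n
print(f"avg log score, 40% model: {avg_a:.3f}")
print(f"avg log score, 60% model: {avg_b:.3f}")
```

Over enough bets the 40% model reliably earns the higher (less negative) average log score, even though it loses a fair share of individual bets, which is the sense in which calibration only shows up in aggregate.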

Interesting point about Ultralearning; I'll need to look at it in more detail at some point. Aside from spaced repetition/incremental reading, that looks like the best method of learning to me.

His book touches on spaced repetition (he's a big proponent of the testing effect) and other things. It's really about how to put together effective learning projects, from the research phase through execution.

Regarding SuperMemo, yes, I use the software and incremental reading extensively (if you have an interest in learning it, I would happily teach you).

I am interested in IR, but I don't have a Windows machine (macOS/Linux) and don't think the overhead of maintaining a VM would be worth it. Do you IR everything you read online, or do you reserve it for materials in your field? I mostly take notes in Roam, and add to Anki the particularly salient things I think I'll want to remember.

I also subscribe heavily to Woz's ideas. I like them because they tend to be much closer to global maxima (e.g. free-running sleep), since his views aren't constrained by societal/academic norms.

Noted. The SuperMemo wiki has always seemed quite unwieldy to me, but I'll take a closer look at what he has to say on topics outside of spaced repetition.

1. You know what you don't know, so if you need some prerequisite information you can find it for yourself (in large part thanks to the internet).
2. Teaching is centered on the idea that a teacher knows better than you do what you should know. In many cases, I don't think this holds. If I want to learn how to make thing x, getting a general education in the field x falls into (field y) doesn't make sense; learning a bunch of useless things in field y is a waste of my time. If I'm deciding what to learn myself, I can make sure not only that I'm learning efficiently but that I'm choosing what to learn effectively.

This is the approach advocated by Scott Young in the book Ultralearning. You build a learning project around the thing you actually want to learn, learn by doing, and fill in obvious gaps that are 'rate-limiting' to the learning 'reaction' as you go along. Working directly on the end result you actually want also sidesteps the problem of transfer: students are typically unable to apply the abstract skills they've been taught in the classroom to real-world situations.


I see you link to SuperMemo and ask about it a lot. Do you use that software, and do you generally subscribe to Wozniak's ideas?

I think that's a bit of a shame, because I personally have found LW-style thinking useful for programming. My debugging process has especially benefited from applying some combination of informal probabilistic reasoning and "making beliefs pay rent", which has enabled me to make more principled decisions about which hypotheses to falsify first when finding root causes.
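
For concreteness, here's a hypothetical Python sketch of one way "deciding which hypotheses to falsify first" could be made principled: rank candidate root causes by prior probability per unit cost of testing. The hypotheses, priors, and costs below are entirely made up for illustration; this isn't a claim about the actual workflow described above.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    prior: float      # rough subjective probability this is the root cause
    test_cost: float  # rough minutes needed to confirm or falsify it

# Made-up candidate causes for a hypothetical bug.
hypotheses = [
    Hypothesis("stale cache entry", prior=0.30, test_cost=2),
    Hypothesis("off-by-one in pagination", prior=0.40, test_cost=15),
    Hypothesis("race condition in worker pool", prior=0.20, test_cost=60),
    Hypothesis("corrupted config on one host", prior=0.10, test_cost=5),
]

# Check the hypotheses with the best probability-per-minute payoff first.
for h in sorted(hypotheses, key=lambda h: h.prior / h.test_cost, reverse=True):
    print(f"{h.name:32s} prior={h.prior:.2f} cost={h.test_cost:4.0f}min "
          f"priority={h.prior / h.test_cost:.3f}")
```

The same idea works informally in your head: a cheap-to-check cause with a modest prior often beats a likelier cause that takes an hour to rule out.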

As someone who landed on your comment specifically by searching for what LW has said about software engineering in particular, I'd love to read more about your methods, experiences, and thoughts on the subject. Have you written about this anywhere?
