IrenicTruth

Comments

Competence/Confidence

I had a similar issue. I could not do the exercise because I could not figure out how to evaluate confidence and competence separately; I always end up on the x = y line. Reading this thread did not help. "Anticipated okayness of failure" doesn't change much with time for the same task, so that would be a vertical line. "Confidence" = "self-rated ability to improve" is an interesting interpretation (working on "confidence" would then mean working on learning skills). Still, it intuitively feels off from what the graphs say, though I haven't been able to put the disconnect into words. Thinking about the improv/parachute graph, maybe "confidence" is "willingness to attempt a task despite being incompetent." I'm giving up for now.

Book summary: Unlocking the Emotional Brain

I found a review on Amazon (quoted at the bottom, since I cannot link to it) that says Ecker is injecting significant personal opinion and slanting his report of the science. I don't know if this is true, but the gushing praise from readers and psychology's history of jumping on things rather than evaluating evidence make it seem more likely than not. For me, this means that reading this book will involve getting familiar with the associated papers.

The Review

by "scholar"

Previously I posted a very positive review of this book. On further reflection and study of the relevant research papers, I have a very different view. The science of memory reconsolidation is complex and subtle. Its application to clinical work with real patients remains predominantly hypothetical. Ecker creates the impression that the conditions for memory reconsolidation and updating are now known and clear. They are not. His claims for their application to clinical practice (in my view) go rather beyond the evidence. Moreover, when I read his clinical examples later in the book, I completely fail to see how they relate specifically to the science he earlier quotes - they just seem to be examples of his therapeutic approach called Coherence Therapy (which predates his interest in memory reconsolidation) - and although these are certainly interesting, I cannot grasp how they illustrate the principles of memory reconsolidation. The positive outcome is that this book, which I eventually found confusing and infuriating, prompted me to study further this fascinating field of enquiry. There are undoubtedly potential clinical applications, but I feel Ecker's enthusiasm is a little premature.

An Emergency Fund for Effective Altruists (second version)

> If recoupments occur sparingly, as I'd expect, where should the remaining funds go?

Keep them for "times of national emergency" etc. to hedge against correlated risk.

> How big is the risk that the fund will be used in illicit ways, such as tax evasion, despite the fact that donors cannot claim more than they spent?

Modern society strongly incentivizes misusing anything that touches money, so without further evidence, I'd say that the risk is very high (near certainty). If we haven't found a way to misuse it, it is more likely that we are not clever enough than that the way does not exist.

First thought: I put in $100 to an 80% fund. I wait a year to claim the tax break on the $80 donation, netting, say, 30% × $80 = $24 in reduced taxes. Then I take out $95. I've made $19 on the trade. Of course, a government would see this right away and not allow tax breaks for such contributions. But this sort of thing seems ripe for problems.

Another: I put in $100, and $80 goes to a "charity" that gives me a 10% kickback ($8). Then I take out $95, and I've made $3.
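A quick sketch of the arithmetic in the two scenarios above (the 80% pass-through, the 30% marginal tax rate, the 10% kickback, and the $95 recoupment are just the illustrative figures from the examples, not features of the actual proposal):

```python
def net_gain(contribution, recoupment, tax_refund=0.0, kickback=0.0):
    """Net cash position: contribute, collect any side benefits, then recoup."""
    return -contribution + tax_refund + kickback + recoupment

# Scenario 1: tax break on the $80 that was donated, at a 30% marginal rate.
print(net_gain(100, 95, tax_refund=0.30 * 80))  # 19.0
# Scenario 2: a complicit "charity" returns 10% of the $80 it receives.
print(net_gain(100, 95, kickback=0.10 * 80))    # 3.0
```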

You might be able to fix this by requiring that contributors maintain almost all of their assets as property of the fund. Then if I make a withdrawal for "an emergency" I can't keep any profit or buy anything that doesn't go right back to the fund. But that sounds a lot like the "everything in common" schemes that have failed so often in the past. So, we'd need to modify it to make it viable.

On Humor

What experimental tests has clash theory survived?

Assigning probabilities to metaphysical ideas

Take all the metaphysical models of the universe that any human ever considers.

This N, the number of such models, is huge. Approximate it by the number of strings that can be generated in a certain formal language over the lifetime of the human race. We're probably talking about billions even if the human race ceases to exist tomorrow: imagine that 1/7 of people have had a novel metaphysical idea, and you get 1B from just the people on Earth today. If you think that's a high estimate, remember that people get into weird states of consciousness (through fever, drugs, exertion, meditation, and other triggers), so random strings in that language are likely.

You may want to define "metaphysical idea" (and thus that language) better. Some examples of what I mean by "metaphysical idea:"

Tasks apps w/ time estimates to gauge how much you'll overshoot?

If you can model everything as tasks, FogBugz has a feature, Evidence-Based Scheduling (https://fogbugz.com/evidence-based-scheduling/), that I used to help myself complete grad school; it gives you a probability distribution over finishing times. It was incredibly useful! You might want to start the free trial to see if they still have the "if you have too few users, you can use it for free until you get big enough" deal they used to have.

As of (X years ago) it was missing appointment scheduling.
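For intuition about what that FogBugz feature does, here is a toy Monte Carlo version of evidence-based scheduling (my own reconstruction of the general idea, not FogBugz's actual implementation): scale each remaining estimate by a randomly sampled historical actual/estimate ratio and read percentiles off the resulting distribution of totals.

```python
import random

def simulate_totals(estimates_hours, overrun_ratios, n_sims=10_000):
    """For each simulation, scale every task's estimate by a randomly sampled
    historical actual/estimate ratio and sum, giving a distribution of totals."""
    return sorted(
        sum(est * random.choice(overrun_ratios) for est in estimates_hours)
        for _ in range(n_sims)
    )

# Hypothetical history: past tasks took 0.8x to 2.5x their estimates.
ratios = [0.8, 1.0, 1.1, 1.3, 1.5, 2.0, 2.5]
estimates = [4, 8, 2, 16]  # hours of remaining work

totals = simulate_totals(estimates, ratios)
print("50% chance of finishing within", totals[len(totals) // 2], "hours")
print("90% chance of finishing within", totals[int(len(totals) * 0.9)], "hours")
```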

My most recent solution for individual scheduling is Skedpal. It does not have the overshoot estimation you want. Instead, it flags tasks that are risky because they have little flexibility for rescheduling. That serves the purpose of deciding whether you can or can't schedule something: add it to your list (or your calendar), hit "update schedule," and if anything lands on your hotlist or gets marked red/yellow for flexibility, say no. You can account for overshoot by giving yourself lots of slack (% slack is a parameter to Skedpal's scheduling algorithm); slack creates a constraint that binds you during planning before the real constraints bind you. For sufficiently important items, you can turn off the slack to schedule that one thing (and know that you're going to pay hell doing it).
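To illustrate how slack binds you during planning before the real constraints do, here is a toy capacity check (my own sketch, not Skedpal's actual algorithm): reserving a slack fraction shrinks the hours you allow yourself to schedule, so a workload that technically fits can still get refused.

```python
def fits_with_slack(available_hours, task_hours, slack_fraction):
    """Only (1 - slack_fraction) of the calendar counts as schedulable,
    leaving room for tasks that overrun their estimates."""
    return sum(task_hours) <= available_hours * (1 - slack_fraction)

tasks = [10, 8, 6, 6]  # 30 hours of estimated work in a 40-hour week

print(fits_with_slack(40, tasks, slack_fraction=0.0))   # True: fits with no slack
print(fits_with_slack(40, tasks, slack_fraction=0.30))  # False: refused once slack is reserved
```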

To schedule my team, I do a manual poor man's version of FogBugz. Someday I'll turn it into a web app. (I've had this in my head for 2+ years, so don't hold your breath.)

How An Algorithm Feels From Inside

The earliest citation in Wikipedia is from 1883, and it is a question and answer: "If a tree were to fall on an island where there were no human beings would there be any sound?" [The asker] then went on to answer the query with, "No. Sound is the sensation excited in the ear when the air or other medium is set in motion."

So, if this is truly the origin, they knew the nature of sound when the question was first asked.

Covid 6/10: Somebody Else’s Problem

Re: dominant assurance contracts/crowdfunding

The article makes the bad assumption that $F$, the distribution of individual values of the public good, is common knowledge. A good entrepreneur will do market research to try to determine $F$, but better approximations cost more. Entrepreneurs will also be biased toward thinking their ideas are good. So, it is likely that many entrepreneurs will have bad models. Most individuals will also not know $F$. So, there is another mode of profit for the small fraction of individuals who have decent approximations of $F$: buy contracts likely to fail.

I'm not sure how this affects the whole scheme, but I'm pretty sure it limits the failure payoffs to be significantly smaller than they could be if $F$ were common knowledge.

The assumption that $v_i$ (the value of the good to the $i$-th individual) is known to that individual is also false. The individual's estimate has less variance than other people's, but they would need to invest resources to know $v_i$ precisely. I'm not sure what this error does to the contracts. I doubt it has as much effect as the common-knowledge assumption.
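To make the "buy contracts likely to fail" mode of profit concrete, here is a toy expected-value calculation. It assumes the standard dominant-assurance setup (if the contract succeeds you pay your pledge and get the good, worth $v_i$ to you; if it fails you pay nothing and collect a failure bonus), with hypothetical numbers; the failure probability is what a speculator with a good approximation of $F$ would estimate.

```python
def expected_profit(p_fail, bonus, pledge, v_i):
    """Expected profit from pledging: collect the bonus on failure,
    pay the pledge and receive the good (worth v_i) on success."""
    return p_fail * bonus + (1 - p_fail) * (v_i - pledge)

# A speculator who doesn't value the good (v_i = 0) but is confident the
# contract will fail:
print(expected_profit(p_fail=0.95, bonus=5, pledge=100, v_i=0))  # ≈ -0.25 (a loss)
print(expected_profit(p_fail=0.99, bonus=5, pledge=100, v_i=0))  # ≈  3.95 (a profit)
```

The better the speculator's private estimate of the failure probability, the more of this edge they can capture, which is why I think the failure bonuses must stay small when $F$ is not common knowledge.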

What topics are on Dath Ilan's civics exam?

> knowing about the MCU, no matter how cool, doesn't pay rent

Is not "enables socialization" a form of rent?

In favor of this particular point, I know about the MCU despite disliking superhero movies and comics (except Watchmen) precisely because it is helpful in my social circles.

Regarding @jaspax's main point, it is not obvious that formal education is necessary to generate a shared mythopoetic structure. OTOH I can't think of an example of a long-lasting one that does not have a group actively involved in educating people about it. So, it is not obvious that it is a poor candidate for formal education either.
