All Posts


Thursday, December 5th 2019

No posts for December 5th 2019
Shortform [Beta]
BrienneYudkowsky · 8 points · 2mo
Thread on The Abolition of Man by C. S. Lewis
Raemon · 8 points · 2mo
After this week's stereotypically sad experience with the DMV (spent 3 hours waiting in lines, filling out forms, finding out I didn't bring the right documentation, going to get the right documentation, taking a test, finding out somewhere earlier in the process a computer glitched and I needed to go back and start over, waiting more, finally getting to the end only to learn I was also missing another piece of identification, which rendered the whole process moot), and having just looked over a lot of 2018 posts investigating coordination failure, I find myself wondering if it's achievable to solve one particular way in which bureaucracy is terrible: the part where each node/person in the system only knows a small number of things, so you have to spend a lot of time rehashing things, and meanwhile can't figure out if your goal is actually achievable.

(While attempting to solve this problem, it's important to remember that at least some of the inconvenience of bureaucracy may be an active ingredient rather than inefficiency. But at least in this case it didn't seem so: driver's licenses aren't a conserved resource that the DMV wants to avoid handing out. If I had learned early on that I couldn't get my license last Monday, it would have not only saved me time, but saved DMV employees hassle.)

I think most of the time there's just no incentive to really fix this sort of thing (while you might have saved DMV employees hassle, you probably wouldn't save them time, since they still just work the same 8-hour shift regardless, and if you're the manager of a DMV you probably don't care too much about your employees having slightly nicer days). But, I dunno man, really!? Does it seem like at least Hot New Startups could be sold on software that, I dunno, tracks all the requirements of a bureaucratic process and tries to compile "will this work?" at sta…
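The "compile 'will this work?' at the start" idea can be sketched as a dependency check over a requirements graph: report every missing prerequisite up front instead of discovering them one visit at a time. Everything below is hypothetical — the DMV-style requirement names and the `missing` helper are invented for illustration; a real system would pull requirements from the agency's actual rules:

```python
# Toy requirements graph: each goal maps to the prerequisites it needs.
# (Hypothetical items, for illustration only.)
REQUIREMENTS = {
    "license": ["proof_of_identity", "proof_of_residency", "passed_written_test"],
    "proof_of_identity": [],
    "proof_of_residency": [],
    "passed_written_test": ["proof_of_identity"],
}

def missing(goal, have, reqs):
    """Return every requirement for `goal` not already in `have`,
    walking the dependency graph recursively."""
    gaps = []
    for dep in reqs.get(goal, []):
        if dep not in have:
            gaps.append(dep)
        gaps.extend(missing(dep, have, reqs))
    return gaps

# One up-front check answers "will this visit work?" before any lines.
print(missing("license", {"proof_of_identity"}, REQUIREMENTS))
# → ['proof_of_residency', 'passed_written_test']
```

The point of the sketch is only that the check is cheap once the requirements are machine-readable; the hard part of the post's problem is getting bureaucracies to publish and maintain that graph.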
AABoyles · 4 points · 2mo
Attention Conservation Warning: I envision a model which would demonstrate something obvious, and decide the world probably wouldn't benefit from its existence.

The standard publication bias is that we must be 95% certain a described phenomenon exists before a result is publishable (at which point it becomes sufficiently "confirmed" to treat the phenomenon as a factual claim). But the statistical confidence of a phenomenon conveys interesting and useful information regardless of what that confidence is. Consider the space of all possible relationships: most of these are going to be absurd (e.g. the relationship between the number of minted pennies and the number of atoms in the moons of Saturn), and exhibit no correlation. Some will exhibit weak correlations (in the range of p = 0.5). Those are still useful evidence that a pathway to a common cause exists! The universal prior on random relationships should be roughly zero, because most relationships will be absurd.

What would science look like if it could make efficient use of the information disclosed by presently unpublishable results? I think I can generate a sort of agent-based model to imagine this. Here's the broad outline:

1. Create a random DAG representing some complex related phenomena.
2. Create an agent which holds beliefs about the relationships between nodes in the graph, and updates its beliefs when it discovers a correlation with p > 0.95.
3. Create a second agent with the same belief structure, but which updates on every experiment regardless of the correlation.
4. On each iteration, have each agent select two nodes in the graph, measure their correlation, and update their beliefs. Then have them compute the DAG corresponding to their current belief matrix, and measure the difference between the DAG they output and the original DAG created in step 1.

I believe that both agents will converge on the correct DAG, but the un-publication-biased agent will converge much more rapidly. There a…
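The four steps above can be sketched as a small simulation. Everything here is an assumption layered on the outline — the graph size, the noise level, the use of a raw correlation threshold as a stand-in for the p > 0.95 cutoff, and the absolute-error metric are all illustrative choices, not details from the post:

```python
import random

random.seed(0)

N = 8              # nodes in the toy "world" graph
TRIALS = 2000      # experiments each agent runs
NOISE = 0.3        # sampling noise added to each measured correlation
THRESHOLD = 0.5    # crude stand-in for the p > 0.95 publication cutoff

# Step 1: a random graph, encoded as true pairwise correlation strengths
# (0.8 where an edge exists, 0.0 otherwise).
truth = {}
for i in range(N):
    for j in range(i + 1, N):
        truth[(i, j)] = 0.8 if random.random() < 0.3 else 0.0

# Step 2's agent: keeps only results that clear the significance cutoff.
biased = {pair: 0.0 for pair in truth}
# Step 3's agent: averages every experiment, significant or not.
unbiased_sum = {pair: 0.0 for pair in truth}
unbiased_n = {pair: 0 for pair in truth}

def error(beliefs):
    """Total absolute distance between a belief matrix and the true graph."""
    return sum(abs(beliefs[p] - truth[p]) for p in truth)

# Step 4: repeatedly pick a pair, run a noisy experiment, update both agents.
for _ in range(TRIALS):
    pair = random.choice(list(truth))
    observed = truth[pair] + random.gauss(0, NOISE)
    unbiased_n[pair] += 1
    unbiased_sum[pair] += observed
    if abs(observed) > THRESHOLD:          # only "publishable" results count
        biased[pair] = observed

unbiased = {p: unbiased_sum[p] / max(unbiased_n[p], 1) for p in truth}
print("publication-biased agent error:", round(error(biased), 2))
print("unbiased agent error:          ", round(error(unbiased), 2))
```

In this toy version the threshold agent never corrects the null pairs where a single noisy experiment happened to clear the cutoff, so its error stays high while the averaging agent's error shrinks with every trial — which is the intuition the outline predicts, though this sketch skips the DAG-recovery step in favor of comparing belief matrices directly.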
Chris_Leong · 2 points · 2mo
EDT agents handle Newcomb's problem as follows: they observe that agents who encounter the problem and one-box do better on average than those who encounter the problem and two-box, so they one-box. That's the high-level description, but let's break it down further. Unlike CDT, EDT doesn't worry about the fact that there may be a correlation between your decision and hidden state. It assumes that if the visible state before you made your decision is the same, then the counterfactuals generated by considering your possible decisions are comparable. In other words, any differences in hidden state, such as you being a different agent or money having been placed in the box, are attributed to your decision (see my previous discussion here).