This post is a decent first approximation. But it is important to remember that even successful communication is almost always occurring on more than just one of these levels at once.
Personally I find it useful to think of communication as having spontaneous layers of information which may include things like asserting social context, acquiring knowledge, reinforcing beliefs, practicing skills, indicating and detecting levels of sexual interest, and even play. And by spontaneous layers, I mean that we each contribute to the scope of a conversation, and th...
In retrospect, spelling words out loud, something I tend to do fairly often, is something I've gotten much better at over the past ten years. I suspect that I've hijacked my typing skill for the task, as I error-correct my verbal spelling in exactly the same way. I devote little or no conscious thought or sense mode to the spelling process, except in terms of feedback.
As for my language skills, they are at least adequate. However, I have devoted special attention to improving them, so I can't say that my experience is free of bias on this point.
When you're trying to communicate facts, opinions, and concepts - most especially concepts - it is a useful investment of effort to try to categorize both your audience's crystallography and your own.
This is something of an oversimplification. Categories are one possible first step, but eventually you will need more nuance than that. I suggest forming estimates by treating the communication itself as also being a sequence of experiments, and being very strict about not ruling things out, especially if you have not managed to beat down your typical mind fa...
Arguably, given how seminal the Sequences are treated as being, why should the "newbies" be the only ones (re)reading them?
The number of assertions needed is now so large that it may be difficult for a human to acquire that much knowledge.
Especially given that these are likely loose lower bounds and don't account for the problems of running on spotty evolutionary hardware, I suspect that the discrepancy is even greater than it first appears.
What I find intriguing about this result is that it is essentially one of the few I've seen that offers a limit description of consciousness: on one hand you have a rating of the complexity of your "conscious" cognitive system an...
This should not be underestimated as an issue. Status, as we use the term here and at Overcoming Bias, tends to be simplified into something not unlike a monetary model.
It is possible to try to treat things like status reductively, but in the current discussion it will hopefully suffice to characterize it with more nuance than "social wealth".
If you only expect to find one empirically correct cluster of contrarian beliefs, then you will most likely find only one, regardless of what exists.
Treating this as a clustering problem, we can extract common clusters of beliefs from the general contrarian collection and determine their degrees of empirical correctness. Presupposing a particular structure will bias the discoveries you can make.
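To make the clustering framing concrete, here is a minimal sketch under my own assumptions: people are represented as binary belief vectors, and a greedy pass groups them by Hamming distance rather than presupposing that exactly one cluster exists. The data, threshold, and function names are all illustrative, not from the post.

```python
# Hypothetical sketch: cluster binary belief vectors without presupposing
# a single "contrarian cluster". All data and thresholds are illustrative.

def hamming(a, b):
    """Count positions where two belief vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def cluster_beliefs(vectors, max_dist=1):
    """Greedy clustering: join the first cluster whose representative is
    within max_dist; otherwise found a new cluster."""
    clusters = []  # list of (representative, members) pairs
    for v in vectors:
        for rep, members in clusters:
            if hamming(rep, v) <= max_dist:
                members.append(v)
                break
        else:
            clusters.append((v, [v]))
    return clusters

# Two distinct contrarian belief profiles plus a near-duplicate of the first:
beliefs = [
    (1, 1, 0, 0),
    (1, 0, 0, 0),
    (0, 0, 1, 1),
]
print(len(cluster_beliefs(beliefs)))  # finds 2 clusters, not a presupposed 1
```

The point of letting the structure emerge from the data is exactly that the answer "2" here could never be discovered by a procedure that only looks for one cluster.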
there's really no reason those numbers should be too much higher than they are for a random inhabitant of the city
Actually, simply being in the local social network of the victim should increase the probability of involvement by a significant amount. The size of the increase would depend on population, murder rates, and so on, and likely also on estimates from criminological models for the crime in question.
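A back-of-the-envelope version of this update, with every number an assumption of mine (city size, network size, and the fraction of murders committed by someone in the victim's social circle):

```python
# Illustrative Bayesian arithmetic; all parameters are assumed for the
# sake of the example, not taken from any criminology source.

city_population = 1_000_000
network_size = 150          # victim's local social network
p_killer_in_network = 0.5   # assumed fraction of murders by someone known

# Probability a given person is the killer, by group membership:
p_member = p_killer_in_network / network_size
p_stranger = (1 - p_killer_in_network) / (city_population - network_size)

print(f"network member:   {p_member:.2e}")
print(f"random stranger:  {p_stranger:.2e}")
print(f"likelihood ratio: {p_member / p_stranger:.0f}x")
```

Even with these rough numbers, merely being in the victim's network multiplies the prior by several thousand, which is why it is wrong to treat network members like random inhabitants.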
Proof of how dangerous this sort of list can be.
I entirely forgot about:
After all, how can you advance even pure epistemic rationality without constructing your own experiments on the world?
Or more succinctly and broadly, learn to:
pay attention
correct bias
anticipate bias
estimate well
With a single specific enumeration of the means to accomplish these competencies, you risk ignoring other possible curricula, and you encourage the same blind spots across the entire community of aspiring rationalists so educated.
This parallels some of the work I'm doing with fun-theoretic utility, at least in terms of using information theory. One big concern is what measure of complexity to use, as you certainly don't want to use a classical information measure - otherwise Kolmogorov random outcomes will be preferred to all others.
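To illustrate the worry about classical measures, here is a small sketch of my own (not from the comment) using compressed length as a crude stand-in for descriptive complexity: incompressible noise maximizes it, so a utility built directly on such a measure would rank random outcomes above structured ones.

```python
# Illustrative only: compressed length as a rough proxy for complexity.
# A measure that rewards raw information content ranks noise highest.

import os
import zlib

structured = b"abcd" * 1000   # a highly patterned outcome
noise = os.urandom(4000)      # an (effectively) Kolmogorov-random outcome

c_structured = len(zlib.compress(structured))
c_noise = len(zlib.compress(noise))

print(c_structured < c_noise)  # the noise scores as "more complex"
```

This is exactly the failure mode: whatever measure is chosen has to reward the interesting middle ground between triviality and randomness, which classical information content does not.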
Lies, truth, and radical honesty are all concepts that just get in the way of understanding what is going on here.
When you are communicating with someone, several of the many constantly changing layers of that communication (in addition to status signaling, empathy broadcasting, and performatives) are the transfer of information from you to that someone. The effectiveness of this communication and the accuracy of the information as received are things we can discuss fairly easily in terms of both instrumental (effectiveness) and epistemic (accuracy) rationality.
To cl...
My post does describe a distinct model based on a Many Worlds interpretation where the probabilities are computed differently based on whether entanglement occurs or not - i.e. whether the universes influence each other. It is distinct from the typical model of decoherence.
As for photosynthesis, it ought to behave in much the same way: a network of states propagating through entangled universes, with the interactions of the states in those branches assigning the highest probabilities to the branches with the lowest energy barriers.
Of...
It's as though no one here has ever heard of the bystander effect. The deadline is January 15th. Setting up a wiki page and saying "anyone's free to edit" is equivalent to killing this thing.
Also, this is a philosophy, psychology, and technology journal, which means that despite the list of references for Singularity research, you will also need to link this with the philosophical and/or public-policy issues the journal wants you to address (take a look at the two guest editors).
Another worry of mine is that in all the back issues of this journal I looked over, the papers were almost always monographs (and, barring that, had two authors). I suspect that having many authors might kill this paper's chances.
First of all, consider that a computer is incomplete without a program, so let's just think of a programmed computer; whether the program is in hardware or software doesn't matter for our purposes.
This gives us a system that goes from some known start state to some outcome state through a series of intermediate steps. If each of these steps is deterministic, then the entire system reaches the same outcome in all universes where it had the same starting point.
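A minimal sketch of that claim, with an arbitrary step function of my own choosing standing in for the program's transition rule:

```python
# Minimal sketch: a deterministic step function reaches the same outcome
# from the same start state every time it is run. The particular rule is
# arbitrary and only for illustration.

def step(state):
    """One deterministic intermediate step."""
    return (state * 3 + 1) % 97

def run(start, n_steps=10):
    """Drive the system from a known start state to an outcome state."""
    s = start
    for _ in range(n_steps):
        s = step(s)
    return s

print(run(5) == run(5))  # same start state, same outcome, in every run
```

Since every intermediate state is a pure function of the previous one, any two "universes" sharing the start state share the whole trajectory.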
If those steps were stochastic, perhaps because there is a chance of memory corruption in our computer or because of a r...
I meant that setting the limit to no preference for a given C doesn't equate to a globally continuous function, but that when you adjust your preference function to approximate the discontinuous function by a continuous one, the result will contain at least one no-preference point between any two A < B.
Now perhaps there is a result which says that if you take the limit as you set all discontinuous C to no preference, that the resulting function is complete, consistent, transitive, and continuous, but I wouldn't take that to be automatic.
Consider, f...
We are talking about the same thing here, just at different levels of generality. The function you describe is the same as the one I'm describing, except on a much narrower domain (only a single binary lottery between A and B); then you project the range down to just a question about C.
In the specific function you are talking about, you must hold that this is true for all A, B, and C to get continuity. In the function I describe, A, B, and C are generalized out, so the continuity property is equivalent to the continuity of the function.
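For concreteness, here is my phrasing (not a quote from either of us) of the standard continuity property and the no-preference point it yields:

```latex
% Standard vNM-style continuity, my own phrasing:
% if $A \prec C \prec B$, then there exists a mixing probability
% $p \in (0,1)$ with a no-preference (indifference) point
\[
  p\,A + (1-p)\,B \;\sim\; C .
\]
% The claim above is that requiring this for all $A$, $B$, $C$ in the
% narrow one-lottery function is equivalent to continuity of the
% generalized function over the whole mixture space.
```

That is the sense in which the narrow-domain version, quantified over all A, B, and C, and the continuity of the generalized function coincide.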
I was talking about utility functions, but I can see your point about generalizing the result to the mapping from arbitrary dilemmas to preferences. Realize though, that preference space isn't discrete.
You can describe it as the function from a mixed dilemma to the joint relation space for < and =, which you can treat as a somewhat more complex version of the ordinals (certainly you can construct a map to a dense version of the ordinals if you have at least two dilemmas and a dense probability space). That gives you a notion of the preference space where a...
That is my reading of it too. I know Stuart is putting forward analytic results here; I was concerned that this one was not correctly represented.
What I keep coming back to here is: doesn't the entire point of this post come down to the situations where the parameters in question, the biases of the coins, are not independent? And isn't that a contradiction?
Which leads me to read the latter half of this post as: we can (in principle, perhaps not computably) estimate one complex parameter from 100 data sets better than we can estimate 100 independent unknown parameters from individual data sets. This shouldn't be surprising; I certainly don't find it so.
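A quick simulation of that unsurprising claim, with all sizes and the shared-bias setup assumed by me for illustration: when 100 coins share one unknown bias, pooling the 100 data sets beats the per-coin estimates handily.

```python
# Illustrative simulation (my own parameters): one shared unknown bias,
# estimated per-coin vs pooled across all data sets.

import random

random.seed(0)
true_bias = 0.3
n_coins, flips_per_coin = 100, 20

# 100 data sets of 20 flips each, all governed by the same parameter.
datasets = [[random.random() < true_bias for _ in range(flips_per_coin)]
            for _ in range(n_coins)]

# Per-coin estimates: 20 flips each, so individually noisy.
per_coin = [sum(d) / flips_per_coin for d in datasets]
per_coin_err = sum(abs(e - true_bias) for e in per_coin) / n_coins

# Pooled estimate: one parameter, 2000 flips.
pooled = sum(sum(d) for d in datasets) / (n_coins * flips_per_coin)
pooled_err = abs(pooled - true_bias)

print(f"mean per-coin error: {per_coin_err:.3f}")
print(f"pooled error:        {pooled_err:.3f}")
```

The pooled estimator's error shrinks roughly like the square root of the total sample size, which is exactly why sharing one complex parameter across data sets is easier than estimating 100 independent ones.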
The first half just points ou...