All of MendelSchmiedekamp's Comments + Replies

What I keep coming to here is: doesn't the entire point of this post come down to the situations where the parameters in question, the biases of the coins, are not independent? And doesn't this contradict:

estimate 100 independent unknown parameters

Which leads me to read the latter half of this post as: we can (in principle, perhaps not computably) estimate 1 complex parameter with 100 data sets better than 100 independent unknown parameters from individual data sets. This shouldn't be surprising. I certainly don't find it so.

The first half just points ou... (read more)

This post is a decent first approximation. But it is important to remember that even successful communication is almost always occurring on more than just one of these levels at once.

Personally I find it useful to think of communication as having spontaneous layers of information which may include things like asserting social context, acquiring knowledge, reinforcing beliefs, practicing skills, indicating and detecting levels of sexual interest, and even play. And by spontaneous layers, I mean that we each contribute to the scope of a conversation, and th... (read more)

My peeve (well, one of many) is when I can't distinguish between flirty non-rejection rejections vs. real, "you're inches from being a stalker" rejections, and there are severe penalties for erring in either direction. And I suspect that if it were possible to distinguish them, that would be a bad thing (for women). ETA: Downmod justification requested for this and my follow-up comment.
Voted up. I think a better metaphor is that of dimensions (since a conversation can easily take on several simultaneously) rather than levels.

In retrospect, spelling words out loud, something I do tend to do with moderate frequency, is something I've gotten much better at over the past ten years. I suspect that I've hijacked my typing skill for the task, as I tend to error-correct my verbal spelling in exactly the same way. I devote little or no conscious thought or sense mode to the spelling process, except in terms of feedback.

As for my language skills, they are at least adequate. However, I have devoted special attention to improving them so I can't say that I don't share some bias away from being especially capable.

When you're trying to communicate facts, opinions, and concepts - most especially concepts - it is a useful investment of effort to try to categorize both your audience's crystallography and your own.

This is something of an oversimplification. Categories are one possible first step, but eventually you will need more nuance than that. I suggest forming estimates based on the communication also serving as a sequence of experiments. And being very strict about not ruling things out, especially if you have not managed to beat down your typical mind fa... (read more)

Arguably, as seminal as the sequences are treated, why are the "newbies" the only ones who should be (re)reading them?

The number of assertions needed is now so large that it may be difficult for a human to acquire that much knowledge.

Especially given that these are likely loose lower bounds, and don't account for the problems of running on spotty evolutionary hardware, I suspect that the discrepancy is even greater than it first appears.

What I find intriguing about this result is that essentially it is one of the few I've seen that has a limit description of consciousness: you have on one hand a rating of complexity of your "conscious" cognitive system an... (read more)

This should not be underestimated as an issue. Status as we use it here and at overcoming bias tends to be simplified into something not unlike a monetary model.

It is possible to try to treat things like status reductively, but in the current discussion it will hopefully suffice to characterize it with more nuance than "social wealth".

If you only expect to find one empirically correct cluster of contrarian beliefs, then you will most likely find only one, regardless of what exists.

Treating this as a clustering problem, we can extract common clusters of beliefs from the general contrarian collection and determine degrees of empirical correctness. Presupposing a particular structure will introduce biases on the discoveries you can make.
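To make the clustering framing concrete, here is a minimal sketch, with invented data and scikit-learn's KMeans standing in for whatever clustering method one actually prefers:

```python
# Sketch: discover belief clusters from data instead of presupposing one.
# Each contrarian is a binary vector over candidate beliefs; the two latent
# profiles below are invented purely for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profile_a = np.array([0.9, 0.9, 0.1, 0.1, 0.5])  # hypothetical cluster A
profile_b = np.array([0.1, 0.2, 0.8, 0.9, 0.5])  # hypothetical cluster B
beliefs = np.vstack([
    rng.random((50, 5)) < profile_a,
    rng.random((50, 5)) < profile_b,
]).astype(float)

# Let the data suggest how many clusters exist, rather than assuming one.
for k in (1, 2, 3):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(beliefs)
    print(k, round(km.inertia_, 1))  # inertia drops sharply at the true k=2
```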

Bertrand Russell applied this method successfully to assess the value of Hegel's philosophy: Unpopular essays, chap. 1

there's really no reason those numbers should be too much higher than they are for a random inhabitant of the city

Actually, simply being in the local social network of the victim should increase the probability of involvement by a significant amount. This would of course be based on population, murder rates, and so on. And likely would also depend on estimates from criminology models for the crime in question.
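A back-of-the-envelope version of that update; every number here is a hypothetical stand-in for the criminology estimates mentioned:

```python
# All figures are assumptions for illustration, not real crime statistics.
city_population = 1_000_000
base_rate = 1 / city_population        # prior for a random inhabitant

network_size = 150                      # assumed size of the victim's social network
p_perp_in_network = 0.5                 # assumed share of murders by acquaintances

# P(involved | member of network), spreading that share uniformly over the network
p_given_network = p_perp_in_network / network_size

print(f"random inhabitant: {base_rate:.2e}")        # ~1.0e-06
print(f"network member:    {p_given_network:.2e}")  # ~3.3e-03, over 1000x higher
```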

Proof of how dangerous this sort of list can be.

I entirely forget about:

  • act effectively

After all, how can you advance even pure epistemic rationality without constructing your own experiments on the world?

Or more succinctly and broadly, learn to:

  • pay attention

  • correct bias

  • anticipate bias

  • estimate well

With a single specific enumeration of means to accomplish these competencies you risk ignoring other possible curricula. And you encourage the same blind spots for the entire community of aspiring rationalists so educated.

Paul Crowley:
Also, the first eleven Virtues of Rationality should be removed from the list.

This parallels some of the work I'm doing with fun-theoretic utility, at least in terms of using information theory. One big concern is what measure of complexity to use, as you certainly don't want to use a classical information measure - otherwise Kolmogorov random outcomes will be preferred to all others.
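A quick way to see the worry, using compressed length as the usual rough proxy for Kolmogorov complexity: under a classical information measure, the random outcome scores highest.

```python
# Compressed length as a crude Kolmogorov-complexity proxy: random bytes
# are incompressible, so a pure information measure ranks them above
# anything structured - the opposite of what a fun-theoretic measure wants.
import os
import zlib

n = 10_000
samples = {
    "constant": b"a" * n,
    "periodic": b"abcd" * (n // 4),
    "random":   os.urandom(n),
}
for name, s in samples.items():
    print(name, len(zlib.compress(s, 9)))  # "random" prints the largest size
```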

Lies, truth, and radical honesty all get in the way of understanding what is going on here.

You are communicating with someone, several of the many constantly changing layers (in addition to status signaling, empathy broadcasting, and performatives) of this communication are the transfer of information from you to that someone. The effectiveness of the communication of this information and its accuracy when received is something we can talk about fairly easily in terms of both instrumental (effectiveness) and epistemic (accurate) rationality.

To cl... (read more)

My post does describe a distinct model based on a Many Worlds interpretation where the probabilities are computed differently based on whether entanglement occurs or not - i.e. whether the universes influence each other. It is distinct from the typical model of decoherence.

As for photosynthesis, it ought to behave in much the same way, as a network of states propagating through entangled universes, with the interactions of the states in those branches causing the highest probabilities to be assigned to the branches which have the lowest energy barriers.

Of... (read more)

It's as though no one here has ever heard of the bystander effect. The deadline is January 15th. Setting up a wiki page and saying "Anyone's free to edit" is equivalent to killing this thing.

Also this is a philosophy, psychology, and technology journal, which means that despite the list of references for Singularity research you will also need to link this with the philosophical and/or public policy issues that the journal wants you to address (take a look at the two guest editors).

Another worry to me is that in all the back issues of this journal I looked over, the papers were almost always monographs (and barring that, had two authors). I suspect that having many authors might kill the chances for this paper.

This prediction was right on the money. This is being tracked on PredictionBook by the way. I have some reservations about the usefulness of PB in general, but one thing that is quite valuable is its providing a central "diary" of upcoming predictions made at various dates in the past, that would otherwise be easy to forget.
I know. I was hoping somebody'd take the initiative, but failing that I'll muster the time to actually contribute to the article at some point.

First of all, consider that a computer is incomplete without a program, so let's just think of a programmed computer - whether in hardware or software doesn't matter for our purposes.

This gives us a system that goes from some known start state to some outcome state through a series of intermediate steps. If each of these steps is deterministic, then the entire system reaches the same outcome in all universes where it had the same starting point.

If those steps were stochastic, perhaps because there is a chance of memory corruption in our computer or because of a r... (read more)
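A toy sketch of the distinction being drawn (the update rule and the corruption model are arbitrary illustrations):

```python
# Deterministic steps: one outcome in every universe with the same start state.
# Stochastic steps: a distribution over outcomes instead.
import random
from collections import Counter

def deterministic_run(state: int = 1, n: int = 100) -> int:
    for _ in range(n):
        state = (3 * state + 1) % 17      # arbitrary fixed rule
    return state

def stochastic_run(state: int = 1, n: int = 100, flip_prob: float = 0.01) -> int:
    for _ in range(n):
        state = (3 * state + 1) % 17
        if random.random() < flip_prob:   # e.g. rare memory corruption
            state ^= 1                    # flip the low bit
    return state

print({deterministic_run() for _ in range(1000)})      # always a single outcome
print(Counter(stochastic_run() for _ in range(1000)))  # a spread of outcomes
```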

I meant that setting the limit to no preference for a given C doesn't equate to a globally continuous function. But that when you adjust your preference function to approximate the discontinuous function by a continuous one, the result will contain (at least one) no-preference point between any two A < B.

Now perhaps there is a result which says that if you take the limit as you set all discontinuous C to no preference, that the resulting function is complete, consistent, transitive, and continuous, but I wouldn't take that to be automatic.

Consider, f... (read more)

We are talking about the same thing here just at different levels of generality. The function you describe is the same as the one I'm describing, except on a much narrower domain (only a single binary lottery between A and B). Then you project the range to just a question about C.

In the specific function you are talking about, you must hold that this is true for all A, B, and C to get continuity. In the function I describe, the A, B, and C are generalized out, so the continuity property is equivalent to the continuity of the function.

So what did you mean by

I was talking about utility functions, but I can see your point about generalizing the result to the mapping from arbitrary dilemmas to preferences. Realize though, that preference space isn't discrete.

You can describe it as the function from a mixed dilemma to the joint relation space for < and =. Which you can treat as a somewhat more complex version of the ordinals (certainly you can construct a map to a dense version of the ordinals if you have at least 2 dilemmas and a dense probability space). That gives you a notion of the preference space where a... (read more)

The function whose continuity is at issue is the function from real numbers to lotteries that mixes A and B. C is being used to build open sets in the space of lotteries of the form of all lotteries better (or worse) than C, whose preimage in the real numbers must be open, rather than half-open.
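For concreteness, one way to write down the setup just described (notation mine):

```latex
% Subbasic open sets of the order topology on lotteries, for each lottery C:
%   \{L : L \succ C\} \quad\text{and}\quad \{L : L \prec C\}.
% The mixing map is
%   m : [0,1] \to \mathcal{L}, \qquad m(p) = pA + (1-p)B.
% Continuity of m demands that the preimages
%   \{p \in [0,1] : pA + (1-p)B \succ C\} \quad\text{and}\quad
%   \{p \in [0,1] : pA + (1-p)B \prec C\}
% be open (not half-open) subsets of [0,1], for every C.
```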

That is my reading of it too. I know Stuart is putting forward analytic results here, I was concerned that this one was not correctly represented.

Note, Independence II does not imply Independence, without using at least the consistency axiom.

The contrapositive of independence II is: For all A, B, C, D and p, if A ≤ B and C ≤ D, then pA + (1-p)C ≤ pB + (1-p)D. If we now take C and D to be the same lottery, we get independence, as long as C ≤ C. Now, given completeness, C ≤ C is always true (because at least one of C < C, C = C, or C > C must be true, and each of these gives us C ≤ C, switching the roles of the two C's if needed!). So we don't need consistency, we need a weak form of completeness, in which every lottery can at least be compared with itself.
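Spelled out in symbols, the step reads (my transcription):

```latex
% Independence II, contrapositive form:
%   A \preceq B \;\wedge\; C \preceq D \;\Rightarrow\;
%   pA + (1-p)C \;\preceq\; pB + (1-p)D.
% Set D := C. Using only C \preceq C (weak completeness):
%   A \preceq B \;\Rightarrow\; pA + (1-p)C \;\preceq\; pB + (1-p)C,
% which is the standard Independence axiom.
```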
Transitivity and Continuity are unnecessary, however.

If we're using Independence II as an axiom, you should be a little more precise: when you introduced it above, you referred to the base four axioms, including continuity.

Now, I only noticed consistency needed to convert between the two Independence formulations, which would make your statement correct. But on the face of things, it looks like you are trying to show a money pump theorem under discontinuous preferences by calling upon the continuity axiom.

Mathematically:

Independence + other 3 axioms => Independence II
Independence II => Independence
Hence: ~Independence => ~Independence II

My theorem implies: ~Independence II => You can be money pumped
Hence: ~Independence => You can be money pumped

Correct: by definition, if you have a dense set (which by default we treat the probability space as) and we map it into another space, then either that space is also dense, in which case the converging sequences will have limits, or it will not be dense (in which case continuity fails). In the former case, continuity reduces to point-wise continuity.

Note, setting the limit to "no preference" does not resolve the discontinuity. But by intermediate value, there will exist at least one such point in any continuous approximation of the discontinuous function.

What is the discontinuous function? The function that assigns a preference to a dilemma (particularly, mixed dilemmas parameterized by probabilities)? With a discrete range, that can never be continuous.

I think you are complaining about the name "continuity axiom"; I am not the right target of that complaint! I don't know why it's called that, but I suspect you have jumped from the name to false beliefs about the axiom system.

There is another continuous function, which is the assignment of utilities to lotteries. But I think this is continuous (to the extent that it can be defined) without invoking the continuity axiom. It is more the inverse map, from utilities to indifference-classes of lotteries, that risks not being continuous. I would complain more that this map is not well-defined, but there may be a way of arranging something like indifference-classes to have a finer topology than the order topology (e.g., the left-limit topology, or the discrete topology).

Nice to see Europe catching up with, say, India in this regard.

Does that answer your question?

This has been helpful. I'm much more familiar with the mathematics than the economics. Presently, I'm more worried about the mathematical chicanery involved in approximating a consistent continuous utility function out of things.

Continuity is no longer needed for these results...

But doesn't the money pump result for non-independence rely on continuity? Perhaps I missed something there.

(Of note, this is what happens when I try to pull out a few details which are easy to relate and don't send entirely the wrong intuition - can't vouch for accuracy, but at least it seems we can talk about it.)

Actually, I realised you didn't need continuity at all. See the addendum; if you violate independence, you can be weakly money-pumped even without continuity (though the converse may be false).

Sorry I left this out. It's a huge simplification, but treat the set of p as a discrete subset in the standard topology.

And that is discontinuous; but you can model it by a narrow spike around the value of p, making it continuous.
Hum, this seems to imply that the set of p is a finite set... Still doesn't change anything about the independence violation, though.

I'm very busy at the moment, but the short version is that one of my good candidates for a utility component function, c, has, c(A) < c(B) < c(pA + (1-p)B) for a subset of possible outcomes A and B, and choices of p.

This is only a piece of the puzzle, but if continuity in the von Neumann-Morgenstern sense falls out of it, I'll be surprised. Some other bounds are possible I suspect.

Independence fails here. We have B > A, yet there is a p such that (pA + (1-p)B) > B = (pB + (1-p)B). This violates independence for C = B. As this is an existence result ("for a subset of possible A, B and p..."), it doesn't say anything about continuity.
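A numeric illustration of this violation pattern, using an invented lottery value with a bonus for spread; this is only a stand-in exhibiting the same behavior as the component function c described above, not the actual function.

```python
# Hypothetical value of a lottery: expected utility plus an entropy bonus.
# The bonus rewards mixtures, which is exactly what breaks independence.
import math

def value(lottery, bonus=1.0):
    """lottery: list of (probability, utility) pairs."""
    expected = sum(p * u for p, u in lottery)
    entropy = -sum(p * math.log(p) for p, _ in lottery if p > 0)
    return expected + bonus * entropy

A = [(1.0, 1.0)]                # sure outcome A, utility 1
B = [(1.0, 2.0)]                # sure outcome B, utility 2
mix = [(0.5, 1.0), (0.5, 2.0)]  # the mixture 0.5A + 0.5B

print(value(A), value(B), value(mix))
# 1.0 < 2.0 < ~2.19: the mixture beats B, violating independence with C = B.
```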
Perhaps I'm confused, but I thought that the inequality you described simply refers to a utility function with convex preferences (i.e. diminishing returns). I agree in general that discontinuity does not by itself entail the ability to be money-pumped--this should be trivially true from utility functions over strictly complementary goods.

Of note, you don't explain why discontinuous preferences necessarily cause vulnerability to money pumping.

I'm concerned about this largely because the von Neumann-Morgenstern continuity axiom is problematic for constructing a functional utility theory from "fun theory".

The continuity hypothesis really is an unimportant "technical assumption." The only kind of thing it rules out is lexicographic preferences, like if you maximize X, but use Y as a tie-breaker. Specifically, it follows from independence that if A < B < C, then there is a critical probability P such that the mixture pA + (1-p)C is worse than B for p > P and better than B for p < P; the only thing the continuity axiom requires is that at P there is no preference between B and the mixture; there is no tie-breaker. (Without the continuity axiom, it may well be that P is 0 or 1.) This is still true if you only have preferences involving p a rational number: the above is a Dedekind cut. If you restrict p to some smaller set that isn't dense, it's probably bad, but then I'd say you aren't taking probability seriously.
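For reference, a standard statement of the axiom and of the critical value just described (my notation):

```latex
% Continuity: for all lotteries with A \preceq B \preceq C there exists
%   p \in [0,1] \ \text{such that}\ B \sim pA + (1-p)C.
% Independence makes p \mapsto pA + (1-p)C monotone in p, so
%   P = \sup\{\, p : pA + (1-p)C \succeq B \,\}
% defines a Dedekind cut on [0,1]; continuity says that at p = P the agent
% is exactly indifferent - no lexicographic tie-breaker is consulted.
```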
You really, really, really don't want to be touching continuity without knowing exactly what you're doing. See the hyperreals for an example of the sort of thing that happens in this case. Also look at non-measurable functions to see the fun in store. But most of the time, when people deny continuity, it's not on theoretical grounds but because they have a particular non-continuous preference theory in mind. That's perfectly fine. But generally, the non-continuous theory can be approximated arbitrarily well by a continuous version that looks exactly the same in virtually all circumstances.
Can you elaborate? Maybe there is another solution to your problem than abandoning continuity.

Fair enough. Although in considering the implications of more than two options for the other conditions, I noticed something else worrisome.

The solution you present weakens a social welfare function; after all, if I have two voters and they vote (10, 0, 5) and (0, 10, 5), the result is an ambiguous ordering, not the strict ordering required by Arrow's theorem (which is really a property of very particular endomorphisms on permutation groups).

It seems like a classic algorithmic sacrifice of completeness for power. Was that your intent?
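Checking the arithmetic of that two-voter example:

```python
# Summed scores for the two voters above: every option ties, so a sum-based
# welfare function returns indifference, not the strict ordering Arrow's
# framework asks a social welfare function to produce.
voter_1 = (10, 0, 5)
voter_2 = (0, 10, 5)

totals = tuple(a + b for a, b in zip(voter_1, voter_2))
print(totals)  # (10, 10, 10): a three-way tie, i.e. an ambiguous ordering
```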

Note, according to the Wikipedia article listed, Arrow's theorem is valid "if the decision-making body has at least two members and at least three options to decide among". This makes me suspicious of the Pareto-efficiency counter-example, as it assumes we have only two options.

It doesn't matter if there are ten thousand other options. If you sum numbers A-1 through A-N, and you sum numbers B-1 through B-N, and A-X > B-X for all X, then A must be larger than B; it doesn't matter how many alternatives there are.

What worries me about this tack is that I'm sufficiently clever to realize that in conducting a vast and complex research program to empirically test humanity to determine a global reflectively consistent utility function, I will be changing the utility trade-offs of humanity.

So I might as well make sure that I conduct my mass studies in such a way to ensure that the outcome is both correct and easier for me to perform my second much longer (essentially infinitely longer) time phase of my functioning.

So said AI would determine and then forever follow exactly what humanity's hidden utility function is. But there is no guarantee that this is a particularly friendly scenario.

I have a similar result, except that since I've never experienced stimulant effects from anything other than blood sugar, I'm not certain I can discount sleepiness. Also, I suffer from a migraine condition which has a much more severe effect on my mental faculties on a day-to-day basis.

And since improper sleeping is one of my triggers - "Happiness is getting enough sleep." Not too much, not too little.

This seems like a conflict between two deep-seated heuristics; hence it would be difficult at best to argue for the right one.

Instead, I suggest a synthetic approach. Stop treating the two intuitions as a false dichotomy, and consider the continuum between them (or even beyond them).

This is essentially an instance of availability bias. Of course, in the most interesting cases, rather than just being a declarative hypothesis elevated above the other inhabitants of the hypothesis space for that particular question, models have other effects that go far beyond mere availability.

This is because our initial model won't just form the first thing we think of when we examine the question, but some of the very structures we use when we formulate the question. Indeed, how we handle our models is easily responsible for the majority of the biases that h... (read more)

I expect that one source of the problem is seen in equating these two situations. On one hand you have 100 copies of the same movie. On the other hand, you have 100 distinct humans you could pay to save. To draw a direct comparison you would need to treat these as 100 copies of some idealized stranger. In which case the scope insensitivity might (depending on how you aggregate the utility of a copy's life) make more sense as a heuristic.

And this sort of simplification is likely one part of what is happening when we naively consider the questions:

How muc

... (read more)
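On the aggregation point above, a tiny sketch of how the choice of rule changes the value assigned to saving N copies; both rules are illustrative assumptions, not proposals:

```python
# Linear aggregation values the 100th copy as much as the first; a
# diminishing-returns rule does not, which would make scope insensitivity
# for copies look more like a heuristic than an error.
import math

def linear(n, u=1.0):
    return n * u

def diminishing(n, u=1.0):
    return u * math.log1p(n)

for n in (1, 10, 100):
    print(n, linear(n), round(diminishing(n), 2))
```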
Actually, I think a direct comparison would involve saving the same person 100 times (it was the same movie 100 times). I think at some point I'd begin to wonder if the gene pool wouldn't be better off without someone that accident-prone (/suicidal)... or at the very least I'd suspect that his ability to need saving would probably outlast my finances, in which case I might as well accept his inevitable death and quit paying sooner rather than later!

It's just that with two distinctly different conclusions drawn from the results by two different sources - the article authors (in the abstract) and Gerald Weissmann, M.D., Editor-in-Chief (in the news article) - I place a much lower confidence in the latter being a reasonable reading of the research paper.

But of course we could quite safely argue about readings and interpretations indefinitely. I'd point you to Derrida and Hermeneutics if you want to go that route.

In any case, I'll update my estimates on the likelihood of the research paper having an err... (read more)

So, perhaps the news article was based on press release that was based on the journal article. My point was that it was not produced solely from the abstract.

I don't see why this is your point? In the very least it doesn't present counter evidence to my claim that the abstract contains information not present in the news article which mitigates or negates the concerns of the original comment.

So what? That point was in response to your other claim about what the abstract did not contain.

But the abstract does not make any "just right" claims, unlike the summary on Science Daily. Which is what you were complaining about.

The abstract reads: we did an incremental test, and even at the lowest dosage we found an effect. This suggests that low dosages could be effective. I don't see anything wrong with that reasoning.

The science daily summary is simply misrepresenting it. So, the original commenter isn't missing something in the science news, it is science daily who made the error.

The news article was not based on the abstract. It was based on the journal article (which is available with a subscription) that the abstract summarized. It is not reasonable to expect that every point in the news article be supported by the abstract.

The following sounds like a control measurement was taken:

"Blood and urine samples were collected before and after each dose of DHA and at 8 wk after arrest of supplementation."

Also note that the abstract doesn't say that 200 mg is ideal, as the Science Daily description does; it says:

"It is concluded that low consumption of DHA could be an effective and nonpharmacological way to protect healthy men from platelet-related cardiovascular events."

Taking measurements before and after the treatment is good, but that is not the same as having a separate control group, which could filter out effects of timing, taking the dose with food or water, etc. The abstract also claims "Therefore, supplementation with only 200 mg/d DHA for 2 wk induced an antioxidant effect." It is likely that there was a more complete conclusion in the full article.

Well, the article abstract isn't consistent with the description you linked to. One of the dangers of paraphrasing science.

From the abstract: "Twelve healthy male volunteers (aged 53–65 yr) were assigned to consume an intake of successively 200, 400, 800, and 1600 mg/d DHA, as the only ω-3 fatty acid, for 2 wk each dose." I don't know what inconsistency you noticed between the news article and the abstract, but it seems the abstract itself describes a study that is missing the control group that gets a dosage of 0.

I'm interested, especially since this will likely be the closest such meet-up to State College, PA. I'm not the only one here, so I can ask around. Although, obviously, our transportation logistics will be more complicated.

I'm not part of this site at all but looks like there are some mutual acquaintances, common interests, etc. I'm also at Penn State and might consider a Pittsburgh event depending on the timing. I'd like to discuss this with someone via email (sbaum [at] ) if possible since I won't be checking this thread. Thanks.

No. The Medawar zone is more about scientific discoveries as marketable products to the scientific community, not the cultural and cognitive pressures of those communities which affect how those products are used as they become adopted.

Different phenomena, although there are almost certainly common causes.

Oh yes, but it's not just a predilection for simple models in the first place, but also a tendency to culturally and cognitively simplify the model we access to use - even if the original model had extensions to handle this case, and even to the tune of orders of magnitude of error.

Of course, sometimes it may be worth computing, in a very short amount of time, an estimate that is (unknown to you) orders of magnitude off. Certainly, if the impact of the estimate is delayed and subtle, less conscious trade-offs may factor in between the cognitive effort to access and use a more detailed model and the consequences of error. Yet another form of akrasia.

Generally (and therefore somewhat inaccurately) speaking, one way that our brains seem to handle the sheer complexity of computing in the real world is a tendency to simplify the information we gather.

In many cases these sorts of extremely simple models didn't start that way. They may have started with more parameters and complexity. But as they were repeated, explained and applied the model becomes, in effect, simpler. The example begins to represent the entire model, rather than serving to show only a piece of it.

Technically the exponential radioactive... (read more)

So pretty much, this:
If errors were a few percent randomly up or down it wouldn't matter, but the inaccuracy is not tiny; over long timescales it's many orders of magnitude, and almost always in the same direction - growth/decay are slower over the long term than exponential models predict.
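A quick numeric illustration of that quote (rates invented): even a one-percentage-point bias in an exponential rate compounds into orders of magnitude over long horizons.

```python
# A persistent small overestimate of an exponential rate compounds badly.
import math

true_rate, model_rate = 0.04, 0.05   # per year, both hypothetical
for years in (10, 100, 500):
    ratio = math.exp((model_rate - true_rate) * years)
    print(years, f"model overshoots by {ratio:,.1f}x")
# 10y: 1.1x; 100y: 2.7x; 500y: 148.4x
```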

Don't have many mantras, although I stress the importance of understanding before trying to solve.

One that does stand out is more of a question:

"What am I not thinking here?" or "What are we forgetting here?" - Followed by estimations based on meta-biases and human error tendencies to make some hypotheses where cognitive, social, or cultural blind spots might be. And then comes the testing, followed by more hypotheses. And so on.

After all, every field of thought is developed by humans. It's a common point of failure.

Procrastination and laziness may be kinds of akrasia, but simply because they are the types most talked about here does not mean that they are an exhaustive description of "weaknesses of will". One example I find easy to bring up is trying to move while we are in pain. There are definite moments where a crisis of will occurs, and if you have a sharp shooting pain in your leg while walking, you will either change your movement against your intended direction or overcome that moment and escape the akrasia for a time.

I do, however, suspect that this community would do a better job at fighting akrasia if we did not confound it solely with procrastination and "laziness".

You're right that this is, among other topics, one I owe a top-level post on.

Although one worry I have with trying to lay out inferential steps is that some of these ideas (this one included) seem to encounter a sort of Zeno's paradox for full comprehension. It stops being enough to be willing to take the next step; it becomes necessary to take the inferential limit to get to the other side.

Which means that until I find a way to map people around that phenomenon, I'm hesitant to give a large-scale treatment. Just because it was the route I took doesn't mean it's a good way to explain things generally, a la the Typical Mind Fallacy borne out by evidence.

But in any case I will return to it when I have the time.

Laying out the route you took might be a lot easier than looking for another route. Also, the feedback from comments might be a better way to look for another route than modeling other minds on your own.

I suspect that people are voting you down because you sound like you're attempting to show off, rather than attempting to communicate. Several of your posts seem to be simple assertions that you possess knowledge or a theory.

I did vote down the comment at the top of this thread, but I don't remember if that's why. I was surprised that I didn't vote down other of your comments where I remember having that reaction, so this theory-from-introspection isn't even a good theory of me. But it might work better for people who vote more. (The simple theory of when I vote you up is 21 May and 6 June, which disturbs me.)

Building on some of the more non-trivial theories of fun - specifically, cognitive science research focusing on the human response to learning - there is a direct relationship between the human perception of subjectively unpleasant qualia and the complexity impact of those qualia on the human.

Admittedly extending this concept of suffering beyond humanity is a bit questionable. But it's better than a tautological or innately subjective definition, because with this model it is possible to estimate and compare with more intuitive expectations.

One nice effect of ha... (read more)

I don't mean to suggest that anything that subtracts a karma point isn't worth doing, just that it's evidence that you're not accomplishing what you'd like. You've made some claims (in other comments too) which would be very interesting if true, but weren't backed up enough for me to make the inferential jump. I'd like to see a full top level post on this idea, as it seems quite interesting if true, but it also seems to need more space to give the details and full supporting arguments.